Is it a Knowledge Check or a Quiz?
In the midst of designing a facilitator-led curriculum for a client, we were met with a conundrum: according to our SME, one particular class just had to have a quiz at the end.
There were several problems with this idea, including the fact that none of the other six courses in the curriculum ended with a quiz and that the audience consisted of new hires - so how intimidating would a quiz be?
We finally compromised on a Knowledge Check - that way our SME was satisfied (and we met compliance requirements), but the learners wouldn't be too intimidated (we hoped).
What's the difference?
A quiz is used to check for comprehension. Did your attendees learn what you taught? A quiz can come in many forms - you might ask your learners to recognize an answer, as in the case of a multiple-choice test. You might ask them to recall an answer, as in the case of fill-in-the-blank. Or you may ask them to think of the answer by giving a "case" and asking: What should you do next? In all cases the results of the test matter. There is a score (perhaps numeric, perhaps pass/fail). There is a record of that score. And often the scores are compared to one another - resulting in a ranking of some sort.
Alternatively, a knowledge check is more of a review. It's used to determine if the learners can find the answer. They are often allowed to use their learning materials (handouts, workbooks, etc.) and potentially to work together. A knowledge check might be in the form of a game (such as Jeopardy) or it might be a solitary activity. Knowledge checks are often used to help solidify the learning, allow learners to review the content one more time, and enable them to leave the training more confident in what they learned.
A knowledge check is appropriate in all situations; a quiz is only appropriate when you have to ensure people know the answers before they leave training, when there is some consequence to not knowing the answers (such as performing the job incorrectly), and when you need to prove the "results" of the training.
Interview with Will Thalheimer, PhD
What motivated you to write this book?
I've worried about my own smile sheets (aka response forms, reaction forms, Level 1s) for years! I know they're not completely worthless because I got useful feedback when I was a mediocre leadership trainer - feedback that helped me get better.
But I've also seen the research (two meta-analyses covering over 150 scientific studies) showing that smile sheets are NOT correlated with learning results - that is, smile sheets don't tell us anything about learning! I also saw clients - chief learning officers and other learning executives - completely paralyzed by their organizations' smile-sheet results. They knew their training was largely ineffective, but they couldn't get any impetus for change because the smile-sheet results seemed fine.
So I asked myself, should we throw out our smile sheets or is it possible to improve them? I concluded that organizations would use smile sheets anyway, so we had to try to improve them. I wrote the book after figuring out how smile sheets could be improved.
If you could distill your message down to just one point - what would it be?
Smile sheets should (1) draw on the wisdom distilled from science-of-learning findings, and (2) be designed so that their questions (2a) support learners in making more precise smile-sheet decisions and (2b) produce results that are clear and actionable. Too often we use smile sheets to produce a singular score for our courses. "My course is a 4.1!" But these sorts of numerical averages leave everyone wondering what to do.
How can trainers use this book to assist them in the work that they do?
Organizations, and learning-and-development professionals in particular, can use my book to gain wisdom about the limitations of their current evaluation approaches. They can review almost 30 candidate questions to consider utilizing in their own smile sheets. They can learn how to persuade others to adopt this radical new approach to smile-sheet design. Finally, they can use the book to give them the confidence and impetus to finally make improvements in their smile-sheet designs - improvements that will enable them to create a virtuous cycle of continuous improvement in their learning designs.
Getting valid feedback is the key to any improvement. My book is designed to help organizations get better feedback on their learning results.
Do you have a personal motto that you live by?
Be open to improvement. Look for the best sources of information - look to scientific research in particular to enable practical improvements. Be careful. Don't take the research at face value. Instead, understand it in relation to other research sources and, most importantly, utilize the research from a practical perspective.
Will Thalheimer, PhD, President, Work-Learning Research, Inc.
How to Conduct a Level 3 Evaluation
According to best-selling author Marcus Buckingham, performance ratings rely on "bad data." He calls this the "idiosyncratic rater effect": whom we pay, what we pay, whom we promote, and the training we offer are all based on the assumption that one's "rating" is reflective of the one being rated - when in fact it is reflective of the one doing the rating.
Often, when conducting Level 3 evaluations, we ask a manager or some other observer to "rate" a newly trained employee in order to confirm they have learned and can apply their new skills on the job. To avoid succumbing to the idiosyncratic rater effect, it is wise to use an impartial observation sheet, so that the rater simply confirms whether or not the employee is performing the job as expected.
For example:
Observed behavior                        Yes / No        Comments
Answers phone within 3 rings             ☐ Yes  ☐ No
States name and badge number             ☐ Yes  ☐ No
Asks permission to put caller on hold    ☐ Yes  ☐ No
But even a seemingly straightforward observation checklist can be fraught with imprecision that may skew the rating results. Before designing a Level 3 evaluation for your own training, consider these questions, each of which may affect your learner's reported "success."
Who should be the observer?
What should be the setting?
Should the trainee be told in advance they will be observed?
Does the time of day matter?
Does the day of the week matter?
Should it be a simulated scenario or a real life one?
How long should the observation last?
Should the observer give the trainee feedback? When?
Should the trainee explain what they are doing?
If you need assistance with designing training evaluations for your organization, visit our web page.
How To Assess Real Results From Your Corporate Training
The four levels of corporate training evaluation (and the futility of most training evaluation) were discussed in this earlier blog post; in this post we will discuss the types of training evaluation that allow you to assess real results.
Level three evaluations are the most logical evaluations to deploy because they get at the purpose of the training – to change people's behavior on the job. A Level three evaluation determines whether people have actually changed their behavior by observing them in action, asking them for their own assessment, or asking for a third party's assessment.
Level three evaluations incorporate Level two evaluations because the evaluator is able to determine if the trainee is utilizing the knowledge that they acquired during the training and applying it to their work.
Level Three Evaluations
Observation – by a manager, quality control, or even a training person. An observation form must be used so that the evaluation is not subjective (Did the trainee acquire the customer information, using the five prescribed questions, in the correct order? vs. Did the trainee begin the customer interaction correctly?).
Personal assessment – frequently used for Level three evaluations because many organizations find observation to be cumbersome (it requires asking a third party to conduct it, disseminating and retrieving information, and other administrative tasks, all of which may go uncompleted). In a personal assessment the trainee, once they have been back on the job for a period of time (three weeks, three months), reports on their own changed behavior.
Questions utilized include:
Have you applied the ___ process in your day-to-day work?
How many times a day would you say you utilize the process?
Have you seen positive results from utilizing the process?
Can you provide an example of when you used the process and what the outcome was?
These types of questions not only help the training department understand how the training is being utilized on the job; they also cause the employee to realize how they have changed their behavior as a result of training. And if the individual has not changed their behavior, these assessments help to reinforce the fact that the training is an investment the organization has made in that individual - and an investment the organization intends to follow up on.
Level Four Evaluations
Level four evaluations then tell us whether the investment in the training was worth it. For example: if the intention was to increase sales, did sales numbers go up? These types of evaluation require a lot of number crunching AND require a baseline of data to compare against, which many organizations simply don’t possess.
Factors and Nuances
One nuance which makes Level four evaluations difficult to conduct is determining how long it will take for the training to become “the way we work.” When can the training department be confident that what was taught is truly ingrained into the trainee’s everyday work responsibilities? In other words – when should the measurement take place? If a goal was set prior to the training – say, increasing sales by 50% – and sales increase by only 20% in the first three months following training, would that be considered a failure? What if, instead, the trainees were able to increase their sales by 20% every quarter following the training? Compounded over four quarters, that is more than a 100% increase – an outcome that would far exceed the 50% goal. So when is the “line in the sand” drawn and success or failure determined?
Another nuance is that the long-term effects of training can be quite difficult to capture. For instance, if the intent was to increase sales, the training department might evaluate the sales numbers three months or six months after the training; but rarely will they evaluate them again a year after the training. And in some cases, where sales results are residual, the ongoing effect of the training is never quantified. For instance, in insurance sales, teaching salespeople to cross-sell (e.g. selling an umbrella policy to a current homeowner’s policy holder) not only results in an immediate uptick in sales; when the policy is renewed, that same training produces an ongoing increase in sales.
Sadly, most companies don’t take the time to evaluate their training outcomes at Level three and Level four. Evaluation at these levels can admittedly be time consuming and cumbersome, but these results are crucial if training departments are to measure and communicate their worth to the organization as a whole.
Training Evaluation - What Does It Tell Us? Not Much!
Most companies that do conduct evaluation of their training programs will stop at Level 2 evaluations (see graphic).
Level one evaluations are often called smile sheets or butts-in-seats evaluations. They are, realistically, opinion gauges. They ask too many questions, including questions about the facilitator’s knowledge and skill, the quality of the learning materials, the comfort of the training room or delivery methodology (e.g. if it was e-Learning), etc. Unfortunately, the responses provide little usable information in return. Smile sheets could be revitalized and used to better purpose with just a bit of tweaking of the questioning process.
Level two evaluations are intended to test knowledge. They are typically a type of test – either paper-and-pencil (or, these days, computer-generated) or a demonstration/performance of skill (for instance, if you are teaching an individual to run a cash register, you wouldn’t want to stop at simply asking them questions about cash register operations – you would want to see them physically operate the cash register as well). The biggest drawback of Level two evaluations is that they realistically gauge short-term memory. They are typically distributed immediately after the training concludes, so most individuals have a relatively good chance of passing that type of evaluation.
Level three and Level four evaluations – those that assess whether the training is being used on the job and whether the intended business impact of the training was realized – are more complicated to design and administer and, more often than not, are simply not utilized in most businesses.
If you’d like to learn more about effective training evaluation, see this associated post: How to Assess Real Results From Your Corporate Training.
Why we can't effectively measure training outcomes (say the CLOs)
CLOs (Chief Learning Officers) believe their ability to deploy an effective [training] measurement process is limited by a lack of resources, a lack of management support, and an inability to bring data together from different functions. When measurement programs are weak, most CLOs report that their influence and role in helping achieve organizational priorities is also weak.
Source: "Stagnant Outlook for Learning Measurement," CLO Magazine, May 2015