Six steps for evaluating training effectiveness.
Of course, you’ll need to understand the purpose of your programmes. Hopefully you have already discussed with the relevant stakeholders how each programme fits into the business agenda. If you haven’t, now is the time to do so, because one of the prioritization criteria is strategic weight.
Basically, you need to compare the programmes in your portfolio: which ones support the achievement of critical business objectives or must-win battles, and which are merely nice-to-haves? Sometimes the overarching purpose of a programme is directly connected to strategic goals; on other occasions the connection is less obvious and needs more work to establish. If you can’t find the strategic fit of a particular programme, we’d say you have a problem: it risks being a huge waste of your organisation’s resources.
The second prioritization criterion we see and recommend is usually, though not always, linked to the first: volume, or in other words, cost. Assuming every programme in your portfolio connects to a strategic goal, you can then look at the volume, or cost, of each programme when prioritizing.
Programmes with the highest strategic weight and highest cost (including indirect and alternative cost) should be prioritized.
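As a rough illustration, the prioritization rule above can be sketched as a simple scoring function. The programme data and the product-based score below are assumptions made for the example, not part of the Kodo Way itself:

```python
# A minimal sketch of portfolio prioritization, assuming each programme has
# a strategic-weight rating (1-5) and a total cost estimate (including
# indirect and alternative cost). The figures are invented for illustration.

programmes = [
    {"name": "Leadership basics", "strategic_weight": 5, "cost": 120_000},
    {"name": "Tool onboarding",   "strategic_weight": 2, "cost": 15_000},
    {"name": "Sales excellence",  "strategic_weight": 4, "cost": 90_000},
]

def priority_score(p):
    # Highest strategic weight and highest cost first: a simple product
    # captures "big, strategically important programmes get evaluated first".
    return p["strategic_weight"] * p["cost"]

# Rank the portfolio from highest to lowest priority.
for p in sorted(programmes, key=priority_score, reverse=True):
    print(f'{p["name"]}: score {priority_score(p):,}')
```

Any weighting scheme would do; the point is simply to make the comparison explicit and repeatable across the portfolio.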
To get the full guide of our six steps for evaluating training effectiveness, download our whitepaper: The Kodo Way.
Now that you know what to focus on, it’s time to outline the results (you should already have a clear understanding of these from step 1) and define the behaviours that will drive them. Once the behavioural drivers are defined, you can derive the learning objectives.
What develops the driving behaviours?
Ajzen’s theory of planned behaviour offers a way to find that out, and it has inspired our KAIB™ model. KAIB™ stands for Knowledge, Attitude, Intention and Behaviour.
The idea is that if you want to develop new behaviours, you need to create new intentions. This is the immediate effect you should be looking for with your intervention: an intention to behave in a certain way in a specific, critical, situation.
To create the intention to behave in a certain way, you convey knowledge, for example about tools and how to use them. But that isn’t enough: you also have to create motivation and a positive attitude towards those tools.
What well-defined behavioural drivers and learning objectives look like
When defining behavioural drivers and learning objectives, and then measuring them, keep in mind how learners learn. Bloom’s taxonomy is highly adaptable and can be used in conjunction with most training and development programmes, which makes it well suited to evaluating training effectiveness.
Now that you’ve finished the first two steps in our six-step model for evaluating training effectiveness, you know what to prioritize. You also have enough to start designing a programme with well-defined, measurable learning objectives as the foundation.
If the programme is already designed, working through the first two steps may have given you a lot to think about, in particular the second step, Align. Our customers often discover, when using the Kodo Way, that they have stuffed their programmes with too much content. Sometimes it also becomes clear that the design targets taxonomy levels lower than those in the learning objectives. Often they then decide to redesign or adjust the programmes.
And oh boy, there’s a lot to think about when designing programmes, but below are a few fundamentals that research has, time and time again, shown to be strongly related to the transfer of learning to the job:
Start with Learning Goals:
All training should have clear objectives that are precisely communicated. By opening your training with goals, you help the learners understand exactly what type of performance you’re looking for, and how they can expect to use the skills on the job.
Make Sure It’s Relevant:
Facilitators need to make sure that the skills being taught are applicable to the actual work that the employees perform. Avoid training for the sake of training, or making a training mandatory for all departments when the content is only relevant to one or two teams.
Practice, Practice, Practice:
The training should have plenty of opportunity for practice. Learners should be able to work towards mastery during the training - rather than practising for the first time on the job. Practice sessions should incorporate feedback and opportunities for reinforcement.
Be Clear About Rules:
If you have a learning point to make, stating it in the form of a rule is more effective than saying it more generally. What does that look like in action? It’s the difference between saying “Shutting down computers is good for the machinery.” vs. “Computers must be shut down at the end of the day to preserve equipment.”
Show Them What Not to Do:
Just as it’s good to demonstrate positive examples and their consequences, giving your learners visual, detailed examples of how not to act, and of what happens as a result, can help cement what they’ve been taught.
You might have plenty of interventions going on to drive learning in your organization, but there’s no guarantee that the learning quality is high, that learning transfer takes place and that new behaviours are developed. Now that we’ve covered the first three steps for evaluating training effectiveness, it’s time to look at exactly how to measure that.
A web search on how to determine the effectiveness of your training and development will give you plenty of inspiration. There are qualitative approaches, such as the Success Case Method, and quantitative ones, such as measurement.
It’s easy enough to measure Kirkpatrick’s level 1 (reaction); there are generic questions you can use in a training effectiveness evaluation form at that level. Most also know how to look at level 4 (results). Almost everybody agrees that it’s harder to evaluate effectiveness at levels 2 (learning) and 3 (behaviour).
Kodo Survey is all about making the evaluation of training effectiveness easy, so we offer an automated way to do it with our SaaS platform. We focus on Kirkpatrick’s levels 2 and 3, since level 1 has been shown not to correlate with learning transfer and behavioural development.
To evaluate the effectiveness of your training, you need to assess knowledge gained, attitudes changed, intentions created and behaviours developed. As you can see, the KAIB™ model comes into play again. Luckily, we’ve already defined measurable learning objectives with Bloom’s taxonomy and KAIB™ in mind.
Now it’s time to measure, monitor and act quickly on issues so that you can correct and improve your intervention. Typically you should measure before a programme, after a programme and 3-6 months down the line. This is usually referred to as pre-, post- and job-evaluation (or testing).
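As a rough sketch of what pre-, post- and job-measurement can yield, assume one shared assessment scored 0-100 at each of the three points. The participant scores below are invented for illustration, and the two summary metrics are one simple way, among many, of reading such data:

```python
# Illustrative pre-, post- and job-evaluation scores for four participants,
# each assessed on the same 0-100 scale. All numbers are made up.

scores = {
    "pre":  [42, 55, 38, 60],
    "post": [78, 85, 70, 88],
    "job":  [74, 80, 52, 86],  # measured 3-6 months after the programme
}

def mean(xs):
    return sum(xs) / len(xs)

# Did the participants learn? (pre -> post)
learning_gain = mean(scores["post"]) - mean(scores["pre"])

# Did the learning stick on the job? (pre -> job)
sustained_gain = mean(scores["job"]) - mean(scores["pre"])

print(f"Average learning gain (pre -> post):  {learning_gain:.2f}")
print(f"Average sustained gain (pre -> job): {sustained_gain:.2f}")
```

A large pre-to-post gain with a much smaller pre-to-job gain is exactly the kind of signal that should prompt a closer look at learning transfer.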
Check the metrics regularly to make sure everything is top-notch: that the right people are attending the trainings, that quality is high, that learning transfer takes place and that new behaviours are developed.
Don’t wait to take action!
Let’s say you’re about to train 2,000 employees in order to increase productivity in a particular area. You’ve done the pre-, post- and job-evaluation and are following your dashboard closely.
For the first couple of programmes, the dashboard doesn’t show the results you were looking for, which indicates that you need to act. Don’t wait. Remedy the situation immediately, based on the data-driven insights you get from Kodo Survey. If needed, postpone the programme for the remaining participants while you investigate the causes of the shortfall, identify improvements and implement them.
As you evaluate the incoming results, you also need to think about how to share them... and you do need to share them. The sixth step, Share, can be done in parallel with the fifth step, Evaluate, so don’t be misled when we call it step 6.
Tell a story with data
As you may have gathered from the fifth step, our dashboard makes it easy to follow results as they come in and to monitor progress, improvement areas and successes. The dashboards give you a clear overview of the metrics that are key when evaluating training effectiveness and learning impact, so you can quickly take action that drives quality and ensures long-term impact. You will, however, want to dive deeper into the data for a more thorough analysis, which means you need reporting with more detailed data, perhaps at the individual level or at the learning-objective level. Kodo Survey of course provides this, so that you can tell the whole story and identify the reasons your programme was or wasn’t successful.
With the data, you can easily present your story in a way that suits the relevant stakeholders, internal or external.