Five Impact-Busting Myths about Learning & Development

Back in 1994, I wrote a book with my colleague Steve Gill titled The Learning Alliance: Systems Thinking in Human Resource Development. The premise of that book is “…that learning adds value and improves human performance only when it is viewed and managed as an integral process within a larger system…”

In that book, we identified five myths, about which we said: “…the more that these myths are believed and practiced, the less likely it is that L&D will be producing value” (p. 40).

Here are the myths and what we said about them back then. Fast forward nearly 30 years: to what extent have we busted these myths? Are they still shaping practice in ways that impede value? Or have we moved beyond them?

What do you think?

  1. Training makes a difference
    In our impact evaluation studies of training programs (leadership development, manager skills, and so on) we always – always! – find that in the same program, some people used their training to produce worthwhile results, and some did not. But the training was nominally the same for all of them. So what makes the difference? It is almost always the “before” and “after” environment: how were they prepared for and introduced to the training; what were the expectations for their using it, or not; what kind of support did they get to apply their learning; and so forth.
  2. The purpose of training is to achieve learning objectives
    Nearly all the books and articles you can find about training design talk at great length about the need to express learning objectives in clear, concrete, and especially measurable terms. Good advice, as far as it goes; but too many practitioners let the story end there. New learning only provides the capability. What turns capability into value is behavior – the application of learning in new or improved performance. A focus only, or even mostly, on learning frustrates value.
  3. The L&D professional’s job is to manage successful training programs
    The fallacy in this myth lies not in the word successful, but in the word programs. The training business is awash in training programs and content – with more vendors selling more programs and content every day. The myth extends to the habitual belief that processing people through these programs efficiently is the L&D department’s job. Wrong, if it ends there. The job is to help the client organization use learning (the stuff that is presumably in these programs) to accelerate the execution of important change and strategic initiatives.
  4. Training is L&D’s job
    Most large companies in the world have a training department. This fact attests to the emergence of training as a profession and the importance of the role in today’s organization. (Note: back then there was no such thing as a CLO.) The implication of the name on the door is that the function therein is responsible for producing the performance improvement and results that are the reason for training’s existence. But turning learning into performance requires an alliance wherein trainees’ supervisors, their managers, and senior leadership all have a role to play in making sure there is alignment, focus, intentionality, and accountability for the performance support that must be embedded within the training initiative. In short, it takes a village to get results from training. Delegating learning and performance accountability to the L&D function alone is a fool’s errand.
  5. Trainees should enjoy the training they receive
    No quarrel with the premise that training is often (though not always) more effective when it is enjoyable, and we certainly do not promote learning designs that are on a par with root canals. But focusing on satisfaction (think the ubiquitous “level one” survey) is the value-buster. At the root of this myth is confusion about who the rightful customers of training are. If training is meant to help the business perform, then the legitimate customers of training are senior leadership and the managers of trainees – those who are accountable for the goals that trainee performance should be helping to drive, and that the training is presumably meant to impact.

Busted myths?

To what extent have we busted these myths? Please let us know your thoughts by voting in our Mentimeter poll.


Increase the ROI of soft skills training by making it more measurable

The beauty of soft skills – e.g., active listening, conflict management, giving and receiving feedback – is that they can be used in so many ways and places. The problem with soft skills is that they can be used in so many ways and places. This paradox lies at the heart of why we L&D folks struggle with evaluating soft skills training, and why so much soft skills training has earned a bad – but deserved – rap that it doesn’t work very well.

Soft skills like those mentioned above can really pay off when they are used at the right time and place, or they can be nearly worthless if they are not used at all, or not at the right time. This is why those of us who subscribe to the High-Performance Learning Journey (HPLJ) principles always build an extra high-yield element into our learning journeys.

Deploying your new skills

Imagine for a moment that you have taken some training in listening skills – one of the softest of these so-called soft skills. So now that you have mastered some listening skills, what are you going to listen to better? A podcast? The conversation taking place in the cubicle next to yours? The football game you’re watching on TV while your domestic partner would like to have a word about your continuing predilection to leave your socks on the bedroom floor? Exactly when, and where, and especially why, are you going to deploy your new skills?

Or imagine this more concrete scenario: your teenaged child wants some advice from you about how to help a depressed school classmate, and at the same time your phone rings and it is a cold-call insurance sales representative who asks some questions about your financial security. I’m going to go out on a limb here and bet that you will decide that talking with your teenager is the higher-priority moment to deploy your new listening skills.

Moments that matter

So what makes this the better choice? It is all about the importance of the moment, and the extent to which it enables you to accomplish a goal that is worthwhile to you. This request from your teenager is what we call in our HPLJ lexicon a “moment-that-matters” (MTM). And your ability to identify and prioritize the moments-that-matter in your life is the key to using soft skills to improve your performance.

Here are some likely MTMs in different job roles where a listening skill might really pay off:

  • An HR team member assigned to conduct exit interviews with disgruntled employees in an organization that is facing a talent drain and doesn’t understand why people are departing.
  • A large-account sales rep who has been asked by a key prospect to meet to discuss a possible application of their high-margin product in a new venture.
  • A manager of a business unit with dropping performance who is meeting with a group of employees who have some ideas on how to increase production.
  • A trusted colleague who offers to give you some advice on how to motivate your new team member, a person with whom she has worked in the past.

A moment-that-matters is a scenario in a person’s job that, if handled effectively, will help achieve a worthwhile goal and afford an opportunity to use a newly learned skill. These MTMs will vary, of course: among job roles, with daily variations in job circumstances, and according to different people’s natural skill strengths and weaknesses.

Soft skills need impact-assuring elements

For sure, the success of any soft-skill training program depends on whether your program is efficacious enough to build the soft skills it aims at. But in most programs worth their salt, that efficacy is a given; it is the easy part. The make-or-break part for payoff comes when participants are back in their jobs – using, or not using, their new skills. And given the poor track record of typical soft-skill training transfer, this means we need impact-assuring elements in our soft-skills initiatives.

This is the key to making sure you milk your soft-skill programs for every ounce of ROI they can produce. Build in plenty of tools and exercises that will help participants analyze their own job performance needs and circumstances and then identify the unique and key MTMs where their new soft-skill expertise can be deployed fruitfully. Structure practice opportunities so they can try them out: first in safer, MTM-like circumstances, and later in increasingly challenging actual MTMs. This makes soft skills “hard” – easy for participants to apply in ways that will pay off, and easy to measure and evaluate.

Don’t let the calendar determine who gets what training, when!

When we conduct training impact evaluation studies, we interview samples of participants who got great results from applying their learning, and also samples of those who got little or no value. 

We know that training success depends on a lot more than the quality of the training content and methods themselves. So, we always try to identify the contextual factors that helped support success, and those factors that impeded success.

We learn a lot about when training works and why it works; and when it does not work, what gets in the way. One lesson is so simple and blatantly obvious that it is easy to overlook.

What is that lesson? Read these two interview excerpts and figure it out.

Background: The program we were evaluating was team-building and leadership training for high-potential leaders in a globally diverse Fortune 50 company.

Participant A: Used the training frequently over the past six months and put together a team that opened a new market in the Far East that is out-selling competitors and ranks in the top 15% for new sales company-wide.

Interviewer: “You have no doubt gotten training in your career that just did not seem to score for you, but this one, according to you, was hugely helpful. What do you think was the difference? Why did this prove so helpful to you?”

Participant A: “You’re right about that. I’ve spent time in training in the past that frankly was a waste of my time. But this one hit me at exactly the right time. It was not like this content was brand new to me; I have an MBA and had heard most of this stuff before. But I had just been assigned a new leadership role in a struggling business unit. The tools, the exercises, and the feedback I got from this program were just what I needed to recall and do, at just the right time for me to put it to use. This training was my ‘crutch’ that got me through the first three months of putting together a new team and re-motivating the older team members. Without this training, I cringe to think what might have happened. It could have been bad.”

Participant B:  Got the same training in the same cohort as Participant A, but made no use of it.

Interviewer: “We wanted to talk to you because you, like some others in this program, seemed to get no value from it. What do you think went wrong?”

Participant B: “Well, it was certainly no fault of the training leaders or the program. That was all pretty darn good in my view. I’ve certainly seen worse in my career. But at the time I was on leave for six weeks to try to straighten out some family issues. To be honest, my mind was barely on my work at all, and I just had no mental space for making any kind of change. Fortunately, my work was running OK on its own. We were holding relatively steady in sales and margins, and my team was fine at keeping the status quo. In retrospect, I never should have signed up for the program. But I thought, hey, I have some time free, why not at least do something? So I enrolled.”

There you have it. Over our several decades of doing these impact studies, Participant A and Participant B have become familiar acquaintances! We could not count the times when we ask “So, how did you get into this program?” and hear something like “Well, it fit my calendar pretty well – I had some time and opportunity open, so I went for it.”

Our advice? Find some Participant A and Participant B examples in your own organization. Tell their stories, and advise your L&D clients and customers that if they want to leverage their training investments into results, they should let need and opportunity-to-apply drive enrollment, not the calendar!

Leveraging Learning and Development Investments into Worthwhile Business Value

A wise man is purported to have said that the more things change, the more they stay the same. The challenge that training leaders face today is both the same and changing. What remains the same is that training leaders must produce worthwhile business results. What has changed is that they must produce them faster than ever, with greater impact, from shrinking resources. Shifting global markets, emerging new technologies, and uncertain economic conditions require organizations to develop new strategic priorities ever more frequently. The even greater challenge is to get employees to execute each new initiative as it arrives.

But training as an organizational function remains stuck with the fundamental weakness it has had for decades. Research, and our own dozens of evaluation studies, show that less than 20 percent of trainees take what they learned in training back to the job and use it in ways that are aligned with strategic goals and likely to achieve worthwhile results for their businesses.

If we look at training as only a sort of staff benefit, then maybe these results are tolerable. To be sure, learning and development has staff-benefit value: it is impossible to recruit, develop, and retain employees without it. But conceiving of training as only a staff benefit is a fatal error, for then it will always be just a cost and a vulnerable overhead expense.

Inevitable new business scenarios – a merger, a new product launch, a new strategic direction – require training that achieves impact rates of 80% or better. Too many companies may be betting the business on defective training. Training has to drive performance change and improvement; if it does not, strategic initiatives die from failure to execute. At a 20% impact rate, by the time training produces a critical mass of employees actually executing a new strategy, the strategy is obsolete and it is time to launch a new one.

Good News and Bad News

In the past few years, we have dug deep into the causes of poor training and development results.

We almost always find that some percentage of trainees apply critical elements of their new learning in improved job performance, making substantial contributions to important business goals and strategies. The good news is that training works. The bad news is that it doesn’t work this well often enough, with enough trainees, and so the typical training initiative leaves a lot of impact – and thereby a lot of money – on the table.

There is more good news. The reason this larger percentage of trainees does not effectively apply their training is NOT that the training itself is a failure; we already know it works very well for some people. The majority of the participants who did not apply their learning (i.e., did not execute the strategic behaviors) did not fail because they did not learn the right stuff. Instead, they failed to apply what they learned because they too often encountered one or more negative performance system factors. When we do a better job of managing these key performance system factors, we see dramatically improved results: more people using their training as well as the few best ones did.

Here are some of the most important factors we consistently find that cause training to fail:

  • Senior leadership did not understand or believe in the business need for the training and thus failed to support it rigorously and thoroughly
  • Trainees were sent to training without adequate preparation; they did not have a clear line of sight as to why the training was important, exactly what they most needed to learn, and how they could use it to drive their (and their business unit’s) performance
  • Trainees got trained at the wrong time, when they were not positioned to make the most of it in their work. The calendar, not their needs, drove their involvement.
  • Managers did not support, reinforce, or hold employees accountable for applying their new learning in their jobs
  • The managers of trainees’ managers failed to hold their direct reports accountable for coaching and supporting their employees’ efforts to apply their training
  • Incentives and other performance factors were misaligned with applying the learning in new job behaviors

Note that none of these failure causes are centered in the training events and materials; they are failures of the “operating system”. And they can be relatively easily mitigated and managed positively, as our High Impact Learning converts have demonstrated time and again.

Getting Out of the Morass

There are two fronts on which we must fight the battle to turn training results around. First, we have to expand our thinking beyond simply delivering a powerful learning event, and manage the larger process: getting senior leaders involved, getting managers to prepare and support trainees, and so forth. This helps us improve results – for a while.

This takes us to the second battlefront. We must begin a long-term strategy to educate the larger organization and change the way training is perceived and managed. Getting impact from training is the responsibility of the whole organization, since the critical failure factors lie outside the bureaucratic purview of the L&D function. Accountability for training impact cannot be delegated to a training department. Teaching this lesson and beginning the cultural transformation, however, lies squarely in our laps. Evaluation and measurement are the tools that can best help us begin and complete the journey.

We have to be relentless in measuring and evaluating the results we get: not only measuring the business impact of the training, but assessing who did what to help (or hinder) impact. And we have to provide that feedback to all of the stakeholders in the learning-to-performance process. We do not evaluate training; instead, we evaluate how well the organization is using training to get results – what’s working, and what is not. If some managers really did their jobs in preparing trainees, for example, then our evaluation should dig those facts out and document the good work they did and the results it brought them, their employees, and the business. And we must also show what was lost when the performance support chain was broken, so that senior leaders can see that there is a true business case for holding their managers accountable for supporting training and development.

The Prescription is Simple:

  • Stop just “delivering” training; start building methods and tools for the organization to use to be sure training sticks and gets results
  • Educate senior leaders and managers about their role in making training work. Show them what’s at stake when it works, and what’s at risk when it doesn’t
  • Relentlessly measure the results you get, and show how performance system factors made or broke success
  • Tell the truth about training results. When training does not work, say so clearly. Be sure everyone understands what the reasons for failure were, what costs were incurred, and what benefits were not realized.
  • Provide feedback to all the stakeholders in the value chain so they can clearly see how their support (or lack of it) makes a difference
  • Tell the story loud and clear. Only when you make a strong business case for managing training as a process will you build the organization that gets consistently great results.