Author: Ryan Tracey

Learning & Development innovation specialist.

Supercharge your digital training

We’ve all been there.

The organisation invests an obscene amount of money in a course library, and after much fanfare and an initial spike of excitement, activity steadily dwindles until the platform resembles a ghost town vacated by all but the most enthusiastic of fans.

Similar problems with learner engagement beset other forms of digital training too, whether it’s the famously low completion rates of MOOCs or the constant chasing up of laggards who are yet to complete their compliance modules.

So when David Swaddle called out for tips to help fight what he described as “zombie digital learning”, I was all too willing to share the Top 3 pieces of advice that I’ve formulated on my quest to transform conventional digital training into blended learning experiences.

Here they are…

Rusty old car in front of a deserted shack.

1. Make time

Everyone’s too busy and they don’t have enough time to devote to their own learning and development. This has been the case ever since I started my career in this field and probably will remain so long after I retire.

So make time.

Add reminders into your participants’ calendars; schedule learning blocks; benchmark progress by declaring where they should be up to by now; and host a complementary social networking group to keep the flame alive.

2. Provide context

Digital content can be generic by design, because it’s intended to scale up far and wide. However our audience may struggle to join the dots between what they see on screen and what they do on the job.

By supplementing the generic content with customised content, we can explain the implications of the former in the organisational context.

And by facilitating live interactive sessions that explore that context further, we reinforce it.

3. Assess application

Fair reputation or not, digital training is notorious for being a tick & flick exercise that fails to change behaviour in the real world.

So we need to ensure that the knowledge and skills that are developed via the learning experience are transferred by the employee to their day-to-day role.

By weaving an application activity into the instructional design – and by assessing the evidence of that application – we make it happen.

Electric sports car recharging

These are by no means the only ways to evolve your digital training.

However I hope that by implementing my three tips, you’ll supercharge it.

I don’t know

Despite its honesty, the humble phrase “I don’t know” is widely feared.

From the fake-it-til-you-make-it mindset of consultants to the face-saving responses of executives, we puny humans are psychologically conditioned to have all the answers – or at least be seen to.

Of course, demanding all the answers is the premise of summative assessment, especially when it’s in the form of the much maligned multiple-choice quiz. And our test takers respond in kind – whether it’s via “when in doubt, pick C” or by madly selecting the remaining options in a quasi zig-zag pattern as they run out of time.

But that’s precisely the kind of behaviour we don’t want to see on the job! Imagine your doctor wondering if a symptom pertains to the heart, kidney, liver or gall bladder, and feeling content to prescribe you medication for the third one. Or any random one in the 15th minute.

Of course my comparison is extreme for effect, and it may very well be inauthentic; after all, the learned doctor would almost certainly look it up. But I’d like to reiterate that in a typical organisational setting, having all the information we need at our fingertips is a myth.

Moreover, as Schema Theory maintains, an efficient and effective worker quickly retrieves the knowledge they need on a daily basis from the network they’ve embedded in their long-term memory. We can’t have our contact centre staff putting our customers on hold every 5 seconds while they ask their team leader yet another question, or our plumber shrugging his shoulders at every tap or toilet he claps his eyes on until he reads a manual. Of course, these recourses are totally acceptable… if they’re the exception rather than the rule.

And notwithstanding being a notch or two less serious than the life and death scenarios with which doctors deal, it wouldn’t be much fun if your loan or lavatory were the subject of a blind guess.

So yes, we humans can never know it all. And what we don’t know, we can find out. But the more we do know, the better we perform.

Two dice showing double sixes

Thus we don’t want our colleagues gaming their assessments. Randomly guessing a correct answer falsely indicates knowledge they don’t really have, and hence the gap won’t be remediated.

So I propose we normalise “I don’t know” as an answer option.

Particularly if a recursive feedback approach were to be adopted, a candid admission of ignorance motivated by a growth mindset would be much more meaningful than a lucky roll of the dice.
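To make the idea concrete, here’s a minimal scoring sketch in Python. Every name in it (the Response record, the score_attempt function, the topic labels) is hypothetical rather than drawn from any particular assessment tool; the point is simply that an admitted gap gets captured for remediation instead of being lumped in with, or masked by, guesswork.

    # A minimal sketch: score "I don't know" as an admitted gap, not a wrong guess.
    # All names here are hypothetical - adapt to your own assessment engine.
    from dataclasses import dataclass

    IDK = "I don't know"

    @dataclass
    class Response:
        topic: str      # the knowledge area the question probes
        answer: str     # the option the learner selected
        correct: str    # the keyed correct option

    def score_attempt(responses):
        """Return the raw score plus the topics flagged for remediation."""
        score = 0
        admitted_gaps = []   # learner said "I don't know" - remediate these first
        possible_gaps = []   # wrong answers - may also hide lucky guesses elsewhere
        for r in responses:
            if r.answer == IDK:
                admitted_gaps.append(r.topic)
            elif r.answer == r.correct:
                score += 1
            else:
                possible_gaps.append(r.topic)
        return score, admitted_gaps, possible_gaps

The admitted gaps then feed straight into the next remediation cycle, which is precisely the information a lucky roll of the dice would have hidden.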

I don’t mean to underestimate the shift in culture that would be necessary to effect such a change, but I contend the benefits would be worth it – both to the organisation and to the individual.

In time, maybe identifying your own knowledge gaps with a view to continuously improving your performance will displace getting it right in the test and wrong on the job.

Approaching perfection

I’ve never understood the rationale of the 80% pass mark.

Which 20% of our work are we prepared to do wrongly?

It might explain the universally poor state of CX that companies are evidently willing to wear, but it’s arguably more serious when we consider the acronym-laden topics that are typically rolled out via e-learning, such as OHS and CTF. Which 20% of safety are we willing to risk? Which 20% of terrorism are we willing to fund?

There has to be a better way.

I’ve previously contended that an assessment-first philosophy renders the concept of a pass mark obsolete, but went on to state that such a radical idea is a story for another day. Well my friends, that day has arrived.

An arrow pointing from Diagnose to Remediate then back to Diagnose.

Recursive feedback

Back in 2016, the University of Illinois’ excellent MOOC e-Learning Ecologies: Innovative Approaches to Teaching and Learning for the Digital Age piqued my interest in the affordance of “recursive feedback” – defined by the instructor as rapid and repeatable cycles of feedback or formative assessment, designed to continually diagnose and remediate knowledge gaps.

I propose we adopt a similar approach in the corporate sector. Drop the arbitrary pass mark, while still recording the score and completion status in the LMS. But don’t stop there. Follow it up with cycles of targeted intervention to close the gaps, coupled with re-assessment to refresh the employee’s capability profile.
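For the technically minded, here’s what that proposal might look like as a sketch in Python. The three helper functions are stubs standing in for your own assessment engine and LMS integration; every name and value is hypothetical.

    # A minimal sketch of the recursive feedback cycle: diagnose, record, remediate, re-assess.
    # The three helpers are stubs - replace them with your own assessment engine and LMS calls.

    def diagnose(employee, assessment):
        """Run the assessment; return (score, list of gap topics). Stub with illustrative values."""
        return 70, ["complaints handling"]

    def remediate(employee, gaps):
        """Assign targeted interventions for the gap topics. Stub."""
        print(f"Assigning remediation to {employee} for: {gaps}")

    def record_in_lms(employee, assessment, score, complete=True):
        """Write the score and completion status to the LMS, with no pass mark applied. Stub."""
        print(f"LMS record: {employee} scored {score} on {assessment} (complete={complete})")

    def recursive_feedback(employee, assessment, max_cycles=4):
        """Keep diagnosing and remediating until no gaps remain, or the cycles run out."""
        score, gaps = diagnose(employee, assessment)
        for _ in range(max_cycles):
            record_in_lms(employee, assessment, score)
            if not gaps:
                break                                        # capability profile is (currently) gap-free
            remediate(employee, gaps)
            score, gaps = diagnose(employee, assessment)     # re-assess to refresh the profile
        return score, gaps

Note that the score is recorded every cycle, so the LMS reflects the employee’s current capability profile rather than a single point-in-time pass or fail.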

Depending on the domain, our people may never reach a score of 100%. Or if they do, they might not maintain it over time. After all, we’re human.

However the recursive approach isn’t about achieving perfection. It’s about continuous improvement approaching perfection.

One arrow with a single red dot; another arrow with a wavy green line.

Way of working

While the MOOC instructor’s notion of recursive feedback aligns to formative assessment, my proposal aligns it to summative assessment. And that’s OK. His primary focus is on learning. Mine is on performance. We occupy two sides of the same coin.

To push the contrarianism even further, I’m also comfortable with the large-scale distribution of an e-learning module. However, where such an approach has notoriously been treated as a tick & flick, I consider it a phase in a longer term strategy.

After the remediation effort, I see no sense in retaking the entire e-learning module. Rather, a micro-assessment approach promotes operational efficiency – not to mention employee sanity – without sacrificing pedagogical effectiveness.
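Sticking with the hypothetical code from earlier, a micro-assessment might be as simple as drawing a handful of items from an item bank, covering only the topics still flagged as gaps, rather than re-running the whole module. Again, the names and data are made up.

    # A minimal sketch of micro-assessment: re-test only the outstanding gap topics.
    import random

    def build_micro_assessment(item_bank, gaps, items_per_topic=2):
        """Draw a short quiz covering only the topics still flagged as gaps."""
        quiz = []
        for topic in gaps:
            pool = item_bank.get(topic, [])
            quiz.extend(random.sample(pool, min(items_per_topic, len(pool))))
        return quiz

    # Example usage with a made-up item bank
    item_bank = {
        "complaints handling": ["Q12", "Q13", "Q14"],
        "privacy": ["Q7", "Q8"],
    }
    print(build_micro_assessment(item_bank, gaps=["complaints handling"]))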

In this way, recursive feedback becomes a way of working.

And the L&D department’s “big bang” initiatives can be saved for the needs that demand them.

The right stuff

Well that was unexpected.

When I hit the Publish button on Not our job, I braced myself for a barrage of misunderstanding and its evil twin, misrepresentation.

But it didn’t happen. On the contrary, my peers who contacted me about it were downright agreeable. (A former colleague did politely pose a comment as a disagreement, but I happened to agree with everything she stated.)

I like to think I called a spade a spade: we’re responsible for learning & development; our colleagues are responsible for performance; and if they’re willing to collaborate, we have value to add.

Bar graph showing the impact of your ideas inside your brain much lower than the impact of your ideas when you put them out there.

The post was a thought bubble that finally precipitated after one sunny day, a long time ago, when Shai Desai asked me why I thought evaluation was so underdone by the L&D profession.

My post posited one reason – essentially, the inaccessibility of the data – but there are several other reasons closer to the bone that I think are also worth crystallising.

1. We don’t know how to do it.

I’m a Science grad, so statistical method is in my blood, but most L&D pros are not. If they haven’t found their way here via an Education or HR degree, they’ve probably fallen into it from somewhere else à la Richard in The Beach.

Which means they don’t have a grounding in statistics, so concepts such as regression and analysis of variance are alien and intimidating.

Rather than undertake the arduous journey of learning it – or worse, screw it up – we leave it well alone.
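For anyone curious what those techniques actually look like in practice, here’s a hedged sketch using scipy and entirely made-up numbers: a one-way analysis of variance comparing post-training assessment scores across three delivery formats, and a simple linear regression checking whether performance fades in the weeks after training.

    # A hedged sketch of the two techniques mentioned above, with illustrative data only.
    # Requires scipy (pip install scipy).
    from scipy import stats

    # One-way ANOVA: do post-training scores differ across three delivery formats?
    elearning = [72, 68, 75, 80, 71]
    blended   = [78, 85, 82, 79, 88]
    classroom = [74, 77, 70, 81, 76]
    f_stat, p_value = stats.f_oneway(elearning, blended, classroom)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")   # a small p-value suggests a real difference

    # Simple linear regression: does performance fade with weeks since training?
    weeks_since_training = [1, 2, 4, 8, 12, 16]
    performance_score    = [88, 85, 84, 79, 75, 72]
    result = stats.linregress(weeks_since_training, performance_score)
    print(f"Regression: slope = {result.slope:.2f} points/week, r^2 = {result.rvalue**2:.2f}")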

2. We’re too busy to do it.

This is an age-old excuse for not doing something, but in an era of furloughs, restructures and budget freezes, it’s all too real.

Given our client’s ever-increasing demand for output, we might be forgiven for prioritising our next deliverable over what we’ve already delivered.

3. We don’t have to do it.

And it’s a two-way street. The client’s ever-increasing demand for output also means they prioritise our next deliverable over what we’ve already delivered.

If they don’t ask for evaluation, it’s tempting to leave it in the shadows.

4. We fear the result.

Even when all the planets align – we can access the data and we’ve got the wherewithal to use it – we may have a sneaking suspicion that the outcome will be undesirable. Either no significant difference will be observed, or worse.

This fear will be exacerbated when we design a horse, but are forced by the vagaries of corporate dynamics to deliver a camel.

A woman conjuring data from a tablet.

The purpose of this post isn’t to comment on the ethics of our profession nor lament the flaws of the corporate construct. After all, it boils down to human nature.

On the contrary, my intention is to expose the business reality for what it is so that we can do something about it.

Previously I’ve shared my idea for a Training Evaluation Officer – an expert in the science of data analysis, armed with the authority to make it happen. The role builds a bridge that connects learning & development with performance, keeping those responsible for each accountable to one another.

I was buoyed by Sue Wetherbee’s comment proposing a similar position:

…a People & Culture (HR) Analyst Business Partner who would be the one to funnel all other information to across all aspects of business input to derive “the story” for those who order it, pay for it and deliver it!

Sue, great minds think alike ;-)

And I was intrigued by Ant Pugh’s Elephant In The Room in which he challenges the assumption that one learning designer should do it all:

Should we spend time doing work we don’t enjoy or excel at, when there are others better equipped?

Just because it’s the way things are, doesn’t mean it’s the way things should be.

I believe a future exists where these expectations are relinquished. A future where the end result is not dictated by our ability to master all aforementioned skills, but by our ability to specialise on those tasks we enjoy.

How that will manifest, I don’t know (although I do have some ideas).

Ant, I’m curious… is one of those ideas an evaluation specialist? Using the ADDIE model as a guide, that same person might also attend to Analysis (so a better job title might be L&D Analyst) while other specialists focus on Design, Development and Implementation.

Then e-learning developers mightn’t feel the compulsion to call themselves Learning Experience Designers, and trainers won’t be similarly shamed into euphemising their titles. Specialists such as these can have the courage to embrace their expertise and do what they do best.

And important dimensions of our work – including evaluation – won’t only be done. They’ll be done right.

Not our job

Despite the prevailing rhetoric for the Learning & Development function to be “data driven”, data for the purposes of evaluating what we do is notoriously hard to come by.

Typically we collect feedback from happy sheets (which I prefer to call unhappy sheets) and confirm learning outcomes via some form of assessment.

In my experience, however, behavioural change is reported much less often, and anything to do with business metrics less often still. While I recognise multiple reasons for the latter in particular, one of them is simply the difficulty we mere mortals have in accessing the numbers.

Which has been a long-standing mystery to me. We’re all on the same team, so why am I denied the visibility of the information I need to do my job?

I’ve always suspected the root cause is a combination of human foibles (pride, fear, territoriality), substandard technology (exacerbated by policy) and a lack of skill or will to use the technology even when it is available.

Notwithstanding these ever-present problems, it’s been dawning on me that the biggest blocker to our ability to work with the numbers is the fact that, actually, it’s not our job.

Business woman presenting data to two colleagues

Consider a bank that discovers a major pain point among its customers is the long turnaround time on their home loan applications. To accelerate throughput and thus improve the customer experience, the C-suite makes a strategic decision to invest in an AI-assisted processing platform.

I contend the following:

  • It’s the job of the implementation team to ensure the platform is implemented properly.
  • It’s the job of the L&D team to build the employees’ capability to use it.
  • It’s the job of the service manager to report the turnaround times.
  • It’s the job of the CX researchers to measure the customer experience.
  • It’s the job of the C-suite to justify their strategy.

In this light, it’s clear why we L&D folks have so much trouble trying to do the other things on the list that don’t mention us. Not only are we not expected to do them, but those who are don’t want us to do them.

In short, we shouldn’t be doing them.

Caveat

At this juncture I wish to caution against conflating learning & development with performance consulting.

Yes, learning & development is a driver of performance, and an L&D specialist may be an integral member of a performance centre, but I urge anyone who’s endeavouring to rebrand their role as such to heed my caveat.

My point here is that if you are responsible for learning & development, be responsible for it; and let those who are responsible for performance be responsible for it.

Value

Having said that, there is plenty we should be doing within the bounds of our role to maximise the performance of the business. Ensuring our learning objectives are action-oriented and their assessment authentic are two that spring to mind.

And I don’t wish to breathe air into the juvenile petulance that the phrase “not my job” can entail. On the contrary, we should be collaborating with our colleagues on activities related to our remit – for example training needs analysis, engineering the right environmental conditions for transfer, and even Level 4 evaluation – to achieve win-win outcomes.

But do it with them, not for them, and don’t let them offload their accountability for it being done. If they don’t wish to collaborate, so be it.

Essentially it boils down to Return on Expectation (ROE). In our quest to justify the Return on Investment (ROI) of our own service offering, we need to be mindful of what it is our financiers consider that service to be.

Anything beyond that is an inefficient use of our time and expertise.