Reality bites

Evidence-based practice is the darling of Learning & Development geeks.

And with good reason. Amongst all the myths and barrow-pushing, not to mention the myriad ways to approach any given problem, empiricism injects a sorely needed dose of confidence into what we do.

Friends of my blog will already know that my undergraduate degree was in science, and although I graduated a thousand years ago, it forever equipped me with a lens through which I see the world. Suffice it to say my search for statistical significance is serious.

But it is through this same lens that I also see the limitations of research. Hence I urge caution when using it to inform our decisions.

A scientist looking through a microscope.

For instance, I recently called out the “captive audience” that characterises many of the experiments undertaken in the educational domain. Subjects who are compelled to participate in these activities may not behave as they would when participation is voluntary; which could, for example, complicate your informal learning strategy.

Hence studies undertaken in the K-12 or higher education sectors may not translate so well to the corporate sector, where the dynamics of the environment are different. But even when a study is conducted under more authentic or relevant conditions, its results are unlikely to be universally representative. By design, an experiment locks down its variables, which makes comparing apples to apples across contexts a challenge.

In short, all organisations are different. They have different budgets, systems, policies, processes, and most consequentially, cultures. So if a study or even a meta-analysis demonstrates the efficacy of the flipped classroom approach to workplace training, it mightn’t replicate at your company because no one does the pre-work.

Essentially it’s a question of probability. Subscribing to the research theoretically shifts the odds of success in your favour, but it’s not a sure thing. You never know where along the normal distribution you’re gonna be.

The Trolley Problem originally illustrated by Jesse Prinz and adapted by Kareem Carr.

My argument thus far has assumed that quality research exists. Yet despite the plethora of journals and books and reports and articles, we’ve all struggled to find coverage of that one specific topic. Perhaps it’s behind a paywall, or the experimental design is flawed, or the study simply hasn’t been done.

So while we may agree in principle that evidence-based practice is imperative, the reality of business dictates that in fact much of that evidence is lacking. In its absence, we have no choice but to rely on logic, experience, and sometimes gut instinct; informed by observation, conversation, and innovation.

In your capacity as an L&D professional, you need to run your own experiments within the construct of your own organisation. By all means take your cues from the available research, but do so with a critical mindset, and fill in the gaps with action.

Find out what works in your world.

The point of preference

I often miss webinars.

That might be because it was delivered at 3:00am local time, or during a manic working day, or after hours when frankly I’m not in the mood.

So I’m grateful when the event is recorded and I can play it back later. But, more often than not, I don’t do that either.

There’s just something about a 1-hour recording that turns me off. Unless the topic is irresistible, it’s too easy to move on to more pressing matters.

I’d rather read a summary of the key points – and according to the likes received by my recent tweet and the results of a poll I ran on LinkedIn, I’m not alone. In this era of time poverty and relentless distractions, it’s no surprise.

Yet there are good reasons to watch the footage. As Anna Sabramowicz points out, “Taking notes makes me remember the key points more, plus there’s so much nuance in the language people use, a summary might just diminish that. Context matters of course!” Indeed it does.

A LinkedIn poll answering the question “Which would you prefer?” with 22% for “Play a webinar recording” and 78% for “Read a summary”.

Of course there are no wrong answers here. Your preference is your prerogative, after all. And it is this point of preference that I maintain is missed in the discourse about learning styles.

As any L&D professional worth their salt will know, there is no weight of evidence to support the widely disseminated notion that some people learn more effectively visually, while others learn more effectively verbally. Case in point: whether I read a text or watch a video about a certain topic, I’d be confident of nailing a quiz about it, so the mode of delivery doesn’t really matter.

However, empirical research in the educational domain is usually based on what L&D pros with a dark sense of humour would call a “captive audience”. Whether it’s a high school exam, a university assessment, or perhaps a mandatory online module, the participants are compelled to take part.

But as the 70:20:10 model attests, this does not represent the real world most of the time – especially in the corporate sector. In this largely informal learning environment, the mode of delivery does matter because the decision to participate rests with the individual.

In other words, regardless of some people’s preference to read text over watching a video, they’ll learn just fine by watching the video. It’s just that they’ll probably choose not to. So in the absence of the text, they’ll end up not learning anything.

The upshot is that if you’re a webinar producer, you could be failing to meet the needs of the majority of your target audience. Take the results of my research (unscientific as it is) as an incentive to write up summaries for those who want them. As Donald Clark points out, “AI can do it all in seconds… and will.” So there’ll be no excuse.

And if you’re smart about it, the information you share in the summary will compel us to play the video.