
Supercharge your digital training

We’ve all been there.

The organisation invests an obscene amount of money in a course library, and after much fanfare and an initial spike of excitement, activity steadily dwindles until the platform resembles a ghost town vacated by all but the most enthusiastic of fans.

Similar problems with learner engagement beset other forms of digital training too, whether it’s the famously low completion rates of MOOCs or the constant chasing up of laggards who are yet to complete their compliance modules.

So when David Swaddle put out a call for tips to help fight what he described as “zombie digital learning”, I was all too willing to share the Top 3 pieces of advice that I’ve formulated on my quest to transform conventional digital training into blended learning experiences.

Here they are…

Rusty old car in front of a deserted shack.

1. Make time

Everyone’s too busy; no one has enough time to devote to their own learning and development. That has been the case ever since I started my career in this field, and it will probably remain so long after I retire.

So make time.

Add reminders into your participants’ calendars; schedule learning blocks; benchmark progress by declaring where they should be up to by now; and host a complementary social networking group to keep the flame alive.

2. Provide context

Digital content can be generic by design, because it’s intended to scale up far and wide. However our audience may struggle to join the dots between what they see on screen and what they do on the job.

By supplementing the generic content with customised content, we can explain the implications of the former in the organisational context.

And by facilitating live interactive sessions that explore that context further, we reinforce it.

3. Assess application

Fair or not, digital training is notorious for being a tick & flick exercise that fails to change behaviour in the real world.

So we need to ensure that the knowledge and skills that are developed via the learning experience are transferred by the employee to their day-to-day role.

By weaving an application activity into the instructional design – and by assessing the evidence of that application – we make it happen.

Electric sports car recharging

These are by no means the only ways to evolve your digital training.

However I hope that by implementing my three tips, you’ll supercharge it.

I don’t know

Despite its honesty, the humble phrase “I don’t know” is widely feared.

From the fake-it-til-you-make-it mindset of consultants to the face-saving responses of executives, we puny humans are psychologically conditioned to have all the answers – or at least be seen to.

Of course, demanding all the answers is the premise of summative assessment, especially when it’s in the form of the much-maligned multiple-choice quiz. And our test takers respond in kind – whether it’s via “when in doubt, pick C” or by madly selecting the remaining options in a quasi zig-zag pattern as they run out of time.

But that’s precisely the kind of behaviour we don’t want to see on the job! Imagine your doctor wondering if a symptom pertains to the heart, kidney, liver or gall bladder, and feeling content to prescribe you medication for the third one. Or any random one as the clock runs out.

Of course my comparison is extreme for effect, and it may very well be inauthentic; after all, the learned doctor would almost certainly look it up. But I’d like to reiterate that in a typical organisational setting, having all the information we need at our fingertips is a myth.

Moreover, as Schema Theory maintains, an efficient and effective worker quickly retrieves the knowledge they need on a daily basis from the network they’ve embedded in their long-term memory. We can’t have our contact centre staff putting our customers on hold every 5 seconds while they ask their team leader yet another question, or our plumber shrugging his shoulders at every tap or toilet he claps his eyes on until he reads a manual. Of course, these recourses are totally acceptable… if they’re the exception rather than the rule.

And while it’s a notch or two less serious than the life-and-death scenarios with which doctors deal, it wouldn’t be much fun if your loan or lavatory were the subject of a blind guess.

So yes, we humans can never know it all. And what we don’t know, we can find out. But the more we do know, the better we perform.

Two dice showing double sixes

Thus we don’t want our colleagues gaming their assessments. Randomly guessing a correct answer falsely indicates knowledge they don’t really have, and hence the gap won’t be remediated. Consider the arithmetic: on a four-option quiz, someone who truly knows 60% of the content and guesses the rest will average 60% + (40% ÷ 4) = 70%, comfortably clearing a typical pass mark despite a 40% knowledge gap.

So I propose we normalise “I don’t know” as an answer option.

Particularly if a recursive feedback approach were to be adopted, a candid admission of ignorance motivated by a growth mindset would be much more meaningful than a lucky roll of the dice.
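To make this concrete, here’s a minimal sketch – in Python, with entirely hypothetical names rather than any real LMS API – of how a quiz engine might treat “I don’t know”: both it and a wrong guess flag a gap for remediation, but only the admission is recorded as honest, so no false signal of knowledge enters the capability profile.

    IDK = "I don't know"

    def score_quiz(responses, answer_key):
        """Return the proportion correct, plus the gaps to remediate.

        responses  -- maps question id to the option the candidate selected
        answer_key -- maps question id to the correct option
        """
        score = 0
        gaps = []  # feeds the next remediation cycle
        for question, correct in answer_key.items():
            selected = responses.get(question, IDK)  # unanswered counts as IDK
            if selected == correct:
                score += 1
            else:
                # A wrong guess and an honest IDK both flag a gap, but we
                # note which it was: the admission carries no false signal.
                gaps.append({"question": question, "admitted": selected == IDK})
        return score / len(answer_key), gaps

    answer_key = {"Q1": "B", "Q2": "A", "Q3": "D", "Q4": "C"}
    responses = {"Q1": "B", "Q2": IDK, "Q3": "A", "Q4": IDK}
    print(score_quiz(responses, answer_key))
    # (0.25, [{'question': 'Q2', 'admitted': True},
    #         {'question': 'Q3', 'admitted': False},
    #         {'question': 'Q4', 'admitted': True}])

Note that the wrong guess and the honest admissions all land in the same remediation queue – which is exactly the point. Neither is left to masquerade as mastery.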

I don’t mean to underestimate the shift in culture that would be necessary to effect such a change, but I contend the benefits would be worth it – both to the organisation and to the individual.

In time, maybe identifying your own knowledge gaps with a view to continuously improving your performance will displace getting it right in the test and wrong on the job.

Approaching perfection

I’ve never understood the rationale of the 80% pass mark.

Which 20% of our work are we prepared to get wrong?

It might explain the universally poor state of CX that companies are evidently willing to wear, but it’s arguably more serious when we consider the acronym-laden topics that are typically rolled out via e-learning, such as OHS (occupational health and safety) and CTF (counter-terrorism financing). Which 20% of safety are we willing to risk? Which 20% of terrorism are we willing to fund?

There has to be a better way.

I’ve previously contended that an assessment-first philosophy renders the concept of a pass mark obsolete, but went on to state that such a radical idea is a story for another day. Well my friends, that day has arrived.

An arrow pointing from Diagnose to Remediate then back to Diagnose.

Recursive feedback

Back in 2016, the University of Illinois’ excellent MOOC e-Learning Ecologies: Innovative Approaches to Teaching and Learning for the Digital Age piqued my interest in the affordance of “recursive feedback” – defined by the instructor as rapid and repeatable cycles of feedback or formative assessment, designed to continually diagnose and remediate knowledge gaps.

I propose we adopt a similar approach in the corporate sector. Drop the arbitrary pass mark, while still recording the score and completion status in the LMS. But don’t stop there. Follow it up with cycles of targeted intervention to close the gaps, coupled with re-assessment to refresh the employee’s capability profile.
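By way of illustration, here’s a self-contained sketch of that loop in Python. The toy capability profile, the odds of an intervention closing a gap, and the record() stub are all assumptions for the sake of the example – a real version would talk to the LMS and the assessment engine – but the shape of the cycle is the point: a full initial assessment, then targeted remediation and re-assessment of the flagged gaps only, with the score recorded every time and no pass mark in sight.

    import random

    def diagnose(knowledge, scope=None):
        """Assess the topics in scope (all of them on the first pass) and
        return the overall score plus the topics still showing gaps."""
        topics = scope if scope is not None else set(knowledge)
        gaps = {topic for topic in topics if not knowledge[topic]}
        score = sum(knowledge.values()) / len(knowledge)
        return score, gaps

    def remediate(knowledge, gaps):
        """Targeted intervention: each cycle closes some of the gaps."""
        for topic in gaps:
            if random.random() < 0.5:  # toy odds of the remediation sticking
                knowledge[topic] = True

    def record(cycle, score):
        """Stand-in for writing the score to the LMS."""
        print(f"cycle {cycle}: capability profile at {score:.0%}")

    # Toy capability profile: True means the topic is currently known.
    knowledge = {"hazards": True, "reporting": False, "escalation": False}

    score, gaps = diagnose(knowledge)  # initial full assessment
    record(0, score)                   # score logged; no pass mark applied
    for cycle in range(1, 5):          # continuous improvement, not perfection
        if not gaps:
            break
        remediate(knowledge, gaps)
        score, gaps = diagnose(knowledge, scope=gaps)  # re-assess the gaps only
        record(cycle, score)

Note that after the first pass, diagnose() narrows its scope to the previously flagged gaps – the micro-assessment I return to below – so the employee never sits through the whole module again.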

Depending on the domain, our people may never reach a score of 100%. Or if they do, they might not maintain it over time. After all, we’re human.

However the recursive approach isn’t about achieving perfection. It’s about continuous improvement approaching perfection.

One arrow with a single red dot; another arrow with a wavy green line.

Way of working

While the MOOC instructor’s notion of recursive feedback aligns to formative assessment, my proposal aligns it to summative assessment. And that’s OK. His primary focus is on learning. Mine is on performance. We’re two sides of the same coin.

To push the contrarianism even further, I’m also comfortable with the large-scale distribution of an e-learning module. However, where such an approach has notoriously been treated as a tick & flick, I consider it a phase in a longer-term strategy.

Post-remediation, I see no sense in the employee retaking the e-learning module. Rather, a micro-assessment approach promotes operational efficiency – not to mention employee sanity – without sacrificing pedagogical effectiveness.

In this way, recursive feedback becomes a way of working.

And the L&D department’s “big bang” initiatives can be saved for the needs that demand them.

Violets are blue

When I pressed the Publish button on Roses are red, it capped off a year of semantics for me that spilled over into this year.

In addition to my annual list of conferences in Australia for digital educators, I applied my cognitive surplus to another nine posts that dive deeper into the murky waters of meaning.

Purple petals scattered on the pages of an open book.

Bunch of pansies.

I’m keen to hear your views alongside mine, so feel free to add a comment to each of the conversations.

If you already have, I salute you!

Higher Assessment

I find it strange when a blogger doesn’t approve my comment.

I consider comments the lifeblood of my own blog, and whether they be positive or negative, classy or rude, they all add to the diversity of the conversation. If your fragile ego can’t handle that, don’t blog.

I recently submitted a constructive comment to a particular blog post – twice – and it never eventuated. A later comment by someone else did.

So rather than waste my thought bubble, I’ve decided to reproduce the thrust of it here…

Looking up at Mannheim City Water Tower

The OP was about the future of Higher Education being modular and flexible, which I agreed with. However something that caught my eye was the author’s observation that assessing prior learning via an essay or exam defeats the point of documentary evidence of previous coursework or work experience.

Yet I feel that assessment – via an essay, an exam or some other means – is the point. We wouldn’t need to rely so much on the bureaucracy if we could simply demonstrate what we know, regardless of how we came to know it.

When accrediting prior learning, a university needn’t get bogged down with evaluating myriad external permutations that may be worthy of credit, because what matters is the outcome of those permutations.

Similarly from the student’s point of view, it wouldn’t matter if they’ve done a MOOC but not paid for the certificate, or if they did a course many years ago and worked in the field thereafter. What matters is the knowledge they can demonstrate now.

As a bastion of education, the university is losing ground to external competitors. Yet it maintains a certain gravitas that I suggest can be channelled into more of an assessment-driven role for society, whereby it validates knowledge at a certain standard and awards its qualifications accordingly.

Its role in teaching and learning is retained, of course, to fill in the gaps; powered by research to keep it at the forefront of the science.