
Scaling up

In Roses are red, I proposed definitions for oft-used yet ambiguous terms such as “competency” and “capability”.

Not only did I suggest a competency be considered a task, but also that its measurement be binary: competent or not yet competent.

As a more general construct, a capability is not so readily measured in a binary fashion. For instance, the question is unlikely to be whether you can analyse data, but the degree to which you can do so. Hence capabilities are preferably measured via a proficiency scale.

Feet on scales

Of course, numerous proficiency scales already exist, and no doubt each aligns to the purpose for which it was defined.

So I wonder if a scale for the purpose of organisational development might align to the Kirkpatrick Model of Evaluation:

Level | Label            | Evidence
------|------------------|----------------------
0     | Not Yet Assessed | None
1     | Self Rater       | Self rated
2     | Knower           | Passes an assessment
3     | Doer             | Observed by others
4     | Performer        | Meets relevant KPIs
5     | Collaborator     | Teaches others

Table 1. Tracey Proficiency Scale (CC BY-NC-SA)

I contend that such a scale simplifies the measurement of proficiency for L&D professionals, and is presented in a language that is clear and self-evident for our target audience.

Hence it is (ahem) scalable across the organisation.
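
For the technically minded, the ordered nature of the scale also lends itself to a simple data structure. Here’s a minimal sketch in Python (the names are my own, not part of any standard):

```python
from enum import IntEnum

class Proficiency(IntEnum):
    """The Tracey Proficiency Scale as an ordered enumeration."""
    NOT_YET_ASSESSED = 0  # No evidence
    SELF_RATER = 1        # Self rated
    KNOWER = 2            # Passes an assessment
    DOER = 3              # Observed by others
    PERFORMER = 4         # Meets relevant KPIs
    COLLABORATOR = 5      # Teaches others

# Because the levels are ordered integers, comparisons come for free:
alice = Proficiency.DOER
if alice >= Proficiency.KNOWER:
    print(f"Alice is at level {alice.value}: {alice.name}")
```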

More than just a pretty face

I’ve blogged in favour of digital badges in the past, not because they’re colourful motivators – which arguably they are, at least for some people – but because they represent an achievement.

While the robustness of the criteria for earning a badge may be challenged, as may be the assessment of meeting said criteria, the concept holds true: a badge must be earned by demonstrating that you have done something.

What that something is remains a variable to be defined. Some badges, such as the ones popular among IT geeks, are earned by completing a training program or by passing an exam. I call these “certification badges”.

However I maintain a stronger implementation of the idea emerges when we earn the badge by successfully executing a task (or a suite of tasks). I call these “practitioner badges”.

Assorted badges, including one stating Qualified Dog-Petter

For example, you might complete a 40-hour course and pass a massive multiple-choice quiz to earn an XYZ-issued “Project Management” badge. That’s quite an achievement.

But I’d be more impressed (and more confident as an employer) if you were to demonstrate how you’ve applied the XYZ-endorsed principles to a real project in the real world, thereby earning a “Project Manager” badge. To me, that’s a greater achievement because it shifts the focus of the exercise from the activity (learning) to its outcome (performance).

In an organisational context, I see opportunities to blend the tasks to enrich the experience. For example, one task may be to apply a principle to your current project, while the next is to share your reflections on doing so via the enterprise social network, thereby facilitating not only metacognition and expert feedback, but also peer-to-peer knowledge sharing.

Celebrating the latest cohort of people who’ve earned badges in the same forum may also generate a bit of FOMO.

In any case, my point is a badge should be more than just a pretty face. I propose we distinguish between two types of badge – namely a certification badge and a practitioner badge – with the latter representing an achievement above and beyond the former.
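
To make the distinction concrete, here’s one way the two badge types might be modelled in code. It’s a rough sketch with illustrative field names – not the Open Badges specification – but it captures the crux: a badge is earned against criteria by submitting evidence.

```python
from dataclasses import dataclass, field
from enum import Enum

class BadgeType(Enum):
    CERTIFICATION = "certification"  # completing a program or passing an exam
    PRACTITIONER = "practitioner"    # executing a task (or a suite of tasks)

@dataclass
class Badge:
    name: str
    badge_type: BadgeType
    criteria: list[str]                                # what must be demonstrated
    evidence: list[str] = field(default_factory=list)  # artefacts submitted against the criteria

    def earned(self) -> bool:
        # Simplistic rule of thumb: every criterion needs a piece of evidence.
        return len(self.evidence) >= len(self.criteria)

pm = Badge("Project Manager", BadgeType.PRACTITIONER,
           criteria=["Apply the XYZ-endorsed principles to a real project"])
print(pm.earned())  # False – no evidence yet, so no badge
```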

Transformers

It seems like everyone’s spruiking the “new normal” of work.

The COVID-19 pandemic is keeping millions of previously office-bound employees at home, forcing L&D professionals to turn on a dime.

Under pressure to maintain business continuity, our profession has been widely congratulated for its herculean effort in adapting to change.

I’m not so generous.

Our typical response to the changing circumstances appears to have been to lift and shift our classroom sessions over to webinars.

In The next normal, which I published relatively early during lockdown, several of my peers and I recognised the knee-jerk nature of this response.

And that’s not really something that ought to be congratulated.

Who led the digital transformation of your company? The CEO (incorrect), The CTO (incorrect), COVID-19 (correct)

For starters, the virus exposed a shocking lack of risk management on our part. Digital technology is hardly novel, and our neglect in embracing it left us unprepared for when we suddenly needed it.

Look no further than the Higher Education sector for a prime example. They’re suffering a free-fall in income from international students, despite the consensus that people can access the Internet from other countries.

Beyond our failings with technology, the virus has also shone a light on our pedagogy. The broadcast approach that we deliver virtually today is largely a continuation of our practice pre-pandemic. It wasn’t quite right then, and it isn’t quite right now. In fact, isolation, digital distractions and Zoom fatigue probably make it worse.

I feel this is important to point out because the genie is out of the bottle. Employee surveys reveal that the majority of us either don’t want to return to the office at all, or want to split our working week between home and the office. That means while in-person classes can resume, remote learning will remain the staple.

So now is our moment of opportunity. In the midst of the crisis, we have the moral authority to mature our service offering. To innovate our way out of the underwhelming “new normal” and usher in the modern “next normal”.

In some cases that will mean pivoting away from training in favour of more progressive methodologies. While I advocate these, I also maintain that direct instruction is warranted under some circumstances. So instead of joining the rallying cry against training per se, I propose transforming it so that it becomes more efficient, engaging and effective in our brave new world.

Transformer-style toy robot

Good things come in small packages

To begin, I suggest we go micro.

So-called “bite-sized” pieces of content have the dual benefit of being not only easier to process from a cognitive load perspective, but also more responsive to the busy working week.

For example, if we were charged with upskilling our colleagues across the business in Design Thinking, we might kick off by sharing Chris Nodder’s 1.5-minute video clip in which he breaks the news that “you are not your users”.

This short but sweet piece of content piques the curiosity of the learner, while introducing the concept of Empathize in the d.school’s 5-stage model.

We’re all in this together

Next, I suggest we go social.

Posting the video clip to the enterprise social network seeds a discussion, by which anyone and everyone can share their experiences and insights, and thus learn from one another.

It’s important to note that facilitating the discussion demands a new skillset from the trainer, as they shift their role from “sage on the stage” to “guide on the side”.

It’s also important to note that the learning process shifts from synchronous to asynchronous – or perhaps more accurately, semi-synchronous – empowering the learner to consume the content at a time that is most convenient for them (rather than for the L&D department).

There is no try

Next, I suggest we go practical.

If the raison d’être of learning & development is to improve performance, then our newly acquired knowledge needs to be converted into action.

Follow-up posts on the social network shift from the “what” to the “how”, while a synchronous session in the virtual classroom enables the learner to practise the latter in a safe environment.

Returning to our Design Thinking example, we might post content such as sample questions to ask prospective users, active listening techniques, or an observation checklist. The point of the synchronous session then is to use these resources – to stumble and bumble, receive feedback, tweak and repeat; to push through the uncomfortable process we call “learning” towards mastery.

It’s important to recognise the class has been flipped. While time off the floor will indeed be required to attend it, it has become a shorter yet value-added activity focusing on the application of the knowledge rather than its transmission.

Again, it’s also important to note that facilitating the flipped class demands a new skillset from the trainer.

A journey of a thousand miles

Next, I suggest we go experiential.

Learning is redundant if it fails to transfer into the real world, so my suggestion is to set tasks or challenges for the learner to do back on the job.

Returning to our Design Thinking example, we might charge the learner with empathising with a certain number of end users in their current project, and with reporting back their reflections via the social network.

In this way our return on investment begins immediately, prior to moving on to the next stage in the model.

Pics or it didn’t happen

Finally, I suggest we go evidential.

I have long argued in favour of informalising learning and formalising its assessment. “Bums on seats” misses the point of training which, let’s remind ourselves again, is to improve performance.

How you learned something is way less interesting to me than whether you learned it – and the way to measure that is via assessment.

Returning to our Design Thinking example, we need a way to demonstrate the learner’s mastery of the methodology in a real-world context, and I maintain the past tense of open badges – they attest to what you have done – fits the bill.

In addition to the other benefits that badges offer corporates, the crux of the matter is that a badge must be earned.

Informalise learning. Formalise its assessment.

I am cognisant of the fact that my proposal may be considered heretical in certain quarters.

The consumption of content on the social network, for example, may be difficult to track and report. But my reply is “so what?” – we don’t really need to record activity, so why hide it behind the walls of an LMS?

If the openness of the training means that our colleagues outside of the cohort learn something too, great! Besides, they’ll have their own stories to tell and insights to share, thereby enriching the learning experience for everyone.

Instead it is the outcome we need to focus on, and that’s formalised by the assessment. Measure what matters, and record that in the LMS.

In other words, the disruptive force of the COVID-19 pandemic is an impetus for us to reflect on our habits. The way it has always been done is no substitute for the way it can be done better.

Our moment has arrived to transform our way out of mode lock.

Micro-learning’s unsung sibling

Micro-learning is so hot right now.

But I’m not going to deliberate over its definition. If you’re into that, check out Shannon Tipton’s Microlearning: The Misunderstood Buzzword and 7 Deadly Myths of Microlearning.

Nor am I going to try to convince you to jump on board, or abandon ship.

Instead, I’m going to consider firstly how micro-learning might be used in a corporate training context; and secondly, pivot towards something slightly different.

And if you were to find any value in these musings, I’d be delighted.

How micro-learning might be used

The nature of micro-learning lends itself to the campaign model.

Independent but related packets of content that are distributed over time can be woven into the working day of the target audience, and hence reduce time “off the floor”. In this context, the micro-learning is the training.

Similarly I see an opportunity for micro-learning to be deployed before the training. The content can prime the target audience for the experience to follow, perhaps in the form of a flipped class.

And of course I also see an opportunity for micro-learning to be deployed after the training: what one may call “reinforcement” to improve retention and increase the probability of knowledge transfer.
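
To illustrate the campaign model, here’s a minimal sketch (the function and parameters are my own invention) that drip-feeds packets of content before and after a training event:

```python
from datetime import date, timedelta

def schedule_campaign(training_day, primers, reinforcements, gap_days=3):
    """Spread micro-learning packets around a training event:
    primers beforehand to prime the audience, reinforcements
    afterwards to improve retention."""
    schedule = []
    for i, packet in enumerate(reversed(primers), start=1):
        schedule.append((training_day - timedelta(days=i * gap_days), packet))
    for i, packet in enumerate(reinforcements, start=1):
        schedule.append((training_day + timedelta(days=i * gap_days), packet))
    return sorted(schedule)

for when, what in schedule_campaign(
        date(2020, 9, 1),
        primers=["You are not your users (video)"],
        reinforcements=["Empathy recap quiz", "Interview checklist"]):
    print(when, what)
```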

Sure, but does it work?

Well, cognitive science suggests it does. I recommend reading up on the forgetting curve, subsumption theory, Piaget, cognitive load, the spacing effect and interleaving. It’s worth it.
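
For the curious, the forgetting curve is commonly simplified as exponential decay, R = e^(−t/S), where R is retention, t is the time since learning and S is the stability of the memory. This toy calculation – the numbers are illustrative, not empirical – shows why spaced reinforcement flattens the curve:

```python
import math

def retention(days_since_review, stability):
    """Ebbinghaus-style forgetting curve: R = e^(-t/S)."""
    return math.exp(-days_since_review / stability)

# One-off training: seven days later, little remains.
print(f"No reinforcement:   {retention(7, stability=2.0):.0%}")

# Spaced reinforcement: each review restarts the clock and strengthens
# the memory (the doubling factor here is illustrative only).
stability = 2.0
for _ in range(3):  # reviews on, say, days 2, 4 and 6
    stability *= 2
print(f"With reinforcement: {retention(1, stability):.0%}")  # day 7, one day after the last review
```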

A hand holding a pen pointing to a chart.

The pivot

While I’m obviously an advocate of micro-learning, a less buzzy but perhaps just-as-important variant is micro-assessment.

This is similar to micro-learning except the content is in question format – preferably scenario based and feedback rich.

In one sense, the two approaches may be conflated. Formative assessment is nothing new, and a few daily questions over a set timespan could constitute training, or prompt critical thinking pre-training, or promote application post-training.

If you want more bedtime reading, I suggest looking up the testing effect or its synonyms, retrieval practice and active recall.

However I feel the untapped potential of micro-assessment lies in its summative power. As the bank of results builds up over time, the data can be used to diagnose the population’s understanding of the subject matter. If the questions are aligned to competencies, the knowledge gaps can be identified and closed with further interventions.

Hence, micro-assessment can be leveraged to execute an assessment first strategy, thereby increasing the relevance of the L&D service offering to the business.
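
To illustrate that summative power, here’s a minimal sketch (the data and threshold are hypothetical) that rolls the bank of results up by competency to flag where the population is weak:

```python
from collections import defaultdict

# Each result: (competency, answered correctly?) – accumulated from daily micro-assessments.
results = [
    ("Empathise", True), ("Empathise", True), ("Empathise", True), ("Empathise", False),
    ("Define", True), ("Define", False), ("Define", False), ("Define", False),
]

def diagnose(results, threshold=0.7):
    """Aggregate accuracy per competency and flag gaps below the threshold."""
    tally = defaultdict(lambda: [0, 0])  # competency -> [correct, total]
    for competency, correct in results:
        tally[competency][0] += int(correct)
        tally[competency][1] += 1
    return {c: correct / total
            for c, (correct, total) in tally.items()
            if correct / total < threshold}

print(diagnose(results))  # {'Define': 0.25} – a gap to close with a targeted intervention
```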

And if you want yet more bedtime reading, I suggest exploring metacognition and its effect on motivation.

On that note, good night!

The L&D maturity curve

Over the course of my career, I’ve witnessed a slow but steady shift away from formal learning to informal learning.

Of course, remnants of the “formal first” philosophy still exist, whereby a training solution – typically in the form of a course – is thrown at every conceivable problem. Over time, the traditional classroom-based delivery of such courses has increasingly given way to online modules, but that’s merely a change in format – not strategy.

While courses certainly have their place in the L&D portfolio, the forgetting curve places a question mark over their long-term effectiveness on their own.

The “informal first” philosophy rebalances the pendulum by empowering the employee to self-direct their learning in accordance with their personal needs.

While in some cases informal learning obviates the need for training, in other cases it will complement it. For example, I see the informalisation of learning as an opportunity to deliver the content (for example, via a wiki) which can be consumed at the discretion of the employee. The focus of the course then pivots to the application of the content, which is the point of learning it in the first place. Similarly, the assessment evaluates the learning in the context of real-world scenarios, which is what the learner will encounter post-course.

And since the content remains accessible, it can be used for ongoing reference long after the course has been completed.

A hand holding a pen pointing to a chart.

While I consider the informal first philosophy a giant leap in L&D maturity, it essentially pertains to instructional design. For a more holistic view of L&D, I propose an “assessment first” philosophy by which the capability of the target audience is analysed prior to any design work being undertaken.

The rationale for this philosophy is best appreciated in the context of an existing employee base (rather than greenhorn new starters). Such a group comprises adults who have a wide range of knowledge, skills and experiences. Not to mention they’ve probably been doing the job for a number of years.

Sheep-dipping everyone in this group with the same training doesn’t make much sense. For a minority it might be a worthwhile learning experience, but for the majority it is likely to be redundant. This renders the training an ineffective waste of time, and an unnecessary burden on the L&D team.

By first assessing the target audience’s proficiency in the competencies that matter, a knowledge gap analysis can identify those in which the population is weak, and targeted training can be delivered in response. Individuals who are “not yet competent” in particular areas can be assigned personalised interventions.
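
A minimal sketch of how that triage might look, using the binary competent / not yet competent measure proposed earlier (the people, competencies and interventions are all hypothetical):

```python
# Assessment results per person: True = competent, False = not yet competent.
assessment_results = {
    "Alice": {"Empathise": True, "Define": False},
    "Bob":   {"Empathise": True, "Define": True},  # needs no training at all
}

interventions = {
    "Empathise": "Interviewing workshop",
    "Define": "Problem-framing module",
}

# Only the gaps attract an intervention – no sheep dip in sight.
for person, competencies in assessment_results.items():
    plan = [interventions[c] for c, competent in competencies.items() if not competent]
    print(person, "->", plan or "nothing to assign")
```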

This approach avoids the “solution first” trap. By focusing the L&D team’s attention on the real needs of the business, not only does the volume of demand reduce, but the work becomes more relevant.

The assessment first philosophy may appear incongruent where new starters are concerned, who by definition are assumed to be weak in all competencies – after all, they’ve only just walked through the door! – but I counter that assumption on two fronts.

Firstly, not all new starters are doe-eyed college grads. Many have had previous jobs in the industry or in other industries, and so they arrive armed with transferable knowledge, skills and experiences.

And regardless, the informal first philosophy holds true. That is to say, the new starter can consume the content (or not) as they see fit, demonstrate their understanding in the scenario-oriented “course”, and formalise it via the assessment.

The results of the assessment dictate any further intervention that is necessary.

Of course, some topics such as the company’s own products or processes will necessitate significant front-end loading via content development and maybe even curricula, but these may be considered the exception rather than the rule. By looking through the lens of assessment first, the L&D team works backwards to focus that kind of energy on where it is warranted.

It is also worth noting the assessment first philosophy renders the traditional “pass mark” obsolete, but such a radical idea is a story for another day!

Laptop showing business metrics.

While the assessment first philosophy represents an exponential leap in the maturity of L&D, there is yet another leap to make: “performance first”.

The raison d’être of the L&D team is to improve performance, so it’s always been a mystery to me why our work is so often disconnected from the business results. I do appreciate the barriers that are in our way – such as the inexplicable difficulty of obtaining the stats – but still, we can and should be doing more.

Under the performance first paradigm, it is not knowledge gaps that are analysed, but rather performance gaps. A root cause analysis identifies whether the cause is a capability deficiency or not – in the case of the former, a capability analysis feeds into the assessment first approach; in the case of the latter, a solution other than training is pursued instead.
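
Boiled down to its essence, the triage might look something like this sketch (the names and causes are mine):

```python
def respond_to_performance_gap(root_cause):
    """Performance-first triage: a performance gap only becomes an L&D
    problem when its root cause is a capability deficiency."""
    if root_cause == "capability deficiency":
        # Feed into the assessment-first approach.
        return "capability analysis -> assessment -> targeted training"
    # Environment, process, tooling, incentives... pursue a non-training solution.
    return f"non-training solution (address the {root_cause})"

print(respond_to_performance_gap("capability deficiency"))
print(respond_to_performance_gap("broken process"))
```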

As with assessment first, performance first may appear incongruent where new starters are concerned. After all, their stats thus far are zero, and waiting to recognise poor performance may have unacceptable consequences.

So again we have another exception to the rule whereby some folks may be scaffolded through L&D intervention prior to their performance being analysed. However the point is, we needn’t force everyone down that road. It depends on the circumstances.

And again, by looking through the lens of performance first, the L&D team works backwards to focus its energy on where it is needed. But this time with results at the forefront of the team’s purpose, its relevance to the business goes through the roof.

The L&D Maturity Curve, featuring Formal First rising to Informal First rising to Assessment First rising to Performance First. The x-axis represents maturity of the L&D function and the y-axis represents its relevance to the business.

I realise my take on L&D maturity might freak some of my peers out. Concurrently, others will argue that we should leapfrog to performance first now and get on with it.

Personally I consider the maturity curve a journey. Yes, it is theoretically possible to skip stages, but I feel that would be a shock to the system. From a change management perspective, I believe an organisation at one stage of the curve would achieve more success by growing into the next stage of the curve, while ironing out the bugs and creating the new normal along the way.

Besides, it isn’t a race. Important journeys take time. What matters is the direction in which that journey is heading.