Archive for the ‘instructional design’ category

The 70:20:10 lens

9 February 2016

In 70:20:10 for trainers I advocated the use of the 70:20:10 model by L&D professionals as a lens through which to view their instructional design.

The excellent comments on my post, and insightful blog posts by others – notably Mark Britz, Clark Quinn and Arun Pradhan – have prompted me to think deeper about my premise.

I continue to reject the notion that 70:20:10 is a formula or a goal, because it is not a model of what “should be”. For example, we needn’t assign 70% of our time, effort and money to OTJ interventions, 20% to social learning, and 10% to formal training. Similarly, we shouldn’t mandate that our target audience aligns its learning activity according to these proportions. Both of these approaches miss the point.

The point is that 70:20:10 is a model of what “is”. Our target audience does undertake 70% of its learning on the job, 20% via interacting with others, and 10% off the job (or thereabouts). Mark Britz calls it a principle. It’s not right and it’s not wrong. It just is.

Our role then as L&D professionals is to support and facilitate this learning as best we can. One of the ways I propose we do this is by using 70:20:10 as a lens. By this I mean using it as a framework to structure our thinking and prompt us on what to consider. Less a recipe citing specific ingredients and amounts, more a shopping basket containing various ingredients that we can use in different combinations depending on the meal.

For this purpose I have created the following diagram. To avoid the formula trap, I decided against labelling each segment 70, 20 and 10, and instead chose their 3E equivalents of Experience, Exposure and Education. For the same reason, I sized each segment evenly rather than to scale.

The 3 E's: Education, Exposure, Experience

Using the framework at face value is straightforward. Given a learning objective, we consider whether a course or a resource may be suitable; whether a social forum might be of use; whether matching mentees with mentors would be worthwhile. Perhaps it would be helpful to develop some reference content, or provide a job aid. When looking through the lens, we see alternatives and complements beyond the usual event-based intervention.

Yet we can see more. Consider not only the elements in the framework, but also the interactions between them. For example, in our course we could assign an on-the-job task to the learners, and ask them to share their experiences with it on the ESN. In the language of the framework, we are connecting education to experience, which in turn we connect to exposure. Conversely we can ask workshop attendees to share their experiences in class (connecting experience to education) or encourage them to call out for project opportunities (connecting exposure to experience). The possibilities for integrating the elements are endless.

Those who see L&D as the arbiter of all learning in the workplace may find all this overwhelming. But I see L&D as a support function. To me, 70:20:10 is not about engineering the perfect solution. It’s about adding value to what already happens in our absence.

70:20:10 for trainers

12 January 2016

Learning & Development Professional has been running a poll on the following question:

Is the 70:20:10 model still relevant today?

And I’m shocked by the results. At the time of writing, over half the respondents have chosen “No”. Assuming they are all L&D professionals, the extrapolation suggests that most of us don’t think the 70:20:10 model is relevant to our work.

But what does this really mean?

In LDP’s article The 70:20:10 model – how fair dinkum is it in 2015? – by the way, “fair dinkum” is Australian slang for “real” or “genuine” – Emeritus Professor David Boud says he doesn’t think there is proper evidence available for the effectiveness of the model.

If this is a backlash against the numbers, I urge us all to let it go already. Others have explained umpteen times that 70:20:10 is not a formula. It just refers to the general observation that the majority of learning in the workplace is done on the job, a substantial chunk is done by interacting with others, while a much smaller proportion is done off the job (eg in a classroom).

Indeed this observation doesn’t boast a wealth of empirical evidence to support it, although there is some – see here, here and here.

Nonetheless, I wonder if the hoo-ha is really about the evidence. After all, plenty of research can be cited to support the efficacy of on-the-job learning, social learning and formal training. To quibble over their relative proportions seems a bit pointless.

Consequently, some point the finger at trainers: these people, the argument goes, are relics of a bygone era, clinging to the old paradigm because “that’s how we’ve always done it”. And while this might sound a bit harsh, it may contain a seed of truth. Change is hard, and no one wants their livelihood threatened.

If you feel deep down that you are one of the folks who view 70:20:10 as an “us vs them” proposition, I have two important messages that I wish to convey to you…

1. Training will never die.

While I believe the overall amount of formal training in the workplace will continue to decrease, it will never disappear altogether – principally for the reasons I’ve outlined in Let’s get rid of the instructors!

Ergo, trainers will remain necessary for the foreseeable future.

2. The 70:20:10 model will improve your effectiveness.

As the forgetting curve illustrates, no matter how brilliant your workshops are, they are likely to be ineffective on their own.

Ebbinghaus Forgetting Curve showing exponentially decreasing retention over time
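The curve’s shape can be sketched with the standard exponential-decay formulation, R = e^(−t/S). A minimal sketch follows; the stability value S is purely illustrative, not an empirical constant:

```python
import math

def retention(t_hours, stability=24.0):
    """Ebbinghaus-style retention, R = e^(-t/S).

    t_hours   -- time elapsed since the workshop, in hours
    stability -- S, the memory's "strength" (illustrative value only)
    """
    return math.exp(-t_hours / stability)

# With S = 24 hours, most of the workshop is forgotten within days.
for t in (1, 24, 72, 168):
    print(f"after {t:>3} h: {retention(t):.0%} retained")
```

Whatever value S takes in practice, the point stands: without follow-up, retention decays rapidly, which is exactly what the “70” and “20” interventions below help counteract.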

To overcome this problem, I suggest using the 70:20:10 model as a lens through which you view your instructional design.

For example, suppose you are charged with training the sales team on a new product. As a trainer, you will smash the “10” with an informative and engaging workshop filled with handouts, scenarios, role plays, activities etc.

Then your trainees return to their desks, put the handouts in a drawer, and try to remember all the important information for as long as humanly possible.

To help your audience remember, why not provide them with reference content in a central location, such as on the corporate intranet or in a wiki? Then they can look it up just in time when they need it; for example, in the waiting room while visiting a client.

Job aids would also be useful, especially for skills-based information; for example, the sequence of key messages to convey in a client conversation.

To improve the effectiveness of your workshop even further, consider doing the following:

  • Engage each trainee’s manager to act as their coach or mentor. Not only does this extend the learning experience, but it also bakes in accountability for the learning.

  • Encourage the manager to engineer opportunities for the trainee to put their learning into practice. These can form part of the assessment.

  • Set up a community of practice forum in which the trainee can ask questions in the moment. This fosters collaboration among the team and reduces the burden on the L&D department to respond to each and every request.

  • Partner each trainee with a buddy to accompany them on their sales calls. The buddy can act as a role model and provide feedback to the trainee.

In my humble opinion, it is counter-productive to rail against 70:20:10.

As an L&D professional, it is in your interest to embrace it.

Collateral damage

4 August 2015

The L&D community may be divided into two camps: (1) Those for whom the mere mention of learning styles makes their blood boil; and (2) Those who are inexplicably unaware of the hullabaloo and are thus oblivious to the aforementioned boiling of blood.

All the things meme guy

Credit: Based on original artwork by Allie Brosh in This is Why I’ll Never be an Adult, Hyperbole and a Half.

The antagonism stems from the popularity of learning styles in the educational discourse – not to mention vocational curricula – despite a lack of empirical evidence supporting their effectiveness when incorporated into instructional design. The argument is that in the absence of such evidence, don’t waste time and money trying to match your teaching style to everyone’s learning styles; instead, divert that energy towards other, evidence-based pedagogy.

This is sound advice.

Nonetheless, I urge my peers not to throw the baby out with the bath water. By this I mean regardless of the existence or impact of learning styles, a phenomenon that enjoys universal recognition is that of learner preferences. And I fear it may be an unintended casualty of the war on learning styles.

For example, a deduction from the literature might be that a teacher need not tailor his or her delivery to meet the needs of the audience. Since learning styles are bunk, I can do what I like because it won’t make a difference anyway. Such a view is conveniently teacher centric, and it flies in the face of the thought leadership on learner centeredness that we have advanced so far. Sure, the deduction may be unreasonable, but extremists rarely listen to reason.

However, a more insidious factor is the dominance of formal learning in the literature. Studies of the impact of learning styles are typically based on teaching in a classroom setting, often in the K12 sector. Furthermore, the statistics are based on scores achieved via formal assessment. Yet we know that in the workplace the vast majority of learning is informal.

Let me illustrate my concern here with a personal example. When I need to find out how to perform a particular task in a particular software program, I strongly prefer text-based instructions over video. I’m annoyed by having to play a clip, wait for it to load, and then wait for the presenter to get to the bit that is relevant to me. Instead, I prefer to scan the step-by-step instructions at my own speed and get on with it.

Now, if only video were available and I weren’t such a diligent employee, I might postpone the task or forget about it altogether. Yet if you were to put me in a classroom, force me to watch the video, then test my ability to perform the task – sure, I’d ace it. But that’s not the point.

The point is that the learner’s preference hasn’t been taken into account in the instructional design, and that can affect his performance in the real world.

If you don’t agree with me, perhaps because you happen to like video, suppose a manual were the only form of instruction available. Would you read it? Perhaps you would, because you are a diligent employee.

Isn’t everyone?

All the things meme guy, sad

Credit: Based on X all Y (Sad) In HD by CanineWritter, in turn based on original artwork by Allie Brosh in This is Why I’ll Never be an Adult, Hyperbole and a Half.

In case your blood is beginning to boil, let me emphasise: (1) Learning styles appear to have no significant effect on learning outcomes; and (2) The nature of the content probably dictates its most effective mode of delivery.

If we assume that learning styles are highly correlated with learner preferences – indeed, for some they are synonymous – then we might be tempted to conclude that learner preferences have no significant effect on learning outcomes. I consider this a false conclusion.

Indeed in a controlled environment, learner preferences don’t really matter. The participants are forced to do it whether they like it or not, or they somehow feel obliged to comply.

Outside of the controlled environment, however, learner preferences do matter. We sometimes see this in formal settings (which is why universities enforce a minimum percentage of lecture attendance), but it appears most starkly in informal settings where the learner is empowered to do it or not. If they don’t like doing it, odds are they won’t.

So we need to be mindful of the interaction between pedagogical effectiveness and learner preference. An experience that your learners love but is ineffective is ultimately worthless. But so too is an experience that is effective but that your learners loathe.

As a profession we need to aim for experiences that are both effective and liked by our audience – or at the very least, don’t turn them away.

A framework for content curation

17 June 2015

In conversation at EduTECH earlier this month, Harold Jarche invoked George E. P. Box’s quote that “all models are wrong, but some are useful”.

Of course, the purpose of a model is to simplify a complex system so that something purposeful can be done within it. By definition, then, the model can only ever be an approximation of reality; thanks to human error, furthermore, it won’t even be as close an approximation as it could be.

Nevertheless, if we accept the inherent variability in (and fallibility of) the model, we can achieve a much better outcome by using it than by not.

It is with this in mind that I have started thinking about a model – or perhaps more accurately, a framework – for content curation.

I have grown weary of hotchpotch lists of resources that we L&D pros tend to cobble together. Sure, they may be thoughtfully filtered and informatively annotated, but a hotchpotch is a hotchpotch. I should know: I’ve used them as a student, I’ve seen my peers create them, and I’ve created them myself.

Surely we can put more design into our curation efforts so that the fruits of our labour are more efficient, meaningful, and effective…?

A mess of jigsaw pieces.

Consider the trusty instructional design heuristic of Tell Me, Show Me, Let Me, Test Me. As far as heuristics go, I’ve found this to be a good one. It reminds us that transmission is ineffective on its own; learners really need to see the concept in action and give it a go themselves. As the Chinese saying goes, “Tell me and I forget. Show me and I remember. Involve me and I understand.” *

* Truisms such as this one are typically met with suspicion from certain quarters of the L&D community, but in this case the research on the comparative efficacies of lectures, worked examples, PBL etc appears to add up.

As a framework for content curation, however, I feel the heuristic doesn’t go far enough. In an age in which learners in the workplace are expected to be more autodidactic than ever before, it needs refurbishment to remain relevant.

So I propose the following dimensions of a new-and-improved framework…

Pyramid: Attract me, Motivate me, Tell me, Show me, Let me, Support me, Extend me, Value me

Attract me

An important piece of content curated for the target audience is one that attracts them to the curation in the first place, and promotes word-of-mouth marketing among their colleagues.

While related to the subject matter, this content need not be “educational” in the traditional sense. Instead, its role is to be funny, fascinating or otherwise engaging enough to pull the learners in.

Motivate me

As learning in the workplace inevitably informalises, the motivation of employees to drive their own development becomes increasingly pivotal to their performance.

Old-school extrinsic motivators (such as attendance rosters and exams) don’t exist in this space, so the curator needs to convince the audience to proceed. Essentially this means putting the topic into context for them, clarifying how it relates to their role, and explaining why they should bother learning it.

Tell me

This content is new knowledge. I recommend covering only one key concept (or a few at most) to reduce cognitive load. It’s worth remembering that education is not the provision of information; it is sense making.

It’s important for this content to actually teach something. I see far too much curation that waxes lyrical “about” the subject, yet offers nothing practical to be applied on the job. They’re beyond the sales pitch at this stage; give ’em something they can use.

Show me

This content demonstrates the “Tell me” content in action, so the employee can see what the right behaviour looks like, and through that make further sense of the concept.

Real-world scenarios are especially powerful.

Let me

By putting the content into practice, the learner puts his or her understanding to the test.

Interactive exercises and immersive simulations – with feedback – allow the learner to play, fail and succeed in a safe environment.

Support me

This content jumps the knowing-doing gap by helping the learner apply the concepts back on the job.

This is principally achieved via job aids, and perhaps a social forum to facilitate ad hoc Q&A.

Extend me

This content assists the employee who is keen to learn more by raising their awareness of other learning opportunities. These might explore the concepts in more depth, or introduce other concepts more broadly.

All that extra curation that we would have been tempted to shove under “Tell me” can live here instead.

Value me

Everyone is an SME in something, so everyone has an opportunity to participate in the curation effort. Whether the content they use is self-generated or found elsewhere, it is likely to be useful for their colleagues too.

Leverage this opportunity by providing a mechanism by which anyone can contribute better content.
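To make the framework a little more tangible, the eight dimensions can be sketched as a simple curation-plan structure with a completeness check. Every resource name below is an invented placeholder, not a prescription:

```python
# The eight dimensions of the proposed framework, in order.
DIMENSIONS = [
    "Attract me", "Motivate me", "Tell me", "Show me",
    "Let me", "Support me", "Extend me", "Value me",
]

# A hypothetical plan for one topic; all items are illustrative placeholders.
plan = {
    "Attract me":  ["Teaser clip: the strangest client call ever"],
    "Motivate me": ["Intro note: why this topic matters to your role"],
    "Tell me":     ["One-page explainer of the key concept"],
    "Show me":     ["Scenario video of the concept in action"],
    "Let me":      ["Branching simulation with feedback"],
    "Support me":  ["Six-step job aid", "Link to the Q&A forum"],
    "Extend me":   ["Annotated further-reading list"],
    "Value me":    ["Form for colleagues to submit their own finds"],
}

def gaps(plan):
    """List dimensions with no curated content yet -- 'less is more' still
    means each dimension deserves at least one good item."""
    return [d for d in DIMENSIONS if not plan.get(d)]

print(gaps(plan))  # an empty list means every dimension is covered
```

The check embodies the quality-over-quantity theme: rather than piling more links under “Tell me”, it asks whether each dimension has at least one considered item.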

A mess of jigsaw pieces.

As you have no doubt deduced by now, the overarching theme of my proposed framework is “less is more”. It values quality over quantity.

It may prove useful beyond curation too. For example, it may inform the sequence of an online course. (In such a circumstance, a “Test me” dimension might be inserted after “Let me” to add summative assessment to the formative.)

In any case, it is very much a work in progress. And given it is #wolweek, I ask you… What are your thoughts?

Let’s get rid of the instructional designers!

12 August 2014

That’s the view of some user-oriented design proponents.

It’s something I remembered while writing my last blog post about user-generated content. Whereas that post explored the role of the learner in the content development process, how about their role in the broader instructional design process?

I wrote a short (1000-word) assignment on the latter at uni several years ago – in the form of a review of a chapter written by Alison Carr-Chellman and Michael Savoy – and it’s a concept that has resonated with me ever since.

Here I shall share with you that review, unadulterated from its original form, except for the graphic of the user empowerment continuum and the hyperlink to the reference, both of which I have added for this post.

Whether or not the more “progressive” design philosophies resonate with you, at the very least I hope they provoke your thinking…

Users co-designing

Introduction

Carr-Chellman & Savoy (2004) provide a broad overview of user design. They define the term user design, compare it against other methodologies of user-oriented design, identify obstacles to its successful implementation, and finally make recommendations for the direction of further research.

Definition

According to Carr-Chellman & Savoy (2004), traditional instructional design methodologies disenfranchise the user from the design process. In a corporate organisation, for example, the leaders will typically initiate the instructional design project, an expert designer will then analyse the situation and create a design, and finally, the leaders will review the design and either approve it or reject it. The role of the user, then, is simply to use the system (or perhaps circumvent it).

In contrast to traditional instructional design methodologies, user design enables the users to participate in the design process. Instead of just using the system, they are involved in its design. Furthermore, their role is more than just providing input; they are active participants in the decision-making process.

Comparison against other methodologies

Carr-Chellman & Savoy (2004) carefully distinguish user design from other methodologies of user-oriented design, namely user-centered design and emancipatory design.

User-centered design

According to Carr-Chellman & Savoy (2004), user-centered design methodologies consider the needs of the user during the design process. In educational situations, for example, the expert designer may analyse the target audience, identify their preferred learning styles, and perhaps run a pretest. In tool usage situations, he or she may distribute user surveys or conduct usability testing. The goal of these activities is to obtain extra information to assist the designer in creating a better system for the users.

The key difference between user-centered design and user design is the level of participation of the users in the design process. Under a user-centered design model, the designer considers the needs of the users, but ultimately makes the design decisions on their behalf.

Under a user design model, however, the needs of the users go beyond mere food for thought. The users are empowered to make their own design decisions and thereby assume an active role in the design process.

User empowerment continuum, featuring traditional instructional design at the lowest extremity, then user-centered design, then user design, then emancipatory design at the highest extremity.

Emancipatory design

If traditional design occupies the lowest extremity of the user empowerment continuum, and user-centered design occupies a step up from that position, then emancipatory design occupies the opposite extremity.

Emancipatory design dispenses with the role of the expert designer and elevates the role of the users, so that in effect they are the designers. This methodology charges the users with full responsibility over all facets of the design process, from initiation, through analysis, design, review, to approval. Instead of having a system imposed on them, the users have truly designed it for themselves, according to their own, independent design decisions.

Emancipatory design is founded on issues of conflict and harmony in the disciplines of social economics and industrial relations. Carr-Chellman & Savoy (2004) recognise that the goal of emancipatory design is “more to create change and vest the users and frontline workers in organisational outcomes than it is actually to create a working instructional system”. Hence, emancipatory design may not be a universal instructional design methodology.

User design

User design fits between the extremes of the user empowerment continuum. Whereas traditional design and user-centered design remove the user from the active design process, and conversely, emancipatory design removes the expert designer from the process, user design merges the roles into the shared role of “co-designer”. It strikes a balance between the two perspectives by including contributions from both parties.

Arguably, user design is a universal instructional design methodology. Whereas traditional design and user-centered design devalue the role of the users in the active design process, emancipatory design devalues the role of the expert designer.

User design, however, values both roles. It recognises the necessity of the active involvement of users, because they are the experts in their domain and will be the ones operating the system. However, users cannot be expected to understand the science of design. The active involvement of an expert designer is critical in guiding the design process and driving the work towards an efficient and effective outcome.

Obstacles

Carr-Chellman & Savoy (2004) identify numerous obstacles to the successful implementation of user design, including the reluctance of designers and leaders to share their decision-making powers with users, the inclusion of users too late in the design process, the tendency to categorise users into a homogeneous group, and the lack of user motivation to participate in design activities.

Further Research

Carr-Chellman & Savoy (2004) claim that research specific to user design within instructional systems is scarce, and much of the research into other user-oriented design methodologies lacks scientific rigour. Therefore, they recommend the following actions for the research community:

  1. To create a standardised language to define user design and to distinguish it from other user-oriented design methodologies,

  2. To study the implementation of user design across different variables, such as user profile, subject area and mode of delivery, and

  3. To communicate the success of user design in terms of “traditional measures of effectiveness” for the purpose of influencing policymakers.

Furthermore, Carr-Chellman & Savoy (2004) recommend that researchers adopt the participatory action research (PAR) method of inquiry. They argue that PAR democratises the research process and, consequently, is ideologically aligned with the principles of user design.

It can be argued, therefore, that Carr-Chellman & Savoy (2004) promote both user design and user research. Their vision for users is not only to assume the role of “co-designer”, but also of “co-researcher”.

Reference

Carr-Chellman, A. & Savoy, M. (2004). User-design research, in Handbook of Research on Educational Communication and Technology, 2nd ed, D. H. Jonassen (Ed), pp. 701-716, New Jersey, USA: Lawrence Erlbaum.

The triple-threat scenario

31 March 2014

There’s no shortage of theories as to why a scenario works so well as an educational device. But for me, it boils down to context.

An authentic and relevant context facilitates two important processes.

1. Sense making

The authenticity and relevance of the scenario contextualises the content so that it becomes more meaningful for the learner. It approximates a real life situation with which she is familiar so that she can make better sense of it.

2. Transfer

When the learner finds herself in a similar situation in real life, she will associate the current context with the scenario and thus apply her experience from it more readily.

When we combine these two affordances with the engagement power of video, we create a triple threat which dramatically increases our probability of success.

An offer they can’t refuse

10 March 2014

One of the best conference sessions I have ever attended was presented by Chris Bessell-Browne from Qantas College.

E-Learning at an airline is challenging because a relatively high proportion of the workforce does not have ready access to a computer. This poses a problem when, for example, you need to roll out compliance training to each and every individual.

One way in which Qantas solves this problem is by showing a series of video scenarios to large groups of their employees. The scenarios involve real employees as well as paid actors, and they recreate scenes that have actually happened at the organisation – eg a young woman receiving unwanted attention from a colleague at the Christmas party, a baggage handler being bullied by a peer in his team, a manager reprimanding one of his team members for her dishevelled appearance, etc. Each video is then followed by a slide featuring several discussion questions, asking if so-and-so was in the wrong, that kind of thing.

According to Chris, the discussions get quite animated as people argue their case for or against. Because there is often no clear “correct” or “incorrect” answer, the interaction represents a melting pot of views and perspectives – carefully facilitated by the L&D pro. It makes the learning experience engaging, relevant and authentic. In other words, nothing like typical compliance training.

As Chris proceeded with her presentation at the conference, everyone in the audience was on the edge of their seat as they eagerly anticipated the next instalment.

When was the last time anyone reacted like that to your training?

Businessman with information and resources streaming out of his smartphone

Video breathes life into content.

For example, while reading about how to provide effective feedback and perhaps downloading a 6-step job aid may be enough to improve your feedback giving skills, suppose you could also watch a video of a manager providing feedback to her direct report. Now you have a role model to follow, and a real-world example to make sense of.

So why doesn’t everyone do this? We have the tools at our disposal – from the camera on our smartphones to a plethora of free editing software downloadable from the internet.

I suspect one of the barriers is fear. We look at the slick productions such as those commissioned by Qantas, and we’re afraid our own efforts will appear amateurish in comparison. And you know what: they will!

When professional production houses shoot a video, they do so beautifully. The picture is rich and sharp. The audio is crisp and clear. The lighting is perfect. That is, after all, what you are paying them for. And it ain’t cheap.

When we record a video on our smartphone, the picture might be somewhat dull, the audio tinny, the lighting dodgy. But I put it to you that if the quality of your production is good enough to see and hear, then it’s good enough to learn from.

And if the content is relevant, you’ll find your target audience surprisingly forgiving. You needn’t be Francis Ford Coppola because what really matters is the learning outcome.

So my advice is simply to give it a go. Test a few home-made clips on a pilot group to see how they fare. Incorporate constructive feedback, build on your success and scale it up. Your videography skills will improve over time, and you might even consider buying better equipment and software.

Sure, a beautifully crafted production will always be preferable, but it’s not always attainable or even necessary. You have the power right now to provide your audience with a learning experience that’s engaging, relevant and authentic.

So make them an offer they can’t refuse.