Posts tagged ‘assessment’

Micro-learning’s unsung sibling

9 April 2019

Micro-learning is so hot right now.

But I’m not going to deliberate over its definition. If you’re into that, check out Shannon Tipton’s Microlearning: The Misunderstood Buzzword and 7 Deadly Myths of Microlearning.

Nor am I going to try to convince you to jump on board, or abandon ship.

Instead, I’m going to consider how micro-learning might be used in a corporate training context, and then pivot towards something slightly different.

And if you were to find any value in these musings, I’d be delighted.

How micro-learning might be used

The nature of micro-learning lends itself to the campaign model.

Independent but related packets of content that are distributed over time can be woven into the working day of the target audience, and hence reduce time “off the floor”. In this context, the micro-learning is the training.

Similarly I see an opportunity for micro-learning to be deployed before the training. The content can prime the target audience for the experience to follow, perhaps in the form of a flipped class.

And of course I also see an opportunity for micro-learning to be deployed after the training: what one may call “reinforcement” to improve retention and increase the probability of knowledge transfer.

Sure, but does it work?

Well, cognitive science suggests it does. I recommend reading up on the forgetting curve, subsumption theory, Piaget, cognitive load, the spacing effect and interleaving. It’s worth it.
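These effects are often illustrated with a simple exponential model of forgetting. As a back-of-the-napkin sketch in Python – the numbers and the stability-doubling rule are assumptions for illustration, not empirical constants – spaced reviews flatten the curve:

```python
import math

def retention(days_elapsed, stability):
    """Ebbinghaus-style forgetting curve: recall probability decays
    exponentially with time since the last exposure."""
    return math.exp(-days_elapsed / stability)

def review_schedule(review_days, growth=2.0, initial_stability=2.0):
    """Toy spacing model (an assumption for illustration): each review
    multiplies the memory's stability, flattening the forgetting curve."""
    stability = initial_stability
    last_review = 0
    curve = {}
    for day in review_days:
        curve[day] = retention(day - last_review, stability)
        stability *= growth   # spacing effect: each review strengthens the trace
        last_review = day
    return curve

# One-off exposure vs. three spaced reviews, checked 30 days out:
massed = retention(30, 2.0)
spaced = review_schedule([2, 7, 30])[30]
print(massed, spaced)   # the spaced learner retains far more
```

Under this toy model, the campaign of small packets woven into the working day beats the one-off dump, which is the cognitive rationale for micro-learning.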

The pivot

While I’m obviously an advocate of micro-learning, a less buzzy but perhaps just-as-important variant is micro-assessment.

This is similar to micro-learning except the content is in question format – preferably scenario based and feedback rich.

In one sense, the two approaches may be conflated. Formative assessment is nothing new, and a few daily questions over a set timespan could constitute training, or prompt critical thinking pre-training, or promote application post-training.

If you want more bedtime reading, I suggest looking up the testing effect or its synonyms, retrieval practice and active recall.

However, I feel the untapped potential of micro-assessment lies in its summative power. As the bank of results builds up over time, the data can be used to diagnose the population’s understanding of the subject matter. If the questions are aligned to competencies, the knowledge gaps can be identified and closed with further interventions.

Hence, micro-assessment can be leveraged to execute an “assessment first” strategy, thereby increasing the relevance of the L&D service offering to the business.
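To sketch that diagnosis step – the records, field names and threshold below are invented for illustration, not a real system’s schema – rolling up micro-assessment answers per competency makes the population’s weak spots visible:

```python
from collections import defaultdict

# Hypothetical micro-assessment results: (employee, competency, correct?)
results = [
    ("alice", "credit-risk", True), ("alice", "aml", False),
    ("bob", "credit-risk", True), ("bob", "aml", False),
    ("carol", "credit-risk", False), ("carol", "aml", True),
]

def competency_gaps(results, threshold=0.6):
    """Roll up answers per competency; flag any competency whose
    population-wide accuracy falls below the threshold."""
    tally = defaultdict(lambda: [0, 0])   # competency -> [correct, total]
    for _, competency, correct in results:
        tally[competency][0] += int(correct)
        tally[competency][1] += 1
    return {c: round(ok / n, 2) for c, (ok, n) in tally.items() if ok / n < threshold}

print(competency_gaps(results))   # {'aml': 0.33}
```

The flagged competencies then become the targets for further interventions, rather than guesswork.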

And if you want yet more bedtime reading, I suggest exploring metacognition and its effect on motivation.

On that note, good night!

The L&D maturity curve

4 March 2019

Over the course of my career, I’ve witnessed a slow but steady shift away from formal learning to informal learning.

Of course, remnants of the “formal first” philosophy still exist, whereby a training solution – typically in the form of a course – is thrown at every conceivable problem. Over time, the traditional classroom-based delivery of such courses has increasingly given way to online modules, but that’s merely a change in format – not strategy.

While courses certainly have their place in the L&D portfolio, the forgetting curve places a question mark over their long-term effectiveness on their own.

The informal first philosophy balances the pendulum by empowering the employee to self-direct their learning in accordance with their personal needs.

While in some cases informal learning obviates the need for training, in other cases it will complement it. For example, I see the informalisation of learning as an opportunity to deliver the content (for example, via a wiki) which can be consumed at the discretion of the employee. The focus of the course then pivots to the application of the content, which is the point of learning it in the first place. Similarly, the assessment evaluates the learning in the context of real-world scenarios, which is what the learner will encounter post-course.

And since the content remains accessible, it can be used for ongoing reference long after the course has been completed.

While I consider the informal first philosophy a giant leap in L&D maturity, it essentially pertains to instructional design. For a more holistic view of L&D, I propose an “assessment first” philosophy by which the capability of the target audience is analysed prior to any design work being undertaken.

The rationale for this philosophy is best appreciated in the context of an existing employee base (rather than greenhorn new starters). Such a group comprises adults who have a wide range of knowledge, skills and experiences. Not to mention they’ve probably been doing the job for a number of years.

Sheep-dipping everyone in this group with the same training doesn’t make much sense. For a minority it might be a worthwhile learning experience, but for the majority it is likely to be redundant. This renders the training an ineffective waste of time, and an unnecessary burden on the L&D team.

By firstly assessing the target audience’s proficiency in the competencies that matter, a knowledge gap analysis can identify those in which the population is weak, and targeted training can be delivered in response. Individuals who are “not yet competent” in particular areas can be assigned personalised interventions.

This approach avoids the solution first trap. By focusing the L&D team’s attention on the real needs of the business, not only does the volume of demand decrease, but the work becomes more relevant.
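The individual side of that gap analysis – flagging who is “not yet competent” in which areas – might be sketched like this (the names, scores and competence threshold are assumptions for illustration):

```python
# Hypothetical per-person proficiency scores (0-1) against the
# competencies that matter to the business.
scores = {
    "alice": {"negotiation": 0.9, "data-literacy": 0.4},
    "bob":   {"negotiation": 0.5, "data-literacy": 0.8},
}

def personalised_interventions(scores, competent_at=0.7):
    """Map each person to the competencies they are 'not yet competent'
    in, i.e. the areas where a targeted intervention is assigned."""
    return {
        person: [c for c, s in profile.items() if s < competent_at]
        for person, profile in scores.items()
    }

print(personalised_interventions(scores))
# {'alice': ['data-literacy'], 'bob': ['negotiation']}
```

Each person receives only the interventions their own results warrant – the opposite of the sheep dip.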

The assessment first philosophy may appear incongruent where new starters are concerned, who by definition are assumed to be weak in all competencies – after all, they’ve only just walked through the door! – but I counter that assumption on two fronts.

Firstly, not all new starters are doe-eyed college grads. Many have had previous jobs in the industry or in other industries, and so they arrive armed with transferable knowledge, skills and experiences.

And regardless, the informal first philosophy holds true. That is to say, the new starter can consume the content (or not) as they see fit, demonstrate their understanding in the scenario-oriented “course”, and formalise it via the assessment.

The results of the assessment dictate any further intervention that is necessary.

Of course, some topics such as the company’s own products or processes will necessitate significant front-end loading via content development and maybe even curricula, but these may be considered the exception rather than the rule. By looking through the lens of assessment first, the L&D team works backwards to focus that kind of energy on where it is warranted.

It is also worth noting the assessment first philosophy renders the traditional “pass mark” obsolete, but such a radical idea is a story for another day!

While the assessment first philosophy represents an exponential leap in the maturity of L&D, there is yet another leap to make: “performance first”.

The raison d’être of the L&D team is to improve performance, so it’s always been a mystery to me why our work is so often disconnected from the business results. I do appreciate the barriers that are in our way – such as the inexplicable difficulty of obtaining the stats – but still, we can and should be doing more.

Under the performance first paradigm, it is not knowledge gaps that are analysed, but rather performance gaps. A root cause analysis identifies whether the cause is a capability deficiency or not – in the case of the former, a capability analysis feeds into the assessment first approach; in the case of the latter, a solution other than training is pursued instead.
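That branching can be sketched as a simple triage rule – the labels below are my own shorthand for illustration, not a formal taxonomy:

```python
def triage_performance_gap(is_capability_deficiency: bool) -> str:
    """'Performance first': a root cause analysis decides whether the gap
    feeds into the assessment first approach or a non-training fix."""
    if is_capability_deficiency:
        return "capability analysis -> assessment first"
    return "pursue a non-training solution"

print(triage_performance_gap(True))
print(triage_performance_gap(False))
```

The point of the rule is what it refuses to do: it never defaults to a course.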

As with assessment first, performance first may appear incongruent where new starters are concerned. After all, their stats thus far are zero, and waiting to recognise poor performance may have unacceptable consequences.

So again we have another exception to the rule whereby some folks may be scaffolded through L&D intervention prior to their performance being analysed. However the point is, we needn’t force everyone down that road. It depends on the circumstances.

And again, by looking through the lens of performance first, the L&D team works backwards to focus its energy on where it is needed. But this time with results at the forefront of the team’s purpose, its relevance to the business goes through the roof.

The L&D Maturity Curve, featuring Formal First rising to Informal First rising to Assessment First rising to Performance First. The x-axis represents maturity of the L&D function and the y-axis represents its relevance to the business.

I realise my take on L&D maturity might freak some of my peers out. Concurrently, others will argue that we should leapfrog to performance first now and get on with it.

Personally I consider the maturity curve a journey. Yes, it is theoretically possible to skip stages, but I feel that would be a shock to the system. From a change management perspective, I believe an organisation at one stage of the curve would achieve more success by growing into the next stage of the curve, while ironing out the bugs and creating the new normal along the way.

Besides, it isn’t a race. Important journeys take time. What matters is the direction in which that journey is heading.

Louder than words

13 November 2017

My last couple of blog posts have argued in favour of extracting value out of organisational capabilities. Due to the nature of my role I have posited these arguments in terms of employee development.

However, I further advocate the use of organisational capabilities across all parts of the employee lifecycle.

Using the 4+4 Part Employee Lifecycle as my guide, I will share my thoughts on some of the ways in which your capability framework can add value to your organisation’s activities in terms of recruitment, onboarding, performance, and offboarding.

The 4+4 Part Employee Lifecycle: (1) Recruitment; (2) Onboarding; (3) Performance; and (4) Offboarding; plus (1) Performance Management; (2) Development; (3) Health & Wellbeing; and (4) Retention.

Recruitment

Everyone knows that change management is hard. Culture eats strategy for breakfast; an organisation’s culture doesn’t change overnight; something about herding cats; the change curve; etc. etc.

We’ve heard it all before, and yes it’s probably true.

But there’s a big elephant in the room: the power of recruitment to accelerate cultural change. That is to say, bring in from the outside the people whose capabilities you desperately need on the inside.

Which raises the question… what capabilities? Well, organisations with eagle-eyed focus know precisely the capabilities to assess each candidate against, because they are the ones that align to their strategic imperatives.

If your organisation needs to become more collaborative, recruit collaborative people. If it needs to become more innovative, recruit innovative people. And if it needs to become more digitally literate, recruit digitally literate people.

This approach may seem too obvious to mention, yet I dare you to examine your organisation’s current recruitment practices.

Onboarding

Onboarding is one of those pies that everyone wants to stick their fingers into, but nobody wants to own. Yet it is crucial for setting up the new recruit for success.

From an organisational capability perspective, a gold-plated opportunity arises during this phase in the employee’s lifecycle to draw their attention to the capability framework and the riches therein. The new recruit is motivated, keen to prove themselves, and hungry to learn.

Highlight the resources that are available to them to develop their capabilities now. This is important because the first few weeks of their experience in the organisation colours their remaining tenure.

Ensure they start their journey the way you’d like them to continue it: productively.

Performance

Capability powers performance, so the capability framework is a tool you can use to improve all four subparts of Performance in the 4+4 Part Employee Lifecycle.

Performance Management

Effective performance management complements development planning to provide the employee with guidance on improving said performance.

When seen through the lens of the capability framework, an employee’s performance appraisal can identify meaningful development opportunities. Performance weak spots may be (at least partly) attributable to gaps in specific capabilities; while a strengths-based approach might also be adopted, whereby an already strong capability is enhanced to drive higher performance.

To inform these decisions with data, I’d be keen to correlate capability assessments against individual performances and observe the relationship between the variables over time.

Development

It’s all very well to have a poetic capability framework, but if learning opportunities aren’t mapped to it, then its value is inherently limited.

If the framework’s capabilities align to leadership stages, I suggest the following question be put to the user: Do you want to excel in your current role or prepare for your next role?

Not only does this question focus the user’s development goal, it also identifies the relevant leadership stage so the capabilities can be presented in the right context.

A follow-up question may then be posed: Would you like to browse all the capabilities (useful for those who want to explore, or who already know which capability to develop), focus on our strategic imperatives (useful for those who are time poor), or assess your capabilities (useful for those who seek a personal diagnosis)?

The answers to these questions lead to a selection of capabilities which, beyond the provision of clear descriptions, outline the opportunities for development.

Resist the urge to dump masses of resources into their respective buckets. Instead, curate them. I suggest the following approaches:

KASAB (knowledge, attitudes, skills, aspirations and behaviours) is an esoteric extension of the KSA heuristic in teaching circles, and I like it because it includes “B” for “Behaviour”.

For example, help your colleagues move beyond the consumption of teamwork videos, design thinking workshops, and MOOCs on digital business, by encouraging them to contribute to communities of practice, submit ideas to the enterprise idea management system, and participate in the company’s social media campaign.

Health & Wellbeing

I see organisational capabilities applying to health & wellbeing in two ways.

The first way concerns the impact of employee development on mental health. Given the satisfaction and pride of building mastery drives engagement, the capability framework presents opportunities to improve mental health across the enterprise.

The second way concerns the composition of the capability framework. Given a healthy employee is a productive employee, why isn’t Wellness itself an organisational capability?

Retention

I’ve seen with my own eyes the impact of employee development (or lack thereof) on retention.

Given the sense of support and growth that the investment in people’s learning brings, the capability framework presents opportunities to retain talent across the enterprise.

Offboarding

Capabilities that align to leadership stages are useful for succession planning. Not only do they identify the capabilities that someone needs to succeed in their current role, but also the capabilities they need to succeed in their next role. Assessment of the latter informs the readiness of the employee for promotion.

Conversely, when the employee leaves the team (or exits the organisation) the capability framework can be used to assess the skills gap that remains.

In 7 tips for custodians of capability frameworks I declared a capability framework that remains unused is merely a bunch of words. But it’s worse than that. It is unrealised value across the employee lifecycle.

So use your capability framework to improve the organisation’s recruitment, onboarding, performance, and offboarding.

Actions speak louder than words.

Top 5 benefits of open badges for corporates

17 July 2013

I’ve been blogging a lot about open badges lately. That really means I’ve been thinking a lot about open badges lately, as I use my blog as a sense-making platform.

Through my blogging, combined with the insightful discussions following both Badges of honour and The past tense of open badges, I have been able to consolidate my thoughts somewhat.

This consolidation I now share with you in the form of my Top 5 benefits of open badges for corporates.

1. Open badges can motivate employees to learn.

Badges are widely perceived as being childish, yet there is no denying that the game mechanics that underpin them can work. Some people are incredibly motivated by badges. Once they’ve earned one, they want to earn another.

You will note that I am using weasel words such as “can” and “some”. This is because badges don’t motivate everyone – just ask Foursquare! But my view is if they motivate a significant proportion of your target audience, then that makes them worthwhile.

I consider this an important point because as learning in the corporate sector becomes more informal, the employee’s motivation to drive their own development will become increasingly pivotal to their performance, and hence to the performance of the organisation as a whole.

2. Open badges can credential in-house training.

Yes, corporates can print off certificates of completion for employees who undertake their in-house training offerings, only for them to be pinned to a workstation or hidden in a drawer.

And yes, corporates typically track and record completion statuses in their LMS, but that lacks visibility for pretty much everyone but the employee him- or herself.

In contrast, open badges are the epitome of visibility. They’re shiny and colourful, the employee can collect them in their online backpack, and they can be shown off via a plugin on a website or blog – or intranet profile.

Badges therefore give corporates the opportunity to recognise the employees who have completed their in-house training, within an enterprise-wide framework.

3. Open badges are portable.

Currently, if you undertake training at one organisation and then leave to join another, you leave your completion records behind. However, if badges were earned through that training, their openness and centralisation in the cloud means that you can continue to “wear” them when you move to your next employer.

This portability of open badges would be enhanced if third parties were also able to endorse the training. So an APRA-endorsed badge earned at Bank A, for example, would be meaningful to my next employer, Bank B, because this bank is also regulated by APRA.

Still, the concept holds without third-party endorsement; that is to say, much of the training provided by Bank A would probably still be meaningful to Bank B – because Bank A and Bank B do very similar things.

4. Open badges are task oriented.

Despite my talk of “training” thus far, open badges are in fact task oriented. That means they recognise the execution of specific actions, and hence the mastery of skills.

I love this aspect of open badges because it means they don’t promise that you can do a particular task, but rather demonstrate that you have already done it.

That gives employers confidence in your capability to perform on the job.

5. Open badges can formally recognise informal learning.

I have argued previously that in the modern workplace, we should informalise learning and formalise assessment.

My rationale is that the vast majority of learning in the workplace is informal anyway. Employees learn in all kinds of ways – from reading a newsfeed or watching a video clip, to playing with new software or chatting with colleagues over lunch.

The question is how to manage all of that learning. The answer is you don’t.

If a particular competency is important to the business, you assess it. Assessment represents the sum of all the learning that the employee has undertaken in relation to that competency, regardless of where, when or how it was done.

I see open badges as micro-assessments of specific tasks. If you execute a task according to the pre-defined criteria (whatever that may be), then you earn its badge. In this way, the badge represents the sum of all the learning that you have undertaken to perform the task successfully, regardless of where, when or how that learning was done.

This is my blog, so of course all of the above assertions are the product of my own opinion. Naturally, I believe it to be an opinion informed by experience.

Other people have different opinions – some concordant, some contrary, as the comments under Badges of honour and The past tense of open badges will attest.

So, I’m curious… what’s your opinion?

The future of learning management

11 February 2013

People familiar with my blog will know that I’m not a member of the anti-LMS brigade.

On the contrary, I think a Learning Management System is a valuable piece of educational technology – particularly in large organisations. It is indispensable for managing registrations, deploying e-learning, marking grades, recording completion statuses, centralising performance agreements and documenting performance appraisals.

In other words – and the name gives it away – an LMS is useful for managing learning.

Yet while LMSs are widely used in the corporate sector, I suspect they are not being used to their full potential. You see, when most people think of an LMS, they think of formal learning. I don’t.

I think of informal learning. I think of the vast majority of knowledge that is acquired outside of the classroom. I think of the plethora of skills that are developed away from the cubicle. I think of reading a newspaper and chatting around the water cooler, and the myriad of other ways that people learn stuff. Relevant stuff. Stuff that actually makes a difference to their performance.

And I wonder how we can acknowledge all of that learning. We can hardly stick the newspaper or the water cooler into the LMS, although many will try in vain.

No – the way we can acknowledge informal learning is via assessment. Assessment represents the sum of learning in relation to a domain, regardless of where, when or how that learning was done.

The assessment need not be a multiple-choice quiz (although I am not necessarily against such a device), nor need it be online. The LMS only needs to manage it. And by that I mean record the learner’s score, assign a pass or fail status, and impart a competency at a particular proficiency.

In this way, the purpose of learning shifts from activity to outcome.
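Minimal as that management task is, it can be sketched as a record type. The pass mark and the proficiency bands below are assumptions for illustration; a real framework would define its own:

```python
from dataclasses import dataclass

PASS_MARK = 0.7   # an assumed threshold for illustration

@dataclass
class AssessmentRecord:
    """The minimum the LMS needs to manage: the learner's score,
    a pass/fail status, and a competency at a proficiency level."""
    learner: str
    competency: str
    score: float          # 0.0 - 1.0

    @property
    def passed(self) -> bool:
        return self.score >= PASS_MARK

    @property
    def proficiency(self) -> str:
        # Hypothetical banding; real frameworks define their own levels.
        if self.score >= 0.9:
            return "advanced"
        return "competent" if self.passed else "not yet competent"

record = AssessmentRecord("pat", "risk-analysis", 0.82)
print(record.passed, record.proficiency)   # True competent
```

Note that nothing in the record cares *how* the learning happened – only the outcome is managed.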

Having said that, the LMS suffers a big problem: portability.

I’m not referring to the content. We have SCORM to ensure our courses are compatible with different systems. Although, if you think migrating SCORM-compliant content from one LMS to another is problem free, I have an opera house to sell you. It has pointy white sails and a great view of the harbour.

No – I’m referring to the learner’s training records. That’s the whole point of the LMS, but they’re locked in there. Sure, if the organisation transfers from one LMS to another, it can migrate the data while spending a tonne of money and shedding blood, sweat and tears in the process.

But worse, if the learner leaves the organisation to join another, they also leave their training records behind. Haha… we don’t care if you complied with the same regulations at your last organisation. Or that you were working with the same types of products. Or that you were using the same computer system. We’re going to make you do your training all over again. Sucker.

It’s hardly learner-centred, and it sure as hell ain’t a smart way of doing business.

Enter Tin Can.

According to my understanding, Tin Can is designed to overcome the problem of training record portability. I imagine everyone having a big tin can in the cloud, connected to the interwebs. When I complete a course at Organisation A, the record is written to my tin can. When I leave Organisation A for a better job at Organisation B, no worries because I’ve still got my tin can. It’s mine, sitting in the sky, keeping all my training records accessible.

This idea has taken the education world by storm, and some LMSs such as UpsideLMS have already integrated the API into their proprietary architecture.
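Concretely, Tin Can (now known as the Experience API, or xAPI) records each experience as an actor–verb–object statement. Here is a minimal sketch – the learner and course identifiers are made up for illustration, while the verb URI is one the spec’s stewards publish:

```python
import json

# A minimal xAPI ("Tin Can") statement: actor-verb-object, the grammar
# used to record any learning experience, formal or informal.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A. Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/courses/compliance-101",
        "definition": {"name": {"en-US": "Compliance 101"}},
    },
}

print(json.dumps(statement, indent=2))
```

Because the statement lives in a learning record store rather than inside any one organisation’s LMS, it travels with the learner.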

Furthermore, I can update my tin can manually. For example, if I read a newspaper article or have an enlightening conversation with someone around the water cooler, I can log into my account and record it.

This sounds admirable prima facie, but for me it raises a couple of concerns. Firstly, the system is reliant on the learner’s honour(!); but more concerningly, its philosophy reverts back to activity over outcome. Recording reams and reams of minor learning interactions all seems a bit pointless to me.

So where to from here?

Enter Plurality.

Plurality is a brilliant short film watched by the participants in Week 2 of The University of Edinburgh’s E-learning and Digital Cultures course.

The film paints a dystopian vision of the future whereby everyone’s personal details are stored in an online grid, which is controlled of course by the government. When you swipe your finger over a scanner, the computer reads your DNA and identifies you. This is convenient for automatically deducting the cost of a sandwich from your bank account, or unlocking your car, but not so convenient when you are on the run from the cops and they can track you through everything you touch.

Despite the Big Brother message pushed by the film, it prompted me to recognise an emerging opportunity for Tin Can if it were to re-align its focus on assessment and exploit the Internet of Things.

Suppose for example you are sitting in a jumbo jet waiting to take off to London or New York. If the cockpit had a scanner that required the pilot to swipe his finger, the computer could check his tin can to confirm he has acquired the relevant competencies at the required proficiencies before activating the engine.

Or suppose you are meeting a financial advisor. With a portable scanner, you could check that she has been keeping up with the continuing education points required by the relevant accreditation agency.

Competencies and assessment tend to cop a beating in the academic sphere, but in the real world you want to be reasonably confident that your pilot can fly a plane and your financial advisor knows what she’s talking about.
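The pre-flight check in that scenario is really just a lookup against the pilot’s tin can. As a sketch – the competency names and proficiency ladder are invented for illustration:

```python
# Hypothetical proficiency ladder; real frameworks define their own.
LEVELS = {"novice": 0, "competent": 1, "proficient": 2, "expert": 3}

# What the cockpit scanner demands before activating the engine.
required = {"jet-takeoff": "proficient", "instrument-flight": "competent"}

def cleared_to_start(required, held):
    """Return True only if every required competency is held at the
    required proficiency or higher; anything missing counts as novice."""
    return all(
        LEVELS.get(held.get(comp, "novice"), 0) >= LEVELS[level]
        for comp, level in required.items()
    )

pilot = {"jet-takeoff": "expert", "instrument-flight": "competent"}
print(cleared_to_start(required, pilot))   # True
```

The same check, pointed at an accreditation agency’s records, covers the financial advisor scenario too.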

If the film’s portrayal of DNA is too far-fetched, it need not be the mechanism. For example, the pilot could key in his personal credentials, or you could key in the financial advisor’s agency code.

But maybe it’s not so far-fetched after all. The Consortium for the Barcode of Life – based at the Smithsonian Institution’s National Museum of Natural History, no less – is currently researching DNA barcoding.

And still, maybe Plurality is looking at it the wrong way around. We can already store digital information in synthetic DNA. Perhaps in the not-too-distant future our training records will be coded into our natural DNA and injected back into our bodies. Then instead of the scanner referring to your tin can in the cloud, it mines your data right there in your genes.

And you thought science fiction was scary!

The future of MOOCs

26 November 2012

MOOCs get a bad rap. Dismissed as prescriptive, or teacher-centric, or unsocial, or something else, it’s like a badge of honour to espouse why you dislike MOOCs.

Despite their pedagogical flaws, however, MOOCs provide unprecedented access to quality content for millions of learners.

It’s all very well for Apple-owning, organic-buying professionals to cast aspersions, but consider the girl in Pakistan who’s too scared to set foot in a classroom. Consider the teenager in central Australia whose school has only one teacher. Consider the young woman in Indonesia who can’t afford college. Consider the boy in San Francisco whose maths teacher simply doesn’t teach very well.

Don’t all these people deserve a better education? And isn’t content sourced from some of the world’s best providers a giant leap in that direction?

Sure, the pedagogy may not be perfect, but the alternative is much worse.

MOOC proponent George Siemens distinguishes between two types of MOOC: the xMOOC and the cMOOC.

The former is the subject of such disdain. Involving little more than knowledge transmission and perhaps a quiz at the end, the xMOOC is widely seen as replicating old-fashioned lectures and exams.

In contrast, the latter leverages the connectedness of the participants. Seeded with content, the cMOOC empowers – read “expects” – the learner to discuss, debate, discover, share and co-create new knowledge with his or her fellow learners.

The cMOOC’s participant is active whereas the xMOOC’s participant is passive. As Siemens puts it, cMOOCs focus on knowledge creation and generation whereas xMOOCs focus on knowledge duplication.

Despite Siemens’ evangelism though, I don’t think the cMOOC is necessarily better than the xMOOC. (I’ll explain later.)

Love them or loathe them, xMOOC or cMOOC, the fact remains: MOOCs have arrived, and they are here to stay.

Moreover, I submit they are yet to wreak their full vengeance on the education industry. When I look into my crystal ball, I foresee that MOOCs will rock our world, and they will do so in 15 ways…

1. Universities will finally accept they are service providers.

As the latest edition of Educause Review indicates, universities are fee-for-service businesses. That means they are subject to market forces such as competition.

MOOCs raise the question: If I can study at Stanford University for free, why would I pay tens of thousands of dollars to study at your dinky university and subject myself to your arcane rules?

2. The vast majority of students will be overseas.

Countries that currently attract foreign students to their shores will need to brace for the impact on their local economies, as an ever-increasing proportion of students choose to gain an international education without leaving their home country.

3. The pecking order will be reshuffled.

While the world’s most prestigious institutions will enjoy a windfall of new students, those that rely more on age than ability will ultimately fail as the target audience realises how pedestrian they are.

Conversely, some of the smaller, younger institutions will emerge from the shadows as the world sees how good they really are.

4. Research will become a competitive advantage.

There’s nowhere to hide on the global stage, and cutting-edge expertise will be one of the few aspects that a university will have to distinguish itself from the others.

No more lazy professors, no more specious journal articles. Faculty who don’t generate a flow of new knowledge for their students will have their tenure terminated.

5. Universities will flip their classrooms.

Bricks’n’mortar establishments will become expensive relics unless their owners redeploy them. One way to do that is to leverage MOOCs for content delivery and provide value-added instruction (discussion, Q&A, worked examples, role plays etc) to local students – who of course will pay a premium for the privilege.

Studying on campus will become a status symbol.

6. The role of the teacher will evolve.

There’s no point rehashing the same lectures when the world’s best authorities have already recorded them and offered them to the world as OERs. It’s how the teacher uses that content to support learning that will make the difference.

7. The pedagogy of MOOCs will be enriched.

While MOOCs typically comprise video clips and perhaps a quiz, they will inevitably include more instructional devices to assist distance learning (and remain competitive).

Over time, content providers will supplement their core offerings with live webinars, interactive exercises, discussion forums, wikis, social networks etc. Some may even organise real-life meetups at selected sites around the world.

8. Content providers will charge for assessment.

A certificate of completion is good; an official grade is better.

Assessment is one of the ways universities will monetise their MOOCs, and edX is already going one step further by offering proctored exams.

9. Universities will offer credits for MOOCs.

Again, this is already being considered by the American Council on Education.

Of course, a certificate of completion won’t suffice. Ka ching!

10. Online cheating will mushroom.

An ever-present thorn in the side of online education, cheating will be almost impossible to prevent in the MOOC space. But surely we can do better than onsite exams?

11. Academic inflation will skyrocket.

Every man and his dog will have a ream of courses listed on his CV. Employers will consider certificates of completion meaningless, while maintaining a reserved suspicion over assessment scores.

Outcomes-based activities that demonstrate the applicant’s knowledge and skills will become a component of best-practice recruitment.

12. Offshoring will become the rule, not the exception.

Deloitte’s global CLO, Nick van Dam, told me that American firms are using MOOCs to upskill accountants based in India on US accounting practices.

Dental, anyone?

13. MOOCs will target the corporate sector.

Current MOOCs are heavily geared towards school and college audiences. Over time, an increasing number of narrow, specific topics that link to corporate competencies will emerge.

Content providers will wag the long tail.

14. The corporate sector will embrace xMOOCs.

Learners in the workplace are time-poor. They don’t have the luxury to explore, discover, and “make sense of the chaos”. They need the knowledge now, and they are happy for the expert to transmit it to them.

15. An xcMOOC hybrid will emerge as the third variant.

Sooner or later, the powers that be will remember that an instructivist approach suits novices, while an increasingly constructivist and connectivist approach suits learners as they develop their expertise.

Hence, the MOOC of the future may resemble an xMOOC in its early stages, and morph into a cMOOC in its later stages.

3 hot resources for best practice multiple-choice quizzing

27 April 2011

In my previous post, 14 reasons why your multiple-choice quiz sucks, I listed typical clangers whose only effect is to render your assessments ineffective.

Thumbs down

If they’re the bad and ugly aspects of MCQ design, what’s the good?

To answer that question I hit Google and a couple of academic databases, but mostly in vain.

It may be due to my poor research skills, but I found very little empirical evidence of best practice multiple-choice quizzing. Plenty of unsubstantiated opinion (of course) but not much science.

Cartoon

You see, Google wasn’t much help because “best practice” is frequently confused with “common practice” – but the two are not the same thing.

The peer-reviewed literature wasn’t much better. Alarmingly, many of the studies were inconclusive, adopted a flawed experimental design, and/or didn’t compare the performances of the quiz-takers on the job under the different treatments – which is the whole point!

However, through a combination of persistence, serendipity and social networking, I finally uncovered 3 resources that I consider worth recommending: a journal article, a website and a blog.

1. A Review of Multiple-Choice Item-Writing Guidelines for Classroom Assessment – In this article, Thomas Haladyna, Steven Downing & Michael Rodriguez validate a taxonomy of 31 multiple-choice item-writing guidelines by reviewing 27 textbooks on educational testing and 27 research studies. If you want insight into the myriad of MCQ variables, here it is.

2. QuestionMark – David Glow and Jayme Frey independently pointed me to the wealth of resources on this website. QuestionMark is a business, granted, but they know what they’re talking about – a claim backed up by the fact that they have a psychometrician on the payroll (cheers David). I also heard Eric Shepherd speak at LearnX last year and was very impressed.

3. The eLearning Coach – Connie Malamed is a qualified and experienced e-learning designer whose blog provides advice to fellow practitioners. I value Connie’s expertise because it is practical and she has implemented it in the real world.

If you are aware of other good MCQ resources – preferably evidence based – please share them here…