Posts tagged ‘evaluation’

The unscience of evaluation

29 November 2011

Evaluation is notoriously underdone in the corporate sector.

And who can blame us?

With ever increasing pressure bearing down on L&D professionals to put out the next big fire, it’s no wonder we don’t have time to scratch ourselves before shifting our attention to something new – let alone measure what has already been and gone.

Alas, today’s working environment favours activity over outcome.

Pseudo echo

I’m not suggesting that evaluation is never done. Obviously some organisations do it more often than others, even if they don’t do it often enough.

However, a secondary concern I have with evaluation goes beyond the question of quantity: it’s a matter of quality.

As a scientist – yes, it’s true! – I’ve seen some dodgy pseudoscience in my time. From political gamesmanship to biased TV and clueless newspaper reports, our world is bombarded with insidious half-truths and false conclusions.

The trained eye recognises the flaws (sometimes), but of course most people are not science grads. They can fall for the con surprisingly easily.

The workplace is no exception. However, I don’t see it as employees trying to fool their colleagues with creative number crunching, so much as those employees unwittingly fooling themselves.

If a tree falls in the forest

The big challenge I see with evaluating learning in the workplace is how to demonstrate causality – ie the link between cause and effect.

Suppose a special training program is implemented to improve an organisation’s flagging culture metric. When the employee engagement survey is run again later, the metric goes up.

Congratulations to the L&D team for a job well done, right?

Not quite.

What actually caused the metric to go up? Sure, it could have been the training, or it could have been something else. Perhaps a raft of unhappy campers left the organisation and were replaced by eager beavers. Perhaps the CEO approved a special bonus to all staff. Perhaps the company opened an onsite crèche. Or perhaps it was a combination of factors.

If a tree falls in the forest and nobody hears it, did it make a sound? Well, if a few hundred employees undertook training but nobody measured its effect, did it make a difference?

Without a proper experimental design, the answer remains unclear.

Evaluation by design

To determine with some level of confidence whether a particular training activity was effective, the following eight factors must be considered…

1. Isolation – The effect of the training in a particular situation must be isolated from all other factors in that situation. Then, the metric attributed to the staff who undertook the training can be compared to the metric attributed to the staff who did not undertake the training.

In other words, everything except participation in the training program must be more-or-less the same between the two groups.
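
To make that concrete, here’s a minimal sketch of the side-by-side comparison in Python. The file name and columns are hypothetical: assume a survey extract with one row per employee, a numeric culture_score and a boolean trained flag.

```python
import pandas as pd

# Hypothetical survey extract: one row per employee, with a numeric
# "culture_score" and a True/False "trained" flag.
survey = pd.read_csv("engagement_survey.csv")

trained = survey.loc[survey["trained"], "culture_score"]
untrained = survey.loc[~survey["trained"], "culture_score"]

print(f"Trained:   mean {trained.mean():.1f} (n={len(trained)})")
print(f"Untrained: mean {untrained.mean():.1f} (n={len(untrained)})")
```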

2. Placebo – It’s well known in the pharmaceutical industry that patients in a clinical trial who are given a sugar pill rather than the drug being tested sometimes get better. The power of the mind can be so strong that, despite the pill having no medicinal qualities whatsoever, the patient believes they are doing something effective and so their body responds in kind.

As far as I’m aware, this principle has never been applied to the evaluation of corporate training. If it were, the group of employees who were not undertaking the special training would still need to leave their desks and sit in the classroom for three 4-hour stints over three weeks.

Why?

Because it might not be the content that makes the difference! It could be escaping the emails and phone calls and constant interruptions. It could be the opportunity to network with colleagues and have a good ol’ chat. It might be seizing the moment to think and reflect. Or it could simply be an appreciation of being trained in something, anything.

3. Randomisation – Putting the actuaries through the training and then comparing their culture metric to everyone else’s sounds like a great idea, but it will skew the results. Sure, the stats will give you an insight into how the actuaries are feeling, but it won’t be representative of the whole organisation.

Maybe the actuaries have a range of perks and a great boss; or conversely, maybe they’ve just gone through a restructure and a bunch of their mates were made redundant. To minimise these effects, staff from different teams in the organisation should be randomly assigned to the training program. That way, any localised factors will be evened out across the board.
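
In code, random assignment is refreshingly simple. A minimal sketch (the staff names and fifty-fifty split are placeholders, not a prescription):

```python
import random

# Hypothetical list of staff drawn from teams right across the organisation.
staff = ["alice", "bob", "carol", "dinesh", "erin", "farid", "gina", "hamish"]

random.shuffle(staff)              # randomise to even out localised factors
midpoint = len(staff) // 2
training_group = staff[:midpoint]  # undertakes the training program
control_group = staff[midpoint:]   # business as usual (or a placebo session)

print("Training group:", training_group)
print("Control group: ", control_group)
```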

4. Sample size – A handful of people (even if they’re randomised) cannot be expected to represent an organisation of hundreds or thousands, so testing five or six employees is unlikely to produce useful results.
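
How many is enough? A standard power calculation gives a ballpark figure. This sketch uses the statsmodels package, and the effect size is an assumption (a “medium” effect), so treat the answer as illustrative rather than a universal rule:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the sample size per group needed to detect a medium effect.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed medium effect size (Cohen's d)
    alpha=0.05,       # 5% tolerance for a false positive
    power=0.8,        # 80% chance of detecting a real effect
)
print(f"Need roughly {n_per_group:.0f} employees per group")  # ~64
```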

5. Validity – Calculating a few averages and generating a bar graph is a sure-fire way to go down the rabbit hole. When comparing numbers, statistically valid methods such as Analysis of Variance (ANOVA) are required to determine whether any differences are genuinely significant.
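
For the curious, a one-way ANOVA is only a few lines with scipy. The scores below are made up purely for illustration:

```python
from scipy.stats import f_oneway

# Hypothetical culture scores for three randomised groups.
trained = [72, 68, 75, 80, 71]
placebo = [70, 66, 69, 74, 72]
control = [65, 63, 70, 68, 64]

f_stat, p_value = f_oneway(trained, placebo, control)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A small p-value (say, below 0.05) suggests the group means genuinely
# differ; a bar graph of averages alone can never tell you that.
```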

6. Replication – Even if you were to demonstrate a significant effect of the training for one group, that doesn’t guarantee the same effect for the next group. You need to do the test more than once to establish a pattern and negate the suspicion of a one-off.

7. Subsets – Variations among subsets of the population may exist. For example, the parents of young children might feel aggrieved for some reason, or older employees might feel like they’re being ignored. So it’s important to analyse subsets to see if any clusters exist.
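
One quick way to do that is to slice the survey data by demographic columns. A minimal sketch, assuming the same hypothetical survey extract with age_band and has_young_kids columns:

```python
import pandas as pd

survey = pd.read_csv("engagement_survey.csv")  # hypothetical extract

# Compare the culture metric across subsets to expose hidden clusters.
for column in ["age_band", "has_young_kids"]:
    print(survey.groupby(column)["culture_score"].agg(["mean", "count"]))
```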

8. Time and space – Just because you demonstrated the positive effect of the training program on culture in the Sydney office, doesn’t mean it will have the same effect in New York or Tokyo. Nor does it mean it will have the same effect in Sydney next year.

Weird science

Don’t get me wrong: I’m not suggesting you need a PhD to evaluate your training activity. On the contrary, I believe that any evaluation – however informal – is better than none.

What I am saying, though, is that if your results are to be meaningful, a little bit of know-how goes a long way.

For organisations that are serious about training outcomes, I go so far as to propose employing a Training Evaluation Officer – someone who is charged not only with getting evaluation done, but with getting it done right.

_______________

This post was originally published at the Learning Cafe on 14 November 2011.

Reflections of LearnX 2009 – Day 1

3 April 2009

I attended the annual LearnX Asia Pacific conference this week at Sydney’s Darling Harbour.

Darling Harbour on a dreary April morning.

While the weather was dreary, I found the sessions topical and thought provoking. Below I’ve shared some of the key messages that I drew from Day 1…

The Magic of Speed Thinking: Ken Hudson, Director of The Speed Thinking Zone, kicked off proceedings with a keynote address about working smarter, not harder. Ken’s central theme is that being able to think faster and better can help us unlock ideas and improve our productivity. Maintaining that “our brain works better when our bodies are moving”, Ken got everyone in the room to participate in a few ice-breaker activities involving coin catching and brainstorming answers to pop questions. I must admit, it lifted the energy of the room. Ken then introduced a 9-circle template with the question “In these tough economic times, why should we invest more into training?” – and asked us to list 9 possibilities in 2 minutes. The idea isn’t necessarily to achieve a full gamut of answers, but to get the party started quickly. I think Ken’s ideas have real potential for expediting meetings and stimulating brainstorming sessions, but careful thought and deep reflection remain necessary follow-ups. For more information about speed thinking, visit Ken’s website and refer to his book The Idea Generator.

Bringing Generations Together through Collaboration and Informal Learning: Faith Legendre, Senior Global Consultant at Cisco WebEx, provided the audience with a synopsis of our 4 major generations today (Generation Vet, Boomers, Gen X & Gen Y), and an overview of their changing learning styles over time (push to pull, formal to informal, comprehensive to nibblets, and physical classes to online). While Faith recognised that generational attributes are widely disputed (eg online habits are not defined by age but by exposure to emerging technology), her key message is that people across all generations are using technology today to bridge gaps and collaborate. Faith also highlighted the technology collaboration community at Cisco Community Central.

How to capture evaluation data to prevent costly e-learning deployment failures: Susan Pepper, Managing Director of the ROI Institute of Australia, reinforced the need for rigorous evaluation to ensure the success of e‑learning. Susan adheres to 5 levels of feedback, comprising Kirkpatrick’s four levels of evaluation, plus the calculation of return on investment (ROI). Susan also recommends that evaluation data be collected not only post implementation, but also during implementation to remedy any problems as they arise. Another key message is that e-learning programs require thorough planning, particularly to determine the organisation’s need, which in turn should inform the objectives of the solution.
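
For reference, the ROI level that sits on top of Kirkpatrick’s four is conventionally calculated as net programme benefits divided by fully loaded programme costs. A toy example (the figures are entirely made up; in practice the benefits must first be isolated and monetised):

```python
# Illustrative figures only; real benefits must be isolated and monetised.
program_benefits = 250_000  # hypothetical monetised benefits of the e-learning
program_costs = 100_000     # hypothetical fully loaded costs

roi_percent = (program_benefits - program_costs) / program_costs * 100
print(f"ROI = {roi_percent:.0f}%")  # 150%
```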

Learning without boundaries: Ben Saunders, Business Analyst Consultant at HCS, provided us with a comprehensive overview of m‑learning. While pointing out that m-learning started as far back as 3000BC when the Sumerians inscribed text on portable clay tablets, Ben recognises that the increasing sophistication and decreasing cost of mobile devices (eg smart phones) are making m-learning more relevant today. Ben categorises the limitations of m‑learning under three major banners: hardware (screen size, usability, information security), software (multiple operating systems, unsupported file formats, SCORM compliance) and culture (work/life balance and the digital divide). However, he also notes that learners are already using mobile technologies in their general day-to-day activities, leading them to expect to do likewise for education.

Extending your reach: Learning at a distance: Glen Hansen, National L&D Manager at Employment Plus, shared his organisation’s experience of using web conferencing to transition from traditional face-to-face learning delivery to a blended model. While the transition period was challenging (learning curve, lost skills through staff turnover), Glen cites significant benefits, such as: enhanced collaboration, enablement of JIT learning, consistency of message and reduced single point sensitivity. Glen also shares some practical tips for web conferencing, such as: conduct a needs analysis before launching web conferencing, trial potential software prior to selection, enquire whether the provider includes training in their package, appoint a moderator to support the facilitator during sessions, freeze the webcam to save bandwidth, use plenty of graphics, and provide opportunities for the learners to interact with one another. Glen also recommends The eLearning Guild’s Handbook on Synchronous e‑Learning.

Selling e-learning to your clients: A culture change approach: I must admit that I felt like I had walked into the wrong session, as Ingrid Karlaftis, National Account Executive at Catapult E-Learning, adopted the vendor’s perspective of selling an e-learning solution to an organisation. However, I think Ingrid’s key messages can help e-learning practitioners within organisations, especially when implementing a project or major initiative. For example: never over-promise and under-deliver, work hand-in-hand with your clients along the journey, identify the needs of each team across the business (they will be different!), promote the notion of “one community”, train the trainer, maintain your transparency, provide constant support, measure and report.

Professional Audio – The Key to Effective E-Learning: This was a shameless sales pitch, but to be fair, the presenters didn’t pretend otherwise. Adam Morgan and his crew promoted the advantages of employing professional actors (rather than “Tim from Accounts”) to produce the voiceovers in your e-learning courseware. Why? Because actors are better skilled at engaging your audience. Adam has a point in that an outfit like Voiceoversonthenet can cater for different audiences through variables such as accent, tone, gender and pace. So should you use an actor? Well, that’s up to you.

Learning Leaders Panel: The final session on Day 1 was a facilitated discussion about building talent and learning anytime, anywhere, at any pace. Among the topics discussed: Bob Spence observed that informal learning relies on trust that the material being learned is worthwhile; Rob Wilkins shared his view that the feudal management system of a typical corporation inhibits its use of social media for learning; Anne Moore suggested that organisations need to become more like Gen Ys to support the next generation of employees who will lead us beyond the GFC; John Clifford informed us that every Telstra field technician has a laptop and a mobile device to enable e-learning on the road; Ann Quach recommended that we focus on content, then its mode of delivery (avoid using a blog or wiki just because it’s the latest fad); and Wendy John reminded us to empower staff to learn when they need to, otherwise engagement will be low and the experience will be a waste of time.

Stay tuned for an overview of Day 2…!

Walking in our customers’ shoes

16 December 2008

Today my team undertook an interesting evaluation activity: We tested whether we could process a particular task after completing the online training that we had developed for it.

I’m glad to report that we could! While we don’t process the task day-to-day, we were still able to acquire the basic skills from the online training and apply them effectively in the “real world”.

In other words, we put ourselves in our customers’ shoes and were able to walk.

I suspect that this approach is under-used in the corporate sector to evaluate e‑learning. If the developer doesn’t experience what the learner experiences, how can he or she fully appreciate the outcome?