
Science and Faith: A Venn diagram

17 December 2012


Merry Christmas everyone.

I wish you a 2013 full of provocative ideas, courageous experimentation, loads of fun, and – of course – plenty of learning!

Playing by numbers

23 April 2012

The theme of last week’s Learning Cafe in Sydney was How to Win Friends and Influence Learning Stakeholders.

Among the stakeholders considered was the "C-Level & Leadership". This got me thinking: do the C-suite and lower-rung managers expect different things from L&D?

There’s no shortage of advice out there telling us to learn the language of finance, because that’s what the CEO speaks. And that makes sense to me.

While some of my peers shudder at the term ROI, for example, I consider it perfectly reasonable for the one who’s footing the bill to demand something in return.

Show me the money.

Stack of Cash

But I also dare to suggest that the managers who occupy the lower levels of the organisational chart don’t give a flying fox about all that.

Of course they “care” about revenue, costs and savings – and they would vigorously say so if asked! – but it’s not what motivates them day to day. What they really care about is their team’s performance stats.

I’m referring to metrics such as:

• Number of widgets produced per hour
• Number of defects per thousand opportunities
• Number of policy renewals
• Number of new write-ups

In other words, whatever is on their dashboard. That’s what they are ultimately accountable for, so that’s what immediately concerns them.
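To make that concrete, here's a minimal sketch of the arithmetic behind a couple of those dashboard figures. All of the counts are invented for illustration; the point is simply that each metric boils down to a ratio the manager watches.

```python
# A rough sketch of typical dashboard metrics (all figures hypothetical).

widgets_produced = 1_240   # units produced this week
hours_worked = 160         # team hours over the same period

defects_found = 9          # defects detected in QA
opportunities = 12_400     # units inspected (one opportunity each)

widgets_per_hour = widgets_produced / hours_worked
defects_per_thousand = defects_found / opportunities * 1_000

print(f"Widgets per hour: {widgets_per_hour:.1f}")
print(f"Defects per thousand opportunities: {defects_per_thousand:.2f}")
```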

Woman drawing a graph

The business-savvy L&D consultant understands this dynamic and uses it to his or her advantage.

He or she appreciates the difference between what the client says they want, and what they really need.

He or she realises the client isn’t invested in the training activity, but rather in the outcome.

He or she doesn’t start with the solution (“How about a team-building workshop?”), but rather with the performance variable (“I see your conversion rate has fallen short of the target over the last 3 months”).

He or she knows that the numbers that really matter don’t necessarily have dollar signs in front of them.

The unscience of evaluation

29 November 2011

Evaluation is notoriously underdone in the corporate sector.

And who can blame us?

With ever-increasing pressure bearing down on L&D professionals to put out the next big fire, it’s no wonder we don’t have time to scratch ourselves before shifting our attention to something new – let alone measure what has already been and gone.

Alas, today’s working environment favours activity over outcome.

Pseudo echo

I’m not suggesting that evaluation is never done. Obviously some organisations do it more often than others, even if they don’t do it often enough.

However, a secondary concern I have with evaluation goes beyond the question of quantity: it’s a matter of quality.

As a scientist – yes, it’s true! – I’ve seen some dodgy pseudo science in my time. From political gamesmanship to biased TV and clueless newspaper reports, our world is bombarded with insidious half-truths and false conclusions.

The trained eye recognises the flaws (sometimes) but of course, most people are not science grads. They can fall for the con surprisingly easily.

The workplace is no exception. However, I don’t see it as employees trying to fool their colleagues with creative number crunching, so much as those employees unwittingly fooling themselves.

If a tree falls in the forest

The big challenge I see with evaluating learning in the workplace is how to demonstrate causality – i.e. the link between cause and effect.

Suppose a special training program is implemented to improve an organisation’s flagging culture metric. When the employee engagement survey is run again later, the metric goes up.

Graph

Congratulations to the L&D team for a job well done, right?

Not quite.

What actually caused the metric to go up? Sure, it could have been the training, or it could have been something else. Perhaps a raft of unhappy campers left the organisation and were replaced by eager beavers. Perhaps the CEO approved a special bonus to all staff. Perhaps the company opened an onsite crèche. Or perhaps it was a combination of factors.

If a tree falls in the forest and nobody hears it, did it make a sound? Well, if a few hundred employees undertook training but nobody measured its effect, did it make a difference?

Without a proper experimental design, the answer remains unclear.

Evaluation by design

To determine with some level of confidence whether a particular training activity was effective, the following eight factors must be considered…

Scientist

1. Isolation - The effect of the training in a particular situation must be isolated from all other factors in that situation. Then, the metric attributed to the staff who undertook the training can be compared to the metric attributed to the staff who did not undertake the training.

In other words, everything except participation in the training program must be more-or-less the same between the two groups.

2. Placebo - It’s well known in the pharmaceutical industry that patients in a clinical trial who are given a sugar pill rather than the drug being tested sometimes get better. The power of the mind can be so strong that, despite the pill having no medicinal qualities whatsoever, the patient believes they are doing something effective and so their body responds in kind.

As far as I’m aware, this fact has never been applied to the evaluation of corporate training. If it were, the group of employees who were not undertaking the special training would still need to leave their desks and sit in the classroom for three 4-hour stints over three weeks.

Why?

Because it might not be the content that makes the difference! It could be escaping the emails and phone calls and constant interruptions. It could be the opportunity to network with colleagues and have a good ol’ chat. It might be seizing the moment to think and reflect. Or it could simply be an appreciation of being trained in something, anything.

3. Randomisation - Putting the actuaries through the training and then comparing their culture metric to everyone else’s sounds like a great idea, but it will skew the results. Sure, the stats will give you an insight into how the actuaries are feeling, but it won’t be representative of the whole organisation.

Maybe the actuaries have a range of perks and a great boss; or conversely, maybe they’ve just gone through a restructure and a bunch of their mates were made redundant. To minimise these effects, staff from different teams in the organisation should be randomly assigned to the training program. That way, any localised factors will be evened out across the board.

4. Sample size – Several people (even if they’re randomised) cannot be expected to represent an organisation of hundreds or thousands. So testing five or six employees is unlikely to produce useful results.

5. Validity - Calculating a few averages and generating a bar graph is a sure-fire way to go down the rabbit hole. When comparing numbers, statistically valid methods such as Analysis of Variance are required to determine whether any differences are actually significant (see the sketch after this list).

6. Replication - Even if you were to demonstrate a significant effect of the training for one group, that doesn’t guarantee the same effect for the next group. You need to do the test more than once to establish a pattern and negate the suspicion of a one-off.

7. Subsets – Variations among subsets of the population may exist. For example, the parents of young children might feel aggrieved for some reason, or older employees might feel like they’re being ignored. So it’s important to analyse subsets to see if any clusters exist.

8. Time and space - Just because you demonstrated the positive effect of the training program on culture in the Sydney office, doesn’t mean it will have the same effect in New York or Tokyo. Nor does it mean it will have the same effect in Sydney next year.
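To make the randomisation and validity points concrete, here's a minimal sketch in Python of how the comparison might be set up. Everything in it is invented for illustration (the staff list, group sizes and survey scores are not real data), and it assumes SciPy is available for the Analysis of Variance.

```python
import random
from scipy import stats

# Hypothetical staff list drawn from across the organisation (names invented).
staff = [f"employee_{i}" for i in range(1, 201)]

# Randomisation: shuffle, then split into a training group and a control group.
random.seed(42)
random.shuffle(staff)
training_group = staff[:100]
control_group = staff[100:]

# ...the training runs, then the engagement survey is repeated...
# Post-survey culture scores, fabricated here purely to show the mechanics.
training_scores = [random.gauss(7.2, 1.0) for _ in training_group]
control_scores = [random.gauss(6.9, 1.0) for _ in control_group]

# Validity: a one-way ANOVA (equivalent to a t-test when there are only two
# groups) tests whether the difference in mean scores is statistically significant.
f_stat, p_value = stats.f_oneway(training_scores, control_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Significant difference between the groups.")
else:
    print("No significant difference - don't credit the training just yet.")
```

Even then, remember the remaining factors: a single significant result for one group, in one office, at one point in time, is a promising sign rather than proof.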

Weird science

Don’t get me wrong: I’m not suggesting you need a PhD to evaluate your training activity. On the contrary, I believe that any evaluation – however informal – is better than none.

What I am saying, though, is for your results to be more meaningful, a little bit of know-how goes a long way.

For organisations that are serious about training outcomes, I go so far as to propose employing a Training Evaluation Officer – someone who is charged not only with getting evaluation done, but with getting it done right.

_______________

This post was originally published at the Learning Cafe on 14 November 2011.

Facts are a bitch

28 October 2010

This morning I posted the following question to Twitter:

What do you think of Parrashoot as the name of a local photography competition in Parramatta?

The word play is genius, no?

Now, for those of you who don’t know, Parramatta is the cosmopolitan sister city of Sydney, approximately 23 kilometres (14 miles) west of the Harbour Bridge.

Due to its geographical location and its colourful history, it is often put down by yuppies and wannabes, and is typically lumped into the broad, vague and lazy category “Sydney’s West” which features prominently on the nightly news.

While this view of my local area is about 25 years out of date (and perhaps a little racist?), that doesn’t seem to dent its prevalence.

Anyway, among the replies I received to my tweet was one that linked the fragment “shoot” to homicide. It’s clear the guy was joking, but it got me thinking…

Being the geek I am, I looked up the state’s crime statistics and graphed the homicides recorded by the police from 1995 through to 2009:

Graph of homicides recorded by NSW Police from 1995 through to 2009.

The results are intriguing – not only because the figures are incredibly low for a major metropolis.

Notice how Inner Sydney (the CBD and surrounds) tops the list with 156 reports, followed by Fairfield-Liverpool (southwestern suburbs), then the Hunter (northern wine & coal region), Canterbury-Bankstown (inner southwestern suburbs), Illawarra (south coast) and the Mid North Coast.

Eventually Central West Sydney (which includes Parramatta) makes an appearance with 66 reports, while – hang on! – the well-heeled Eastern Suburbs rounds out the Top 10 with 52 reports.
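Recreating that kind of chart takes only a few lines. The sketch below plots just the three figures quoted in this post (the original graph, of course, covered many more regions); it is illustrative rather than a substitute for the real thing.

```python
import matplotlib.pyplot as plt

# Only the three figures quoted above; the original graph covered the full
# set of NSW regions for 1995 to 2009.
regions = ["Inner Sydney", "Central West Sydney", "Eastern Suburbs"]
homicides = [156, 66, 52]

plt.bar(regions, homicides)
plt.ylabel("Homicides recorded by NSW Police, 1995-2009")
plt.title("Recorded homicides for three selected regions")
plt.tight_layout()
plt.show()
```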

Oh, my. That’s enough to make oneself gag on one’s latte.

So what’s this got to do with learning?

In the workplace, how often do we L&D professionals make assumptions that simply aren’t true?

I’ll hazard a guess: too often.

My point is, we should endeavour to back up our assumptions with evidence.

• What are the learning priorities of the business?
• What is the most effective mode of delivery?
• Is Gen-Y collaborative?
• Are baby boomers technophobic?
• Does that expensive leadership course improve performance?
• Are our people incapable of self-directed learning?

These are just some of the many questions that we really should answer with data.

Otherwise we may find ourselves about 25 years out of date.

Ass

Analyse This

7 August 2009

From the instructional designer’s point of view, the term “Analysis” fulfils the “A” in the ADDIE Model, which in turn forms the backbone of the Instructional Systems Design (ISD) process.

ADDIE Model, courtesy of Regent University.

What is analysis?

Big Dog & Little Dog provide an excellent Instructional System Design Manual that covers analysis quite comprehensively. However, the basics are really straightforward.

I like their description of analysis which they attribute to Allison Rossett:

Analysis is the study we do in order to figure out what to do.

Because that’s exactly what it is. It’s the foundation of all subsequent development activity.

There’s no point launching into a frenzy of work unless you know why you’re doing it. Otherwise your efforts are liable to go to waste, and where’s the value-add in that?

Focus on performance

When conducting a needs analysis in the workplace, it’s important to focus on performance. Not training, not learning, not development… performance.

Freestyle Biking, courtesy of sledpunk, stock.xchng.

Your red flag is the Performance Gap, which is the difference between the current level of performance and what it should be.

You need to determine why the gap exists, then design a solution to fix it.

That solution may be training or learning or development, or something else altogether. It may be simple or complex, online or face-to-face, real-time or asynchronous.

It all depends on the nature of the problem.

Data

There are two major approaches to identifying performance gaps:

1. Reactive, and
2. Proactive.

The reactive approach responds to your customer (or someone else) telling you what they need. For example, a Team Leader might call you to say that her team is struggling to meet its productivity targets; or a Project Manager might inform you of the imminent rollout of a new processing system. In either case, you need to react to that information.

While the reactive approach is vitally important, the proactive approach is arguably more important in today’s environment. By adopting a proactive approach, you don’t simply wait for your customer to tell you what their needs are – you find out for yourself.

This kind of analysis relies heavily on data. The data may be subjective, for example:

• Consultation
• Conversation
• Qualitative survey
• Focus group

Or it may be objective, for example:

• Productivity statistics
• Quality statistics
• Complaints register
• Compliance report
• Skills matrix

The data provides the evidence to support what you’re going to do next.

It gives you the confidence that your work will hit the mark and, ultimately, improve the overall performance of the business.
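As a simple illustration of letting objective data flag a performance gap, here is a minimal sketch. The teams, target and scores are all invented.

```python
# Hypothetical productivity statistics: average items processed per day.
TARGET = 50  # the agreed performance target (invented figure)

productivity = {
    "Team A": 54,
    "Team B": 41,
    "Team C": 49,
}

# The performance gap is the difference between where performance is now
# and where it should be. Flag anyone falling short, then investigate why.
for team, actual in productivity.items():
    gap = TARGET - actual
    if gap > 0:
        print(f"{team}: at {actual}, {gap} below target - investigate the root cause")
    else:
        print(f"{team}: on or above target")
```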

Root cause analysis

When analysing the data, I recommend that you be suspicious, but fair.

For example, if a graph shows that a certain individual is struggling with his productivity score, then yes: suspect that person may be experiencing an issue that’s hindering their performance.

Magnifying glass, courtesy of benis979, stock.xchng.

Bear in mind, however, that a myriad of reasons may be influencing the result. Maybe they’re new to the role; maybe they’re sick or burnt out; maybe they’re constantly bombarded all day by their peers. Always consider the conditions and the circumstances.

But at the end of the day, the numbers don’t lie, so you need to do something. Sometimes training is the answer, sometimes it isn’t. Maybe a flawed process needs to be modified; maybe a pod reshuffle is in order; maybe someone just needs a holiday.

Whatever you do, make sure you address the root of the problem, not just the symptoms.

