Posted tagged ‘science’

Painting by numbers

3 June 2017

A lifetime ago I graduated as an environmental biologist.

I was one of those kids who did well in school, but had no idea what his vocation was. As a pimply teenager with minimal life experience, how was I to know even half the jobs that existed?

After much dilly dallying, I eventually drew upon my nerdy interest in science and my idealistic zeal for conservation and applied for a BSc. And while I eventually left the science industry, I consider myself extremely fortunate to have studied the discipline because it has been the backbone of my career.

Science taught me to think about the world in a logical, systematic manner. It’s a way of thinking that is founded on statistics, and I maintain it should inform the activities we undertake in other sectors of society such as Learning & Development.

The lectures I attended and the exams I crammed for faded into a distant memory, until the emergence of learning analytics rekindled the fire.

A succession of realisations dawned on me in rapid order: I love maths and stats; I've drifted away from them over time; the world is finally waking up to the importance of the scientific method; and it is high time I refocused my attention on it.

So it is in this context that I have started to review the principles of statistics and its contemporary manifestation, analytics. My exploration has been accompanied by several niggling queries: what’s the difference between statistics and analytics? Is the latter just a fancy name for the former? If not, how not?

Overlaying the post-modern notion of data science, what are the differences among the three? Is a data scientist, as Sean Owen jokes, merely a statistician who lives in San Francisco?

The DIKW Pyramid

My journey of re-discovery started with the DIKW Pyramid. This beguilingly simple triangle models successive orders of epistemology, which is quite a complex concept. Here’s my take on it…

The DIKW Pyramid, with Data at the base, Information a step higher, Knowledge another step higher, and Wisdom at the peak.

At the base of the pyramid, Data is a set of values of qualitative or quantitative variables. In other words, it is the collection of facts or numbers at your disposal that somehow represent your subject of study. For example, your data may be the weights of 10,000 people. While this data may be important, if you were to flick through the reams of numbers you wouldn’t glean much from them.

The next step up in the pyramid is Information. This refers to data that has been processed to make it intelligible. For example, if you were to calculate the average of those ten thousand weights, you’d have a comprehensible number that is inherently meaningful. Now you can do something useful with it.

The next step up in the pyramid is Knowledge. To avoid getting lost in a philosophical labyrinth, I’ll just say that knowledge represents understanding. For example, if you were to compare the average weight against a medical standard, you might determine these people are overweight.

The highest step in the pyramid is Wisdom. I'll offer an example of wisdom later in my deliberation, but suffice it to say here that wisdom represents higher order thinking that synthesises various pieces of knowledge to generate insight. For example, the wise man or woman will not only know these people are overweight, but also recognise they are at risk of disease.

Some folks describe wisdom as future-focused, and I like that because I see it being used to inform decisions.
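To make that climb concrete, here's a minimal sketch in Python of the worked example above; the weights, the medical benchmark and the recommendation are all invented for illustration.

```python
import statistics

# Data: the raw values at our disposal (hypothetical weights, in kg)
weights = [92.4, 105.1, 88.7, 110.3, 97.6]  # imagine 10,000 of these

# Information: process the data into something intelligible
average_weight = statistics.mean(weights)

# Knowledge: compare the information against a standard to gain understanding
HEALTHY_LIMIT_KG = 85  # hypothetical medical benchmark
overweight = average_weight > HEALTHY_LIMIT_KG

# Wisdom: synthesise the knowledge to inform a decision
if overweight:
    print(f"Average weight {average_weight:.1f} kg exceeds {HEALTHY_LIMIT_KG} kg: "
          "this population is at risk of disease, so consider an intervention.")
```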


Statistics

My shorthand definition of statistics is the analysis of numerical data.

In practice, this is done to describe a population or to compare populations – that is to say, infer significant differences between them.

For example, by calculating the average weight of 10,000 people in Town A, we describe the population of that town. And if we were to compare the weights of those 10,000 people with the weights of 10,000 people in Town B, we might infer the people in Town A weigh significantly more than the people in Town B do.

Similarly, if we were to compare the household incomes of the 10,000 people in Town A with the household incomes of the 10,000 people in Town B, we might infer the people in Town A earn significantly less than the people in Town B do.

Then if we were to correlate all the weights against their respective household incomes, we might demonstrate an inverse relationship between the two.
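As a rough illustration of those three steps (not anyone's actual analysis), here's how they might be run in Python with SciPy, using simulated samples standing in for the two towns:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated stand-ins for the 10,000 residents of each town
weights_a = rng.normal(95, 12, 10_000)   # Town A weights (kg)
weights_b = rng.normal(88, 12, 10_000)   # Town B weights (kg)
incomes_a = 120_000 - 700 * weights_a + rng.normal(0, 5_000, 10_000)  # Town A incomes

# Describe a population
print(f"Town A mean weight: {weights_a.mean():.1f} kg")

# Compare populations: is the difference in weight significant?
t_stat, p_value = stats.ttest_ind(weights_a, weights_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")

# Correlate weight with household income within Town A
r, p_corr = stats.pearsonr(weights_a, incomes_a)
print(f"Pearson r = {r:.2f}, p = {p_corr:.3g}")
```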

The DIKW Pyramid, showing statistics converting data into information.

Thus, our statistical tests have used mathematics to convert our data into information. We have climbed a step up the DIKW Pyramid.


Analytics

My shorthand definition of analytics is the analysis of data to identify meaningful patterns.

So while analytics is often conflated with statistics, it is indeed a broader expression – not only in terms of the nature of the data that may be analysed, but also in terms of what is done with the results.

For example, if we were to analyse the results of our weight-related statistical tests, we might recognise an obesity problem in poor neighbourhoods.
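A minimal sketch of that pattern-spotting, assuming we've tabulated each person's weight against their neighbourhood's income bracket (the layout and the numbers are invented for illustration):

```python
import pandas as pd

# Hypothetical combined dataset: one row per person
df = pd.DataFrame({
    "income_bracket": ["low", "low", "middle", "middle", "high", "high"],
    "weight_kg":      [104.2, 98.7, 91.3, 88.0, 79.5, 82.1],
})

OBESITY_THRESHOLD_KG = 95  # hypothetical cut-off for this illustration
df["obese"] = df["weight_kg"] > OBESITY_THRESHOLD_KG

# The meaningful pattern: the proportion of obese residents per income bracket
print(df.groupby("income_bracket")["obese"].mean())
```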

The DIKW Pyramid, showing analytics converting data into knowledge.

Thus, our application of analytics has used statistics to convert our data into information, which we have then translated into knowledge. We have climbed another step higher in the DIKW Pyramid.

Data science

My shorthand definition of data science is the combination of statistics, computer programming, and domain expertise to generate insight. Or so I’m led to believe.

Given the powerful statistical software packages currently available, I don’t see why anyone would need to resort to hand coding in R or Python. At this early stage of my re-discovery, I can only assume the software isn’t sophisticated enough to compute the specific processes that people need.

Nonetheless, if we return to our obesity problem, we can combine our new-found knowledge with existing knowledge to inform strategic decisions. For example, given we know a healthy diet and regular exercise promote weight loss, we might seek to improve the health of our fellow citizens in poor neighbourhoods (and thereby lessen the burden on public healthcare) by building sports facilities there, or by subsidising salad lunches and fruit in school canteens.

The DIKW Pyramid, showing data science converting data into wisdom.

Thus, not only has our application of data science used statistics and analytics to convert data into information and then into knowledge, it has also converted that knowledge into actionable intelligence.

In other words, data science has converted our data into wisdom. We have reached the top of the DIKW Pyramid.

E-Learning = Innovation = Science

10 June 2014

Have you ever been to a conference where the presenter asks the audience, “Who’s implemented a mobile learning strategy?”, and only 2 or 3 people raise their hand?

Forgive me: it’s a rhetorical question. I know you have. Because everyone has.

Of course the question might not revolve around mobile learning, but rather gamification, or enterprise social networking, or flipped classrooms, or whatever the hot topic may be.

While a lot of talk is bandied around about e-learning, it’s evident that relatively few of us are actually doing it.

The e-learning panel at AITD2014

To help bridge the gap, I was honoured to moderate a panel session at last month’s AITD National Conference. I was even more honoured to share the stage with Helen Blunden, Matthew Guyan, Anne Bartlett-Bragg and Simon Crook.

The session was entitled E-Learning: Transforming Talk into Action, and the panellists were hand-picked from multiple sectors to share their insights and expertise with us. And that they did.

Simon explained how his science students are using their iPads in class to enrich their learning experience: “Engage me or enrage me”; Matt described his use of Articulate Storyline to develop online courses in-house; Helen shared her experience in using Yammer to cultivate a collaborative culture in a conservative corporate environment; while Anne dove head-first into MOOCs and ruffled a few feathers along the way.

Regardless of the specific technology or pedagogy discussed by the panellists, the overarching advice provided by each one was to give it a go and see what happens.

In other words, e-learning is innovation.


Now I realise that many of my peers will balk at this assertion. After all, e-learning is decades old, and today's L&D pros are tech-savvy and digitally invested.

So let’s take the “e” out of “e-learning” already – I’ve argued that myself in the past. However, I put it to you that a great many among us still haven’t put the “e” into e-learning, let alone taken it out again.

For these people, e-learning represents making changes in something established, especially by introducing new methods, ideas, or products. And when you think about it, e-learning is that for the rest of us too – it’s just we’re more comfortable with it; or, in fact, excited by it.

For all of us then, viewing e-learning through the lens of innovation offers us a crucial advantage: it reframes failure.

You see, innovators don’t think of failure as most people do. Rather than seeing it as something to be ashamed of, avoided at all costs, and certainly not aired in public, innovators embrace failure; they actively seek it out – and, most importantly of all, they learn from it.

They appreciate the fact that if you never try, you never know. A failure isn’t an error or a mistake, but a beautiful piece of intelligence that informs your next move.

The trick of course is to ensure that when you fail, you do so quickly and cheaply. You don’t want to bring the roof crashing down upon you, so protect yourself by taking baby steps. Pilot your innovation and if it doesn’t quite work, modify it and try again; if it tanks miserably, cut your losses and abandon it; but if it does work, scale it up, keep an eye on it, continue to modify it where necessary, and enjoy your “overnight success”.


And still I wish to take this line of thinking further. Beyond innovation, e-learning is science.

My definition of science is “systematic knowledge”. If you want to obtain deep, scientific insight, get systematic.

Scientists frame failure in much the same way as innovators do. Again, rather than seeing it as something to be ashamed of, they see it simply as a result. It’s not good or bad, right or wrong. It just is.

The advantage of viewing e-learning through the lens of science is embedded in its methodology. Classic experimental design is based on two hypotheses: the null hypothesis, in which the treatment has no effect; and the alternative hypothesis, in which the treatment has an effect. By running an experiment, the scientist will either accept or reject the null hypothesis.

For example, suppose a scientist in a soda company is charged with testing whether honey-flavoured cola will be popular. He might set up two sample groups drawn from the target market: one group tastes the regular cola, the other group tastes the honey-flavoured cola, and both rate their satisfaction. After crunching the numbers, the scientist may find no significant difference between the colas – so he accepts the null hypothesis. Or he may find that the honey-flavoured cola tastes significantly better (or worse!) than the regular cola – so he rejects the null hypothesis. Whether the null hypothesis is accepted or rejected, it’s a useful result. The concept of failure is redundant.
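The mechanics of that decision amount to a two-sample test on the satisfaction ratings. Here's a hedged sketch in Python; the ratings are invented purely to show the accept/reject logic:

```python
from scipy import stats

# Hypothetical satisfaction ratings (1-10) from the two tasting groups
regular_cola = [6, 7, 5, 8, 6, 7, 6, 5, 7, 6]
honey_cola   = [7, 8, 8, 6, 9, 7, 8, 7, 8, 9]

t_stat, p_value = stats.ttest_ind(regular_cola, honey_cola)

ALPHA = 0.05  # conventional significance level
if p_value < ALPHA:
    print(f"p = {p_value:.3f}: reject the null hypothesis - the colas differ.")
else:
    print(f"p = {p_value:.3f}: accept the null hypothesis - no significant difference.")
```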

The parallel with e-learning is readily apparent. Consider the teacher who allows her students to bring their mobile devices into class; or the trainer who delivers part of her program online; or the manager who sets up a team site on SharePoint; or the L&D consultant who supports a group of employees through a MOOC. In each case, the null hypothesis is that her new method, idea or product has no effect – on what? That depends on the context – while the alternative is that it has. Either way, the result informs her next move.

A baby taking a step forward

So my advice to anyone who has never raised their hand at a conference is that you don’t need to don a white coat and safety goggles to transform talk into action. Rather, change your mindset and take a baby step forward.

Science and Faith: A Venn diagram

17 December 2012

Science and Faith: A Venn diagram

Merry Christmas everyone.

I wish you a 2013 full of provocative ideas, courageous experimentation, loads of fun, and – of course – plenty of learning!

The unscience of evaluation

29 November 2011

Evaluation is notoriously underdone in the corporate sector.

And who can blame us?

With ever increasing pressure bearing down on L&D professionals to put out the next big fire, it’s no wonder we don’t have time to scratch ourselves before shifting our attention to something new – let alone measure what has already been and gone.

Alas, today’s working environment favours activity over outcome.

Pseudo echo

I’m not suggesting that evaluation is never done. Obviously some organisations do it more often than others, even if they don’t do it often enough.

However, a secondary concern I have with evaluation goes beyond the question of quantity: it’s a matter of quality.

As a scientist – yes, it’s true! – I’ve seen some dodgy pseudo science in my time. From political gamesmanship to biased TV and clueless newspaper reports, our world is bombarded with insidious half-truths and false conclusions.

The trained eye recognises the flaws (sometimes) but of course, most people are not science grads. They can fall for the con surprisingly easily.

The workplace is no exception. However, I don’t see it as employees trying to fool their colleagues with creative number crunching, so much as those employees unwittingly fooling themselves.

If a tree falls in the forest

The big challenge I see with evaluating learning in the workplace is how to demonstrate causality – ie the link between cause and effect.

Suppose a special training program is implemented to improve an organisation’s flagging culture metric. When the employee engagement survey is run again later, the metric goes up.


Congratulations to the L&D team for a job well done, right?

Not quite.

What actually caused the metric to go up? Sure, it could have been the training, or it could have been something else. Perhaps a raft of unhappy campers left the organisation and were replaced by eager beavers. Perhaps the CEO approved a special bonus to all staff. Perhaps the company opened an onsite crèche. Or perhaps it was a combination of factors.

If a tree falls in the forest and nobody hears it, did it make a sound? Well, if a few hundred employees undertook training but nobody measured its effect, did it make a difference?

Without a proper experimental design, the answer remains unclear.

Evaluation by design

To determine with some level of confidence whether a particular training activity was effective, the following eight factors must be considered…


1. Isolation – The effect of the training in a particular situation must be isolated from all other factors in that situation. Then, the metric attributed to the staff who undertook the training can be compared to the metric attributed to the staff who did not undertake the training.

In other words, everything except participation in the training program must be more-or-less the same between the two groups.

2. Placebo – It’s well known in the pharmaceutical industry that patients in a clinical trial who are given a sugar pill rather than the drug being tested sometimes get better. The power of the mind can be so strong that, despite the pill having no medicinal qualities whatsoever, the patient believes they are doing something effective and so their body responds in kind.

As far as I’m aware, this fact has never been applied to the evaluation of corporate training. If it were, the group of employees who were not undertaking the special training would still need to leave their desks and sit in the classroom for three 4-hour stints over three weeks.


Because it might not be the content that makes the difference! It could be escaping the emails and phone calls and constant interruptions. It could be the opportunity to network with colleagues and have a good ol’ chat. It might be seizing the moment to think and reflect. Or it could simply be an appreciation of being trained in something, anything.

3. Randomisation – Putting the actuaries through the training and then comparing their culture metric to everyone else’s sounds like a great idea, but it will skew the results. Sure, the stats will give you an insight into how the actuaries are feeling, but it won’t be representative of the whole organisation.

Maybe the actuaries have a range of perks and a great boss; or conversely, maybe they’ve just gone through a restructure and a bunch of their mates were made redundant. To minimise these effects, staff from different teams in the organisation should be randomly assigned to the training program. That way, any localised factors will be evened out across the board.

4. Sample size – Several people (even if they’re randomised) cannot be expected to represent an organisation of hundreds or thousands. So testing five or six employees is unlikely to produce useful results.

5. Validity – Calculating a few averages and generating a bar graph is a sure-fire way to go down the rabbit hole. When comparing numbers, statistically valid methods such as Analysis of Variance are required to draw significant conclusions (see the sketch after this list).

6. Replication – Even if you were to demonstrate a significant effect of the training for one group, that doesn’t guarantee the same effect for the next group. You need to do the test more than once to establish a pattern and negate the suspicion of a one-off.

7. Subsets – Variations among subsets of the population may exist. For example, the parents of young children might feel aggrieved for some reason, or older employees might feel like they’re being ignored. So it’s important to analyse subsets to see if any clusters exist.

8. Time and space – Just because you demonstrated the positive effect of the training program on culture in the Sydney office, doesn’t mean it will have the same effect in New York or Tokyo. Nor does it mean it will have the same effect in Sydney next year.
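To illustrate points 3 and 5, here's a rough sketch in Python of random assignment followed by a one-way Analysis of Variance; the employee pool, group sizes and culture scores are all hypothetical.

```python
import random
from scipy import stats

# Hypothetical pool of employees drawn from across the organisation
employees = [f"employee_{i}" for i in range(200)]

# 3. Randomisation: shuffle the pool, then split into trained and control groups
random.shuffle(employees)
trained, control = employees[:100], employees[100:]

# Hypothetical post-survey culture scores (0-100) for each group
trained_scores = [random.gauss(68, 8) for _ in trained]
control_scores = [random.gauss(63, 8) for _ in control]

# 5. Validity: a one-way Analysis of Variance rather than eyeballing averages
f_stat, p_value = stats.f_oneway(trained_scores, control_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")
```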

Weird science

Don’t get me wrong: I’m not suggesting you need a PhD to evaluate your training activity. On the contrary, I believe that any evaluation – however informal – is better than none.

What I am saying, though, is for your results to be more meaningful, a little bit of know-how goes a long way.

For organisations that are serious about training outcomes, I go so far as to propose employing a Training Evaluation Officer – someone who is charged not only with getting evaluation done, but with getting it done right.


This post was originally published at the Learning Cafe on 14 November 2011.

Facts are a bitch

28 October 2010

This morning I posted the following question to Twitter:

What do you think of Parrashoot as the name of a local photography competition in Parramatta?

The word play is genius, no?

Now, for those of you who don’t know, Parramatta is the cosmopolitan sister city of Sydney, approximately 23 kilometres (14 miles) west of the Harbour Bridge.

Due to its geographical location and its colourful history, it is often put down by yuppies and wannabes, and is typically lumped into the broad, vague and lazy category “Sydney’s West” which features prominently on the nightly news.

While this view of my local area is about 25 years out of date (and perhaps a little racist?), that doesn’t seem to dent its prevalence.

Anyway, among the replies I received to my tweet was one that linked the fragment “shoot” to homicide. It’s clear the guy was joking, but it got me thinking…

Being the geek I am, I looked up the state’s crime statistics and graphed the homicides recorded by the police from 1995 through to 2009:

Graph of homicides recorded by NSW Police from 1995 through to 2009.

The results are intriguing – not only because the figures are incredibly low for a major metropolis.

Notice how Inner Sydney (the CBD and surrounds) tops the list with 156 reports, followed by Fairfield-Liverpool (southwestern suburbs), then the Hunter (northern wine & coal region), Canterbury-Bankstown (inner southwestern suburbs), Illawarra (south coast) and the Mid North Coast.

Eventually Central West Sydney (which includes Parramatta) makes an appearance with 66 reports, while – hang on! – the well-heeled Eastern Suburbs rounds out the Top 10 with 52 reports.

Oh, my. That’s enough to make oneself gag on one’s latte.

So what’s this got to do with learning?

In the workplace, how often do we L&D professionals make assumptions that simply aren’t true?

I’ll hazard a guess: too often.

My point is, we should endeavour to back up our assumptions with evidence.

• What are the learning priorities of the business?
• What is the most effective mode of delivery?
• Is Gen-Y collaborative?
• Are baby boomers technophobic?
• Does that expensive leadership course improve performance?
• Are our people incapable of self-directed learning?

These are just some of the many questions that we really should answer with data.

Otherwise we may find ourselves about 25 years out of date.