The right stuff

Well that was unexpected.

When I hit the Publish button on Not our job, I braced myself for a barrage of misunderstanding and its evil twin, misrepresentation.

But it didn’t happen. On the contrary, my peers who contacted me about it were downright agreeable. (A former colleague did politely pose a comment as a disagreement, but I happened to agree with everything she stated.)

I like to think I called a spade a spade: we’re responsible for learning & development; our colleagues are responsible for performance; and if they’re willing to collaborate, we have value to add.

Bar graph showing the impact of your ideas inside your brain much lower than the impact of your ideas when you put them out there.

The post was a thought bubble that finally precipitated, having formed one sunny day a long time ago when Shai Desai asked me why I thought evaluation was so underdone by the L&D profession.

My post posited one reason – essentially, the inaccessibility of the data – but there are several other reasons closer to the bone that I think are also worth crystallising.

1. We don’t know how to do it.

I’m a Science grad, so statistical method is in my blood, but most L&D pros are not. If they haven’t found their way here via an Education or HR degree, they’ve probably fallen into it from somewhere else à la Richard in The Beach.

Which means they don’t have a grounding in statistics, so concepts such as regression and analysis of variance are alien and intimidating.

Rather than undertake the arduous journey of learning it – or worse, screw it up – we leave it well alone.

2. We’re too busy to do it.

This is an age-old excuse for not doing something, but in an era of furloughs, restructures and budget freezes, it’s all too real.

Given our clients’ ever-increasing demand for output, we might be forgiven for prioritising our next deliverable over what we’ve already delivered.

3. We don’t have to do it.

And it’s a two-way street. Our clients’ ever-increasing demand for output also means they prioritise our next deliverable over what we’ve already delivered.

If they don’t ask for evaluation, it’s tempting to leave it in the shadows.

4. We fear the result.

Even when all the planets align – we can access the data and we’ve got the wherewithal to use it – we may have a sneaking suspicion that the outcome will be undesirable. Either no significant difference will be observed, or worse.

This fear will be exacerbated when we design a horse, but are forced by the vagaries of corporate dynamics to deliver a camel.

A woman conjuring data from a tablet.

The purpose of this post isn’t to comment on the ethics of our profession, nor to lament the flaws of the corporate construct. After all, it boils down to human nature.

On the contrary, my intention is to expose the business reality for what it is so that we can do something about it.

Previously I’ve shared my idea for a Training Evaluation Officer – an expert in the science of data analysis, armed with the authority to make it happen. The role builds a bridge that connects learning & development with performance, keeping those responsible for each accountable to one another.

I was buoyed by Sue Wetherbee’s comment proposing a similar position:

…a People & Culture (HR) Analyst Business Partner who would be the one to funnel all other information to across all aspects of business input to derive “the story” for those who order it, pay for it and deliver it!

Sue, great minds think alike ;-)

And I was intrigued by Ant Pugh’s Elephant In The Room in which he challenges the assumption that one learning designer should do it all:

Should we spend time doing work we don’t enjoy or excel at, when there are others better equipped?

Just because it’s the way things are, doesn’t mean it’s the way things should be.

I believe a future exists where these expectations are relinquished. A future where the end result is not dictated by our ability to master all aforementioned skills, but by our ability to specialise on those tasks we enjoy.

How that will manifest, I don’t know (although I do have some ideas).

Ant, I’m curious… is one of those ideas an evaluation specialist? Using the ADDIE model as a guide, that same person might also attend to Analysis (so a better job title might be L&D Analyst) while other specialists focus on Design, Development and Implementation.

Then e-learning developers mightn’t feel the compulsion to call themselves Learning Experience Designers, and trainers wouldn’t be similarly shamed into euphemising their titles. Specialists such as these can have the courage to embrace their expertise and do what they do best.

And important dimensions of our work – including evaluation – won’t only be done. They’ll be done right.

Not our job

Despite the prevailing rhetoric for the Learning & Development function to be “data-driven”, data for the purposes of evaluating what we do is notoriously hard to come by.

Typically we collect feedback from happy sheets (which I prefer to call unhappy sheets) and confirm learning outcomes via some form of assessment.

In my experience, however, behavioural change is reported much less often, and anything to do with business metrics less often still. While I recognise multiple reasons for the latter in particular, one of them is simply the difficulty we mere mortals have in accessing the numbers.

Which has been a long-standing mystery to me. We’re all on the same team, so why am I denied the visibility of the information I need to do my job?

I’ve always suspected the root cause is a combination of human foibles (pride, fear, territoriality), substandard technology (exacerbated by policy) and a lack of skill or will to use the technology even when it is available.

Notwithstanding these ever-present problems, it’s been dawning on me that the biggest blocker to our ability to work with the numbers is the fact that, actually, it’s not our job.

Business woman presenting data to two colleagues

Consider a bank that discovers a major pain point among its customers is the long turnaround time on their home loan applications. To accelerate throughput and thus improve the customer experience, the C-suite makes a strategic decision to invest in an AI-assisted processing platform.

I contend the following:

  • It’s the job of the implementation team to ensure the platform is implemented properly.
  • It’s the job of the L&D team to build the employees’ capability to use it.
  • It’s the job of the service manager to report the turnaround times.
  • It’s the job of the CX researchers to measure the customer experience.
  • It’s the job of the C-suite to justify their strategy.

In this light, it’s clear why we L&D folks have so much trouble trying to do the other things on the list that don’t mention us. Not only are we not expected to do them, but those who are don’t want us to do them.

In short, we shouldn’t be doing them.

Caveat

At this juncture I wish to caution against conflating learning & development with performance consulting.

Yes, learning & development is a driver of performance, and an L&D specialist may be an integral member of a performance centre, but I urge anyone who’s endeavouring to rebrand their role as such to heed my caveat.

My point here is that if you are responsible for learning & development, be responsible for it; and let those who are responsible for performance be responsible for it.

Value

Having said that, there is plenty we should be doing within the bounds of our role to maximise the performance of the business. Ensuring our learning objectives are action-oriented and their assessments authentic are two activities that spring to mind.

And I don’t wish to breathe air into the juvenile petulance that the phrase “not my job” can entail. On the contrary, we should be collaborating with our colleagues on activities related to our remit – for example training needs analysis, engineering the right environmental conditions for transfer, and even Level 4 evaluation – to achieve win-win outcomes.

But do it with them, not for them, and don’t let them offload their accountability for it being done. If they don’t wish to collaborate, so be it.

Essentially it boils down to Return on Expectation (ROE). In our quest to justify the Return on Investment (ROI) of our own service offering, we need to be mindful of what it is our financiers consider that service to be.

Anything beyond that is an inefficient use of our time and expertise.

Yellow submarine

Years ago, I remember taking a tour of what was then one of those newfangled “innovation labs”.

A hive of Design Thinking, it was crawling with serious young people in jeans and t-shirts scribbling on walls and rearranging herds of post-it notes.

In an otherwise old-fashioned financial services organisation, it was an impressive tilt towards modernisation and true customer centricity (beyond the warm and fuzzy TV commercials).

After our guide had finished explaining this brave new world to the group, one of us asked him to share a project he’d been working on. He proudly explained how the year prior, the lab had applied the progressive methodology to the development of a new product which had, finally, launched.

Which prompted the next question… How many new customers did it sign up? His straight-faced answer: Seven.

Seven!

For a bank with literally millions of customers, this was astounding. And he didn’t seem all that bothered by it. The apparent solution was to go back to the drawing board and try again.

While still doing the maths in my head to calculate the negative return on investment, I stumbled upon the myth of The Yellow Walkman. I neither confirm nor deny its veracity, but Alexander Cowan recounts it as follows in his article Yellow Walkman Data & the Art of Customer Discovery:

Close-up of a yellow Walkman

Sony’s conducting a focus group for a yellow ‘sport’ Walkman. After assembling their ‘man/woman on the street’ contingent, they ask them ‘Hey, how do you like this yellow Walkman?’ The reception’s great. ‘I love that yellow Walkman – it’s so sporty!’ ‘Man, would I rather I have a sweet yellow Walkman instead of a boring old black one.’

While everyone’s clinking glasses, someone had the insight to offer the participants a Walkman on their way out. They can choose either the traditional black edition or the sporty new yellow edition – there are two piles of Walkmans on two tables on the way out. Everyone takes a black Walkman.

It’s an old story, but its message remains relevant today. Because humans are terrible at predicting their own behaviour.

You see, talk is cheap. Everyone has great ideas… when someone else has to implement them. And if you ask someone point blank if they want something, nine times out of ten they’ll say yes. Then they never use it and you’re left carrying the can wondering where you went wrong.

We see this kind of thing all the time in workplace learning and development. Someone in the business will demand we build an online course, which no one will launch; or a manager will pull a capability out of thin air, oblivious to the real needs of their team.

As Cowan suggests, this can be mitigated by thoughtful questioning that avoids the solution-first trap. And of course the MVP approach championed by Design Thinking minimises any losses by failing fast.

But we can do something else before we get to that point: validate.

In the yellow Walkman example, Cowan offers:

Sony’s product designer mocks up several colors of Walkman and puts together some kind of an ordering page with the options. Focus group subjects (or just online visitors) are allowed to pre-order what they want. This gets you the same result without having to actually produce a whole bunch of yellow (or whatever) Walkmans.

In the L&D context, I suggest complementing our TNA consultations with assessments. So the team needs to develop x capability? Test it. They’re all over y competency? Test it.

And it needn’t be expensive or onerous. A micro-assessment approach should be sufficient to expose the blind spots.
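
For instance, here’s a minimal sketch in Python of what that check might look like. The capabilities, self-ratings and cut-off scores below are entirely made up, so treat it as an illustration of the comparison rather than a recipe:

    # Compare what people say about a capability (qualitative claim) with how
    # they actually score on a micro-assessment (quantitative check).
    # All names and numbers here are hypothetical.
    self_rating = {                      # "I'm all over it" on a 1-5 scale
        "data analysis": 4,
        "stakeholder consultation": 5,
        "facilitation": 3,
    }
    assessment_score = {                 # micro-assessment result (%)
        "data analysis": 45,
        "stakeholder consultation": 85,
        "facilitation": 70,
    }

    # A blind spot is a high self-rating paired with a low test score.
    for capability, rating in self_rating.items():
        score = assessment_score[capability]
        if rating >= 4 and score < 60:
            print(f"Blind spot: {capability} (self-rating {rating}/5, scored {score}%)")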

By validating your qualitative data with quantitative data, you’re building extra confidence into your bet and maximising its probability of success.

Lest it sink like a yellow submarine.

Painting by numbers

A lifetime ago I graduated as an environmental biologist.

I was one of those kids who did well in school, but had no idea what his vocation was. As a pimply teenager with minimal life experience, how was I to know even half the jobs that existed?

After much dilly-dallying, I eventually drew upon my nerdy interest in science and my idealistic zeal for conservation and applied for a BSc. And while I ultimately left the science industry, I consider myself extremely fortunate to have studied the discipline because it has been the backbone of my career.

Science taught me to think about the world in a logical, systematic manner. It’s a way of thinking that is founded on statistics, and I maintain it should inform the activities we undertake in other sectors of society such as Learning & Development.

The lectures I attended and the exams I crammed for faded into a distant memory, until the emergence of learning analytics rekindled the fire.

Successive realisations rapidly dawned on me: I love maths and stats; I’ve drifted away from them over time; the world is finally waking up to the importance of the scientific method; and it is high time I refocused my attention on them.

So it is in this context that I have started to review the principles of statistics and its contemporary manifestation, analytics. My exploration has been accompanied by several niggling queries: what’s the difference between statistics and analytics? Is the latter just a fancy name for the former? If not, how not?

Overlaying the post-modern notion of data science, what are the differences among the three? Is a data scientist, as Sean Owen jokingly suggests, a statistician who lives in San Francisco?

The DIKW Pyramid

My journey of re-discovery started with the DIKW Pyramid. This beguilingly simple triangle models successive orders of epistemology, which is quite a complex concept. Here’s my take on it…

The DIKW Pyramid, with Data at the base, Information a step higher, Knowledge another step higher, and Wisdom at the peak.

At the base of the pyramid, Data is a set of values of qualitative or quantitative variables. In other words, it is the collection of facts or numbers at your disposal that somehow represent your subject of study. For example, your data may be the weights of 10,000 people. While this data may be important, if you were to flick through the reams of numbers you wouldn’t glean much from them.

The next step up in the pyramid is Information. This refers to data that has been processed to make it intelligible. For example, if you were to calculate the average of those ten thousand weights, you’d have a comprehensible number that is inherently meaningful. Now you can do something useful with it.

The next step up in the pyramid is Knowledge. To avoid getting lost in a philosophical labyrinth, I’ll just say that knowledge represents understanding. For example, if you were to compare the average weight against a medical standard, you might determine these people are overweight.

The highest step in the pyramid is Wisdom. I’ll offer an example of wisdom later in my deliberation, but suffice it to say here that wisdom represents higher-order thinking that synthesises knowledge from various sources to generate insight. For example, the wise man or woman will not only know these people are overweight, but also recognise they are at risk of disease.

Some folks describe wisdom as future-focused, and I like that because I see it being used to inform decisions.
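
For the programmatically inclined, here’s a toy sketch in Python of that climb up the pyramid using the weights example. The figures and the 75 kg benchmark are invented purely for illustration:

    import random

    random.seed(1)

    # Data: 10,000 individual weights (kg) - facts with little meaning on their own.
    weights = [random.gauss(82, 12) for _ in range(10_000)]

    # Information: process the data into something intelligible - the average.
    average_weight = sum(weights) / len(weights)

    # Knowledge: compare against a standard to reach an understanding.
    # (75 kg stands in for a real medical benchmark.)
    population_is_overweight = average_weight > 75

    # Wisdom: synthesise that knowledge into a forward-looking insight.
    if population_is_overweight:
        print(f"Average weight {average_weight:.1f} kg exceeds the benchmark - "
              "this population is at elevated risk of disease.")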

Statistics

My shorthand definition of statistics is the analysis of numerical data.

In practice, this is done to describe a population or to compare populations – that is to say, infer significant differences between them.

For example, by calculating the average weight of 10,000 people in Town A, we describe the population of that town. And if we were to compare the weights of those 10,000 people with the weights of 10,000 people in Town B, we might infer the people in Town A weigh significantly more than the people in Town B do.

Similarly, if we were to compare the household incomes of the 10,000 people in Town A with the household incomes of the 10,000 people in Town B, we might infer the people in Town A earn significantly less than the people in Town B do.

Then if we were to correlate all the weights against their respective household incomes, we might demonstrate that they are inversely related to one another.
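
If you prefer to see it in code, here’s a hedged sketch of those comparisons using Python’s scipy library. The samples are simulated, with the inverse weight–income relationship deliberately built in, so the numbers themselves mean nothing:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    town_a_weights = rng.normal(86, 12, 10_000)   # Town A: heavier on average
    town_b_weights = rng.normal(80, 12, 10_000)   # Town B
    # Simulate Town A incomes with an inverse relationship to weight built in.
    town_a_incomes = 120_000 - 700 * town_a_weights + rng.normal(0, 5_000, 10_000)

    # Describe each population.
    print(f"Town A mean weight: {town_a_weights.mean():.1f} kg")
    print(f"Town B mean weight: {town_b_weights.mean():.1f} kg")

    # Infer whether the difference between the towns is statistically significant.
    t_stat, p_value = stats.ttest_ind(town_a_weights, town_b_weights)
    print(f"t = {t_stat:.2f}, p = {p_value:.3g}")

    # Correlate weight with household income within Town A.
    r, _ = stats.pearsonr(town_a_weights, town_a_incomes)
    print(f"Pearson r = {r:.2f}  (a negative r indicates an inverse relationship)")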

The DIKW Pyramid, showing statistics converting data into information.

Thus, our statistical tests have used mathematics to convert our data into information. We have climbed a step up the DIKW Pyramid.

Analytics

My shorthand definition of analytics is the analysis of data to identify meaningful patterns.

So while analytics is often conflated with statistics, it is indeed a broader expression – not only in terms of the nature of the data that may be analysed, but also in terms of what is done with the results.

For example, if we were to analyse the results of our weight-related statistical tests, we might recognise an obesity problem in poor neighbourhoods.
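
Again in code, a rough sketch of that pattern-finding step with pandas. The records are synthetic and the income brackets arbitrary, so it’s the shape of the analysis that matters rather than the output:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(7)
    income = rng.normal(60_000, 15_000, 10_000)
    # Build the inverse relationship in deliberately: lower income, higher weight.
    weight = 120 - income / 1_500 + rng.normal(0, 8, 10_000)

    df = pd.DataFrame({"income": income, "weight": weight})
    df["income_bracket"] = pd.qcut(df["income"], 4,
                                   labels=["low", "lower-mid", "upper-mid", "high"])

    # Average weight per income bracket - the pattern the analyst is looking for.
    print(df.groupby("income_bracket", observed=True)["weight"].mean().round(1))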

The DIKW Pyramid, showing analytics converting data into knowledge.

Thus, our application of analytics has used statistics to convert our data into information, which we have then translated into knowledge. We have climbed another step higher in the DIKW Pyramid.

Data science

My shorthand definition of data science is the combination of statistics, computer programming, and domain expertise to generate insight. Or so I’m led to believe.

Given the powerful statistical software packages currently available, I don’t see why anyone would need to resort to hand coding in R or Python. At this early stage of my re-discovery, I can only assume the software isn’t sophisticated enough to compute the specific processes that people need.

Nonetheless, if we return to our obesity problem, we can combine our new-found knowledge with existing knowledge to inform strategic decisions. For example, given we know a healthy diet and regular exercise promote weight loss, we might seek to improve the health of our fellow citizens in poor neighbourhoods (and thereby lessen the burden on public healthcare) by building sports facilities there, or by subsidising salad lunches and fruit in school canteens.
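
And a final speculative sketch of that last step. The neighbourhood names, thresholds and decision rule are all invented, but they show how the analytical finding plus domain knowledge becomes an actionable shortlist:

    import pandas as pd

    neighbourhoods = pd.DataFrame({
        "neighbourhood": ["Northside", "Dockyards", "Hilltop", "Riverbend"],
        "median_income": [42_000, 48_000, 95_000, 60_000],
        "avg_weight": [92.0, 88.5, 76.0, 81.0],
        "has_sports_facility": [False, False, True, True],
    })

    # Decision rule: low income + high average weight + no existing facility
    # marks a candidate for a new sports facility or subsidised canteen program.
    candidates = neighbourhoods[
        (neighbourhoods["median_income"] < 50_000)
        & (neighbourhoods["avg_weight"] > 85)
        & (~neighbourhoods["has_sports_facility"])
    ]

    print(candidates[["neighbourhood", "median_income", "avg_weight"]])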

The DIKW Pyramid, showing data science converting data into wisdom.

Thus, not only has our application of data science used statistics and analytics to convert data into information and then into knowledge, it has also converted that knowledge into actionable intelligence.

In other words, data science has converted our data into wisdom. We have reached the top of the DIKW Pyramid.