Soft landing

Muhammad Tahir Rabbani risked poking the bear on LinkedIn when he asked the Learning & Development Professionals Club:

How can we differentiate between hard and soft skills?

His question was serendipitous because I had been giving that very question some serious thought. Where I landed was that the successful application of hard skills can be measured definitively: for example, the code you write works as intended, or you arrive at the correct mathematical solution. Hence the metric is a hard number, or a binary yes or no (1 or 0).
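To labour the point with a trivial – and entirely hypothetical – Python example, a unit test either passes or it fails; there is no in-between:

    def apply_discount(price: float, rate: float) -> float:
        """Apply a percentage discount to a price."""
        return price * (1 - rate)

    # The metric for the hard skill is binary: the code works as intended (1)
    # or it doesn't (0).
    assert apply_discount(100.0, 0.25) == 75.0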

On the other side of the coin are soft skills, so named because they are not hard skills – a bit like the white rhino and the black rhino, or hard dollars and soft dollars. The successful application of these skills doesn’t so obviously boil down to a number. For example, how do you measure your communication skills or your relationship management prowess? The number of emails you send or the size of your network both miss the point. One approach might be to measure the skill indirectly, perhaps in terms of employee engagement or volume of sales.

I was comfortable with my position until I listened to a podcast by David James in which Guy Wallace quotes Joe Harless: “Soft skills is a euphemism for hard skills which we have not worked hard enough yet to define.” In other words, Guy explains, we typically don’t begin with the end in mind – that is to say, terminal performance.

And this made sense to me. I realised we could measure an executive’s communication skills by monitoring his target audience’s actions in response to the key messages in his memo; and we could measure a line manager’s provision of feedback by calculating her team member’s subsequent uptick in performance of a task. Thus, if we factor in terminal performance, a business metric emerges.

However, as Guy also explains, this is all highly dependent on the intended outcome in the context of the individual’s role, which makes it challenging to quantify at scale. Yet I also see how, just as we standardise the outputs of hard skills via acceptance criteria, we can do the same for soft skills. For example, we could use a rubric to assess whether the feedback that the manager provided clarified the situation, described the behaviour observed, and explained the impact. In this way, we “harden” the skill.
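As a minimal sketch of how such a rubric might work – the criteria echo the example above, while the scoring mechanics are my own hypothetical – each criterion the assessor ticks contributes to a hard number:

    # A hypothetical rubric for assessing a manager's feedback, scored per
    # criterion so that a qualitative judgement yields a quantitative result.
    FEEDBACK_RUBRIC = [
        "Clarified the situation",
        "Described the behaviour observed",
        "Explained the impact",
    ]

    def score_feedback(observations: dict) -> float:
        """Return the proportion of rubric criteria the feedback satisfied."""
        met = sum(observations.get(criterion, False) for criterion in FEEDBACK_RUBRIC)
        return met / len(FEEDBACK_RUBRIC)

    # An assessor's judgement of one feedback conversation (hypothetical data).
    score = score_feedback({
        "Clarified the situation": True,
        "Described the behaviour observed": True,
        "Explained the impact": False,
    })
    print(f"{score:.0%} of criteria met")  # -> 67% of criteria met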

A pair of gymnastic rings

Complicating matters is the fact that some folks take umbrage at the word “soft” because it conveys the impression that such skills are easy or weak. I consider this a misinterpretation, but I also acknowledge that perception is reality. Thus, countless peers have proffered alternative adjectives such as “business”, “professional”, “power”, “behavioural”, “employability”, “core”; or one that I’ve used myself in the context of a skills-based learning strategy, “transferable”.

The problem with all these labels, in my view, is that they don’t satisfactorily differentiate hard skills from soft. For example, is a hard skill such as data analysis not also a business skill or a power skill? It’s certainly transferable.

Wikipedia describes soft skills as “psychosocial”, and I feel this comes closer to the mark than most. Intrapersonal skills that are exercised inside your head – such as creative thinking and resilience – are psychological, while interpersonal skills that are exercised with other people – such as communication and relationship management – are social. Unfortunately, Wikipedia goes on to declare that hard skills are specific to individual professions, which is demonstrably false.

Another popular way to differentiate hard skills from soft is with the labels “technical” and “non-technical”. However, Josh Bersin argues that soft skills are anything but soft, because they’re highly complex, take years to learn, and are always changing in scope. That sounds a lot like technical skills to me. In Figure 1 of his article, he also combines core and technical skills! I wouldn’t dare suggest he was wrong; it’s just a matter of personal proclivity.

Harking back to my answer to Muhammad’s question, my own proclivity is to use the labels “objective” and “subjective”. Hard skills such as computer programming and data analysis are measured quantitatively; hence they are objective skills. In contrast, while the measurement of soft skills such as communication and relationship management can be rendered objective, we tend not to and so they remain qualitative in nature; in which case they are subjective skills.

Having said that, perhaps all this linguistic gymnastics is just an academic sideshow. Our audience doesn’t much care about it, and for them I suggest it would make more sense to repackage these as “digital” skills (working with data, working with technology) and “people” skills (working with humans, working with yourself). These complement “role-specific” skills (pertaining to accounting or derivatives trading, for example) and of course “compliance” skills (such as privacy and AML), both of which might be better pitched as competencies.

I also suggest that however you slice and dice skills, it’s always going to be a little bit wrong. There will inevitably be exceptions, cross-categorisations and dependencies. Frankly, it will be a marriage of convenience.

And that’s OK, because whatever you call them, what really matters is that we develop them to improve our performance.

Yellow submarine

Years ago, I remember taking a tour of what was then one of those newfangled “innovation labs”.

A hive of Design Thinking, it was crawling with serious young people in jeans and t-shirts scribbling on walls and rearranging herds of post-it notes.

In an otherwise old-fashioned financial services organisation, it was an impressive tilt towards modernisation and true customer centricity (beyond the warm and fuzzy TV commercials).

After our guide had finished explaining this brave new world to the group, one of us asked him to share a project he’d been working on. He proudly explained how, the year prior, the lab had applied the progressive methodology to the development of a new product, which had, finally, launched.

Which prompted the next question… How many new customers did it sign up? His straight-faced answer: Seven.

Seven!

For a bank with literally millions of customers, this was astounding. And he didn’t seem all that bothered by it. The apparent solution was to go back to the drawing board and try again.
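The back-of-the-envelope math is brutal. With purely hypothetical cost and revenue figures – only the seven sign-ups come from the story – it might look something like this:

    # All figures are assumptions except the seven sign-ups.
    dev_cost = 2_000_000          # assumed cost of the lab's year of work
    revenue_per_customer = 500    # assumed annual revenue per new customer
    customers = 7                 # the actual number of sign-ups

    gain = customers * revenue_per_customer
    roi = (gain - dev_cost) / dev_cost
    print(f"ROI: {roi:.1%}")      # -> ROI: -99.8%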

While still doing the math in my head to calculate the negative return on investment, I stumbled upon the myth of The Yellow Walkman. I neither confirm nor deny its veracity, but Alexander Cowan recounts it as follows in his article Yellow Walkman Data & the Art of Customer Discovery:

Sony’s conducting a focus group for a yellow ‘sport’ Walkman. After assembling their ‘man/woman on the street’ contingent, they ask them ‘Hey, how do you like this yellow Walkman?’ The reception’s great. ‘I love that yellow Walkman – it’s so sporty!’ ‘Man, would I rather have a sweet yellow Walkman instead of a boring old black one.’

While everyone’s clinking glasses, someone had the insight to offer the participants a Walkman on their way out. They can choose either the traditional black edition or the sporty new yellow edition – there are two piles of Walkmans on two tables on the way out. Everyone takes a black Walkman.

It’s an old story, but its message remains relevant today. Because humans are terrible at predicting their own behaviour.

You see, talk is cheap. Everyone has great ideas… when someone else has to implement them. And if you ask someone point blank if they want something, nine times out of ten they’ll say yes. Then they never use it and you’re left carrying the can wondering where you went wrong.

We see this kind of thing all the time in workplace learning and development. Someone in the business will demand we build an online course, which no one will launch; or a manager will pull a capability out of thin air, oblivious to the real needs of their team.

As Cowan suggests, this can be mitigated by thoughtful questioning that avoids the solution-first trap. And of course the MVP approach championed by Design Thinking minimises any losses by failing fast.

But we can do something else before we get to that point: validate.

In the yellow Walkman example, Cowan offers:

Sony’s product designer mocks up several colors of Walkman and puts together some kind of an ordering page with the options. Focus group subjects (or just online visitors) are allowed to pre-order what they want. This gets you the same result without having to actually produce a whole bunch of yellow (or whatever) Walkmans.

In the L&D context, I suggest complementing our TNA (training needs analysis) consultations with assessments. So the team needs to develop x capability? Test it. They’re all over y competency? Test it.

And it needn’t be expensive or onerous. A micro-assessment approach should be sufficient to expose the blind spots.
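As a minimal sketch of the idea – the names, scores and threshold are all hypothetical – the blind spots reveal themselves when we compare what people claim against what the micro-assessment shows:

    # Hypothetical data: self-rated capability (from the TNA consultation)
    # versus micro-assessment scores, both as percentages.
    self_rated = {"Alice": 90, "Bob": 80, "Carol": 40}
    assessed = {"Alice": 85, "Bob": 45, "Carol": 50}

    GAP = 20  # percentage points of over-confidence worth acting on

    # A blind spot is capability someone claims but the assessment doesn't confirm.
    blind_spots = {name: self_rated[name] - assessed[name]
                   for name in self_rated
                   if self_rated[name] - assessed[name] > GAP}

    print(blind_spots)  # -> {'Bob': 35}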

By validating your qualitative data with quantitative data, you’re building extra confidence into your bet and maximising its probability of success.

Lest it sink like a yellow submarine.