
Where is L&D heading?

6 October 2015

Last week I was invited by David Swaddle to be a panellist at the Sydney eLearning and Instructional Design meetup.

The topic of the evening was Where is L&D Heading? and some questions were posted through by the attendees ahead of time, while others emerged through the discourse.

Here is an overview of my answers, plus elaborations and suggestions for further reading, for each of the questions that was (and was not) asked. Feel free to add your own views via the comments…


With Ernst & Young dropping their degree entry requirement, how do you see the future of universities? Is the race to the bottom on time and price for degrees affecting employers’ perceptions of universities? What respect do MOOC qualifications get?

I find EY’s move here interesting, but I don’t expect other companies to follow suit en masse – particularly enterprise-wide. Having said that, dropping the degree entry requirement could make sense for specific teams such as Innovation, who might be looking for someone with creative thinking skills rather than a Bachelor of Commerce degree.

I see the future of universities as service providers, plain and simple. Students are customers, and increasing competition, deregulation and even the emergence of MOOCs have shifted power into their hands. Yes, deregulation may prompt the $100,000 degree… but who will buy it?

If students are customers, by extension so are employers. I don’t think the time and price of a degree are such big issues for them; instead I think it’s the relevance of the degree. Whether or not we agree the role of the university is to prepare students for the workplace, I think it’s going that way due to market forces.

Regarding MOOC qualifications, I think many of us are still looking at them the wrong way. When we worry about the status of their credentials or lose sleep over their completion rates, we’re perpetuating an outdated paradigm of education based on formal learning. I prefer to see MOOCs through the lens of informal learning, which values the learning over its bureaucracy. If a job applicant lists some MOOCs on their CV, I think it demonstrates an aptitude to drive their own development.


How do you see the impact and importance of big data, adaptive learning, mobile learning and micro-learning?

While mobile learning gets a lot of hype – rightly or wrongly – my target audience is office-bound. Yes, I can push content to their devices (and there’s a solid argument for micro-learning in this instance) but the truth is no one will do their training on the bus. Outside of work hours, most people don’t want to do anything work related.

I see more scope in pull learning. For example, it’s important that your intranet is mobile-optimised, so that when someone is away from their desk, they can quickly look up the information they need and put it into action.

The real power of m-learning though is in creating an experience. By this I mean integrating the content with the environment in which the individual is situated, and I see a lot of potential in augmented reality and wearable technologies facilitating this.

And let’s not forget about blended learning. If we allow our attendees to bring their tablets into class, they can participate in online polling, consume content and play games together. While this isn’t actually mobile learning, it leverages the technology.

As for big data, there is clearly a lot of potential in using it to inform our practice – if we can access it. I also see a lot of potential for adaptive learning in personalising the learning experience – if we can work with the tools. My caveat for emerging technologies such as these is what I call the “Average Joe imperative” – if regular folks can’t do it, it won’t gain widespread adoption.


What about online social education and Communities of Practice? What are the challenges in using them properly in companies, schools or universities? Where are the success stories?

Beyond the technology, the success of social learning is predicated on the culture of the organisation. If your people aren’t the type who care and share, then a platform isn’t going to be much help. Having said that, I believe the managers in the organisation have a critical role to play in leading by example.

My go-to success stories for social learning are Coca-Cola Amatil, who have cultivated active communities of practice across state-based factory floors; and Deloitte, who are the poster child for enterprise social networking.


Will interactive videos replace e-learning modules?

I think lots of things will replace e-learning modules!

As we embrace informal learning, we will rely less on e-learning modules in favour of alternatives such as social forums, job aids, games, and indeed, interactive videos.

I see the LMS then being used more for the assessment of learning.


What tips does the panel have for coping with reduced training budgets?

My big tip here is that you can do a lot for free or on-the-cheap.

For example, if you want to film a training scenario, you could pay a production house many thousands of dollars to produce a slick, Academy Award worthy video clip. Alternatively, you could use your iPhone.

Sure, the quality won’t be nearly as good – but that’s fine, so long as it’s good enough. What really matters is the learning outcome.

Besides, I think in-house production adds authenticity to the scene.


Does L&D belong in HR?

I interpret this question as really asking “Should L&D be centralised or distributed?”.

My short answer is both. A centralised Organisational Development function can focus on enterprise-wide capability needs, while L&D professionals embedded in the business can address local capability needs.


How does the panel identify whether an L&D professional is good? Does Australia need improved quality benchmarking or qualifications for L&D professionals such as instructional designers?

I think the point of learning in the workplace is to improve performance, so my definition of a “good” L&D professional is one who improves the performance of his or her business.

There are certain attributes that I value in an L&D pro, including being proactive, consultative, creative, and willing to try new things.

If I were considering an applicant for an instructional design role, I’d ask them to demonstrate their track record, just as I’d ask a sales rep to do. A portfolio would be useful, as would their approach to a hypothetical project.

Furthermore, I think you can tell a lot about someone’s expertise through simple conversation; if they don’t really know what they’re talking about, it will become painfully obvious.

As for benchmarking and formal qualifications for L&D pros, I think they can help, but I wouldn’t put too much stock in them. As EY is seeing, acing the qual doesn’t necessarily translate into good practice.


What advice would you give to somebody interested in getting involved in ID?

I think getting involved is the key phrase in this question.

Attend meetups and events, get active on social media, participate in #lrnchat, work out loud, scan the academic research, and read blogs – learn from those at the coal face.

The caveat of the performance centre

10 February 2014

One of the more exciting ideas to emerge from the corporate learning space, which I hasten to add is yet to be realised, is to transform the Learning & Development department into a performance centre.

Rather than charging L&D Consultants with marketing the team’s lovingly crafted interventions, or reacting to random solution-first requests from the business – We need a team-building workshop! – the Performance Consultant analyses the real needs of the business and identifies the relevant solutions.

This is not a novel idea. For example, I am aware of an Australian bank that established a performance centre over a decade ago, while Helen Blunden shared a similar observation in a recent OzLearn chat.

On the face of it, this makes sense to me. I subscribe to the notion that the point of learning in the workplace is to improve performance, and the raison d’être of the performance centre is to shift our focus to its namesake.

However, I do have a caveat: If the performance centre is populated with L&D types, then the solutions they devise are probably going to be L&D oriented.

This won’t appear to pose a problem unless you appreciate that not all performance ailments are due to an L&D deficiency. On the contrary, poor performance may be caused by myriad factors such as:

• A flawed process
• Obsolete technology
• Inadequate resourcing
• Noise or other disturbances
• Office politics
• Interpersonal conflict

…or any number of human conditions:

• Stress
• Sickness
• Demotivation
• Exhaustion
• Laziness

…not to mention one of my favourites, offered by Joyce Seitzinger in the aforementioned OzLearn chat:

Of course! Recruiting the right person for the role in the first place!

My point is, while poor performance may well be due to a lack of capability, it might not be either. An effective Performance Consultant must determine the root causes of the problems – whatever they may be – and respond accordingly. Do former L&D Consultants have that skillset?

If all you have is a hammer, everything looks like a nail.

E-Learning Provocateur: Volume 2

17 September 2012

Following the modest success of my first book, I decided to fulfil the promise of its subtitle and publish E-Learning Provocateur: Volume 2.

The volume collates my articles from this blog. As in the first volume, my intent is to provoke deeper thinking across a range of e-learning related themes in the workplace, including:

• social business
• informal learning
• mobile learning
• microblogging
• data analysis
• digital influence
• customer service
• augmented reality
• the role of L&D
• smartfailing
• storytelling
• critical theory
• ecological psychology
• online assessment
• government 2.0
• human nature

Order your copy now at Amazon.

Drivers of Yammer use in the corporate sector

18 June 2012

Yammer has been quite a success at my workplace. Not off the charts like at Deloitte, yet very much alive and growing.

It warms my heart to see my colleagues asking and answering questions, sharing web articles, crowdsourcing ideas, gathering feedback, praising team mates, comparing notes on where to buy the best coffee, and even whining a little.

Every so often I’m asked by a peer at another company what they can do to increase the use of Yammer in their own organisation. I’m happy to share my opinion with them (born of my experience), but thus far I have been cognisant of the fact that I haven’t cross-checked my ideas against those of others in the corporate sector.

So I recently invited 14 community managers from around the world to rate the key factors that drive Yammer use in their respective organisations. The results are summarised in the following graph.

Yammer drivers graph

While my sample size is probably too small to infer any significant differences among the factors, observation reveals a tiered arrangement.

The front runner is business champions. These enthusiastic users encourage the use of Yammer with their colleagues across the business. The importance of this factor is unsurprising, given the effectiveness of word of mouth (WOM) in the marketing industry. Employees presumably trust their team mates more than they do HR, IT, or whoever “owns” Yammer in the workplace.

The next one down is another no brainer: internal promotion. Typical promotional activities such as newsletters, testimonials and merchandise not only raise awareness among the users, but also act as ongoing reminders. If WOM is the steam train, promotion is the coal that keeps it chugging.

Intrinsic motivation is obvious to anyone who knows the saying “You can lead a horse to water, but you can’t make it drink”. In other words, you can unleash your business champions and push all the promotion you like, but if the individuals who comprise your target audience lack a collaborative attitude, they won’t use Yammer.

Rounding out the top tier is top-down support and participation. Not only is it important for the user’s direct manager to be enthusiastic about Yammer and participate in it him- or herself, but it’s also important for the CEO, CFO, COO, CMO etc to do the same. They must lead by example.


At the next tier down, informal support resources have some importance. I guess self-paced tutorials, user guides, tip sheets etc are less of an imperative when the system is so damn easy to use. Not to mention that just about everyone knows how to use Facebook or Twitter already, so in that sense they have prior knowledge.

User acknowledgement is also somewhat important. Everyone wants their questions to be answered, and perhaps attract a “like” or two. Otherwise, why would they bother?

The placement of Community Manager at this tier pleasantly surprised me, given the pool of respondents. Nonetheless, some sort of management of the forum is considered important in driving its use.

Integration of Yammer-based discourse into L&D offerings was also placed surprisingly low. I suspect that’s because only intrinsically motivated learners participate in it anyway.

Rounding out this tier, it appears a decent sense of netiquette is the norm in the workplace. You would be a clown to behave otherwise!


At the lower tiers, we see the factors that are considered less important by the respondents.

I guess a formal usage policy is irrelevant to intrinsically motivated users, while prizes, points and other forms of extrinsic motivation are similarly redundant. Same goes for activities and games such as “fun facts” and trivia quizzes.

And one thing’s for sure: a traditional project management approach characterised by a hard launch and follow-up training misses the mark.


In summary, then, we see that enterprise social networking is multifaceted. There is no silver bullet.

If your objective is to drive the use of Yammer in your organisation, you would be wise to focus your energy on the factors that offer the greatest return.

In the meantime, bear in mind that social forums grow organically. It takes time for individuals to see what’s in it for them and jump aboard.

Having said that, if the culture of your organisation is bad, it either needs to change or you should shift your efforts to something else.

Playing by numbers

23 April 2012

The theme of last week’s Learning Cafe in Sydney was How to Win Friends and Influence Learning Stakeholders.

Among the stakeholders considered was the “C-Level & Leadership”. This got me thinking: do the C-suite and lower-rung managers expect different things from L&D?

There’s no shortage of advice out there telling us to learn the language of finance, because that’s what the CEO speaks. And that makes sense to me.

While some of my peers shudder at the term ROI, for example, I consider it perfectly reasonable for the one who’s footing the bill to demand something in return.

Show me the money.


But I also dare to suggest that the managers who occupy the lower levels of the organisational chart don’t give a flying fox about all that.

Of course they “care” about revenue, costs and savings – and they would vigorously say so if asked! – but it’s not what motivates them day to day. What they really care about is their team’s performance stats.

I’m referring to metrics such as:

• Number of widgets produced per hour
• Number of defects per thousand opportunities
• Number of policy renewals
• Number of new write-ups

In other words, whatever is on their dashboard. That’s what they are ultimately accountable for, so that’s what immediately concerns them.


The business savvy L&D consultant understands this dynamic and uses it to his or her advantage.

He or she appreciates the difference between what the client says they want, and what they really need.

He or she realises the client isn’t invested in the training activity, but rather in the outcome.

He or she doesn’t start with the solution (“How about a team-building workshop?”), but rather with the performance variable (“I see your conversion rate has fallen short of the target over the last 3 months”).

He or she knows that the numbers that really matter don’t necessarily have dollar signs in front of them.

The unscience of evaluation

29 November 2011

Evaluation is notoriously underdone in the corporate sector.

And who can blame us?

With ever increasing pressure bearing down on L&D professionals to put out the next big fire, it’s no wonder we don’t have time to scratch ourselves before shifting our attention to something new – let alone measure what has already been and gone.

Alas, today’s working environment favours activity over outcome.

Pseudo echo

I’m not suggesting that evaluation is never done. Obviously some organisations do it more often than others, even if they don’t do it often enough.

However, a secondary concern I have with evaluation goes beyond the question of quantity: it’s a matter of quality.

As a scientist – yes, it’s true! – I’ve seen some dodgy pseudo science in my time. From political gamesmanship to biased TV and clueless newspaper reports, our world is bombarded with insidious half-truths and false conclusions.

The trained eye recognises the flaws (sometimes) but of course, most people are not science grads. They can fall for the con surprisingly easily.

The workplace is no exception. However, I don’t see it as employees trying to fool their colleagues with creative number crunching, so much as those employees unwittingly fooling themselves.

If a tree falls in the forest

The big challenge I see with evaluating learning in the workplace is how to demonstrate causality – i.e. the link between cause and effect.

Suppose a special training program is implemented to improve an organisation’s flagging culture metric. When the employee engagement survey is run again later, the metric goes up.


Congratulations to the L&D team for a job well done, right?

Not quite.

What actually caused the metric to go up? Sure, it could have been the training, or it could have been something else. Perhaps a raft of unhappy campers left the organisation and were replaced by eager beavers. Perhaps the CEO approved a special bonus to all staff. Perhaps the company opened an onsite crèche. Or perhaps it was a combination of factors.

If a tree falls in the forest and nobody hears it, did it make a sound? Well, if a few hundred employees undertook training but nobody measured its effect, did it make a difference?

Without a proper experimental design, the answer remains unclear.

Evaluation by design

To determine with some level of confidence whether a particular training activity was effective, the following eight factors must be considered…


1. Isolation – The effect of the training in a particular situation must be isolated from all other factors in that situation. Then, the metric attributed to the staff who undertook the training can be compared to the metric attributed to the staff who did not undertake the training.

In other words, everything except participation in the training program must be more-or-less the same between the two groups.

2. Placebo – It’s well known in the pharmaceutical industry that patients in a clinical trial who are given a sugar pill rather than the drug being tested sometimes get better. The power of the mind can be so strong that, despite the pill having no medicinal qualities whatsoever, the patient believes they are doing something effective and so their body responds in kind.

As far as I’m aware, this fact has never been applied to the evaluation of corporate training. If it were, the group of employees who were not undertaking the special training would still need to leave their desks and sit in the classroom for three 4-hour stints over three weeks.


Because it might not be the content that makes the difference! It could be escaping the emails and phone calls and constant interruptions. It could be the opportunity to network with colleagues and have a good ol’ chat. It might be seizing the moment to think and reflect. Or it could simply be an appreciation of being trained in something, anything.

3. Randomisation – Putting the actuaries through the training and then comparing their culture metric to everyone else’s sounds like a great idea, but it will skew the results. Sure, the stats will give you an insight into how the actuaries are feeling, but it won’t be representative of the whole organisation.

Maybe the actuaries have a range of perks and a great boss; or conversely, maybe they’ve just gone through a restructure and a bunch of their mates were made redundant. To minimise these effects, staff from different teams in the organisation should be randomly assigned to the training program. That way, any localised factors will be evened out across the board.

4. Sample size – Several people (even if they’re randomised) cannot be expected to represent an organisation of hundreds or thousands. So testing five or six employees is unlikely to produce useful results.

5. Validity – Calculating a few averages and generating a bar graph is a sure-fire way to go down the rabbit hole. When comparing numbers, statistically valid methods such as Analysis of Variance are required to get significant results.

6. Replication – Even if you were to demonstrate a significant effect of the training for one group, that doesn’t guarantee the same effect for the next group. You need to do the test more than once to establish a pattern and negate the suspicion of a one-off.

7. Subsets – Variations among subsets of the population may exist. For example, the parents of young children might feel aggrieved for some reason, or older employees might feel like they’re being ignored. So it’s important to analyse subsets to see if any clusters exist.

8. Time and space – Just because you demonstrated the positive effect of the training program on culture in the Sydney office, doesn’t mean it will have the same effect in New York or Tokyo. Nor does it mean it will have the same effect in Sydney next year.
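To make the randomisation and validity points concrete, here is a minimal sketch in Python of what a more rigorous comparison might look like – randomly assigning staff to the two groups (factor 3) and comparing them with a statistically valid test rather than raw averages (factor 5). The staff list, engagement scores and helper functions are all hypothetical; a real evaluation would use a proper statistics package and report p-values.

```python
import random
import statistics

def assign_groups(staff, seed=42):
    """Randomly split staff into a trained group and a control group,
    so that localised factors are evened out across both (factor 3)."""
    rng = random.Random(seed)
    shuffled = staff[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (factor 5:
    compare the groups with a valid method, not just raw averages)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

# Hypothetical engagement scores collected after the intervention.
trained = [72, 75, 70, 78, 74, 71, 76, 73]
control = [68, 70, 66, 71, 69, 67, 72, 70]

# A t statistic well above ~2 hints the difference is not mere noise.
print(round(welch_t(trained, control), 2))
```

Even if the statistic looks convincing for these (made-up) numbers, factors 6 to 8 still apply: you would want to replicate the result with another cohort before declaring the training a success.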

Weird science

Don’t get me wrong: I’m not suggesting you need a PhD to evaluate your training activity. On the contrary, I believe that any evaluation – however informal – is better than none.

What I am saying, though, is for your results to be more meaningful, a little bit of know-how goes a long way.

For organisations that are serious about training outcomes, I go so far as to propose employing a Training Evaluation Officer – someone who is charged not only with getting evaluation done, but with getting it done right.


This post was originally published at the Learning Cafe on 14 November 2011.

A question of leadership development

25 October 2011

A provocative question was posed at the latest Learning Cafe:

Does the learning department spend disproportionate effort on leadership development?


To me, it makes good business sense to facilitate the development of effective leaders in the organisation. Leadership is a driver of culture, which in turn is a driver of engagement, which in turn is a driver of performance.

While I support the philosophy of leadership development, however, I have doubts over some of the interventions that are deployed under that banner. The eye-watering costs and time associated with formal leadership training should be carefully evaluated in terms of ROI.

So I don’t challenge whether substantial resources should be assigned to leadership development, but rather how they should be assigned. There is plenty of scope for informal learning solutions (for example) which are less time and money hungry – and arguably more effective!

Having said that, I think the effort assigned to developing leadership skills can be disproportionate in comparison to managerial skills. All too frequently, “leaders” are promoted due to their technical expertise, but they have never managed anyone in their life. Somehow we expect them to magically transform into Super Boss, but that’s not going to happen. What these people need is Management 101 – a no-nonsense explanation of their new responsibilities and accountabilities, and the corresponding skillset to fulfil them.


Do you agree with me? Review the opinions of other practitioners – and voice your own – at the Learning Cafe blog.

