Posts tagged ‘philosophy’

Let’s get rid of the instructional designers!

12 August 2014

That’s the view of some user-oriented design proponents.

It’s something I remembered while writing my last blog post about user-generated content. Whereas that post explored the role of the learner in the content development process, what about their role in the broader instructional design process?

I wrote a short (1000-word) assignment on the latter at uni several years ago – in the form of a review of a chapter written by Alison Carr-Chellman and Michael Savoy – and it’s a concept that has resonated with me ever since.

Here I shall share with you that review, unadulterated from its original form, except for the graphic of the user empowerment continuum and the hyperlink to the reference, both of which I have added for this post.

Whether or not the more “progressive” design philosophies resonate with you, at the very least I hope they provoke your thinking…

Users co-designing

Introduction

Carr-Chellman & Savoy (2004) provide a broad overview of user design. They define the term user design, compare it against other methodologies of user-oriented design, identify obstacles to its successful implementation, and finally make recommendations for the direction of further research.

Definition

According to Carr-Chellman & Savoy (2004), traditional instructional design methodologies disenfranchise the user from the design process. In a corporate organisation, for example, the leaders will typically initiate the instructional design project, an expert designer will then analyse the situation and create a design, and finally, the leaders will review the design and either approve it or reject it. The role of the user, then, is simply to use the system (or perhaps circumvent it).

In contrast to traditional instructional design methodologies, user design enables the users to participate in the design process. Instead of just using the system, they are involved in its design. Furthermore, their role is more than just providing input; they are active participants in the decision-making process.

Comparison against other methodologies

Carr-Chellman & Savoy (2004) carefully distinguish user design from other methodologies of user-oriented design, namely user-centered design and emancipatory design.

User-centered design

According to Carr-Chellman & Savoy (2004), user-centered design methodologies consider the needs of the user during the design process. In educational situations, for example, the expert designer may analyse the target audience, identify their preferred learning styles, and perhaps run a pretest. In tool usage situations, he or she may distribute user surveys or conduct usability testing. The goal of these activities is to obtain extra information to assist the designer in creating a better system for the users.

The key difference between user-centered design and user design is the level of participation of the users in the design process. Under a user-centered design model, the designer considers the needs of the users, but ultimately makes the design decisions on their behalf.

Under a user design model, however, the needs of the users go beyond mere food for thought. The users are empowered to make their own design decisions and thereby assume an active role in the design process.

User empowerment continuum, featuring traditional instructional design at the lowest extremity, then user-centered design, then user design, then emancipatory design at the highest extremity.

Emancipatory design

If traditional design occupies the lowest extremity of the user empowerment continuum, and user-centered design occupies a step up from that position, then emancipatory design occupies the opposite extremity.

Emancipatory design dispenses with the role of the expert designer and elevates the role of the users, so that in effect they are the designers. This methodology charges the users with full responsibility for all facets of the design process, from initiation through analysis, design and review to final approval. Instead of having a system imposed on them, the users have truly designed it for themselves, according to their own, independent design decisions.

Emancipatory design is founded on issues of conflict and harmony in the disciplines of social economics and industrial relations. Carr-Chellman & Savoy (2004) recognise that the goal of emancipatory design is “more to create change and vest the users and frontline workers in organisational outcomes than it is actually to create a working instructional system”. Hence, emancipatory design may not be a universal instructional design methodology.

User design

User design fits between the extremes of the user empowerment continuum. Whereas traditional design and user-centered design remove the user from the active design process, and conversely, emancipatory design removes the expert designer from the process, user design merges the roles into the shared role of “co-designer”. It strikes a balance between the two perspectives by including contributions from both parties.

Arguably, user design is a universal instructional design methodology. Whereas traditional design and user-centered design devalue the role of the users in the active design process, emancipatory design devalues the role of the expert designer.

User design, however, values both roles. It recognises the necessity of the active involvement of users, because they are the experts in their domain and will be the ones operating the system. However, users cannot be expected to understand the science of design. The active involvement of an expert designer is critical in guiding the design process and driving the work towards an efficient and effective outcome.

Obstacles

Carr-Chellman & Savoy (2004) identify numerous obstacles to the successful implementation of user design, including the reluctance of designers and leaders to share their decision-making powers with users, the inclusion of users too late in the design process, the tendency to categorise users into a homogeneous group, and the lack of user motivation to participate in design activities.

Further Research

Carr-Chellman & Savoy (2004) claim that research specific to user design within instruction systems is scarce, and much of the research into other user-oriented design methodologies lacks scientific rigour. Therefore, they recommend the following actions for the research community:

  1. To create a standardised language to define user design and to distinguish it from other user-oriented design methodologies,

  2. To study the implementation of user design across different variables, such as user profile, subject area and mode of delivery, and

  3. To communicate the success of user design in terms of “traditional measures of effectiveness” for the purpose of influencing policymakers.

Furthermore, Carr-Chellman & Savoy (2004) recommend that researchers adopt the participatory action research (PAR) method of inquiry. They argue that PAR democratises the research process and, consequently, is ideologically aligned with the principles of user design.

It can be argued, therefore, that Carr-Chellman & Savoy (2004) promote both user design and user research. Their vision for users is not only to assume the role of “co-designer”, but also that of “co-researcher”.

Reference

Carr-Chellman, A. & Savoy, M. (2004). User-design research. In D. H. Jonassen (Ed.), Handbook of Research on Educational Communications and Technology (2nd ed., pp. 701-716). Mahwah, NJ: Lawrence Erlbaum.

Human enough

19 February 2013

It is with glee that the proponents of e-learning trumpet the results of studies such as the US Department of Education’s Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies, which found that, on average, online instruction is as effective as classroom instruction.

And who can blame them? It is only natural for evangelists to seize upon evidence that furthers their cause.

But these results mystified me. If humans are gregarious beings and learning is social, how can face-to-face instruction possibly fail to outperform its online equivalent?

That was until I watched Professor Steve Fuller’s Humanity 2.0 TEDxWarwick talk in Week 3 of The University of Edinburgh’s E-learning and Digital Cultures course.

The professor explains with wonderful articulation how difficult it is to define a human.

Sure, biologists will define humanity in terms of DNA, yet they can’t even agree on whether the Neanderthals were a subspecies of Homo sapiens or a separate species altogether.

If we remove our gaze from the electron microscope, we have our morphology. Perhaps a human is an organism that has five fingers on each hand? But does that mean someone who is born with four (or six) is not human?

Perhaps a human is an organism that uses tools? Well, vultures drop rocks onto eggs to break them open.

Perhaps then a human is an organism that uses language? Whales might have something to say about that.

It is an intriguing conundrum that has occupied our thoughts since anyone can remember.

Title page of the first edition of René Descartes' Discourse on Method.

In the 17th Century, René Descartes made an intellectual breakthrough. He contended that “reason…is the only thing that makes us men, and distinguishes us from the beasts”. In other words, we are the only creatures on God’s earth capable of rational thought. I think, therefore I am.

Descartes pushed his point by arguing that while a robot might one day be developed to speak words, “it is not conceivable that such a machine should…give an appropriately meaningful answer to whatever is said in its presence”. And despite astonishing advances in artificial intelligence, the philosophical Frenchman remains right. Even Watson, who triumphed at Jeopardy! and today mines big data to help humans make better decisions, cannot reasonably be considered a human itself. It is simply a product of computer programming.

Speaking of machines, if a human were to progressively replace her body parts with robotics – hence becoming a cyborg – at what point does she cease to be a human? According to the humanist tradition of Descartes, the absolute difference between a human and a non-human is a property of the mind. So, arguably she will remain a “human” until her brain is replaced.

But that raises the question: if we flip the scenario around and place a person’s brain in a robot’s body, does that make it a human?

All this philosophy starts to do my head in after a while, and that’s before getting into Freud’s posthumanism.

Somehow I prefer Joseph Gliddon’s simpler definition of a human: something that drinks coffee.

Cup of coffee

It’s not as flippant as it sounds, for it is our artificial enhancements that paradoxically make us more human.

Riding a bicycle, for example, is a quintessentially human endeavour. No other creature does it. Yes, a monkey might do so in the circus, but the reason we find it funny (or at least unusual) is that it doesn’t normally do that. The poor thing is mimicking a human.

Similarly, digital technology is an extension of our notion of humanity. Humans are the only organisms that use computers, surf the Web, write text, film video, record audio, and engage with one another in online discussion forums.

So when we view online pedagogy through this lens, we recognise very little of it that is not human. Consequently the strong performance of online students becomes less mysterious. In fact, it becomes expected because, just as a bicycle enhances our capability for travel, digital technology enhances our capability for learning.

This expectation is supported by a further finding of the Department of Education’s research – namely, that “blends of online and face-to-face instruction, on average, had stronger learning outcomes than did face-to-face instruction alone”. In other words, students who had the technology via the blended design performed better than those who didn’t.

But it doesn’t work in reverse: “the majority of…studies that directly compared purely online and blended learning conditions found no significant differences in student learning”. In other words, those who had the face-to-face interaction via the blended design performed no better than those who didn’t. Apparently the online instruction was human enough.

OK, on that bombshell, I think I’ll ride my bike to the cafe and pick up a cup of joe…

The equation for change

4 February 2013

Guns don’t kill people. People do.

It’s a well-worn saying that Americans in particular know only too well.

And of course it’s technically correct. I don’t fear a gun on the table, but I do fear someone might pick it up and pull the trigger. That’s why I don’t want a gun on the table.

It’s a subtle yet powerful distinction that occurred to me as I absorbed the core reading for Week 1 of The University of Edinburgh’s E-learning and Digital Cultures course; namely Daniel Chandler’s Technological or Media Determinism.

E-learning and Digital Cultures logo

Technological determinism is a philosophy that has implications for e-learning professionals as we grapple with technologies such as smartphones, tablets, ebooks, gamification, QR codes, augmented reality, the cloud, telepresence, ADDIE, SAM, and of course, MOOCs.

Chandler explains that “hard” technological determinism holds technology as the driver of change in society. Certain consequences are seen as “inevitable” or at least “highly probable” when a technology is unleashed on the masses. It’s how a lot of people view Apple products for example, and it’s extremist.

Like most extremism, however, it’s an absurd construct. Any given technology – whether it be a tool, a gadget or a methodology – is merely a thing. It cannot do anything until people use it. Otherwise it’s just a box of wires or a figment of someone’s imagination.

Taking this rationale a step further, people won’t use a particular technology unless a socio-historical force is driving their behaviour to do so. History is littered with inventions that failed to take off because no one had any need for them.

Consider the fall of the Aztec empire in the 16th Century. Sailing ships, armour, cannons, swords, horse bridles and the like didn’t cause the conquistadors to catastrophically impact an ancient society. In the socio-historical context of the times, their demand for gold and glory drove them to exploit the technologies that were available to them. In other words, technology enabled the outcome.

Storming of the Teocalli by Cortez and His Troops

At the other end of the spectrum, technological denial is just as absurd. The view that technology does not drive social change is plainly wrong, as we can demonstrate by flipping the Aztec scenario: if sailing ships, armour and so on were not available to the conquistadors, the outcome would have been very different. They wouldn’t have been able to get to the New World, let alone destroy it.

Of course, the truth lies somewhere in between. Technology is a driver of change in society, but not always, and never by itself. In other words, technology can change society when combined with social demand. It is only one component of the equation for change:

   Technology + Demand = Change   

In terms of e-learning, this “softer” view of technological determinism is a timely theoretical lens through which to see the MOOC phenomenon. Video, the Internet and Web 2.0 didn’t conspire to spellbind people into undertaking massive open online courses. In the socio-historical context of our time, the demand that providers have for altruism? corporate citizenship? branding? profit? (not yet) drives them to leverage these technologies in the form of MOOCs. Concurrently, a thirst for knowledge, the need for quality content, and the yearning for collaboration drives millions of students worldwide to sign up.

MOOCs won’t revolutionise education; after all, they are just strings of code sitting on a server somewhere. But millions of people using MOOCs to learn? That will shake the tree.

Child learning on a computer

So the practical message I draw from the theory of technological determinism is that to change your society – be it a classroom, an organisation, or even a country – there’s no point implementing a technology just for the sake of it. You first need to know your audience and understand the demands they have that drive their behaviour. Only then will you know which technology to deploy, if any at all.

As far as gun control in the US is concerned, that’s a matter for the Americans. I only hope they learn from their ineffective war on drugs: enforcement is vital, but it’s only half the equation. The other half is demand.

Open Learning Network vs Informal Learning Environment

21 September 2010

In the comments section of my previous post, Mike Caulfield kindly pointed me to the article Envisioning the Post-LMS Era: The Open Learning Network by Jonathan Mott.

I was immediately interested because, like me, Mott is striving to bridge the gap between the organisation’s LMS and the learner’s PLE. He articulates his position thus:

“…a one-or-the-other choice between the two is a false choice between knowledge-dissemination technologies and community-building tools. We can have both.”

Amen to that.

But how do we bridge the gap?

Mott’s blueprint is the Open Learning Network (OLN). Mine is the Informal Learning Environment (ILE).

While both proposals have remarkable similarities, the pedagogical philosophies that underpin them are fundamentally different.

The ILE recapped

An ILE is a space that centralises tools and resources that the learner can use to drive their own development. In How to revamp your learning model, I propose three core components:

1. A comprehensive wiki,
2. An open discussion forum, and
3. A bank of personal profiles.

These components work in tandem with the LMS and system reports, which in turn comprise the core components of the Formal Learning Environment (FLE).

A revamped learning model, consisting of an ILE and an FLE

The organisation manages the ILE on behalf of the learners, who are free to search, explore, ask and share at their own pace and at their own discretion, and – ideally – integrate the system into their broader PLE.

The OLN compared

While the ILE is designed to bridge the gap between the LMS and the PLE, it purposefully keeps them apart. Not only do I believe in the right of the learner to keep their PLE strictly personal, but I also believe in the power of separating “learning” from its administration.

The OLN takes a different approach. Mott states:

“The OLN is not intended merely to allow the LMS and PLE paradigms to coexist in harmony, but rather to take the best of each approach and mash them up into something completely different.”

The OLN model connects private and secure applications on the organisation’s network (such as the student information system, content repository, assessments and transcripts) to open and flexible tools and applications in the cloud (such as blogs, social networks and non-proprietary content) via a services-oriented architecture.

The university network and the cloud under the OLN model. Source: Mott, J. (2010) Envisioning the Post-LMS Era: The Open Learning Network, Educause Quarterly Magazine, Volume 33, Number 1.

Both the OLN and ILE are modular because they comprise standalone resources or “learning objects”. This makes them flexible because the objects can be easily replaced by others that are more current, relevant or useful.

The key difference between the two models is interoperability. In a nutshell, the objects in an OLN can talk to one another via web service protocols such as LTI. Mott elaborates:

“In the simplest terms, web services-enabled applications leverage the elegantly simple Hypertext Transfer Protocol (HTTP) that gave life to the World Wide Web. This means that applications use a common set of verbs (such as GET and POST) and nouns (standard definitions of ‘student,’ ‘course,’ ‘score,’ etc.) to share data (as XML) via HTML. A robust services architecture will also implement role-based security and authentication protocols to manage data and application access and permissions. Within such a framework, any tool can securely interact with any other tool, passing user IDs and course and role information. Activities are then logged in the second application so that data can be passed back to the originating application (via a secure HTTP POST in the browser).”
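To make that concrete, here is a minimal sketch in Python of the kind of exchange Mott describes: one tool passing user, course and role information to another as XML over a secure HTTP POST. The endpoint URL, XML field names and access token are hypothetical, invented purely for illustration; a real implementation would follow a standard protocol such as LTI.

    # A sketch of one tool passing data to another over HTTP, in the
    # spirit of Mott's web services architecture. The endpoint, XML
    # field names and token below are hypothetical.
    import requests  # third-party HTTP library

    TOOL_B_URL = "https://toolb.example.edu/launch"  # hypothetical endpoint
    ACCESS_TOKEN = "secret-token"  # issued by the security/authentication layer

    # The "nouns": standard definitions of student, course and role, as XML.
    payload = """<?xml version="1.0" encoding="UTF-8"?>
    <launchRequest>
      <userId>s1234567</userId>
      <course>EDC101</course>
      <role>Learner</role>
    </launchRequest>"""

    # The "verb": a secure HTTP POST carrying the XML payload.
    response = requests.post(
        TOOL_B_URL,
        data=payload.encode("utf-8"),
        headers={
            "Content-Type": "application/xml",
            "Authorization": "Bearer " + ACCESS_TOKEN,
        },
        timeout=10,
    )
    response.raise_for_status()  # the receiving tool acknowledges the request

The role-based security Mott mentions is reduced here to a single bearer token, and the return leg – the receiving tool logging the activity and passing data back to the originating application – would simply be the same pattern in reverse.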

A full-featured OLN. Source: Mott, J. (2010) Envisioning the Post-LMS Era: The Open Learning Network, Educause Quarterly Magazine, Volume 33, Number 1.

The ILE model is not as technologically complex! It makes no demands for interoperability among the components of the PLE, ILE and LMS; in fact, it celebrates their independence. The common denominator is authentic assessment, which represents the sum of all learning regardless of its sources.

Digging deeper

So while the major difference between the OLN and ILE is apparently their respective technical frameworks, that is simply a manifestation of their true difference: pedagogical philosophy.

I see the OLN as a solution for monitoring the student’s progress during a program of study in the digital age. It formalises the informal. The underpinning pedagogical philosophy, therefore, is formal learning – which I recognise as entirely appropriate in the Higher Education environment.

In the workplace, however, the vast majority of learning is informal. I would even go so far as to suggest that we hinder the learning process by drowning it in bureaucracy.

I see the ILE as a solution for self-directed learning, peer-to-peer discourse and knowledge sharing. It informalises the formal. The underpinning pedagogical philosophy, therefore, is informal learning – which is crying out for support in the corporate sector.

Horses for courses

So yes, Mott and I propose two similar yet fundamentally different learning models – but we have our reasons.

Neither is necessarily right; neither is necessarily wrong.

It all depends on context.
 

