
The grassroots of learning

22 October 2014

Here’s a common scenario: I “quickly” look up something on Wikipedia, and hours later I have 47 tabs open as I delve into tangential aspects of the topic.

That’s the beauty of hypertext. A link takes you somewhere else, which contains other links that take you somewhere else yet again. The Internet is thus the perfect vehicle for explaining the concept of rhizomatic learning.

Rhizomatic learning is something that I have been superficially aware of for a while. I had read a few blog posts by Dave Cormier (the godfather of the philosophy) and I follow the intrepid Soozie Bea (a card-carrying disciple), but unfortunately I missed Dave’s #rhizo14 mooc earlier in the year.

Since I’ve been blogging about the semantics of education lately, I thought it high time to dig a little deeper.

Bamboo with rhizome

It seems to me that rhizomatic learning is the pedagogical antithesis of direct instruction. Direct instruction has pre-defined learning outcomes with pre-defined content to match. The content is typically delivered in a highly structured format.

In contrast, rhizomatic learning has no pre-defined learning outcomes nor pre-defined content. The learner almost haphazardly follows his or her own line of inquiry from one aspect of the subject matter to the next, then the next, and so forth according to whatever piques his or her interest. Thus the learning path cannot be predicted ahead of time.

Given my scientific background, I was already familiar with the rhizome. So is everyone else, incidentally, perhaps without realising it. A rhizome is the creeping rootstalk of a plant that explores the soil around it, sending out new roots and shoots as it goes along. A common example is bamboo, whose rhizome enables it to spread like wildfire.

In A Thousand Plateaus: Capitalism and Schizophrenia, Gilles Deleuze and Félix Guattari adopt the rhizome as a metaphor for the spread of culture throughout society. That’s a massive over-simplification, of course, and quite possibly wrong. The Outsider represents the extent of my French philosophy bookshelf!

Anyway, the point I’m bumbling towards is that Dave Cormier has picked up this philosophical metaphor and applied it to the wonderful world of learning. He explains in Trying to write Rhizomatic Learning in 300 words:

“Rhizomatic Learning developed as an approach for me as a response to my experiences working with online communities. Along with some colleagues we started meeting regularly online for live interactive webcasts starting in 2005 at Edtechtalk. We learned by working together, sharing our experiences and understanding. The outcomes of those discussions were more about participating and belonging than about specific items of content – the content was already everywhere around us on the web. Our challenge was in learning how to choose, how to deal with the uncertainty of abundance and choice presented by the Internet. In translating this experience to the classroom, I try to see the open web and the connections we create between people and ideas as the curriculum for learning. In a sense, participating in the community is the curriculum.”

I note that this explanation from 2012 is somewhat different from his paper in 2008, which of course reflects the evolution of the idea. In Rhizomatic Education: Community as Curriculum, Dave similarly mentioned the abundance of content on the Internet, and also the shrinking half-life of knowledge. He contrasted the context of traditional education – in which experts are the custodians of a canon of accepted thought, which is presumed to remain relatively stable – with today – in which knowledge changes so quickly as to make the traditional notion of education flawed.

Dave posited that educational technology is a prime example. Indeed when I studied this discipline at university, much of the learning theory (for instance) enjoyed a broad canon of knowledge to which students such as myself could refer. It was even documented in textbooks. Other aspects of the subject (for instance, the rapid advances in technology, and the pedagogical shifts towards social and informal learning) could not be compared against any such canon. The development of this knowledge was so rapid that we students relied as much on each other’s recent experiences and on sharing our personal learning journeys as we did on anything the professor could supply.

“In the rhizomatic model of learning, curriculum is not driven by predefined inputs from experts; it is constructed and negotiated in real time by the contributions of those engaged in the learning process. This community acts as the curriculum, spontaneously shaping, constructing, and reconstructing itself and the subject of its learning in the same way that the rhizome responds to changing environmental conditions.”

From 2008 to 2012, I see a shift in Dave’s language from Rhizomatic Education to Rhizomatic Learning. This I think is a better fit for the metaphor, as while it may be argued that the members of the community are “teaching” one another, the driving force behind the learning process is the active learner who uses the community as a resource and makes his or her own decisions along the way.

I also note the change from “the community is the curriculum” to “participating in the community is the curriculum”. Another semantic shift that I think is closer to the mark, but perhaps still not quite there. I suggest that the content created by the members of the community is the curriculum. In other words, the curriculum is the output that emerges from participating in the community. So “participating in the community produces the curriculum”.

As a philosophy for learning, then, rhizomatic learning is not so different from constructivism, connectivism, and more broadly, andragogy. The distinguishing feature is the botanical imagery.

However this is where my understanding clouds over…

Is it the abundance of content “out there” that is rhizomatic?

Or is it the construction of new knowledge that is rhizomatic?

Or is it the learning journey that is undertaken by the individual learner?

Perhaps such pedantic questions are inconsequential, but the scientist in me demands clarification. So I propose the following:

 

The knowledge that is constructed by the community is the rhizome.

The process of constructing the knowledge
by the members of the community is rhizomatic education.

The process of exploring, discovering and consuming the knowledge
by the individual learner is rhizomatic learning.

 

If we return to my Wikipedia scenario, we can use it as a microcosm of the World Wide Web and the universe more broadly:

The ever-expanding Wikipedia is the rhizome.

The Wikipedians are conducting rhizomatic education.

I, the Average Joe who looks it up and loses myself in it for hours on end,
am experiencing rhizomatic learning.

In the age of Web 2.0, Average Joe may also be a Wikipedian. Hence we can all be rhizomatic educators and rhizomatic learners.

Barnstar

I also detect a certain level of defensiveness from Dave in his early paper. He prefaces his work with a quote from Henrik Ibsen’s An Enemy of the People which rejoices in the evolution of “truth” in the face of conventional resistance [my interpretation], while later on he addresses the responses of the “purveyors of traditional educational knowledge” – primarily in the realms of academic publishing and intellectual property.

I think Dave was right to be defensive. Despite the pervasive learnification of education that would theoretically promote rhizomatic learning as its poster boy, anything new that threatens the status quo is typically met with outrage from those who stand to lose out.

A case in point is moocs. Dave refers to Alec Couros’s graduate course in educational technology, which was a precursor to his enormously popular #ETMOOC. While a cMOOC such as this one may be the epitome of the rhizomatic philosophy, I contend that it also applies to the xMOOC.

You see, while the xMOOC is [partly] delivered instructivistly, those darn participants still learn rhizomatically! And so the traditionalists delight in the low completion rates of moocs, while the rest of us appreciate that learning (as opposed to education) simply doesn’t work that way – especially in the digital age.

Don’t get me wrong: I am no anti-educationalist. Regular readers of my blog will not find it surprising when I point out that sometimes the rhizomatic model is not appropriate. For example, when the learner is a novice in a particular field, they don’t know what they don’t know. As I was alluding to via my tweet to Urbie in lrnchat, sometimes there is a central and stable canon of knowledge and the appointed expert is best placed to teach it to you.

I also realise that while an abundance of knowledge is indeed freely available on the Internet, not all of it is. It may be hidden in walled gardens, or not on the web at all. Soozie makes the point that information sources go beyond what the web and other technologies can channel. “Information that is filtered, classified or cleansed, consolidated or verified may also come from formal, non-formal or informal connections including teachers, friends, relatives, professional colleagues and recognized experts in the field.” But I take her point that all this is enhanced by technology.

Finally, the prominence of rhizomatic learning will inevitably increase as knowledge continues to digitise and our lens on learning continues to informalise. In this context, I think the role of the instructor needs much more consideration. While Dave maintains that the role is to provide an introduction to an existing learning community in which the student may participate, there is obviously more that we L&D pros must do to fulfil our purpose into the future.

On that note I’ll rest my rhizomatic deliberation on rhizomatic learning. If you want to find out more about this philosophy, I suggest you look it up on Wikipedia.

Let’s get rid of the instructional designers!

12 August 2014

That’s the view of some user-oriented design proponents.

It’s something I remembered while writing my last blog post about user-generated content. Whereas that post explored the role of the learner in the content development process, how about their role in the broader instructional design process?

I wrote a short (1000 word) assignment on the latter at uni several years ago – in the form of a review of a chapter written by Alison Carr-Chellman and Michael Savoy – and it’s a concept that has resonated with me ever since.

Here I shall share with you that review, unadulterated from its original form, except for the graphic of the user empowerment continuum and the hyperlink to the reference, both of which I have added for this post.

Whether or not the more “progressive” design philosophies resonate with you, at the very least I hope they provoke your thinking…

Users co-designing

Introduction

Carr-Chellman & Savoy (2004) provide a broad overview of user design. They define the term user design, compare it against other methodologies of user-oriented design, identify obstacles to its successful implementation, and finally make recommendations for the direction of further research.

Definition

According to Carr-Chellman & Savoy (2004), traditional instructional design methodologies disenfranchise the user from the design process. In a corporate organisation, for example, the leaders will typically initiate the instructional design project, an expert designer will then analyse the situation and create a design, and finally, the leaders will review the design and either approve it or reject it. The role of the user, then, is simply to use the system (or perhaps circumvent it).

In contrast to traditional instructional design methodologies, user design enables the users to participate in the design process. Instead of just using the system, they are involved in its design. Furthermore, their role is more than just providing input; they are active participants in the decision-making process.

Comparison against other methodologies

Carr-Chellman & Savoy (2004) carefully distinguish user design from other methodologies of user-oriented design, namely user-centered design and emancipatory design.

User-centered design

According to Carr-Chellman & Savoy (2004), user-centered design methodologies consider the needs of the user during the design process. In educational situations, for example, the expert designer may analyse the target audience, identify their preferred learning styles, and perhaps run a pretest. In tool usage situations, he or she may distribute user surveys or conduct usability testing. The goal of these activities is to obtain extra information to assist the designer in creating a better system for the users.

The key difference between user-centered design and user design is the level of participation of the users in the design process. Under a user-centered design model, the designer considers the needs of the users, but ultimately makes the design decisions on their behalf.

Under a user design model, however, the needs of the users go beyond mere food for thought. The users are empowered to make their own design decisions and thereby assume an active role in the design process.

User empowerment continuum, featuring traditional instructional design at the lowest extremity, then user-centered design, then user design, then emancipatory design at the highest extremity.

Emancipatory design

If traditional design occupies the lowest extremity of the user empowerment continuum, and user-centered design occupies a step up from that position, then emancipatory design occupies the opposite extremity.

Emancipatory design dispenses with the role of the expert designer and elevates the role of the users, so that in effect they are the designers. This methodology charges the users with full responsibility over all facets of the design process, from initiation, through analysis, design, review, to approval. Instead of having a system imposed on them, the users have truly designed it for themselves, according to their own, independent design decisions.

Emancipatory design is founded on issues of conflict and harmony in the disciplines of social economics and industrial relations. Carr-Chellman & Savoy (2004) recognise that the goal of emancipatory design is “more to create change and vest the users and frontline workers in organisational outcomes than it is actually to create a working instructional system”. Hence, emancipatory design may not be a universal instructional design methodology.

User design

User design fits between the extremes of the user empowerment continuum. Whereas traditional design and user-centered design remove the user from the active design process, and conversely, emancipatory design removes the expert designer from the process, user design merges the roles into the shared role of “co-designer”. It strikes a balance between the two perspectives by including contributions from both parties.

Arguably, user design is a universal instructional design methodology. Whereas traditional design and user-centered design devalue the role of the users in the active design process, emancipatory design devalues the role of the expert designer.

User design, however, values both roles. It recognises the necessity of the active involvement of users, because they are the experts in their domain and will be the ones operating the system. However, users can not be expected to understand the science of design. The active involvement of an expert designer is critical in guiding the design process and driving the work towards an efficient and effective outcome.

Obstacles

Carr-Chellman & Savoy (2004) identify numerous obstacles to the successful implementation of user design, including the reluctance of designers and leaders to share their decision-making powers with users, the inclusion of users too late in the design process, the tendency to categorise users into a homogenous group, and the lack of user motivation to participate in design activities.

Further Research

Carr-Chellman & Savoy (2004) claim that research specific to user design within instruction systems is scarce, and much of the research into other user-oriented design methodologies lacks scientific rigour. Therefore, they recommend the following actions for the research community:

  1. To create a standardised language to define user design and to distinguish it from other user-oriented design methodologies,

  2. To study the implementation of user design across different variables, such as user profile, subject area and mode of delivery, and

  3. To communicate the success of user design in terms of “traditional measures of effectiveness” for the purpose of influencing policymakers.

Furthermore, Carr-Chellman & Savoy (2004) recommend that researchers adopt the participatory action research (PAR) method of inquiry. They argue that PAR democratises the research process and, consequently, is ideologically aligned with the principles of user design.

It can be argued, therefore, that Carr-Chellman & Savoy (2004) promote both user design and user research. Their vision for users is not only to assume the role of “co-designer”, but also of “co-researcher”.

Reference

Carr-Chellman, A. & Savoy, M. (2004). User-design research, in Handbook of Research on Educational Communications and Technology, 2nd ed, D. H. Jonassen (Ed), pp. 701-716, New Jersey, USA: Lawrence Erlbaum.

Human enough

19 February 2013

It is with glee that the proponents of e-learning trumpet the results of studies such as the US Department of Education’s Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies, which found that, on average, online instruction is as effective as classroom instruction.

And who can blame them? It is only natural for evangelists to seize upon evidence that furthers their cause.

But these results mystified me. If humans are gregarious beings and learning is social, how can face-to-face instruction possibly fail to outperform its online equivalent?

That was until I watched Professor Steve Fuller’s Humanity 2.0 TEDxWarwick talk in Week 3 of The University of Edinburgh’s E-learning and Digital Cultures course.

The professor explains with wonderful articulation how difficult it is to define a human.

Sure, biologists will define humanity in terms of DNA, yet they can’t even agree on whether the Neanderthals were a subspecies of Homo sapiens or a separate species altogether.

If we remove our gaze from the electron microscope, we have our morphology. Perhaps a human is an organism that has five fingers on each hand? But does that mean someone who is born with four (or six) is not human?

Perhaps a human is an organism that uses tools? Well, vultures drop rocks onto eggs to break them open.

Perhaps then a human is an organism that uses language? Whales might have something to say about that.

It is an intriguing conundrum that has occupied our thoughts since anyone can remember.

Title page of the first edition of René Descartes' Discourse on Method.

In the 17th Century, René Descartes made an intellectual breakthrough. He contended that “reason…is the only thing that makes us men, and distinguishes us from the beasts”. In other words, we are the only creatures on God’s earth capable of rational thought. I think, therefore I am.

Descartes pushed his point by arguing that while a robot might one day be developed to speak words, “it is not conceivable that such a machine should…give an appropriately meaningful answer in its presence”. And despite astonishing advances in artificial intelligence, the philosophical Frenchman remains right. Even Watson, who triumphed at Jeopardy! and today mines big data to help humans make better decisions, can not reasonably be considered a human itself. It is simply a product of computer programming.

Speaking of machines, if a human were to progressively replace her body parts with robotics – hence becoming a cyborg – at what point does she cease to be a human? According to the humanist tradition of Descartes, the absolute difference between a human and a non-human is a property of the mind. So, arguably she will remain a “human” until her brain is replaced.

But that raises the question: if we flip the scenario around and place a person’s brain in a robot’s body, does that make it a human?

All this philosophy starts to do my head in after a while, and that’s before getting into Freud’s posthumanism.

Somehow I prefer Joseph Gliddon’s simpler definition of a human: something that drinks coffee.

Cup of coffee

It’s not as flippant as it sounds, for it is our artificial enhancements that paradoxically make us more human.

Riding a bicycle, for example, is a quintessentially human endeavour. No other creature does it. Yes, a monkey might do so in the circus, but the reason we find it funny (or at least unusual) is because it doesn’t normally do that. The poor thing is mimicking a human.

Similarly, digital technology is an extension of our notion of humanity. Humans are the only organisms that use computers, surf the Web, write text, film video, record audio, and engage with one another in online discussion forums.

So when we view online pedagogy through this lens, we recognise very little of it that is not human. Consequently the strong performance of online students becomes less mysterious. In fact, it becomes expected because, just as a bicycle enhances our capability for travel, digital technology enhances our capability for learning.

This expectation is supported by a further finding of the Department of Education’s research – namely, that “blends of online and face-to-face instruction, on average, had stronger learning outcomes than did face-to-face instruction alone”. In other words, students who had the technology via the blended design performed better than those who didn’t.

But it doesn’t work in reverse: “the majority of…studies that directly compared purely online and blended learning conditions found no significant differences in student learning”. In other words, those who had the face-to-face interaction via the blended design performed no better than those who didn’t. Apparently the online instruction was human enough.

OK, on that bombshell, I think I’ll ride my bike to the cafe and pick up a cup of joe…

The equation for change

4 February 2013

Guns don’t kill people. People do.

It’s a well-worn saying that Americans in particular know only too well.

And of course it’s technically correct. I don’t fear a gun on the table, but I do fear someone might pick it up and pull the trigger. That’s why I don’t want a gun on the table.

It’s a subtle yet powerful distinction that occurred to me as I absorbed the core reading for Week 1 of The University of Edinburgh’s E-learning and Digital Cultures course; namely Daniel Chandler’s Technological or Media Determinism.

E-learning and Digital Cultures logo

Technological determinism is a philosophy that has implications for e-learning professionals as we grapple with technologies such as smartphones, tablets, ebooks, gamification, QR codes, augmented reality, the cloud, telepresence, ADDIE, SAM, and of course, MOOCs.

Chandler explains that “hard” technological determinism holds technology as the driver of change in society. Certain consequences are seen as “inevitable” or at least “highly probable” when a technology is unleashed on the masses. It’s how a lot of people view Apple products for example, and it’s extremist.

Like most extremism, however, it’s an absurd construct. Any given technology – whether it be a tool, a gadget or a methodology – is merely a thing. It can not do anything until people use it. Otherwise it’s just a box of wires or a figment of someone’s imagination.

Taking this rationale a step further, people won’t use a particular technology unless a socio-historical force is driving their behaviour to do so. History is littered with inventions that failed to take off because no one had any need for them.

Consider the fall of the Aztec empire in the 16th Century. Sailing ships, armour, cannons, swords, horse bridles etc didn’t cause the conquistadors to catastrophically impact an ancient society. In the socio-historical context of the times, their demand for gold and glory drove them to exploit the technologies that were available to them. In other words, technology enabled the outcome.

Storming of the Teocalli by Cortez and His Troops

At the other end of the spectrum, technological denial is just as absurd. The view that technology does not drive social change is plainly wrong, as we can demonstrate by flipping the Aztec scenario: if sailing ships, armour etc were not available to the conquistadors, the outcome would have been very different. They wouldn’t have been able to get to the new world, let alone destroy it.

Of course, the truth lies somewhere in between. Technology is a driver of change in society, but not always, and never by itself. In other words, technology can change society when combined with social demand. It is only one component of the equation for change:

   Technology + Demand = Change   

In terms of e-learning, this “softer” view of technological determinism is a timely theoretical lens through which to see the MOOC phenomenon. Video, the Internet and Web 2.0 didn’t conspire to spellbind people into undertaking massive open online courses. In the socio-historical context of our time, the demand that providers have for altruism? corporate citizenship? branding? profit? (not yet) drives them to leverage these technologies in the form of MOOCs. Concurrently, a thirst for knowledge, the need for quality content, and the yearning for collaboration drives millions of students worldwide to sign up.

MOOCs won’t revolutionise education; after all, they are just strings of code sitting on a server somewhere. But millions of people using MOOCs to learn? That will shake the tree.

Child learning on a computer

So the practical message I draw from the theory of technological determinism is that to change your society – be it a classroom, an organisation, or even a country – there’s no point implementing a technology just for the sake of it. You first need to know your audience and understand the demands they have that drive their behaviour. Only then will you know which technology to deploy, if any at all.

As far as gun control in the US is concerned, that’s a matter for the Americans. I only hope they learn from their ineffective war on drugs: enforcement is vital, but it’s only half the equation. The other half is demand.

Open Learning Network vs Informal Learning Environment

21 September 2010

In the comments section of my previous post, Mike Caulfield kindly pointed me to the article Envisioning the Post-LMS Era: The Open Learning Network by Jonathan Mott.

I was immediately interested because, like me, Mott is striving to bridge the gap between the organisation’s LMS and the learner’s PLE. He articulates his position as such:

“…a one-or-the-other choice between the two is a false choice between knowledge-dissemination technologies and community-building tools. We can have both.”

Amen to that.

But how do we bridge the gap?

Mott’s blueprint is the Open Learning Network (OLN). Mine is the Informal Learning Environment (ILE).

While both proposals have remarkable similarities, the pedagogical philosophies that underpin them are fundamentally different.

The ILE recapped

An ILE is a space that centralises tools and resources that the learner can use to drive their own development. In How to revamp your learning model, I propose three core components:

1. A comprehensive wiki,
2. An open discussion forum, and
3. A bank of personal profiles.

These components work in tandem with the LMS and system reports, which in turn comprise the core components of the Formal Learning Environment (FLE).

A revamped learning model, consisting of an ILE and an FLE

The organisation manages the ILE on behalf of the learners, who are free to search, explore, ask and share at their own pace and at their own discretion, and – ideally – integrate the system into their broader PLE.

The OLN compared

While the ILE is designed to bridge the gap between the LMS and the PLE, it purposefully keeps them apart. Not only do I believe in the right of the learner to keep their PLE strictly personal, but I also believe in the power of separating “learning” from its administration.

The OLN takes a different approach. Mott states:

“The OLN is not intended merely to allow the LMS and PLE paradigms to coexist in harmony, but rather to take the best of each approach and mash them up into something completely different.”

The OLN model connects private and secure applications on the organisation’s network (such as the student information system, content repository, assessments and transcripts) to open and flexible tools and applications in the cloud (such as blogs, social networks and non-proprietary content) via a services-oriented architecture.
The university network and the cloud under the OLN model. Source: Mott, J. (2010) Envisioning the Post-LMS Era: The Open Learning Network, EDUCAUSE Quarterly, Volume 33, Number 1.

Both the OLN and ILE are modular because they comprise standalone resources or “learning objects”. This makes them flexible because the objects can be easily replaced by others that are more current, relevant or useful.

The key difference between the two models is interoperability. In a nutshell, the objects in an OLN can talk to one another via web service protocols such as LTI. Mott elaborates:

“In the simplest terms, web services-enabled applications leverage the elegantly simple Hypertext Transfer Protocol (HTTP) that gave life to the World Wide Web. This means that applications use a common set of verbs (such as GET and POST) and nouns (standard definitions of ‘student,’ ‘course,’ ‘score,’ etc.) to share data (as XML) via HTML. A robust services architecture will also implement role-based security and authentication protocols to manage data and application access and permissions. Within such a framework, any tool can securely interact with any other tool, passing user IDs and course and role information. Activities are then logged in the second application so that data can be passed back to the originating application (via a secure HTTP POST in the browser).”
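The shared-vocabulary idea Mott describes can be sketched in a few lines of Python. This is a hypothetical illustration only – the element names and the round-trip are my own, not part of any real LTI implementation – showing how two tools that agree on common nouns (“student”, “course”, “score”) can exchange a record as XML:

```python
import xml.etree.ElementTree as ET

def build_score_payload(student_id: str, course: str, score: float) -> bytes:
    """The originating tool serialises a result as XML using the
    agreed nouns, so any other tool can parse it."""
    root = ET.Element("result")
    ET.SubElement(root, "student").text = student_id
    ET.SubElement(root, "course").text = course
    ET.SubElement(root, "score").text = str(score)
    return ET.tostring(root, encoding="utf-8")

def parse_score_payload(payload: bytes) -> dict:
    """The receiving tool recovers the same fields from the XML."""
    root = ET.fromstring(payload)
    return {child.tag: child.text for child in root}

# In a full OLN this payload would travel via a secure HTTP POST;
# here we simply round-trip it to show the shared vocabulary at work.
payload = build_score_payload("s12345", "EDTECH101", 87.5)
record = parse_score_payload(payload)
print(record["score"])  # "87.5"
```

The point is not the XML itself but the agreement: once both applications share the same nouns and verbs, either one can originate or receive the data without knowing anything else about the other.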

A full-featured OLN. Source: Mott, J. (2010) Envisioning the Post-LMS Era: The Open Learning Network, EDUCAUSE Quarterly, Volume 33, Number 1.

The ILE model is not as technologically complex! It makes no demands for interoperability among the components of the PLE, ILE and LMS; in fact, it celebrates their independence. The common denominator is authentic assessment, which represents the sum of all learning regardless of its sources.

Digging deeper

So while the major difference between the OLN and ILE is apparently their respective technical framework, that is simply a manifestation of their true difference: pedagogical philosophy.

I see the OLN as a solution for monitoring the student’s progress during a program of study in the digital age. It formalises the informal. The underpinning pedagogical philosophy, therefore, is formal learning – which I recognise as entirely appropriate in the Higher Education environment.

In the workplace, however, the vast majority of learning is informal. I would even go so far as to suggest that we hinder the learning process by drowning it in bureaucracy.

I see the ILE as a solution for self-directed learning, peer-to-peer discourse and knowledge sharing. It informalises the formal. The underpinning pedagogical philosophy, therefore, is informal learning – which is crying out for support in the corporate sector.

Horses for courses

So yes, Mott and I propose two similar yet fundamentally different learning models – but we have our reasons.

Neither is necessarily right; neither is necessarily wrong.

It all depends on context.
 

