In “Roses are red”, I proposed definitions for oft-used yet ambiguous terms such as “competency” and “capability”.
Not only did I suggest that a competency be considered a task, but also that its measurement be binary: competent or not yet competent.
As a more general construct, a capability is not so readily measured in a binary fashion. For instance, the question is unlikely to be whether you can analyse data, but the degree to which you can do so. Hence capabilities are preferably measured via a proficiency scale.
Of course, numerous proficiency scales exist. For example:
The NIH Proficiency Scale maintains Not Applicable, Fundamental Awareness (basic knowledge), Novice (limited experience), Intermediate (practical application), Advanced (applied theory) and Expert (recognized authority).
The Common European Framework of Reference for Languages maintains Basic User, Independent User and Proficient User.
The Core Skills for Work Developmental Framework maintains A Novice Performer, An Advanced Beginner, A Capable Performer, A Proficient Performer and An Expert Performer.
The NSW Public Sector Capability Framework maintains Foundational, Intermediate, Adept, Advanced and Highly Advanced, while the accompanying Occupation Specific Capability Sets deviate from the established pattern and instead maintain Level 1, Level 2, Level 3, Level 4 and Level 5.
No doubt each of these scales aligns to the purpose for which it was defined. So I wonder if a scale for the purpose of organisational development might align to the Kirkpatrick Model of Evaluation:
| Level | Proficiency | Evidence |
|-------|-------------|----------|
| 0 | Not Yet Assessed | None |
| 1 | Self Rater | Self rated |
| 2 | Knower | Passes an assessment |
| 3 | Doer | Observed by others |
| 4 | Performer | Meets relevant KPIs |
Table 1. Tracey Proficiency Scale (CC BY-NC-SA)
I contend that such a scale simplifies the measurement of proficiency for L&D professionals, and is expressed in language that is clear and self-evident to our target audience.
Hence it is, ahem, scalable across the organisation.