The growing need for open frameworks of learning outcomes

(A contribution to Open Education Week — see note at end.)

What is the need?

Imagine what could happen if we had really good sets of usable open learning outcomes, across academic subjects, occupations and professions. It would be easy to express and then trace the relationships between any learning outcomes. To start with, it would be easy to find out which higher-level learning outcomes are composed, in a general consensus view, of which lower-level outcomes.

Some examples … In academic study, for example around a more complex topic from calculus, it could be made clear what other mathematics needs to be mastered first (see this recent example, which lists, but does not structure). In management, it would be made clear, for instance, what needs to be mastered in order to be able to advise on intellectual property rights. In medicine, to pluck another example out of the air, it would be clarified what the necessary components of competent dementia care are. Imagine this is all done, and each learning outcome or competence definition, at each level, is given a clear and unambiguous identifier. Further, imagine all these identifiers are in HTTP IRI/URI/URL format, as is envisaged for Linked Data and the Semantic Web. Imagine that putting the URL into your browser leads you straight to results giving information about that learning outcome. And in time it would become possible to trace not just what is composed of what, but other relationships between outcomes: equivalence, similarity, origin, etc.
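To make this more concrete, here is a minimal sketch (in Python, with made-up example URIs rather than any real service) of how outcomes identified by HTTP URIs might record which lower-level outcomes they are composed of, and how that composition could then be traced automatically:

```python
# A minimal sketch: learning outcomes identified by hypothetical HTTP URIs,
# each recording which lower-level outcomes it is composed of.

OUTCOMES = {
    "http://example.org/loc/dementia-care": {
        "title": "Provide competent dementia care",
        "composedOf": [
            "http://example.org/loc/person-centred-communication",
            "http://example.org/loc/medication-management",
        ],
    },
    "http://example.org/loc/person-centred-communication": {
        "title": "Communicate in a person-centred way",
        "composedOf": [],
    },
    "http://example.org/loc/medication-management": {
        "title": "Manage medication safely",
        "composedOf": [],
    },
}

def components(uri: str) -> list[str]:
    """Return every lower-level outcome URI that a higher-level outcome is composed of."""
    found = []
    for part in OUTCOMES.get(uri, {}).get("composedOf", []):
        found.append(part)
        found.extend(components(part))  # recurse into sub-components
    return found

print(components("http://example.org/loc/dementia-care"))
```

In a real Linked Data setting the information would of course be fetched by dereferencing the URIs themselves, rather than read from a local dictionary as in this toy example.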

It won’t surprise anyone who has read other pieces from me that I am putting forward one technical specification as part of an answer to what is needed: InLOC.

So what could then happen?

Every course, every training opportunity, however large or small, could be tagged with the learning outcomes that are intended to result from it. Every educational resource (as in “OER”) could be similarly tagged. Every person’s learning record, every person’s CV, people’s electronic portfolios, could have each individual point referred, unambiguously, to one or more learning outcomes. Every job advert or offer could specify precisely which learning outcomes candidates need to have achieved to have a chance of being selected.
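As a rough illustration of the kind of matching this would enable (the URIs are again invented for the example), once job adverts and learning records are tagged with the same outcome identifiers, comparing them becomes a simple, unambiguous set operation:

```python
# Hypothetical outcome URIs used as shared tags on a job advert and a person's record.
job_requires = {
    "http://example.org/loc/ipr-advice",
    "http://example.org/loc/contract-negotiation",
}

candidate_achieved = {
    "http://example.org/loc/ipr-advice",
    "http://example.org/loc/project-management",
}

# Because both sides use the same identifiers, the comparison is unambiguous.
covered = job_requires & candidate_achieved
missing = job_requires - candidate_achieved

print("covered:", covered)
print("still needed:", missing)
```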

All these things could be linked together, leading to a huge increase in clarity, a vast improvement in the efficiency of relevant web-based search services, and generally a much better experience for people in personal, occupational and professional training and development, and ultimately in finding jobs or recruiting people to fill vacancies, right down to finding the right person to do a small job for you.

So why doesn’t that happen already? To answer that, we need to look at what is actually out there, what it doesn’t offer, and what can be done about it.

What is out there?

Frameworks, that is, structures of learning outcomes, skills, competences, or similar things under other names, are surprisingly common in the UK. For many years now, Sector Skills Councils (SSCs) and other similar bodies have been producing National Occupational Standards (NOSs), which provide the basis for all National Vocational Qualifications (NVQs). In theory at least, this has meant that the industry representatives in the SSCs make sure that the needs of industry are reflected in the assessment criteria for awarding NVQs, generally regarded as useful and prized qualifications, at least in occupations that are not classed as “professional”.

NOSs have always been published openly, and they are still available to be searched and downloaded at the UKCES’s NOS site. The site provides a search page. As one of my current interests is corporate governance, I put that phrase into the search box, which gave several results, including a NOS called CFABAI131 Support corporate decision-making (a PDF document). It’s a short document, with a few lines of overview, six performance criteria, each expressed as one sentence, and 15 items of knowledge and understanding, which is what is seen to be needed to underpin competent performance. It serves to let us all know what industry representatives think is important in that support function.

In professional training and development, practice has been more diverse. At one pole, the medical profession has been very keen to document all the skills and competences that doctors should have, and keen to ensure that these are reflected in medical education. The GMC publishes Tomorrow’s Doctors, introduced as follows:

The GMC sets the knowledge, skills and behaviours that medical students learn at UK medical schools: these are the outcomes that new UK graduates must be able to demonstrate.

Tomorrow’s Doctors covers the outline of the whole syllabus. It prepares the ground for doctors to move on to working in line with Good Medical Practice — in essence, the GMC’s list of requirements for someone to be recognised as a competent doctor.

The medical field is probably the best developed in this way. Some other professions, for example engineering and teaching, have some general frameworks in place. Yet others may only have paper documentation, if any at all.

Beyond the confines of such enclaves of good practice, yet more diverse structures of learning outcomes can be found, which may be incoherent and conflicting, particularly where there is no authority or effective body charged with bringing people to consensus. There are few restrictions on who can now offer a training course, and ask for it to be accredited. It doesn’t have to be consistent with a NOS, let alone have the richer technical infrastructure hinted at above. In Higher Education, people have started to think in terms of learning outcomes (see e.g. the excellent Writing and using good learning outcomes by David Baume), but, lacking sufficient motivation to do otherwise, intended learning outcomes tend to be oriented towards institutional assessment processes, rather than to the needs of employers, or learners themselves. In FE, the standardisation influence of NOSs has been weakened and diluted.

In schools in the UK there is little evidence of useful common learning outcomes being used, though (mainly) for the USA there exists the Achievement Standards Network (ASN), documenting a very wide range of school curricula and some other things. It has recently been taken over by private interests (Desire2Learn) because no central funding is available for this kind of service in the USA.

What do these not offer?

The ASN is a brilliant piece of work, considering its age. Also related to its age, it has been constructed mainly by processing paper-style documentation into the ASN web site, which includes allocating ASN URIs. It hasn’t been used much by authorities constructing their own learning outcome frameworks, with URIs belonging to their own domains, though it could in principle be.

Apart from ASN, practically none of the other frameworks that are openly available (and none that are not) have published URIs for every component. Without these URIs, it is much harder to identify, unambiguously, which learning outcome one is referring to, and virtually impossible to check that automatically. So the quality of any computer assisted searching or matching will inevitably be at best compromised, at worst non-existent.

As learning outcomes are not easily searchable (outside specific areas like NOSs), the tendency is to reinvent them each time they are written. Even similar outcomes, whatever the level, routinely seem to be reinvented and rewritten without cross-reference to ones that already exist. Thus it becomes impossible in practice to see whether a learning opportunity or educational resource is roughly equivalent to another one in terms of its learning outcomes.

Thus there is little effective transparency and no easy comparison, only the confusion of finding it practically impossible to do the useful things envisaged above.

What is needed?

What is needed is, on the one hand, much richer support for bodies to construct useful frameworks, and on the other hand, good examples leading the way, as should be expected from public bodies.

And as a part of this support, we need standard ways of modelling, representing, encoding, and communicating learning outcomes and competences. It was just towards these ends that InLOC was commissioned. There’s a hint in the name: Integrating Learning Outcomes and Competences. InLOC is also known as ELM 2.0, where ELM stands for European Learner Mobility, within which InLOC represents part of a powerful proposed infrastructure. It has been developed under the auspices of the CEN Workshop, Learning Technologies, and funded by the DG Enterprise’s ICT Standardization Work Programme.

InLOC, fully developed, would really be the icing on the cake. Even if people did no more than publish stable URIs to go with every component of every framework or structure of learning outcomes or competencies, that would be a great step forward. The existence and openness of InLOC provides some of the motivation and encouragement for everyone to get on with documenting their learning outcomes in a way that is not only open in terms of rights and licences, but open in terms of practice and effect.


The third annual Open Education Week takes place from 10-15 March 2014. As described on the Open Education Week web site, “its purpose is to raise awareness about the movement and its impact on teaching and learning worldwide”.

Cetis staff are supporting Open Education Week by publishing a series of blog posts about open education activities. Cetis have had long-standing involvement in open education and have published a range of papers which cover topics such as OERs (Open Educational Resources) and MOOCs (Massive Open Online Courses).

The Cetis blog provides access to the posts which describe Cetis activities concerned with a range of open education activities.

What is my work?

Is there a good term for my specialist area of work for CETIS? I’ve been trying out “technology for learner support”, but that doesn’t fully seem to fit the bill. If I try to explain, reflecting on 10 years (as of this month) involvement with CETIS, might readers be able to help me?

Back in 2002, CETIS (through the CRA) had a small team working with “LIPSIG”, the CETIS special interest group involved with Learner Information (the “LI” of “LIPSIG”). Except that “learner information” wasn’t a particularly good title. It was also about the technology (soon to be labelled “e-portfolio”) that gathered and managed certain kinds of information related to learners, including their learning, their skills – abilities – competence, their development, and their plans. It was therefore also about PDP — Personal Development Planning — and PDP was known even then by its published definition “a structured and supported process undertaken by an individual to reflect upon their own learning, performance and/or achievement and to plan for their personal, educational and career development”.

There’s that root word, support (appearing as “supported”), and PDP is clearly about an “individual” in the learner role. Portfolio tools were, and still are, thought of as supporting people: in their learning; with the knowledge and skills they may attain, and evidence of these through their performance; their development as people, including their learning and work roles.

If you search the web now for “learner support”, you may get many results about funding — OK, that is financial support. Narrowing the search down to “technology for learner support”, the JISC RSC site mentions enabling “learners to be supported with their own particular learning issues”, and this doesn’t obviously imply support for everyone, but rather for those people with “issues”.

As web search is not much help, let’s take a step back, and try to see this area in a wider perspective. Over my 10 years involvement with CETIS, I have gradually come to see CETIS work as being in three overlapping areas. I see educational (or learning) technology, and related interoperability standards, as being aimed at:

  • institutions, to help them manage teaching, learning, and other processes;
  • providers of learning resources, to help those resources be stored, indexed, and found when appropriate;
  • individual learners;
  • perhaps there should be a branch aimed at employers, but that doesn’t seem to have been salient in CETIS work up to now.

Relatively speaking, there have always seemed to be plenty of resources to back up CETIS work in the first two areas, perhaps because we are dealing with powerful organisations and large amounts of money. But, rather than get involved in those two areas, I have always been drawn to the third — to the learner — and I don’t think it’s difficult to understand why. When I was a teacher for a short while, I was interested not in educational administration or writing textbooks, but in helping individuals learn, grow and develop. Similar themes pervade my long term interests in psychology, psychotherapy, counselling; my PhD was about cognitive science; my university teaching was about human-computer interaction — all to do with understanding and supporting individuals, and much of it involving the use of technology.

The question is, what does CETIS do — what can anyone do — for individual learners, either with the technology, or with the interoperability standards that allow ICT systems to work together?

The CETIS starting point may have been about “learner information”, but who benefits from this information? Instead of focusing on learners’ needs, it is all too easy for institutions to understand “learner information” as information that enables institutions to manage and control the learners. Happily though, the group of e-portfolio systems developers frequenting what became the “Portfolio” SIG (including Pebble, CIEPD and others) were keen to emphasise control by learners, and when they came together over the initiative that became Leap2A, nearly six years ago, the focus on supporting learners and learning was clear.

So at least then CETIS had a clear line of work in the area of e-portfolio tools and related interoperability standards. That technology is aimed at supporting personal, and increasingly professional, development. Partly, this can be by supporting learners taking responsibility for tracking the outcomes of their own learning. Several generic skills or competences support their development as people, as well as their roles as professionals or learners. But also, the fact that learners enter information about their own learning and development on the portfolio (or whatever) system means that the information can easily be made available to mentors, peers, or whoever else may want to support them. This means that support from people is easier to arrange, and better informed, thus likely to be more effective. Thus, the technology supports learners and learning indirectly, as well as directly.

That’s one thing that the phrase “technology for learner support” may miss — support for the processes of other people supporting the learner.

Picking up my personal path … building on my involvement in PDP and portfolio technology, it became clear that current representations of information about skills and competence were not as effective as they could be in supporting, for instance, the transition from education to work. So it was, that I found myself involved in the area that is currently the main focus of my work, both for CETIS, and also on my own account, through the InLOC project. This relates to learners rather indirectly: InLOC is enabling the communication and reuse of definitions and descriptions of learning outcomes and competence information, and particularly structures of sets of such definitions — which have up to now escaped an effective and well-adopted standard representation. Providing this will mean that it will be much easier for educators and employers to refer to the same definitions; and that should make a big positive difference to learners being able to prepare themselves effectively for the demands of their chosen work; or perhaps enable them to choose courses that will lead to the kind of work they want. Easier, clearer and more accurate descriptions of abilities surely must support all processes relating to people acquiring and evidencing abilities, and making use of related evidence towards their jobs, their well-being, and maybe the well-being of others.

My most recent interests are evidenced in my last two blog posts — Critical friendship pointer and Follower guidance: concept and rationale — where I have been starting to grapple with yet more complex issues. People benefit from appropriate guidance, but it is unlikely there will ever be the resources to provide this guidance from “experts” to everyone — if that is even what we really wanted.

I see these issues also as part of the broad concern with helping people learn, grow and develop. To provide full support without information technology only looks possible in a society that is stable — where roles are fixed and everyone knows their place, and the place of others they relate to. In such a traditionalist society, anyone and everyone can play their part maintaining the “social order” — but, sadly, such a fixed social order does not allow people to strike out in their own new ways. In any case, that is not our modern (and “modernist”) society.

I’ve just been reading Herman Hesse’s “Journey to the East” — a short, allegorical work. (It has been reproduced online.) Interestingly, it describes symbolically the kind of processes that people might have to go through in the course of their journey to personal enlightenment. The description is in no way realistic. Any “League” such as Hesse described, dedicated to supporting people on their journey, or quest, would practically be able to support only very few at most. Hesse had no personal information technology.

Robert K. Greenleaf was inspired by Hesse’s book to develop his ideas on “Servant Leadership”. His book of that name was put together in 1977, still before the widespread use of personal information technology, and the recognition of its potential. This idea of servant leadership is also very clearly about supporting people on their journey; supporting their development, personally and professionally. What information would be relevant to this?

Providing technology to support peer-to-peer human processes seems a very promising approach to allowing everyone to find their own, unique and personal way. What I wrote about follower guidance is related to this end: to describe ways by which we can offer each other helpful mutual support to guide our personal journeys, in work as well as learning and potentially other areas of life. Is there a short name for this? How can technology support it?

My involvement with Unlike Minds reminds me that there is a more important, wider concept than personal learning, which needs supporting. We should be aspiring even more to support personal well-being. And one way of doing this is through supporting individuals with information relevant to the decisions they make that affect their personal well-being. This can easily be seen to include: what options there are; ideas on how to make decisions; what the consequences of those decisions may be. It is an area which has been more than touched on under the heading “Information, Advice and Guidance”.

I mentioned the developmental models of William G Perry and Robert Kegan back in my post earlier this year on academic humility. An understanding of these aspects of personal development is an essential part of what I have come to see as needed. How can we support people’s movement through Perry’s “positions”, or Kegan’s “orders of consciousness”? Recognising where people are in this, developmental, dimension is vital to informing effective support in so many ways.

My professional interest, where I have a very particular contribution, is around the representation of the information connected with all these areas. That’s what we try to deal with for interoperability and standardisation. So what do we have here? A quick attempt at a round-up…

  • Information about people (learners).
  • Information about what they have learned (learning outcomes, knowledge, skill, competence).
  • Information that learners find useful for their learning and development.
  • Information about many subtler aspects of personal development.
  • Information relevant to people’s well-being, including
    • information about possible choices and their likely outcomes
    • information about individual decision-making styles and capabilities
    • and, as this is highly context-dependent, information about contexts as well.
  • Information about other people who could help them
    • information supporting how to find and relate to those people
    • information supporting those relationships and the support processes
    • and in particular, the kind of information that would promote a trusting and trusted relationship — to do with personal values.

I have the strong sense that this all should be related. But the field as a whole doesn’t seem to have a name. I am clear that it is not just the same as the other two areas (in my mind at least) of CETIS work:

  • information of direct relevance to institutions
  • information of direct relevance to content providers.

Of course my own area of interest is also relevant to those other players. Personal well-being is vital to the “student experience”, and thus to student retention, as well as to success in learning. That is of great interest to institutions. Knowing about individuals is of great value to those wanting to sell all kinds of services to them, but particularly services to do with learning and resources supporting learning.

But now I ask people to think: where there is an overlap between information that the learner has an interest in, and information about learners of interest to institutions and content providers, surely the information should be under the control of the individual, not of those organisations?

What is the sum of this information?

Can we name that information and reclaim it?

Again, can people help me name this field, so my area of work can be better understood and recognised?

If you can, you earn 10 years worth of thanks…

Developing a new approach to competence representation

InLOC is a European project organised to come up with a good way of communicating structures or frameworks of competence, learning outcomes etc. We’ve now produced our interim reports for consultation: the Information Model and the Guidelines. We welcome feedback from everyone, to ensure this becomes genuinely useful and not just another academic exercise.

The reason I’ve not written any blog posts for a few weeks is that so much of my energy has been going into InLOC, and for good reason. It has been a really exciting time working with the team to develop a better approach to representing these things. Many of us have been pushing in this direction for years, without ever quite getting there. Several projects have been nearby, including, last year, InteropAbility (JISC page; project wiki) and eCOTOOL (project web site; my Competence Model page) — I’ve blogged about these before, and we have built on ideas from both of them, as well as from several other sources: you may be surprised at the range and variety of “stakeholders” in this area that we have assembled within InLOC. Doing the thinking for the Logic of Competence series was of course useful background, but it did not quite get there either.

What I want to announce now is that we are looking for the widest possible feedback as further input to the project. It’s all too easy for people like us, familiar with interoperability specifications, simply to cook up a new one. It is far more of a challenge, as well as hugely more worthwhile and satisfying, to create something genuinely useful, which people will actually use. We have been looking at other groups’ work for several months now, and discussing the rich, varied, and sometimes confusing ideas going around the community. Now we have made our own initial synthesis, and handed in the “interim” draft agreements, it is an excellent time to carry forward the wide and deep consultation process. We want to discuss with people whether our InLOC format will work for them; whether they can adopt, use or recommend it (or whatever their role is to do with specifications); or what improvements need to be made so that they are most likely to take it on for real.

By the end of November we are planning to have completed this intense consultation, and we hope to end up with the desired genuinely useful results.

There are several features of this model which may be innovative (or seem so until someone points out somewhere they have been done before!); a rough code sketch after the list illustrates the first two.

  1. Relationships aren’t just direct as in RDF — there is a separate class to contain the relationship information. This allows extra information, including a number, vital for defining levels.
  2. We distinguish the normal simple properties, with literal objects, which are treated as integral parts of whatever it is (including: identifier, title, description, dates, etc.) from what could be called “compound properties”. Compound properties, that have more than one part to their range, are a little like relationships, and we give them a special property class, allowing labels, and a number (like in relationships).
  3. We have arranged for the logical structure, including the relationships and compound properties, to be largely independent of the representation structure. This allows several variant approaches to structuring, including tree structures, flat structures, or Atom-like structures.
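By way of illustration only (this is my own rough sketch in Python, not the normative InLOC binding or vocabulary), the first two features might be modelled along these lines:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LOCDefinition:
    """Simple literal properties (identifier, title, description) stay as ordinary
    fields, integral parts of the definition itself."""
    id: str
    title: str
    description: str = ""

@dataclass
class LOCRelationship:
    """Feature 1: a relationship is an object in its own right, so it can carry
    extra information, including a number - vital for defining levels."""
    subject_id: str              # URI of the structure or definition it belongs to
    rel_type: str                # illustrative type name, e.g. "hasPart" or "hasDefinedLevel"
    object_id: str               # URI of the related definition
    number: Optional[float] = None
    label: Optional[str] = None

@dataclass
class LOCCompoundProperty:
    """Feature 2: 'compound properties' have more than one part to their range,
    so, like relationships, they get their own class with a label and a number."""
    subject_id: str
    prop_type: str               # illustrative, e.g. "credit"
    label: str
    number: Optional[float] = None

# A level relationship carrying a number (all URIs and terms are invented):
b1_level = LOCRelationship(
    subject_id="http://example.org/loc/spoken-french",
    rel_type="hasDefinedLevel",
    object_id="http://example.org/loc/spoken-french/levels/b1",
    number=3,
    label="Level B1",
)
```

Because the relationships and compound properties are objects in their own right, the same logical content could equally well be serialised as a tree, a flat structure, or an Atom-like structure, which is the point of the third feature.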

The outcome is something that is slightly reminiscent both of Atom itself, and of Topic Maps. Neither is much like RDF, which uses the simplest possible building blocks, but at the cost of needing harder-to-grasp constructs like blank nodes. Being hard to grasp leads to people trying different ways of doing things, and possibly losing interoperability on the way. Both Atom and Topic Maps, in contrast, add a little more general-purpose structure, which makes quite a lot of intuitive sense in both cases, and they have been used widely, apparently with little troublesome divergence.

Are we therefore, in InLOC, trying to feel our way towards a general-purpose way of representing substantial hierarchical structures of independently existing units, in a way that makes more intuitive sense than elementary approaches to representing hierarchies? General taxonomies are simply trying to represent the relationships between concepts, whereas in InLOC we are dealing with a field where, for many years, people have recognised that the structure is an important entity in its own right — so much so that it has seemed hard to treat the components of existing structures (or “frameworks”) as independent and reusable.

So, see what you think, and please tell me, or one of the team, what you do honestly think. And let’s discuss it. The relevant links are also available straight from the InLOC wiki home page. And if you are responsible for creating or maintaining structures of intended learning outcomes, skills, competences, competencies, etc., then you are more than welcome to try out our new approach, that we hope combines ease of understanding with the power to express just what you want to express in your “framework”, and that you will be persuaded to use it “for real”, perhaps when we have made the improvements that you need.

We envisage a future when many ICT tools can use the same structures of learning outcomes and competences, saving effort, opening up interoperability, and greatly increasing the possibilities for services to build on top of each other. But you probably don’t need reminding of the value of those goals. We’re just trying to help along the way.

The logic of tourism as an analogy for competence

(20th in my logic of competence series.)

Modelling competence is too far removed from common experience to be intuitive. So I’ve been thinking of what analogy might help. How about the analogy of tourism? This may help particularly with understanding the duality between competence frameworks (like tourist itineraries) and competence concept definitions (like tourist destinations).

The analogy is helped by the fact that last week I was in Lisbon for the first time, at work (the CEN WS-LT and TC 353), but also more relevantly as a tourist. (If you don’t know Lisbon, think of examples to suit your own chosen place to visit, that you know better.) I’ll start with the aspects of the analogy that seem to be most straightforward, and go on to more subtle features.

First things first, then: a tourist itinerary includes a list of destinations. This can be formalised as a guided tour, or left informal as a “things you should see” list given by a friend who has been there. A destination can be in any number of itineraries, or none. An itinerary has to include some destinations, but in principle it doesn’t have any upper limits: it could be a very detailed itinerary that takes a year to properly acquaint a newcomer with the ins and outs of the city. Different itineraries for the same place may have more, or fewer, destinations within that place. They may or may not agree on the destinations included. If there were destinations included by the large majority of guides, another guide could select these as the “essential” Lisbon or wherever. In this case, perhaps that would include visiting the Belem tower; the Castle of St George; Sintra; experiencing Fado; sampling the local food, particularly fish dishes; and a ride on one of the funicular trams that climb the steep hills. Or maybe not, in each case. There again, you could debate whether Sintra should be included in a guide to Lisbon, or just mentioned as a day trip.

A small itinerary could be made for a single destination, if desired. Some guides may just point you to a museum or destination as a whole; others may give detailed suggestions for what you should see within that destination. A cursory guide might say that you should visit Sintra; a detailed one might say that you really must visit the Castle of the Moors in Sintra, as well as other particular places in Sintra. A very detailed guide might direct you to particular things to see in the Castle of the Moors itself.

It should be clear from the above discussion that a place to visit should not be confused with an itinerary for that place. Any real place has an unlimited number of possible itineraries for it. An itinerary for a city may include a museum; an itinerary for a museum may include a painting; there may sometimes even be guides to a painting that direct the viewer to particular features of that painting. The guide to the painting is not the painting; the guide to the museum is not the museum; the guide to the city is not the city.

There might also be guides that do not propose particular itineraries, but list many places you might go, and you select yourself. In these cases, some kind of categorisation might be used to help you select the places of interest to you. What period of history do they come from? Are they busy or quiet? What do they cost? How long do they take to visit? Or a guide with itineraries may also categorise attractions, and make them explicitly optional. Optionality might be particularly helpful in guided tours, so that people can leave out things of less interest.

If a set of guides covered several whole places, not just one, it might make comparisons across the different places. If you liked the Cathar castles in the South of France, you may like the Castle of the Moors in Sintra. Those who like stately homes, on the other hand, may be given other suggestions.

A guide to a destination may also contain more than an itinerary of included destinations within it. A guidebook may give historical or cultural background information, which goes beyond the description of the destinations. Guides may also propose a visit sequence, which is not inherent in the destinations.

The features I have described above are reasonably replicated in discussion of competence. A guide or itinerary corresponds to a competence framework; a destination corresponds to a competence concept. This is largely intended to throw further light on what I discussed in number 12 in this series, Representing the interplay between competence definitions and structures.

Differences

One difference is that tourist destinations have independent existence in the physical world, whereas competence concepts do not. It may therefore be easier to understand what is being referred to in a guide book, from a short description, than in a competence framework. Both guide book and competence framework may rely on context. When a guide book says “the entrance”, you know it means the entrance to the location you are reading about, or may be visiting.

Physical embodiment brings clarity and constraints. Smaller places may be located within larger places, and this is relatively clear. But it is less clear whether lesser competence concepts are part of greater competence concepts. What one can say (and this carries through from the tourism analogy) is that concepts are included in frameworks (or not), and that any concept may be detailed by (any number of) frameworks.

Competence frameworks and concepts are more dependent on the words used in description, and because a description necessarily chooses particular words, it is easy to confuse the concept with the framework if they use the same words. It is easy to use the words of a descriptive framework to describe a concept. It is not so common, though perfectly possible, to use the description of an itinerary as a description of a place. It is because of this greater dependence on words (compared with tourist guides) that it may be more necessary to clarify the context of a competence concept definition, in order to understand what it actually means.

Where the analogy with competence breaks down more seriously is that high stakes decisions rarely depend on exactly where someone has visited. But at a stretch of the imagination, they could: recruitment for a relief tour guide could depend on having visited all of a given set of destinations, and being able to answer questions about them. What high stakes promotes is the sense that a particular structure (as defined or adopted by the body controlling the high-stakes decisions) defines a particular competence concept. Despite that, I assert that the competence structure and the separate competence concept remain strictly separate kinds of thing.

Understanding the logic of competence through this analogy

The features of competence models that are illustrated here are these (a small code sketch after the list shows some of them):

  • Competence frameworks or structures may include relevant competence concepts, as well as other material. (See № 12.)
  • Competence concept definitions may be detailed by a framework structure for that competence concept. Nevertheless the structure does not fully define the concept. (See № 12 and № 13.)
  • Competence frameworks may include optional competences (as well as necessary or mandatory ones). (See № 15 and № 7.)
  • Both frameworks and concepts may be categorised. (See also № 5.)
  • Frameworks may contain sub-frameworks (just as itineraries may contain sub-itineraries).
  • But frameworks don’t contain concepts in the same way: they just include them (or not).
  • A framework may be simply an unstructured list of defined concepts.
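Here is a very small sketch (my own illustration in Python, with invented identifiers, not part of any specification) of some of these features: a framework includes concepts, possibly marked as optional, may contain sub-frameworks, may be offered as detailing a particular concept without fully defining it, and the same concept can be included by any number of frameworks:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CompetenceConcept:
    """Like a destination: it exists independently of any itinerary."""
    id: str
    label: str

@dataclass
class Framework:
    """Like an itinerary or guide: it includes concepts (each possibly optional),
    may contain sub-frameworks, and may detail one concept without defining it."""
    id: str
    title: str
    includes: list = field(default_factory=list)        # list of (CompetenceConcept, optional?) pairs
    sub_frameworks: list = field(default_factory=list)  # list of Framework
    details: Optional[CompetenceConcept] = None

dementia_care = CompetenceConcept("ex:dementia-care", "Competent dementia care")

framework_a = Framework("ex:framework-a", "Care certificate framework",
                        includes=[(dementia_care, False)], details=dementia_care)
framework_b = Framework("ex:framework-b", "In-house induction checklist",
                        includes=[(dementia_care, True)])  # same concept, different framework
```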

I hope that helps anyone to understand more of the logic of competence, and I hope that also helps InLOC colleagues come to consensus on the related matters.

More and less specificity in competence definitions

(19th in my logic of competence series.)

Descriptions of personal ability can serve either as claims, like “This is what I am good at …”, or as answers to questions like “What are you good at?” or “can you … ?” In conversations — whether informally, or formally as in a job interview — the claims, questions, and answers may be more or less specific. That is a necessary and natural feature of communication. It is the implications of this that I want to explore here, as they bear on my current work, in particular including the InLOC project.

This is a new theme in my logic of competence series. Since the previous post in that series, I had to focus on completing the eCOTOOL competence model and managing the initial phases of InLOC, which left little time for following up earlier thinking. But there were ideas clearly evident in my last post in this series (representing level relationships) and now is the time for followup and development. The terms introduced previously there can be linked to this new idea of specificity. Simply: binarily assessable concepts are ones that are defined specifically enough for a yes/no judgement about a person’s ability; rankably assessable concepts have an intermediate degree of specificity, and are complemented by level definitions; while unorderly assessable concepts are ones that are less specifically defined, requiring more specificity to be properly assessable. (See that previous post for explanation of those terms.) The least specific competence-related concepts are not properly assessable at all, but serve as tags or headings.
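To keep that terminology in one place, here is a small illustrative sketch (my own summary, not a normative classification), assigning to each degree of specificity an example that appears elsewhere in this post or its neighbours:

```python
from enum import Enum

class Assessability(Enum):
    BINARY = "binarily assessable"      # specific enough for a yes/no judgement
    RANKABLE = "rankably assessable"    # intermediate specificity, complemented by level definitions
    UNORDERLY = "unorderly assessable"  # needs more specificity before proper assessment
    TAG = "not properly assessable"     # least specific: serves as a tag or heading

# Illustrative assignments only; reasonable people might classify these differently.
examples = {
    "place equipment and materials in the correct location ready for use": Assessability.BINARY,
    "spoken interaction in French": Assessability.RANKABLE,
    "understanding of the characteristics of earth materials": Assessability.UNORDERLY,
    "ICT": Assessability.TAG,
}
```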

As well as giving weight and depth to this idea of specificity in competence definitions, in this post I want to explore the connection between competence definitions and answering questions. I think this will help to explain the ideas, because it is relatively straightforward to understand that questions and answers can be more or less specific.

Since the previous post in the series, my terminology has shifted slightly. The goals of InLOC — Integrating Learning Outcomes and Competences — have made it plain that we need to deal equally with learning outcomes and with competence or ability concepts. So I include “learning outcomes” more liberally, always meaning intended learning outcomes.

Job interviews

Imagine you are interviewing someone for a job. To make it more interesting, let’s make it an informal one: perhaps a mutual business contact has introduced you to a promising person at a business event. Add a little pressure by imagining that you have just a few minutes to make up your mind whether you want to ask this person to go through a longer, formal process. How would you structure the interview, and what questions would you ask?

As I envisage the process, one would probably start off with quite general, less specific questions, and then go into more detail where appropriate, where it mattered. So, for instance, one might ask “are you a programmer?”, and if the answer was yes, go into more detail about languages, development environments, length of experience, type of experience, etc. etc. The useful detail in this case would depend entirely on the circumstances of the job. For a graduate to be recruited into a large company, what matters might be aptitude, as it would be likely that full training would be supplied (which you could perhaps see as a kind of technical “enculturation”). On the other hand, for a specialist to join a short-term high-stakes project, even small details might matter a lot, as learning time would probably be minimal.

In reality, most job interviews start, not from a blank sheet, but from the basis of a job advert, and an application form, or CV and covering letter. A job advert may specify requirements; an application form may contain specific questions for which answers are expected, but in the absence of an application form, a CV and covering letter need to try to answer, concisely, some of the key questions that would be asked first in an informal, unprepared job interview. This naturally explains the universal advice that CVs should be designed specifically for each job application. What you say about yourself unprompted not only reveals that information itself, but also says much about what you expect the other person to reckon as significant or interesting.

So, in the job interview, we notice the natural importance of varying specificity in descriptions and questions about abilities and experience.

Recruitment

This then carries over to the wider recruitment process. Potential employers often formulate a list of what is required of prospective employees, in terms of which abilities and experience are essential or desirable, but the detail and specificity of each item will naturally vary. The evidence for a less specific requirement may be assessed at interview with some quick general questions, but a more exacting requirement may want harder evidence such as a qualification, certificate or testimonial from an expert witness.

For example, in a regulated world such as pesticides that I wrote about recently, an employer might well want a prospective employee to have obtained a relevant certificate or qualification, so that they can legally do their job. Even when a certificate is not a legal requirement, some are widely asked for. A prospective sales employee with a driving licence or an office employee with an ECDL might be preferred over one without, and it would be perfectly reasonable for an employer to insist that non-native speakers had obtained a given certified level of proficiency in the principal workplace language. In each case, because the certificate is awarded only to people who have passed a carefully controlled test, the test result serves to answer many quite specific questions about the holder’s abilities, as well as the potential legal fact of their being allowed to perform certain actions in regulated occupations.

Vocational qualifications often detail quite specifically what holders are able to do. This is clearly the intention of the Europass Certificate Supplement (ECS), and has been in the UK, through the system of National Vocational Qualifications, relying on National Occupational Standards. So we could expect that employers with specific learning outcome or competence requirements may specify that candidates should have particular vocational qualifications; but what about less specific requirements? My guess is that those employers who have little regard for vocational qualifications are just those whose requirements are less specific. Time was when many employers looked only for a “good degree”, which in the UK often meant a “2:1”, an upper second class. This was supposed to answer generic questions, as typically the specific subject of the degree was not specified. Now there is a growing emphasis on the detail of the degree transcript or Europass Diploma Supplement (EDS), from which a prospective employer can read at least assessment results, if not yet explicit details of learning outcomes or competences. There is also an increasing trend towards making explicit the intended learning outcomes of courses at all levels, so the course information might be more informative than the transcript or EDS.

Interestingly, the CVs of many technical workers contain highly unspecific lists of programming languages that the individual implicitly claims, stating nothing about the detailed abilities and experience. These lists answer only the most general questions, and serve effectively only to open a conversation about what the person’s actual experience and achievements have been in those programming languages. At least for human languages there is the increasingly used CEFR; there does not appear to be any such widely recognised framework for programming languages. Perhaps, in the case of programming languages, it would be clumsy and ineffective to give answers to more detailed questions, because the individual does not know what those detailed questions would be.

Specificity in frameworks

Frameworks seem to gravitate towards specificity. Given that some people want to know the answers to specific questions, this is quite reasonable; but where does that leave the expression of the less specific requirements? For examples of curriculum frameworks, there is probably nowhere better than the American Achievement Standards Network (ASN). Here, as in many other places, learning outcomes are defined only in one or two levels. The ASN transcribes documents faithfully, then among many other things marks the “indexing status” of the various components. For an arbitrary example, see Earth and Space Science, which is a topic heading and not “indexable”. The heading below just states what the topic is about, and is not “indexable”. It is below this that the content becomes “indexable”, with first some less specific statements about what should be achieved by the end of fourth grade, broken down into the smallest components such as Identify characteristics of soils, minerals, rocks, water, and the atmosphere. It looks like it is just the “indexable” resources that are intended to represent intended learning outcome definitions.

At fourth grade, this is clearly nothing to do with employment, but even so, identifying characteristics of soils etc. is something that students may or may not be able to do, and this is part of the less specifically defined (but still “indexable”) “understanding of the characteristics of earth materials”. It strikes me that the item about identifying characteristics would fit reasonably (in my scheme of the previous post) as a “rankably assessable” concept, and its parent item about understanding might be classified (in my scheme) as unorderly assessable.

How to represent varying specificity

Having pointed out some of the practical examples of varying specificity in definitions of learning outcome or competence, the important issue for work such as InLOC is to provide some way of representing, not only different levels of specificity, but also how they relate to one another.

An approach through considering questions and answers

Any concept that is related to learning outcomes or competence can provide the basis for questions of an individual. Some of these questions have yes/no answers; some invite answers on a scale; some invite a longer, less straightforward reply, or a short reply that invites further questions. A stated concept can be both the answer to a question, and the ground for further questions. So, to go back to some of the above examples, a CV might somewhere state “French” or “Java”. These might be answers to the questions “what languages have you studied?” or “what languages do you use?” They also invite further questions, such as “how well do you know …?”, or “how much have you used …, and in what contexts?”, or “how good are you at …?” – which, if there is an appropriate scale, could be reformulated as “what level is your ability in …?”

Questions could be found corresponding to the ASN examples as well. “Identify characteristics of soils, minerals, rocks, water, and the atmosphere” has the same format that allows “can you …?” or “I can …”. The less specific statement — “By the end of fourth grade, students will develop an understanding of the characteristics of earth materials,” — looks like it corresponds with questions more like “what do you understand about earth materials?”.

As well as “summative” questions, there are related questions that are used in other ways than assessment. “How confident are you of your ability in …?” and “is your ability in … adequate in your current situation?” both come to mind (stimulated by considerations in LUSID).

What I am suggesting here is that we can adapt some of the natural properties of questions and answers to fit definitions of competence and ability. So what properties do I have in mind? Here is a provisional and tentative list, with a small sketch after it of how such properties might be captured.

  • Questions can be classified as inviting one of four kinds of answer:
    1. yes or no;
    2. a value on a (predefined) scale;
    3. examples;
    4. an explanation that is more complex than a simple value.
  • These types of answer probably need little explanation – many examples can readily be imagined.
  • The same form of answer can relate to more than one question, but usually the answer will mean different things. To be fully and clearly understood, an answer should relate to just one question. Using the above example, “French” as the answer to “what languages have you studied?” means something substantially different from “French” as the answer to “what languages are you fluent in?”
  • A more specific question may imply answers to less specific questions. For example, “what programming languages have you used in software development?” implies answers such as “software development” to the question “what competences do you have in ICT?” Many such implied questions and answers can be formulated. What matters in a particular framework is the other answers in that particular framework that can be inferred.
  • An answer to a less specific question may invite further more specific questions.
    1. Conversely to the example just above, if the question “what competences do you have in ICT?” includes the answer “software development”, a good follow-up question might be “what programming languages have you used in software development?” Similar patterns could be seen for any technical specialty. Often, answers like this may be taken from a known list of options. There are only so many languages, both human and computer.
    2. Where an answer is a rankable concept, questions about the level of that ability are invited. For instance, the question “what foreign languages can you speak?”, answered with “French” and “Italian”, invites questions such as “what is your European Language Passport level of ability in spoken interaction in French?”
    3. Where an answer has been analysed into its component parts, questions about each component part make sense. For example, if the answer to “are you able to clear sites for tree planting?”, following the LANTRA Treework NOS (2009) was “yes”, that invites the narrower implied questions set out in that NOS, like “can you select appropriate clearance methods …?” or “do you understand the potential impacts of your work on the environment …?”
    4. Unless the question is fully specific, admitting only the answers yes and no (and often even then), it is nearly always possible to ask further questions, and give further answers. But everyone’s interest in detail stops sooner or later. The place to stop asking more specific questions is when the answer does not significantly affect the outcome you are looking for. And that varies between different interested parties.
  • Questions may be equivalent to other questions in other frameworks. This will come out from the answers given. If the answers given by the same person in the same context are always the same for two questions, they are effectively equivalent. It is genuinely helpful to know this, as it means that one can save time not repeating questions.
  • Answers to some questions may imply answers to other questions in different frameworks, without being equivalent. The answers may contain, or be contained by, their counterparts. This is another way of linking together questions from different frameworks, and saving asking unnecessary extra questions.
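Below is a provisional sketch of how these properties of questions and answers might be captured (the class and field names are my own invention, purely illustrative, and not part of InLOC):

```python
from dataclasses import dataclass, field
from enum import Enum

class AnswerKind(Enum):
    YES_NO = 1        # yes or no
    SCALE = 2         # a value on a predefined scale
    EXAMPLES = 3      # one or more examples
    EXPLANATION = 4   # a longer, more complex reply

@dataclass
class Question:
    text: str
    kind: AnswerKind
    invites: list = field(default_factory=list)   # more specific follow-up questions
    implies: list = field(default_factory=list)   # less specific questions answered by implication

# "What competences do you have in ICT?" invites the more specific
# "What programming languages have you used in software development?",
# and an answer to the latter implies an answer to the former.
ict = Question("What competences do you have in ICT?", AnswerKind.EXAMPLES)
langs = Question("What programming languages have you used in software development?",
                 AnswerKind.EXAMPLES, implies=[ict])
ict.invites.append(langs)

# A rankable concept invites a level question answered on a scale.
french_level = Question("What is your level of ability in spoken interaction in French?",
                        AnswerKind.SCALE)
```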

That covers a view of how to represent varying specificity in questions and answers, but not yet frameworks as they are at present.

Back to frameworks as they are at present

At present, it is not common practice to set out frameworks of competence or ability in terms of questions and answers, but only in terms of the concepts themselves. But, to me, it helps understanding enormously to imagine the frameworks as frameworks of questions, and the learning outcome or competence concepts as potential answers. In practice, all you see in the frameworks is the answers to the implied questions.

Perhaps this has come about through a natural process of doing away with unnecessary detail. The overall question in occupational competence frameworks is, “are you competent to do this job?”, so it can go unstated, with the title of the job standing in for the question. The rest of the questions in the framework are just the detailed questions about the component parts of that competence (see Carroll and Boutall’s ideas of Functional Analysis in their Guide to Developing National Occupational Standards). The formulation with action verbs helps greatly in this approach. To take NOS examples from way back in the 3rd post in this series, the units themselves and the individual performance criteria share a similar structure. Less specifically, “set out and establish crops” relates both to the question “are you able to set out and establish crops” and the competence claim “I am able to set out and establish crops”. More specifically, “place equipment and materials in the correct location ready for use” can be prefixed with “are you able to …” for a question, or “I am able to …” as a claim. Where all the questions take a form that invites answers yes or no, one really does not need to represent the questions at all.
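As a trivial illustration of how the questions can stay implicit, an action-verb statement from such a framework can be turned into either a question or a claim simply by prefixing it (a toy sketch, using the examples quoted above):

```python
def as_question(outcome: str) -> str:
    # "set out and establish crops" -> "Are you able to set out and establish crops?"
    return f"Are you able to {outcome}?"

def as_claim(outcome: str) -> str:
    return f"I am able to {outcome}."

print(as_question("place equipment and materials in the correct location ready for use"))
print(as_claim("set out and establish crops"))
```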

With a less uniform structure, one would need mentally to remove all the questions to get a recognisable framework; or conversely, to understand a framework in terms of questions, one needs to add in those implied questions. This is not as easy, and perhaps that is why I have been drawn to elaborating all those structuring relationships between concepts.

We are left in a place that is very close to where we were before in the previous post. At simplest, we have the individual learning outcome or competence definitions (which are the answers) and the frameworks, which show how the answers connect up, without explicitly mentioning the questions themselves. The relations between the concepts can be factored out, and presented either together in the framework, or separately together with the concepts that are related by those relations.

If the relationships are simply “broader” and “narrower”, things are pretty straightforward. But if we admit less specific concepts and questions, because the questions are not explicitly represented, the structure needs a more elaborate set of relationships. In particular, we have to make particular provision for rankable concepts and levels. I’ll leave detailing the structures we are left with for later.

Before that, I’d like to help towards better grasp of the ideas through the analogy with tourism.

Competence and regulation

Today I had a most helpful phone call with a kind lady from the Health and Safety Executive (HSE), and it has illuminated the area of the competence world, related to regulation, that I was very unclear about, so I thought I would try to share my increased understanding.

The EU often comes up with directives intended for the good of European citizens in general. In this case, as an example we are looking at Directive 2009/128/EC of 2009-10-21 “establishing a framework for Community action to achieve the sustainable use of pesticides”. Good that this one looks uncontroversial in principle – we don’t want people to use pesticides in an unregulated way, potentially polluting common air, water or ground (potentially without our even being aware of it), so I guess most people would support the principle of regulation here.

If you work your way down to Article 5 of this directive, you see:

Article 5
Training
1. Member States shall ensure that all professional users, distributors and advisors have access to appropriate training by bodies designated by the competent authorities. This shall consist of both initial and additional training to acquire and update knowledge as appropriate.

The training shall be designed to ensure that such users, distributors and advisors acquire sufficient knowledge regarding the subjects listed in Annex I, taking account of their different roles and responsibilities.

2. By 14 December 2013, Member States shall establish certification systems and designate the competent authorities responsible for their implementation. These certificates shall, as a minimum, provide evidence of sufficient knowledge of the subjects listed in Annex I acquired by professional users, distributors and advisors either by undergoing training or by other means.

(I will say nothing at all about what “competent” means as in “competent authority”. Maybe it is quite different.)

It goes on. So what is this Annex I? That is really significant for the purposes of knowledge, skill and competence. It’s worth perhaps repeating this in full, just to get the full flavour of one example of the language and way these things are set out.

Training subjects referred to in Article 5

  1. All relevant legislation regarding pesticides and their use.
  2. The existence and risks of illegal (counterfeit) plant protection products, and the methods to identify such products.
  3. The hazards and risks associated with pesticides, and how to identify and control them, in particular:
    1. risks to humans (operators, residents, bystanders, people entering treated areas and those handling or eating treated items) and how factors such as smoking exacerbate these risks;
    2. symptoms of pesticide poisoning and first aid measures;
    3. risks to non-target plants, beneficial insects, wildlife, biodiversity and the environment in general.
  4. Notions on integrated pest management strategies and techniques, integrated crop management strategies and techniques, organic farming principles, biological pest control methods, information on the general principles and crop or sector-specific guidelines for integrated pest management.
  5. Initiation to comparative assessment at user level to help professional users make the most appropriate choices on pesticides with the least side effects on human health, non-target organisms and the environment among all authorised products for a given pest problem, in a given situation.
  6. Measures to minimise risks to humans, non-target organisms and the environment: safe working practices for storing, handling and mixing pesticides, and disposing of empty packaging, other contaminated materials and surplus pesticides (including tank mixes), whether in concentrate or dilute form; recommended way to control operator exposure (personal protection equipment).
  7. Risk-based approaches which take into account the local water extraction variables such as climate, soil and crop types, and relieves.
  8. Procedures for preparing pesticide application equipment for work, including its calibration, and for its operation with minimum risks to the user, other humans, non-target animal and plant species, biodiversity and the environment, including water resources.
  9. Use of pesticide application equipment and its maintenance, and specific spraying techniques (e.g. low-volume spraying and low-drift nozzles), as well as the objectives of the technical check of sprayers in use and ways to improve spray quality. Specific risks linked to use of handheld pesticide application equipment or knapsack sprayers and the relevant risk management measures.
  10. Emergency action to protect human health, the environment including water resources in case of accidental spillage and contamination and extreme weather events that would result in pesticide leaching risks.
  11. Special care in protection areas established under Articles 6 and 7 of Directive 2000/60/EC.
  12. Health monitoring and access facilities to report on any incidents or suspected incidents.
  13. Record keeping of any use of pesticides, in accordance with the relevant legislation.

What we have here is a kind of syllabus, but in some ways quite a vague one. It does not make expressly clear what people have to be able to do as a result of training on this syllabus, as is now good practice when writing learning outcomes, particularly vocational ones. So it falls to the Member States to interpret that, with the result that different Member States may do things differently, the resultant competences may not be the same, and there may well be considerable differences across Europe in how safe people actually are from the dangers the directive was brought in to counter.
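
Just to illustrate the kind of gap I mean, here is how one Annex I subject might be recast as identifiable, action-verb learning outcomes. The wording and the identifiers are entirely my own invention, not an official interpretation:

```python
# A purely illustrative sketch: one Annex I training subject recast as
# identifiable, action-verb learning outcomes. Wording and "ex:" identifiers
# are invented for illustration, not an official reading of the directive.

annex_subject = "symptoms of pesticide poisoning and first aid measures"

illustrative_outcomes = {
    "ex:poisoning-recognise":
        "recognise the common symptoms of pesticide poisoning",
    "ex:poisoning-first-aid":
        "apply appropriate first aid measures in a case of suspected pesticide poisoning",
}

for uri, statement in illustrative_outcomes.items():
    print(f"{uri}: I am able to {statement}.")
```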

So the European directive works its way down through the system to national governments, and out comes something like The Plant Protection Products (Basic Conditions) Regulations 1997. In this case, though the area is similar, this UK legislation was obviously created long before the above European directive. Here we read:

1. It shall be the duty of all employers to ensure that persons in their employment who may be required during the course of their employment to use prescribed plant protection products are provided with such instruction, training and guidance as is necessary to enable those persons to comply with any requirements provided in and under these Regulations and the Plant Protection Products Regulations.

and later

3. No person in the course of a business or employment shall use a prescribed plant protection product, or give an instruction to others on the use of a prescribed plant protection product, unless that person—
(a) has received adequate instruction, training and guidance in the safe, efficient and humane use of prescribed plant protection products, and
(b) is competent for the duties which that person is called upon to perform.

and yet later

7.—(1) No person in the course of a commercial service shall use a prescribed plant protection product approved for agricultural use unless that person—
(a) has obtained a certificate of competence recognised by the Ministers; or
(b) uses that plant protection product under the direct and personal supervision of a person who holds such a certificate; or [...]

The UK Regulations do not themselves define in detail what counts as “adequate instruction, training and guidance”, nor indeed “competent” and “competence”. This is where the HSE comes in, by approving as “adequate” the proposals of awarding bodies aimed at the certification of this training etc.

Do we get the general idea here? I hope so. But wait a minute …

One cannot help remarking on the differences between the language of the EU directive and the language recommended, say, in the Europass Certificate Supplement, where it is clear that each skill or competence item should start with an action verb. Is it therefore a case of a lack of effective communication between Directorates-General (DGs)? It looks likely, but I have no evidence.

Nor do I have an opinion about the merits of leaving definitions open (perhaps deliberately so) to give room for courts to establish case law and precedent.

But it would seem to me a good idea, when formulating this kind of regulation, to put together at the same time a well-structured framework of knowledge, skill and competence defining the required abilities of the people concerned. Not defining them clearly just means that the cost of defining them is multiplied through being borne by every Member State, resulting not only in divergence but in a considerable amount of administrative work that one could say was unnecessary. Multiply this across all the relevant European regulations. OK, admittedly I have little knowledge of the workings of such bureaucracy. Maybe there is a reason, but at present I am an unsatisfied citizen.

And this is one area where InLOC outputs could potentially play a role. It would be principally at a European level, though national governments could do something similar for any national regulations. Some central European body could define the required knowledge and ability, for each European regulation across all areas of public life, according to clear and sensible standard approaches that relate directly to learning outcomes, competence, training and assessment. The requirements could be published in InLOC format in all relevant languages. (That is what InLOC is set up to facilitate.)

How to train and assess would still be up to training, assessment and awarding bodies, and there would still have to be structures and practices (probably with considerable national variation) within which this is controlled and operating licences are managed. But at least several stages would be removed from the process, which could become much quicker. The Commission could be seen to be more in touch with the grass roots. Procedures would look more transparent and fairer. Maybe, even, European regulations would be held in higher repute. That would be a nice outcome.
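
To give a flavour of what such a publication might hold – and this is only a gesture at the idea, not the actual InLOC format – each required ability could carry one identifier and a statement in every relevant language:

```python
# A gesture at the idea, not the actual InLOC binding: one required-ability
# definition, one identifier, with its statement available in each relevant
# language. The URI, wording and translations are invented for illustration.

required_ability = {
    "id": "http://example.eu/pesticides/annex1/3-2",
    "statement": {
        "en": "recognise the symptoms of pesticide poisoning and apply first aid measures",
        "fr": "reconnaître les symptômes d'intoxication par les pesticides et appliquer les premiers secours",
        "de": "Symptome einer Pestizidvergiftung erkennen und Erste Hilfe leisten",
    },
}

def statement_in(definition: dict, lang: str) -> str:
    """Return the statement in the requested language, falling back to English."""
    return definition["statement"].get(lang, definition["statement"]["en"])

print(statement_in(required_ability, "fr"))
```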