What is CEN TC 353 becoming?

The CEN TC 353 was set up (about seven years ago) as the European Standardization Technical Committee (“TC”) responsible for “ICT for Learning, Education and Training” (LET). At the end of the meeting I will be describing below, we recognised that the title has led some people to think it is a committee for standardising e-learning technology, which is far from the truth. I would describe its business as being, effectively, the standardization of the representation of information about LET, so that it can be used in (any kind of) ICT systems. We want the ICT systems we use for LET to be interoperable, and we want to avoid the problems that come from vendors all defining their own ways of storing and handling information, thus making it hard to migrate to alternative systems. Perhaps the clearest evidence of where TC 353 works comes from the two recent European Standards to our name. EN 15981, “EuroLMAI”, is about information on learner results from any kind of learning, specifically including the Diploma Supplement and the UK HEAR, which document higher education achievements. EN 15982, “MLO” (Metadata for Learning Opportunities), is the European equivalent of the UK’s XCRI, “eXchanging Course-Related Information”, and deals mainly with the information used to advertise courses, which can be of any kind. Neither of these is tied to the mode of learning, technology enhanced or not; and indeed we have no EN standards about e-learning as such. So that’s straight, then, I trust …

At this CEN TC 353 meeting on 2014-04-08 there were delegates from the National Bodies of: Finland; France (2); Germany; Greece; Norway; Sweden (2); UK (me); and the TC 353 secretary. That’s not very many for an active CEN TC. Many of the people there have been working with CETIS people, including me, for several years. You could see us as the dedicated, committed few.

The main substance of the day’s discussion was about two proposed new work items (“NWIs”), one from France, one from Sweden, and the issues coming out of that. I attended the meeting as the sole delegate (with the high-sounding designation, “head of delegation”) from BSI, with a steer from colleagues that neither proposal was ready for acceptance. That, at least, was agreed by the meeting. But something much more significant appeared to happen, which seemed to me like a subtle shift in the identity of TC 353. This is entirely appropriate, given that the CEN Workshop on Learning Technologies (WS-LT), which was the older, less formal body, is now accepted as defunct — this is because CEN are maintaining their hard line on process and IPR, which makes running an open CEN workshop effectively impossible.

No technical standardization committee that I know of is designed to manage pre-standardization activities. Floating new ideas, research, project work, comparing national initiatives, etc., need to be done before a proposal reaches a committee of this kind, because TC work, whether in CEN, or in our related ISO JTC1 SC36, tends to be revision of documents that are presented to the committee. It’s very difficult and time consuming to construct a standard from a shaky foundation, simply by requesting formal input and votes from national member bodies. And when a small team is set up to work under the constraints of a bygone era of confidentiality, in some cases it has proved insurmountably difficult to reach a good consensus.

Tore Hoel, a long-time champion of the WS-LT, admitted that it is now effectively defunct. I sadly agree, while appreciating all the good work it has done. So TC 353 has to explore a new role in the absence of what was its own Workshop, which used to do the background work and to suggest the areas of work that needed attention. Tore has recently blogged what he thinks should be the essential characteristics of a future platform for European open standards work, and I very much agree with him. He uses the Open Stand principles as a key reference.

So what could this new role be? The TC members are well connected in our field, and while they do not themselves do much IT systems implementation, they know those people, and are generally in touch with their views. The TC members also have a good overview of how the matters of interest to TC 353 relate to neighbouring issues and stakeholders. We believe that the TC is, collectively, in quite a good position to judge when it is worth working towards a new European Standard, which is after all its raison d’être. We can’t see any other body that could perform this role as well, in this specific area.

As we were in France, the famous verse of Rouget de Lisle’s “Marseillaise” came to mind. “Aux armes, citoyens, Formez vos bataillons!” the TC could be saying. What I really like, on reflection, about this aspect of the French national anthem is that it isn’t urging citizens to join some pre-arranged (e.g. royal) battalions, but to create their own. Similarly, the TC could say, effectively, “now is the time to act — do it in your own ways, in your own organisations, whatever they are — but please bring the results together for us to formalise when they are ready.”

For me, this approach could change the whole scene. Instead of risking being an obstacle to progress, the CEN TC 353 could add legitimacy and coherence to the call for pre-standardization activity in chosen areas. It would be up to the individuals listening (us wearing different hats) to take up that challenge in whatever ways we believe are best. Let’s look at the two proposals from that perspective.

AFNOR, the French standards body, was suggesting working towards a European Standard (EN) with the title “Metadata for Learning Opportunities part 2 : Detailed Description of Training and Grading (face to face, distance or blended learning and MOOCs): Framework and Methodology”. The point is to extend MLO (EN 15982), including perhaps some of those characteristics of courses (learning opportunities), perhaps drawn from the Norwegian CDM or its French derivative, that didn’t make it into the initial version of MLO for advertising. From time to time there have been related conversations in the UK about the bits of the wider vision for XCRI that didn’t make it into XCRI-CAP (“Course Advertising Profile”). But they probably didn’t make it for some good reason — maybe there wasn’t agreement about what they should be, there wasn’t any pressing need, or there weren’t enough implementations of them to form the basis for effective consensus.

Responding to this, I can imagine BSI and CETIS colleagues in the UK seriously insisting, first, that implementation should go hand in hand with specification. We need to be properly motivated by practical use cases, and we need to test ideas out in implementation before agreeing to standardize them. I could imagine other European colleagues insisting that the ideas should be accepted by all the relevant EC DGs before they have a chance of success in official circles. And so on — we can all do what we are best at, and bring those together. And perhaps also we need to collaborate between national bodies at this stage. It would make sense, and perhaps bring greater commitment from the national bodies and other agencies, if they were directly involved, rather than simply sending people to remote-feeling committees of standards organisations. In this case, it would be up to the French, whose Ministry of Education seems to want something like this, to arrange to consult with others, and to put together an implemented proposal that has a good chance of achieving European consensus.

We agreed that it was a good idea for the French proposal to use the “MOOC” label to gain interest and motivation, while the work would in no way be limited to MOOCs. And it’s important to get on board both some MOOC providers and, related though different, some of the agencies who aggregate information about MOOCs (etc.) and offer information about them through portals so that people can find appropriate ones. The additional new metadata would of course be designed to make that search more effective, in that more of the things that people ask about will be modelled explicitly.

So, let’s move on to the Swedish proposal. This was presented under the title “Linked and Open Data for Learning and Education”, based on their national project “Linked and Open Data in Schools” (LODIS). We agreed that it isn’t really on for a National Body simply to propose a national output for European agreement, without giving evidence on why it would be helpful. In the past, the Workshop would have been a fair place to bring this kind of raw idea, and we could have all pitched in with anything relevant. But under our new arrangements, we need the Swedes themselves to lead some cross-European collaboration to fill in the motivation, and do the necessary research and comparison.

There are additional questions also relevant to both proposals. How will they relate to the big international and American players? For example, are we going to get schema.org to take these ideas on, in the fullness of time? How so? Does it matter? (I’m inclined to think it does matter.)

I hope the essentials of the new approach are apparent in both cases. The principle is that TC 353 acts as a mediator and referee, saying “OK” to the idea that some area might be ripe for further work, and encouraging people to get on with it. I would, however, suggest that three vital conditions should apply, for this approach to be effective as well as generally acceptable.

  1. The principal stakeholders have to arrange the work themselves, with enough trans-national collaboration to be reasonably sure that the product will gain the European consensus needed in the context of CEN.
  2. The majority of the drafting and testing work is done clearly before a formal process is started in CEN. In our sector, it is vital that the essential ideas are free and open, so we want an openly licensed document to be presented to the TC as a starting point, as close as can be to the envisioned finishing point. CEN will still add value through the formal process and formal recognition, but the essential input will still be openly and freely licensed for others to work with in whatever way they see fit.
  3. The TC must assert the right to stop and revoke a CEN work item, if it turns out that it is not filling a genuine European need. There is room for improvement here over the past practice of the TC and the WS-LT. It is vital to our reputation and credibility, and to the ongoing quality of our output, that we are willing to reject work that is not of the right quality for CEN. Only in this way can CEN stakeholders have confidence in a process that allows self-organising groups to do all the spadework, prior to and separate from formal CEN process and oversight.

At the meeting we also heard that the ballot on the TC 353 marketing website was positive. (Disclosure: I am a member of the TC 353 “Communications Board” who advised on the content.) Hopefully, a consequence of this will be that we are able to use the TC 353 website both to flag areas for which TC 353 believes there is potential for new work, and to link to the pre-standardization work that is done in those areas that have been encouraged by the TC, wherever that work is done. We hope that this will all help significantly towards our aim of effectively open standardization work, even where the final resulting EN standards remain as documents with a price tag.

I see the main resolutions made at the meeting as enacting this new role. TC 353 is encouraging proposers of new work to go ahead and develop mature open documentation, and clear standardization proposals, in whatever European collaborations they see fit, and bring them to a future TC meeting. I’d say that promises a new chapter in the work of the TC, which we should welcome, and we should play our part in helping it to work effectively for the common good.

The growing need for open frameworks of learning outcomes

(A contribution to Open Education Week — see note at end.)

What is the need?

Imagine what could happen if we had really good sets of usable, open learning outcomes, across academic subjects, occupations and professions. It would be easy to express and then trace the relationships between any learning outcomes. To start with, it would be easy to find out which higher-level learning outcomes are composed, in a general consensus view, of which lower-level outcomes.

Some examples … In academic study, for example around a more complex topic from calculus, perhaps it would be made clear what other mathematics needs to be mastered first (see this recent example, which lists, but does not structure). In management, it would be made clear, for instance, what needs to be mastered in order to be able to advise on intellectual property rights. In medicine, to pluck another example out of the air, it would be clarified what the necessary components of competent dementia care are. Imagine this is all done, and each learning outcome or competence definition, at each level, is given a clear and unambiguous identifier. Further, imagine all these identifiers are in HTTP IRI/URI/URL format, as is envisaged for Linked Data and the Semantic Web. Imagine that putting the URL into your browser leads you straight to results giving information about that learning outcome. And in time it would become possible to trace not just what is composed of what, but other relationships between outcomes: equivalence, similarity, origin, etc.
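To make this concrete, here is a minimal sketch in Python of how “composed of” relations between identified learning outcomes might be traversed. All the URIs are invented for illustration; a real framework would publish identifiers in its own domain:

```python
# A toy graph of "is composed of" relations between learning outcomes.
# Every URI below is hypothetical; a real authority would mint and
# publish its own identifiers, dereferenceable over HTTP.
COMPOSED_OF = {
    "http://example.org/loc/dementia-care": [
        "http://example.org/loc/communication-with-patients",
        "http://example.org/loc/medication-management",
    ],
    "http://example.org/loc/medication-management": [
        "http://example.org/loc/drug-interactions",
    ],
}

def components(outcome_uri):
    """Return all direct and indirect components of a learning outcome."""
    result = []
    for part in COMPOSED_OF.get(outcome_uri, []):
        result.append(part)
        result.extend(components(part))  # recurse into sub-components
    return result
```

With relations like these published as Linked Data, the same traversal could be done by dereferencing each URI over the web rather than consulting a local table.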

It won’t surprise anyone who has read other pieces from me that I am putting forward one technical specification as part of an answer to what is needed: InLOC.

So what could then happen?

Every course, every training opportunity, however large or small, could be tagged with the learning outcomes that are intended to result from it. Every educational resource (as in “OER”) could be similarly tagged. Every person’s learning record, every person’s CV, people’s electronic portfolios, could have each individual point referred, unambiguously, to one or more learning outcomes. Every job advert or offer could specify precisely which are the learning outcomes that candidates need to have achieved, to have a chance of being selected.
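As a sketch of the kind of matching this could enable (again with invented, hypothetical URIs), a job advert and a learner’s record that share the same identifiers can be compared mechanically:

```python
# Hypothetical example: a job advert lists required outcome URIs,
# and a candidate's record lists achieved ones. With shared
# identifiers, checking suitability becomes a simple set operation.
required = {
    "http://example.org/loc/ipr-advice",
    "http://example.org/loc/contract-negotiation",
}
achieved = {
    "http://example.org/loc/ipr-advice",
    "http://example.org/loc/contract-negotiation",
    "http://example.org/loc/project-management",
}

missing = required - achieved   # outcomes still to be evidenced
qualifies = not missing         # True when every required outcome is achieved
```

The same subset test would serve equally for matching learners to courses, or educational resources to intended outcomes.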

All these things could be linked together, leading to a huge increase in clarity, a vast improvement in the efficiency of relevant web-based search services, and generally a much better experience for people in personal, occupational and professional training and development, and ultimately in finding jobs or recruiting people to fill vacancies, right down to finding the right person to do a small job for you.

So why doesn’t that happen already? To answer that, we need to look at what is actually out there, what it doesn’t offer, and what can be done about it.

What is out there?

Frameworks, that is, structures of learning outcomes, skills, competences, or similar things under other names, are surprisingly common in the UK. For many years now, Sector Skills Councils (SSCs) and other similar bodies have been producing National Occupational Standards (NOSs), which provided the basis for all National Vocational Qualifications (NVQs). In theory at least, this meant that the industry representatives in the SSCs made sure that the needs of industry were reflected in the assessment criteria for awarding NVQs, which are generally regarded as useful and prized qualifications, at least in occupations that are not classed as “professional”.

NOSs have always been published openly, and they are still available to be searched and downloaded at the UKCES’s NOS site. The site provides a search page. As one of my current interests is corporate governance, I put that phrase into the search box, which gave several results, including a NOS called CFABAI131 Support corporate decision-making (a PDF document). It’s a short document, with a few lines of overview, six performance criteria, each expressed as one sentence, and 15 items of knowledge and understanding, which is what is seen to be needed to underpin competent performance. It serves to let us all know what industry representatives think is important in that support function.

In professional training and development, practice has been more diverse. At one pole, the medical profession has been very keen to document all the skills and competences that doctors should have, and keen to ensure that these are reflected in medical education. The GMC publishes Tomorrow’s Doctors, introduced as follows:

The GMC sets the knowledge, skills and behaviours that medical students learn at UK medical schools: these are the outcomes that new UK graduates must be able to demonstrate.

Tomorrow’s Doctors covers the outline of the whole syllabus. It prepares the ground for doctors to move on to working in line with Good Medical Practice — in essence, the GMC’s list of requirements for someone to be recognised as a competent doctor.

The medical field is probably the best developed in this way. Some other professions, for example engineering and teaching, have some general frameworks in place. Yet others may only have paper documentation, if any at all.

Beyond the confines of such enclaves of good practice, yet more diverse structures of learning outcomes can be found, which may be incoherent and conflicting, particularly where there is no authority or effective body charged with bringing people to consensus. There are few restrictions on who can now offer a training course, and ask for it to be accredited. It doesn’t have to be consistent with a NOS, let alone have the richer technical infrastructure hinted at above. In Higher Education, people have started to think in terms of learning outcomes (see e.g. the excellent Writing and using good learning outcomes by David Baume), but, lacking sufficient motivation to do otherwise, intended learning outcomes tend to be oriented towards institutional assessment processes, rather than to the needs of employers, or learners themselves. In FE, the standardisation influence of NOSs has been weakened and diluted.

In schools in the UK there is little evidence of useful common learning outcomes being used, though (mainly) for the USA there exists the Achievement Standards Network (ASN), documenting a very wide range of school curricula and some other things. It has recently been taken over by private interests (Desire2Learn) because no central funding is available for this kind of service in the USA.

What do these not offer?

The ASN is a brilliant piece of work, considering its age. Also related to its age, it has been constructed mainly by processing paper-style documentation into the ASN web site, which includes allocating ASN URIs. It hasn’t been used much by authorities to construct their own learning outcome frameworks, with URIs belonging to their own domains, though in principle it could be.

Apart from ASN, practically none of the other frameworks that are openly available (and none that are not) have published URIs for every component. Without these URIs, it is much harder to identify, unambiguously, which learning outcome one is referring to, and virtually impossible to check that automatically. So the quality of any computer assisted searching or matching will inevitably be at best compromised, at worst non-existent.
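A tiny sketch shows the difference published URIs make: two free-text labels for what is arguably the same outcome cannot be matched automatically, whereas two descriptions that share an identifier can (the URI is, once more, invented for illustration):

```python
# Free-text labels for arguably the same learning outcome:
label_a = "Support corporate decision-making"
label_b = "Supporting decision-making in corporations"
text_match = (label_a == label_b)   # False: plain text comparison fails

# With a published identifier (hypothetical here), two independent
# descriptions can both point at the same URI, and matching is trivial
# and machine-checkable:
uri_a = "http://example.org/loc/support-corporate-decisions"
uri_b = "http://example.org/loc/support-corporate-decisions"
uri_match = (uri_a == uri_b)        # True: unambiguous identity
```

Fuzzy text matching can of course be attempted, but it is exactly the “at best compromised” quality referred to above; shared identifiers remove the guesswork.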

As learning outcomes are not easily searchable (outside specific areas like NOSs), the tendency is to reinvent them each time they are written. Even similar outcomes, whatever the level, routinely seem to be reinvented and rewritten without cross-reference to ones that already exist. Thus it becomes impossible in practice to see whether a learning opportunity or educational resource is roughly equivalent to another one in terms of its learning outcomes.

Thus, there is little effective transparency, no easy comparison, only the confusion of it being practically impossible to do the useful things that were envisaged above.

What is needed?

What is needed is, on the one hand, much richer support for bodies to construct useful frameworks, and on the other hand, good examples leading the way, as should be expected from public bodies.

And as a part of this support, we need standard ways of modelling, representing, encoding, and communicating learning outcomes and competences. It was just towards these ends that InLOC was commissioned. There’s a hint in the name: Integrating Learning Outcomes and Competences. InLOC is also known as ELM 2.0, where ELM stands for European Learner Mobility, within which InLOC represents part of a powerful proposed infrastructure. It has been developed under the auspices of the CEN Workshop, Learning Technologies, and funded by the DG Enterprise’s ICT Standardization Work Programme.

InLOC, fully developed, would really be the icing on the cake. Even if people did no more than publish stable URIs to go with every component of every framework or structure of learning outcomes or competences, that would be a great step forward. The existence and openness of InLOC provides some of the motivation and encouragement for everyone to get on with documenting their learning outcomes in a way that is not only open in terms of rights and licences, but open in terms of practice and effect.


The third annual Open Education Week takes place from 10 to 15 March 2014. As described on the Open Education Week web site, “its purpose is to raise awareness about the movement and its impact on teaching and learning worldwide”.

Cetis staff are supporting Open Education Week by publishing a series of blog posts about open education activities. Cetis have had long-standing involvement in open education and have published a range of papers which cover topics such as OERs (Open Educational Resources) and MOOCs (Massive Open Online Courses).

The Cetis blog provides access to the posts which describe Cetis activities concerned with a range of open education activities.

Learning about learning about …

I was recently reading a short piece from Peter Honey (of learning styles fame) in a CIPD blog post, in which he writes, saving the most important item for last in his list:

Learning to learn – the ultimate life-skill

You can turn learning in on itself and use your learning skills to help you learn how to become an increasingly effective learner. Learning to learn is the key to enhancing all the above.

It’s all excellent stuff, and very central to the consideration of learning technology, particularly that dedicated to supporting reflection.

Then I started thinking further (sorry, just can’t help it…)

If learning to learn is the ultimate life skill, then surely the best that educators can do is to help people learn to learn.

But learning to learn is not altogether straightforward. There are many pitfalls that interfere with effective learning, and which may not respond to pure unaided will-power or effort. Thus, to help people learn to learn, we (as educators) have to know about those pitfalls, those obstacles, those hazards that stand in the way of learning generally, and we have to be able somehow at least to guide the learners we want to help around those hazards.

There are two approaches we could take here. First, we could try to diagnose what our learners are trying to learn, what is preventing them, and maybe give them the knowledge they are lacking. That’s a bit like a physician prescribing some cure — not just medicine, perhaps, but a cure that involves a change of behaviour. Or it’s a bit like seeing people hungry, and feeding them — hungry for knowledge, perhaps? If we’re talking about knowledge here, of course, there is a next stage: helping people to find the knowledge that they need, rather than giving it to them directly. I put that in the same category, as it is not so very different.

There is a second, qualitatively different approach. We could help our learners learn about their own learning. We could guide them — and this is a highly reflective task — to diagnose their own obstacles to learning. This is not simply not knowing where to look for what they want to know; it is about knowing more about themselves, and what it may be within them that interferes with their learning processes — their will to learn, their resolve (Peter Honey’s article starts with New Year’s resolutions) or, even, their blind spots. To pursue the analogy, that is like a physician giving people the tools to maintain their own health, or, proverbially, rather than giving a person a fish, teaching them to fish.

Taking this further starts to relate closely in my mind to Kelly’s Personal Construct Psychology; and also perhaps to Kuhn’s ideas about the “Structure of Scientific Revolutions”. Within a particular world view, one’s learning is limited by that world view. When the boundaries of that learning are being pushed, it is time to abandon the old skin and take up a new and more expansive one; or just a different one, more suited to the learning that one wants. But it is hard — painful even (Kelly recognised that clearly) and the scientific establishment resists revolutions.

In the literature and on the web, there is the concept called “triple loop learning”, and though this doesn’t seem to be quite the same, it would appear to be going in the same direction, even if not as far.

What, then, is our task as would-be educators, guides, coaches, mentors? Can we get beyond the practices analogous to Freudian psychoanalysis, which are all too prone to set up a dependency? How can we set our learners truly free?

This may sound strange, but I would say we (as educators, etc.) need to study, and learn about, learning about learning. We need to understand not just about particular obstacles to learning, and how to get around those; but also about how people learn about their own inner obstacles, and how they can successfully grow around them.

As part of this learning, we do indeed need to understand how, in any given situation, a person’s world view is likely to relate to what they can learn in that situation; but further, we need to understand how it might be possible to help people recognise that in themselves. You think not? You think that we just have to let people be, to find their own way? It may be, indeed, that there is nothing effective that we are wise enough to know how to do, for a particular person, in a particular situation. And, naturally, it may be that even if we offer some deep insight, that we know someone is ready to receive, they may choose not to receive it. That is always a possibility that we must indeed respect.

And there cannot be a magic formula, an infallible practice, a sure method, a way of forcibly imbuing people with that deep wisdom. Of course there isn’t — we know that. But at least we can strive in our own ways to live with the attitude of doing whatever we can, firstly, not to stand in the way of whatever light may dawn on others, but also, if we are entrusted with the opportunity, to channel or reflect some of that light in a direction that we hope might bear fruit.

Again, it is not hard to connect this to systems thinking and cybernetics. Beyond the law of requisite variety — something about controlling systems needing to be at least as complex as the systems they are controlling — the corresponding principle is practically commonplace: to help people learn something, we have to have learned more than we expect them to learn. In this case, to help people learn about their own learning, we have to have learned about learning about learning.

People are all complex. It is sadly common to fail to take into account the richness and complexity of the people we have dealings with. To understand the issues and challenges people might have with learning about their own learning, we have to really stretch ourselves, to attend to the Other, to listen and to hear acutely enough with all our senses, to understand enough about them, where they come from, where they are, to have an idea about what may either stand in the way, or enable, their learning about their learning. Maybe love is the best motivator. But we also need to learn.

Right then, back on the CETIS earth (which is now that elegant blue-grey place…) I just have to ask, how can technology help? E-portfolio technology has over the years taken a few small steps towards supporting reflection, and indeed communication between learners, and between learners and tutors, mentors, educators. I think there is something we can do, but what it is, I am not so sure…

Learning about learning about learning — let’s talk about it!

Privacy? What about self-disclosure?

When we talk about privacy, we are often talking about the right to privacy. That is something like the right to limit or constrain disclosure of information relating to oneself. I’ve often been puzzled by the concept of privacy, and I think that it helps to think first about self-disclosure.

Self-disclosure is something that we would probably all like to control. There’s a lot of literature on self-disclosure in many settings, and it is clearly recognised as important in several ways. I like the concept of self-disclosure, because it is a positive concept, in contrast to the rather negative idea of privacy. Privacy is, as its name suggests, a “privative” concept. Though definitions vary greatly, one common factor is that definitions of privacy tend to be in terms of the absence of something undesirable, rather than directly as the presence of something valuable.

Before I go on, let me explain my particular interest in privacy and self-disclosure – though everyone potentially has a strong legitimate interest in them. Privacy is a key aspect of e-portfolio technology. People are only happy with writing down reflections on personal matters, including personal development, if they can be assured that the information will only be shared with people they want it to be shared with. It is easy to understand this in terms of mistakes, for example. To learn from one’s mistakes, one needs to be aware of them, and it may help to be able to discuss mistakes with other trusted people. But we often naturally have a sense of shame about mistakes, and unless understanding and compassion can be assured, we reasonably worry that the knowledge of our mistakes may negatively influence other people’s perception of our value as people. So it is vital that e-portfolio technology allows us to record reflections on such sensitive matters privately, and share them only with carefully selected people, if anyone at all.

This central concept for e-portfolios, reflection, links straight back to self-disclosure and self-understanding, and indeed identity. Developing ourselves, our identity, qualities and values as well as our knowledge and skill, depends in part on reflection giving us a realistic appreciation of where we are now, and who we are now.

Let me make the perhaps obvious point that most of us want to be accepted and valued as we are, and ideally understood positively; and that this can even be a precondition of our personal growth and development. Privacy, being a negative concept, doesn’t directly help with that. What is vital to acceptance and understanding is appropriate self-disclosure, with the right people, at the right time and in the right context. Even in a world where there was no privacy, this would still be a challenge. How would we gain the attention of people we trust, to notice what we are, what we do, what we mean, and to help us make sense of that?

In our society, commercial interests see, in more and more detail, some selected aspects of what we do. Our information browsing behaviour is noted by Google, and helps to shape what we get in reply to our searches, as well as the adverts that are served up. On Amazon, our shopping behaviour is pooled, enabling us to be told what others in our position might have bought or looked at. The result of this kind of information gathering is that we are “understood”, but only superficially, in the dimensions that relate to what we might pay for. If this helps in our development, it is only in superficial ways. That is a problem.

A more sinister aspect is where much of the energy in the privacy discussion is used up. The patterns of our information search, added to the records of who we communicate with, and perhaps key words in the content of our communications, alert those in power to the possibility that we may pose a threat to the status quo, or to those who have a vested interest in maintaining that power. We have noticed the trend of growing inequality in our society over the last half century.

But, in focusing on these, albeit genuine and worrying issues, what is lost from focus is the rich subtlety of active self-disclosure. It is as if we are so worried by information about ourselves falling into undesirable hands that we forget about the value of knowledge of ourselves being shared with, and entrusted to, those who can really validate us, and who can help us to understand who we are and where we might want to go.

So, I say, let’s turn the spotlight instead onto how technology can help make self-disclosure not only easier, but directed to the right people. This could start along the lines of finding trustable people, and verifying their trustworthiness. Rather than these trustable people being establishment authorities, how about finding peers, or peer groups, where mutual trust can develop? Given a suitable peer group, it is easy to envisage tools helping with an ordered process of mutual self-disclosure, and increasing trust. Yes, privacy comes into this, because an ordered process of self-disclosure will avoid untimely and inappropriate disclosures. But what do we mean by appropriate? Beyond reciprocity, which is pretty much universally acknowledged as an essential part in friendship and other good relationships, I’d say that what is appropriate is a matter for negotiation, rather than assumption. So, there is a role for tools to help in the negotiation of what is appropriate. Tools could help expose assumptions, so that they can be questioned and laid open to change.

Let’s make and use tools like this to retake control, or perhaps take control for the first time, of the rules and processes of self-disclosure, so that we can genuinely improve mutual recognition, acceptance and understanding, and provide a more powerful and fertile ground for personal and collective development.

Even-handed peer-to-peer self-disclosure will be a stimulus to move towards more sharing, equality, co-operation, collaboration, and a better society.

JSON-LD: a useful interoperability binding

Over the last few months I’ve been exploring and detailing a provisional binding of the InLOC spec to JSON-LD (spec; site). My conclusion is that JSON is better matched to linked data than XML is, if you understand how to structure JSON in the JSON-LD way. Here are my reflections, which I hope add something to the JSON-LD official documentation.

Let’s start with XML, as it is less unfamiliar to most non-programmers, due to similarities with HTML. XML offers two kinds of structures: elements and attributes. Elements are the pieces of XML that are bounded by start and end tags (or are simply empty tags). They may nest inside other elements. Attributes are name-value pairs that exist only within element start tags. The distinction is useful for marking up text documents, as the tags, along with their attributes, are added to the underlying text, without altering it. But for data, the distinction is less helpful. In fact, some XML specifications use almost no attributes. Generally, if you are using XML to represent data, you can change attributes into elements, with the attribute name becoming a contained element name, and the attribute value becoming text contained within that new element.
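To make this arbitrariness concrete, here is a minimal sketch using Python’s standard-library ElementTree. The dog-and-colour vocabulary is purely illustrative, not from any specification: the same fact can be written either as an attribute or as a child element, and reading it back needs different code for each choice.

```python
import xml.etree.ElementTree as ET

# The same fact -- this dog is of the colour brown -- written both ways.
as_attribute = ET.fromstring('<dog colour="brown"/>')
as_element = ET.fromstring('<dog><colour>brown</colour></dog>')

# Reading the value back requires knowing which choice was made:
print(as_attribute.get("colour"))        # attribute access
print(as_element.find("colour").text)    # child-element access
```

Both lines print “brown”, but a consumer of the XML has to be told which form to expect, which is exactly the kind of avoidable variation that complicates data interchange.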

Confused? You’d be in good company. Many people have complained about this aspect of XML. It gives you more than enough “rope to hang yourself with”.

Now, if you’re writing a specification that might be even remotely relevant to the world of linked data, it is really important that you write your specification in a way that clearly distinguishes between the names of things – objects, entities, etc. – and the names of their properties, attributes, etc. It’s a bit like, in natural language, distinguishing nouns from adjectives. “Dog” is a good noun, “brown” is a good adjective, and we want to be able to express facts such as “this dog is of the colour brown”. The word “colour” is the name of the property; the word “brown” is the value of the property.

The bit of linked data that is really easy to visualise and grasp is its graphical representation. In a linked data graph, customarily, you have ovals representing things – the nouns, objects, entities, etc.; labelled arrows representing the property names (or “predicates”); and rectangles representing literal values.

Given the confusion above, it’s not surprising that when you want to represent linked data using XML, it can be particularly confusing. Take a look at this bit of the RDF/XML spec. You can see the node and arc diagram, and the “striped” XML that is needed to represent it. “Striping” means that as you work your way up or down the document tree, you encounter elements that represent alternately (a) things and (b) the names of properties of these things.

Give up? So do most people.

But wait. Compared to RDF/XML, representing linked data in JSON-LD is a doddle! How so?

Basics of how JSON-LD works

Well, look at the remarkably simple JSON page to start with. There you see it: the most important JSON structure is the “object”, which is “an unordered set of name/value pairs”. Don’t worry about arrays for now. Just note that a value can also be an object, so that objects can nest inside each other.

the JSON object diagram

To map this onto linked data, just look carefully at the diagram, and figure that…

  1. a JSON object represents a thing, object, entity, etc.
  2. property names are represented by the name strings of its name/value pairs.

In essence, there you have it!

But in practice, there is a bit more to the formal RDF view of linked data.

  • Objects in RDF have an associated unique URI, which is what allows the linking. (No need to confuse things with blank nodes right now.)
  • To do this in JSON, objects must have a special name/value pair. JSON-LD uses the name “@id” as the special name, and its value must be the URI of the object.
  • Predicates – the names of properties – are represented in RDF by URIs as well.
  • To keep JSON-LD readable, the names stay as short and meaningful labels, but they need to be mapped to URIs.
  • If a property value is a literal, it stays as a plain value, and isn’t an object in its own right.
  • In RDF, literal values can have a data type. JSON-LD allows for this, too.

JSON-LD manages these tricks by introducing a section called the “context”. It is in the “context” that the JSON names are mapped to URIs. Here also, it is possible to associate data types with each property, so that values are interpreted in the way intended.

What of JSON arrays, then? In JSON-LD, the JSON array is used specifically to give multiple values of the same property. Essentially, that’s all. So each property name, for a given object, is only used once.
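Putting these pieces together, a minimal JSON-LD document might look like the following sketch. All of the names and URIs here are invented for illustration, not taken from InLOC or any published vocabulary; Python’s standard `json` module is enough to show the shape.

```python
import json

# An illustrative JSON-LD document: the context maps short names to URIs,
# "@id" gives the object its own URI, and an array holds multiple values
# of the same property.
doc = """
{
  "@context": {
    "name": "http://example.org/vocab#name",
    "knows": {"@id": "http://example.org/vocab#knows", "@type": "@id"}
  },
  "@id": "http://example.org/people/alice",
  "name": "Alice",
  "knows": [
    "http://example.org/people/bob",
    "http://example.org/people/carol"
  ]
}
"""
data = json.loads(doc)
print(data["@id"])          # the object's URI
print(len(data["knows"]))   # two values, so "knows" appears only once
```

Note how readable this stays: the body of the document uses plain names like “name” and “knows”, and all the linked-data machinery is tucked away in the context.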

Applying this to InLOC

At this point, it is probably getting hard to hold in one’s head, so take a look at the InLOC JSON-LD binding, where all these issues are illustrated.

InLOC is a specification designed for the representation of structures of learning outcomes, competence definitions, and similar kinds of thing. Using InLOC, authorities owning what are often called “frameworks” or (confusingly) “standards” can express their structures in a form that is completely explicit and machine processable, without the common reliance on print-style layout to convey the relationships between the different concepts. One of the vital characteristics of such structures is that one, higher-level competence can be decomposed in terms of several, lower-level competences.

InLOC was planned from the outset to work as linked data. Following many good examples, including the revered Dublin Core, the InLOC information model is expressed in terms of classes and properties, so there is a clear mapping to a linked-data style of model.

To be fully multilingual, InLOC also takes advantage of the “language map” feature of JSON-LD. Instead of giving just one text value to a property, the value of any human-language property is an object, within which the keys are language codes (such as “en” or “fr”), and the values are the property value in that language.
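As a sketch of what such a language map looks like — the `title` term and the URIs below are invented for illustration, not the actual InLOC binding — the context marks the property as a language container, and the value becomes an object keyed by language:

```python
import json

# Illustrative JSON-LD language map (names and URIs are invented).
doc = {
    "@context": {
        "title": {
            "@id": "http://example.org/vocab#title",
            "@container": "@language"   # marks "title" as a language map
        }
    },
    "@id": "http://example.org/loc/1",
    "title": {
        "en": "Communicating online",
        "fr": "Communiquer en ligne"
    }
}
# Pick the French value of the multilingual property:
print(doc["title"]["fr"])
```

A consumer can then select the value in the reader’s preferred language, falling back to another language if that one is absent.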

To see more, please take a look at the JSON-LD spec alongside the InLOC JSON-LD binding. And you are most welcome to a personal explanation if you get in touch with me.

To take home…

If you want to use JSON-LD, ensure that:

  • anything in your model that looks like a predicate is represented as a name in JSON object name/value pairs;
  • anything in your model that looks like a value is represented as the value of a JSON name/value pair;
  • you only use each property name once – if there are multiple values of that property, use a JSON array;
  • any entities, objects, things, or whatever you call them, that have properties, are represented as JSON objects;
  • and then, following the spec, carefully craft the JSON-LD context, to map the names onto URIs, and to specify any data types.

Try it and see. If you follow me, I think it will make sense – more sense than XML. It’s now (January 2014) a W3C Recommendation.

Educational Technology Standardization in Europe

The current situation in Europe regarding the whole process of standardization in the area of ICT for Learning Education and Training (LET) is up in the air just now, because of a conflict between how we, the participants, see it best proceeding, and how the formal de jure standards bodies are reinforcing their set-up.

My dealings with European learning technology standardization colleagues in the last few years have probably been as extensive as those of any other single CETIS staff member. Because of my work on European Learner Mobility and InLOC, since 2009 I have attended most of the meetings of the Workshop on Learning Technologies (which also has an official page), and I have also been involved centrally in the eCOTOOL European project and, to a lesser extent, in ICOPER.

So what is going on now — what is of concern?

In CETIS, we share some common views on how the standardization process should be taken forward. During the course of specification development, it is important to involve the people implementing the specifications, and not just people who theorise about them. In the case of educational technology, the companies most likely to use the interoperability specifications we are interested in tend to be small and agile. They are helped by specifications that are freely available, and available as soon as they are agreed. Having to pay for them is an unwelcome obstacle. They need to be able to implement the specifications without any constraints or legal worries.

However, over the course of this last year, CEN has reaffirmed long-standing positions which don’t match our requirements. The issue centres partly around perceived business models. The official standards bodies make money from selling copies of standards documents. In a paper-based, slow-moving world, one can see some sense in this. Documents may have been costly to produce, and businesses relying on a standard wanted a definitive copy. We see similar issues and arguments around academic publishing. In both fields, it is clear that the game is continuing to change, but hasn’t reached a new stable state yet. What we are saying is that, in our area, this traditional business model is never likely to be justified, and it’s difficult to imagine the revenues materialising.

The European learning technology standardization community have been lucky in past years, because the official standards bodies have tolerated activity which is not profitable for them. Now — we can only guess, because of financial belts being tightened — CEN at least is not going to continue tolerating this. Their position is set out in their freely available Guides.

Guide 10, the “Guidelines for the distribution and sales of CEN-CENELEC publications”, states:

Members shall exercise these rights in accordance with the provisions of this Guide and in a way that protects the integrity and value of the Publications, safeguards the interests of other Members and recognizes the value of the intellectual property that they contain and the costs to the CEN-CENELEC system of its development and maintenance.
In particular, Members shall not make Publications, including national implementations and definitive language versions, available free of charge to general users without the specific approval of the Administrative Boards of CEN and/or CENELEC.

And, just in case anyone was thinking of circumventing official sales by distributing early or draft versions, this is expressly forbidden.

6.1.1 Working drafts and committee drafts
The distribution of working drafts, committee drafts and other proceedings of CEN-CENELEC technical bodies and Working Groups is generally restricted to the participants and observers in those technical bodies and Working Groups and they shall not otherwise be distributed.

So there it is: specification development under the auspices of CEN is not allowed to be open, despite our view that openness works best in any case, and that it is genuinely needed in our area.

As if this were not difficult enough, the problems extend beyond the copyright of standards documentation. After a standard is agreed, it has to be “implemented”, of course. What kind of use is permitted, and under what terms? A fully open standard will allow any kind of use without royalty or any other kind of restriction, and this is particularly relevant to developers of free and open source software. One specification can build on another, and this can get very tricky if there are conditions attached to implementation of specifications. I’ve come across cases where a standardization body won’t reuse a specification because it is not clear that it is licenced freely enough.

So what is the CEN position on this? Guide 8 (December 2011) is the “CEN-CENELEC Guidelines for Implementation of the Common IPR Policy on Patent”. Guide 8 does say that the use of official standards is to be free of royalties, but at the end of Clause 4.1 one senses a slight hesitation:

The words “free of charge” in the Declaration Form do not mean that the patent holder is waiving all of its rights with respect to the essential patent. Rather, it refers to the issue of monetary compensation; i.e. that the patent holder will not seek any monetary compensation as part of the licensing arrangement (whether such compensation is called a royalty, a one-time licensing fee, etc.). However, while the patent holder in this situation is committing to not charging any monetary amount, the patent holder is still entitled to require that the implementer of the above document sign a licence agreement that contains other reasonable terms and conditions such as those relating to governing law, field of use, reciprocity, warranties, etc.

What does this mean in practice? It seems unclear in a way that could cause considerable concern. And when thinking of potential cumulative effects, Definition 2.9 defines “reciprocity” thus:

as used herein, requirement for the patent holder to license any prospective licensee only if such prospective licensee will commit to license, where applicable, its essential patent(s) or essential patent claim(s) for implementation of the same above document free of charge or under reasonable terms and conditions

Does that mean that the implementer of a standard can impose any terms and conditions that are arguably reasonable on its users, including payments? Could this be used to change the terms of a derivative specification? We — our educational technology community — really don’t need this kind of unclarity and uncertainty. Why not have just a plain, open licence?

What seems to be happening here is the opposite of the arrangement known as “copyleft”. While under “copyleft” any derivative work has to be similarly licenced, under the CEN terms it seems that patent holders can impose conditions, and can allow companies implementing their patents to impose more conditions or charge any reasonable fees. Perhaps CEN recognises that they can’t expect everyone to give them all of the cake? To stretch that metaphor a bit, maybe we are guessing that much of the educational technology community — the open section that we believe is particularly important — has no appetite for that kind of cake.

The CEN Workshop on Learning Technologies has suspended its own proceedings for reasons such as the above, and several of us are trying to think of how to go forward. It seems that it will be fruitless to try to continue under a strict application of the existing rules. The situation is difficult.

Perhaps we need a different approach to consensus process governance. Yes, that reads “consensus process governance”, a short phrase, apparently never used before, but packed full of interesting questions. If we have heavyweight bodies sitting on top of standardization, it is no wonder that people have to pay (in whatever way) for those staff, those premises, that bureaucracy.

It is becoming commonplace to talk of the “1%” extracting more and more resource from us “99%”. (See e.g. videos like this one.) And naturally any establishment tends to seek to preserve itself and feather its own nest. But the real risk is that our community is left out, progressively deprived of sustenance and air, with the strongest vested interests growing fatter, continually trying to tighten their grip on effective control.

So, it is all the more important to find a way forward that is genuinely collaborative, in keeping with a proper consensus, fair to all including those with less resource, here in standardization as in other places in society. I am personally up for collaborating with others to find a better way forward, and hope that we will make progress together under the CETIS umbrella — or indeed any other convenient umbrella that can be opened.

InLOC and OpenBadges: a reprise

InLOC is well designed to provide the conceptual “glue” or “thread” for holding together structures and planned pathways of achievement, which can be represented by Mozilla OpenBadges.

Since my last post — the last of the previous academic year, also about OpenBadges and InLOC — I have been invited to talk at OBSEG – the Open Badges in Scottish Education Group. This is a great opportunity, because it involves engaging with a community with real aspirations for using Open Badges. One of the things that interests people in OBSEG is setting up combinations of lesser badges, or pathways for several lesser badges to build up to greater badges. I imagine that if badges are set up in this way, the lesser badges are likely to become the stepping stones along the pathway, while it is the greater badge that is likely to be of direct interest to, e.g., employers.

All this is right in the main stream of what InLOC addresses. Remember that, using InLOC, one can set out and publish a structure or framework of learning outcomes, competenc(i)es, etc., (called “LOC definitions”) each one with its own URL (or IRI, to be technically correct), with all the relationships between them set out clearly (as part of the “LOC structure”).

The way in which these Scottish colleagues have been thinking of their badges brings home another key point to put the use of InLOC into perspective. As with so many certificates, awards, qualifications etc., part of the achievement is completion in compliance with the constraints or conditions set out. These are likely not to be learning outcomes or competences in their own right.

The simplest of these non-learning-outcome criteria could be attendance. Attendance, you might say, stands in for some kind of competence; but the kind of basic timekeeping and personal organisation ability that is evidenced by attendance is very common in many activities, so is unlikely to be significant in the context of a Badge awarded for something else. Other such criteria could be grouped together under “ability to follow instructions” or something similar. A different kind of criterion could be the kinds of character “traits” that are not expected to be learned. A person could be expected to be cheerful; respectful; tall; good-looking; or a host of other things not directly under their control, and either difficult or impossible to learn. These non-learning-outcome aspects of criteria are not what InLOC is principally designed for.

Also, over the summer, Mozilla’s Web Literacy Standard (“WebLitStd”) has been progressing towards version 1.0, to be featured in the upcoming MozFest in London. I have been tracking this with the help of Doug Belshaw, who after great success as an Open Badges evangelist has been focusing on the WebLitStd as its main protagonist. I’m hoping soon (hopefully by MozFest time) to have a version of the WebLitStd in InLOC, and this brings to the fore another very pragmatic question about using InLOC as a representation.

Many posts ago, I was drawing out the distinction between LOC (that is, Learning Outcome or Competence) definitions that are, on the one hand, “binary”, and on the other hand, “rankable”. This is written up in the InLOC documentation. “Binary” ones are the ones for which you can say, without further ado, that someone has achieved this learning outcome, or not yet achieved it. “Rankable” ones are ones where you can put people in order of their ability or competence, but there is no single set of criteria distinguishing two categories that one could call “achieved” and “not yet achieved”.

In the WebLitStd, it is probably fair to say that none of the “competencies” are binary in these terms. One could characterise them as rankable, though perhaps not fully: there may be two people with different configurations of a competency, as a result of different experiences, each of whom is better in some ways than the other, and conversely less good in other ways. It may well be similar in some of the Scottish work, or indeed in many other Badge criteria. So what to do for InLOC?

If we recognise a situation where the idea is to issue a badge for an achievement that is clearly not a binary learning outcome, we can outline a few stages of development of their frameworks, which would result in a progressively tighter matching to an InLOC structure or InLOC definitions. I’ll take the WebLitStd as illustrative material here.

First, someone may develop a badge for something that is not yet well-defined anywhere — it could have been conceived without reference to any existing standards. To illustrate this case, an example of a title could be “using Web sites”. There is no one component of the WebLitStd that covers “using the web”, and yet “using” it doesn’t really cover Web literacy as a whole. In this case, the Badge criteria would need to be detailed by the Badge awarder, specifically for that badge. What can still be done within OpenBadges is that there could be alignment information; however, it is not always entirely clear what the relationship is meant to be between a badge and a standard it is “aligned” to. The simplest possibility is that the alignment is to some kind of educational level. Beyond this it gets trickier.

A second possibility for a single badge would be to refer to an existing “rankable” definition. For example, consider the WebLitStd skill, “co-creating web resources”, which is part of the “sharing & collaborating” competency of the “Connecting” strand. To think in detail about how this kind of thing could be badged, we need to understand what would count (in the eye of the badge issuer) as “co-creating web resources”. There are very many possible examples that readily come to mind, from talking about what a web page could have on it, to playing a vital part in a team building a sophisticated web service. One may well ask, “what experiences do you have of co-creating web resources?” and, depending on the answer, one could roughly rank people in some kind of order of amount and depth of experience in this area. To create a meaningful badge, a more clearly cut line needs to be drawn. Just talking about what could be on a web page is probably not going to be very significant for anyone, as it is an extremely common experience. So what counts as significant? It depends on the badge issuer, of course, and to make a meaningful badge, the badge issuer will need to define what the criteria are for the badges to be issued.

A third and final stage, ideal for InLOC, would be if a badge is awarded with clearly binary criteria. In this case there is nothing standing in the way of having the criteria property of the Badge holding a URL for a concept directly represented as a binary InLOC LOCdefinition. There are some WebLitStd skills that could fairly easily be seen as binary. Take “distinguishing between open and closed licensing” as an example. You show people some licenses; either they correctly identify the open ones or they don’t. That’s (reasonably) clear cut. Or take “understanding and labeling the Web stack”. Given a clear definition of what the “Web stack” is, this appears to be a fairly clear-cut matter of understanding and memory.
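As a sketch of that third stage, a badge class could carry a criteria URL that resolves directly to a binary InLOC LOC definition. All the URLs below are invented for illustration, and the field names follow the general Open Badges badge-class style rather than any normative text:

```python
import json

# Hypothetical badge-class fragment: the "criteria" URL points straight
# at a binary InLOC LOC definition.  Every URL here is an invented example.
badge_class = {
    "name": "Open vs closed licensing",
    "description": "Awarded for correctly distinguishing open and closed licences.",
    "criteria": "http://example.org/inloc/weblit/open-closed-licensing",
    "issuer": "http://example.org/issuer.json",
}
# Anyone inspecting the badge can dereference the criteria URL and read
# the machine-processable definition of exactly what was achieved.
print(badge_class["criteria"])
```

The attraction is that the criteria then need no separate prose restatement: the badge and the learning-outcome definition share one authoritative, dereferenceable description.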

Working back again, we can see that in the third stage, a Badge can have criteria (not just alignments) which refer directly to InLOC information. At the second and first stage, badge criteria need something more than is clearly set out in InLOC information already published elsewhere. So the options appear to be:

  1. describing what the criteria are in plain text, with reference to InLOC information only through alignment; and
  2. defining an InLOC structure specifically for the badge, detailing the criteria.

The first of these options has its own challenges. It will be vital to coherence to ensure that the alignments are consistent with each other. This will be possible, for example, if the aspects of competence covered are separate (independent; orthogonal even). So, if one alignment is to a level, and the second to a topic area, that might work. But it is much less promising if more specific definitions are referred to.

(I’d like to write an example at this point, but can’t decide on a topic area — I need someone to give me their example and we can discuss it and maybe put it here.)

From the point of view of InLOC, the second option is much more attractive. In principle, any badge criteria could be analysed in sufficient detail to draw out the components which can realistically be thought of as learning outcomes — properties of the learners — that may be knowledge, skill, competence, etc. No matter how unusual or complex these are, they can in principle be expressed in InLOC form, and that will clarify what is really “aligned” with what.

I’ll say again, I would really like to have some well-worked-out examples here. So please, if you’re interested, get in touch and let’s talk through some examples of interest to you. I hope to be starting that in Glasgow this week.

A new (for me) understanding of standardization

When engaging deeply in any standardization project, as I have with the InLOC project, one is likely to get new insights into what standardization is, or should be. I tried to encapsulate this in a tweet yesterday, saying “Standardization, properly, should be the process of formulation and formalisation of the terms of collective commitment”.

Then @crispinweston replied “Commitment to whom and why? In the market, fellow standardisers are competitors.” I continued, with slight frustration at the brevity of the tweet format, “standards are ideally agreed between mutually recognising group who negotiate their common interest in commitment”. But when Crispin went on “What role do you give to the people expected to make the collective commitment in drafting the terms of that commitment?” I knew it was time to revert from micro-blogging to macro-blogging, so to speak.

Crispin casts me in the position of definer of roles — I disclaim that. I am trying, rather, firstly to observe and generalise from my observations about what standardization is, when it is done successfully, whether or not people use or think of the term “standardization”, and secondly, to intuit a good and plausible way forward, perhaps to help grow a consensus about what standardization ought to be, within the standardization community itself.

One of the challenges of the InLOC project was that the project team started with more or less a carte blanche. Where there is a lot of existing practice, standardization can (in theory at least) look at existing practice, and attempt to promote standardization on the best aspects of it, knowing that people do it already, and that they might welcome (for various reasons) a way to do it in just one way, rather than many. But in the case of InLOC, and any other “anticipatory” standard, people aren’t doing closely related things already. What they are doing is publishing many documents about the knowledge, skills, competence, or abilities (or “competencies”) that people need for particular roles, typically in jobs, but sometimes as learners outside of employment. However, existing practice says very little about how these should be structured, and interrelated, in general.

So, following this “anticipatory” path, you get to the place where you have the specification, but not the adoption. How do you then get the adoption? It can only happen if you have been either lucky, in that you’ve formulated a need that people naturally come to see, or persuasive, in that you persuade people successfully that it is what they really (really) want.

The way of following, rather than anticipating, practice certainly does look the easier, less troubled, surer path. Following in that way, there will be a “community” of some sort. Crispin identifies “fellow standardisers” as “competitors” in the market. “Coopetition” is a now rather old neologism that comes to mind. So let me try to answer the spirit at least of Crispin’s question — not the letter, as I am seeing myself here as more of an ethnographer than a social engineer.

I envisage many possible kinds of community coming together to formulate the terms of their collective commitments, and there may be many roles within those communities. I can’t personally imagine standard roles. I can imagine the community led by authority, imposing a standard requirement, perhaps legally, for regulation. I can imagine a community where any innovator comes up with a new idea for agreeing some way of doing things, and that serves to focus a group of people keen to promote the emerging standard.

I can imagine situations where an informal “norm” is not explicitly formulated at all, and is “enforced” purely by social peer pressure. And I can imagine situations where the standard is formulated by a representative body of appointees or delegates.

The point is that I can see the common thread linking all kinds of these practices, across the spectrum of formality–informality. And my view is that perhaps we can learn from reflecting on the common points across the spectrum. Take an everyday example: the rules of the road. These are both formal and informal; and enforced both by traffic authorities (e.g. police) and by peer pressure (often mediated by lights and/or horn!)

When there is a large majority of a community in support of norms, social pressure will usually be adequate, in the majority of situations. Formal regulation may be unnecessary. Regulation is often needed where there is less of a complete natural consensus about the desirability of a norm.

Formalisation of a norm or standard is, to me, a mixed blessing. It happens — indeed it must happen at some stage if there is to be clear and fair legal regulation. But the formalisation of a standard takes away the natural flexibility of a community’s response both to changing circumstances in general, and to unexpected situations or exceptions.

Time for more comment? You would be welcome.