What is CEN TC 353 becoming?

CEN TC 353 was set up (about seven years ago) as the European Standardization Technical Committee (“TC”) responsible for “ICT for Learning, Education and Training” (LET). At the end of the meeting I will be describing below, we recognised that the title has led some people to think it is a committee for standardising e-learning technology, which is far from the truth. I would describe its business as being, effectively, the standardization of the representation of information about LET, so that it can be used in (any kind of) ICT systems. We want the ICT systems we use for LET to be interoperable, and we want to avoid the problems that come from vendors all defining their own ways of storing and handling information, thus making it hard to migrate to alternative systems. Perhaps the clearest evidence of where TC 353 works comes from the two recent European Standards to our name. EN 15981, “EuroLMAI”, is about information on learner results from any kind of learning, specifically including the Diploma Supplement and the UK HEAR, documents that record higher education achievements. EN 15982, “MLO” (Metadata for Learning Opportunities) is the European equivalent of the UK’s XCRI, “eXchanging Course-Related Information”, mainly covering the information used to advertise courses, which can be of any kind. Neither standard is tied to the mode of learning, technology-enhanced or not; indeed we have no EN standards about e-learning as such. So that’s straight, then, I trust …

At this CEN TC 353 meeting on 2014-04-08 there were delegates from the National Bodies of Finland, France (2), Germany, Greece, Norway, Sweden (2) and the UK (me), plus the TC 353 secretary. That’s not very many for an active CEN TC. Many of the people there have been working with CETIS people, including me, for several years. You could see us as the dedicated, committed few.

The main substance of the day’s discussion was about two proposed new work items (“NWIs”), one from France, one from Sweden, and the issues arising from them. I attended the meeting as the sole delegate (with the high-sounding designation, “head of delegation”) from BSI, with a steer from colleagues that neither proposal was ready for acceptance. That, at least, was agreed by the meeting. But something much more significant appeared to happen, which seemed to me like a subtle shift in the identity of TC 353. This is entirely appropriate, given that the CEN Workshop on Learning Technologies (WS-LT), the older, less formal body, is now accepted as defunct — CEN are maintaining their hard line on process and IPR, which makes running an open CEN workshop effectively impossible.

No technical standardization committee that I know of is designed to manage pre-standardization activities. Floating new ideas, research, project work, comparing national initiatives, and so on, all need to happen before a proposal reaches a committee of this kind, because TC work, whether in CEN or in our related ISO JTC1 SC36, tends to be the revision of documents presented to the committee. It’s very difficult and time-consuming to construct a standard from a shaky foundation simply by requesting formal input and votes from national member bodies. And when a small team is set up to work under the constraints of a bygone era of confidentiality, in some cases it has proved insurmountably difficult to reach a good consensus.

Tore Hoel, a long-time champion of the WS-LT, admitted that it is now effectively defunct. I sadly agree, while appreciating all the good work it has done. So TC 353 has to explore a new role in the absence of what was its own Workshop, which used to do the background work and to suggest the areas of work that needed attention. Tore has recently blogged what he thinks should be the essential characteristics of a future platform for European open standards work, and I very much agree with him. He uses the Open Stand principles as a key reference.

So what could this new role be? The TC members are well connected in our field, and while they do not themselves do much IT systems implementation, they know those people, and are generally in touch with their views. The TC members also have a good overview of how the matters of interest to TC 353 relate to neighbouring issues and stakeholders. We believe that the TC is, collectively, in quite a good position to judge when it is worth working towards a new European Standard, which is, after all, its raison d’être. We can’t see any other body that could perform this role as well, in this specific area.

As we were in France, the famous verse of Rouget de Lisle’s “Marseillaise” came to mind. “Aux armes, citoyens, Formez vos bataillons!” the TC could be saying. What I really like, on reflection, about this aspect of the French national anthem is that it isn’t urging citizens to join some pre-arranged (e.g. royal) battalions, but to create their own. Similarly, the TC could say, effectively, “now is the time to act — do it in your own ways, in your own organisations, whatever they are — but please bring the results together for us to formalise when they are ready.”

For me, this approach could change the whole scene. Instead of risking being an obstacle to progress, the CEN TC 353 could add legitimacy and coherence to the call for pre-standardization activity in chosen areas. It would be up to the individuals listening (us wearing different hats) to take up that challenge in whatever ways we believe are best. Let’s look at the two proposals from that perspective.

AFNOR, the French standards body, was suggesting working towards a European Standard (EN) with the title “Metadata for Learning Opportunities part 2: Detailed Description of Training and Grading (face to face, distance or blended learning and MOOCs): Framework and Methodology”. The point is to extend MLO (EN 15982), perhaps including some of those characteristics of courses (learning opportunities), perhaps drawn from the Norwegian CDM or its French derivative, that didn’t make it into the initial version of MLO for advertising. From time to time there have been related conversations in the UK about the parts of the wider vision for XCRI that didn’t make it into XCRI-CAP (“Course Advertising Profile”). But they probably didn’t make it for good reason: maybe there wasn’t agreement about what they should be, there wasn’t any pressing need, or there weren’t enough implementations of them to form the basis for effective consensus.

Responding to this, I can imagine BSI and CETIS colleagues in the UK seriously insisting, first, that implementation should go hand in hand with specification. We need to be properly motivated by practical use cases, and we need to test ideas out in implementation before agreeing to standardize them. I could imagine other European colleagues insisting that the ideas should be accepted by all the relevant EC DGs before they have a chance of success in official circles. And so on — we can all do what we are best at, and bring those contributions together. And perhaps we also need to collaborate between national bodies at this stage. It would make sense, and perhaps bring greater commitment from the national bodies and other agencies, if they were directly involved, rather than simply sending people to remote-feeling committees of standards organisations. In this case, it would be up to the French, whose Ministry of Education seems to want something like this, to arrange to consult with others, and to put together an implemented proposal that has a good chance of achieving European consensus.

We agreed that it was a good idea for the French proposal to use the “MOOC” label to gain interest and motivation, while the work would in no way be limited to MOOCs. And it’s important to get on board both some MOOC providers and, related though different, some of the agencies that aggregate information about MOOCs (etc.) and offer information about them through portals so that people can find appropriate ones. The additional new metadata would of course be designed to make that search more effective, in that more of the things that people ask about will be modelled explicitly.

So, let’s move on to the Swedish proposal. This was presented under the title “Linked and Open Data for Learning and Education”, based on their national project “Linked and Open Data in Schools” (LODIS). We agreed that it isn’t really on for a National Body simply to propose a national output for European agreement without giving evidence of why it would be helpful. In the past, the Workshop would have been a fair place to bring this kind of raw idea, and we could have all pitched in with anything relevant. But under our new arrangements, we need the Swedes themselves to lead some cross-European collaboration to fill in the motivation, and to do the necessary research and comparison.

There are additional questions also relevant to both proposals. How will they relate to the big international and American players? For example, are we going to get schema.org to take these ideas on, in the fullness of time? How so? Does it matter? (I’m inclined to think it does matter.)

I hope the essentials of the new approach are apparent in both cases. The principle is that TC 353 acts as a mediator and referee, saying “OK” to the idea that some area might be ripe for further work, and encouraging people to get on with it. I would, however, suggest that three vital conditions should apply, for this approach to be effective as well as generally acceptable.

  1. The principal stakeholders have to arrange the work themselves, with enough trans-national collaboration to be reasonably sure that the product will gain the European consensus needed in the context of CEN.
  2. The majority of the drafting and testing work is done clearly before a formal process is started in CEN. In our sector, it is vital that the essential ideas are free and open, so we want an openly licensed document to be presented to the TC as a starting point, as close as can be to the envisioned finishing point. CEN will still add value through the formal process and formal recognition, but the essential input will still be openly and freely licensed for others to work with in whatever way they see fit.
  3. The TC must assert the right to stop and revoke a CEN work item, if it turns out that it is not filling a genuine European need. There is room for improvement here over the past practice of the TC and the WS-LT. It is vital to our reputation and credibility, and to the ongoing quality of our output, that we are prepared to reject work that is not of the right quality for CEN. Only in this way can CEN stakeholders have confidence in a process that allows self-organising groups to do all the spadework, prior to and separate from formal CEN process and oversight.

At the meeting we also heard that the ballot on the TC 353 marketing website was positive. (Disclosure: I am a member of the TC 353 “Communications Board” who advised on the content.) Hopefully, a consequence of this will be that we are able to use the TC 353 website both to flag areas for which TC 353 believes there is potential for new work, and to link to the pre-standardization work that is done in those areas that have been encouraged by the TC, wherever that work is done. We hope that this will all help significantly towards our aim of effectively open standardization work, even where the final resulting EN standards remain as documents with a price tag.

I see the main resolutions made at the meeting as enacting this new role. TC 353 is encouraging proposers of new work to go ahead and develop mature open documentation, and clear standardization proposals, in whatever European collaborations they see fit, and bring them to a future TC meeting. I’d say that promises a new chapter in the work of the TC, which we should welcome, and we should play our part in helping it to work effectively for the common good.

JSON-LD: a useful interoperability binding

Over the last few months I’ve been exploring and detailing a provisional binding of the InLOC spec to JSON-LD (spec; site). My conclusion is that JSON is better matched to linked data than XML is, if you understand how to structure JSON in the JSON-LD way. Here are my reflections, which I hope add something to the JSON-LD official documentation.

Let’s start with XML, as it is more familiar to most non-programmers, thanks to its similarities with HTML. XML offers two kinds of structures: elements and attributes. Elements are the pieces of XML that are bounded by start and end tags (or are simply empty tags). They may nest inside other elements. Attributes are name/value pairs that exist only within element start tags. The distinction is useful for marking up text documents, as the tags, along with their attributes, are added to the underlying text, without altering it. But for data, the distinction is less helpful. In fact, some XML specifications use almost no attributes. Generally, if you are using XML to represent data, you can change attributes into elements, with the attribute name becoming a contained element name, and the attribute value becoming text contained within the new element.
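
To make that concrete, here is a tiny sketch (with invented tag names, not from any real specification) showing the same fact expressed both ways:

    <!-- attribute style: the value lives inside the start tag -->
    <dog colour="brown"/>

    <!-- element style: the attribute becomes a contained element -->
    <dog>
      <colour>brown</colour>
    </dog>

Both carry exactly the same data; a specification author has to choose one form, or else allow both.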

Confused? You’d be in good company. Many people have complained about this aspect of XML. It gives you more than enough “rope to hang yourself with”.

Now, if you’re writing a specification that might be even remotely relevant to the world of linked data, it is really important that you write your specification in a way that clearly distinguishes between the names of things – objects, entities, etc. – and the names of their properties, attributes, etc. It’s a bit like, in natural language, distinguishing nouns from adjectives. “Dog” is a good noun, “brown” is a good adjective, and we want to be able to express facts such as “this dog is of the colour brown”. The word “colour” is the name of the property; the word “brown” is the value of the property.

The bit of linked data that is really easy to visualise and grasp is its graphical representation. In a linked data graph, customarily, you have ovals to represent things – the nouns, objects, entities, etc.; labelled arrows to represent the property names (or “predicates”); and rectangles to represent literal values.

Given the confusion above, it’s not surprising that when you want to represent linked data using XML, it can be particularly confusing. Take a look at this bit of the RDF/XML spec. You can see the node and arc diagram, and the “striped” XML that is needed to represent it. “Striping” means that as you work your way up or down the document tree, you encounter elements that represent alternately (a) things and (b) the names of properties of these things.
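
For what it’s worth, here is a minimal sketch of that striping, reusing the dog and colour example from above (the example.org namespace and terms are invented for illustration):

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:ex="http://example.org/terms#">
      <!-- (a) a node element: the thing being described -->
      <rdf:Description rdf:about="http://example.org/dog42">
        <!-- (b) a property element: the labelled arc -->
        <ex:colour>
          <!-- (a) again: the value, itself a thing with its own URI -->
          <rdf:Description rdf:about="http://example.org/colours/brown"/>
        </ex:colour>
      </rdf:Description>
    </rdf:RDF>

The element nesting alternates between things and property names — hence the stripes.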

Give up? So do most people.

But wait. Compared to RDF/XML, representing linked data in JSON-LD is a doddle! How so?

Basics of how JSON-LD works

Well, look at the remarkably simple JSON page to start with. There you see it: the most important JSON structure is the “object”, which is “an unordered set of name/value pairs”. Don’t worry about arrays for now. Just note that a value can also be an object, so that objects can nest inside each other.

[Image: the JSON object diagram]

To map this onto linked data, just look carefully at the diagram, and figure that…

  1. a JSON object represents a thing, object, entity, etc.
  2. property names are represented by the strings.

In essence, there you have it!

But in practice, there is a bit more to the formal RDF view of linked data.

  • Objects in RDF have an associated unique URI, which is what allows the linking. (No need to confuse things with blank nodes right now.)
  • To do this in JSON, objects must have a special name/value pair. JSON-LD uses the name “@id” as the special name, and its value must be the URI of the object.
  • Predicates – the names of properties – are represented in RDF by URIs as well.
  • To keep JSON-LD readable, the names stay as short and meaningful labels, but they need to be mapped to URIs.
  • If a property value is a literal, it stays as a plain value, and isn’t an object in its own right.
  • In RDF, literal values can have a data type. JSON-LD allows for this, too.

JSON-LD manages these tricks by introducing a section called the “context”. It is in the “context” that the JSON names are mapped to URIs. Here also, it is possible to associate data types with each property, so that values are interpreted in the way intended.

What of JSON arrays, then? In JSON-LD, the JSON array is used specifically to give multiple values of the same property. Essentially, that’s all. So each property name, for a given object, is only used once.
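
To pull these pieces together, here is a minimal sketch of a complete JSON-LD document. The example.org identifiers are invented, and the property URIs are borrowed from schema.org simply for familiarity:

    {
      "@context": {
        "name": "http://schema.org/name",
        "birthDate": {
          "@id": "http://schema.org/birthDate",
          "@type": "http://www.w3.org/2001/XMLSchema#date"
        },
        "knows": {
          "@id": "http://schema.org/knows",
          "@type": "@id"
        }
      },
      "@id": "http://example.org/people/alice",
      "name": "Alice",
      "birthDate": "1980-01-01",
      "knows": [
        "http://example.org/people/bob",
        "http://example.org/people/carol"
      ]
    }

The “@id” gives the object its URI; the “@context” maps each short name to a property URI, types “birthDate” as a date, and declares that values of “knows” are themselves URIs; and the array gives “knows” two values without ever repeating the property name.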

Applying this to InLOC

At this point, it is probably getting hard to hold in one’s head, so take a look at the InLOC JSON-LD binding, where all these issues are illustrated.

InLOC is a specification designed for the representation of structures of learning outcomes, competence definitions, and similar kinds of thing. Using InLOC, authorities owning what are often called “frameworks” or (confusingly) “standards” can express their structures in a form that is completely explicit and machine-processable, without the common reliance on print-style layout to convey the relationships between the different concepts. One of the vital characteristics of such structures is that one higher-level competence can be decomposed into several lower-level competences.

InLOC was planned from the outset to work as linked data. Following many good examples, including the revered Dublin Core, the InLOC information model is expressed in terms of classes and properties. Thus, it is clear that there is a mapping to a linked-data style model.

To be fully multilingual, InLOC also takes advantage of the “language map” feature of JSON-LD. Instead of just giving one text value to a property, the value of any human-language property is an object, within which the keys are the two-letter language codes, and the values are the property value in that language.
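
As a sketch of how this looks (using an invented property URI, not the actual InLOC term), a language-mapped title in JSON-LD might be:

    {
      "@context": {
        "title": {
          "@id": "http://example.org/terms#title",
          "@container": "@language"
        }
      },
      "title": {
        "en": "Communication skills",
        "fr": "Compétences en communication"
      }
    }

It is the "@container": "@language" declaration in the context that tells a JSON-LD processor to read the inner keys as language codes rather than as property names.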

To see more, please take a look at the JSON-LD spec alongside the InLOC JSON-LD binding. And you are most welcome to a personal explanation if you get in touch with me.

To take home…

If you want to use JSON-LD, ensure that:

  • anything in your model that looks like a predicate is represented as a name in JSON object name/value pairs;
  • anything in your model that looks like a value is represented as the value of a JSON name/value pair;
  • you only use each property name once – if there are multiple values of that property, use a JSON array;
  • any entities, objects, things, or whatever you call them, that have properties, are represented as JSON objects;
  • and then, following the spec, carefully craft the JSON-LD context, to map the names onto URIs, and to specify any data types.

Try it and see. If you follow me, I think it will make sense – more sense than XML. JSON-LD is now (January 2014) a W3C Recommendation.

A new (for me) understanding of standardization

When engaging deeply in any standardization project, as I have with the InLOC project, one is likely to get new insights into what standardization is, or should be. I tried to encapsulate this in a tweet yesterday, saying “Standardization, properly, should be the process of formulation and formalisation of the terms of collective commitment”.

Then @crispinweston replied “Commitment to whom and why? In the market, fellow standardisers are competitors.” I continued, with slight frustration at the brevity of the tweet format, “standards are ideally agreed between mutually recognising group who negotiate their common interest in commitment”. But when Crispin went on “What role do you give to the people expected to make the collective commitment in drafting the terms of that commitment?” I knew it was time to revert from micro-blogging to macro-blogging, so to speak.

Crispin casts me in the position of definer of roles — I disclaim that. I am trying, rather, firstly to observe and generalise from my observations about what standardization is, when it is done successfully, whether or not people use or think of the term “standardization”, and secondly, to intuit a good and plausible way forward, perhaps to help grow a consensus about what standardization ought to be, within the standardization community itself.

One of the challenges of the InLOC project was that the project team started with more or less carte blanche. Where there is a lot of existing practice, standardization can (in theory at least) look at existing practice and attempt to promote standardization on its best aspects, knowing that people do it already, and that they might welcome (for various reasons) a way to do it in just one way, rather than many. But in the case of InLOC, and any other “anticipatory” standard, people aren’t doing closely related things already. What they are doing is publishing many documents about the knowledge, skills, competence, or abilities (or “competencies”) that people need for particular roles, typically in jobs, but sometimes as learners outside of employment. However, existing practice says very little about how these should be structured and interrelated in general.

So, following this “anticipatory” path, you get to the place where you have the specification, but not the adoption. How do you then get the adoption? Only by being either lucky, in that you have formulated a need that people naturally come to see, or persuasive, in that you successfully convince people that it is what they really (really) want.

The way of following, rather than anticipating, practice certainly does look the easier, less troubled, surer path. Following in that way, there will be a “community” of some sort. Crispin identifies “fellow standardisers” as “competitors” in the market. “Coopetition” is a now rather old neologism that comes to mind. So let me try to answer the spirit at least of Crispin’s question — not the letter, as I am seeing myself here as more of an ethnographer than a social engineer.

I envisage many possible kinds of community coming together to formulate the terms of their collective commitments, and there may be many roles within those communities. I can’t personally imagine standard roles. I can imagine the community led by authority, imposing a standard requirement, perhaps legally, for regulation. I can imagine a community where any innovator comes up with a new idea for agreeing some way of doing things, and that serves to focus a group of people keen to promote the emerging standard.

I can imagine situations where an informal “norm” is not explicitly formulated at all, and is “enforced” purely by social peer pressure. And I can imagine situations where the standard is formulated by a representative body of appointees or delegates.

The point is that I can see the common thread linking all kinds of these practices, across the spectrum of formality–informality. And my view is that perhaps we can learn from reflecting on the common points across the spectrum. Take an everyday example: the rules of the road. These are both formal and informal; and enforced both by traffic authorities (e.g. police) and by peer pressure (often mediated by lights and/or horn!)

When there is a large majority of a community in support of norms, social pressure will usually be adequate, in the majority of situations. Formal regulation may be unnecessary. Regulation is often needed where there is less of a complete natural consensus about the desirability of a norm.

Formalisation of a norm or standard is, to me, a mixed blessing. It happens — indeed it must happen at some stage if there is to be clear and fair legal regulation. But the formalisation of a standard takes away the natural flexibility of a community’s response both to changing circumstances in general, and to unexpected situations or exceptions.

Time for more comment? You would be welcome.

What is my work?

Is there a good term for my specialist area of work for CETIS? I’ve been trying out “technology for learner support”, but that doesn’t fully seem to fit the bill. If I try to explain, reflecting on 10 years (as of this month) involvement with CETIS, might readers be able to help me?

Back in 2002, CETIS (through the CRA) had a small team working with “LIPSIG”, the CETIS special interest group involved with Learner Information (the “LI” of “LIPSIG”). Except that “learner information” wasn’t a particularly good title. It was also about the technology (soon to be labelled “e-portfolio”) that gathered and managed certain kinds of information related to learners, including their learning, their skills – abilities – competence, their development, and their plans. It was therefore also about PDP — Personal Development Planning — and PDP was known even then by its published definition “a structured and supported process undertaken by an individual to reflect upon their own learning, performance and/or achievement and to plan for their personal, educational and career development”.

There’s that root word, support (appearing as “supported”), and PDP is clearly about an “individual” in the learner role. Portfolio tools were, and still are, thought of as supporting people: in their learning; with the knowledge and skills they may attain, and the evidence of these through their performance; and in their development as people, including their learning and work roles.

If you search the web now for “learner support”, you may get many results about funding — OK, that is financial support. Narrowing the search down to “technology for learner support”, the JISC RSC site mentions enabling “learners to be supported with their own particular learning issues”, and this doesn’t obviously imply support for everyone, but rather for those people with “issues”.

As web search is not much help, let’s take a step back, and try to see this area in a wider perspective. Over my 10 years involvement with CETIS, I have gradually come to see CETIS work as being in three overlapping areas. I see educational (or learning) technology, and related interoperability standards, as being aimed at:

  • institutions, to help them manage teaching, learning, and other processes;
  • providers of learning resources, to help those resources be stored, indexed, and found when appropriate;
  • individual learners;
  • perhaps there should be a branch aimed at employers, but that doesn’t seem to have been salient in CETIS work up to now.

Relatively speaking, there have always seemed to be plenty of resources to back up CETIS work in the first two areas, perhaps because we are dealing with powerful organisations and large amounts of money. But, rather than get involved in those two areas, I have always been drawn to the third — to the learner — and I don’t think it’s difficult to understand why. When I was a teacher for a short while, I was interested not in educational administration or writing textbooks, but in helping individuals learn, grow and develop. Similar themes pervade my long-term interests in psychology, psychotherapy and counselling; my PhD was about cognitive science; my university teaching was about human-computer interaction — all to do with understanding and supporting individuals, and much of it involving the use of technology.

The question is, what does CETIS do — what can anyone do — for individual learners, either with the technology, or with the interoperability standards that allow ICT systems to work together?

The CETIS starting point may have been about “learner information”, but who benefits from this information? Instead of focusing on learners’ needs, it is all too easy for institutions to understand “learner information” as information that enables institutions to manage and control the learners. Happily though, the group of e-portfolio systems developers frequenting what became the “Portfolio” SIG (including Pebble, CIEPD and others) were keen to emphasise control by learners, and when they came together over the initiative that became Leap2A, nearly six years ago, the focus on supporting learners and learning was clear.

So at least then CETIS had a clear line of work in the area of e-portfolio tools and related interoperability standards. That technology is aimed at supporting personal, and increasingly professional, development. Partly, this can be by supporting learners taking responsibility for tracking the outcomes of their own learning. Several generic skills or competences support their development as people, as well as their roles as professionals or learners. But also, the fact that learners enter information about their own learning and development on the portfolio (or whatever) system means that the information can easily be made available to mentors, peers, or whoever else may want to support them. This means that support from people is easier to arrange, and better informed, thus likely to be more effective. Thus, the technology supports learners and learning indirectly, as well as directly.

That’s one thing that the phrase “technology for learner support” may miss — support for the processes of other people supporting the learner.

Picking up my personal path … building on my involvement in PDP and portfolio technology, it became clear that current representations of information about skills and competence were not as effective as they could be in supporting, for instance, the transition from education to work. So it was that I found myself involved in the area that is currently the main focus of my work, both for CETIS and also on my own account, through the InLOC project. This relates to learners rather indirectly: InLOC is enabling the communication and reuse of definitions and descriptions of learning outcomes and competence information, and particularly structures of sets of such definitions — which have up to now escaped an effective and well-adopted standard representation. Providing this will mean that it will be much easier for educators and employers to refer to the same definitions; and that should make a big positive difference to learners being able to prepare themselves effectively for the demands of their chosen work, or perhaps enable them to choose courses that will lead to the kind of work they want. Easier, clearer and more accurate descriptions of abilities must surely support all processes relating to people acquiring and evidencing abilities, and making use of related evidence towards their jobs, their well-being, and maybe the well-being of others.

My most recent interests are evidenced in my last two blog posts — Critical friendship pointer and Follower guidance: concept and rationale — where I have been starting to grapple with yet more complex issues. People benefit from appropriate guidance, but it is unlikely there will ever be the resources to provide this guidance from “experts” to everyone — if that is even what we really wanted.

I see these issues also as part of the broad concern with helping people learn, grow and develop. To provide full support without information technology only looks possible in a society that is stable — where roles are fixed and everyone knows their place, and the place of others they relate to. In such a traditionalist society, anyone and everyone can play their part maintaining the “social order” — but, sadly, such a fixed social order does not allow people to strike out in their own new ways. In any case, that is not our modern (and “modernist”) society.

I’ve just been reading Hermann Hesse’s “Journey to the East” — a short, allegorical work. (It has been reproduced online.) Interestingly, it describes symbolically the kind of processes that people might have to go through in the course of their journey to personal enlightenment. The description is in no way realistic. Any “League” such as Hesse described, dedicated to supporting people on their journey, or quest, would in practice be able to support only very few at most. Hesse had no personal information technology.

Robert K. Greenleaf was inspired by Hesse’s book to develop his ideas on “Servant Leadership”. His book of that name was put together in 1977, still before the widespread use of personal information technology, and the recognition of its potential. This idea of servant leadership is also very clearly about supporting people on their journey; supporting their development, personally and professionally. What information would be relevant to this?

Providing technology to support peer-to-peer human processes seems a very promising approach to allowing everyone to find their own, unique and personal way. What I wrote about follower guidance is related to this end: to describe ways by which we can offer each other helpful mutual support to guide our personal journeys, in work as well as learning and potentially other areas of life. Is there a short name for this? How can technology support it?

My involvement with Unlike Minds reminds me that there is a more important, wider concept than personal learning, which needs supporting. We should be aspiring even more to support personal well-being. And one way of doing this is through supporting individuals with information relevant to the decisions they make that affect their personal well-being. This can easily be seen to include: what options there are; ideas on how to make decisions; and what the consequences of those decisions may be. It is an area which has been more than touched on under the heading “Information, Advice and Guidance”.

I mentioned the developmental models of William G Perry and Robert Kegan back in my post earlier this year on academic humility. An understanding of these aspects of personal development is an essential part of what I have come to see as needed. How can we support people’s movement through Perry’s “positions”, or Kegan’s “orders of consciousness”? Recognising where people are in this, developmental, dimension is vital to informing effective support in so many ways.

My professional interest, where I have a very particular contribution, is around the representation of the information connected with all these areas. That’s what we try to deal with for interoperability and standardisation. So what do we have here? A quick attempt at a round-up…

  • Information about people (learners).
  • Information about what they have learned (learning outcomes, knowledge, skill, competence).
  • Information that learners find useful for their learning and development.
  • Information about many subtler aspects of personal development.
  • Information relevant to people’s well-being, including
    • information about possible choices and their likely outcomes
    • information about individual decision-making styles and capabilities
    • and, as this is highly context-dependent, information about contexts as well.
  • Information about other people who could help them
    • information supporting how to find and relate to those people
    • information supporting those relationships and the support processes
    • and in particular, the kind of information that would promote a trusting and trusted relationship — to do with personal values.

I have the strong sense that this all should be related. But the field as a whole doesn’t seem to have a name. I am clear that it is not just the same as the other two areas (in my mind at least) of CETIS work:

  • information of direct relevance to institutions
  • information of direct relevance to content providers.

Of course my own area of interest is also relevant to those other players. Personal well-being is vital to the “student experience”, and thus to student retention, as well as to success in learning. That is of great interest to institutions. Knowing about individuals is of great value to those wanting to sell them all kinds of services, but particularly services to do with learning and resources supporting learning.

But now I ask people to think: where there is an overlap between information that the learner has an interest in, and information about learners of interest to institutions and content providers, surely the information should be under the control of the individual, not of those organisations?

What is the sum of this information?

Can we name that information and reclaim it?

Again, can people help me name this field, so my area of work can be better understood and recognised?

If you can, you earn 10 years worth of thanks…

Developing a new approach to competence representation

InLOC is a European project organised to come up with a good way of communicating structures or frameworks of competence, learning outcomes etc. We’ve now produced our interim reports for consultation: the Information Model and the Guidelines. We welcome feedback from everyone, to ensure this becomes genuinely useful and not just another academic exercise.

The reason I’ve not written any blog posts for a few weeks is that so much of my energy has been going into InLOC, and for good reason. It has been a really exciting time working with the team to develop a better approach to representing these things. Many of us have been pushing in this direction for years, without ever quite getting there. Several projects have come close, including, last year, InteropAbility (JISC page; project wiki) and eCOTOOL (project web site; my Competence Model page) — I’ve blogged about these before, and we have built on ideas from both of them, as well as from several other sources: you may be surprised at the range and variety of “stakeholders” in this area that we have assembled within InLOC. Doing the thinking for the Logic of Competence series was of course useful background, but it did not quite get there either.

What I want to announce now is that we are looking for the widest possible feedback as further input to the project. It’s all too easy for people like us, familiar with interoperability specifications, simply to cook up a new one. It is far more of a challenge, as well as hugely more worthwhile and satisfying, to create something genuinely useful, which people will actually use. We have been looking at other groups’ work for several months now, and discussing the rich, varied, and sometimes confusing ideas going around the community. Now that we have made our own initial synthesis, and handed in the “interim” draft agreements, it is an excellent time to carry forward the wide and deep consultation process. We want to discuss with people whether our InLOC format will work for them; whether they can adopt, use or recommend it (or whatever their role implies regarding specifications); or what improvements need to be made so that they are most likely to take it on for real.

By the end of November we are planning to have completed this intense consultation, and we hope to end up with the desired genuinely useful results.

There are several features of this model which may be innovative (or seem so until someone points out somewhere they have been done before!). A rough sketch follows the list.

  1. Relationships aren’t just direct as in RDF — there is a separate class to contain the relationship information. This allows extra information, including a number, vital for defining levels.
  2. We distinguish the normal simple properties, with literal objects, which are treated as integral parts of whatever it is (including: identifier, title, description, dates, etc.) from what could be called “compound properties”. Compound properties, that have more than one part to their range, are a little like relationships, and we give them a special property class, allowing labels, and a number (like in relationships).
  3. We have arranged for the logical structure, including the relationships and compound properties, to be largely independent of the representation structure. This allows several variant approaches to structuring, including tree structures, flat structures, or Atom-like structures.
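
To make the first two features more tangible, here is a rough sketch in the JSON style of the earlier examples — the names here are purely illustrative, not the actual InLOC terms, which are defined in the interim Information Model:

    {
      "id": "http://example.org/framework/communication",
      "title": "Communicate effectively",
      "relationships": [
        {
          "type": "hasLevel",
          "number": 2,
          "target": "http://example.org/framework/communication-level2"
        }
      ]
    }

The point to notice is that the relationship is an object in its own right, so it can carry a number (here defining a level) alongside the link itself — something a bare RDF triple cannot do without extra machinery.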

The outcome is something slightly reminiscent both of Atom itself, and of Topic Maps. Neither is much like RDF, which uses the simplest possible building blocks, at the cost of needing harder-to-grasp constructs like blank nodes. Being hard to grasp leads to people trying different ways of doing things, and possibly losing interoperability on the way. Both Atom and Topic Maps, in contrast, add a little more general-purpose structure, which makes quite a lot of intuitive sense in both cases, and they have been used widely, apparently with little troublesome divergence.

Are we therefore, in InLOC, trying to feel our way towards a general-purpose way of representing substantial hierarchical structures of independently existing units, in a way that makes more intuitive sense than elementary approaches to representing hierarchies? General taxonomies simply try to represent the relationships between concepts, whereas in InLOC we are dealing with a field where, for many years, people have recognised that the structure is an important entity in its own right — so much so that it has seemed hard to treat the components of existing structures (or “frameworks”) as independent and reusable.

So, see what you think, and please tell me, or one of the team, what you do honestly think. And let’s discuss it. The relevant links are also available straight from the InLOC wiki home page. And if you are responsible for creating or maintaining structures of intended learning outcomes, skills, competences, competencies, etc., then you are more than welcome to try out our new approach, that we hope combines ease of understanding with the power to express just what you want to express in your “framework”, and that you will be persuaded to use it “for real”, perhaps when we have made the improvements that you need.

We envisage a future when many ICT tools can use the same structures of learning outcomes and competences, saving effort, opening up interoperability, and greatly increasing the possibilities for services to build on top of each other. But you probably don’t need reminding of the value of those goals. We’re just trying to help along the way.

Reviewing the future for Leap2

JISC commissioned a Leap2A review report (PDF), carried out early in 2012, that has now been published. It is available along with other relevant materials from the e-Portfolio interoperability JISC page. For anyone following the fortunes of Leap2A, it is highly worthwhile reading. Naturally, not all possible questions were answered (or asked), and I’d like to take up some of these, with implications for the future direction of Leap2 more generally.

The summary recommendations were as follows — these are very welcome!

  1. JISC should continue to engage with vendors in HE who have not yet implemented Leap2A.
  2. Engagement should focus on communities of practice that are using or are likely to use e-portfolios, and situations where e-portfolio data transfer is likely to have a strong business case.
  3. JISC should continue to support small-scale tightly focused developments that are likely to show immediate impact.
  4. JISC should consider the production of case studies from PebblePad and Mahara that demonstrate the business case in favour of Leap2A.
  5. JISC should consider the best way of encouraging system vendors to provide seamless import services.
  6. JISC should consider constructing a standardisation roadmap via an appropriate BSI or CEN route.

That tallies reasonably with the outcome of the meeting back in November last year, where we reckoned that Leap2A needs: more adoption; more evidence of utility; to be taken more into the professional world; good governance; more examples; and for the practitioner community to build around it models of lifelong development that will justify its existence.

Working backwards up the list for the Leap2A review report, recommendation 6 is one for the long term. It could perhaps be read in the context of the newly formed CETIS position on the recent Government Open Standards Consultation. There we note:

Established public standards bodies (such as ISO, BSI and CEN), while doing valuable work, have some aspects that would benefit from modernisation to bring them more into line with organisations such as W3C and OASIS.

The point then elaborated is that the community really needs open standards that are freely available as well as royalty-free and unencumbered. The de jure standards bodies normally still charge for copies of their standards, as part of their business model, which we see as outdated. If we can circumvent that issue, then BSI and CEN would become more attractive options.

It is the previous recommendation, number 5 in the list above, that I will focus on more, though. Here is the fuller version of that recommendation (appearing as paragraph 81).

One of the challenges identified in this review is to increase the usability of data exchange with the Leap2A specification, by removing the current necessity for separate export and import. This report RECOMMENDS that JISC considers the best way of encouraging system vendors to provide seamless data exchange services between their products, perhaps based on converging practice in the use of interoperability and discovery technologies (for example future use of RDF). It is recognised that this type of data exchange may require co-ordinated agreement on interoperability approaches across HEIs, FECs and vendors, so that e-portfolio data can be made available through web services, stressing ease of access to the learner community. In an era of increasing quantities of open and linked data, this recommendation seems timely. The current initiatives around courses information — XCRI-CAP, Key Information Sets (KIS) and HEAR — may suggest some suitable technical approaches, even though a large scale and expensive initiative is not recommended in the current financially constrained circumstances.

As an ideal, that makes perfect sense from the point of view of an institution transferring a learner’s portfolio information to another institution. However, seamless transfer is inherently limited by the compatibility (or lack of it) between the information stored in each system. There is also a different scenario, that has always been in people’s minds when working on Leap2A. It is that learners themselves may want to be able to download their own information, to keep for use, at an uncertain time in the future, in various ways that are not necessarily predictable by the institutions that have been hosting their information. In any case, the predominant culture in the e-portfolio community is that all the information should be learner-ownable, if not actually learner-owned. This is reflected in the report’s paragraph 22, dealing with current usage from PebblePad.

The implication of the Leap2A functionality is that data transfer is a process of several steps under the learner’s control, so the learner has to be well-motivated to carry it out. In addition Leap2A is one of several different import/export possibilities, and it may be less well understood than other options. It should perhaps be stressed here that PebblePad supports extensive data transfer methods other than Leap2A, including zip archives, native PebblePad transfers of whole or partial data between accounts, and similarly full or partial export to HTML.

This is followed up in the report’s paragraph 36, part of the “Challenges and Issues” section.

There also appears to be a gap in promoting the usefulness of data transfer specifically to students. For example in the Mahara and PebblePad e-portfolios there is an option to export to a Leap2A zip file or to a website/HTML, without any explanation of what Leap2A is or why it might be valuable to export to that format. With a recognisable HTML format as the other option, it is reasonable to assume that students will pick the format that they understand. Similarly it was suggested that students are most likely to export into the default format, which in more than one case is not the Leap2A specification.

The obvious way to create a simpler interface for learners is to have just one format for export. What could that format be? It should be noted first that separate files that are attached to or included with a portfolio will always remain separate. The issue is the format of the core data, which in normal Leap2A exports is represented by a file named “leap2a.xml”.

  1. It could be plain HTML, but then the case for Leap2A would be lost, as there is no easy way for plain HTML to be imported into another portfolio system without a complex and time-consuming process of choosing where each single piece of information should be put in the new system.
  2. It could be Leap2A as it is, but the question then would be, would this satisfy users’ needs? Users’ own requirements for the use of exports are not spelled out in the report, and do not appear to have been systematically investigated anywhere, but it would be reasonable to expect that one use case would be that users want to display the information so that it can be cut and pasted elsewhere. Leap2A supports the display of media files within text, and formatting of text, only through the inclusion of XHTML within the content of entries, in just the same way as Atom does. It is not unreasonable to conclude that limiting exports to plain Leap2A would not fully serve user export needs, and therefore it is and will continue to be unreasonable to expect portfolio systems to limit users to Leap2A export only.
  3. If there were a format that fully met the requirements both for ease of viewing and cut-and-paste, and for relatively easy and straightforward importing to another portfolio system (comparable to Leap2A currently), it might then be reasonable to expect portfolio systems to have this as their only export format. Then, users would not have to choose, would not be confused, and the files which they could view easily and fully through a browser on their own computer system would also be able to be imported to another portfolio system to save the same time and effort that is currently saved through the use of Leap2A.

So, on to the question, what could that format be? What follows explains just what the options are for this, and how it would work.

The idea for microformats apparently originated in 2000. The first sentence of the Wikipedia article summarises nicely:

A microformat (sometimes abbreviated µF) is a web-based approach to semantic markup which seeks to re-use existing HTML/XHTML tags to convey metadata and other attributes in web pages and other contexts that support (X)HTML, such as RSS. This approach allows software to process information intended for end-users (such as contact information, geographic coordinates, calendar events, and the like) automatically.

In 2004, a more sophisticated approach to similar ends was proposed in RDFa. Wikipedia has “RDFa (or Resource Description Framework in attributes) is a W3C Recommendation that adds a set of attribute-level extensions to XHTML for embedding rich metadata within Web documents.”

In 2009 the WHATWG were developing Microdata towards its current form. The Microformats community sees Microdata as having grown out of Microformats ideas. Wikipedia writes “Microdata is a WHATWG HTML specification used to nest semantics within existing content on web pages. Search engines, web crawlers, and browsers can extract and process Microdata from a web page and use it to provide a richer browsing experience for users.”

Wikipedia quotes the Schema.org originators (launched on 2 June 2011 by Bing, Google and Yahoo!) as stating that it was launched to “create and support a common set of schemas for structured data markup on web pages”. It provides a hierarchical vocabulary, in some cases drawing on Microformats work, that can be used within the RDFa as well as Microdata formats.

Is it possible to represent Leap2A information in this kind of way? Initial exploratory work on Leap2R has suggested that it is indeed possible to identify a set of classes and properties that could be used more or less as they are with RDFa, or could be correlated with the schema.org hierarchy for use with Microdata. However, the solution needs detail adding and working through.

In principle, using RDFa or Microdata, any portfolio information could be output as HTML, with the extra information currently represented by Leap2A added into the HTML attributes, where it is not directly displayed, and so does not interfere with human reading of the HTML. Thus, this kind of representation could fully serve all the purposes currently served by HTML export of Leap2A. It seems highly likely that practical ways of doing this can be devised that convey the complete structure currently given by Leap2A. The requirements currently satisfied by Leap2A would be satisfied by this new format, which might perhaps be called “Leap2H5”, for Leap2 information in HTML5, or maybe alternatively “Leap2XR”, for Leap2 information in XHTML+RDFa (in place of Leap2A, meaning Leap2 information in Atom).
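
To give a flavour of how that might look, here is a minimal sketch of some portfolio-style information carried in HTML5 Microdata using the schema.org vocabulary — the choice of types and properties is illustrative only, not a worked-out Leap2 mapping:

    <div itemscope itemtype="http://schema.org/Person">
      <h1 itemprop="name">Alice Example</h1>
      <p>Contact: <span itemprop="email">alice@example.org</span></p>
      <p itemprop="description">
        A presentation of my learning, with evidence of my skills.
      </p>
    </div>

A browser displays this as ordinary HTML; the itemscope, itemtype and itemprop attributes are invisible to the human reader, but a consuming system can extract from them the same kind of structured information that Leap2A currently carries in Atom.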

Thus, in principle it appears perfectly possible to have a single format that simultaneously does the job both of HTML and Leap2A, and so could serve as a plausible principal export and import format, removing that key obstacle identified in paragraph 36 of the Leap2A review report. The practical details may be worked out in due course.

There is another clear motivation in using schema.org metadata to mark up portfolio information. If a web page uses schema.org semantics, whether publicly displayed on a portfolio system or on a user’s own site, Google and others state that the major search engines will create rich snippets to appear under the search result, explaining the content of the page. This means, potentially, that portfolio presentations would be more easily recognised by, for instance, employers looking for potential employees. In time, it might also mean that the search process itself was made more accurate. If portfolio systems were to adopt export and import using schema.org in HTML, it could also be used for all display of portfolio information through their systems. This would open the way to effective export of small amounts of portfolio information simply by saving a web page displayed through normal e-portfolio system operation; and could also serve as an even more effective and straightforward method for transferring small amounts of portfolio information between systems.

Having recently floated this idea of agreeing Leap2 semantics in schema.org with European collaborators, it looks likely to gain substantial support. This opens up yet another very promising possibility: existing European portfolio-related formats could be harmonised through this new format, which is not biased towards any of the existing ones — as well as Leap2A, there is the Dutch NTA 2035 (derived from IMS ePortfolio), and also the Europass CV format. (There is more about this strand of unfunded work through MELOI.) All of these are currently expressed using XML, but none have yet grasped the potential of schema.org in HTML through Microdata or RDFa. To restate the main point: this means having the semantics of portfolio information embedded in machine-processable ways, without interfering with the human-readable HTML.

I don’t want to be over-optimistic, as currently money tends only to go towards initiatives with a clear business case, but I am hopeful that in the medium term, people will recognise that this is an exciting and powerful potential development. When any development of Leap2 gets funded, I’m suggesting that this is what to go for, and if anyone has spare resource to work on Leap2 in the meanwhile, this is what I recommend.

Where are the customers?

All of us in the learning technology standards community share the challenge of knowing who our real customers are. Discussion at the January CEN Workshop on Learning Technologies (WS-LT) was a great stimulus for my further reflection — should we be thinking more of national governments?

Let’s review the usual stakeholder suspects: education and training providers; content providers; software developers; learners; the European Commission. I’ll gesture (superficially) towards arguing that each of these may indeed be a stakeholder, but the direction of the argument is that there is a large empty space in our clientele and attendance where those who are directly interested and can pay ought to be.

Let’s start with the providers of education and training. They certainly do have an interest in standards, otherwise why would JISC be supporting CETIS? But rarely do they implement standards directly. They are interested, so our common reasoning goes, in having standards-compliant software, so that they can choose between software products and migrate when desired, avoiding lock-in. But do they really care about what those standards are? Do they, specifically, care enough to contribute to their development and to the bodies and meetings that take that development forward?

In the UK, as we know, JISC acts as an agent on behalf of UK HEIs and others. This means that, in the absence of direct interest from HEIs, it is JISC that ends up calling the shots. (Nothing inherently wrong with that – there are many intelligent, sensible people working for JISC.) Many of us play a part in the collective processes by which JISC arrives at decisions about what it will fund. We are left hoping that JISC’s customers appreciate this, but it is less than entirely clear how much they appreciate the standardisation aspect.

I’ll be even more cursory about content providers, as I know little about that field. My guess is that many larger providers would welcome the chance of excluding their competitors, and that they participate in standardisation only because they can’t get away with doing otherwise. Large businesses are too often amoral beasts.

How about the software vendors, then? We don’t have to look far for evidence that large purveyors of proprietary software may be hostile in spirit to the standardisation of what their products do, and that they are kept in line, if at all, only by pressure from those who purchase the software. In contrast, open source developers and smaller businesses typically welcome standards, which allow work to be reused as widely as possible.

In my own field of skills and competence, there are several players interested in managing the relevant information, including (in the UK) Sector Skills Councils, and bodies that set curricula for qualifications. But they will naturally need some help to structure their skill and competence information, and for that they will need tools, whether developed in-house or bought in. It is those tools that are in line to be standards compliant.

And what of the learners themselves? Seems to me “they” (including “we” users) really appreciate standards, particularly if it means that our information can be moved easily between different systems. But, as users, few of us have influence. Outside the open source community, which is truly wonderful, I can’t easily recall any standards initiative funded by ordinary users. Rather, the influence we and other users have is often doubly indirect: filtered through those who pay for the tools we use, and through those who develop and sell those tools.

The European Commission, then? Maybe. We do have the ICT Standardisation Work Programme (ICTSWP), sponsored by DG Enterprise and Industry. I’m grateful that they are sponsoring the work I am doing for InLOC, but isn’t the general situation a bit like JISC’s? It all comes down to which priorities happen to be on the agenda (of the EC this time), and the EC is rather less open to influence than JISC. Whether an official turns up to a CEN Workshop seems to depend on the priorities of that official: André Richier (the official named in “our” bit of the ICTSWP) often turns up to the Workshop on ICT Skills, but rarely to ours. In any case, the EC is not the ultimate customer.

What are the actual interests of the EC? Mobility, evidently: there has been so much European funding over the years with the term “mobility” attached. Indeed, the InLOC work is seen as part of the WS-LT’s work on European Learner Mobility. Beyond mobility, the EC must have some general interest in the wellbeing of the European economy as a whole, but that is difficult ground, where the interests of different nations diverge. More of this later.

In the end, many people don’t turn up, for all these reasons. They don’t turn up at the WS-LT; they don’t turn out in any real strength for the related BSI committee, IST/43; few of the kinds of customer I’m thinking about even turn up at ISO SC36.

Who does turn up, then? They are great people. They are genuinely enthusiastic about standardisation, and have many bright ideas. They are mostly in academia, small (often one-person) consultancies, projects, networks or consortia. They like European, national, or indeed any funding for developing their often genuinely good ideas. Aren’t so many of us like that? But there were not even many of us at this WS-LT meeting in Berlin. And maybe that is how it goes: when starved of the direct stimulus of the people we are doing this for, we risk losing our way, and the focus, enthusiasm and energy dwindle, even within our idealistic camp.

Before I leave our esteemed attendees, however, I would like to point out the most promising bodies that were represented at the WS-LT meeting: KION from Italy and the University of Oslo’s USIT, both members of RS3G, the Rome Student Systems and Standards Group, an association of software providers. They are very welcome and appropriate partners with the WS-LT.

Which brings me back to the question, where are the other (real) customers? We could ask the same thing of IST/43, and of ISO SC36. Which directly interested parties might pay? Perhaps a good place to start the analysis is to divide the candidates roughly between private and public sectors.

My guess here is that private-sector-led standardisation works best in the classic kinds of situation. What would be the point of a manufacturer developing their own range of electrical plugs and sockets? Even with telephones, there are huge advantages in a system where everyone can dial everyone else, and indeed where all handsets work everywhere (well, nearly…). But the systems we are working with are not in that situation: vendors have reasons to want to try out their own new, non-standard features. And much of what we do leads, rather than follows, implementation. That ground sometimes seems a bit shaky.

Private sector interest in skills and competence is focused in the general areas of personnel, recruitment, HR, and training. Perhaps, for many businesses, the issues are not seen as complex enough to merit the involvement of standards.

So what are the real benefits that we see from learning technology standardisation, and put across to our customers? Surely these include better, more effective as well as more efficient education; in the area of skills and competence, easier transition between education and work; and tools to help with professional and vocational development. These relate to classic areas of direct government interest, because all governments want a highly skilled, competent, professional workforce, able to “compete” in the global(ised) economy, and to upskill itself as needed. The foundations of these goals are laid in traditional education, but they go a long way beyond the responsibilities of schools, HEIs, and traditional government departments of education. Confirmation of this blurring of boundaries comes from recalling that the EC’s ICTSWP is sponsored not by DG Education and Culture, but by DG Enterprise and Industry.

My conclusion? Government departments need our help in seeing the relevance of learning technology standardisation, across traditional departmental boundaries. This is not a new message. What I am adding to it is that national government departments and their agencies are our stakeholders, indeed our customers, and that we need to encourage them to come along to the WS-LT. We need to persuade them that different countries do share an interest in learning technology standardisation. This would best happen alongside their better involvement in national standards bodies, which is another story, and another hill to climb…

Standardization process – ISO SC36

In mid March I spent several days at the ISO SC36 meeting in Strasbourg, an experience which was … how can I say this … frustrating. Let me give some background, explain my frustration, then offer ideas about causes and possible remedies.

ISO/IEC JTC1 SC36 (official ISO page; own web site) is the International Organization for Standardization’s committee on ITLET: information technologies for learning, education and training. It currently meets twice a year at points around the globe, and as it was meeting relatively close by, and was dealing with an item of considerable importance to my work (the e-portfolio reference model), I chose to attend: not, however, for the full week and more, but just for the WG3 meetings, which spanned four days. These meetings are attended by representatives of national standards bodies; in our case, BSI.

What happens at these meetings is governed by procedural rules which participants explain are necessary, and to which they are resigned, rather than interpreting them creatively. The problem is not so much the process itself, which would not be too bad if everything ran according to plan, but the inflexible way in which it is applied. Even when it is clear that a draft presented to the committee needs substantial revision, participants have to wade through however many comments the national bodies have provided. As each national body comments independently, many of the comments conflict. Comments can be general, technical, or editorial, and the committee has to find a resolution for each one, fitting into one of the few acceptable formulae. This might be fine if everything were going the way the procedure writers imagined, but when it isn’t, it can be excruciating. The only way I found to make it tolerable was to multi-task, doing other work alongside the committee proceedings when the matter in hand was of no great importance (though this does depend on having a good Internet connection available).

The result is great inconvenience, and either an inefficient process, where people give only part of their attention, or a frustrating waste of time, if you try to give the whole of it. It’s not that these processes don’t need to be gone through; they do; but better ways must be possible, and surely need to be implemented. For instance, what if the editors received feedback from national bodies and produced an integrated redraft, dealing as well as they could with all the comments? No physical meeting is needed for that. This inability to adapt the process to the actual situation seems to feed the whole procedural problem.

The consequence of this inefficiency could be that busy people are less likely to engage with the process than they might otherwise have been. Or perhaps the people who are really influential in large organisations will just stay at home, and send some minion to do the negotiation for them. That would compromise the nature of any agreement that can be reached. Many of the people who do appear at such meetings are either academics, with their own research agenda, or independent or semi-independent professionals, who use the opportunity to network; in the worst cases (happily not evident at all among the people I met) such people can even use the processes to advance their own business interests at the expense of others. This is hardly the ideal recipe for an effective, meaningful, significant standards-setting body.

Sometimes, indeed, committee output is “just a technical report”, with no official status as a standard. In CEN, the process is usefully differentiated: in our area there is an informal Workshop (the WS-LT) to discuss things and come to agreements, and a formal Technical Committee (TC 353), made up of national body representatives, to take decisions on standardisation. In ISO, there is no similar division of roles.

It seems to me that, particularly in the current economic climate, the SC36 mode of operation may not be sustainable, as people come to apply some kind of cost-benefit analysis. If we want such standardisation bodies to continue in the future, I’d say we need to review the processes deeply, coming to a new understanding of what structures and practices are helpful towards what ends. (We can then ask those who care about those ends to finance the process.)

Of course the following speculation is not of itself going to make changes happen. However, it is just possible that conversations about the issues may help towards a consensus about how to move things forward. (Co-incidentally, see my private blog about the piece by Theodore Zeldin on the Pont de l’Europe in Strasbourg.)

First, I would move most of the ISO process away from face-to-face. We do need to get to know others personally, but this could be done more effectively through something more like an annual conference, with the majority of time devoted to networking, and some presentations of the live issues that most need discussion, in preparation for the consensus process.

Second, I would adapt the processes so that they are better tuned to producing durable consensus. How to do this is too large a topic to address here.

Third, I would put in several checks at different stages of the work to confirm that whatever was being discussed was genuinely of importance to significant stakeholders. When a work item failed such a test, it would be dismissed. This would probably reduce the workload very significantly.

Fourth, I would try (though I don’t know how) to ensure that all participants

  • properly understand consensus process
  • are committed to acting transparently
  • come to the proceedings with good will

Even if bodies like ISO don’t get round to it, it would be good for those who care to formalise some set of principles such as the ones I am suggesting above, resulting in what could be seen as agreed standards for standards bodies. If we had a list of criteria by which to judge standards bodies and standardisation process, we could agree to support and attend only bodies that conformed. This would apply not only to official “de jure” standardisation bodies, but also to the many other bodies (including all those we know in CETIS) that prepare and publish interoperability specifications.

If anyone knows of any such existing guidelines, I’d be grateful to learn of them.