Prof. Zhu’s presentation on e-education in China

Initially, it’s hard to get past the eye-popping numbers (1,876 universities, 17 million students and so on), but once you do, you’ll see that the higher education sector in China is facing remarkably familiar challenges, with some interesting solutions.

We were very fortunate here at IEC that Prof. Zhu Zhiting and colleagues from East China Normal University and the China e-Learning Technology Standardization Committee agreed to visit our department after attending the JISC CETIS conference yesterday. He kindly agreed to let us publish his slides, which are linked below.

The two most noticeable aspects of Prof. Zhu’s presentation are the nature of e-education planning in China, and the breadth of interests in Prof. Zhu’s Distance Education College & e-Educational System Engineering Research Center.

Because the scale of education in China is so vast, any development has to be based on multiple layers of initiatives. The risks involved mean that the national ministry of education needs to plan at a very high, strategic level, setting out parameters for regional and local governments to follow. This is not new per se, but it leads to a thoroughness and predictability in infrastructure that others could learn from.

The department in Shanghai, though, is another matter. Their projects range from international standardisation right down to the development of theories that integrate short-term and long-term individual memory with group memory. Combined with concrete projects such as the roll-out of a lifelong learning platform for the citizens of Shanghai, that leads to some serious synergies.

Learn more from Prof. Zhu’s slides

More about IEC and what it does.

How users can get a grip on technological innovation

Control of a technology may seem like a vendor’s business: Microsoft and Windows, IBM and mainframes. But by understanding how technology moves from prototype to ubiquity, the people who foot the bill can get to play too.

Though each successful technological innovation follows its own route to becoming an established part of the infrastructure, there are a couple of regularities. Simplified somewhat, the process can be conceived as looking like this:
The general technology commodification process

The interventions (open specifications, custom integration and so on) are not mutually exclusive, nor do they necessarily come in a fixed chronological order. Nonetheless, people do often try to establish a specification before the technology is entirely ‘baked’, in order to forestall expensive dead-ends (the GSM mobile phone standard, for example). Likewise, a fully fledged open source alternative can take some time to emerge (e.g. Mozilla/Firefox in the web browser space).

The interventions themselves can be made by any stakeholder, be they vendors, buyers, sector representatives such as JISC or anyone else. Who benefits from which intervention is a highly complex and contextualised issue, however. Take, for example, the case of VLEs:

The technology commodification process as applied to VLEs

When VLEs were heading from innovation projects to the mainstream, stakeholders of all kinds got together to agree open standards in the IMS consortium. No one controlled the whole space, and no one wanted to develop an expensive system that didn’t work with third-party tools. Predictably, implementations of the agreed standards varied and didn’t interoperate straight off, which created a niche for tools such as Reload and Course Genie that allowed people to do some custom integration: glossing over the peculiarities of different content standard implementations. Good for them, but not optimal for buyers or the big VLE vendors.

In the meantime, some VLE vendors smelled an opportunity to control the technology and get rich off the (licence) rent. Plenty of individual buyers as well as buyer consortia took a conscious and informed decision to go along with such a near monopoly. Predictability and stability (i.e. fast commodification) were weighed against the danger of vendor lock-in in the future, and stability won.

When the inevitable vendor lock-in started to bite, another intervention became interesting for smaller tool vendors and individual buyers: reverse engineering. This intervention is clearer in other technologies such as Windows file and print server protocols (the Samba project) and the PC hardware platform (the early ‘IBM clones’). Nonetheless, a tool vendor such as HarvestRoad made a business of freeing content that was caught in proprietary formats in closed VLEs. Good for them, not optimal for the big platform vendors.

Lastly, when the control of a technology by a platform becomes too oppressive, the obvious answer these days is to construct an open source competitor. Like Linux in operating systems, Firefox in browsers or Apache in web servers, Moodle (and Sakai, ATutor, .LRN and a host of others) has achieved a more even balance of interests in the VLE market.

The same broad pattern can be seen in many other technologies, but often with very particular differences. In blogging software, for example, open source packages have been pretty dominant from the start, and one package even went effectively proprietary later on (Movable Type).

Likewise, the shifting interests of stakeholders can be very interesting in technologies that have not been fully commodified yet. Yahoo, for example, was not an especially strong proponent of open specifications and APIs in the web search domain until it found itself the underdog to Google’s dominance. Google clearly doesn’t feel the need to espouse open specifications there, since it owns the search space.

But it doesn’t own mobile phone operating systems or social networking, so it is busy throwing its weight behind open specification initiatives there, just as it is busy reverse engineering some of Microsoft’s technologies in domains that have already commodified, such as office file formats.

From the perspective of technology users and sector representatives, it pays to consider each technology in its particular context before choosing the means to commodify it as advantageously, but above all as quickly, as possible. In the end, why spend time fighting over technology that is predictable, boring and ubiquitous when you can build brand new cool stuff on top of it?

Recycling webcontent with DITA

Lots of places, and even people, have a pile of potentially useful content sitting in a retired CMS or VLE. Or have content that needs to work on a website as much as in a PDF or a booklet. Or want to use that great open stuff from the OU, but with a tweak to that paragraph and in the college’s colours, please.

The problem is as old as the hills, of course, and the traditional answer in e-learning land has been to use one of the flavours of IMS Content Packaging. Which works well enough, but only at a level above the actual content itself. That is, it’ll happily zip up webcontent, provide a navigation structure for it and allow the content to be exchanged between one VLE and another. But it won’t say anything about what the webcontent itself looks like. Nor does packaging really help with systems that were never designed to be compliant with IMS Content Packaging (or METS, or MPEG-21 DID, or IETF Atom, etc., etc.).
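To make that ‘level above the content’ point concrete, here is a rough Python sketch of what an IMS content package boils down to: a zip file with an imsmanifest.xml at its root that describes a navigation structure and points at the web content, which itself travels along untouched. It is heavily simplified (a real manifest carries namespaces, schema references and metadata), and the file names are invented for the example.

```python
# Minimal sketch of an IMS-style content package: an imsmanifest.xml at the
# root of a zip, pointing at HTML files that are copied in verbatim.
# Simplified and illustrative only; file names are hypothetical.
import zipfile
import xml.etree.ElementTree as ET

def build_package(html_files, out_path="package.zip"):
    manifest = ET.Element("manifest", identifier="MANIFEST-1")
    organizations = ET.SubElement(manifest, "organizations")
    org = ET.SubElement(organizations, "organization", identifier="ORG-1")
    resources = ET.SubElement(manifest, "resources")

    for i, name in enumerate(html_files, start=1):
        # Navigation structure: one item per page...
        item = ET.SubElement(org, "item", identifier=f"ITEM-{i}",
                             identifierref=f"RES-{i}")
        ET.SubElement(item, "title").text = name
        # ...and a resource entry saying where the file lives.
        res = ET.SubElement(resources, "resource", identifier=f"RES-{i}",
                            type="webcontent", href=name)
        ET.SubElement(res, "file", href=name)

    with zipfile.ZipFile(out_path, "w") as zf:
        zf.writestr("imsmanifest.xml",
                    ET.tostring(manifest, encoding="unicode"))
        for name in html_files:
            zf.write(name)  # the web content itself goes in untouched
    return out_path

# build_package(["intro.html", "lesson1.html"])
```

Note how nothing in the manifest says anything about what is inside intro.html; the package only describes how the pieces hang together.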

In other sectors, and among some learning content vendors, another answer has been the use of single source authoring. The big idea behind that one is to separate content from presentation: if every system knows what all the parts of a document mean, then the form can be varied at will. Compare the use of styles in MS Word: if you religiously mark everything as one of three heading levels or one type of text, changing the appearance of even a book-length document is a matter of seconds. In single source content systems that can be scaled up to cover not just appearance, but complete information types such as brochures, online help, e-learning courses and so on.
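As a toy illustration of the idea (not any particular product’s approach), the snippet below marks content up by meaning only and treats each delivery format as just another renderer over the same structure; the element names are invented for the example.

```python
# Single sourcing in miniature: the content says what things *are* (title,
# para), and each output format is just another renderer over that structure.
# The element names and renderers are invented for illustration.
document = [
    ("title", "Feeding your VLE"),
    ("para",  "Export the content as structured XML first."),
    ("para",  "Then pick a renderer per delivery channel."),
]

def to_html(doc):
    tags = {"title": "h1", "para": "p"}
    return "\n".join(f"<{tags[kind]}>{text}</{tags[kind]}>" for kind, text in doc)

def to_text(doc):
    lines = [text.upper() if kind == "title" else text for kind, text in doc]
    return "\n\n".join(lines)

print(to_html(document))   # web page flavour
print(to_text(document))   # booklet / print flavour
```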

The problem with the approach is that you need to agree on the meaning of the parts. Beyond a simple core of a handful of elements such as ‘paragraph’ and ‘title’, that quickly leads to heaps of elements with no obvious relevance to what you want to do, while still lacking the two or three elements that you really need. What people consider meaningful content parts simply differs per purpose and community. Hence the fact that a single source mark-up language such as that of the Text Encoding Initiative (TEI) currently has 125 classes with 487 elements.

The spec

The Darwin Information Typing Architecture (DITA) specification comes out of the same tradition and has a similar application area, but with a twist: it uses specialisation. That means it starts with a very simple core element set, and stops there. If you need any more elements, you can define your own specialisations of existing elements. So if the ‘task’ that you associate with a ‘topic’ is of a particular kind, you can define that particularity relative to the existing ‘task’ and incorporate it into your content.

Normally, just adding an element of your own devising is only useful for your own applications. Anyone else’s applications will at best ignore such an element, or, more likely, reject your document. Not so in DITA land. Even if my application has never heard of your specialised ‘task’, it at least knows about the more general ‘task’, and will happily treat your ‘task’ in those more general terms.
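The trick that makes this work is that every DITA element carries its ancestry with it (in its class attribute), so a processor can fall back from a specialised name it has never seen to the most specific ancestor it does recognise. The Python sketch below mimics that lookup; the handlers and the ‘lab/labstep’ specialisation are invented for the example.

```python
# Sketch of DITA-style generalisation: an element's class attribute lists its
# ancestry from general to specific (e.g. "- topic/p task/cmd lab/labstep "),
# so an unknown specialisation can still be handled via a known ancestor.
# The handler table and the labstep specialisation are hypothetical.
KNOWN_HANDLERS = {
    "topic/title": lambda text: f"<h1>{text}</h1>",
    "topic/p":     lambda text: f"<p>{text}</p>",
    "task/cmd":    lambda text: f"<p class='cmd'>{text}</p>",
}

def render(element_class, text):
    ancestry = element_class.strip("- ").split()
    for token in reversed(ancestry):          # try the most specific first
        if token in KNOWN_HANDLERS:
            return KNOWN_HANDLERS[token](text)
    raise ValueError(f"no handler for {element_class}")

# A specialised 'labstep' this processor has never seen still renders,
# because its class attribute declares task/cmd and topic/p as ancestors:
print(render("- topic/p task/cmd lab/labstep ", "Mix the reagents."))
```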

Though DITA is an open OASIS specification, it was developed at IBM as a solution for their pretty vast software documentation needs. They’ve also contributed a useful open source toolkit for processing content into and out of DITA (on SourceForge), with comprehensive documentation, of course.

That toolkit demonstrates the immediate advantage of specialisation: it saves an awful lot of time, because you can re-use as much code as possible. This works at both the input and output stages. For example, a number of transforms already exist in the toolkit to take DocBook, HTML or other input and turn it into DITA. Tweaking those to accept the HTML from any random content management system is not very difficult, and once that’s done, all the myriad existing output formats immediately become available. What’s more, any future output format (e.g. for a new wiki or VLE) will be immediately usable once someone, somewhere makes a DITA-to-new-format transform available.
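As a rough sketch of the kind of input tweak involved (the real toolkit transforms are XSLT, and the element mapping here is deliberately minimal), this Python fragment wraps a page of well-formed XHTML from a hypothetical legacy CMS export as a bare DITA topic, after which existing output transforms could take over.

```python
# Rough sketch only: wrap a (well-formed XHTML) page from a legacy CMS export
# as a minimal DITA topic. The real DITA toolkit does this with XSLT and far
# richer element mapping; the input string here is invented.
import xml.etree.ElementTree as ET

def xhtml_to_topic(xhtml_string, topic_id="recycled-1"):
    page = ET.fromstring(xhtml_string)
    topic = ET.Element("topic", id=topic_id)
    title = ET.SubElement(topic, "title")
    title.text = page.findtext(".//h1", default="Untitled")
    body = ET.SubElement(topic, "body")
    for para in page.iter("p"):
        p = ET.SubElement(body, "p")
        p.text = para.text
    return ET.tostring(topic, encoding="unicode")

legacy = "<html><body><h1>Week 1</h1><p>Read chapter one.</p></body></html>"
print(xhtml_to_topic(legacy))
# <topic id="recycled-1"><title>Week 1</title><body><p>Read chapter one.</p></body></topic>
```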

Moreover, later changes and tweaks to your own element specialisations don’t necessarily require re-engineering all the tools or transforms. Hence the Darwin moniker: you can evolve data models, rather than set them in stone and pray they won’t change.

The catch

All of this means that it quickly becomes more attractive to use DITA than to make a set of custom transforms from scratch. But DITA isn’t magic, and there are some catches. One is simply that some assembly is required. Whatever legacy content you have lying around, some tweakery is needed to get it into DITA, and out again, without losing too much of the original structural meaning.

Also, the spec itself was designed for software documentation. Though several people are taking a long, hard look at specialising it for educational applications (ADL, Edutech Wiki and OASIS itself), that hasn’t been proven yet. Longer, non-screenful types of information have been done, but DITA might not offer enough for those with, say, an existing DocBook workflow.

The technology of the toolkit is of a robust, if pedestrian, variety. All the elements and specialisations are in Document Type Definitions (DTDs), a decidedly retro XML technology, though you can use the hipper XML Schema or RELAX NG as well. The toolkit itself is also rather dependent on extensive path hackery. High-volume, real-time content transformation is therefore probably best done with a new tool set.

Those tool issues are independent of the architecture itself, though. The one tool that would be difficult to remove is XSL Transformations (XSLT), and that dependency is pretty fundamental. Though ‘proper’ semantic web technology might in theory have offered a far more powerful means to manipulate content meaning, the more limited but directly implementable XSLTs give DITA a distinct practical edge.
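To show quite how directly implementable that is, the sketch below applies a minimal, hand-rolled stylesheet to a DITA-ish topic using the third-party lxml package; swap the stylesheet and the same content comes out as something else entirely. Both the stylesheet and the topic are invented for the example.

```python
# Tiny illustration of XSLT as the load-bearing tool: one stylesheet turns a
# DITA-ish topic into HTML. Requires the third-party lxml package; the
# stylesheet and sample topic are hypothetical.
from lxml import etree

xslt = etree.XML("""
<xsl:stylesheet version="1.0"
     xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="topic">
    <html><body><xsl:apply-templates/></body></html>
  </xsl:template>
  <xsl:template match="title"><h1><xsl:apply-templates/></h1></xsl:template>
  <xsl:template match="body"><xsl:apply-templates/></xsl:template>
  <xsl:template match="p"><p><xsl:apply-templates/></p></xsl:template>
</xsl:stylesheet>
""")

topic = etree.XML(
    "<topic id='t1'><title>Week 1</title><body><p>Read chapter one.</p></body></topic>")

transform = etree.XSLT(xslt)
print(str(transform(topic)))  # plain HTML out; swap the stylesheet for wiki, print, ...
```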

Finally, direct content authoring and editing in DITA XML poses the same problem that all structured content systems suffer from. Authors want to use MS Office and couldn’t care less about consistent, meaningful document structuring, while the Word format is a bit of a pig to transform, and it is very difficult to extract a meaningful structure from something that was randomly styled.

Three types of solution exist for this issue. One is to use a dedicated XML editor meant for the non-angle-bracket crowd: something like XMLMind’s editor is pretty impressive, and free to boot, but may only work for dedicated content authors simply because it is not MS Word. Another is to use MS Word with templates, either directly with a plug-in or with some post-processing via OpenOffice (much like ICE does). Those templates make Word behave differently from normal, though, which authors may not appreciate.

Perhaps it is, therefore, best to go with the web-oriented simplicity and transform orientation of DITA, and use a wiki. Wiki formats are so simple that mapping a style to a content structure is pretty safe and robust, and not too alien or complex for most users.
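As a sketch of why that mapping is so safe, the fragment below recovers a content structure from a few generic wiki-ish conventions (not any particular engine’s syntax) without having to guess at an author’s ad hoc styling.

```python
# Why wiki markup maps cleanly onto structure: a few line-level conventions
# are enough to recover titles, paragraphs and list items. The markup rules
# below are a generic wiki-ish convention, not any particular engine's syntax.
def wiki_to_structure(text):
    structure = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("= ") and line.endswith(" ="):
            structure.append(("title", line.strip("= ").strip()))
        elif line.startswith("* "):
            structure.append(("listitem", line[2:]))
        else:
            structure.append(("para", line))
    return structure

sample = """= Week 1 =
Read chapter one before the seminar.
* Bring your notes
* Post a question to the forum"""

print(wiki_to_structure(sample))
# [('title', 'Week 1'), ('para', 'Read chapter one before the seminar.'),
#  ('listitem', 'Bring your notes'), ('listitem', 'Post a question to the forum')]
```

From a structure like that, getting into a DITA topic (and out again to whatever format comes next) is exactly the sort of small, reusable transform the toolkit approach is built around.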