JISC and MIT: comparing notes on ed tech

A few weeks ago I had an opportunity to join a conversation between JISC and MIT OEIT (http://oeit.mit.edu/) to exchange information about current initiatives and possible collaborations. The general themes of the conversation were openness and sustainability. There was an agreed sense that, currently, “Open is the new educational tech” (Vijay). The strategic interests, competencies, and knowledge of open institutes are now central to much educational development. JISC’s work in many diverse areas has contributed to the growth of openness, both through the successive programmes of work connected to repositories (including the cultivation of developer happiness) and more recently through the JISC and HEA OER programme.

Vijay outlined some of MIT OEIT’s thinking around innovation and sustainability, where they fit in the innovation cycle, and the limiting dependencies of innovation. In a four-stage innovation cycle, MIT OEIT are mostly involved in the initial incubation/development phase and the early implementation phase. They’re not in the business of running services, but they need to ensure that their tools are designed and developed in ways which are congruous with sustainability. One key point in their analysis is that the limiting factor for innovation is not your organisational growth (whether the size of the project, design team, or facilities) but the growth of nascent surrounding communities in other parts of the value chain.

As a result, MIT have found that sustaining and embedding innovation isn’t just about more resources; it’s about basic design choices and community development. Openness and open working allow the seeding of the wider community from the outset and allow a project to develop competencies and design skills in that community. This resonates with some of the observations made by OSS Watch and Paul Walk. We then discussed the success of the Wookie widget work carried out by Scott Wilson (CETIS) and how it has successfully developed from a JISC project into an Apache Foundation incubator project (http://incubator.apache.org/wookie/).

The conversation continued around the technology choices being made in the UKOER programme, noting the strength in the diversity of approaches and tools in use in the programme and the findings that appear to be emerging: there is no dominant software platform, and choices about support for standards are being driven, in part, by the software platforms rather than by a commitment to any standard. [I’ll blog more on this in January as the technical conversations with projects are completed.] We also noted upcoming work around RSS and deposit tools, both following on from the JISCRI deposit tools event and emerging from the UKOER programme [see Jorum’s discussion paper on RSS for ingest: http://blogs.cetis.ac.uk/lmc/2009/12/09/oer-rss-and-jorumopen/].
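To make the RSS-for-ingest idea concrete, here is a minimal sketch of harvesting basic metadata from an OER feed with Python’s feedparser library. The feed URL and the choice of fields are illustrative assumptions on my part, not a description of any particular project’s feed:

    # Minimal sketch: pulling basic OER metadata from an RSS/Atom feed.
    # Assumes feedparser is installed (pip install feedparser); the feed
    # URL below is a placeholder, not a real UKOER or Jorum feed.
    import feedparser

    FEED_URL = "http://example.org/oer/feed.rss"  # hypothetical feed

    def harvest(feed_url):
        """Return simple records: title, link, and any attached files."""
        feed = feedparser.parse(feed_url)
        records = []
        for entry in feed.entries:
            records.append({
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                # Enclosures are one common way a feed points at the files
                # themselves rather than at a landing page.
                "files": [enc.get("href") for enc in entry.get("enclosures", [])],
            })
        return records

    if __name__ == "__main__":
        for record in harvest(FEED_URL):
            print(record["title"], "->", record["link"])

A real ingest service would add de-duplication and error handling, but even this much shows part of RSS’s appeal: one widely deployed format covers notification and basic description in a single pass.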

Brandon then highlighted the SpokenMedia project (http://spokenmedia.mit.edu/), which is creating tools to automatically transcribe video of lectures, both to enable better search and to make materials more accessible and scannable. The tools achieve up to 60% base accuracy and are trainable up to 80% accuracy. MIT hope this will make lecture video significantly more browseable and are exploring the release of an API for this as an educational service.
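Nothing public exists yet for that API, so the following is purely a sketch of the underlying idea rather than any actual SpokenMedia interface: once a lecture has a timed transcript, even an imperfect one, a player can jump straight to the segments that match a search term. The segment structure here is invented for illustration:

    # Sketch: why timed transcripts make lecture video browseable.
    # The (start_time_seconds, text) segment format is invented for this
    # example and does not reflect any actual SpokenMedia data structure.
    transcript = [
        (0.0, "today we will look at eigenvalues"),
        (42.5, "recall the definition of a determinant"),
        (95.0, "an eigenvalue satisfies the characteristic equation"),
    ]

    def find_segments(transcript, query):
        """Return (start_time, text) for segments mentioning the query."""
        q = query.lower()
        return [(t, text) for (t, text) in transcript if q in text.lower()]

    for start, text in find_segments(transcript, "eigenvalue"):
        # A video player could seek directly to `start` seconds.
        print(f"{start:7.1f}s  {text}")

Even at 60-80% word accuracy this kind of lookup is useful, because a viewer only needs the hit to land near the right point in the video.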

We then discussed some projects working in areas that support bringing research data into the curriculum. MIT have a series of projects in this area under the general name of STAR (http://web.mit.edu/star/), which provide suites of tools to use research data in the classroom. One successful implementation of this is STARBioGene, which allows biology students to use research tools and materials as part of the core curriculum. Some of the STAR tools are desktop applications and some are cloud-based; many have been made open source.
The wider uptake of the project has contributed to the development of communities outside MIT who are using these tools; as such it’s also an example of growing the wider uptake community outlined in their innovation cycle. One consideration this has raised about communities of use is that some of the visualisation tools require high-performance computing (even if only needed in small bursts). The trend toward computationally intensive science education may create other questions of access beyond the license.

Another interesting tool that we discussed was the Folk Semantic Tool from COSL at Utah State University (http://www.folksemantic.com/): on the one hand it’s another RSS aggregator for OERs; on the other, for users running Firefox and Greasemonkey, it’s a plugin that adds recommendations for OERs to any webpage (and runs off a single line of JavaScript).

MIT: M.S. Vijay Kumar & Brandon Muramatsu. JISC: David Flanders, John Robertson (CETIS).

Managing OERs: the problem of version control?

Proposal: those releasing OERs should not invest undue effort in attempting to maintain version control over copies of their material other than those they directly manage.

This post looks at one possible administrative or management challenge emerging from the technical side of working with Open Educational Resources. My response to this concern is (more than usual) opinion rather than advice, and it hopes to provoke some debate.

There are plenty of reasons why version control of files is critical, ranging from managing which version of a document you can safely delete, to making sure you’re reading the right document or installing the most up-to-date patch. Good version control is a key part of content production, file management, and dissemination. Any repository, content management system, or other tool needs to clearly distinguish between current and older versions. Older versions may or may not be maintained (whether publicly or privately); in itself this creates a question of what those releasing resources should link to. At its simplest, version information about research papers is important to distinguish between pre-print and post-print. However, when a paper is published that usually represents its final version (and not many repositories are [currently] likely to make public multiple versions of an article).
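As a sketch of that distinction (the field names are mine, not those of any repository platform), a version-aware record only needs a little structure to make “what should we link to?” answerable:

    # Sketch of a version-aware resource record; the field names are
    # illustrative and do not correspond to any particular repository.
    from dataclasses import dataclass, field

    @dataclass
    class ResourceVersion:
        label: str            # e.g. "pre-print", "post-print", "2009 edition"
        uri: str              # where this particular copy lives
        public: bool = True   # older versions may be kept privately

    @dataclass
    class ResourceRecord:
        title: str
        versions: list = field(default_factory=list)  # oldest first

        def current(self):
            """The version a service should normally link to: the newest public one."""
            public = [v for v in self.versions if v.public]
            return public[-1] if public else None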

Educational resources, on the other hand, are usually considered less finished: even once they are used for teaching, year by year [in theory] they regularly evolve to reflect feedback, changes in course content, and developments in teaching style. These iterative versions may often blur into each other, as in the lecturer’s mind they are the notes for topic ‘x’ rather than discrete intellectual works. Unless a course or class is completely restructured, these assets are likely to be considered one entity. [There is a case to be made that these materials are perhaps in need of more rigorous versioning.] For academics who’ve engaged with the idea of Open Education, or who simply appreciate the visibility it offers, this may create a desire to update the materials they’ve released and replace them with new versions. Indeed, if they discover an error in their materials, or their thinking shifts, they may be insistent on trying to manage the available copies of their work.

Local repositories or services managing OERs will doubtless develop their own policies and practices to support or address this concern, and it makes sense to keep the available learning resources updated. Those policies and practices will likely diverge over whether older versions of materials are kept and/or made available to the public. This process gets more interesting, though, when we consider what interaction projects have with other services which hold copies of (rather than just link to) their resources: to what extent do you try to version secondary copies of your resources?

I think there are several factors that shape how OER producers should respond to this question:

  1. strictly speaking, you have no legal right to request the removal or update of such resources. Once released under an open license, the content is out of your control as long as the license is conformed to. [Note: I am not a lawyer, but part of the entire point of most open licenses is that they are non-transactional and irrevocable.]
  2. most services or individuals that have taken copies of your resources are likely to be very happy to take updated copies as well.
  3. although most notification or harvesting technologies or standards can support (in some form) the deletion, creation, or updating of records [which would potentially support pointing to a new version], they deal with metadata rather than allowing remote file management [AFAIK]; and even if they did support remote file management, few people are going to enable such a feature (see the sketch after this list).
  4. manually distributing updated copies is a possibility, but it is time-intensive and also relies on the policy, procedure, and practice of the third-party service.
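As an illustration of point 3, here is a rough sketch of how a harvester sees deletions and updates over OAI-PMH, one widely used harvesting protocol (the endpoint URL is a placeholder). Note how everything here operates on metadata records: the protocol gives the harvester no way to manage files on the remote service.

    # Sketch: reading deletion/update signals from an OAI-PMH endpoint.
    # The endpoint URL is a placeholder; error handling and resumption
    # tokens are omitted for brevity.
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    ENDPOINT = "http://example.org/oai"  # hypothetical repository endpoint

    url = ENDPOINT + "?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)

    for record in tree.iter(OAI + "record"):
        header = record.find(OAI + "header")
        identifier = header.findtext(OAI + "identifier")
        if header.get("status") == "deleted":
            # We can drop our copy of the *metadata*, but the protocol says
            # nothing about what happens to file copies held elsewhere.
            print("deleted:", identifier)
        else:
            print("current:", identifier, header.findtext(OAI + "datestamp"))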

Considering these factors, I don’t think, under normal circumstances, OER distributors should be concerned about how their materials are versioned once they leave their local service. This does imply that there may always be some degree of confusion, but I’d suggest that on the web there always is (even when concerted efforts are made to reduce it), and that responding to the confusion requires consumers of OERs to exercise the same information literacy skills that they need when interacting with any online resource.

This said, I think there are steps OER producers can take to promote the visibility of their current resources. One such step would be to include a PURL or other appropriate URI which points to the latest version in the resource and its metadata where possible, whether as a cover page, a subtitle at the start of a video, or some other such mechanism. There is a compelling case that resources should include information about where they’ve come from, not only to promote the latest versions but also to note the resource provider; I’ve talked previously about this idea that resources should be self-descriptive. There may be limit cases in which there is a compelling case to try to remove every trace of a resource, but these are unlikely to be common.
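As one possible shape for that self-description (the PURL and provider below are placeholders, not real identifiers), a provider of HTML materials could stamp a pointer to the canonical latest version into each page’s head; for video or documents the same information would go in a subtitle or cover page instead:

    # Sketch: stamping "where the latest version lives" into an HTML head.
    # The PURL and provider name are placeholders.
    LATEST = "http://purl.example.org/oer/topic-x/latest"
    PROVIDER = "Example University"

    def provenance_block(latest_uri, provider):
        """Return <head> metadata linking a copy back to its canonical home."""
        return "\n".join([
            f'<link rel="canonical" href="{latest_uri}"/>',
            f'<meta name="DC.source" content="{latest_uri}"/>',
            f'<meta name="DC.publisher" content="{provider}"/>',
        ])

    print(provenance_block(LATEST, PROVIDER))

Whatever the mechanism, the copy then carries its own pointer home, so a consumer who finds it through a third-party service can still locate the current version and the provider.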