Open, Education

This is a longish summary of a presentation I gave recently, covering why I was talking, the spectrum of openness, the ways of being open, the range of activities involved in education and how open things might apply to those activities. You may want to skim through until something catches your eye :)

Why I did this

When Marieke asked me to give a “general introduction to open education” for the Open Knowledge Foundation / LinkedUp Project Open Education Handbook booksprint I admit I was somewhat nervous, more so when I saw the invite list. I mean, I’ve worked on OERs for a few years, mostly specializing in technologies for managing their dissemination and discovery; I’ve even helped write a book about that (which incidentally was the output of a booksprint, about which I have also written). But that only covers a small part of the OER endeavour, and OERs are only a small element in the Open Education movement; the list of invitees included names of people who knew much more than me.

However, Martin Poulter then asked this on Twitter:

and I thought: why not take inspiration from that approach? I can say stuff, and if it is wrong someone will put me right; it’ll be like learning about things. I like learning things, I like Open Education and I like booksprints. So this is what I said.

I wanted to emphasize that Open Education covers a wide range of activities. It has a long history, which we can see in the name of institutions like the Open University, but has recently taken on new impetus in a new direction, not disconnected with that history, but not entirely the same. Being a bit of a reductionist, the simple way to illustrate the range of Open Education was to reflect on the extent and range of meanings of Open and the range of activities that may be involved in education.

The spectrum of openness

A “map” of IP rights and freedoms, showing how people use and view the different “permissions” (some legal, some illegal). By David Eaves.

A couple of weeks ago this discussion on the spectrum of open passed through Twitter. At one extreme you have “proprietary”, i.e. the commercially licensed use of other people’s resources covered by copyright or patents. Is this open? Not in the sense of Open in OERs, but it is more open than material which is covered by non-disclosure agreements or trade secrets, and “fair use” or “fair dealing” may sometimes offer an exemption from needing a licence. So it makes sense to start the spectrum of openness here. Then you move to more liberal licences, say Creative Commons licences with ND or NC restrictions, through Share Alike to the most liberal attribution-only (CC:BY) and unrestricted (CC0) licences. And then you pass into illegal use which ignores property rights, whether for personal use, for sharing (piracy) or for claims that something is what it isn’t (counterfeiting).

When using, sharing and repurposing resources, teachers tend to work in the part of the spectrum spanning from proprietary through to the ignoring of property rights. It is interesting to reflect that much technical effort has been spent on facilitating the former (think Athens, Shibboleth Access Management Federation, and single sign-on solutions for identification, authentication and authorisation), political effort on legitimising some of the latter (e.g. use of orphan works, exemptions for text mining) and educational effort on avoiding what is not legitimate. One of the benefits of the OER/Open Access approach is in avoiding effort.

The ways of being open

That all focusses on open access to and use of resources, but there are other ways of being open, seen in terms such as “open development”, “open practice”, “open university” and even “open prison”, which all have something to do with who you allow to participate in what. There is much gnashing of teeth when this sense of openness gets confused with openness of access and use; for example, complaints that a standard isn’t open because it costs money, or that an online course isn’t open because the resources used cannot be copied. Yes, you could spend the rest of your life trying to distinguish between “open”, “free” and “libre”, but in real life words don’t align with nice neat categories of meaning like that.

I don’t think participation has to be open to everyone for a process to be described as open. As with openness in access and use, openness in participation can happen to various extents: towards one end of the spectrum, participation in IMS specification development is open to anyone who pays to be a member, and ISO standardization processes are open to any national standardization body; Wikipedia is an obvious example of a more open approach.

This form of openness is really interesting to me because I think that through sharing the development of resources we may see an improvement in their quality. I think that the OER work to date has largely missed this. And incidentally, having a hand in the development of a resource makes someone more likely to use that resource.

Activities involved in education

I think this picture does a reasonable job of showing the range of activities that may be involved in education, and I’ll stress from the outset that they don’t all have to be: some forms of education will involve only one or two of these activities.

The range of activities related to education.


Running down the diagonal you have the core processes of formal education (but note well: this isn’t a waterfall project plan, I’m not saying each one happens when the other is complete): policy at a national through to institutional level on how institutions are run, for example who gets to learn what and how, and who pays for it; administration, dealing with recruitment, admissions, retention, progression, graduation, timetabling, reporting, and so on; teaching, to use an old-fashioned term to include mentoring and all non-instructivist activities around the deliberate nurturing of knowledge; learning, which may be the only necessary activity here; assessment, not just summative, but also formative and diagnostic–remember, this isn’t a waterfall; and accreditation, saying who learnt what. Around these you have academic and business topics that inform or influence these processes: politics, management studies, pedagogy, psychology, philosophy, library functions, and Human Resource functions such as recruitment and staff development.

Open Education

OER interest tends to focus on the teaching, learning, assessment nexus at the middle of this picture, but Open Education should be, and is, wider. Maybe it would be useful to try to map where some of the other open endeavours fit. Open Badges, for example, sit squarely on accreditation. Open Educational Practice sits somewhere around teaching and pedagogy. Open Access to research outputs sits roughly where OER does, but with added implications for pedagogy, psychology, management and philosophy as research fields. Open research in general sits with these research fields but is also a useful way of learning. Open data is a bit tricky since it depends what you do with it, but the LinkedUp Veni challenge submissions showed interesting ideas around library functions such as resource discovery and around policy and administration, and learning analytics comes, loosely, under teaching. Similarly with Open Source Software and Open Standards: they cover pretty much everything on the main diagonal from administration to assessment (including library functions). And MOOCs? Well, the openness is in the admission policy, so I’ve put them there. I suspect there is a missing “open learning” that sits over learning and covers informal education and much of what the original cMOOC pioneers were interested in.

How various open endeavours relate to education to give open education.


ebooks 2013

Every year for the past dozen or so years the Department of Information Sciences at UCL have organised a meeting on ebooks. I’ve only been to one of them before, two or three years ago, when the big issues were around what publishers’ DRM requirements for ebooks meant for libraries. I came away from that musing on what the web would look like if it had been designed by publishers and librarians (imagine questions like: “when you lend out our web page, how will you know that the person looking at the screen is a member of your library?”…). So I wasn’t sure what to expect when I decided to go to this year’s meeting. It turned out to be far more interesting than I had hoped; I latched on to three themes of particular interest to me: changing paradigms (what is an ebook?), eTextBooks and discovery.

Changing paradigms

With the earliest printed books, or incunabula, such as the Gutenberg Bible, printers sought to mimic the hand written manuscripts with which 15th cent scholars were familiar; in much the same way as publishers now seek to replicate printed books as ebooks.


In the first presentation of the day Lorraine Estelle, chief executive of Jisc Collections, focussed on access to electronic resources. Access, not lending; resources, not ebooks. She highlighted the problem of using yesterday’s language and thinking in this context: it is like having a “horseless carriage” and buying it hay. [This is my chance to make the analogy between incunabula and ebooks again, see right.] The sort of discussions I recalled from the previous meeting I attended reflect this thinking: publishers wanting a digital copy of a book to be equivalent to the physical book, lendable only to one person at a time and requiring replacement after a certain number of loans.

We need to treat digital content as offering new possibilities and requiring new ways of working. This might be uncomfortable for publishers (some more than others), and there was some discussion about how we cannot assume that all students will naturally see the advantages, especially if they have mostly encountered problematic content that presents little that could not be put on paper, encumbered with DRM to the point that it is questionable whether they really own the book. But there is potential as well as resistance. Of course there can be more interesting, more interactive content: Will Russell of the Royal Society of Chemistry described how they have been publishing to mobile devices, with tools such as Chem Goggles that will recognise a chemical structure and display information about the chemical. More radically, there can also be new business models: Lorraine suggested institutions could become publishers of their own teaching content, and later in the day Caren Milloy, also of Jisc Collections, and Brian Hole of Ubiquity Press pointed to the possibilities of open access scholarly publishing.

Caren’s work with the OAPEN Library is worth looking through for useful information relating to quality assurance in open monographs, such as notifying readers of updates or errata. Caren also talked about the difficulties of advertising that a free online version of a resource is available when much of the dissemination and discovery ecosystem (you know, Amazon, Google…) is geared around selling stuff, difficulties that work with EDItEUR on the ONIX metadata scheme will hopefully address soon.

Brian described how Ubiquity Press can publish open access ebooks by driving down costs and being transparent about what they charge for. They work from XML source, created overseas, from which they can publish in various formats including print on demand, and explore economies of scale by working with university presses, resulting in a charge to the author (or their funders) of about £150 for a chapter, assuming there is nothing too complex in that chapter.


eTextBooks

All through the day there were mentions of eTextBooks, starting again with Lorraine, who highlighted the paperless medic and how his quest to work only with digital resources is complicated by the non-articulation of the numerous systems he has to use. When she said that what he wanted was all his content (ebooks, lecture handouts, his own notes etc.) on the same platform, integrated with knowledge about when and where he had to be for lectures and when he had exams, I really started to wonder how much functionality you can put into an eContent platform before it becomes a single-person content-oriented VLE. And when you add in the ability to share notes, plus the social and communication capability of most mobile devices, what then do you have?

A couple of presentations addressed eTextBooks directly, from a commercial point of view. Jenni Evans spoke about Vital Source and Andrejs Alferovs about Kortext, both of which are in the business of working with institutions to distribute online textbooks to students. Both seem to have a good grasp of what students want, which I think should provide useful requirements to feed into eTextBook standardization efforts such as eTernity. These include:

  • ability to print
  • offline access
  • availability across multiple devices
  • reliable access under load
  • integration with VLE
  • integration with syllabus/curriculum
  • epub3 interactive content
  • long term access
  • ability for student to highlight/annotate text and share this with chosen friends
  • ability to search text and annotations


Discovery

There was also a theme of resource discovery running through the day; I have already mentioned in passing that this referenced Google and Amazon, but social media also featured. Nick Canty spoke about a survey of library use of social media; I thought it interesting that there seemed to be some sophisticated use of the immediacy of Twitter to direct people to more permanent content, e.g. to engagement on Facebook or the library website.

Both Richard Wallis of OCLC and Robert Faber of OUP emphasized that users tend to use Google to search, and gave figures for how much of the access to library catalogue pages came direct from Google and other external systems, not from the library’s own catalogue search interface. For example, the Bibliothèque nationale de France found that 80% of access to their catalogue pages came directly from web search engines, not catalogue searches, and Robert gave similar figures for access to Oxford Journals. The immediate consequence of this is that if most people are trying to find content using external systems then you need to make sure that at least some (as much as possible, in fact) of your content is visible to them–this feeds in to arguments about how open access helps solve discoverability problems. But Richard went further: he spoke about how the metadata describing the resources needs to be in a language that Google/Bing/Yahoo understand, and that language is He did a very good job of distinguishing the usefulness of specialist metadata schemas for exchanging precise information between libraries or publishers from what is needed when trying to pass general information to Google:

it’s no use using a language only you speak.

Richard went on to speak about the Google Knowledge Graph and its “things not strings” approach facilitated by linked data. He urged libraries to stop copying text and to start linking: for example, not to copy an author name from an authority file but to link to the entry in that file–in Eric Miller’s words, to move from cataloguing to “catalinking”.
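That “catalinking” idea can be sketched in markup. The following is a minimal, hypothetical example (the book title, author name and identifier URI are placeholders, not real catalogue data): the author’s name is still displayed as text, but the record also links to the corresponding authority file entry rather than relying on the copied string alone.

```html
<!-- Hypothetical catalogue record using RDFa: the author's name
     is shown to humans, while the <link> points machines at the
     authority file entry (placeholder URI). -->
<div vocab="" typeof="Book">
  <span property="name">An Example Book</span> by
  <span property="author" typeof="Person">
    <span property="name">A. N. Author</span>
    <link property="sameAs" href="" />
  </span>
</div>
```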


So was this really about ebooks? Probably not, and the point was made that over the years the name of the event has variously stressed ebooks and econtent, and that over that time what is meant by “ebook” has changed. I must admit that for me there is something about the idea of an [e]book that I prefer over a “content aggregation”, but if we use the term ebook, let’s use it acknowledging that the book of the future will be as different from what we have now as what we have now is from the medieval scroll.

Picture Credit
Scanned image of page of the Epistle of St Jerome in the Gutenberg bible taken from Wikipedia. No Copyright.

Brief reflections on Open Practice and OER Sustainability

Lorna and I ran a session at the CETIS conference on the topic of Open Practice and OER Sustainability; we had 10-minute presentations from ten brilliant people who have been involved in the UKOER programme, each giving a view from their own perspective on the general question of “what now that the Jisc money has gone?” It’s fruitless to try to summarise that in full, so what I will do is add links to the presentations on the session page linked-to above and give my own very cursory summary of a few of the themes. Lorna has also written a summary on her own blog.

“Scratch your own itch”

One of the most telling comments on sustainability, from Julian Tenney talking about the Xerte project, was that a project would most likely be sustainable if it was about doing something that the people involved needed doing anyway. Not necessarily something that would be done anyway (though in Xerte’s case mostly it was), but definitely not something that was being done just because the money was there. I agree with a comment that was made that there is a problem with the way that universities treat project funding in this respect (at least in research departments): the emphasis is always on chasing money, getting the next grant. There were many examples of what it might be that “needs doing anyway”, at personal, subject-community, institutional, and national/sector-wide level: the sharing of resources between humanities teachers using HumBox; extra-mural studies at the Department for Continuing Education at Oxford University; the institutional teaching and learning policy at Leeds Met University; FE colleges in Scotland working in ever closer union; and student progression from college to university.

(Image by Nick Sheppard, Leeds Metropolitan University)

Nick Sheppard asked for a technical infrastructure to support these institutional and other policies. He (and others) asked for APIs and other links between repositories (and the rest of the web, I assume) so that the greatest advantage could be had for the effort. Sarah Currier told us about the new offers from Mimas to make your OER effort “Jorum Powered”: through a hosted repository, a web interface into Jorum, or by building custom applications using the new Jorum API.

But with technical infrastructure come technical requirements, David Kernohan was worried that these requirements are only bearable by an academic with help, and that once the Jisc funding goes that support will also go. Suzanne Hardy also touched on this.

(Image by David Kernohan, Jisc. The teddy bear is an academic.)

The concept involved here was identified by Yvonne Howard as relative advantage: the advantage of something has to be weighed against its costs, and the costs have to be minimised, as can be done through clever technology such as making maximum use of machine-created metadata.

“It’s like MOOCs stole OER’s girlfriend”

So far I’ve mentioned advantages for many people but glossed over the fact that not everyone sees the same advantages; they don’t, and for that reason different people will pursue different directions, as we have seen with MOOCs. Amber Thomas of Warwick University (but yes, the same Amber as was of JISC) described MOOCs and OERs as distant cousins who used to get on but are now no longer friendly for some reason. And it’s not like the O for Open in the two really stands for the same thing; as Pat Lockley said, their open is not necessarily our open. But, he asked, what is open? A footpath through private land, or a National Park with the right to roam where you please (if you can manage to get there)?

(this last photo is mine and is covered by the CC-BY licence of this blog; the others aren’t and are used according to their various licences or permissions from their creators.)

Some adventures with HTML5

A couple of weeks ago I hosted an online webinar for JISC OER Rapid Innovation projects. Here I will attempt to summarise what was said about HTML5.

Rapid Innovation projects are short projects, typically only a few months long, that JISC fund to do some development; they’re not the place for open-ended explorations of new concepts, but that doesn’t mean that they aren’t projects from which we can learn a lot. They are quite a good test bed for assumptions that certain developments should be quite easily achievable: you think that the state of technology X is such that a couple of months of developer effort should be enough to realise idea Y: a rapid innovation project is a way of testing this. The aim of this webinar was to collect reflections from this round of projects on a number of technologies that several projects had tried. HTML5, with associated aspects of Javascript, video and accessibility was one of those technologies.

One of the projects that had the strongest dependency on HTML5 was XENITH (Xerte Experience Now Improved: Targeting HTML5) which was predicated on converting the Xerte online toolkit (a popular wizard-based approach to creating OERs) from Flash output to HTML5. This seems even more important now than it did when the project started, as we have seen an accelerating shift away from Flash to HTML5 on mobile platforms. Tellingly, we were told that once busy Flash mailing lists now have very little traffic, a sign that developers are deserting Flash tools.

Julian Tenney, the Xerte project manager (and Flash developer by background), reported that he had initially been nervous about the feasibility of replacing the functionality of the Flash player with HTML5, but he said he was “much much more comfortable with it now, it seems that [the project] haven’t really hit an awful lot of problems.” The project was running ahead of expectations, with a solid core implemented, a good interface, and more than half of the 75 templates for different types of page converted to HTML5. The project has used jQuery as the general JavaScript framework, which is a popular choice. After a fair amount of investigation into how to support audio and video playback they adopted JW Player, which did most of what the project needed without them having to create anything new from scratch.

One advantage that HTML5 has over Flash, highlighted by EA Draffan of the Synote Mobile project, is that in principle it should help make resources accessible to all. Xerte has a good record for supporting access, for example it will work through the JAWS screen reader, but Julian pointed to a disadvantage of HTML5: accessibility is left to the browser and is not, as in the case of Flash, under the control of the developer. This sentiment was echoed by Josef Baker, who has been working on displaying maths in HTML5 compared to PDF for the Maxtract project, and who found that neither accessible PDF nor HTML5 worked as well for blind and visually impaired users as plain text.

This problem seems most acute with video playback, where making resources accessible for anyone can be a problem on some devices. Several projects reported that it is still a problem to get acceptable video playback behaviour across different browser/platform combinations, an issue which Synote have documented. Several people voiced concern at this inconsistency: the plethora of JavaScript libraries for controlling video; the lack of any one video format that will work across platforms; poor performance on small mobile screens; and the lack of mature development framework elements (especially compared to apps). Simon Morris of the Ensemble project and the associated rapid innovation project for OER data infrastructure was especially critical of how hard it is to develop tools for sophisticated manipulation of the video stream. While it seems possible to create HTML5 applications that do this for a specific target browser and device, the difficulty is getting something that will work across multiple platforms. He was doubtful about whether the document-centric layout engines for HTML5 would ever be as easy to use for graphics-oriented purposes as those available for native mobile apps. He also pointed to the example of controlling video from YouTube, where the API functionality to do such things as tracking which part of the video was being viewed was only available for the Flash player and not the HTML5 one. According to Simon, there are deep-seated problems in the file format standards with respect to pseudo-streaming: for example, the index information that allows one to jump into a video at an arbitrary point is held at the end of MPEG video files, meaning the entire file has to be downloaded before the viewer can jump to the bit they want to see.
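The usual workaround for the lack of a single universal video format is to offer the same video in more than one encoding and let the browser pick the first one it can play; a minimal sketch, with hypothetical file names (the captions track is a nod to the accessibility concerns above):

```html
<!-- Offer multiple encodings; the browser plays the first
     <source> it supports. The <track> element supplies captions,
     and the inner paragraph is the fallback for browsers with no
     HTML5 video support at all. -->
<video controls width="640" height="360">
  <source src="lecture1.mp4" type="video/mp4" />
  <source src="lecture1.webm" type="video/webm" />
  <track kind="captions" src="lecture1-captions.vtt"
         srclang="en" label="English" />
  <p>Your browser does not support HTML5 video:
     <a href="lecture1.mp4">download the video</a> instead.</p>
</video>
```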

It seems clear that libraries such as jQuery have helped overcome many of the inconsistencies of creating good user experiences in HTML5, but HTML5 video still has a way to go, especially on mobiles. There was disagreement on whether the problems described were signs of immaturity, indicating a need to support the further development of JavaScript libraries that aim to iron out platform inconsistencies for video in a similar way to jQuery, or an obstacle to using HTML5 that would be difficult to overcome while native apps provide an alternative. The “native vs HTML5 web app” question is one that goes far beyond the experiences of a few projects with video.

Examples of good licence embedding

I was asked last week to provide some good examples of embedded licences in OERs. I’m pleased to do that (with the proviso that this is just my personal opinion of “good”), since it makes a change from carping about how some of the outputs of the UKOER programme demonstrate a neglect of seemingly obvious points about self-description. Anyone who gets hold of a copy of a resource will want to see that it is an OER, so it seems obvious that the Creative Commons licence should be clearly displayed on the resource; they will also want to see something about who created, owned or published the resource, partly to comply with the attribution condition of Creative Commons licences but also to conform with good academic and information literacy practice around provenance and citation.

With few exceptions, the machine-readable metadata hidden in OER files (MS Office file properties, ID3 tags, EXIF etc.) are an irremediable mess, especially for licence and attribution information, which cannot on the whole be created automatically, and so are generally ignored. Also, the metadata stored in a content management system such as a repository and displayed on the landing page for the resource are not relevant when the resource is copied and used in some other system. So what I’m looking at here is human-readable information about licence and attribution that travels with the resource when it is copied. Different approaches are required for different resource types, so I’ll take them in turn.

Text, e.g. office documents: MS Word, PowerPoint, PDF
Pretty simple really, you can have a title section with the name of resource creator and a footer with the copyright and licensing information. You can also have a more extensive “credits” page at the end of the document. Running page headers and footers work well if you think that people might take just a few pages rather than the whole document.
Example text OER with attribution and licence information. Note that the licence statement and logo link to the legal deed on the Creative Commons website.
Example OER PowerPoint presentation with licence and attribution information. Note how the final slide gives licence and attribution information for third-party resources used.

Web pages
Basically a special case of a text document, the attribution and licence information can be included in a title or footer section, scroll down to the bottom of this page to see an example. For HTML there is a good case for making this information machine readable by wrapping the information in microdata or RDFa tags. Plugins exist for many web content management systems to do this, and the Creative Commons licensing generator will produce an HTML snippet that includes such tags.
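For illustration, the snippet produced by the Creative Commons licensing generator looks roughly like the following (the work title, author name and URLs here are hypothetical): the rel="license" link and RDFa properties make the licence and attribution information machine readable while staying visible to people.

```html
<!-- Hypothetical page footer: licence and attribution marked up
     with RDFa so that machines can read what humans see. -->
<div xmlns:cc="" xmlns:dct="">
  <a rel="license" href="">
    <img alt="Creative Commons Licence"
         src="" />
  </a><br />
  <span property="dct:title">An Example OER</span> by
  <a rel="cc:attributionURL" property="cc:attributionName"
     href="">A. N. Author</a>
  is licensed under a
  <a rel="license" href="">
    Creative Commons Attribution 3.0 Unported Licence</a>.
</div>
```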

Images

Example of photo with attribution and licence information.

Really the only option for putting the essentially textual information about licence and attribution into an image is to add it as a bar to the image. The Attribute Images and related projects at Nottingham have been doing good work on automating this.

Audio

A spoken introduction can provide the information required. BBC podcasts give good examples, though they are not OERs; the introduction to the video below also works as audio.

Video

An introductory screen or credits at the end (with optional voice-over) can provide the required information. See for example this video from MIT OCW (be sure to skip to the end to see the credits for third-party resources used).

Podcasts (and other RSS feeds)

As well as having <copyright> and <creativeCommons:license> tags in the RSS feed at channel and item level, Oxford University’s OER podcasts use a channel image that includes the Creative Commons logo. This is useful because the image is displayed by many feed readers and podcast applications. Of course the recordings themselves should have licence information in them, just as any other audio or video OER.
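A feed following that pattern might look like this sketch (all URLs and titles are hypothetical); the creativeCommons:license element comes from the commonly used creativeCommons RSS module, and can appear at both channel and item level:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
     xmlns:creativeCommons="">
  <channel>
    <title>An Example OER Podcast</title>
    <link></link>
    <description>Hypothetical feed showing licence information.</description>
    <copyright>Example University</copyright>
    <creativeCommons:license></creativeCommons:license>
    <!-- the channel image, displayed by many podcast apps,
         can itself include the Creative Commons logo -->
    <image>
      <url></url>
      <title>An Example OER Podcast</title>
      <link></link>
    </image>
    <item>
      <title>Lecture 1</title>
      <enclosure url="" length="12345678" type="audio/mpeg" />
      <creativeCommons:license></creativeCommons:license>
    </item>
  </channel>
</rss>
```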

The Challenge of ebooks

Yesterday I was in London, along with a group of people with a wide range of experience in digital resource management, OERs, and publishing, for a workshop that was part of the Challenge of eBooks project. Here’s a quick summary and some reflections.

To kick off, Ken Chad defined eBooks for the purpose of the workshop, and I guess the report to be delivered by the project, as anything delivered digitally that was longer than a journal article. I’ll come back to what I think are the problems with that later, but we didn’t waste time discussing it. It did mean that we included in the discussion such things as scanned copies of texts such as those that can be made under the CLA licence, and the difficulties around managing and distributing those.

With the earliest printed books, or incunabula, such as the Gutenberg Bible, printers sought to mimic the hand-written manuscripts with which 15th-century scholars were familiar; in much the same way, publishers now seek to replicate printed books as ebooks.

The main part of the workshop was organised around a “jobs to be done” framework. The idea of this is to focus on what people are trying to do: “people don’t want a 5mm drill bit, they want a 5mm hole”. I found that useful in distinguishing ebooks in the domain of HE from the vast majority of those sold. In the latter case the job to be done is simply reading the book: the customer wants a copy of a book because they want to read that book, or a book by that author, or a book of that genre, but there isn’t necessarily any further motive beyond wanting the experience of reading the book. In HE the job to be done (ultimately) is for the student or researcher to learn something, though other players may have a job to do that leads to this, for example providing a student with resources that will help them learn something. I have views on how the computing power in the delivery platform can be used for more than just making the delivery of text more convenient: how it can be used to make the content interactive, or to deliver multimedia content, or to aid discussion, or just to connect different readers of the same text (I was pleased that someone mentioned the way a Kindle will show which passages have been bookmarked or commented on by other readers).

The issues raised in discussion included rights clearance, the (to some extent technical, but mostly legal) difficulties of creating course packs containing excerpts of selected texts, the diversity of platforms and formats, disability access, and relationships with publishers.

It was really interesting that accessibility featured so strongly. Someone suggested that this was because the mismatch between an ebook and the device on which it is displayed creates an impairment so frequently that accessibility issues are plain for all to see.

A lot of the issues seem to go back to publishers struggling with a new challenge, not knowing how they can meet it and keep their business model intact. It was great to have Suzanne Hardy of the PublishOER project there with her experience of how publishers will respond to an opportunity (such as getting more information about their users through tracking) but need help in knowing what the opportunities are when all they can see is the threat of losing control of their content. Whether publishers can make the necessary changes in currently print-oriented business processes to realise these benefits was questioned. There are also challenges for libraries in HE, who are used to being able to buy one copy of a book for an institution, whereas publishers now want to be able to sell access to individuals: partly, I guess, so that they can make that link between a user and the content they provide, but also because one digital copy can go a lot further than a single physical copy.

Interestingly, the innovation in ebooks is coming not from conventional publishers but from players such as Amazon and Apple, and from publishers such as O’Reilly and Pearson. (Note that Pearson have a stake in education that includes an assessment business, online courses and colleges, and so go beyond being a conventional publisher.) Also, the drive behind these innovations comes from new technology making new business models possible, not from evolution of current business, nor, arguably, from user demand.

So, anyway, what is an ebook? I am not happy with a definition that includes web sites of additional content created to accompany a book, or pages of a physical book that have been scanned. That doesn’t represent the sort of technical innovation that is creating new and interesting opportunities and the challenges that come with them. Yes, there are important (long-standing) issues around digital content in general, some of which will overlap with ebooks, but I will be disappointed if the report from this project is full of issues that could have been written about ten years ago. That’s not because I think those issues are dead but because I think ebooks are something different that deserves attention. I’ll suggest two approaches to defining what that something is:

1. an ebook is what ebook reading devices and apps read well. By and large that means content in mobi or ePub format. Ebook readers don’t handle scanned page images well. They don’t read most PDFs well (though that depends on the tool and the nature of the PDF used; the aim of PDF was to maintain page layout, which is exactly what you don’t want on an ebook reader). Word-processed files are borderline, but most word-processed documents are page-oriented, which raises the same issue as with PDFs. In short, WYSIWYG and ebooks don’t match.

2. an ebook is aggregated content, packaged so that it can be moved from server to device, with more-or-less linear navigation. In the aggregation (which is often a zip file under another extension name) are assets (the text, images and other content that are viewed) plus metadata that describes the book as a whole (and maybe the assets individually) and information about how the assets should be navigated (structural metadata describing the organisation of the book). That’s essentially what mobi and ePub are. It’s also what IMS Content Packaging and offspring like SCORM and Common Cartridge are; and for that matter it’s what the MS Office and Open Office formats are.
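The second definition can be made concrete with a short sketch. This is a minimal, illustrative ePub-style package built with Python’s standard library: a zip containing assets, metadata describing the book, and a spine giving the linear navigation order. The file names and XML here follow the ePub conventions but are simplified, not a complete, valid ePub.

```python
import io
import zipfile

# Build a minimal ePub-style package in memory: assets plus metadata
# about the book as a whole, plus structural navigation information.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    # ePub requires a "mimetype" file, stored uncompressed
    z.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
    # Pointer to the package document (the "rootfile")
    z.writestr("META-INF/container.xml", """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>""")
    # The package document: descriptive metadata, a manifest of the
    # assets, and a spine giving the more-or-less linear navigation
    z.writestr("OEBPS/content.opf", """<?xml version="1.0"?>
<package xmlns="http://www.idpf.org/2007/opf" version="3.0" unique-identifier="id">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:identifier id="id">example-book-1</dc:identifier>
    <dc:title>An Example Book</dc:title>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="ch1"/>
  </spine>
</package>""")
    # An asset: the actual content that is viewed
    z.writestr("OEBPS/chapter1.xhtml", "<html><body><p>Hello.</p></body></html>")

with zipfile.ZipFile(buf) as z:
    print(z.namelist())
```

The same pattern, a zip of assets plus a manifest and an ordering, is what IMS Content Packaging, SCORM and the MS Office/Open Office formats use, which is why the definition covers them too.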

I had a short discussion with Zak Mensah of JISC Digital Media about whether the content should be mostly text based. I would like to see as much non-text material as is useful, but clearly there is a limit. It would be perverse to take a set of videos, sequence them one after another with a screen of text between each one, like a caption frame in a silent movie, and then call it a book. However, there is something more than text that would make sense as a book: imagine replacing all the illustrations in a well-illustrated text book with models, animations, videos … for example, a chemistry book with interactive models of chemical structures and graphs that change when you alter the parameters; or a Shakespeare text with videos of performance in parallel with the text … that still makes sense as a book.

[image of page from Gutenberg Bible taken from Wikipedia]

A short update on resource tracking

In our reflections on technical aspects of phase 2 of the UKOER programme, we said that we didn’t understand why projects aren’t worrying more about tracking the use and reuse of the OERs they released. The reason for this was that if you don’t know how much your resources are used you will not be in a good position to sustain your project after JISC have stopped funding it. For example, how can you justify the effort and cost of clearing resources for release under a Creative Commons licence unless you can show that people want their own copies of the resources you release rather than just viewing the copy you have on your own server? Here is a quick update on projects related to resource tracking.

Under the OER Rapid Innovation programme JISC have funded the TrackOER project. It was known from the outset that the project would start slowly, but in the last couple of weeks it has got some momentum going. The nub of the problem they are looking at is that

when an OER is taken from its host or origin server, in order to be used and reused the origin institution and the community generally lose track of it.

Building on work by Scott Leslie, their prospective solution is the use of a web bug/beacon: an image, normally invisible (though TrackOER may use the Creative Commons licence badge), embedded in the resource but hosted by whoever is collecting the stats (let’s say the OER publisher). So long as the image is not removed, whenever the resource is loaded a request will be sent to the publisher’s server for that image, and that request can be logged. Additional information can be acquired by appending ?key1=value1&key2=value2… onto the src url of the img element in the resource; anything after the ? is logged in the server logs but does not affect the image that is served. For example, you could encode an identifier for the OER like this (the URL here is illustrative):

<img src="http://example.org/tracker/cc-licence-badge.png?oerid=oer-1234" alt="" />

TrackOER are investigating the use of Google Analytics and the open source alternative Piwik (both with and without JavaScript, maybe) for the actual tracking. One of their challenges is that both normally assume that the person doing the tracking knows where the resource is, i.e. it will be where they put it, whereas with OERs one of the things that would be most worth knowing is whether anyone has made a copy of your resource somewhere else. However, if you use JavaScript you have access to this information and can write it to the tracking image URL. Another challenge that comes with using Creative Commons licence images instead of an invisible tracking bug is that several images are used for tracking, not just one. TrackOER have modified Piwik to allow for the use of multiple alternative images.
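To make the mechanism concrete, here is a sketch of both ends of it in Python: building the web-bug img element with the page location written into the query string, and recovering that information from a logged request URL on the server side. The tracker host, parameter names and helper functions are all hypothetical, not TrackOER’s actual code.

```python
from urllib.parse import parse_qs, urlencode, urlparse

# Hypothetical stats-collecting host serving the (licence badge) image
TRACKER = "http://tracker.example.org/cc-by-badge.png"

def tracking_img(oer_id, page_url):
    """Build the src for a web bug: everything after the '?' is logged
    by the server but does not affect the image that is served."""
    query = urlencode({"oerid": oer_id, "loc": page_url})
    return '<img src="%s?%s" alt="CC BY" />' % (TRACKER, query)

def parse_hit(request_url):
    """Server side: recover the resource identifier and the page the
    resource was loaded from out of a logged request URL."""
    qs = parse_qs(urlparse(request_url).query)
    return qs["oerid"][0], qs["loc"][0]

# In a JavaScript variant, page_url would come from document.location,
# which is how a copy hosted somewhere unexpected reveals itself.
img = tracking_img("oer-1234", "http://elsewhere.example.com/copied-page")
print(img)
```

The point of writing the location into the URL is exactly the one made above: the server log then records not just that the resource was viewed, but where the viewed copy lives.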

As an aside, TrackOER have also found a service called Stipple, they say:

using Stipple to track OER across the web in the same way as the TrackOER script is perfectly feasible. It might even be easy. You could get richer analytics as well as access to promotional tools.

OER tracking at Creative Commons
Creative Commons have posted three ideas for tracking OERs: two of which use a mechanism they call refback, and one which provides an API to data they acquire as a result of people linking to their licences and using images of licence badges served from their hosts. In all cases it is a priority to avoid anything that smacks of DRM or excessive and covert surveillance; this is understandable given that Creative Commons as an organisation is a third party between resource user and owner and cannot do anything that would risk losing the trust of either.

Refback tracking involves putting a link in the resource being tracked to the site doing the tracking (the two variants are that this may be either the publisher or Creative Commons, i.e. independent and distributed, or hosted and centralised). If a curious user follows that link (and the assumption is that occasionally someone will), the tracking site will log the request for the page to which the link goes; included in the log information is the “referrer”, i.e. the URL of the page on which the user clicked the link. An application on the tracking site will work through this referrer log and fetch the pages for any URL it does not recognise to ascertain (e.g. from the attribution metadata) whether they are copies of a resource that it is tracking.
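The log-processing step just described can be sketched in a few lines of Python. This is a minimal illustration under assumed inputs (a plain list of referrer URLs and a set of already-known copies), not Creative Commons’ actual implementation; fetching the candidate pages and checking their attribution metadata is left out.

```python
def new_referrers(referrer_log, known):
    """Work through a referrer log and return the URLs not already
    known: candidate pages that may host copies of a tracked resource
    and should be fetched and checked for attribution metadata."""
    seen = set()
    for url in referrer_log:
        if url and url not in known and url not in seen:
            seen.add(url)
    return sorted(seen)

# Hypothetical data: the original copy and a page we have not seen before
known_copies = {"http://home.example.org/oer/physics-101"}
log = [
    "http://home.example.org/oer/physics-101",   # clicks from the original
    "http://elsewhere.example.com/my-course",    # an unrecognised page
    "http://elsewhere.example.com/my-course",    # repeat visits collapse
]
print(new_referrers(log, known_copies))  # ['http://elsewhere.example.com/my-course']
```

Because only users who happen to click the link generate log entries, this approach trades completeness for the unobtrusiveness that Creative Commons are after.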

The third approach involves Creative Commons logging the referrers for requests to get a copy of one of their licence badges, and then looking at the attribution metadata on the web page in which the badge was embedded to build up a graph of pages that represent re-use of other pages’ resources. This information would be hosted on Creative Commons servers and be available to others via an API.

A reflection for open education week

It’s open education week, lots of interesting events are happening and lots of reflections being made on what open education means. One set of reflections that caught my eye was a trio of posts from Jisc programme managers David, Amber and Lawrie: three personal attempts to draw a picture of the open education space to answer the question “what is open education and how does it fit in with everything else?”. These sprang from an attempt “to describe the way JISC-funded work is contributing to developing this space”. They are great. But I think they miss one thing: the time dimension. By a stroke of good luck, Lou Macgill has recently produced an OER Timeline which I think represents this very nicely. (Yes, I know that there is much more to education than resources, and much more to open education than OER, but it’s resource management and dissemination that I mostly work on.)

Maybe it’s a sign of age, but the changes in approaches to supporting the sharing of content are something that has been interesting me more and more of late. Nearly two years ago Lorna, John and I produced a paper for the ADL Repositories and Registries Summit called Then and Now, which highlighted changes in technical approaches to JISC programmes that CETIS had helped support between 2002 and 2010. The desire to share resources had always been there; the change was from a focus on tight technical specifications to one which put openness at the centre. This wasn’t done for any ideological reason, but because we had an aim, “share stuff”, and the open approach seemed the one that presents fewest obstacles. I tried to describe the advantages of the open approach in An open and closed case for educational resources.

The timeline helps me understand why we are doing OER rather than some other means of solving the problem of how to share content, but that is just one aspect. What I really like about the open approach is that it creates new possibilities as well as solving old problems. So as well as a timeline of solutions what we should have is a timeline that shows what we are trying to do, one which shows the changing aims as well as the changing solutions, and that I think would show a trend to Open Education.