Feeding a repository

There has been some discussion recently about mechanisms for remote or bulk deposit in repositories and similar services. David Flanders ran a very thought-provoking and lively show-and-tell meeting a couple of weeks ago looking at deposit. In part this is familiar territory: looking at and tweaking the work that the creators of the SWORD profile have done based on APP, or looking again at WebDAV. But there is also a newly emerging approach of using RSS or Atom feeds to populate repositories, a sort of feed-deposit. Coincidentally, we also received a query at CETIS from a repository which is looking to collect outputs of the UKOER programme, asking for help in firming up the requirements for bulk or remote deposit, and asking how RSS might fit into this.

So what is this feed-deposit idea? The first thing to be aware of is that, as far as I can make out, a lot of the people who talk about this don’t necessarily have the same idea of “repository” and “deposit” as I do. For example, the Nottingham Xpert rapid innovation project and the Ensemble feed aggregator are both populated by feeds (you can also disseminate material through iTunesU this way). But (I think) these are all links-only collections, so I would call them catalogues, not repositories, and I would say that they work by metadata harvest(*), not deposit. However, they do show that you can do something with feeds that the people who think RSS or Atom is just about showing the last ten items published should take note of. The other thing to take note of is podcasting, by which I don’t mean sticking audio files on a web server and letting people find them, but rather feeds that either carry or point to audio/video content so that applications and devices like phones and wireless-network-enabled media players can automatically load that content. If you combine what Xpert and Ensemble are doing by way of getting information about entire collections with the way that podcasts let you automatically download content, then you could populate a repository through feeds.
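
To make that concrete, here is a minimal sketch, in Python, of the pull side of feed-deposit: read a podcast-style feed and fetch whatever content each entry encloses. The feed URL is invented for illustration, and I’m assuming the feedparser and requests libraries; a real ingest would obviously also need to store the metadata, check licences, handle errors and so on.

```python
# A minimal sketch of the pull side of feed-deposit: parse a
# podcast-style feed and download whatever each entry encloses.
# FEED_URL is invented for illustration; feedparser and requests
# are real libraries but would need to be installed.
import os
import feedparser
import requests

FEED_URL = "http://example.org/oer/collection.rss"  # hypothetical

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Podcast-style feeds carry content as <enclosure> elements;
    # a links-only catalogue would stop at entry.link instead.
    for enclosure in entry.get("enclosures", []):
        filename = os.path.basename(enclosure.href)
        response = requests.get(enclosure.href, timeout=30)
        response.raise_for_status()
        with open(filename, "wb") as f:
            f.write(response.content)
        # A real ingest would also store entry.title,
        # entry.summary, licence information and so on.
```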

The trouble is, though, that once you get down to details there are several problems and several different ways of overcoming them.

For example, how do you go beyond having a feed for just the last ten resources? Putting everything into one feed doesn’t scale. If your content is broken down into manageably sized collections (e.g. the OU’s OpenLearn courses, and I guess many other OER projects) you could put everything from each collection into a feed and then have an OPML file to say where all the different feeds are, which works up to a point, especially if the feeds will be fairly static, until your OPML file gets too large. Or you could have an API that allowed the receiver of the feed to specify how they wanted to chunk up the data: OpenSearch should be useful here, and it might be worth looking at YouTube as an example. Then there are similar choices to be made for how just about every piece of metadata, and the content itself, is expressed in the feed, starting with the choice of flavour(s) of RSS or Atom feed.
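
As a sketch of how the OPML part of that might work, assuming a hypothetical OPML file that lists one feed per collection in the usual <outline xmlUrl="…"> form, discovering the feeds to harvest is only a few lines:

```python
# Sketch of discovering a project's feeds from an OPML file, one
# feed per collection. OPML_URL is hypothetical; an <outline>
# element with an xmlUrl attribute is OPML's usual way of listing
# a feed.
import xml.etree.ElementTree as ET
import requests

OPML_URL = "http://example.org/oer/feeds.opml"  # hypothetical

opml = ET.fromstring(requests.get(OPML_URL, timeout=30).content)
feed_urls = [o.get("xmlUrl") for o in opml.iter("outline") if o.get("xmlUrl")]

for url in feed_urls:
    print(url)  # each feed would then be harvested as sketched above
```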

But feed-deposit is only a potential solution, and it’s not good to start with a solution and then articulate the problem. The problem that needs addressing (by the repository that made the query I mentioned above) is how best to deposit hundreds of items given (1) a local database which contains the necessary metadata and (2) enough programming expertise to read that metadata from the database and republish it or post it to an API. The answer does not involve someone sitting for a week copy-and-pasting into a web form that the repository provides as its only means of deposit.

There are several ways of dealing with that. So far a colleague who is in this position has had success depositing into Flickr, SlideShare and Scribd by making repeated calls to their respective APIs for remote deposit, which you could call a depositor-push approach. An alternative is that she puts the resources somewhere and provides information to tell repositories where they are, so that any repository that is listening can come and harvest them. That would be more like a repository-pull approach, and in that case feed-deposit might be the solution.
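
In outline, the depositor-push approach is just a loop over the local database with one API call per item. The table layout and deposit endpoint below are invented for illustration (Flickr, SlideShare and Scribd each have their own real upload APIs, with their own parameters and authentication), but the shape of the script is much the same whichever service you target:

```python
# Sketch of the depositor-push approach: one API call per item,
# driven from the local database. The resources.db table layout and
# DEPOSIT_URL are invented; real services (Flickr, SlideShare,
# Scribd) each have their own upload APIs and authentication.
import sqlite3
import requests

DEPOSIT_URL = "http://repository.example.org/api/deposit"  # hypothetical

conn = sqlite3.connect("resources.db")  # hypothetical local database
rows = conn.execute("SELECT title, description, filepath FROM resources")

for title, description, filepath in rows:
    with open(filepath, "rb") as f:
        response = requests.post(
            DEPOSIT_URL,
            data={"title": title, "description": description},
            files={"file": f},
        )
    response.raise_for_status()  # hundreds of items, no copy-and-paste

conn.close()
```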

[* Yes, I know about OAI-PMH, the comparison is interesting, but this is a long post already.]

Resource description requirements for a UKOER project

CETIS have provided information on what we think are the metadata requirements for the UK OER programme, but we have also said that individual projects should think about their own metadata requirements in addition to these. As an example of what I mean by this, here is what I produced for the Engineering Subject Centre’s OER project.

Like it says on the front page, it’s an attempt to define what information about a resource should be provided, why, for whom, and in what format, where:

“Who” includes the project funders (HEFCE, with JISC and the Academy acting as their agents), the project partners contributing resources, the project manager, end users (teachers and students), and aggregators, that is, people who wish to build services on top of the collection.

“Why” includes resource management, selection and use as well as discovery through Google or otherwise, etc. etc.

“Format” includes free text for humans to read (which is incidentally what Google works from) and encoded text for machine operations (e.g. XML, RSS, HTML metatags, microformats, metadata embedded in other formats or held in the database of whatever content management system lies behind the host we are using).
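
As a toy illustration of that free text versus encoded text distinction, here is the same (invented) resource description printed once for humans and once as Dublin Core-style HTML meta tags, one of the simplest of the machine-readable options listed above:

```python
# Toy illustration of "free text for humans" versus "encoded text
# for machines": the same invented resource description printed as
# a sentence and as Dublin Core-style HTML <meta> tags.
resource = {
    "title": "Introduction to Fluid Mechanics",
    "creator": "A. N. Engineer",
    "rights": "http://creativecommons.org/licenses/by/2.0/uk/",
}

# Free text: what a person (or Google's indexer) reads on the page.
print(f"{resource['title']}, by {resource['creator']}.")

# Encoded text: what an aggregator can parse reliably.
for field, value in resource.items():
    print(f'<meta name="DC.{field}" content="{value}" />')
```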

You can read it on Scribd: Resource description requirements for EngSC OER project

[I should note that I work for the Engineering Subject Centre as well as CETIS and this work was not part of my CETIS work.]

It would be useful to know if other projects have produced anything similar…

Distribution platforms for OERs

One of the workpackages for CETIS’s support of the UKOER programme is:

Technical Guidelines–Services and Applications Inventory and Guidance:
Checklist and notes to support projects in selecting appropriate publication/distribution applications and services with some worked examples (or recommendations).
Output: set of wiki pages based on content type and identifying relevant platforms, formats, standards, ipr issues, etc.

I’ve made a start on this here, in a way which I hope will combine the three elements mentioned in the workpackage:

  1. An inventory of host platforms by resource type: which platforms are being used for which media or resource types?
  2. A checklist of technical factors that projects should consider in their choice of platform.
  3. Further information and guidance for some of the host platforms; essentially, that’s the checklist filled in.

In keeping with the nature of this phase of the UKOER programme as a pilot, we’re trying not to be prescriptive about the type of platform projects will use. Specifically, we’re not assuming that they will use standard repository software and are encouraging projects to explore and share any information about the suitability of web2.0 social sharing sites. At the moment the inventory is pretty biased to these web2.0 sites, but that’s just a reflection of where I think new information is required.

How you can help

Feedback
Any feedback on the direction of this work would be welcome. Are there any media types I’m not considering that I should? Are the factors being considered in the checklist the right ones? Is the level of detail sufficient? Where are the errors?

Information
I want to focus on the platforms that are actually being used, so it would be helpful to know which these are. Also, I know from talking to some of you that there is invaluable experience of using some of these services: for example, some APIs are better documented than others, some offer better functionality than others, and some have limitations that aren’t apparent until you try to use them seriously. It would be great to capture this in-depth information; there is space in the entry for each platform for these “notes and comments”.

Contributions
The more entries are filled out the better, but there’s a limit on what I can do, so all contributions would be welcome. In particular, I know that iTunes/iTunesU is important for audio/video podcasting, but I don’t have access myself, as it seems to require some sort of plug-in called “iTunes” ;-) so if anyone can help with that I would be especially grateful.

Depending on how you feel, you can help by emailing me (philb@icbl.hw.ac.uk), or by registering on the CETIS wiki and either using the article talk page (please sign your comments) or editing the article itself. Anything you write is likely to be distributed under a Creative Commons cc-by-nc licence.