Sharing service information?

Over the past few weeks the question of how to find service endpoints keeps coming up in conversation (I know, that says a lot about the sort of conversations I have). For example, we have been asked whether we can provide information about where the RSS feed locations are for the services/collections created by all the UKOER projects. I would generalise this to service endpoints, by which I mean things like the base URL for OAI-PMH, RSS/ATOM feed locations, or SRU target locations: more generally, the locations of the web API or protocol implementations that provide machine-to-machine interoperability. It seems that these are often harder to find than they should be, and I would like to recommend one approach, and suggest another, to help make them easier to find.

The approach I would like to recommend to those who provide service endpoints, i.e. those of you who have a web-based service (e.g. a repository or OER collection) that supports machine-to-machine interoperability (e.g. for metadata dissemination, remote search, or remote upload), is the one taken by web 2.0 hosts. Most of these have a reasonably easy-to-find section of their website devoted to documenting their API and providing “how-to” information about what can be done with it, with examples you can follow, and the best of them with simple step-by-step instructions. Here’s a quick list by way of providing examples:
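To give a flavour of what such “how-to” documentation enables, here is a minimal sketch of a client consuming an RSS 2.0 feed of the kind an OER collection might expose. The feed content and URLs are hypothetical; in practice you would fetch the XML from the feed location given in the provider’s API documentation.

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 feed; a real client would retrieve this
# from the feed URL published in the provider's API documentation.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example OER collection</title>
    <item>
      <title>Introductory statistics slides</title>
      <link>http://example.org/oer/1</link>
    </item>
    <item>
      <title>Linear algebra worksheet</title>
      <link>http://example.org/oer/2</link>
    </item>
  </channel>
</rss>"""

def items_from_rss(rss_xml):
    """Return (title, link) pairs for each item in an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in items_from_rss(SAMPLE_RSS):
    print(title, "->", link)
```

The point of the step-by-step documentation is that a developer can get from “here is the feed location” to something like the above in minutes, without reverse-engineering the site.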

I’ll mention Xpert Labs as well because, while the “labs” or “backstage” approach in general isn’t quite what I mean by simple “how-to” information, it looks like Xpert are heading that way and “labs” sums up the experimental nature of what they provide.

That helps people wanting to interoperate with services and sites they already know about, but it raises a more fundamental question: how do you find those services in the first place? How, for example, do you find all those collections of OERs? Well, some interested third party could build a registry for you, but that is extra effort for someone who is neither providing nor using the data/service/API. Furthermore, once the information is in the registry it’s dead, or at least at risk of death. What I mean is that there is little contact between the service provider and the service registry: the provider doesn’t really rely on the registry for people to use their services, and the registry doesn’t actually use the information it stores. Thus it’s easy for the provider to forget to tell the registry when the information changes, and if it does change there is little chance of the registry maintainer noticing.

So my suggestion is that those who are building aggregation services based on interoperating with various other sites should provide access to information about the endpoints they use. An example of this working is the JournalToCs service, which is an RSS aggregator for research journal tables of contents but which has an API that lets you find information about the journals it knows about (JOPML showed the way here, taking information from a JISC project that spawned JournalToCs and passing on lists of RSS feeds as OPML). Hopefully this approach, of endpoint users providing information about what they use, would surface only information that actually works and is useful (at least for them).
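The “lists of RSS feeds as OPML” idea can be sketched in a few lines. The OPML document below is a made-up example of the sort of thing an aggregator might expose about the feeds it uses; a real list would come from the aggregator’s own API.

```python
import xml.etree.ElementTree as ET

# A hypothetical OPML list of feed endpoints, of the kind an aggregator
# could publish about the sources it actually consumes.
SAMPLE_OPML = """<?xml version="1.0"?>
<opml version="2.0">
  <body>
    <outline text="Journal of Examples"
             type="rss" xmlUrl="http://example.org/joe/rss"/>
    <outline text="Example Studies Quarterly"
             type="rss" xmlUrl="http://example.org/esq/rss"/>
  </body>
</opml>"""

def feed_urls(opml_xml):
    """Return the xmlUrl of every rss-type outline in an OPML document."""
    root = ET.fromstring(opml_xml)
    return [o.get("xmlUrl") for o in root.iter("outline")
            if o.get("type") == "rss"]

print(feed_urls(SAMPLE_OPML))
```

Because the aggregator is itself consuming these feeds, a list published this way is self-checking in a way a stand-alone registry is not: a dead feed is a problem for the aggregator too, so it has an incentive to keep the list current.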

3 thoughts on “Sharing service information?”

  1. Pingback: Making OER visible and findable : Information Environment Team

  2. After JOPML I also did Ensemble, which does the same API-RSS-OPML type thing but for OER podcasts:

    This is fed by an OPML file exposed by Oxford, OU, Cambridge and others that lists their iTunesU podcast feeds. Other alternatives have been to have a “feed of feeds”.

    Btw, what about the IESR?

  3. Pingback: UKOER Sources | Phil's work blog
