A simpler sourcing maturity assessment approach

Knowing how to procure your IT services, software and hardware is a vital capability in any organisation. Assessing your maturity in this area can be complex, which is why SURF developed a simpler approach.

There are a number of perspectives to take on IT and its place in an organisation, but for further and higher education institutions, the procurement or sourcing of services – in the widest sense of the word ‘services’ – may be among the most important ones. With the ongoing move to cloud provisioning, determining where a particular service is going to come from and how it is managed is crucial.

A number of approaches to measure and improve an organisation’s maturity in this area exist, but, as Bert van Zomeren points out in the EUNIS paper that presents the SURF Sourcing Maturity Assessment Approach, these are quite complex. They can be so sophisticated that organisations hire consultancies to do it for them. The SURF method doesn’t go quite as deep as those exercises, but it is a much easier first step.

The heart of the approach is simple: a champion identifies the key stakeholders in the organisation with regard to the sourcing process, each of the stakeholders fills out the questionnaire, the results are analysed, the stakeholders meet, and appropriate adjustments to the process are agreed upon.

As in many of these approaches, the questions in the questionnaire describe an ideal situation, and respondents are asked to rate, on a scale, how closely they think their organisation resembles that ideal. Some of these ideals may be uncontroversial, but others may well provoke debate – adapting processes to suit services, rather than the other way round, for example. Still, such a debate can be a valuable input into the wider maturation process.

I’ve just translated the questionnaire into English, and it has been made available as a combination of a Google form and a spreadsheet. To test it yourself, you need to sign in to Google Drive, put the form and spreadsheet into your drive, and then make copies. The spreadsheet has two sheets: one that gathers the data and another that turns the data into a crude but extensible report.

It’d probably be a good idea to read van Zomeren and Levinson’s short EUNIS paper before you start. There is a much more extensive guide to the approach in Dutch as well, but we thought we’d gather some feedback first before translating that as well. A guide of that sort will almost certainly be necessary in order to use the simpler sourcing maturity assessment approach in anger at an institution.

Using standards to make assessment in e-textbooks scalable, engaging but robust

During last week’s EDUPUB workshop, I presented a demo of how an IMS QTI 2.1 question item could be embedded in an EPUB3 e-book in a way that is engaging, but also works across many e-book readers. Here’s the why and how.

One of the most immediately obvious differences between a regular book and an e-textbook is the inclusion of little quizzes at the end of a chapter that allow the learner to check their understanding of what they’ve just learned. Formative assessment matters in textbooks.

When moving to electronic textbooks, there is a great opportunity to make that assessment more interactive, provide richer feedback, and connect the learning to a wider view of how a student is doing (i.e. learning analytics). The question is how to do that in a way that works across many e-reading devices and applications, on a scale that works for publishers.

QTI item in Adobe Editions

Scalability is where interoperability standards like EPUB3, IMS Learning Tools Interoperability (LTI) and IMS Question and Test Interoperability (QTI) 2.1 come in. People use a large number of different software systems in the authoring, management, and playback of e-books. Connecting each of those to all the others with one-off custom integrations just gets too complex, too expensive and too brittle; that’s why an increasing number of publishers and software vendors agreed on the EPUB specification. As long as you implement that spec, solutions can scale across many e-book applications. IMS QTI does the same job for question and test material, and LTI does it for connecting VLEs to any online learning tool.

Which leaves the question of how to square the circle: making the assessment experience as engaging and effective as possible, while also working on devices with very different capabilities.

Fortunately, EPUB3 files can include a number of techniques that allow an author to adapt the content to the capability of the device it is being read on. I used those techniques to present the same QTI item in three different ways: as a static quiz (much like a printed book), as a simple interactive widget, and as a feedback-rich test run by an online assessment system inside the book. The latter option makes detailed analytics data available, and it should also make it possible to send a grade to a VLE automatically.

The how

QTI item in Apple iBooks

For the static representation and the interactive widget, I relied on Steve Lay’s rather brilliant transform from QTI XML to HTML5 (and back again), and on some JavaScript to make that HTML5 interactive. By including this QTI HTML5 in the EPUB, you get all the advantages of standard QTI in a way that still works in a simple, offline reader such as Adobe Editions, as well as in more capable software such as Apple’s iBooks.

For the most capable, online e-book readers such as Readium, the demo e-textbook connects to QTIWorks, an online QTI-compliant assessment engine. It does that via IMS LTI 1.1, but in a somewhat unusual way: in LTI terms, the e-book behaves as a tool consumer – that is, like a VLE. Using a hash of an OAuth secret and key, it establishes a connection to QTIWorks, identifies the user, and retrieves the right quiz to show inside the e-book. A place to send the results of the quiz to is also provided, but I’ve not tested that yet. QTIWorks makes a detailed report available of exactly what the learner did with each item, which can be retrieved in a variety of machine-readable formats.
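
For the terminally curious, here is roughly what that launch looks like in code. The demo e-book does this in JavaScript; the sketch below is a Python rendering of the same OAuth 1.0a signing dance, and the launch URL, key, secret and user details are placeholders rather than QTIWorks’ real values.

    # Illustrative sketch of an LTI 1.1 launch signed with OAuth 1.0a HMAC-SHA1.
    # The launch URL, key, secret and user details are placeholders.
    import base64, hashlib, hmac, time, uuid
    from urllib.parse import quote, urlencode
    import urllib.request

    LAUNCH_URL = "https://qtiworks.example.org/lti/launch"   # placeholder endpoint
    CONSUMER_KEY = "ebook-demo-key"                          # placeholder
    CONSUMER_SECRET = "ebook-demo-secret"                    # placeholder, embedded in the book

    params = {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "chapter3-quiz1",    # which quiz to show
        "user_id": "reader-42",                  # who is reading
        "roles": "Learner",
        "oauth_consumer_key": CONSUMER_KEY,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_timestamp": str(int(time.time())),
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_version": "1.0",
    }

    def rfc3986(value):
        # Percent-encode as OAuth 1.0a requires
        return quote(str(value), safe="~")

    # Signature base string: method & encoded URL & encoded, sorted parameters
    param_string = "&".join(f"{rfc3986(k)}={rfc3986(v)}" for k, v in sorted(params.items()))
    base_string = "&".join(["POST", rfc3986(LAUNCH_URL), rfc3986(param_string)])
    signing_key = rfc3986(CONSUMER_SECRET) + "&"   # no token secret in an LTI launch
    digest = hmac.new(signing_key.encode(), base_string.encode(), hashlib.sha1).digest()
    params["oauth_signature"] = base64.b64encode(digest).decode()

    # POST the signed form; the tool provider verifies the signature and
    # responds with the assessment content for this user and resource link.
    request = urllib.request.Request(LAUNCH_URL, data=urlencode(params).encode())
    print(urllib.request.urlopen(request).status)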

QTI item in Readium

Because the secret and the key have to be included in the book, the LTI connection the book establishes is not as secure as an LTI connection from a proper VLE. For access to some formative assessment, that may be a price worth paying, though.

The demo EPUB3 uses both scripting and some metadata to determine which version of the QTI item to show. The QTI item, the LTI launch and the EPUB textbook are all valid according to their specifications, and rely on stock readers to work.

Acknowledgements and links

David McKain for making QTIWorks
Steve Lay for the QTI HTML transforms
John Kristian of the OAuth project for the OAuth javascript library
Stephen Vickers for the ceLTIc IMS LTI development tools

The (ugly, content-less) demonstration EPUB3 and associated code are available from GitHub.

QTI 2.1 spec release helps spur over £250m of investment worldwide

With the QTI 2.1 specification finalised and released, we’re seeing significant global investment in tools that implement the spec. Tools developed by JISC projects have been central.

It has taken a while, but in March this year IMS Question and Test Interoperability 2.1 was released as a final specification. That means that people can implement it, secure in the knowledge that it won’t change or disappear, even if there are likely to be future versions.

The release, not coincidentally, comes at a time when there is a lot of activity around the use of the specification across the world. This level of investment isn’t just due to a set of documents on a website; it is also due to the range of working implementations that demonstrate how QTI 2.1 works, and that’s where a couple of Jisc projects play a crucial role. But let’s have a look at what people are doing with the spec around the world first.

The Netherlands

The biggest assessment project in the low lands at the moment is the effort to move all online school exams to the QTI 2.1 format. The multi-million euro effort is led by the Commissie voor Examens and managed by DUO, with the CITO exam body and Trifork as contractors. Because of the specific demands put upon the whole infrastructure, the partners will need an extensive profile.

Accompanying the formal exam profile is the NL-QTI effort led by Kennisnet. This pragmatic but relatively rich profile of the specification is meant to facilitate an ecosystem of material and software for general use in schools. We should see more of that profile in the near future.

Lastly, SURF is currently running the Assessment and Assessment Driven Learning programme in higher education, which will revolve around a sharable infrastructure for online assessment. Part of that programme will be an exploration of the extent to which such sharing can be facilitated by QTI 2.1.

Germany

The main player here is the Onyx suite from BPS. This complete assessment suite of editor, test player, analytics module and converter is built around QTI 2.1, and has been used standalone as well as integrated with the OLAT VLE. One instance of the latter, shared between all 13 universities in Saxony, has about 50,000 users and about 25,000 log-ins per day. Similar consortia exist in Thuringia and Rhineland-Palatinate, and there are further university-specific installations with a combined total of about 108,000 users. The hosted Onyx test player handles about 300 to 1,000 test runs a day.

France

The work in France is on a smaller scale, but it is mature and well targeted. The MOCAH team at UPMC, Paris 6, has developed a system in which QTI 2.1 source is transformed so that it can run on generic Java or PHP based web servers, as well as on specialised QTI players. The focus is on the teaching of math to secondary school students, and it has been used in 160 classes, where 400 patterns have been created. The latter are question item templates that generate large numbers of items for students to practice on; a key requirement.
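
The ‘pattern’ idea itself is easy to illustrate. The sketch below is emphatically not the MOCAH implementation, just a toy template in Python that spawns any number of practice items with known answers.

    # Toy illustration of the 'pattern' idea: one question template generating
    # many practice items with worked answers. Not the MOCAH implementation.
    import random

    TEMPLATE = "Solve for x:  {a}x + {b} = {c}"

    def generate_items(n, seed=0):
        rng = random.Random(seed)
        items = []
        for _ in range(n):
            a = rng.randint(2, 9)
            x = rng.randint(-10, 10)
            b = rng.randint(-20, 20)
            c = a * x + b            # choose c so the answer is a whole number
            items.append({"stem": TEMPLATE.format(a=a, b=b, c=c), "answer": x})
        return items

    for item in generate_items(3):
        print(item["stem"], "->", item["answer"])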

South Korea

After experiments in the past with, among other tools, QTI 2.1 generated from common word-processing tools, KERIS – the Korea Education and Research Information Service – is now engaging vendors in a project to integrate QTI 2.1 in EPUB 3 ebooks. Various options are being explored at the moment, with results due later this year.

USA

This is where the development-at-scale is taking place at the moment, thanks to the Race To The Top (RTTT) projects that were funded by the Obama administration. There are two state-led consortia – Smarter Balanced and PARCC – with a mission to overhaul the whole assessment infrastructure in schools, base it on open standards and open source software, and provide a tranche of new material to go with it. They had an initial budget of $160-170 million each, with about a third of those budgets intended for tool development. QTI 2.1, along with the Accessible Portable Item Protocol (APIP) extensions, is at the heart of the initiative.

The size of those consortia is having effects elsewhere too. One major educational publisher has already decided to standardise internally on QTI 2.1, and others are looking at the same option. Not that such a thing is new: organisations such as the Northwest Evaluation Association (NWEA) and the world’s largest testing organisation – ETS – have already chosen QTI 2.1 as their internal ‘lingua franca’. Rather than making many point-to-point integrations between their own systems and collections, and then having to do that again with each organisation they partner with, they translate each format to and from QTI.

UK

Meanwhile, back in the UK, JISC has sponsored a small community – most recently via the Assessment & Feedback programme – that has played a vital role in making QTI 2.1 real. ‘Real’ in the sense of checking whether and how the specification would work, as it was being designed, in the case of Jassess. ‘Real’ also in the sense of putting QTI 2.1 material in the hands of a range of teachers and learners, via editing tools such as Uniqurate and playback tools such as QTIWorks. An excellent RSC Scotland post outlines exactly how those outputs of the QTI-DI and Uniqurate projects work.

All of these UK projects’ tools, guidance and assessment materials are known to all the above communities, as well as to plenty of others I’ve not even mentioned. In some cases the JISC-sponsored tools have been extended by others; in other cases, the presence and online accessibility of the resources meant that those other communities knew what was possible, what their own tools and materials should look like, and how they could interoperate.

At this point, it’s not clear whether the new Jisc will support future work in this area. What is clear, however, is that JISC’s past investment will continue to have a global effect well beyond the initial outlay.

What could a GPS for learner journeys look like?

Last weekend, a motley crew of designers, students, developers, business and government people came together in Edinburgh to prototype designs and apps to help learners manage their journeys. With help, I built a prototype that showed how curriculum and course offering data can be combined with e-portfolios to help learners find their way.

The first official Scottish government data jam, facilitated by Snook and supported by TechCube, is part of a wider project to help people navigate the various education and employment options in life, particularly post-16. The jam was meant to provide a way to quickly prototype a wide range of ideas around the learner journey theme.

While many other teams at the jam built things like a prototype social network, or great visualisations to help guide learners through their options, we decided to use the data that was provided to help see what an infrastructure could look like that supported the apps the others were building.

In a nutshell, I wanted to see whether a mash-up of open data in open standard formats could help answer questions like:

  • Where is the learner in their journey?
  • Where can we suggest they go next?
  • What can help them get there?
  • Who can help or inspire them?

Here’s a slide deck that outlines the results. For those interested in the nuts and bolts, read on to learn more about how we got there.

Where is the learner?

To show how you can map where someone is on their learning journey, I made up an e-portfolio. Following an excellent suggestion by Lizzy Brotherstone of the Scottish Government, I nicked a story about ‘Ryan’ from an Education Scotland website on learner journeys. I recorded his journey in a Mahara e-portfolio, because it outputs data in the standard LEAP2a format – I could have used PebblePad as well for the same reason.

I then transformed the LEAP2a XML into very rough but usable RDF using a basic stylesheet I made earlier. Why RDF? Because it makes it easy for me to mash up the portfolios with other datasets; other data formats would also work. The made-up curriculum identifiers were added manually to the RDF, but could easily have been taken from the LEAP2a XML with a bit more time.
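
To give an idea of the shape of that transform: the real thing was an XSLT stylesheet, but the rough Python sketch below does the same kind of job. The Atom elements are what LEAP2a builds on, while the predicates and the curriculum identifier are made up for illustration rather than taken from any real vocabulary.

    # Rough sketch of turning a LEAP2a (Atom-based) portfolio export into RDF.
    # The predicate URIs and the curriculum identifier are illustrative
    # assumptions, not real LEAP2a or SQA vocabularies.
    import xml.etree.ElementTree as ET
    from rdflib import Graph, Literal, Namespace, URIRef

    ATOM = {"atom": "http://www.w3.org/2005/Atom"}
    EX = Namespace("http://example.org/learnerjourney/")    # made-up predicates

    g = Graph()
    portfolio = ET.parse("ryan-leap2a.xml").getroot()        # placeholder export file
    learner = URIRef("mailto:ryan@example.org")              # e-mail as the identifier

    for entry in portfolio.findall("atom:entry", ATOM):
        title = entry.findtext("atom:title", default="", namespaces=ATOM)
        entry_id = entry.findtext("atom:id", default="", namespaces=ATOM)
        achievement = URIRef(entry_id) if entry_id else EX[title.replace(" ", "_")]
        g.add((learner, EX.hasAchievement, achievement))
        g.add((achievement, EX.title, Literal(title)))
        # The made-up SQA curriculum identifier was added by hand at the jam;
        # here it is just a placeholder triple.
        g.add((achievement, EX.curriculumId, EX["sqa/placeholder-code"]))

    print(g.serialize(format="turtle"))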

Where can we suggest they go next?

I expected that the Curriculum for Excellence would provide the basic structure to guide Ryan from his school qualifications to a college course. Not so, or at least, not entirely. The Scottish Qualifications Framework gives a good idea of how courses relate in terms of levels (i.e. from basic to a PhD and everything in between), but there’s little to join subjects together. After a day of head scratching, I decided to match courses to Ryan’s qualifications by level and by comparing the text of the titles. We ought to be able to do better than that!

The course data set that was provided to us was a mixture of course descriptions from the Scottish Qualifications Authority and actual running courses offered by Scottish colleges, all in one CSV file. During the jam, Devon Walshe of TechCube made a very comprehensive data set of all courses that you should check out, but it came too late for me. I also had a brief look at using XCRI feeds like the ones from Adam Smith College, but went with the original CSV in the end. I tried using LOD Refine to convert the CSV to RDF, but it got stuck on editing the RDF harness for some reason. Fortunately, the main OpenRefine version of the same tool worked its usual magic, and four made-up SQA URIs later, we were in business.

This query takes Ryan’s email address as a unique identifier, then finds his qualification subjects and level. That’s compared to all courses in the data jam course data set, and whittled down to those that match Ryan’s qualifications and are above the level he already has.
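
For the record, the query looked roughly like the sketch below. The endpoint URL, property names and level encoding are illustrative stand-ins for the made-up vocabulary we used at the jam, but the shape is the same: find the qualifications, then filter the courses by level and a crude text match.

    # The shape of the course-matching query, run against a local SPARQL endpoint.
    # Endpoint URL, property names and level encoding are illustrative assumptions.
    from SPARQLWrapper import SPARQLWrapper, JSON

    query = """
    PREFIX ex: <http://example.org/learnerjourney/>
    SELECT DISTINCT ?course ?courseTitle WHERE {
        ?learner ex:email "ryan@example.org" ;
                 ex:hasAchievement ?qualification .
        ?qualification ex:subject ?subject ;
                       ex:level ?level .
        ?course ex:title ?courseTitle ;
                ex:level ?courseLevel .
        # Above the level Ryan already has, plus a crude title/subject text match
        FILTER (?courseLevel > ?level)
        FILTER (CONTAINS(LCASE(?courseTitle), LCASE(?subject)))
    }
    """

    endpoint = SPARQLWrapper("http://localhost:8080/sparql")   # placeholder endpoint
    endpoint.setQuery(query)
    endpoint.setReturnFormat(JSON)
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["courseTitle"]["value"])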

The result: too many hits, including ones that are in subjects that he’s unlikely to be interested in.

So let’s throw in his interests as well. Result: two courses that are ideal for Ryan’s skills, but a little above his level. So we then find all the sensible courses that can take him to that goal.

What can help them get there?

One other quirk of the Curriculum for Excellence appears to be that there are subject taxonomies, but they differ per level. Intralect implemented a very nice one that can be used to tag resources up to level 3 (we think). So Intralect’s Janek exported the vocabulary in two CSV files, which I imported into my triple store. He then built a little web service in a few hours that takes the outcome of this query and returns a list of all relevant resources in the Intralibrary digital repository for stuff that Ryan has already learned, but may want to revisit.

Who can help or inspire them?

It’s always easier to have someone along for the journey, or to ask someone who’s been there before you. That’s why I made a second e-portfolio, for Paula. Paula is a year older than Ryan, is from a different but nearby school, and has done the same qualifications. She’s picked the qualification we suggested to Ryan as her own goal, and has entered it as such in her e-portfolio. Ryan can get in touch with her over email.

This query takes the course suggested to Ryan, matches it to someone else’s stated academic goal, and reports on what she’s done, what school she’s from, and her contact details.
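
Again only the shape, with made-up property names and a placeholder course URI: find another learner whose stated goal is the course we suggested to Ryan, and pull out who and where she is.

    # Peer-matching query sketch; property names and URIs are illustrative.
    from SPARQLWrapper import SPARQLWrapper, JSON

    peer_query = """
    PREFIX ex: <http://example.org/learnerjourney/>
    SELECT ?peer ?peerEmail ?school WHERE {
        ?peer ex:hasGoal <http://example.org/courses/suggested-course> ;
              ex:email ?peerEmail ;
              ex:school ?school .
    }
    """

    endpoint = SPARQLWrapper("http://localhost:8080/sparql")   # placeholder endpoint
    endpoint.setQuery(peer_query)
    endpoint.setReturnFormat(JSON)
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["peerEmail"]["value"], "-", row["school"]["value"])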

Conclusion

For those parts of the Curriculum for Excellence for which experiences and outcomes have been defined, it’d be very easy to be very precise about progression, future options, and what resources would be particularly helpful for a particular learner at a particular part of the journey. For the crucial post-16 years, this is not really possible in the same way right now, though it’s arguable that it’s all the more important to have solid guidance at that stage.

Some judicious information architecture would make a lot more possible without necessarily changing the syllabus across the board. Just a model that connects subject areas across levels, and across school and college tracks, would make more robust learner journey guidance possible. Statements that clarify which courses are absolute pre-requisites for others, and which are suggested as likely or preferable, would make it better still.
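
To make that a little more concrete, here is a minimal sketch of what such a connecting model could look like, expressed in SKOS. The subjects, schemes and the mapping itself are invented purely for illustration; the point is that a handful of cross-level, cross-track links is all the information architecture that better guidance queries would need.

    # Minimal sketch of a connecting model in SKOS. The subject URIs, schemes
    # and mappings are invented for illustration only.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    SCHOOL = Namespace("http://example.org/subjects/school/")
    COLLEGE = Namespace("http://example.org/subjects/college/")

    g = Graph()

    # Two concept schemes: school subjects and college subjects
    for scheme in (SCHOOL["scheme"], COLLEGE["scheme"]):
        g.add((scheme, RDF.type, SKOS.ConceptScheme))

    # A school subject at one level, sitting under a broader subject area
    g.add((SCHOOL["physics-nat5"], RDF.type, SKOS.Concept))
    g.add((SCHOOL["physics-nat5"], SKOS.prefLabel, Literal("Physics (National 5)", lang="en")))
    g.add((SCHOOL["physics-nat5"], SKOS.broader, SCHOOL["sciences"]))

    # A related college subject on the other track
    g.add((COLLEGE["engineering-systems"], RDF.type, SKOS.Concept))
    g.add((COLLEGE["engineering-systems"], SKOS.prefLabel, Literal("Engineering Systems", lang="en")))

    # The cross-track, cross-level link that would make guidance queries possible
    g.add((SCHOOL["physics-nat5"], SKOS.relatedMatch, COLLEGE["engineering-systems"]))

    print(g.serialize(format="turtle"))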

We have the beginnings of a map for learner journeys, but we’re not there yet.

Other than that, I think agreed identifiers and data formats for curriculum parts, electronic portfolios or transcripts and course offerings can enable a whole range of powerful apps of the type that others at the data jam built, and more. Thanks to standards, we can do that without having to rely on a single source of truth or a massive system that is a single point of failure.

Find out all about the other great hacks on the learner journey data jam website.

All the data and bits of code I used are available on GitHub.

Assessment & Feedback tool development lessons

With most software development projects in the JISC Assessment & Feedback programme drawing to a close, it’s a good time to look at some common themes in their findings.

There’s a small but perfectly formed little cluster of four projects in ‘strand C’ of the Assessment & Feedback programme. Strand C is the techy corner, because it is these projects that took existing open source tools and adapted them for use in organisations beyond the ones they were developed in.

Within the strand, the tools that were being developed were:

  • Rogō, a complete assessment authoring, playback and management system, developed by the eponymous project at Nottingham University, and deployed in three other institutions
  • OpenMentor, a system that analyses tutor feedback on assignments, developed at the OU, now deployed in two other institutions by the OMTetra project
  • QTIWorks, a full-featured, QTI compliant assessment and test player, developed at Edinburgh University, now deployed by the QTI-DI project
  • Uniqurate, an online, QTI compliant assessment and test authoring tool developed at Kingston University by the eponymous project, and coupled to QTIWorks

Looking through their development experiences, there are a couple of themes that seem to recur:

User interface complexity

What to do when one set of users needs something simple, and another set wants full access to all functions? The clearest example of that dilemma was presented to the Uniqurate project: there was an existing assessment item editor called Mathqurate that gave access to all aspects of many different question types, but was only really usable by experts, and an earlier version of Uniqurate that was very friendly, but also very limited. Which is why the current project aimed to become the “goldilocks editor” by offering a flexible but easily graspable set of item type modules, and also by offering different modes that are accessible to more intrepid users.

The most advanced of these modes gives the user access to the QTI source code of a question, which is something that is also available in QTIWorks. Another, arguably more important, simple-versus-complex user interface issue that QTIWorks has to deal with is how to show runtime variables. For authors, this is vital, but for candidates it is rather confusing and often assessment-defeating. The solution? Like Uniqurate: different modes for different audiences.

In OpenMentor, the audience is broadly the same – tutors – but some wanted to know what’s going on in the ‘black box’ that takes their feedback on assignments and categorises it into a well-known taxonomy, while others were just happy with the results. The likely solution is also to include an advanced mode in a future version of the tool.

Interoperating with other systems

Or: how do I get user information in my tool without asking those users to type it all in?

OpenMentor and Rogō went down the LDAP route, given that it is the most common way to distribute person information inside organisations. It worked for these tools too, though Rogō had to spend quite some time at one of the new sites to adapt the LDAP-to-Rogō mapping. Some assembly may be required, in other words.
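
For what it’s worth, the sketch below shows the kind of lookup-and-map step involved, using the ldap3 Python library. The server, bind DN, base DN and attribute names are assumptions that will differ per institution, which is exactly where the assembly comes in.

    # Rough sketch of the LDAP route: look a user up and map directory attributes
    # onto the fields an assessment tool needs. Server, credentials, base DN and
    # attribute names are assumptions; every institution's directory differs.
    from ldap3 import ALL, Connection, Server

    server = Server("ldap.example.ac.uk", get_info=ALL)
    conn = Connection(server, "cn=reader,dc=example,dc=ac,dc=uk", "password", auto_bind=True)

    conn.search(
        search_base="ou=people,dc=example,dc=ac,dc=uk",
        search_filter="(uid=jbloggs)",
        attributes=["uid", "cn", "mail", "eduPersonAffiliation"],
    )

    # The mapping step: directory attribute -> assessment tool field
    for entry in conn.entries:
        user = {
            "username": str(entry.uid),
            "full_name": str(entry.cn),
            "email": str(entry.mail),
            "role": str(entry.eduPersonAffiliation),   # e.g. 'staff' or 'student'
        }
        print(user)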

Rogō and QTIWorks also implemented the much newer IMS Learning Tools Interoperability (LTI) specification. This specification is designed to allow more ad hoc connections between a VLE and tools such as those from the Assessment & Feedback programme. LTI is intended primarily to identify users, but it can also be used to move some user information from one system to another, particularly when those systems may be in different organisations. This function is still evolving, though, as Rogō found when they looked for an external examiner role within LTI: the role wasn’t there when they implemented the spec, but LTI supports it now.

Fostering a community

Because all four projects are open source, and because they were all meant to facilitate wider adoption, community building with users and other developers was paramount. It’s not easy, though.

Uniqurate noticed this particularly with regard to the use of agile software methodologies, as outlined in their last blogpost. Agile is generally advocated because it makes sure development happens in small steps that track what users actually want. Except that the users in this case were very busy academics who were enthusiastic, but rarely available during term time. And a project is too short to easily work around that. Conclusion: sometimes other methodologies may work better.

The OMtetra project used workshops and surveys to engage their user community, which did work. Developer engagement might be a slightly different matter, however: there are three different public code repositories for OpenMentor, of different degrees of currency. The branch developed during this project is the slightly, rather than the very, stale one. Whether all the developments have made it through to the latest branch is not clear. It is still actively developed, however, and that’s the main thing.

For QTIWorks, the code and documentation are clearer, and with success: the code has been adopted by developers on one of the very large Race To The Top assessment projects in the United States. It has been used there to prototype some potentially revolutionary new functionality in interoperable assessment material, which is likely to become part of the QTI specification itself. Part of the success may also be due to the fact that, like Uniqurate, a demo version of QTIWorks is available online.

Both QTIWorks and Uniqurate have, however, been used for teaching and learning on a relatively limited scale compared to Rogō. As the Rogō project discovered, that can be a mixed blessing. Once courses start to rely on a system, the demand for support of all kinds increases exponentially – and that’s before Rogō is being used widely for summative assessment. Sound user and installation documentation helps, but doesn’t resolve all the issues that other organisations may need help with, whether there’s a support business model in place or not. Also, the demands of other organisations inevitably lead to tensions with the priorities of the original developers. That’s manageable, but it requires thought and ongoing commitment.

Conclusion

It is a bit difficult to generalise across these four projects, much less across all open source software developments at universities. Yet it seems fairly clear that the main issue is community building: once the right number and mix of partners are on board, the other issues become more tractable. Fostering such communities is difficult, but it is something that an organisation like OSS Watch can help with, as Rogō has already been doing.

Doing analytics with open source linked data tools

Like most places, the University of Bolton keeps its data in many stores. That’s inevitable with multiple systems, but it makes getting a complete picture of courses and students difficult. We test an approach that promises to integrate all this data, and some more, quickly and cheaply.

Integrating a load of data in a specialised tool or data warehouse is not new, and many institutions have been using them for a while. What Bolton is trying in its JISC sponsored course data project is to see whether such a warehouse can be built out of Linked Data components. Using such tools promises three major advantages over existing data warehouse technology:

It expects data to be messy, and it expects it to change. As a consequence, adding new data sources, or coping with changes in data sources, or generating new reports or queries should not be a big deal. There are no schemas to break, so no major re-engineering required.

It is built on the same technology as the emergent web of data. Which means that increasing numbers of datasets – particularly from the UK government – should be easily thrown into the mix to answer bigger questions, and public excerpts from Bolton’s data should be easy to contribute back.

It is standards based. At every step from extracting the data, transforming it and loading it to querying, analysing and visualising it, there’s a choice of open and closed source tools. If one turns out not to be up to the job, we should be able to slot another in.

But we did spend a day kicking the tires and making some initial choices. Since the project is just to pilot a Linked Enterprise Data (LED) approach, we’ve limited ourselves to evaluating open source tools. We know there are plenty of good closed source options in any of the following areas, but we’re going to test the whole approach before committing to license fees.

Data sources

Before we can mash, query and visualise, we need to do some data extraction from the sources, and we’ve come down on two tools for that: Google Refine and D2RQ. They do slightly different jobs.

Refine is Google’s power tool for anyone who has to deal with malformed data, or who just wants to transform or excerpt data from one format to another. It takes in CSV or output from a range of APIs, and puts it in table form. In that table form, you can perform a wide range of transformations on the data, and then export it in a range of formats. The plug-in from DERI Galway allows you to specify exactly how the RDF – the linked data format, and heart of the approach – should look when exported.

What Refine doesn’t really do (yet?) is transform data automatically, as a piece of middleware. All your operations are saved as a script that can be re-applied, but it won’t re-apply the operations entirely automagically. D2RQ does do that, and works more like middleware.

Although I’ve known D2RQ for a couple of years, it still looks like magic to me: you download and unzip it, then tell it where your common or garden relational database is, and what username and password it can use to get in. It’ll go off, inspect the contents of the database, and come back with a mapping of the contents to RDF. Then start the server that comes with it, and the relational database can be browsed and queried like any other Linked Data source.

Since practically all relevant data in Bolton are in a range of relational databases, we’re expecting to use D2R to create RDF data dumps that will be imported into the data warehouse via a script. For a quick start, though, we’ve already made some transforms with Refine. We might also use scripts such as Oxford’s XCRI XML to RDF transform.

Storage, querying and visualisation

We expected to pick different tools for each of these functions, but ended up choosing one that does it all – after a fashion. Callimachus is designed specifically for rapid development of LED applications, and the standard download includes a version of the Sesame triplestore (or RDF database) for storage. Other triple stores can also be used with Callimachus, but Sesame was on the list anyway, so we’ll see how far that takes us.

Callimachus itself is more of a web application on top that allows quick visualisations of data excerpts – be they straight records of one dataset or a collection of data about one thing from multiple sets. The queries that power the Callimachus visualisations have limitations compared to the full power of SPARQL, the linked data query language, but are good enough to knock up some pages quickly. For the more involved visualisations, Callimachus’ SPARQL 1.1 implementation allows the results of a query to be put out as common or garden JSON, for which many different tools exist.
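
As an illustration of that last point, here is a small sketch of pulling SPARQL 1.1 results as JSON over plain HTTP. The endpoint path and the predicate in the query are placeholders, but the results format itself is the standard application/sparql-results+json that most charting tools can digest.

    # Fetch SPARQL 1.1 results as JSON over HTTP, ready for whatever charting
    # tool you prefer. Endpoint URL and predicate are placeholders.
    import json
    import urllib.parse
    import urllib.request

    ENDPOINT = "http://localhost:8080/callimachus/sparql"   # placeholder endpoint

    query = """
    SELECT ?course (COUNT(?student) AS ?enrolled) WHERE {
        ?student <http://example.org/terms/enrolledOn> ?course .   # made-up predicate
    } GROUP BY ?course
    """

    request = urllib.request.Request(
        ENDPOINT + "?" + urllib.parse.urlencode({"query": query}),
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(request) as response:
        results = json.load(response)

    for row in results["results"]["bindings"]:
        print(row["course"]["value"], row["enrolled"]["value"])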

Next steps

We’ve made some templates already that pull together course information from a variety of sources, on which I’ll report later. While that’s going on, the main other task will be to set up the processes of extracting data from the relational databases using D2R, and then loading it into Callimachus using timed scripts.

VLE commodification is complete as Blackboard starts supporting Moodle and Sakai

Unthinkable a couple of years ago, and it still feels a bit April 1st: Blackboard has taken over the Moodlerooms and NetSpot Moodle support companies in the US and Australia. Arguably as important is that they have also taken on Sakai and IMS luminary Charles Severance to head up Sakai development within Blackboard’s new Open Source Services department. The life of the Angel VLE Blackboard acquired a while ago has also been extended.

For those of us who saw Blackboard’s aggressive acquisition of commercial competitors WebCT and Angel, and the patent litigation it unleashed against Desire2Learn, the idea of Blackboard pledging to be a good open source citizen may seem a bit … unsettling, if not 1984ish.

But it has been clear for a while that Blackboard’s old strategy of ‘owning the market’ just wasn’t going to work. Whatever unique features Blackboard has over Moodle and Sakai, they aren’t enough to convince every institution to pay for the license. Choosing between VLEs was largely about price and service, not functionality. Even for those institutions where price and service were not an issue, many departments had their own, sometimes not entirely functional, reasons for sticking with one or another VLE that wasn’t Blackboard.

In other words, the VLE had become a commodity. Everyone needs one, and they are fairly predictable in their functionality, and there is not that much between them, much as I’ve outlined in the past.

So it seems Blackboard have wisely decided to switch focus from charging for IP to becoming a provider of learning tool services. As Blackboard’s George Kroner noted, “It does kinda feel like @Blackboard is becoming a services company a la IBM under Gerstner”.

And just as IBM has become quite a champion of Open Source Software, there is no reason to believe that Blackboard will be any different. Even if only because the projects will not go away, whatever they do to the support companies they have just taken over. Besides, ‘open’ matters to the education sector.

Interoperability

Blackboard had already abandoned extreme lock-in by investing quite a bit in open interoperability standards, mostly through the IMS specifications. That is, users of the latest versions of Blackboard can get their data, content and external tool connections out more easily than in the past – lock-in is no longer as much of a reason to stick with them.

Providing services across the vast majority of VLEs (outside of continental Europe at least) means that Blackboard has even more of an incentive to make interoperability work across them all. Dr Chuck Severance’s appointment also strongly hints at that.

This might need a bit of watching. Even though the very different codebases, and a vested interest in openness, mean that Blackboard-sponsored interoperability solutions – whether arrived at through IMS or not – are likely to be applicable to other tools, this is not guaranteed. There might be a temptation to cut corners to make things work quickly between just Blackboard Learn, Angel, Moodle 1.9/2.x and Sakai 2.x.

On the other hand, the more pressing interoperability problems are not so much between the commodified VLEs anymore, they are between VLEs and external learning tools and administrative systems. And making that work may just have become much easier.

The Blackboard press releases on Blackboard’s website.
Dr Chuck Severance’s post on his new role.

Question and Test tools demonstrate interoperability

As the QTI 2.1 specification gets ready for final release, and new communities start picking it up, conforming tools demonstrated their interoperability at the JISC – CETIS 2012 conference.

The latest version of the world’s only open computer aided assessment interoperability specification, IMS’ QTI 2.1, has been in public beta for some time. That was time well spent, because it allowed groups from at least eight nations across four continents to apply it to their assessment tools and practices, surface shortcomings with the spec, and fix them.

Nine of these groups came together at the JISC – CETIS conference in Nottingham this year to test a range of QTI packages with their tools, ranging from the very simple to the increasingly specialised. In the event, only three interoperability bugs were uncovered in the tools, and those are being vigorously stamped on right now.

Where it gets more complex is who supports what part of the specification. The simplest profile, provisionally called CC QTI, was supported by all players and some editors in the Nottingham bash. Beyond that, it’s a matter of particular communities matching their needs to particular features of the specification.

In the US, the Accessible Portable Item Protocol (APIP) group brings together a group of major test and tool vendors that are building a profile for summative testing in schools. Their major requirement is the ability to finely adjust the presentation of questions to learners with diverse needs, which they have accomplished by building an extension to QTI 2.1. The material also works in QTI tools that haven’t been built explicitly for APIP yet.

A similar group has sprung up in the Netherlands, where the goal is to define all computer aided high-stakes school testing in the country in QTI 2.1. That means that a fairly large infrastructure of authoring tools and players is being built at the moment. Since the testing material covers so many subjects and levels, there will be a series of profiles to cover them all.

An informal effort has also begun to define a numerate profile for higher education, which may yet be formalised. In practice, it already works in the tools made by the French MOCAH project and by the JISC Assessment and Feedback sponsored QTI-DI and Uniqurate projects.

For the rest of us, it’s likely that IMS will publish something very like the already proven CC QTI as the common core profile that comes with the specification.

More details about the tools that were demonstrated are available at the JISC – CETIS conference pages.

Approaches to building interoperability and their pros and cons

System A needs to talk to System B. Standards are the ideal to achieve that, but pragmatics often dictate otherwise. Let’s have a look at what approaches there are, and their pros and cons.

When I looked at the general area of interoperability a while ago, I observed that useful technology becomes ubiquitous and predictable enough over time for the interoperability problem to go away. The route to get to such commodification is largely down to which party – vendors, customers, domain representatives – is most powerful and what their interests are. Which describes the process very nicely, but doesn’t help solve the problem of connecting stuff now.

So I thought I’d try to list what the choices are, and what their main pros and cons are:

A priori, global
Also known as de jure standardisation. Experts, user representatives and possibly vendor representatives get together to codify whole or part of a service interface between systems that are emerging or don’t exist yet; it can concern either the syntax, semantics or transport of data. Intended to facilitate the building of innovative systems.
Pros:

  • Has the potential to save a lot of money and time in systems development
  • Facilitates easy, cheap integration
  • Facilitates structured management of network over time

Cons:

  • Viability depends on the business model of all relevant vendors
  • Fairly unlikely to fit either actually available data or integration needs very well

A priori, local
i.e. some type of Service Oriented Architecture (SOA). Local experts design an architecture that codifies syntax, semantics and operations into services. Usually built into agents that connect to each other via an enterprise service bus (ESB).
Pros:

  • Can be tuned for locally available data and to meet local needs
  • Facilitates structured management of network over time
  • Speeds up changes in the network (relative to ad hoc, local)

Cons:

  • Requires major and continuous governance effort
  • Requires upfront investment
  • Integration of a new system still takes time and effort

Ad hoc, local
Custom integration of whatever is on an institution’s network by the institution’s experts in order to solve a pressing problem. Usually built on top of existing systems using whichever technology is to hand.
Pros:

  • Solves the problem of the problem owner fastest in the here and now.
  • Results accurately reflect the data that is actually there, and the solutions that are really needed

Cons:

  • Non-transferable beyond local network
  • Needs to be redone every time something changes on the local network (considerable friction and cost for new integrations)
  • Can create hard to manage complexity

Ad hoc, global
Custom integration between two separate systems, done by one or both vendors. Usually built as a separate feature or piece of software on top of an existing system.
Pros:

  • Fast point-to-point integration
  • Reasonable to expect upgrades for future changes

Cons:

  • Depends on business relations between vendors
  • Increases vendor lock-in
  • Can create hard to manage complexity locally
  • May not meet all needs, particularly cross-system business intelligence (BI)

Post hoc, global
Also known as standardisation, consortium style. Service provider and consumer vendors get together to codify a whole service interface between existing systems; syntax, semantics, transport. The resulting specs usually get built into systems.
Pros:

  • Facilitates easy, cheap integration
  • Facilitates structured management of network over time

Cons:

  • Takes a long time to start, and is slow to adapt
  • Depends on business model of all relevant vendors
  • Liable to fit either available data or integration needs poorly

Clearly, no approach offers instant nirvana, but it does make me wonder whether there are ways of combining approaches such that we can connect short term gain with long term goals. I suspect if we could close-couple what we learn from ad hoc, local integration solutions to the design of post-hoc, global solutions, we could improve both approaches.

Let me know if I missed anything!

PROD; a practical case for Linked Data

Georgi wanted to know what problem Linked Data solves. Mapman wanted a list of all UK universities and colleges with postcodes. David needed a map of JISC Flexible Service Delivery projects that use Archimate. David Sherlock and I got mashing.

Linked Data, because of its association with Linked Open Data, is often presented as an altruistic activity, all about opening up public data and making it re-usable for the benefit of mankind, or at least of the tax-payers who facilitated its creation. Those are very valid reasons, but they tend to obscure the fact that there are some sound selfish reasons for getting into the web of data as well.

In our case, we have a database of JISC projects and their doings called PROD. It focusses on what JISC-CETIS focusses on: what technologies have been used by these projects, and what for. We also have some information on who was involved with the projects and where they worked, but it doesn’t go much beyond bare names.

In practice, many interesting questions require more information than that. David’s need to present a map of JISC Flexible Service Delivery projects that use Archimate is one of those.

This presents us with a dilemma: we can either keep adding more info to PROD, make ad-hoc mash-ups, or play in the Linked Data area.

The trouble with adding more data is that there is an unending amount of interesting data that we could add, if we had infinite resources to collect and maintain it. Which we don’t. Fortunately, other people make it their business to collect and publish such data, so you can usually string something together on the spot. That gets you far enough in many cases, but it is limited by having to start from scratch for virtually every mashup.

Which is where Linked Data comes in: it allows you to link into the information you want but don’t have.

For David’s question, the information we want is about the geographical position of institutions. Easily the best source for that info and much more besides is the dataset held by the JISC’s Monitoring Unit. Now this dataset is not available as Linked Data yet, but one other part of the Linked Data case is that it’s pretty easy to convert a wide variety of data into RDF. Especially when it is as nicely modelled as the JISC MU’s XML.

All universities with JISC Flexible Delivery projects that use Archimate. Click for the google map

Having done this once, answering David’s question was trivial. Not just that: answering Mapman’s interesting question about a list of UK universities and colleges with postcodes was a piece of cake too. That answer prompted Scott and Tony’s question on mapping UCAS and HESA codes, which was another five-second job. As was my idle wonder whether JISC projects led by Russell Group universities used Moodle more or less than those led by post ’92 institutions (answer: yes, they use it more).
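
To give a flavour, the Moodle question boils down to a query along the lines of the sketch below. The prefixes and property names are illustrative stand-ins rather than PROD’s actual vocabulary; the point is how little work the question takes once the data is linked.

    # Sketch of the Russell Group / Moodle question as a SPARQL query.
    # Prefixes and property names are illustrative, not PROD's real vocabulary.
    from SPARQLWrapper import SPARQLWrapper, JSON

    query = """
    PREFIX prod: <http://example.org/prod/terms/>
    PREFIX mu:   <http://example.org/jiscmu/terms/>
    SELECT ?project ?institution WHERE {
        ?project prod:leadInstitution ?institution ;
                 prod:usesTechnology ?tech .
        ?institution mu:missionGroup mu:RussellGroup .
        FILTER (REGEX(STR(?tech), "moodle", "i"))
    }
    """

    endpoint = SPARQLWrapper("http://localhost:8080/sparql")   # placeholder endpoint
    endpoint.setQuery(query)
    endpoint.setReturnFormat(JSON)
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["project"]["value"], "-", row["institution"]["value"])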

Russell group led JISC projects with and without Moodle

Post ’92 led JISC projects with or without Moodle

And it doesn’t need to stop there. I know about interesting datasets from UKOLN and OSSWatch that I’d like to link into. Links from the PROD data to the goodness of Freebase.com and dbpedia.org already exist, as do links to MIMAS’ names project. And each time such a link is made, everyone else (including ourselves!) can build on top of what has already been done.

This is not to say that creating Linked Data out of PROD was for free, nor that no effort is involved in linking datasets. It’s just that the effort seems less than with other technologies, and the return considerably more.

Linked Data also doesn’t make your data automatically usable in all circumstances, or bug free. David, for example, expected to see institutions on the map that do use the Archimate standard, but not necessarily as part of a JISC project. A valid point, and a potential improvement for the PROD dataset. It might also never have come to light if we hadn’t been able to slice and dice our data so readily.

An outline with recipes and ingredients is to follow.