My favourite twitter conversation from #cetis13…
I wish I had a picture of Scott in his zombie t-shirt!
The theme of this year’s CETIS conference was Open for Education: Technology Innovation in Universities and Colleges. As usual we had a wide and diverse range of sessions, but if there was one theme that underpinned them all it was: how can we sustain innovation in the face of the challenges currently facing the sector?
Sustainability was the explicit theme of the Open Practice and OER Sustainability session Phil and I ran. Three years of HEFCE UKOER funding came to an end last autumn and, while there’s no denying that the programmes produced a significant quantity of open educational resources, did they also succeed in changing practice and embedding open education innovation across the English HE sector? Judging by the number of speakers and participants at the session I think it’s fair to say that the answer is a resounding “Yes”. At least in the short term. Patrick McAndrew, who has been involved in organising this year’s OER13 conference, pointed out that while they expected a drop in numbers this year, as UKOER funding has ended and the event is not running in conjunction with OCWC, in actual fact numbers have risen significantly. Practice has changed and many institutions really are more aware of the potential and benefits of open educational resources and open educational practices. Though as several participants pointed out, MOOCs have rather eclipsed OERs over the last 12 months and the relationship between the two is ambiguous to say the least. As Amber Thomas put it: “MOOCs stole OER’s girlfriend”.
David Kernohan used the memorable image of a teddy bear lecturer playing happily on a seesaw with his friends and with lots of open educational resources and innovative technologies until all the money ran out and all that was left was the teddy bear and the resources. However I can’t help thinking that the real threat to OER sustainability is that the next thing to disappear might be the teddy bear, and after all it’s the teddy bears, or rather the people, that sustain communities of innovation and practice. With this in mind, there was some discussion of the importance of subject communities in sustaining innovative educational practice and Suzanne Hardy of Newcastle reminded us that Humbox, an excellent example of an innovative and sustainable development presented by Yvonne Howard of Southampton, was originally a collaboration between four HEA subject centres. The legacy of the subject centres is certainly still visible in the sector, however as many talented people have had to move into other roles and those that have managed to hang on are increasingly under threat, how much longer will the community of open educational innovation be able to sustain itself?
The latter half of Scott Wilson’s session on Open Innovation and Open Development also focused on sustainability and again the discussion circled round to how can we sustain the community of developers that drive innovation forward? It’s more years than I can recall since their demise, but the CETIS SIGS were put forward yet again as a good model for sustaining innovative communities of developers and practitioners. I also suggested that it was still possible to see the legacy of the SHEFC Use of the MANs Initiative in the sector as a surprising number of people still working in educational technology innovation first cut their teeth on UMI projects.
There was some discussion of the emergence of “boundary spanning people and blended professionals”, but also a fear that institutions are increasingly falling back on very traditional and strictly delineated professional roles. At a time when innovation is increasingly important, many institutions are shedding the very people who have been responsible for driving innovation forward in the sector. At the end of the session, Scott asked what one thing organisations such as Cetis and OSSwatch should do over the next six months to help sustain open innovation and open development. The answer that came back was Survive! Just survive, stay alive, keep the innovation going, don’t lose the people. The fact that Scott was wearing a zombie t-shirt while facilitating the session was verging on the poignant :}
Meanwhile, over in Martin Hawksey and David Sherlock’s Analytics and Institutional Capabilities session, Ranjit Sidhu of SiD was laying into all manner of institutional nonsense, including the sector-wide panic that followed clearing, the brutal reality of the competitive education market, the millions spent on Google advertising, the big data projects that are little more than a big waste of money and, last but not least, the KIS. Ranjit showed the following slide, which drew a collective murmur of horror, though not surprise, from the audience.
If you look carefully you’ll notice that the number of daily requests to Unistats for data is… 9. Yep. 9. It hasn’t even hit double figures. One colleague who was responsible for KIS returns recently estimated that the cost to their institution was in the region of a hundred thousand. Multiply that across the sector… Does anyone know what the total cost of the KIS has been? And the return on investment? As one participant commented in response to Ranjit’s presentation, KIS is not a tool for students, it’s a tool to beat VCs over the head with. I’ll leave you to draw your own conclusions…
I think it’s fair to say that a lot of us went to CETIS13 not knowing quite what to expect, and even fewer of us know what the future holds. Despite these uncertainties the conference had a noticeably positive vibe, which more than a few people remarked on over the course of the event. We’re all living in “interesting times”, but the brutal reality of the crisis facing HE has done little to dent people’s belief that sustaining open innovation, and the community of open innovators, is a fundamental necessity if the sector is to face these challenges. I certainly felt there was a real spirit of determination at CETIS13; here’s hoping it will see us through the “interesting times”.
The Cetis13 Conference is just days away and excitement is mounting to fever pitch. Or something. Sadly, if you haven’t already booked your place at the conference, you’ve missed the boat. Don’t despair though! You can still follow the fun on twitter, #cetis13, and this year we will also be streaming our two keynotes, “Digital Citizenship and Open Social”
by Josie Fraser and “The Path to Open Learning is Paved with Good Intentions” by Professor Patrick McAndrew. You can find the livestream here http://jisc.cetis.ac.uk/cetis13live
This year, for our sins, Phil and I are running the following session:
HEFCE funding of the HE Academy/JISC Open Educational Resources programme has come to an end, but this should not mean the end of UK OER. The emphasis of the programme was always on sustainable release of resources and change in culture and practice, not a one-off dumping of teaching materials. Through the programme we have seen changes in approaches to the management of learning resources, learned about how they can be disseminated openly, and embarked on new practices in Open Education that go well beyond (and occasionally do not even include) open access to learning materials.
In this session we will reflect on some of these changes and new approaches, with an emphasis on which are sustainable and how various technologies might help with sustainability. A good starting point for discussion would be “Technology for open educational resources - Into the wild” which reflects on several areas covered during the UK OER programme, though there are also many issues worth discussing that are not well covered in that book, for example management of the creation of OERs and practices in Open Education.
When Phil, Martin and I were initially planning this session we drew up a wish-list of people that we knew would be able to make a really thoughtful contribution to the debate. Based on the assumption that maybe only about half of our dream team would be able to participate, we e-mailed a dozen speakers and were <cliche>stunned and delighted</cliche> when almost everyone said yes! So we are now in the enviable position of having ten of the UK’s most challenging and thought provoking open education thinkers presenting in the space of just over three hours. Just look at our lineup….
We haven’t asked our presenters for titles in advance so I am looking forward to hearing everyone’s thoughts and perspectives on OER and sustainability. I think it’s fair to say that this line up should make for some lively discussions! Particularly as Suzanne has promised to deliver her presentation through the medium of interpretative dance, while David Kernohan will be favouring light operetta. At least that’s what they said on twitter, so it must be true, right? Oh, and Pat has threatened to do another video…. And all I have to do is chair the session and make sure no one talks for more than ten minutes. Easy? Wish me luck :}
Look forward to seeing you at #cetis13!
Thanks to Pat Lockley for drawing my attention to Reuters’ interesting take on inBloom, the US K-12 development that I blogged about a couple of weeks ago. You can find the article here: K-12 student database jazzes tech startups, spooks parents. Just in case you missed it, inBloom is a new technology integration initiative for the US schools’ sector launched by the Shared Learning Collective and funded by the Carnegie Corporation and the Bill and Melinda Gates Foundation. One of the aims of inBloom is to create a:
Secure data management service that allows states and districts to bring together and manage student and school data and connect it to learning tools used in classrooms.
I should confess that my interest in inBloom is purely on the technical side, as it builds on two core technologies that CETIS has had some involvement with: the Learning Registry and the Learning Resource Metadata Initiative. The Reuters article provides a rather different perspective on the development, however, describing the initiative as:
a $100 million database built to chart the academic paths of public school students from kindergarten through high school.
In operation just three months, the database already holds files on millions of children identified by name, address and sometimes social security number. Learning disabilities are documented, test scores recorded, attendance noted. In some cases, the database tracks student hobbies, career goals, attitudes toward school - even homework completion.
Local education officials retain legal control over their students’ information. But federal law allows them to share files in their portion of the database with private companies selling educational products and services.
When reported in these terms, it’s easy to understand why some parents have raised concerns about the initiative. The report goes on to say
Federal officials say the database project complies with privacy laws. Schools do not need parental consent to share student records with any “school official” who has a “legitimate educational interest,” according to the Department of Education. The department defines “school official” to include private companies hired by the school, so long as they use the data only for the purposes spelled out in their contracts.
The database also gives school administrators full control over student files, so they could choose to share test scores with a vendor but withhold social security numbers or disability records.
That’s hardly reassuring to many parents.
And for good measure they then quote a concerned parent saying
“Once this information gets out there, it’s going to be abused. There’s no doubt in my mind.”
Parents from New York, Louisiana, the Massachusetts chapters of the American Civil Liberties Union and Parent-Teacher Association have also written to state officials “in protest” with the help of a civil liberties attorney in New York.
To be fair to Reuters, it’s not all Fear, Uncertainty and Doubt; the article also puts forward some of the potential benefits of the development as well as the drawbacks and concerns. I certainly felt it was quite a balanced article that raised some valid issues.
It also clarified one issue that had rather puzzled me about TechCrunch’s original report on inBloom, which quoted Rupert Murdoch as saying:
“When it comes to K-12 education, we see a $500 billion sector in the U.S. alone that is waiting desperately to be transformed by big breakthroughs that extend the reach of great teaching.”
At the time I couldn’t see the connection between inBloom and Rupert Murdoch, and TechCrunch didn’t make it explicit; however Reuters explains that the inBloom technical infrastructure was built by Amplify Education, a division of Rupert Murdoch’s News Corp. That explains that then.
Those of you who have been following the CETIS Analytics Series will be aware that such concerns about privacy, anonymity and large scale data integration and analysis initiatives are nothing new, however I thought this was an interesting example of the phenomenon.
It’s also worth adding that, as the parent of a primary school age child, it has never once occurred to me to enquire what kind of data the school records, who that data is shared with and in what form. To be honest I am pretty philosophical about these things. However it is interesting that people have a tendency not to ask questions about their data until a big / new / evil / transformative (delete according to preference) technology development like this comes along. So what do you think? Is it all FUD? Or is it time to get our tin hats out?
I’m still very interested to see if inBloom’s technical infrastructure and core technologies are up to the job, so I’ll continue to watch these developments with interest. And you never know, if my itchy nose gets the better of me I might even ask around to find out what happens to pupil data on this side of the pond.
I had read the post the previous day and had already decided not to respond because tbh I just wouldn’t know where to begin.
However, since David is offering “a large drink of the author’s choice” as the prize for the best response, I have been persuaded to take up the challenge. Which just goes to show there’s no better way to motivate folk than by offering drink. (Mine’s a G&T David, or a red wine, possibly both, though not in the same glass.)
I am still at a loss to offer a serious critique of this article so in the best spirit of OER, I am going to recycle what everyone else has already said. Reuse FTW!
The article can basically be summarised as follows:
It’s 10 years since MIT launched OpenCourseWare. Since then OERs have FAILED because they have not transformed and disrupted higher education. List of reasons for their failure: discoverability, quality control, “The Last Mile”, acquisition. The solution to these problems is to build a “global enterprise-level system”, aka a “supersized CMS”. And look, here’s one I built earlier! It’s called LON-CAPA.
PS. “The entity that provides the marketplace, the service, and the support and keeps the whole enterprise moving forward is probably best implemented as a traditional company.”
I should point out that I am not familiar with LON-CAPA. I’m sure it’s a very good system as far as it goes, but I don’t think a “global enterprise-level system” is the answer to anything.
David Kernohan himself was quick off the mark when the article first started circulating, after tweeting a couple of its finer points:
“OERs have not noticeably disrupted the traditional business model of higher education”
“It is naïve to believe that OERs can be free for everybody involved.”
So the basic message of that paper is “OER IS BROKEN” and “NEED MOAR USER DATA”. Lovely.
Because, clearly, if we can’t measure the impact of something it is valueless.
Which is indeed a good point. Actually I think there are many ways you can measure the impact of OER but I’m not at all convinced that “disrupting traditional business models” is the only valid measure of success. After all, OER is just content + open licence at the end of the day. And we can’t expect content alone to change the world, can we?
This is the point that Pat Lockley was getting at when he tweeted:
My Blog will be coming soon “Why OER haven’t affected the growth of grass”
Facetious perhaps, but a very pertinent point. There has been so much hyperbole surrounding OER from certain quarters of the media that it’s all too easy to say “Ha! It’s all just a waste of money. OER will never change the world.” Well no, maybe not, but most right minded people never claimed it would. What we do have, though, is access to a lot more freely available (both gratis and libre), clearly licensed educational resources out there on the open web. Surely that can’t be a bad thing, can it? If nothing else, OER has increased educators’ awareness and understanding of the importance of clearly licensing the content they create and use, and that is definitely a good thing.
Pat also commented:
I’m just tired of OER being about “research into OER”. The cart is so far before the horse.
Which is another very valid point. I probably shouldn’t repeat Pat’s later tweet when he reached the end of the article and discovered that the author was pimping his own system. It involved axes and lumberjacking. Nuff said.
Jim Groom was similarly concise in his criticism:
“For content to be truly reusable and remixable, it needs to be context-free.” Problematic.
What’s the problem with OER ten years on? Metadata. Hmmm, maybe it is actually imagination, or lack thereof. #killoerdead
While I don’t always agree with Mr Groom, I certainly do agree that such a partial analysis lacks imagination.
As is so often the case, it was left to Amber Thomas to see past the superficial bad and wrongness of the article to get at the issues underneath.
“The right questions, patchy evidence base, wrong solutions. And I still think oer is a descriptor not a distinct content type.”
And as is also often the case, I agree with Amber wholeheartedly. There are actually many valid points lurking within this article but, honestly, it’s like the last ten years never happened. For example, discussing discoverability, which I agree can be problematic, the author suggests:
The solution for this problem could be surprisingly simple: dynamic metadata based on crowdsourcing. As educators identify and sequence content resources for their teaching venues, this information is stored alongside the resources, e.g., “this resource was used before this other resource in this context and in this course.” This usage-based dynamic metadata is gathered without any additional work for the educator or the author. The repository “learns” its content, and the next educator using the system gets recommendations based on other educators’ choices: “people who bought this also bought that.”
Yes! I agree!
Simple? No, currently impossible, because the deployment of a resource is usually disconnected from the repository: content is downloaded from a repository and uploaded into a course management system (CMS), where it is sequenced and deployed.
Erm…impossible? Really? Experimental maybe, difficult even, but impossible? No. Why no mention here of activity data, paradata, analytics? Like I said, it’s like the last ten years never happened.
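For what it’s worth, the “dynamic metadata” idea the author describes is not only possible, it’s straightforward to prototype. Here’s a minimal sketch in Python, assuming nothing more than co-occurrence counts over some hypothetical course sequences (the resource names and data are invented for illustration):

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical usage records: which resources each educator sequenced together.
# In the article's terms, each course contributes usage data as a side effect
# of normal use, with no additional tagging work for the educator.
course_sequences = [
    ["intro-video", "worksheet-1", "quiz-1"],
    ["intro-video", "worksheet-1", "lab-guide"],
    ["worksheet-1", "quiz-1"],
]

# Count how often each pair of resources is used in the same course.
co_use = defaultdict(Counter)
for seq in course_sequences:
    for a, b in combinations(set(seq), 2):
        co_use[a][b] += 1
        co_use[b][a] += 1

def recommend(resource, n=2):
    """'People who used this also used...' from co-occurrence counts."""
    return [r for r, _ in co_use[resource].most_common(n)]

print(recommend("intro-video"))  # worksheet-1 first: it co-occurs twice
```

A real service would obviously need activity data collected at scale, but the point stands: the analytics are the easy part; the hard part is the plumbing that gets deployment data back to the repository.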
Anyway I had better stop there before I say something unprofessional. One last comment though, Martin Hawksey pointed out this morning that there is not a single comment on the Educause website about this article, and asked:
Censorship? (That’s the danger of CMSs configured this way, someone else controls the information.)
I can’t comment on whether there has been censorship, but there has certainly been control. (Is there a difference? Discuss.) In order to comment on the Educause site you have to register, which I did yesterday afternoon and got a response informing me that it would take “several business hours” to approve my registration. I finally received the approval notification at nine o’clock at night, by which point I had better things to do with my time than comment on “global enterprise-level systems” and “supersized CMS”.
So there you have it David. Do I get that G&T?
ETA: The author of the article, Gerd Kortemeyer, may just have pipped us all to the G&T with a measured and considered defence of his post over at oer-discuss. While his e-mail provides some much needed context to the original article, particularly in terms of clarifying the specific type of educational institutions and usage scenarios he is referring to, many of the criticisms remain. It’s well worth reading Gerd’s response to the challenge here. Andy Lane has also written a very thoughtful and detailed critique of the article here, which I can highly recommend.
There have been a number of reports in the tech press this week about inBloom, a new technology integration initiative for the US schools’ sector launched by the Shared Learning Collective. inBloom is “a nonprofit provider of technology services aimed at connecting data, applications and people that work together to create better opportunities for students and educators,” and it’s backed by a cool $100 million of funding from the Carnegie Corporation and the Bill and Melinda Gates Foundation. In the press release, Iwan Streichenberger, CEO of inBloom Inc, is quoted as saying:
“Education technology and data need to work better together to fulfill their potential for students and teachers. Until now, tackling this problem has often been too expensive for states and districts, but inBloom is easing that burden and ushering in a new era of personalized learning.”
This initiative first came to my attention when Sheila circulated a TechCrunch article earlier in the week. Normally any article that quotes both Jeb Bush and Rupert Murdoch would have me running for the hills, but Sheila is made of sterner stuff and dug a bit deeper to find the inBloom Learning Standards Alignment whitepaper. And this is where things get interesting, because inBloom incorporates two core technologies that CETIS has had considerable involvement with over the last while: the Learning Registry and the Learning Resource Metadata Initiative, which Phil Barker has contributed to as co-author and Technical Working Group member.
I’m not going to attempt to summarise the entire technical architecture of inBloom, however the core components are:
- Data Store: Secure data management service that allows states and districts to bring together and manage student and school data and connect it to learning tools used in classrooms.
- APIs: Provide authorized applications and school data systems with access to the Data Store.
- Sandbox: A publicly-available testing version of the inBloom service where developers can test new applications with dummy data.
- inBloom Index: Provides valuable data about learning resources and learning objectives to inBloom-compatible applications.
- Optional Starter Apps: A handful of apps to get educators, content developers and system administrators started with inBloom, including a basic dashboard and data and content management tools.
Of the above components, it’s the inBloom index that is of most interest to me, as it appears to be a service built on top of a dedicated inBloom Learning Registry node, which in turn connects to the Learning Registry more widely as illustrated below.
According to the Standards Alignment whitepaper, the inBloom Index will work as follows (apologies for the long techy quote; it’s interesting, I promise you!):
The inBloom Index establishes a link between applications and learning resources by storing and cataloging resource descriptions, allowing the described resources to be located quickly by the users who seek them, based in part on the resources’ alignment with learning standards. (Note, in this context, learning standards refers to curriculum standards such as the Common Core.)
inBloom’s Learning Registry participant node listens to assertions published to the Learning Registry network, consolidating them in the inBloom Index for easy access by applications. The usefulness of the information collected depends upon content publishers, who must populate the Learning Registry with properly formatted and accurately “tagged” descriptions of their available resources. This information enables applications to discover the content most relevant to their users.
Content descriptions are introduced into the Learning Registry via “announcement” messages sent through a publishing node. Learning Registry nodes, including inBloom’s Learning Registry participant node, may keep the published learning resource descriptions in local data stores, for later recall. The registry will include metadata such as resource locations, LRMI-specified classification tags, and activity-related tags, as described in Section 3.1.
The inBloom Index has an API, called the Learning Object Dereferencing Service, which is used by inBloom technology-compatible applications to search for and retrieve learning object descriptions (of both objectives and resources). This interface provides a powerful vocabulary that supports expression of either precise or broad search parameters. It allows applications, and therefore users, to find resources that are most appropriate within a given context or expected usage.
inBloom’s Learning Registry participant node is peered with other Learning Registry nodes so that it can receive resource description publications, and filters out announcements received from the network that are not relevant.
In addition, it is expected that some inBloom technology-compatible applications, depending on their intended functionality, will contribute information to the Learning Registry network as a whole, and therefore indirectly feed useful data back into the inBloom Index. In this capacity, such applications would require the use of the Learning Registry participant node.
One reason that this is so interesting is that this is exactly the way that the Learning Registry was designed to work. It was always intended that the Learning Registry would provide a layer of “plumbing” to allow the data to flow, education providers would push any kind of data into the Learning Registry network and developers would create services built on top of it to process and expose the data in ways that are meaningful to their stakeholders. Phil and I have both written a number of blog posts on the potential of this approach for dealing with messy educational content data, but one of our reservations has been that this approach has never been tested at scale. If inBloom succeeds in implementing their proposed technical architecture it should address these reservations, however I can’t help noticing that, to some extent, this model is predicated on there being an existing network of Learning Registry nodes populated with a considerable volume of educational content data, and as far as I’m aware, that isn’t yet the case.
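To make that plumbing a little more concrete, here is a rough Python sketch of the kind of “resource_data” envelope a content publisher would announce through a Learning Registry publishing node. The field names follow my reading of the Learning Registry envelope format; treat the details, and the resource itself, as illustrative rather than definitive:

```python
import json

# A sketch of a Learning Registry "resource_data" envelope of the sort a
# publisher would POST to a node. Field names follow my reading of the
# envelope spec; the resource, submitter and alignment are invented.
envelope = {
    "doc_type": "resource_data",
    "doc_version": "0.23.0",
    "resource_data_type": "metadata",
    "active": True,
    "identity": {
        "submitter": "Example Publisher",
        "submitter_type": "agent",
    },
    "resource_locator": "http://example.org/resources/fractions-intro",
    "keys": ["mathematics", "fractions", "common-core"],
    "payload_placement": "inline",
    "payload_schema": ["LRMI"],
    "resource_data": {
        "name": "Introduction to Fractions",
        "educationalAlignment": {
            "alignmentType": "teaches",
            "targetName": "CCSS.Math.Content.3.NF.A.1",
        },
    },
}

# The publish request wraps one or more envelopes in a "documents" list.
body = json.dumps({"documents": [envelope]})

# A service like the inBloom Index would listen for announcements of
# envelopes like this and consolidate them for query by applications.
print(body[:40])
```

The interesting architectural point is that the node network only moves envelopes around; everything that makes them useful to an application happens in services built on top.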
I’m also rather curious about the whitepaper’s assertion that:
“The usefulness of the information collected depends upon content publishers, who must populate the Learning Registry with properly formatted and accurately “tagged” descriptions of their available resources.”
While this is certainly true, it’s also rather contrary to one of the original goals of the Learning Registry, which was to be able to ingest data in any format, regardless of schema. Of course the result of this “anything goes” approach to data aggregation is that the bulk of the processing is pushed up to the services and applications layer. So any service built on top of the Learning Registry has to do significant data processing to spit out meaningful information. The JLeRN Experiment at Mimas highlighted this as one of their concerns about the Learning Registry approach, so it’s interesting to note that inBloom appears to be pushing some of that processing, not down to the node level, but out to the data providers. I can understand why they are doing this, but it potentially means that they will lose some of the flexibility that the Learning Registry was designed to accommodate.
Another interesting aspect of the inBloom implementation is that the more detailed technical architecture in the voluminous Developer Documentation indicates that at least one component of the Data Store, the Persistent Database, will be running on MongoDB, as opposed to CouchDB which is used by the Learning Registry. Both are schema free databases but tbh I don’t know how their functionality varies.
In terms of the metadata, inBloom appears to be mandating the adoption of LRMI as their primary metadata schema.
When scaling up teams and tools to tag or re-tag content for alignment to the Common Core, state and local education agencies should require that LRMI-compatible tagging tools and structures be used, to ensure compatibility with the data and applications made available through the inBloom technology.
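For illustration, this is roughly what an LRMI description aligned to a Common Core standard looks like, expressed here as a Python dictionary in schema.org JSON-LD style. The LRMI properties are real; the resource itself is invented:

```python
import json

# A sketch of an LRMI description for a resource aligned to a Common Core
# standard. LRMI properties extend schema.org/CreativeWork; the resource
# and the target URL shown here are made up for illustration.
lrmi_record = {
    "@context": "http://schema.org/",
    "@type": "CreativeWork",
    "name": "Adding Fractions with Unlike Denominators",
    "learningResourceType": "lesson plan",
    "typicalAgeRange": "10-11",
    "educationalAlignment": {
        "@type": "AlignmentObject",
        "alignmentType": "teaches",
        "educationalFramework": "Common Core State Standards",
        "targetName": "CCSS.Math.Content.5.NF.A.1",
    },
    "useRightsUrl": "http://creativecommons.org/licenses/by/4.0/",
}

print(json.dumps(lrmi_record, indent=2)[:30])
```

It’s the educationalAlignment block that does the heavy lifting for inBloom: it is what lets applications retrieve resources by learning standard rather than by keyword.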
A profile of the Learning Registry paradata specification will also be adopted but as far as I can make out this has not yet been developed.
It is important to note that while the Paradata Specification provides a framework for expressing usage information, it may not specify a standardized set of actors or verbs, or inBloom.org may produce a set that falls short of enabling inBloom’s most compelling use cases. inBloom will produce guidelines for expression of additional properties, or tags, which fulfill its users’ needs, and will specify how such metadata and paradata will conform to the LRMI and Learning Registry standards, as well as to other relevant or necessary content description standards.
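To show what there is to profile, here is a sketch of the kind of Learning Registry paradata assertion involved. The open-ended verbs and actor types are precisely what the whitepaper says inBloom may need to pin down; the specific values below are invented for illustration:

```python
import json

# A sketch of a Learning Registry-style paradata assertion: an actor,
# a verb, and the resource acted upon. The actor type ("teacher") and
# verb ("taught") are examples of the unconstrained vocabulary a profile
# would need to standardise; all values here are invented.
paradata = {
    "activity": {
        "actor": {"objectType": "teacher", "description": ["year 6"]},
        "verb": {
            "action": "taught",
            "date": "2013-01-15",
            "context": {"objectType": "course", "description": "Maths 6"},
        },
        "object": {"id": "http://example.org/resources/fractions-intro"},
    }
}

print(json.dumps(paradata)[:25])
```

Without an agreed set of actors and verbs, two applications can emit perfectly valid paradata that a third can’t meaningfully aggregate, which is presumably why inBloom wants a profile at all.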
All very interesting. I suspect, with the volume of Gates and Carnegie funding backing inBloom, we’ll be hearing a lot more about this development and, although it may have no direct impact on the UK FE/HE sector, it is going to be very interesting to see whether the technologies inBloom adopts, and the Learning Registry in particular, can really work at scale.
PS I haven’t had a look at the parts of the inBloom spec that cover assessment but Wilbert has noted that it seems to be “a straight competitor to the Assessment Interoperability Framework that the Obama administration Race To The Top projects are supposed to be building now…”
Do open access and open education need to work together more? That was the question posed by Pat Lockley and discussed on twitter on Friday evening by a group of open education folks using the hashtag #chatopen.
Open access in this instance was taken to refer to open access repositories of peer-reviewed papers and other scholarly works and associated open access policies and agendas. There was general agreement that open access and open education proponents should work together but also recognition that it was important to be aware of different agendas, workflows, technical requirements, etc. Suzanne Hardy of the University of Newcastle added that it was equally important to take heed of open research data too.
Although the group acknowledged that open access still faced considerable challenges, there was a general consensus that it was more mature, both in terms of longevity and uptake, and that it was embedded more widely in institutions. Amongst other factors, the relative success of open access was attributed to the fact that most universities already had policies and repositories for publishing and managing scholarly outputs, while few had comparable strategies for managing teaching and learning materials. Phil Barker added that research outputs were always intended for publication whereas teaching and learning materials were generally kept within the institution. Nick Sheppard of Leeds Met also pointed out that most institutional repositories could not handle teaching and learning resources and research data without significant modification. This led to the suggestion that while institutional repositories fit the culture of scholarly works and open access well, research data and OERs are much harder to manage and share.
In terms of uptake and maturity, although there was general agreement that open access was some way ahead of open education, it appears that open data is catching up fast due to institutional drivers such as the REF, high level policy support and initiatives such as opendata.gov. Funding council mandates were also recognised as being an important driver in this regard.
Different interpretations of the term "open" were discussed, as the "open" in open access and the "open" in open education were felt to be quite different. The distinction between gratis and libre was felt to be useful, though it is important to recognise more subtle variations of open.
There was some consensus that teaching and learning resources tend to be regarded as being of lesser importance to institutions than scholarly works and research data and that this was reflected in policy developments, staff appointments and promotion criteria. Furthermore, until impact measures, funding and business models change this is likely to remain the case. Open access and open education both reflect institutional culture but they are separate processes and this separation reflects university policies, priorities and funding streams.
The group also felt that different communities had emerged around open access and open education, with open access mainly being the concern of librarians and open education the domain of eLearning staff. Phil refined this distinction by suggesting that open access is driven by researchers but managed by librarians. However Nick Sheppard of Leeds Met suggested that the zeitgeist was changing and that open access, open education and open research data are starting to converge.
In response to the question "what could open education learn from open access?" one lesson may be that top down policy can help. Although open education processes are more complex and diverse than open access, the success of open access could aid open education.
Pat wrapped up the session by asking where next for open education? What do we do? Lis Parcell of RSC Wales cautioned against open education becoming the domain of "experts" and emphasised the importance of enabling new audiences to join the open debate, by using plain language where possible, meeting people where they are and providing routes to help them get a step on the ladder. There was also some appetite for open hackdays and codebashes that would bring teachers, researchers and developers together to build OA/OER mashups. Nick put forward the following use case:
“I want to read a research paper, text mined & processed, AI takes me to relevant OER to consolidate learning!”
Finally everyone agreed that it’s important to keep talking, to keep open education on the agenda and try to transform open practice into open policy.
So there you have it! A brief summary of a wide-ranging debate conducted using only 140 characters! Who says you can’t have a proper conversation on twitter?! If you’re interested in reading the full transcript of the discussion, Martin Hawksey has helpfully set up a TAGS Viewer archive of the #chatopen discussion here.
If you want to follow up any of the points or opinions raised here then feel free to comment below or send a mail to firstname.lastname@example.org
Many thanks once again to Pat Lockley for setting up the discussion and to all those who participated.
As a result of a request from the Cabinet Office to contribute to a paper on the use of hackdays during the procurement process, CETIS have been revisiting the “Codebash” events that we ran between 2002 and 2007. The codebashes were a series of developer events that focused on testing the practical interoperability of implementations of a wide range of content specifications current at the time, including IMS Content Packaging, Question and Test Interoperability, Simple Sequencing (I’d forgotten that even existed!), Learning Design and Learning Resource Meta-data, IEEE LOM, Dublin Core Metadata and ADL SCORM. The term “codebash” was coined to distinguish the CETIS events from the ADL Plugfests, which tested the interoperability and conformance of SCORM implementations. Over a five year period CETIS ran four content codebashes that attracted participants from 45 companies and 8 countries. In addition to the content codebashes, CETIS also ran additional events focused on individual specifications such as IMS QTI, or the outputs of specific JISC programmes, such as the Designbashes and Widgetbash facilitated by Sheila MacNeill. As there was considerable interest in the codebashes and we were frequently asked for guidance on running events of this kind, I wrote and circulated a Codebash Facilitation document. It’s years since I’ve revisited this document, but I looked it out for Scott Wilson a couple of weeks ago as potential input for the Cabinet Office paper he was in the process of drafting together with a group of independent consultants. The resulting paper Hackdays – Levelling the Playing Field can be read and downloaded here.
The CETIS codebashes have been rather eclipsed by hackdays and connectathons in recent years, however it appears that these very practical, focused events still have something to offer the community so I thought it might be worth summarising the Codebash Facilitation document here.
Codebash Aims and Objectives
The primary aim of CETIS codebashes was to test the functional interoperability of systems and applications that implemented open learning technology interoperability standards, specifications and application profiles. In reality that meant bringing together the developers of systems and applications to test whether it was possible to exchange content and data between their products.
A secondary objective of the codebashes was to identify problems, inconsistencies and ambiguities in published standards and specifications. These were then fed back to the appropriate maintenance body in order that they could be rectified in subsequent releases of the standard or specification. In this way codebashes offered developers a channel through which they could contribute to the specification development process.
A tertiary aim of these events was to identify and share common practice in the implementation of standards and specifications and to foster communities of practice where developers could discuss how and why they had taken specific implementation decisions. A subsidiary benefit of the codebashes was that they acted as useful networking events for technical developers from a wide range of backgrounds.
The CETIS codebashes were promoted as closed technical interoperability testing events, though every effort was made to accommodate all developers who wished to participate. The events were aimed specifically at technical developers and we tried to discourage companies from sending marketing or sales representatives, though I should add that we were not always successful! However, managers who played a strategic role in overseeing the development and implementation of systems and specifications were encouraged to participate.
Capturing the Evidence
Capturing evidence of interoperability during early codebashes proved to be extremely difficult so Wilbert Kraan developed a dedicated website built on a Zope application server to facilitate the recording process. Participants were able to register the tools and applications that they were testing and to upload content or data generated by these applications. Other participants could then take this content and test it in their own applications, allowing “daisy chains” of interoperability to be recorded. In addition, developers had the option of making their contributions openly available to the general public or visible only to other codebash participants. All participants were encouraged to register their applications prior to the event and to identify specific bugs and issues that they hoped to address. Developers who could not attend in person were able to participate remotely via the codebash website.
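The "daisy chain" recording idea can be sketched in a few lines of code. This is purely a hypothetical illustration of the data model described above, not the actual Zope implementation; all class, field and tool names here are invented:

```python
# Hypothetical sketch of the codebash recording model: each content item
# records which registered application produced it and which other
# applications imported it successfully, so chains of interoperability
# (A exports -> B imports and re-exports -> C imports...) can be traced.

from dataclasses import dataclass, field

@dataclass
class ContentItem:
    """A piece of content uploaded by one application and tested by others."""
    name: str
    produced_by: str                                     # app that created it
    imported_ok_by: list = field(default_factory=list)   # apps that consumed it
    public: bool = False                                 # open, or participants-only

def daisy_chain(items, start_app):
    """Follow content from application to application, starting at start_app."""
    chain, current = [start_app], start_app
    while True:
        # find an item produced by the current app that a new app has imported
        next_links = [
            (item, consumer)
            for item in items if item.produced_by == current
            for consumer in item.imported_ok_by if consumer not in chain
        ]
        if not next_links:
            return chain
        _, consumer = next_links[0]
        chain.append(consumer)
        current = consumer

items = [
    ContentItem("package1.zip", "ToolA", imported_ok_by=["ToolB"]),
    ContentItem("package2.zip", "ToolB", imported_ok_by=["ToolC"]),
]
print(daisy_chain(items, "ToolA"))  # ['ToolA', 'ToolB', 'ToolC']
```

Even this toy version shows why the website helped: once each test result is recorded as data rather than scribbled notes, chains of successful exchanges fall out automatically.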
IPR, Copyright and Dissemination
The IPR and copyright of all resources produced during the CETIS codebashes remained with the original authors, and developers were neither required nor expected to expose the source code of their tools and applications to other participants.
Although CETIS disseminated the outputs of all the codebashes, and identified all those that had taken part, the specific performance of individual participants was never revealed. Bug reports and technical issues were fed back to relevant standards and specifications bodies and a general overview of the levels of interoperability achieved was disseminated to the developer community. All participants were free to publish their own reports on the codebashes, however they were strongly discouraged from publicising the performance of other vendors and potential competitors. At the time, we did not require participants to sign non-disclosure agreements, and relied entirely on developers’ sense of fair play not to reveal their competitors’ performance. Thankfully no problems arose in this regard, although one or two of the bigger commercial VLE developers were very protective of their code.
Conformance and Interoperability
It’s important to note that the aim of the CETIS codebashes was to facilitate increased interoperability across the developer community, rather than to evaluate implementations or test conformance. Conformance testing can be difficult and costly to facilitate and govern and does not necessarily guarantee interoperability, particularly if applications implement different profiles of a specification or standard. Events that enable developers to establish and demonstrate practical interoperability are arguably of considerably greater value to the community.
Although CETIS codebashes had a very technical focus they were facilitated as social events and this social interaction proved to be a crucial component in encouraging participants to work closely together to achieve interoperability.
These days the value of technical developer events in the domain of education is well established, and a wide range of specialist events have emerged as a result. Some are general in focus such as the hugely successful DevCSI hackdays, others are more specific such as the CETIS Widgetbash, the CETIS / DevCSI OER Hackday and the EDINA Wills World Hack running this week, which aims to build a Shakespeare Registry of metadata of digital resources relating to Shakespeare, covering anything from his work and life to modern performance, interpretation or geographical and historical contextual information. At the time however, aside from the ADL Plugfests, the CETIS codebashes were unique in offering technical developers an informal forum to test the interoperability of their tools and applications and I think it’s fair to say that they had a positive impact not just on developers and vendors but also on the specification development process and the education technology community more widely.
Facilitating CETIS CodeBashes paper
Codebash 1-3 Reports, 2002 - 2005
Codebash 4, 2007
Codebash 4 blog post, 2007
OER Hackday, 2011
QTI Bash, 2012
Dev8eD Hackday, 2012
She’s probably going to kill me for writing this but what the hell….Amber is leaving JISC at the end of the week and I can’t let her go without a send off! I’ve known Amber professionally for more years than it would be polite to mention and to be honest I can’t actually remember where she was working when I first met her, though I think it was pre-Becta. I do remember being really pleased when she joined JISC because she had a reputation for Knowing Her Stuff and for really understanding technology from a teaching and learning perspective.
I’ve collaborated with Amber on a number of JISC programmes and for the last three years we’ve worked together with CETIS colleagues Phil Barker, R. John Robertson and Martin Hawksey to provide advice and guidance on digital infrastructure to support the JISC HEA Open Educational Resource Programmes. It’s been an immensely rewarding experience. Although the UK OER Programmes are not “about” digital infrastructure development per se, they have fostered some really innovative technical developments such as the OER Visualisation Project, the CETIS OER Technical Mini Projects, the JLeRN Experiment and the OER Rapid Innovation Programme, all of which, to a greater or lesser degree, are a result of Amber’s vision and willingness to take risks.
Over the last three years Amber has also become an influential voice in the global open education debate. One of the things I have always admired about her contribution to discussions is that she has an enviable ability to ask the right questions, to synthesise complex and often conflicting issues, and represent a wide range of views without ever losing sight of her own perspective. Some of the posts she has written for the JISC Digital Infrastructure Team blog have been important markers in the development of the UK OER Programmes.
Above and beyond her undoubted technical expertise, I don’t think it’s too far-fetched to say that Amber has been a really positive role model for other women working in a domain where female colleagues are still rather under-represented. She is immensely patient and understanding, and I personally feel that I have benefitted enormously from her support and encouragement. She’s also really quite silly and is immensely good fun to work with.
The last project Amber, Phil, Martin and I worked on was a booksprint earlier this autumn. The aim of the booksprint was to synthesise the technical outputs of all three years of the UK OER Programmes and to write a book in three days. It was Amber’s idea of course and I have to confess that I really wasn’t convinced we were up to the task. I’m delighted to admit that I was proved wrong. With patient input from booksprint facilitator Adam Hyde we did manage to write our book, or most of it at least, and we actually had great fun while we were at it!
So now Amber is off to the University of Warwick where, among other people, she’ll be working with the lovely Jenny Delasalle who some of you might remember as Phil’s predecessor as CETIS Metadata SIG coordinator. I’m sure we’ll all miss working so closely with Amber but I have the feeling that we haven’t seen the back of her yet! So good luck with the new job Amber and I hope we can look forward to working together again at some stage in the not too distant future.
Now I had better go and finish writing the conclusion of our book, otherwise Amber really will kill me!
* ETA Brian has very kindly let me know that the picture above was taken by Kirsty Pitkin, @eventamplifier, or possibly by Mr@eventamplifier! Whoever took it, it’s lovely!
After three years of innovation focused on the sustainable release of open educational resources, the JISC HEA UK OER Programme is drawing to a close and yesterday Martin and I went along to the final programme meeting in London. Phil wasn’t able to attend the meeting and instead posted the following e-mail to the oer-discuss mailing list:
Hello all, I can’t be in London today, so I’m kind of joining the end of programme discussion from afar. The last three years have been great. At one of the early planning meetings someone (Andy Powell, I think) said that one measure of whether the programme was successful could be the widespread recognition of UKOER / OER as an idea within UK F&HE and the existence of a community around it. I’m pretty sure that has happened, not just because of UKOER but we were there and helped. So well done all of us
But what now? The programme has always aimed at sustainable release of resources, change of culture and practice, not just a short burst of activity leading to a one-off dumping of resources. What will happen over the next few years by way of sustained release and which practices are sustainable? Also, of course, from a CETIS point of view, what technologies can help?
Happy Diwali, keep the OER light shining.
Phil’s mail prompted Nick Sheppard to ask the apparently innocent question:
Possibly a silly question…but I should stop tagging new resources ukoer?!
This seemingly innocuous enquiry prompted the kind of mailing list explosion normally only seen on Friday afternoon, and it wasn’t long before the discussion had its own twitter tag: #oergate. I haven’t counted the number of replies but if the thread has reached double figures it wouldn’t surprise me. If you’re feeling brave, you can read the whole thread here.
Some colleagues were all in favour of continuing to use the ukoer tag, arguing that it now represents an active community, which is powerful evidence of the sustainability of the funded programmes’ legacy. Others argued that continued use of the tag would muddy the waters for collection managers and make it difficult to identify resources produced through the funded phase of the programme.
Amber has now managed to capture the discussion in an excellent blog post UKOER: What’s in a tag?*. Although there is no conclusive consensus as to how to answer Nick’s original question, one thing that this discussion has clearly demonstrated is that there does appear to be a lively and active community that has grown up around the funded programmes and the ukoer tag, and that definitely has to be a good thing!
*Amber’s blog post was written with input from Sarah Currier (Jorum), David Kernohan (JISC), Martin Hawksey (CETIS), Lorna Campbell (CETIS), Jackie Carter (Jorum).
ETA It now appears that the #oergate debate borked JISCmail! It seems that the list exceeded posting limits or some such, and no further comments were posted to the list after 15.10 on Wednesday afternoon. I’m delighted to say that I got the last word in!