I’ve transported all posts & comments on this blog to a new home on my own domain.
You can now find me at www.markpower.me.uk/workblog
This is a nice follow-on to my previous post regarding the web and the work of the W3C. As we’ve seen, the web and its technologies have been evolving and getting more powerful, and while some will still dismiss the growing relevance of the web (and its friendly neighbourhood viewing window, the browser) in a world of apps apps apps, the W3C continues to push its capabilities forward.
So step forward the newly formed Web Real-Time Communications Working Group. The mission of the group is to define a set of client-side APIs to enable real-time communications through the browser…video, audio, no plug-ins or downloads. The Charter page also mentions “supplementary real-time communication”, so I think I’d be safe in saying we’re also looking at screen sharing – or at least ‘browser window sharing’.
One of the great things about this – imho – is that the working group will be looking closely at device APIs and pushing work on those forward. Along with the DAP (Device APIs & Policy WG), this should hopefully propel the development of APIs for device capabilities such as camera and microphone access and the whole area of media capture and streaming. I then automatically think of the mobile space…mobile web apps for video chat, anyone?
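To make that concrete, here’s a rough sketch of what “camera access in the browser, no plug-ins” could look like from the developer’s side. The function names and API shape below are my illustrative guesses at the kind of thing the Working Group is chartered to define – nothing here is the actual specification:

```javascript
// Illustrative sketch only: the getUserMedia-style names are guesses at the
// kind of API the WG is chartered to define, not the final specification.

// Describe which device capabilities we want access to.
function buildConstraints(wantVideo, wantAudio) {
  return { video: wantVideo, audio: wantAudio };
}

// In a capable browser, ask the user for camera/mic access and show the
// resulting stream in a <video> element (the first step of a video chat).
function startLocalPreview(videoElement) {
  if (typeof navigator === 'undefined' || !navigator.mediaDevices) {
    return Promise.reject(new Error('media capture not supported here'));
  }
  return navigator.mediaDevices
    .getUserMedia(buildConstraints(true, true))
    .then(function (stream) {
      videoElement.srcObject = stream; // local camera preview
      return stream; // next step: hand this stream to a peer connection
    });
}
```

The interesting design point is that permission sits with the user, not the page: the browser prompts before any stream is handed over.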
The working group’s timescale aims to get its first Recommendations out toward the end of next year.
Want to see it in action? Well, Ericsson Labs (who are co-chairing the working group) rather kindly produced a video demo – Beyond HTML5: Peer-2-peer conversational video in HTML5. It is below…for your viewing pleasure. You can also read their accompanying blog post at https://labs.ericsson.com/developer-community/blog/beyond-html5-peer-peer-conversational-video
Yesterday I attended the launch event of the new W3C UK & Ireland office in Oxford, hosted by Nominet (who are hosting the office, not just the launch event).
It was a relatively short event (half a day) but packed with interesting talks showcasing the work being done with the web by various parties in collaboration with the W3C. The talks gave us a nice look at how central the web is in fields like mobile delivery (MobileAware & Vodafone), future media (from the BBC) and Internet & television (BBC R&D). Underpinning much of this was the importance and role of the web in sociological terms, with Prof. Bill Dutton, Director of the Oxford Internet Institute, rounding things off with a look at Freedom of Connection & Freedom of Expression. Prof. Dutton highlighted elements of a forthcoming UNESCO report that provides a new perspective on the social and political dynamics behind threats to freedom of expression on the Internet and the web, examining digital rights issues and how technical, legal and regulatory measures might be constraining the freedom that many of us see the Internet allowing us today. A line that stood out for me in particular was:
Freedom of expression is not an inevitable outcome of technological innovation
Sir Tim Berners-Lee kicked off proceedings with a bit of history behind his invention of the web and the subsequent creation of the W3C, whose goal, Sir Tim told us, is to “lead the web to its full potential”. Around 20-25% of the globe now uses the web, and we have reached a point where we need to look at why the other 75-80% don’t. The Web Foundation (http://www.w3.org/2009/Talks/0318_bratt_WebFoundation/WebFoundation.pdf) is there to tackle this issue and figure out ways to accelerate the take-up of the web in the parts of the world that still don’t have it.
Sir Tim talked about the role of the web in supporting justice and democracy too (something that the UNESCO report investigates, as I wrote previously) and asked how we can optimise the web to support wider and more efficient democracy. Science too: how do we design the web to more easily bring together part-formed ideas across people and countries, to help those ideas feed off each other and evolve? And how can the web – in this new age of social networking – help us work more effectively and communicate more widely than simply “friends of friends”, breaking through traditional social barriers and forming new relationships that may not normally occur?
An interesting question from the audience was around the temporal bubble: how do we ensure we can still view the web as we have it now in decades to come? After all, so much content from 10 years ago cannot now be viewed (without a painstaking process of content conversion). It was a timely revisit to that question, as on the train down I was reading about the hundreds of thousands of photographs shared on fotopic.net that have simply vanished due to fotopic going into liquidation. Then the day after, I read that Google is now telling users of its Google Video service that they need to move their videos off it as, while it hasn’t supported new uploads for quite some time, Google will actually be folding the whole thing and putting up the closed sign.
So that was all just in the opening talk!

We went on to hear about the W3C’s Open Web Platform and how HTML5 and related web standards are extending and evolving the power of the web, making it central to areas like mobile, gaming, government and social networking. On the topic of mobile, J. Alan Bird of the W3C stated that,
The open web platform is the new mobile operating system
and the W3C’s work is ongoing to make it as robust as possible.
Dr. Adrian Woolard of BBC R&D talked about their work in Internet TV and how they are looking to free this from the set-top box, while focusing on the accessibility of new broadcasting products and services. We’ve had the web on our televisions for a few years now – well, those of us with a Wii or PlayStation 3, that is – but the Internet will be moving into the TV itself. On this topic the W3C recently formed the Web & Television Interest Group (January 2011) to start looking at requirements that will then inform recommendations and a Working Group that will approach the standards issue in this space – see http://www.w3.org/2010/09/webTVIGcharter.html. This is something I want to take a bit further in a future article, around the web in a Post-PC world. We’ve had the web on PCs for over a decade now, we have it, increasingly, in powerful mobile devices in our pockets and tablets, and now…that bastion of the living room…the TV!
Dan Appelquist of Vodafone outlined the company’s commitment to working with the W3C in the mobile space and nicely highlighted some of the reasons why Vodafone looks to work with the W3C, contributing to web standards. A couple of things Dan mentioned (kind of in passing) that I didn’t know about were in the social networking space. One was the OneSocialWeb project (http://onesocialweb.org/), a free, decentralised approach to the social network (in fact I’ve just this minute found they have an iPhone app that I’ll be duly installing after writing this); the other, more grounded in the CETIS standards space, was OStatus, an open standard for distributed status updates across networks. See http://ostatus.org/about
Ralph Rivera, Director of BBC Future Media, talked to us about how the BBC is looking at the digital public space it inhabits as much as the programmes and services it creates, outlining what digital public space means to the BBC and how the W3C and BBC can work in partnership. Ralph said a couple of things that really stood out for me. One was that the BBC is looking at the 2012 Olympics and planning its digital products & services around it, to do for online broadcasting what the Coronation did for television. I thought that was pretty cool. He also said this, and I’ll round off the article with it…
There is no more important digital space than the web itself
I like that.
I’ve recently written a JISC CETIS briefing paper on the topic of Mobile Web Apps.
With the growth and constant shift in the mobile space, institutions could be forgiven for feeling a little lost as to how best to tackle the issue of delivering content and/or services that are optimised for mobile devices. Apple, Android, BlackBerry, Windows Phone…app ecosystems seemingly everywhere you turn, each requiring different development approaches: SDKs, programming languages, approval processes and terms & conditions. I think it’s fair to say that for institutions looking to deliver to mobile devices while being as inclusive as possible, this area is something of a minefield.
A viable alternative approach is developing mobile apps using open web technologies and standards – technologies that continue to improve in performance and offer more powerful functionality, as is now being talked about quite a bit on the topic of HTML5.
The briefing paper is intended to give an overview of this space and cover some of the key talking points, with a collection of useful resources with which to delve deeper into the subject for those who decide that mobile web apps are indeed a workable solution for them. I’m hoping that an interested audience would consist of institutional web staff, student services, learning technologists, maybe even an IT services manager here and there.
It’s in PDF format but I’ll also be looking to get it in web form on the CETIS website over the next few days and, of course, I’d welcome any feedback and questions on it here.
If you’re interested, get it at http://wiki.cetis.ac.uk/images/7/76/Mobile_Web_Apps.pdf
Argon is a mobile Augmented Reality (AR) Browser for the iPhone. From the website:
Argon is the completely open standards augmented reality browser that allows rapid development and deployment of Web 2.0 style augmented reality content.
Multiple simultaneous channels, analogous to browser tab on the desktop, let authors create dynamic and interactive AR content using existing web development toolsets.
The browser is described as the reference implementation of Georgia Tech’s work on the KHARMA Mobile AR Architecture, which combines HTML for content with KML for defining geographical co-ordinates (as used by Google Maps, Google Earth & Yahoo Maps).
One thing that seems to counter-balance this standards flag-bearing though (for me, at least) is the fact that Argon is only available on the iPhone – in fact, the developers go so far as to specify that it is best run on the latest version, the iPhone 4. Hopefully that will change over time and we’ll see versions for the other popular mobile platforms too: the ever-growing Android and the recently adrenaline-injected Windows Phone 7. After all, it would seem a little odd lauding the open standards route while being restricted to a single delivery platform.
But there’s plenty of growing room in the still-young AR space. With the technology making a significant appearance in this year’s Horizon Report – given a ‘time-to-adoption’ period of 2-3 years – and mobile augmented reality already being implemented at Exeter Uni on their JISC LTIG project, Unlocking the Hidden Curriculum, it’s good to see a new offering in this area to possibly compete with the current big players: Layar, Wikitude & Junaio.
My wish? That we could see something like Argon develop into a platform for AR developers, built on open standards and supported by those players, opening up the AR space so that interactive and immersive mobile AR experiences & content could be easily created and then deployed cross-browser. Like I say though…early days yet. Hopefully we’ll see it happen.
Oh…one more thing…I have installed Argon on my (now lowly) iPhone 3GS and while the browser looks pretty standard fare – channel view, map, search, etc – unfortunately it seems there are absolutely no POIs (Points of Interest) nearby and the search for local channels isn’t yet implemented. So, as yet, it’s a bit difficult to get a handle on whether Argon would float my boat. Next up I shall go and check out the developers’ area and have a look at creating my own POIs and content. I’ll let you know how I get on…
The Argon browser can be found at http://argon.gatech.edu/
There are POIs available nearby – I just hadn’t looked at the getting started tutorial properly (I know…I’m one of those blokes who doesn’t read the manual). I’m liking the search box in the realview but the POI icon itself is a bit flaky and judders about a bit too much – I suspect their recommendation of using the iPhone 4 is down to its gyroscope helping with that, which the 3GS doesn’t have. But as you can see from the screenshot, it does the basics, and I would imagine one can customise the look with your own CSS. Now…let’s hope their documentation is clear and helpful and not simply written by some Tefal-headed genii in a Georgia Tech lab…
Well…I’ve been travelling around the interweb, reading – or simply adding to Instapaper for later and trying to get round to reading – lots of lovely articles, blog posts and suchlike on the current happenings around the Mobile Web. As you’ll well know (seeing as you’re reading this) the Mobile Web is a hot topic at the moment, so I thought I’d highlight some of the things I’m reading up on right now.
The guys at Opera are superb when it comes to talking and teaching about web development techniques and the current state of the web. I’ve enjoyed listening and talking to both Patrick Lauke and Bruce Lawson in the last few months, and Bruce has taken his talk and built it into a handy guide, available on the Opera Developers website. Bruce covers the options available when looking to deliver your content to mobile devices and gives loads of really useful advice on what to do and what to avoid, along with a really nice outline of why CSS Media Queries are so powerful: they can help you build mobile-aware, adaptive websites that don’t have to sniff which browser the content is being delivered to, but instead check the characteristics of the device itself (think “display resolution”). I strongly encourage you to check this guide out if you haven’t before.
Following on with the CSS Media Queries angle, this article on quirksmode gives you a full walkthrough of how to combine <meta name="viewport" content="width=device-width"> with media queries to enable your website to resize to fit any display. It tells you what these do, why you should use them and covers the whole technique, with helpful screenshots. Excellent.
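To give a flavour of how the two pieces fit together, here’s a minimal sketch of the technique – the class names and the 480px breakpoint are just illustrative choices of mine, not values from the article:

```html
<!-- Tell mobile browsers to lay the page out at the device's own width,
     rather than pretending to be a wide desktop screen -->
<meta name="viewport" content="width=device-width">

<style>
  /* Default layout: two columns side by side */
  .sidebar { float: left; width: 30%; }
  .content { float: left; width: 70%; }

  /* On narrow displays, stack the columns instead */
  @media all and (max-width: 480px) {
    .sidebar, .content { float: none; width: 100%; }
  }
</style>
```

The viewport meta makes the phone report its true width; the media query then reacts to that width. Without the first, the second never fires on most mobile browsers.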
This is a truly great set of slides that Bryan Rieger of Yiibu recently delivered at the Over The Air event at Imperial College, London. Here Bryan gives us an outline of device usage, mobile browsers and – echoing Patrick’s & Bruce’s message – the options available to us when it comes to making mobile-friendly websites (and apps – we can’t ignore those). Bryan puts a damn good point across that maybe we should design our sites for mobile devices FIRST, then add in the capability for the site to adapt for desktop. It’s a different way of approaching the whole creation process and I’m really into that way of thinking.
Haha…now to end with something more leftfield. This article, written by web designer John O’Nolan, takes a playful swipe at those people who trot out the whole “Next Big Thing” line. John gives us an entertaining look at the evolution of the web on mobile and puts a nice perspective on things. What’s even better is that some of the thoughtful comments round the whole thing out, making it a nicely smart piece on the state of the mobile web.
So, a few things there for you to have a look at and digest. We’ll be seeing this talked about more and more I suspect. Cheers! M
I recently ran a JISC CETIS event on mobile technology at the University of Bolton and, it seemed to me, it was rather successful. Of course the day was packed, we ran over time and my session on AR at the end of the day was rushed and sketchy…but it nicely lines up some more focused future events.
First of all, the presentations from the day are available on our wiki at http://wiki.cetis.ac.uk/Mobile_Tech_Meeting_15th_June_2010
Throughout the day we highlighted some of the key challenges, issues and general questions that attendees shared in this space…
There are two main parties to think about here – staff within institutions, who need preparing, and students, who need support (perhaps through induction processes). Now, assuming this would involve different departments, and that these should (ideally) have a dialogue with each other…who supports the supporters?
This ties in – for me at least – with the discussion around the Distributed Learning Environment and the widgets work that CETIS is heavily engaged in. The mobile device seems such an obvious part of a learner’s “PLE” (as in, it’s personal) that this area is ideal for focusing on the overlap and connectivity between institutionally controlled systems and the tools and services that learners use. There’s also the provision of data from institutional services to mobile devices. Can I get a map of where I am on the campus? Can I see if there’s an available room nearby and book it, check my timetable or search the library?
This is an interesting one for me and it also links to the PLE area (in the way I think about it anyway). Increasingly, the ubiquity and all-round saturation of technology in so many parts of all our lives is leading to this blurring between work and private/personal life. As professionals we face these questions and, for some of us, our whole use of technology has almost completely broken down the lines between the two. The things I do at work are the things I am interested in outside of work too, so I’ll find myself twittering and posting Facebook links at any time, anywhere. But is this the same for learners? Also, context and location are hugely important. The use of mobile devices enables you to capture photographs and video, blog, twitter…whatever…from wherever you are (yes, assuming a connection, etc), so what are the ethical issues?
Now, this seemed to get the most nodding of heads. How do we make the business case to our institutions for the need to engage with mobile technology and focus some development on it? Do we assume it is what the learners want, or is it something that we think is important, growing and soon-to-be all-pervasive? How can mobile learning improve learning in general? Is there a case for it? Where does the focus get placed and (!!) where does the money go?
Can the pedagogy map to the affordances given to us by the technology available? Two of the presentations on the day covered work in geography field studies and assessment in healthcare practices. I think it’s easy to see how these areas are prime for the enabling and enhancing of in-the-workplace/field activities that mobile devices and their functionality provide, but…is mobile tech, from an institutional, learning delivery sense, not really applicable or practical for all?
Lots and lots of questions.
One thing I’m sure of is that mobile tech is currently one of the most fast-moving (almost dizzyingly so) and exciting areas in educational technology. The opportunities that such increasingly affordable and powerful technology – always on, always connected – is handing to so many of us are changing the shape of the learning landscape. Institutions need to get a handle on this, otherwise they’ll quickly be left behind…but I know, it’s not a simple issue.
Oh and yes, I know I said above that this tech is with “many of us”. I’ve not forgotten the very important aspect of inclusion, in all its forms. But I think I’ll leave you with this blog post from one of our speakers at the event, Dr. Richard Hall (DMU) - Inclusion, social relations and theory: issues in mobile learning
One of the big questions around institutions throwing themselves into the mobile learning world is: how do you cater for such a huge variety of handsets and operating systems? Tom Hume, Managing Director of Future Platforms (http://www.futureplatforms.com/), recently presented at the excellent Eduserv Symposium: The Mobile University. Tom pointed out that to reach 70% of UK mobile owners, you need to be available on 375 different devices, across 70 different device families from 8 manufacturers.
But anyway, go and check out Tom’s talk, along with all the others from that day, on the Eduserv website: http://www.eduserv.org.uk/events/esym10/presentations
The following resource is related to this and feeds into a debate that is building in some quarters: if different providers are channeling development into different application platforms, and you look at it and think, “Argh! How do we manage to cover THAT lot??”…do we get to the question – Apps v. Web?
So…here’s the more in-depth link on the Webmonkey site that covers a few of these frameworks. Check it out…it’s very interesting.
This is, of course, closely related to CETIS’s work in the widgets space and the Distributed Learning Environment.
March sees the end of the JISC funded XCRI Support Project as it signs off leaving the development of the XCRI (eXchanging Course Related Information) specification for sharing (and advertising) course information looking very healthy indeed.
The support project picked up where the original XCRI Reference Model project left off. That project identified the marketing and syndication of course descriptions as a significant opportunity for innovation: general practice in this area involved huge efforts re-typing information to accommodate various different systems, sites and services, with that information then maintained separately in various places. The XCRI Reference Model project mapped out the spaces of course management, curriculum development and course marketing and provided the community with a common standard for exchanging course related information. This streamlines approaches to the syndication of such information, brings cost savings in collecting and managing the data, and opens up opportunities for a more sustainable approach to lifelong learning services that rely on course information from learning providers.
Over the next three years the XCRI Support Project developed the XCRI Course Advertising Profile (XCRI-CAP), an XML specification designed to enable course offerings to be published by providers (rather like an RSS feed) and consumed by services such as lifelong learning sites, course search sites and other services that support learners in trying to find the right courses for them. Through the supervision and support of several institutional implementation projects, the support project – a partnership between JISC CETIS at the University of Bolton (http://bit.ly/PZdKw), Mark Stubbs of Manchester Metropolitan University (http://bit.ly/PZdKw) and Alan Paull of APS Limited (http://bit.ly/cF6Fhd) – promoted the uptake and sustainability of XCRI through engagement with the European standards process and endorsement by the UK Information Standards Board. Through this work the value of XCRI-CAP was demonstrated so successfully as to ensure it was placed on the strategic agenda of national agencies.
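To give a flavour of the “rather like an RSS feed” idea, here’s a toy sketch of a provider serialising its course offerings once as XML for any aggregator to pull. The element names below are simplified stand-ins of my own, not the actual XCRI-CAP vocabulary:

```javascript
// Toy sketch of the XCRI-CAP idea: publish course offerings once, as an
// XML feed, for aggregators to consume -- no re-typing per service.
// Element names are simplified illustrations, not the real profile.

function escapeXml(s) {
  return String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

function courseFeed(provider, courses) {
  const items = courses.map(function (c) {
    return (
      '  <course>\n' +
      '    <title>' + escapeXml(c.title) + '</title>\n' +
      '    <start>' + escapeXml(c.start) + '</start>\n' +
      '    <url>' + escapeXml(c.url) + '</url>\n' +
      '  </course>'
    );
  }).join('\n');
  return '<catalog>\n  <provider>' + escapeXml(provider) + '</provider>\n' +
         items + '\n</catalog>';
}
```

The point of the exercise is the shape, not the names: one canonical feed per provider, maintained in one place, that every directory and portal can ingest.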
Hotcourses manages the National Learning Directory under contract from the Learning and Skills Council (LSC). With over 900,000 course records and 10,000 learning providers, the NLD is possibly the largest source of information about learning opportunities in the UK, which learners and advisers can access through dozens of national, regional and local web portals. Working with a number of Further Education colleges, Hotcourses is now developing and piloting ‘bulk upload’ facilities using XCRI to ease the burden on learning providers supplying and maintaining their information on the NLD. UCAS also continues to make progress towards XCRI adoption. Most recently, at the ISB Portfolio Learning Opportunities and Transcripts Special Interest Group on January 27, 2010, UCAS colleagues described a major data consolidation project that should pave the way for a data transfer initiative using XCRI, and cited growing demand from UK HEIs for data transfer rather than course-by-course data entry through UCAS web-link. The project has two phases, with XCRI implementation in phase II, due to deliver sometime in 2011.
Having ensured that the specification gained traction and uptake, the project worked extensively on developing the core information used by XCRI into a European Norm, harmonised with other standards developed elsewhere across Europe that address this space. It is this process which has seen the evolution of XCRI from a standalone specification to a UK application profile of a recognised international standard, which could now be transitioned to an actual British Standard through BSI IST 43 (the committee of the British Standards Institution which looks at technical standards for learning, education and training). At the same time, adoption of the specifications continued to be supported through engagement with policymakers and suppliers, while the technical tools developed for adopters continued to be updated and maintained.
A couple of key tools were developed by the support project to assist implementers of XCRI. An aggregator engine was set up and maintained by the project and is demonstrated at http://www.xcri.org/aggregator/. This shows how it’s possible to deploy an aggregator that pulls in courses from several providers and offers a user interface with basic features such as searching, browsing, bookmarking, tags and so on. It also demonstrates some value-added aspects such as geocoding the course venues and displaying them on Google Maps. Once you’ve had a look at the demonstrator you can get hold of the code for it at http://bit.ly/9eViM2
The project also developed an XCRI Validator to help implementers check their data structure and content. This goes beyond structural validation, analysing content and providing advice on common issues such as missing information. Development of this is very much at a beta stage, but implementers can try out this early proof-of-concept at http://bit.ly/aeLArY. Accompanying it is a blog post describing how to use the validator at http://bit.ly/aHoJtH
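That content-advice layer is the interesting bit: a record can be perfectly well-formed XML and still be useless to a learner. A toy sketch of that kind of check – the field names and advice messages here are invented for illustration and have nothing to do with the real validator’s rules – might look like:

```javascript
// Given a course record that has already passed structural (XML) validation,
// inspect the content itself and offer advice on common gaps.
// Field names and messages are invented for illustration only.
function adviseOnCourse(course) {
  const advice = [];
  if (!course.title) {
    advice.push('course has no title');
  }
  if (!course.description) {
    advice.push('no description: learners rely on this to choose a course');
  }
  if (!course.url) {
    advice.push('no URL: aggregators cannot link back to the provider');
  }
  return advice; // an empty array means nothing to flag
}
```

The design choice worth noting is that these are warnings, not errors: the feed still validates, but the provider gets actionable advice on quality.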
Up to press there have been around 15-20 “mini-projects” funded to pilot implementation of XCRI within institutions. These looked at developing course databases using the specification, extending existing systems and methods to map to XCRI, and generally generating the information and exporting it via web services. That’s not to say this was the only project activity around XCRI: various other Lifelong Learning projects have had an XCRI element to them along the way, and all of these have contributed to forming an active community around the development and promotion of the spec.
This community’s online activity is centred around a wiki and discussion forum on the XCRI Support Project website at http://xcri.org and, while the support project is now officially at an end, the website will stay around as long as there is a community using it – currently it’s maintained by CETIS. Some XCRI.org content may move to JISC Advance as XCRI moves from innovation into mainstream adoption. However, as long as people are trying out new things with XCRI – whether that’s vocabularies and extensions or new exchange protocols – XCRI.org provides a place to talk about it, with the LLL & WFD project at Liverpool (CCLiP – http://www.liv.ac.uk/cll/cclip/) currently looking at how to improve the site and provide more information for non-technical audiences.
More information on the XCRI projects can be found at the JISC website, specifically at http://bit.ly/awevwQ
I think we’re getting to the point where, by now, many of you will have found it hard to avoid hearing about the latest technology buzz. No, not Cloud Computing. This technology is, shall we say, more ‘tangible’. It is – quite literally – technology that you can hold in your hand.
Yes folks, by now many of you will be aware of this growing buzz around the rather snazzy and futuristic-sounding ‘Augmented Reality’ (AR). The headlines are growing, the clamour is getting more excitable by the day and, even though it’s only really hit the public consciousness relatively recently, I don’t think we’re far away from that glorious, early-doors hype bubble popping to the sounds of “well…there’s not many apps!”, “is it just for restaurants and tube stations?” and “it’s not a game-changer, it’s a fad!”. For that last one, just look at the story around James Cameron’s Avatar and 3D (and nobody’s even seen it yet!)
Still. It’s an exciting technology and one that – like Cameron’s 3D in cinema – will be a game-changer (imo), not simply a fad, given the opportunities it will open up to enable and enhance the immersive delivery of rich content to mobile platforms. As with anything though, it’s not going to happen overnight. Right now the bugbear with 3D is the lack of supporting cinema screens, and something similar applies to AR-capable devices. However, that will inevitably change.
There are two types of AR application at the moment – mobile & desktop. Desktop AR uses marker-based images to create animations (both 2D & 3D) and – in some cases – also includes interactive controls. I’ll cover this in a future post, I think. For this brief post though, I’m talking about mobile AR. This is where an application uses your phone’s GPS to know where you are and its magnetometer – or more simply, its digital compass – to know which way you’re facing. Couple those together, then add in the live video feed coming through your phone’s camera and bingo! We have location-aware data overlaid on your image of the world around you.
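The core sum behind that “bingo!” moment is simple enough to sketch. Using a flat-earth approximation (fine over the few hundred metres mobile AR cares about – this is my simplification, not how any particular AR browser does it), the app works out the compass bearing from you to a point of interest and checks whether it falls inside the camera’s field of view:

```javascript
// Approximate compass bearing (degrees, 0 = north, 90 = east) from the
// phone's GPS position (lat1, lon1) to a point of interest (lat2, lon2).
// Flat-earth approximation: good enough over AR-scale distances.
function bearingTo(lat1, lon1, lat2, lon2) {
  const dNorth = lat2 - lat1;
  // Scale longitude difference by cos(latitude) so east-west distances
  // are comparable to north-south ones.
  const dEast = (lon2 - lon1) * Math.cos((lat1 * Math.PI) / 180);
  const deg = (Math.atan2(dEast, dNorth) * 180) / Math.PI;
  return (deg + 360) % 360;
}

// Does the POI's bearing fall inside the camera's horizontal field of view,
// given the compass heading the phone is currently facing?
function inView(heading, bearing, fovDegrees) {
  let diff = Math.abs(bearing - heading) % 360;
  if (diff > 180) diff = 360 - diff; // take the short way round the circle
  return diff <= fovDegrees / 2;
}
```

Overlay placement then follows naturally: the POI’s horizontal position on screen is proportional to the bearing-minus-heading difference across the field of view.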
At the moment I’ve got 3 AR apps on my iPhone – Robotvision (http://robotvision-ar.com/), Wikitude (http://www.wikitude.org/) & Layar (http://layar.com/). I’ll write a post in which I cover these soon but for now the obvious question is simply, “well, how could they be used in a learning activity?” – Oh and yes, I’m excluding ‘learning where the nearest Costa Coffee is’ from my criteria. Now I’m no teacher but I can see this…
Imagine: you’re studying local history, looking at the changing architecture and layout of the town centre. You stand on a street, point your phone* at a scene and, overlaid on the live image, are archive photographs of the location spanning the decades. Touch the image of the Town Hall and you’re given the option of viewing a Flickr pool, visiting the town hall’s website, or going to the Wikipedia page and reading up on the history of the building. Buildings, street views, whole town layouts perhaps…
* I must point out that I use the term ‘phone’ very loosely here. What I really mean is the “smartphone”, the pocket sized computer. The gadget in my pocket that is already more powerful than the old PC I have in the back bedroom.
Or let’s say you’re on a geology field trip, trekking around the Isle of Wight. Point your mobile device (see, we’re evolving already!) at a nearby outcrop and there, on top of the scene coming through your camera, are some controls that will display detailed information about what you’re looking at: that it’s made of sandstone, perhaps, or that it’s river-formed as opposed to wind-formed. The difference between river and wind formations!
So…AR meets Social Media meets The Cloud you could say. Like I say, there’ll be further posts around this from me – where I’ll attempt to look a bit closer at the applications out there and my thoughts on them.