Usability of VGI in the Haiti earthquake response and the 2nd workshop on usability of geographic information

On 23rd March 2010, UCL hosted the second workshop on usability of geographic information, organised by Jenny Harding (Ordnance Survey Research), Sarah Sharples (Nottingham), and myself. This workshop extended the range of topics covered in the first one, which we reported on at the AGI conference last year. This time, we had about 20 participants and it was an excellent day, covering a wide range of topics – from a presentation by Martin Maguire (Loughborough) on the visualisation and communication of climate change data, to Johannes Schlüter's (Münster) discussion of the use of XO computers with schoolchildren, to a talk by Richard Treves (Southampton) on the impact of Google Earth tours on learning. Especially interesting was the combination of sound and other senses in the work of Nick Bearman (UEA) and Paul Kelly (Queens University, Belfast).

Jenny's introduction highlighted the different aspects of GI usability, from those that are specific to data to issues with application interfaces. The integration of data with software that creates the user experience in GIS was discussed throughout the day, and it is one of the reasons that the usability of the information itself is important in this field. The Ordnance Survey is currently running a project to explore how usability can be integrated into the design of its products – Michael Brown's presentation discussed the development of a survey as part of this project. The integration of data and application was also central to Philip Robinson's (GE Energy) presentation on the use of GI by utility field workers.

My presentation focused on some preliminary thoughts based on the analysis of the OpenStreetMap and Google Map communities' responses to the earthquake in Haiti at the beginning of 2010. The presentation discussed a set of issues that, if explored, will provide insights that are relevant beyond the specific case and can illuminate issues relevant to the daily production and use of geographic information – for example, the very basic metadata that was provided on portals such as GeoCommons, and what users can do to evaluate the fitness for use of a specific dataset (see also Barbara Poore's (USGS) discussion of the metadata crisis).

Interestingly, the day after giving this presentation I had a chance to discuss GI usability with Map Action volunteers who gave a presentation at GEO-10. Their presentation filled in some gaps, but also reinforced the value of researching GI usability for emergency situations.

For a detailed description of the workshop and abstracts, see this site. All the presentations from the workshop are available on SlideShare, and my presentation is below.

OpenStreetMap quality evaluation and other comparisons

Comparing my analysis of OpenStreetMap (OSM) quality to other examples of quality evaluation brings up some core issues about the nature of the new GeoWeb and the use of traditional sources. The examples that I'm referring to are Etienne Cherdlu's SOTM 2007 talk 'OSM and the art of bicycle maintenance', Dair Grant's comparison of OSM to Google Maps and reality, Ed Johnson's analysis this summer, and Steven Feldman's brief evaluation in Highgate.

Meridian 2 and OSM in the area of Highgate, North London

The first observation is of the importance and abundance of well-georeferenced, vector-derived public mapping sites, which make several of these comparisons possible (Cherdlu, Grant and Feldman). The previous generation of stylised street maps is not readily available for such a comparison. In addition to availability, the ease with which these sites can be mashed up is also a significant enabling factor. Without this comparable geographical information, the evaluation would be much more difficult.

Secondly, when a public mapping website was used, it was Google Maps. If Microsoft's Virtual Earth had also been used, it would arguably have allowed a three-way comparison, as the Microsoft site uses Navteq information while Google uses TeleAtlas information. Ordnance Survey (OS) OpenSpace is also a natural candidate for comparison. Was it familiarity that led to the selection of Google Maps? Or is it because the method of comparison is visual inspection, so adding a third source makes it more difficult? Notice that Google Maps has the cachet of being a correct depiction of reality – which Etienne, Dair and Bob Barr have demonstrated not to be the case!

Thirdly, and most significantly, only when vector data was used – in our comparison and in parts of what Ed Johnson has done – did a comprehensive analysis of large areas become possible. This highlights the important role of formats in the GeoWeb: raster is fabulous for the delivery of cartographic representations, but it is vector data that is suitable for analytical and computational work. Only OSM allows the user to download vector data easily – no other mass provider of public mapping does.

Finally, there is the issue of access to information, tools and knowledge. As a team working at a leading research university (UCL), my colleagues and I had easy access to detailed vector datasets and the OS 1:10,000 raster. We also have multiple GIS packages at our disposal, so we can use whichever one performs a task with the least effort. The other comparisons had to rely on publicly available datasets and software. In such unequal conditions, it is not surprising that I will argue that the comparison we carried out is more robust and consistent. The issue that comes up here is the balance between amateurs and experts, which is quite central to Web 2.0 in general. Should my analysis be trusted more than Dair's or Etienne's, both of whom are very active in OSM? Does Steven's familiarity with Highgate, which is greater than mine, make him more of an expert in that area than my consistent application of analysis?

I think that the answer is not clear cut: academic knowledge entails consistent scrutiny of the data, and I have the access and the training to conduct a very detailed assessment of geographical information quality. In addition, my first job, in 1988, was in geographical data collection and GIS development, so I also have professional knowledge in this area. Yet local knowledge is just as valuable in a specific area and is much better than a mechanical, automatic evaluation. So what is happening is an exchange of knowledge, methods and experiences between the two sides, from which both, I hope, can benefit.

OSM quality evaluation

In the past year I have worked on the evaluation of OpenStreetMap data. I was helped by Patrick Weber, Claire Ellul, and especially Naureen Zulfiqar, who carried out part of the analysis of motorways. The OSM data was compared against Ordnance Survey Meridian 2 and the 1:10,000 raster, as they have enough similarity to justify a comparison. Now, as the fourth birthday of OSM approaches, it is a good time to evaluate what has been achieved. The analysis shows that, where OSM was collected by several users and benefited from some quality assurance, the quality of the data is comparable to the OS datasets and can be fit for many applications. The positional accuracy is about 6 metres, which is expected for the data collection methods used in OSM. The comparison of motorways shows about 80% overlap between OSM and OS – but more research is required. The challenges are the many areas that are not covered – currently, OSM has good coverage for only 25% of the land area of England. In addition, in areas that are covered well, quality assurance procedures should be considered – and I'm sure that the OSM crowd will find great ways to make these procedures fun. OSM also doesn't cover areas at the bottom of the deprivation scale as well as it covers wealthier areas. The map below shows the quality of coverage of the two datasets for England, with blue marking areas where OSM coverage is good and red where it is poor.

Difference between OSM and OS Meridian for England
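To give a sense of how an overlap percentage like the 80% figure above can be computed, here is a minimal sketch of a buffer comparison between two road centrelines: buffer the reference line by a tolerance (6 metres here, matching the positional accuracy quoted above) and measure how much of the tested line falls inside it. The coordinates and the pure-Python sampling routine are illustrative only – they are not the actual datasets or the GIS workflow used in the study.

```python
import math

def point_to_segment(p, a, b):
    """Shortest distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def overlap_percentage(reference, tested, epsilon, step=1.0):
    """Approximate share (%) of the tested polyline that lies within
    epsilon units of the reference polyline (a buffer comparison)."""
    inside = total = 0
    for a, b in zip(tested, tested[1:]):
        seg_len = math.hypot(b[0] - a[0], b[1] - a[1])
        n = max(1, int(seg_len / step))
        for i in range(n):
            # Sample a point part-way along this segment
            t = (i + 0.5) / n
            p = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
            d = min(point_to_segment(p, q, r)
                    for q, r in zip(reference, reference[1:]))
            inside += d <= epsilon
            total += 1
    return 100.0 * inside / total

# Toy example: a reference road and a crowd-sourced trace of it,
# compared with a 6 m buffer
os_road = [(0.0, 0.0), (100.0, 0.0)]
osm_trace = [(0.0, 3.0), (50.0, 5.0), (100.0, 2.0)]
print(overlap_percentage(os_road, osm_trace, epsilon=6.0))  # → 100.0
```

In practice this kind of analysis is run inside a GIS with proper buffer and intersection operations over whole datasets; the sampling shortcut here simply makes the idea self-contained.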

The full report is available here, and if someone is willing to sponsor further analysis – please get in touch!

The paper itself has been published – Haklay, M., 2010, "How good is volunteered geographical information? A comparative study of OpenStreetMap and Ordnance Survey datasets", Environment and Planning B: Planning and Design 37(4), 682–703.