OpenStreetMap and Meridian 2 – releasing the outputs

Back in September, during AGI Geocommunity ’09, I had a chat with Jo Cook about the barriers to the use of OpenStreetMap data by people who are not experts in the way the data was created and don’t have the time and resources to evaluate the quality of the information. One of the difficulties is deciding whether the coverage is complete (or close to complete) for a given area.

To help with this problem, I obtained permission from the Ordnance Survey research unit to release the results of my analysis, which compares OpenStreetMap coverage to the Ordnance Survey Meridian 2 dataset (see below about the licensing conundrum that the analysis produced as a by-product).

Before using the data, it is necessary to understand how it was created. The methodology can be used for the comparison of completeness, as well as for the systematic analysis of other properties of two vector datasets. It is based on the evaluation of two datasets A and B, where A is the reference dataset (Ordnance Survey Meridian 2 in this case) and B is the test dataset (OpenStreetMap), together with a dataset C that contains the spatial units used for the comparison (1 km grid squares across England).

The first step in the analysis is to decide on the spatial units that will be used in the comparison process (dataset C). This can be a reference grid with a standard cell size, or some other meaningful geographical unit such as census enumeration units or administrative boundaries (see the previous post, where lower level super output areas were used). There are advantages to using a regular grid, as it avoids, to some extent, problems that arise from the Modifiable Areal Unit Problem (MAUP).
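To make this concrete, here is a minimal sketch of building the comparison units (dataset C) with geopandas. The cell size follows the 1 km grid used here; the bounding-box coordinates are purely illustrative and are not the actual study extent.

```python
import geopandas as gpd
from shapely.geometry import box

CELL = 1_000  # cell size in metres (1 km grid squares)
# Hypothetical extent in British National Grid (EPSG:27700) coordinates
xmin, ymin, xmax, ymax = 500_000, 150_000, 560_000, 200_000

# Build one square polygon per grid cell and give each an identifier
cells = [box(x, y, x + CELL, y + CELL)
         for x in range(xmin, xmax, CELL)
         for y in range(ymin, ymax, CELL)]

grid = gpd.GeoDataFrame({"cell_id": range(len(cells)), "geometry": cells},
                        crs="EPSG:27700")
```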

The two datasets (A and B) are then split along the boundaries of the geographical units, while preserving the attributes in each part of the object, to ensure that no information is lost. The splitting is necessary to support queries that address only objects that fall within each geographical unit.
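As a sketch of this splitting step (assuming a reasonably recent geopandas with line/polygon overlay support), the two road datasets are intersected with the grid from the sketch above. The names 'osm_roads' and 'meridian_roads' are hypothetical GeoDataFrames of road centre lines, already reprojected to British National Grid.

```python
# The overlay cuts every road at the cell boundaries while keeping its
# original attribute columns, plus the cell_id of the unit it falls in.
osm_split = gpd.overlay(osm_roads, grid, how="intersection", keep_geom_type=True)
meridian_split = gpd.overlay(meridian_roads, grid, how="intersection", keep_geom_type=True)
```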

The next step involves the creation of very small buffers around the geographical units. This is necessary because, due to computational errors in the algorithm that calculates the intersections and splits the objects, and in the implementation of operators in the specific GIS package used, the co-ordinates where the object was split might be near, but not exactly at, the boundary of the reference geographical unit. The buffers should be very small, to ensure that only objects that genuinely fall inside the unit’s area are included in the analysis. In our case, the buffers are 25 cm around grid squares that are 1 km on a side.
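Continuing the sketch, the tolerance buffer can look like this: each cell is grown by 0.25 m and the split fragments are assigned to cells with a 'within' query. (With gpd.overlay the assignment already comes for free, so the fragments are first stripped of their cell_id here purely to mirror the more general workflow described in the text.)

```python
# Grow each 1 km cell by 25 cm so that fragments whose split coordinates land
# fractionally outside the cell edge (floating-point error in the intersection)
# are still picked up by the 'within' query.
buffered = grid.copy()
buffered["geometry"] = buffered.geometry.buffer(0.25)

osm_by_cell = gpd.sjoin(osm_split.drop(columns="cell_id"),
                        buffered, predicate="within")
meridian_by_cell = gpd.sjoin(meridian_split.drop(columns="cell_id"),
                             buffered, predicate="within")
```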

Finally, spatial queries can be carried out to evaluate the total length, area or any other property of dataset A that falls within each unit, and to compare these values to the results of the analysis of dataset B. The whole process is described in the image above.
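In the same sketch, the per-unit query reduces to summing fragment lengths (in metres, since EPSG:27700 is a metric CRS) per cell for each dataset.

```python
# Total road length per grid cell, for each dataset
osm_len = osm_by_cell.geometry.length.groupby(osm_by_cell["cell_id"]).sum()
meridian_len = meridian_by_cell.geometry.length.groupby(meridian_by_cell["cell_id"]).sum()
```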

The shapefile provided here contains values from -4 to +4, which correspond to the difference between OpenStreetMap and Meridian 2. For each grid square, the following value was calculated:

∑(OSM road length) - ∑(Meridian 2 road length)

If the value is negative, the total length of Meridian 2 objects is greater than the length of OpenStreetMap objects. A value of -1, for example, means that ‘there are between 0 and 1000 metres more Meridian 2’ in this grid square, whereas 1 means that ‘there are between 0 and 1000 metres more OpenStreetMap’. Importantly, 4 and -4 mean anything with a positive or negative difference of over 3000 metres. In general, the analysis shows that, if the difference is at level 3 or 4, you can consider OpenStreetMap as complete, while 1 and 2 usually mean that some minor roads are likely to be missing. A value of -1 should also be fairly easy to complete. In areas where the values are -2 to -4, the OpenStreetMap community still needs to do the work to complete the map.
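For clarity, this is how the released -4 to +4 values can be reproduced from the per-cell lengths in the sketch above. The exact code used at UCL is not published; the binning simply follows the band definitions given in the text.

```python
import numpy as np

# Signed difference in metres; cells present in only one dataset count in full
diff = osm_len.subtract(meridian_len, fill_value=0)

# Band +1..+4: OSM longer by 0-1 km, 1-2 km, 2-3 km, over 3 km; -1..-4 likewise for Meridian 2
band = np.sign(diff) * np.minimum(np.ceil(np.abs(diff) / 1000), 4)
```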

Finally, a licensing conundrum that exposes problems both with the Ordnance Survey principle that anything derived from its maps is Crown copyright and part of Ordnance Survey intellectual property, and with the use of the Creative Commons licence for OpenStreetMap data.

Look at the equation above. The left-hand side is indisputably derived from OpenStreetMap, so it is under the CC-By-SA licence. The right-hand side is indisputably derived from Ordnance Survey, so it is clearly Crown copyright. The equation, however, includes a lot of UCL’s work and, most importantly, does not contain any geometrical object from either dataset – the grid was created afresh. Yet, without ‘deriving’ the total length from each dataset, it is impossible to compute the results presented here – but they are not derived from one or the other alone. So what is the status of the resulting dataset? It is, in my view, UCL copyright – but it is an interesting problem, and I might be wrong.

You can download the data from here – the file includes a metadata document.

If you use the dataset, please let me know what you have done with it.


OpenStreetMap and Ordnance Survey Master Map – Beyond good enough

OSM overlap with Master Map ITN for A and B roads

In June, Aamer Ather, an M.Eng. student at the department, completed his research comparing OpenStreetMap (OSM) to Ordnance Survey Master Map Integrated Transport Layer (ITN). This was based on the previous piece of research in which another M.Eng. student, Naureen Zulfiqar, compared OSM to Meridian 2.

The results are really surprising. The analysis shows that when A-roads, B-roads and a motorway from ITN are compared to OSM data, the overlap can reach values of over 95%. When the comparison with Master Map was completed, it became clear that OSM is of better quality than Meridian 2. It is also interesting to note that the higher overlap with ITN was achieved under stricter criteria for the buffering procedure that is used for the comparison.
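The buffer-overlap idea can be sketched as follows: buffer the reference centre lines (ITN) and measure what share of the test lines (OSM) falls inside the buffer. The 5-metre buffer width, the direction of the comparison and the variable names below are illustrative assumptions, not the values used in Aamer’s dissertation.

```python
import geopandas as gpd

def overlap_percentage(test_lines: gpd.GeoDataFrame,
                       reference_lines: gpd.GeoDataFrame,
                       buffer_m: float = 5.0) -> float:
    """Share of the test lines' length that falls within a buffer of the reference lines."""
    buffer_geom = reference_lines.geometry.buffer(buffer_m).unary_union
    inside = test_lines.geometry.intersection(buffer_geom).length.sum()
    return 100.0 * inside / test_lines.geometry.length.sum()
```

Tightening the buffer width (the ‘stricter criteria’ mentioned above) makes the test harder to pass, so a high overlap under a narrow buffer is a stronger result.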

As noted, in the original analysis Meridian 2 was used as the reference dataset, the ground truth. However, comparing Meridian 2 and OSM is not a like-for-like comparison, because OSM is not generalised and Meridian 2 is. The justification for treating Meridian 2 as the reference dataset was that its nodes are derived from high-accuracy datasets, and it was expected that the 20-metre generalisation filter would not change positions significantly. It turns out that the generalisation affects the quality of Meridian 2 more than I anticipated. Yet the advantage of Meridian 2 is that it allows comparisons for the whole of England, since the file size remains manageable, while the complexity of ITN would make such an extensive comparison difficult and time-consuming.

The results show that for the 4 Ordnance Survey London tiles that we’ve compared, the results put OSM only 10-30% from the ITN centre line. Rather impressive when you consider the knowledge, skills and backgrounds of the participants. My presentation from the State of the Map conference, below, provides more details of this analysis – and the excellent dissertation by Aamer Ather, which is the basis for this analysis, is available to download here.

The one caveat that will need to be explored in future projects is that the comparison in London means that OSM mappers had access to very high-resolution imagery from Yahoo!, which has been georeferenced and rectified. The high precision might therefore be a result of tracing these images, and the question is what happens in places where high-resolution imagery is not available. Thus, we need to test more tiles in other places to validate the results in other areas of the UK.

Another student is currently comparing OSM to a 1:10,000 map of Athens, so by the end of the summer I hope that it will be possible to estimate quality in other countries. The comparison to ITN in other areas of the UK will wait for a future student who is interested in this topic!