Since early 2010, I have had the privilege of being a member of the editorial board of the journal Transactions of the Institute of British Geographers. It is a fascinating position, as the journal covers a wide range of topics in geography and is recognised as one of the top journals in the field, so the submissions are usually of high quality. Over the past 3 years, I have followed a range of papers that deal with various aspects of Geographic Information Science (GIScience) from submission to publication, either as a reviewer or as an associate editor.
In early 2011, I agreed to coordinate a virtual issue on GIScience. The virtual issue is a collection of papers from the archives of the journal, demonstrating the breadth of coverage and the development of GIScience within the discipline of geography over the years. The virtual issue provides free access to a group of papers for a period of a year, so they can be used for teaching and research.
Editing the virtual issue was a very interesting task – I was exploring the archives of the journal, going back to papers that appeared in the 1950s and 1960s. When looking for papers relevant to GIScience, I came across various papers that relate to geography’s ‘Quantitative Revolution’. The evolution of the use of computers in geography, and later the applications of GIS, is covered in many papers, so the selection was a challenge. Luckily, another member of the editorial board, Brian Lees, is also well versed in GIScience as the editor of the International Journal of Geographical Information Science. Together, we made the selection of the papers that are included in the issue. Other papers are not part of the virtual issue but are valuable further reading.
To accompany the virtual issue, I have written a short piece focusing on the nature of GIScience in geography. The piece is titled “Geographic Information Science: tribe, badge and sub-discipline” and explores how the latest developments in technology and practice are integrated and resisted by the core group of people who are active GIScience researchers in geography.
You can access the virtual issue on the Wiley-Blackwell online library, where you will find papers from 1965 to today, with links to further papers that are relevant but not freely accessible. The list of authors is impressive, including many names that are associated with the development of GIScience over the years, from Torsten Hägerstrand or David Rhind to current researchers such as Sarah Elwood, Agnieszka Leszczynski or Matt Zook.
The virtual issue was timed to coincide with the GIScience 2012 conference, where it will be officially launched.
As I cannot attend the conference, and as my paper mentions the Twitter-based GeoWebChat (see http://mappingmashups.net/geowebchat/), which is coordinated by Alan McConchie, I am planning to use this medium to run a #geowebchat dedicated to the virtual issue on 18th September 2012, at 4pm EDT / 9pm BST, so those who attend the conference can join at the end of the workshops day.
At the State of the Map (EU) 2011 conference, held in Vienna from 15 to 17 July, I gave a keynote talk on the relationships between the OpenStreetMap (OSM) community and the GIScience research community. Of course, these relationships are especially important for researchers who are working on Volunteered Geographic Information (VGI), due to the major role of OSM in this area of research.
The talk included an overview of what researchers have discovered about OpenStreetMap over the 5 years since we started to pay attention to OSM. One striking result is that the issue of positional accuracy does not require much more work by researchers. Another important outcome of the research is the understanding that quality is impacted by the number of mappers, and that the data can be used with confidence for mainstream geographical applications when certain conditions are met. These results are both useful and of interest to a wide range of groups, but there remain key areas that require further research – for example, specific facets of quality, community characteristics and how the OSM data is used.
Reflecting on the body of research, we can start to form a ‘code of engagement’ for both academics and mappers who are engaged in researching or using OpenStreetMap. One such guideline would be that it is both prudent and productive for any researcher to do some mapping herself, and to understand the process of creating OSM data, if the research is to be relevant and accurate. Other aspects of the proposed ‘code’ are covered in the presentation.
Following successful funding for the European Union FP7 EveryAware project and the EPSRC Extreme Citizen Science activities, the Department of Civil, Environmental and Geomatic Engineering at UCL is inviting applications for a postdoctoral position and 3 PhD studentships. Please note that these positions are open to students from any EU country.
These positions are in the ‘Extreme Citizen Science’ (ExCiteS) research group. The group’s activities focus on the theory, methodologies, techniques and tools that are needed to allow any community to start its own bottom-up citizen science activity, regardless of the level of literacy of the users. Importantly, Citizen Science is understood in the widest sense, including perceptions and views – so participatory mapping and participatory geographic information are integral parts of the activities.
The research themes that the group explores include Citizen Science and Citizen Cyberscience; Community and participatory mapping/GIS; Volunteered Geographic Information (OpenStreetMap, Green Mapping, Participatory GeoWeb); Usability of geographic information and geographic information technology, especially with non-expert users; GeoWeb and mobile GeoWeb technologies that facilitate Extreme Citizen Science; and identifying scientific models and visualisations that are suitable for Citizen Science.
Research Associate in Extreme Citizen Science – a 2-year, postdoctoral research associate position commencing 1 May 2011.
The research associate will lead the development of an ‘Intelligent Map’ that allows non-literate users to upload data securely and to visualise their information alongside data from other users. Permissions need to be developed in accordance with cultural sensitivities. As uploaded data from multiple users sharing the same system increase over time, repeating patterns will begin to emerge that indicate particular environmental trends.
The role will also include some general project-management duties, guiding the PhD students who are working on the project. Travel to the forest communities in Cameroon with which we are working will be necessary.
Complete details about this post and application procedure are available on the UCL jobs website.
PhD Studentship – understanding citizen scientists’ motivations, incentives and group organisation – a 3.5-year fully funded studentship. We are looking for applicants with a good honours degree (1st Class or 2:1 minimum), and an MA or MSc in anthropology, geography, sociology, psychology or related discipline. The applicant needs to be familiar with quantitative and qualitative research methods, and be able to work with a team that will include programmers and human-computer interaction experts who will design systems to be used in citizen science projects. Travel will be required as part of the project. A willingness to live for short periods in remote forest locations in simple lodgings, eating local food, will be necessary. French language skills are desirable.
The research itself will focus on motivations, incentives and understanding of the needs and wishes of participants in citizen science projects. We will specifically focus on engagement of non-literate people in such projects and need to understand how the process – from data collection to analysis – can be made meaningful and useful for their everyday life. The research will involve using quantitative methods to analyse large-scale patterns of engagement in existing projects, as well as ethnographic and qualitative study of participants. The project will include working with non-literate forest communities in Cameroon as well as marginalised communities in London.
Complete details about this post and application procedure are available on the UCL jobs website.
PhD Studentship in geographic visualisation for non-literate citizen scientists – a 3.5-year fully funded studentship. The applicant should possess a good honours degree (1st Class or 2:1 minimum), and an MSc in computer science, human-computer interaction, electronic engineering or related discipline. In addition, they need to be familiar with geographic information and software development, and be able to work with a team that will include anthropologists and human-computer interaction experts who will design systems to be used in citizen science projects. Travel will be required as part of the project. A willingness to live for short periods in remote forest locations in simple lodgings, eating local food, will be necessary. French language skills are desirable.
Complete details about this post and application procedure are available on the UCL jobs website.
In addition, we offer a PhD Studentship on How interaction design and mobile mapping influences participation in Citizen Science, which is part of the EveryAware project and is also open to any EU citizen.
An interesting talk by Mike Goodchild, given in a lecture at the US NSF and entitled ‘From Community Mapping to Critical Spatial Thinking’, provides a good overview of VGI and links it to the understanding of spatial concepts and to integrating them into teaching and research.
The interesting issue raised in the talk is the link between people’s ability to use spatial information and the development of spatial thinking. One vivid memory from the first State of the Map conference was a presentation from a person who was trying to use a simple GPS receiver way beyond what it was capable of doing, and the tough questioning from the audience at the end, basically telling him that he got it wrong and needed to rethink his project. What was clear was that, for people who are engaged in active data collection and tool development, critical spatial thinking and understanding of the technology evolved. At the same time, the evidence from end-users of SatNav devices shows a reduction in spatial understanding due to the ‘tunnel vision’ that the user interface promotes.
Significantly, the latter group is much larger than the first. So are we ending up with shallow spatial understanding without critical spatial thinking?
12 June, 2010
While sorting out our departmental GIS library, I came across a small booklet titled Computers and the Renaissance of Cartography from 1976. It was written by Dr Tom Margerison, the first editor of New Scientist, and describes the activities of the Experimental Cartography Unit (ECU), which pioneered the use of computers for geographical and cartographical applications – though Prof. David Rhind told me that the description should be taken with a pinch of salt, and that there are alternative accounts.
Interestingly, the ECU operated within the Royal College of Art to encourage new designs and innovations in map making. It was established in 1967 and operated until the late 1980s.
The booklet provides a description of the main processes of assembling maps at the ECU in the middle of the 1970s, and what is especially interesting is to see some amazing outputs of maps from that time, which, unlike the typical crude output of Symap, are beautiful and clear.
I have asked Dan Lewis, who was involved in the digitising of the CATMOG catalogue of booklets about quantitative methods in geography, to turn this booklet into PDF format so we can share it. Dan put some of the maps on his blog.
If you want to download the booklet – it is now available here.
Today is a good day to publish this booklet, following the announcement that Prof. Peter Woodsford, who was among the founders of Laser-Scan (now 1Spatial), received an MBE for his services to the geographic information industry in the Queen’s birthday honours list. It was the equipment of Laser-Scan that enabled the creation of these maps.
4 April, 2010
The opening of Ordnance Survey datasets at the beginning of April 2010 is bound to fundamentally change the way OpenStreetMap (OSM) information is produced in the UK. So just before this major change starts to influence OpenStreetMap, it is worth evaluating what has been achieved so far without this data. It is also time to update the completeness study, as the previous ones were conducted with data from March 2008 and March 2009.
Following the same method that was used in all the previous studies (which is described in detail here), the latest version of Meridian 2 from OS OpenData was downloaded and compared to OSM data downloaded from GeoFabrik. The processing is now streamlined with MapBasic scripts, PostGIS scripts and final processing in Manifold GIS, so it is possible to complete the analysis within 2 days. The colour scheme for the map is based on Cynthia Brewer and Mark Harrower‘s ColorBrewer 2.
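For readers who want a feel for the heart of the processing, the core of the comparison is a per-cell sum of road lengths from each dataset. The following is a minimal PostGIS sketch of that step only – the table and column names (osm_roads, meridian_roads, grid) are illustrative assumptions, not the actual scripts used in the study:

```sql
-- Minimal sketch of the per-cell length comparison. Assumes hypothetical
-- tables osm_roads(geom) and meridian_roads(geom) and a pre-built
-- grid(id, geom), all in a projected CRS such as British National Grid
-- (EPSG:27700), so that ST_Length returns metres.
SELECT g.id,
       (SELECT COALESCE(SUM(ST_Length(ST_Intersection(o.geom, g.geom))), 0)
          FROM osm_roads o
         WHERE o.geom && g.geom) AS osm_length_m,
       (SELECT COALESCE(SUM(ST_Length(ST_Intersection(m.geom, g.geom))), 0)
          FROM meridian_roads m
         WHERE m.geom && g.geom) AS meridian_length_m
FROM grid g;
```

Comparing the two lengths in each cell (within a chosen tolerance) then gives the completeness classification that underlies the maps.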
By the end of March 2010, OpenStreetMap coverage of England had grown to 69.8%, from 51.2% a year ago. When attribute information is taken into account, the coverage had grown to 24.3%, from 14.7% a year ago. The chart on the left shows how the coverage progressed over the past 2 years, using the 4 data points that were used for analysis – March 2008, March 2009, October 2009 and March 2010. Notice that, in terms of capturing the geometry, less than 5% of cells are now significantly under-mapped when compared to Meridian 2. Another interesting aspect is the decline in empty cells – that is, grid cells that don’t have any feature in Meridian 2 but now have features from OSM appearing in them. So in terms of capturing road information for England, it seems like the goal of capturing the whole country with volunteer effort was within reach, even without the release of Ordnance Survey data.
On the other hand, when attributes are included in the analysis, the picture is very different.
The progression of coverage is far from complete, and although the area that is empty of features with a street or road name in Meridian 2 is much larger, the progress of OSM mappers in completing the information is much slower. While geometry coverage went up by 18.6 percentage points over the past year, attribute coverage increased by less than 10 percentage points (9.6 to be precise). The reason for this is likely to be the need to carry out a ground survey to find the street name without using other copyrighted sources.
Attributes are the area where I would expect the release of Ordnance Survey data to benefit OSM mapping. Products such as StreetView and VectorMap District can be used either to copy the street name manually (StreetView) or to write an algorithm that copies the street name and other attributes from a vector dataset – such as Meridian 2 or VectorMap District.
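As an illustration of the algorithmic option, one simple approach – sketched here in PostGIS with hypothetical table names and an assumed tolerance, so a thought experiment rather than a production-ready conflation process – is to copy the name of the nearest named road within a small distance:

```sql
-- Hypothetical sketch: copy the name of the nearest named VectorMap District
-- road within 20 m to each unnamed OSM way. Table names, column names and
-- the 20 m tolerance are illustrative assumptions; a real process would also
-- need to verify that the two geometries describe the same road.
UPDATE osm_roads o
SET name = (
    SELECT v.name
      FROM vmd_roads v
     WHERE v.name IS NOT NULL
       AND ST_DWithin(o.geom, v.geom, 20)
  ORDER BY ST_Distance(o.geom, v.geom)
     LIMIT 1
)
WHERE o.name IS NULL;
```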
Of course, this is a failure of the ‘crowd’ in the sense that this bit of information previously required an actual visit on the ground – a more challenging task than finding people who are happy to volunteer their time to digitise maps.
As in the previous cases, there are local variations, and the geography of the coverage is interesting. The information includes 4 time points, so the most appropriate visualisation is one that allows for comparison and transition between maps. Below is a presentation (you can download it from SlideShare) that provides maps for the whole of England as well as 5 regional maps, roughly covering the South West, London, Birmingham and the Midlands, Manchester and Liverpool, and Newcastle upon Tyne and the North West.
If you want to create your own visualisation, or use the results of this study, you can download the results in shapefile format from here.
For a very nice visualisation of Meridian 2 and OpenStreetMap data, see Ollie O’Brien’s SupraGeography blog.
31 March, 2010
Finally, after 2 years in the making, Interacting with Geospatial Technologies is out. It is the first textbook dedicated to usability and Human-Computer Interaction (HCI) aspects of geographical information technologies. It covers desktop, Web and mobile applications and how they can be designed so they are more effective, efficient, error-free, easy to learn and enjoyable, which is one version of the 5 E’s of usability.
I started thinking about the book in 2004, when I realised that the most recent academic books dedicated to HCI and GIS were published in 1993 and 1995. These are, respectively, David Medyckyj-Scott and Hilary Hearnshaw’s Human Factors in Geographic Information Systems and the collection of papers from the NATO workshop Cognitive Aspects of Human-Computer Interaction for Geographic Information Systems, edited by Tim Nyerges, David Mark, Robert Laurini and Max Egenhofer. While these books and the collections of papers in them are still valuable, it must be noted that in the early 1990s, Web-based GIS was just starting to appear, desktop GIS was fairly basic, mobile GIS was not even experimental, and GIS trade journals argued about which UNIX workstation was the best for GIS.
Apart from these books, the proceedings of COSIT (Conference on Spatial Information Theory) are also valuable sources of academic research on spatial cognition and other principles of geographical and spatial information, and there are also many papers about GIS in academic journals.
However, not much attention was paid to the everyday use of geographical information technologies, and no textbook offered an introduction in a form accessible to postgraduate students and software developers. So, after complaining at various conferences that there was a clear need for such a book, I started working on it. It was an interesting process to identify suitable authors and encourage them to contribute to the book.
While offering the breadth of several authors who specialise in different aspects of the field, I think the textbook is coherent and consistent, and its style both accessible and readable. The editing process was more active and time-sensitive than is often the case in academic books, to ensure that the textbook is usefully up-to-date. On UCL’s MSc in GIS, a recent course based on the textbook was well received by students.
The book covers the principles and the practical aspects of interaction with geospatial technologies. There are sections about spatial cognition, cartography, user-centred design and usability engineering – here is the table of contents.
So, now you can get your own copy – and any feedback is welcomed.
5 March, 2010
The Commission on Use and User Issues of the International Cartographic Association (ICA) is currently working on a new handbook specifically addressing the application of user research methods and techniques in the area of geographical information and its applications.
In order to share experiences and interesting case studies, a workshop is being organised by the Commission, in collaboration with UCL, on the day preceding GISRUK 2010: Tuesday, 13th April 2010.
The programme for the workshop is now complete, and the programme and abstracts of the papers that will be discussed during the meeting are available here.
For information on the commission, visit the website of the ICA Commission on Use and User Issues; to register for the workshop, follow the instructions on the GISRUK 2010 website.
22 February, 2010
The question from Jeremy Morley – ‘An often quoted figure estimates that 80% of information contains some geographic reference – anyone got the source reference for this?!’ – led me to search for an answer. This assertion is indeed often quoted in governmental documents, academic papers and trade magazines.
So, what is the source? While V1 magazine suggests that it links to a magazine article from 1992, a search on Google Scholar shows that William Huxhold’s 1991 book ‘An Introduction to Urban Geographic Information Systems’ is the one mentioned when this factoid is used – for example, here, here or here, although the last one includes an independent assessment that uses an 80% value.
Let’s look at what was said in the original book, on pages 22-23:
‘A 1986 brochure (Municipality of Burnaby) published by the Municipality of Burnaby, British Columbia, reported the results of a needs analysis for an urban geographic information system (GIS) in that municipality: eighty to ninety percent of all the information collected and used was related to geography.’
On page 236, the following statement can be found:
‘Chapter 1 reported that 80-90 percent of all the information used by local government is related to geography.’
And the latter is probably the source of the famous statement. So for about 20 years, the GIS community has been using a powerful assertion which is actually based on a brochure and not on a rigorous analysis of evidence. Maybe, as John Fagan suggested, it wasn’t a good idea to look too closely!
29 January, 2010
After the publication of the comparison of OpenStreetMap and Google Map Maker coverage of Haiti, Nicolas Chavent from the Humanitarian OpenStreetMap Team contacted me and turned my attention to the geographical dataset of the UN Stabilization Mission in Haiti (known as MINUSTAH), which is seen as the core dataset for the post-earthquake humanitarian effort; a comparison with this dataset might therefore be helpful, too. The comparison of the two Volunteered Geographic Information (VGI) datasets, OpenStreetMap and Google Map Maker, with this core dataset also exposed an aspect of the usability of geographical information in emergency situations that is worth commenting on.
For the purpose of the comparison, I downloaded two datasets from GeoCommons – the detailed maps of Port-au-Prince and the Haiti road network. Both are reported on GeoCommons as originating from MINUSTAH. I combined them together and then carried out the comparison. As in the previous case, the comparison focused only on the length of the roads, with the hypothesis that, if there is a significant difference in the length of the roads in a given grid square, it is likely that the longer dataset is more complete. Other comparisons between established and VGI datasets lend support to this hypothesis, although caution must be applied when the differences are small. The following maps show the differences between the MINUSTAH and OpenStreetMap datasets, and between the MINUSTAH and Google Map Maker datasets. I have also reproduced the original map that compares OpenStreetMap and Map Maker, for the purpose of comparison and consistency, as well as for cartographic quality.
The maps show that MINUSTAH does provide fairly comprehensive coverage across Haiti (as expected) and that the volunteered efforts of OpenStreetMap and Map Maker provide further details in urban areas. There are areas that are only covered by one of the datasets, so they all have value.
The final comparison uses the 3 datasets together, with the same criterion as in the previous maps – the dataset with the longest length of roads is the one that is considered the most complete.
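In query terms, assuming the per-cell road lengths of the three datasets have already been summed into one table (the table and column names below are illustrative assumptions, not the actual analysis scripts), the classification is a simple comparison:

```sql
-- Sketch of the 'longest wins' classification, assuming a hypothetical table
-- cell_lengths(id, minustah_m, osm_m, mapmaker_m) built with per-cell sums
-- as in the earlier sketch. Ties go to the first match in the CASE order.
SELECT id,
       CASE GREATEST(minustah_m, osm_m, mapmaker_m)
         WHEN minustah_m THEN 'MINUSTAH'
         WHEN osm_m      THEN 'OpenStreetMap'
         ELSE                 'Map Maker'
       END AS most_complete
FROM cell_lengths;
```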
It is interesting to note the south/north divide between OpenStreetMap and Google Map Maker, with Google Map Maker providing more details in the north and OpenStreetMap in the south (closer to the earthquake epicentre). When compared over the areas in which there is at least 100 metres of MINUSTAH coverage, OpenStreetMap is, overall, 64.4% complete, while Map Maker is 41.2% complete. Map Maker covers a further 354 square kilometres that are not covered by MINUSTAH or OpenStreetMap, and OpenStreetMap covers a further 1044 square kilometres that are missing from the other datasets, so clearly there is a benefit in integrating them. The grid that includes the analysis of the integrated datasets is available in shapefile format here, in case it is of any use or if you would like to carry out further analysis or visualise it.
While working on this comparison, it was interesting to explore the data fields in the MINUSTAH dataset, some of which are included to provide operational information, such as road condition or the length of time that it takes to travel along a road. These are the hallmarks of practical, operational geographical information, with details that are directly relevant to end-users in their daily tasks. The other two datasets have been standardised for universal coverage and delivery, and this is apparent in their internal data structure. The Google Map Maker schema is closer to traditional geographical information products in field names and semantics, exposing the internal engineering of the system – for example, it includes a country code, which is clearly meaningless when you are downloading a single country! OpenStreetMap (as provided by either CloudMade or GeoFabrik) keeps with the simplicity mantra and is fairly basic. Yet the schema is the same in Haiti as in England or any other place, so just like Google, it takes a system view of the data and its delivery.
This means that, from an end-user perspective, while these VGI data sources were produced in a radically different way to traditional GI products, their delivery is similar to the way in which traditional products were delivered, burdening the user with the need to understand the semantics of the different fields before using the data.
In emergency situations, this is likely to present an additional hurdle for the use of any data, as it is not enough to provide the data for download through GeoCommons, GeoFabrik or Google – it is how it is going to be used that matters. Notice that the maps tell a story in which an end-user who wants to have full coverage of Haiti has to combine three datasets, so the semantic interpretation can be an issue for such a user.
So what should a user-centred design of GI for an emergency situation look like? The general answer is ‘find the core dataset that is used by the first responders, and adapt your data to this standard’. In the case of Haiti, I would suggest that the MINUSTAH dataset is a template for such a standard. Users of GI on the ground are more likely to be already exposed to, and familiar with, the core dataset, and its fields are relevant and operational, which makes it more ‘user-centred’ than the other two datasets. Therefore, it would be beneficial for VGI providers who want to help in an emergency situation to ensure that their data comply with the local de facto standard – the dataset being used on the ground – and bring their schema to fit it.
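To make this concrete, fitting a VGI schema to a local operational one can be as simple as a one-off remapping query. The sketch below is purely illustrative – the target field names and the class mapping are invented for the example and are not the real MINUSTAH schema:

```sql
-- Hypothetical illustration of remapping a VGI road table to a local
-- operational schema. All target field names and the class mapping are
-- invented for illustration; they are NOT the real MINUSTAH schema.
CREATE TABLE roads_operational AS
SELECT o.osm_id                AS source_id,
       o.name                  AS road_name,
       CASE o.highway
         WHEN 'motorway' THEN 'primary'
         WHEN 'trunk'    THEN 'primary'
         WHEN 'primary'  THEN 'primary'
         ELSE                 'secondary'
       END                     AS road_class,
       NULL::text              AS road_condition,  -- operational field: needs ground survey
       o.geom
FROM osm_roads o;
```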
Of course, this is what GI ontologies are for: to allow for semantic interoperability. The issue with them is that they add at least two steps – defining the ontology, and figuring out the process to translate the dataset that you have acquired into the required format. This is therefore something that should be done by data providers, not by end-users dealing with the real situation on the ground – they have more important things to do than to find a knowledge engineer who understands semantic interoperability…