
It is always nice to announce good news. Back in February, together with Richard Treves at the University of Southampton, I submitted an application to Google’s Faculty Research Award program for a grant to investigate Google Earth Tours in education. We were successful in getting a grant worth $86,883 USD. The project builds on my expertise in usability studies of geospatial technologies, including the use of eye tracking and other usability engineering techniques for GIS, and on Richard’s expertise in Google Earth tours and education and his longstanding interest in usability issues.

In this joint UCL/Southampton project, UCL will be the lead partner and we will appoint a junior researcher for a year to develop and run experiments that will help us understand the effectiveness of Google Earth Tours in geographical learning, and we aim to come up with guidelines for their use. If you are interested, let me know.

Our main contact at Google for the project is Ed Parsons. We were also helped by Tina Ornduff and Sean Askay who acted as referees for the proposal.
The core question that we want to address is “How can Google Earth Tours be used to create an effective learning experience?”

So what do we plan to do? Previous research on Google Earth Tours (GETs) has shown them to be an effective visualization technique for teaching geographical concepts, yet their use in this way is essentially passive.  Active learning is a successful educational approach where student activity is combined with instruction to enhance learning.  In the proposal we suggest that there is great education value in combining the advantages of the rich visualization of GETs with student activities. Evaluating the effectiveness of this combination is the purpose of the project, and we plan to do this by creating educational materials that consist of GETs and activities and testing them against other versions of the materials using student tests, eye tracking and questionnaires as data gathering techniques.

We believe that by improving the techniques by which spatial data is visualized we are improving spatial information access overall.
A nice aspect of getting the project funded is that it works well with a project led by Claire Ellul and Kate Jones and funded by JISC. The G3 project, or “Bridging the Gaps between the GeoWeb and GIS”, touches on similar aspects, and we will surely share knowledge with them.
For more background on Richard Treves, see his blog (where the same post is published!)

In October 2007, Francis Harvey commissioned me to write a review article for Geography Compass on Neogeography. The paper was written in collaboration with Alex Singleton at UCL and Chris Parker from the Ordnance Survey.
The paper covers several issues. Firstly, it provides an overview of the developments in Web mapping from the early 1990s to today. Secondly, in a similar way to my Nestoria interview, it explains the reasons for the changes that enabled the explosion of geography on the Web in 2005: GPS availability, Web standards, increased spread of broadband, and a new paradigm in programming APIs. These changes affected the usability of geographic technologies and started a new era in Web mapping. Thirdly, we describe several applications that demonstrate the new wave – the London Profiler, OS OpenSpace and OpenStreetMap. The description of OSM is somewhat truncated, so my IEEE Pervasive Computing paper provides a better discussion.
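The mash-up idea at the heart of this new wave is simple enough to show in a few lines. The sketch below is only an illustration: it uses the modern folium Python library (a wrapper around Leaflet) rather than the JavaScript APIs of the time, and the locations and labels are my own choices, simply to show what overlaying your own data on a public base map looks like.

```python
# Minimal mash-up sketch: overlay our own point data on a public base map.
# folium is a modern Python library used here purely for illustration; the
# applications discussed in the paper used JavaScript mapping APIs.
import folium

# Centre the map on UCL (coordinates are approximate).
m = folium.Map(location=[51.5246, -0.1340], zoom_start=14, tiles="OpenStreetMap")

# "Mash up" our own markers on top of the public base map.
folium.Marker([51.5246, -0.1340], popup="UCL").add_to(m)
folium.Marker([51.5194, -0.1270], popup="British Museum").add_to(m)

m.save("mashup.html")  # open in a browser to view the result
```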
The abstract of the paper is:

‘The landscape of Internet mapping technologies has changed dramatically since 2005. New techniques are being used and new terms have been invented and entered the lexicon such as: mash-ups, crowdsourcing, neogeography and geostack. A whole range of websites and communities from the commercial Google Maps to the grassroots OpenStreetMap, and applications such as Platial, also have emerged. In their totality, these new applications represent a step change in the evolution of the area of Internet geographic applications (which some have termed the GeoWeb). The nature of this change warrants an explanation and an overview, as it has implications both for geographers and the public notion of Geography. This article provides a critical review of this newly emerging landscape, starting with an introduction to the concepts, technologies and structures that have emerged over the short period of intense innovation. It introduces the non-technical reader to them, suggests reasons for the neologism, explains the terminology, and provides a perspective on the current trends. Case studies are used to demonstrate this Web Mapping 2.0 era, and differentiate it from the previous generation of Internet mapping. Finally, the implications of these new techniques and the challenges they pose to geographic information science, geography and society at large are considered.’

The paper is accessible on the Geography Compass website, and if you don’t have access to the journal, but would like a copy, email me.

A comparison of my analysis of OpenStreetMap (OSM) quality evaluation to other examples of quality evaluation brings up some core issues about the nature of the new GeoWeb and the use of traditional sources. The examples that I’m referring to are from Etienne Cherdlu’s SOTM 2007 ‘OSM and the art of bicycle maintenance’, Dair Grant’s comparison of OSM to Google Maps and reality, Ed Johnson’s analysis this summer and Steven Feldman’s brief evaluation in Highgate.

Meridian 2 and OSM in the area of Highgate, North London

The first observation is of the importance and abundance of well-georeferenced, vector-derived public mapping sites, which make several of these comparisons possible (Cherdlu, Dair and Feldman). The previous generation of stylised street maps is not readily available for such a comparison. In addition to the availability, the ease with which these sites can be mashed up is also a significant enabling factor. Without this comparable geographical information, the evaluation would be much more difficult.

Secondly, when a public mapping website was used, it was Google Maps. If Microsoft’s Virtual Earth had also been used, it would arguably have allowed a three-way comparison, as the Microsoft site uses Navteq information while Google uses TeleAtlas information. Using Ordnance Survey (OS) OpenSpace for comparison is also a natural candidate. Was it familiarity that led to the selection of Google Maps? Or is it because the method of comparison is visual inspection, so adding a third source makes it more difficult? Notice that Google has the cachet of being a correct depiction of reality, even though Etienne, Dair and Bob Barr have demonstrated that this is not the case!

Thirdly, and most significantly, only when vector data was used – in our comparison and in parts of what Ed Johnson has done – did a comprehensive analysis of large areas become possible. This shows the important role of formats in the GeoWeb – raster is fabulous for the delivery of cartographic representations, but it is vector data that is suitable for analytical and computational work. Only OSM allows the user easy download of vector data – no other mass provider of public mapping does.
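To give a sense of how directly OSM vector data can be pulled into an analysis, here is a minimal sketch using the osmnx Python library – a modern convenience tool that was not part of the original comparison; the place name and the statistic computed are purely illustrative.

```python
# Minimal sketch: download OSM vector data for an area and compute a simple
# statistic. osmnx is a modern wrapper around the OSM APIs, used here only
# to illustrate how easily OSM vector data can be obtained and analysed.
import osmnx as ox

# Download the drivable street network for Highgate as vector data.
graph = ox.graph_from_place("Highgate, London, UK", network_type="drive")

# Convert the edges to a GeoDataFrame and sum their lengths (metres).
edges = ox.graph_to_gdfs(graph, nodes=False)
total_length_km = edges["length"].sum() / 1000
print(f"Total street length in the area: {total_length_km:.1f} km")
```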

Finally, there is the issue of access to information, tools and knowledge. Working at a leading research university (UCL), I and the people who worked with me had easy access to detailed vector datasets and the OS 1:10,000 raster. We also have multiple GIS packages at our disposal, so we can use whichever one performs the task with the least effort. The other comparisons had to rely on publicly available datasets and software. In such unequal conditions, it is not surprising that I will argue that the comparison we carried out is more robust and consistent. The issue that comes up here is the balance between amateurs and experts, which is quite central to Web 2.0 in general. Should my analysis be more trusted than those of Dair or Etienne, both of whom are very active in OSM? Does Steven’s familiarity with Highgate, which is greater than mine, make him more of an expert in that area than my consistent application of analysis?

I think that the answer is not clear cut; academic knowledge entails the consistent scrutiny of the data, and I do have the access and the training to conduct a very detailed geographical information quality assessment. In addition, my first job in 1988 was in geographical data collection and GIS development, so I also have professional knowledge in this area. Yet, local knowledge is just as valuable in a specific area and is much better than a mechanical, automatic evaluation. So what is happening is an exchange of knowledge, methods and experiences between the two sides in which both, I hope, can benefit.
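To illustrate the kind of consistent, repeatable check that such an assessment involves, the sketch below compares total road length per grid cell between two vector datasets. The file names are hypothetical, GeoPandas stands in for the desktop GIS packages we actually used, and the datasets are assumed to share a projected coordinate system, so this is a sketch of the approach rather than our actual procedure.

```python
# Minimal sketch of a grid-based completeness comparison between two vector
# road datasets (e.g. OSM and Meridian 2). File names are hypothetical, and
# both datasets are assumed to be in the same projected CRS (e.g. British
# National Grid) so that lengths come out in metres.
import geopandas as gpd

osm = gpd.read_file("osm_roads.shp")
reference = gpd.read_file("meridian2_roads.shp")
grid = gpd.read_file("grid_1km.shp")  # 1 km analysis squares

for idx, cell in grid.iterrows():
    # Clip both road networks to the cell and compare total lengths (km).
    osm_km = osm.geometry.intersection(cell.geometry).length.sum() / 1000
    ref_km = reference.geometry.intersection(cell.geometry).length.sum() / 1000
    print(f"cell {idx}: OSM {osm_km:.2f} km, Meridian 2 {ref_km:.2f} km")
```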

Finding your way as a tourist

26 February, 2008

During the visit to Turin, I had an opportunity to experience the consequences of address matching and georeferencing that I’ve noted in the entry ‘British Museum Test’. After touring the city, I needed to get to a restaurant to meet colleagues who were staying at the Institute for Scientific Interchange (ISI) in Turin. The meeting place was the ‘Il Porto di Savona’ restaurant in Piazza Vittorio Veneto 2. Since the hotel room was connected to the Internet through a relatively slow ‘Swisscom Hospitality Service’ connection, I decided to try to find my way to the restaurant with Google Maps, which is the fastest to download.

My first attempt with Google was unsuccessful – trying to search for ‘Piazza Vittorio Veneto 2, Turin’ pointed me to a place 10 miles away from the city. The next attempt was with Yahoo! Maps, but this one could not find anything. Microsoft Virtual Earth failed to find the full name, but offered a location called ‘Piazza Vittorio’ which I selected, only to zoom in and discover that the full proper name does appear on the map! Using this name (‘Piazza Vittorio’) with Google also worked and it managed to find the location.

Turin Map Virtual Earth
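Out of curiosity, the same kind of query juggling can be done programmatically. The sketch below uses the geopy library with the Nominatim (OSM) geocoder – which, of course, was not one of the sites I tried at the time – simply to show how the different address strings can be compared side by side.

```python
# Minimal sketch: try the different address strings against a geocoder and
# compare the results. geopy with the Nominatim (OSM) geocoder is used only
# as an illustration; it is not one of the sites tried in the post.
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="georeferencing-test")

queries = [
    "Piazza Vittorio Veneto 2, Turin",
    "Piazza Vittorio, Turin",
]

for query in queries:
    location = geolocator.geocode(query)
    if location is None:
        print(f"{query!r}: no result")
    else:
        print(f"{query!r}: ({location.latitude:.5f}, {location.longitude:.5f})")
```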

Interestingly, because the connection was relatively slow, the Microsoft interface was fairly annoying as parts failed to load, and I was deterred from using Multimap as I’ve experienced slow responses in the past on a fast broadband connection at home. Even so, checking more recently with Multimap shows that it will direct you to the wrong place in the city – although again, if you zoom in on the map, the square is clearly mapped with its proper name…

The experience demonstrated how significant the problem of georeferencing is on these public mapping sites, and solving it is fundamental to making them really usable. In this case, I used my knowledge of the range of public mapping sites, manipulated the address until I got the location, and did a lot of things that, I suspect, a less experienced user would not do. I persevered with the problem because of my interest in usability and because it was an interesting problem. Actually, in terms of efficiency, it would have taken me less time to just go downstairs and ask the concierge…

Another aspect is that download time still matters. This is an aspect that web designers tend to ignore; I suspect the assumption is that broadband connections are ubiquitous. The speed of downloading a page is especially significant in geospatial applications, because there is no way round the fact that, unlike text-based sites, the map is the most significant part of the page and must be delivered as graphic files, which bulk up the overall size of the page as far as the end-user is concerned.
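A rough back-of-the-envelope calculation shows why this matters. All the figures below – viewport size, tile size, bytes per tile and connection speeds – are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope estimate of map page weight versus download time.
# All figures (viewport size, tile size, bytes per tile, connection speeds)
# are assumptions for illustration only.
import math

viewport_w, viewport_h = 1024, 768   # pixels visible on screen
tile_px = 256                        # standard map tile size
kb_per_tile = 20                     # typical compressed raster tile

# Tiles needed to cover the viewport (one extra row/column for panning).
tiles = (math.ceil(viewport_w / tile_px) + 1) * (math.ceil(viewport_h / tile_px) + 1)
page_kb = tiles * kb_per_tile

for label, kbps in [("hotel connection", 256), ("broadband", 2048)]:
    seconds = page_kb * 8 / kbps
    print(f"{tiles} tiles, about {page_kb} KB: ~{seconds:.1f} s on a {kbps} kbit/s {label}")
```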

I must note that once I managed to find the location, it was again a pleasure to use the old style tourist map to navigate to and from the restaurant, which, by the way, I warmly recommend.
