WUN Global GIS Academy Seminar – What’s So New in Neogeography?

These are the slides from the Worldwide Universities Network Global GIS Academy Seminar held on 22nd October. The seminar's title is 'What's So New in Neogeography?' and it is aimed largely at an academic audience with a background in GIScience.

The aim of the talk is to critically review Neogeography: to explain its origins, to discuss the positive lessons from it – mainly the improved usability of geographic technologies – and to highlight aspects that I see as problematic.

The presentation starts with some definitions and with the observation that mapping/location is central to Web 2.0, and thus we shouldn't be surprised that we've seen a step change in the use of GI over the past three years.

By understanding what changed around 2005, it is possible to explain the development of Neogeography. These changes are not just technical but also societal.

The core of the discussion is the new issues that are important to Neogeography's success, while also raising some theoretical and practical aspects that must be included in any comprehensive analysis of the changes and what they mean for Geography and geographers.

The presentation is available below from SlideShare, and the (very rough and unproofed) notes are available here.


OpenStreetMap: User-Generated Street Maps – IEEE Pervasive Computing paper

Earlier this year, in April, John Krumm from Microsoft Research, the editor of IEEE Pervasive Computing, commissioned me to write a paper about OpenStreetMap for the magazine. The paper was written together with Patrick Weber, and it is finally out. It went through the magazine's peer-review process, and it is part of a set of articles in the October–December issue that are dedicated to aspects of user-generated content.

The article was written for a general audience and aims to provide an easy-to-understand introduction to OSM that is suitable for technically minded readers (such as the readers of IEEE Pervasive Computing!). It provides some history, a description of the OSM geostack and how it operates, and ends with some open issues and challenges that the project is facing.

You can access the article from the IEEE website, and its full citation is:
Haklay, M. and Weber, P., 2008, 'OpenStreetMap: User-Generated Street Maps', IEEE Pervasive Computing, October–December 2008, pp. 12–18.

A really useful mash-up demonstration

Over the summer, one of my students, Chris Osborne, worked together with Nestoria to create a demonstration of a mash-up that helps users find where they can live – i.e. rent or buy a home – within a given travel time of a given Underground or DLR station, assuming that they use the network to commute. Building on the concepts that MySociety developed in their travel-time maps, Chris created the application by screen-scraping the Transport for London journey planner and integrating the results with Nestoria's property information.
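The core join logic is simple to sketch. The snippet below is a minimal, hypothetical illustration – the station names, travel times, listings and field names are all invented for the example, and the real application obtained its data by scraping the TfL journey planner and querying Nestoria:

```python
# A hypothetical sketch of the mash-up's join logic. Station names,
# travel times and listings are invented; the real application scraped
# the TfL journey planner and used Nestoria's property data.
from dataclasses import dataclass

@dataclass
class Property:
    address: str
    monthly_rent: int
    nearest_station: str

# Travel times (minutes) from the chosen origin station, as would be
# obtained from a journey planner.
travel_times = {"Bank": 0, "Mile End": 12, "Stratford": 19, "Upminster": 41}

def properties_within(listings, times, max_minutes):
    """Keep only listings whose nearest station is reachable in time."""
    return [p for p in listings
            if times.get(p.nearest_station, float("inf")) <= max_minutes]

listings = [Property("1 Example Road", 1200, "Mile End"),
            Property("2 Sample Street", 950, "Upminster")]

for p in properties_within(listings, travel_times, 30):
    print(f"{p.address} ({p.monthly_rent} GBP/month)")
```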

I had been thinking about this type of application for a long while, but before the Google Maps API and the technologies of Web 2.0, it was practically impossible to build. Even though the demonstration proves that it is now possible for a single developer to accomplish this task – something unimaginable even three years ago – it is still fairly challenging. Travel information is not readily available, and calculating travel times for a large number of places is not a trivial task.

In addition, Chris integrated user-centred design principles and has done a very good job in creating an effective – and really useful – application. I can imagine this application continuing to develop to become, practically, a multi-criteria analysis system for people to find places to live in. Chris is entering the 'Show Us a Better Way' competition, and I hope that he'll be able to continue developing the application further.

Why your boss should buy you a larger monitor

As I've noted, the AGI GeoCommunity '08 was a great conference, but it was especially pleasing that the paper I wrote with Kate Jones was selected as runner-up in the best paper competition by the conference team (and I kept myself at arm's length from the judging!). Maybe it is a sign that the message about the importance of usability and interaction is starting to gain traction within the GIS community, though I should also note that Clare Davies from the OS raised the issue at the AGI conference in 2005 – so it's still one usability paper every three years!

While you can download the full paper from here, or look at the presentation below, the short explanation is that the argument behind the large monitor actually raises a very significant and overlooked aspect of interaction with GIS.

Inherently, the issue is that interaction with maps is all about context. You can't design the position of a telephone pole if you can't see the other poles, and you can't understand where you are in relation to the local tube station without seeing it. This is where the abysmal resolution of current computer monitors causes a problem. Because the information density (the amount of information that you can cram into a specific area, say a square inch) of a monitor is low – roughly ten times lower than that of a printed map – it is actually very rare that you can fit all the information that the user needs on one screen.

This is why GIS developers give so much attention to zoom and pan operations: they are perceived as the solution to this problem. However – and this is the most important point – zoom and pan are never part of the user's task. The user is not interested in zooming and panning for their own sake, but in manipulating the map so they can see the area they need in order to perform their task (adding a pole to the map, analysing a neighbourhood, etc.). In an ideal world, the GIS would 'know' what area the user is looking for and would show it, so there would be no need to manipulate the map. However, we don't have this, so we must use zoom and pan…

Here is where the productivity issue kicks in. An average zoom or pan operation in a GIS application can take up to 30 seconds. Over a working month, this can accumulate into many hours for a heavy user of a GIS. A larger monitor (24 inch or even 32 inch) reduces the number of zoom and pan operations, and thus increases the user's productivity. Considering that a GIS analyst's minute costs about £0.30 (a conservative estimate), a large monitor will return the investment within two months.
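For illustration, here is the back-of-the-envelope calculation in code form; the per-operation time and cost per minute come from the text above, while the number of operations saved per day and the monitor price are assumptions added for the example:

```python
# Back-of-the-envelope payback calculation for a larger monitor.
# The per-operation time and cost per minute come from the text;
# the operations saved per day and the monitor price are assumptions.
seconds_per_operation = 30      # a zoom or pan can take up to 30 s
operations_saved_per_day = 40   # assumed reduction for a heavy user
working_days_per_month = 21
cost_per_minute = 0.30          # GBP, a conservative analyst cost
monitor_cost = 250.0            # GBP, assumed price of a 24-inch monitor

minutes_saved = (seconds_per_operation * operations_saved_per_day *
                 working_days_per_month) / 60
monthly_saving = minutes_saved * cost_per_minute

print(f"Time saved per month: {minutes_saved:.0f} minutes")
print(f"Saving per month: GBP {monthly_saving:.2f}")
print(f"Payback period: {monitor_cost / monthly_saving:.1f} months")
```

With these assumed figures the saving is about £126 a month, which is where the two-month payback comes from.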

But even more important is the issue of GIS interface design – this analysis emphasises why the decision on how much screen estate is dedicated to the map should take the user's task into account, and not simply assume that they'll zoom and pan!

AGI GeoCommunity ’08 – some reflections

The AGI GeoCommunity '08 is over – and it was a great conference. Building on the success of last year, this year's conference was packed with good papers and attracted 600 delegates. I found the papers from Joanna Cook of Oxford Archaeology, about the use of open source GIS as the main set of products in a business environment, and from Nick Black of CloudMade, on crowd-sourced geographical information, especially interesting.

What is especially good about the AGI in general, and the conference in particular, is that unlike other forums that cater for a narrow audience (say, mainly neogeographers at Where 2.0, or academics at GISRUK), the AGI is a forum where vendors, commercial providers, veterans and new users all come together to talk about different aspects of GI. Even if they disagree about various issues, such as what is important, having a forum for the debate is what makes this conference so valuable. Just look at the blogs of Ed, Adena, Joanna, Andy and Steven to see that there are issues that people will argue about quite fiercely – which is a sign of a great conference.

I’m especially pleased with the success this year of bringing in people from the academic community who presented papers and attended the conference. This interaction is very significant as, through our teaching programmes, we are actually training the people who will join this crowd in the future, and we should keep an eye on the trends and needs of the sector.

For example, one of my conclusions from the conference is that the existing ‘business model’ of the M.Sc. in GIS programmes, which was, inherently, ‘we’ll train you in using [ArcGIS|Mapinfo|other package] so you can get a job’, is over. The industry is diverging, and the needs are changing. Being a GI professional is not about operating a GIS package.

We should now highlight the principles of manipulating geographical information, and, as Adena Schutzberg commented during the debate, train people how to ask the right questions, and to answer the most important ‘So what?’ question about the analyses that they are producing.

We should also encourage our students to participate in forums, like the AGI, so they continue to learn about their changing world.

OpenStreetMap quality evaluation and other comparisons

Comparing my OpenStreetMap (OSM) quality evaluation with other examples of quality evaluation brings up some core issues about the nature of the new GeoWeb and the use of traditional sources. The examples that I'm referring to are Etienne Cherdlu's SOTM 2007 talk 'OSM and the art of bicycle maintenance', Dair Grant's comparison of OSM to Google Maps and reality, Ed Johnson's analysis this summer, and Steven Feldman's brief evaluation in Highgate.

Meridian 2 and OSM in the area of Highgate, North London

The first observation is the importance and abundance of well-georeferenced, vector-derived public mapping sites, which make several of these comparisons possible (Cherdlu, Grant and Feldman). The previous generation of stylised street maps was not readily available for comparison. In addition to availability, the ease with which these maps can be mashed up is also a significant enabling factor. Without this comparable geographical information, the evaluation would be much more difficult.

Secondly, when a public mapping website was used, it was Google Maps. If Microsoft's Virtual Earth had also been used, it would arguably have allowed a three-way comparison, as the Microsoft site uses Navteq information while Google uses TeleAtlas information. Ordnance Survey (OS) OpenSpace is also a natural candidate for comparison. Was it familiarity that led to the selection of Google Maps? Or is it because the method of comparison is visual inspection, so adding a third source makes it more difficult? Notice that Google has the cachet of being a correct depiction of reality – something that Etienne, Dair and Bob Barr have demonstrated not to be the case!

Thirdly, and most significantly, only when vector data was used – in our comparison and in parts of what Ed Johnson has done – did a comprehensive analysis of large areas become possible. This shows the important role of formats in the GeoWeb: raster is fabulous for the delivery of cartographic representations, but it is vector that is suitable for analytical and computational work. Only OSM allows the user to download vector data easily – no other mass provider of public mapping does.
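To make the point concrete, here is a minimal sketch of the kind of computation that vector data enables – totalling road length from an OSM XML extract. This is my own illustrative Python, not the method used in our comparison, and the file name is an assumption:

```python
# A minimal sketch of analysis that vector data makes possible:
# summing the length of roads in a small OSM XML extract.
# The file name is assumed; a real comparison would also need the
# reference dataset in vector form.
import math
import xml.etree.ElementTree as ET

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

tree = ET.parse("highgate.osm")          # assumed local OSM extract
nodes = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
         for n in tree.iter("node")}

total = 0.0
for way in tree.iter("way"):
    tags = {t.get("k"): t.get("v") for t in way.iter("tag")}
    if "highway" not in tags:            # keep only road features
        continue
    refs = [nd.get("ref") for nd in way.iter("nd")]
    for a, b in zip(refs, refs[1:]):     # sum each segment's length
        if a in nodes and b in nodes:
            total += haversine_m(*nodes[a], *nodes[b])

print(f"Total road length: {total / 1000:.1f} km")
```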

Finally, there is the issue of access to information, tools and knowledge. As a team working at a leading research university (UCL), I and the people who worked with me had easy access to detailed vector datasets and the OS 1:10,000 raster. We also have multiple GIS packages at our disposal, so we can use whichever one performs the task with the least effort. The other comparisons had to rely on publicly available datasets and software. In such unequal conditions, it is not surprising that I will argue that the comparison we carried out is more robust and consistent. The issue that comes up here is the balance between amateurs and experts, which is quite central to Web 2.0 in general. Should my analysis be trusted more than those of Dair or Etienne, both of whom are very active in OSM? Does Steven's familiarity with Highgate, which is greater than mine, make him more of an expert in that area than my consistent application of analysis?

I think that the answer is not clear-cut: academic knowledge entails consistent scrutiny of the data, and I have the access and the training to conduct a very detailed geographical information quality assessment. In addition, my first job, in 1988, was in geographical data collection and GIS development, so I also have professional knowledge in this area. Yet local knowledge is just as valuable in a specific area and is much better than a mechanical, automatic evaluation. So what is happening is an exchange of knowledge, methods and experiences between the two sides, from which both, I hope, can benefit.

The new London crime mapping website

The Metropolitan Police Authority has released a beta version of its new crime mapping application, showing generalised levels of crime (burglary, robbery and vehicle crime) for Lower Layer Super Output Areas (LSOAs). The application uses generalised LSOA boundaries and a clear classification of the level of crime. Interestingly, the Show Us a Better Way website includes several suggestions for crime mapping – so there is ongoing public interest.

This is not surprising, based on my own experience with CamStats, which was developed in collaboration between me, Kate Jones and Dave Ashby for Camden Police in late 2003, with the website operating from early 2004 until late 2007.

As you can see from the slideshow above, the information that CamStats provided is richer than what is available today. CamStats was based on static maps and was very easy to produce. We designed it so that a team administrator (with no GIS skills) could compile monthly and annual statistics simply by copying a data file to a processing machine and then clicking one button in MapInfo Professional, which called MapBasic, Perl scripts and other utilities to create, process and map the data and to compile the HTML pages for the website into one zip file. All the user had to do was transfer the zip file to the Met web team, who updated the webserver simply by unzipping the files. The fact that it ran for three years without a single request for support is something that Kate and I are justifiably proud of.
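In outline, the one-button build worked like the sketch below. This Python version is purely schematic – the original pipeline used MapInfo Professional, MapBasic and Perl, and the file names and column names here are invented for illustration:

```python
# Schematic of the CamStats one-button workflow (names and paths are
# illustrative; the original used MapBasic and Perl, not Python):
# read the monthly data file, generate static HTML pages, and bundle
# everything into a single zip file for the web team to unzip.
import csv
import zipfile
from pathlib import Path

data_file = Path("monthly_crime.csv")    # copied over by the administrator
out_dir = Path("camstats_site")
out_dir.mkdir(exist_ok=True)

with data_file.open() as f:
    for row in csv.DictReader(f):        # expects 'area' and 'count' columns
        page = out_dir / f"{row['area']}.html"
        page.write_text(f"<html><body><h1>{row['area']}</h1>"
                        f"<p>Incidents this month: {row['count']}</p>"
                        "</body></html>")

with zipfile.ZipFile("camstats_update.zip", "w") as z:
    for page in out_dir.glob("*.html"):
        z.write(page, page.name)         # the web team just unzips this
```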

Notice that CamStats provided options to see different geographical units and different forms of visualisation, and to view the information in tabular and chart form. Users could even download the aggregate counts for each area to compile their own reports. This was particularly useful for a number of community groups in Camden.

There is no question that the use of Google Maps, which provides context for the statistics, is a huge usability improvement over our implementation. However, it will be interesting to see how long it takes the Met team to reach the functionality and ease of use that CamStats provided…