Back in 2005, when I worked with Artemis Skarlatidou on an evaluation of public mapping websites, we came up with a simple test to check how well these search sites perform: Can a tourist find a famous landmark easily?

The reasoning behind this question was that tourists are an obvious user group for public mapping sites such as Multimap, MapQuest, Yahoo! Maps, Microsoft’s Virtual Earth or Google Maps. Market research presented by Vincent Tao of Microsoft in a seminar a year ago confirmed this assumption.

During the usability evaluation, we gave the participants the instruction ‘Locate the following place on the map: British Museum: Great Russell Street, London, WC1B 3DG’. Not surprisingly, those participants who started with the postcode found the information quickly, but about a third typed ‘British Museum, London’. While our participants were London residents and were used to postcodes as a means of stating an address precisely, a more realistic expectation is that tourists would not use postcodes when searching for a landmark.

In the summer of 2005, when we ran the test, the new generation of public mapping websites (such as Google Maps and Microsoft Virtual Earth) performed especially badly.
The most amusing result came from Google Maps, pointing to Crewe as the location of the British Museum (!).
[Image: Google Maps search result placing the British Museum in Crewe]

The simplest usability test for a public mapping site that came out of this experiment is the ‘British Museum Test’: find the top ten tourist attractions in a city or country and check whether the search engine can find them. Here is how it works for London:

The official Visit London site suggests the following top attractions: Tate Modern, British Museum, National Gallery, Natural History Museum, the British Airways London Eye, Science Museum, the Victoria & Albert Museum (V&A Museum), the Tower of London, St Paul’s Cathedral and the National Portrait Gallery.

Now we can run the test by typing the name of each attraction into the search box of the public mapping sites. As an example, I’ve used Yahoo! Maps, Google Maps, Microsoft’s Virtual Earth and Multimap. On each site I imitated a potential tourist: I accessed the international site (e.g. maps.google.com), panned the map to the UK, and then typed the query. The results are:

| Attraction (search term used) | Yahoo! | Google | Microsoft | Multimap |
|---|---|---|---|---|
| Tate Modern | Found and zoomed | Found and zoomed | Found and zoomed | Found and zoomed |
| British Museum | Found and zoomed | Found as part of a list | Found and zoomed | Found and zoomed |
| National Gallery | Found and zoomed | Found as part of a list | Found and zoomed | Found as part of a list (twice!) |
| Natural History Museum | Failed | Found as part of a list | Found and zoomed | Found and zoomed |
| British Airways London Eye (commonly abbreviated to ‘London Eye’) | Failed on the full name; found and zoomed on the abbreviation | Found as part of a list; failed on the abbreviation | Failed on the full name; found and zoomed on the abbreviation | Failed on the full name; found and zoomed on the abbreviation |
| Science Museum | Found and zoomed | Found as part of a list | Found and zoomed | Found and zoomed |
| The Victoria & Albert Museum (commonly abbreviated to ‘V&A Museum’) | Found and zoomed on both | Found and zoomed; failed on the abbreviation | Found and zoomed; failed on the abbreviation | Found and zoomed; the abbreviation zoomed to Slough (!) |
| The Tower of London | Found and zoomed | Found and zoomed | Found and zoomed (failed if ‘the’ was included in the search) | Found and zoomed |
| St Paul’s Cathedral | Found and zoomed | Found and zoomed | Found as part of a list | Failed |
| National Portrait Gallery | Failed (zoomed to the one in Washington DC) | Found and zoomed | Found and zoomed | Found and zoomed |

Notice that none of these search engines managed to pass the test on all the top ten attractions, which are visited by millions every year. There is a good reason for this – geographical search is not a trivial matter and the semantics of place names can be quite tricky (for example, if you look at a map of Ireland and the UK, there are two National Galleries).
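To make the ambiguity concrete, here is a small sketch that lists several candidates for an ambiguous name. It assumes OpenStreetMap’s Nominatim geocoder, purely as a modern stand-in for the engines above; which candidates come back, and in what order, depends entirely on the engine’s ranking:

```python
# A small sketch of place-name ambiguity, using OpenStreetMap's Nominatim as a
# modern stand-in geocoder (an assumption: it is not one of the sites tested above).
import json
import urllib.parse
import urllib.request

query = urllib.parse.urlencode({"q": "National Gallery", "format": "json", "limit": 5})
req = urllib.request.Request(
    "https://nominatim.openstreetmap.org/search?" + query,
    headers={"User-Agent": "british-museum-test/0.1"},  # required by Nominatim's usage policy
)
with urllib.request.urlopen(req) as resp:
    for hit in json.load(resp):
        print(hit["display_name"])  # candidate National Galleries, one per line
```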

On the plus side, I can note that search engines are improving. At the end of 2005 and for most of 2006 the failure rate was much higher. I used the image above in several presentations and have run the ‘British Museum Test’ several times since then, with improved results in every run.

The natural caveat is that I don’t have access to the search engines’ server logs and, therefore, can’t say that the test really reflects actual patterns of use. It would be very interesting to have a ‘Google Maps Hot Trends’, or an equivalent for other search engines. Even without access to the search logs, though, the test reveals certain aspects of the way information is searched for and presented, and is useful for understanding how good the search engines are at handling geographical queries.

With a simple variation of the test you can see how tolerant an engine is of spelling errors, and decide which one to recommend when guests visit your city and you’d like to help them find their way around. It is also an indication of a search engine’s general ability to find places. You can run your own test on your city fairly quickly – it will be interesting to compare the results!
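If you’d like to script your own run, here is a minimal sketch. Two assumptions: OpenStreetMap’s Nominatim endpoint stands in for the engine under test, and a rough bounding box for Greater London serves as the pass/fail criterion – both are illustrative choices, not part of the original test:

```python
# A minimal sketch of a scripted 'British Museum Test'. Assumptions: Nominatim
# stands in for the engine under test, and a rough Greater London bounding box
# is the pass/fail criterion. Swap in whichever geocoder you want to evaluate.
import json
import time
import urllib.parse
import urllib.request

NOMINATIM = "https://nominatim.openstreetmap.org/search"

ATTRACTIONS = [
    "Tate Modern", "British Museum", "National Gallery",
    "Natural History Museum", "London Eye", "Science Museum",
    "V&A Museum", "Tower of London", "St Paul's Cathedral",
    "National Portrait Gallery",
]

# Rough bounding box for Greater London: min_lon, min_lat, max_lon, max_lat.
# A top hit outside it counts as a failure, like 'the British Museum in Crewe'.
LONDON = (-0.6, 51.25, 0.35, 51.75)

def geocode(query):
    """Return (lat, lon, label) for the top hit, or None if nothing was found."""
    url = NOMINATIM + "?" + urllib.parse.urlencode(
        {"q": query, "format": "json", "limit": 1}
    )
    # Nominatim's usage policy requires an identifying User-Agent header.
    req = urllib.request.Request(url, headers={"User-Agent": "british-museum-test/0.1"})
    with urllib.request.urlopen(req) as resp:
        hits = json.load(resp)
    if not hits:
        return None
    return float(hits[0]["lat"]), float(hits[0]["lon"]), hits[0]["display_name"]

for name in ATTRACTIONS:
    hit = geocode(name)
    if hit is None:
        print(f"{name}: FAILED (no result)")
    else:
        lat, lon, label = hit
        in_london = LONDON[0] <= lon <= LONDON[2] and LONDON[1] <= lat <= LONDON[3]
        print(f"{name}: {'found' if in_london else 'WRONG PLACE (' + label + ')'}")
    time.sleep(1)  # be polite: Nominatim asks for at most one request per second
```

To probe spelling tolerance, the same loop can be fed deliberately misspelled names (‘Brittish Museum’, ‘Natural Histroy Museum’) to see whether they still resolve.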

For me, Microsoft Virtual Earth is, today, the best one for tourists, though it should improve its handling of spelling errors…

An interesting issue that emerges from The Cult of the Amateur concerns Participatory GIS, or PPGIS. As Chris Dunn notes in her recent paper in Progress in Human Geography, the Participatory GIS literature makes many references to the ‘democratisation’ of GIS (together with Renee Sieber’s 2006 review, these two papers are an excellent introduction to PPGIS).

According to the OED, democratisation is ‘the action of rendering, or process of becoming, democratic’, and democracy is defined as ‘Government by the people; that form of government in which the sovereign power resides in the people as a whole, and is exercised either directly by them (as in the small republics of antiquity) or by officers elected by them. In modern use often more vaguely denoting a social state in which all have equal rights, without hereditary or arbitrary differences of rank or privilege.’ [emphasis added].
This final sense is the one mostly invoked when advocates of Web 2.0 use the term, and it seems that in this notion of democratisation the erasure of hereditary or arbitrary differences is extended to expertise and to hierarchies in the media and in knowledge production. In some areas, Web 2.0 actively erodes the distinction between experts and amateurs, through mechanisms such as anonymous contributions that hide from the reader any information about who is contributing, what their authority is and why we should listen to them.
As Keen notes, doing away with social structures and equating amateurs with experts is actually not a good thing in the long run.
This brings us back to Participatory GIS – the PGIS literature discusses the need to ‘level the field’ and to deal with power structures and inequalities in involvement in decision making – and this is exactly what we are trying to achieve in the Mapping Change for Sustainable Communities project. We also know very well from the literature that individuals and groups invest time and effort to understand complex issues and, as a result, can become quite expert. For example, Maarten Wolsink’s work on NIMBYs shows that this very local focus is not so parochial after all.
I completely agree with the way Dunn puts it (pp. 627-8):

‘Rather than the ‘democratization of GIS’ through th[e] route [of popularization], it would seem that technologizing of deliberative democracy through Participatory GIS currently offers a more effective path towards individual and community empowerment – an analytical as opposed to largely visual process; an interventionist approach which actively rather than passively seeks citizen involvement; and a community-based as opposed to individualist ethos.’

Yet, what I’m taking from Keen is that we also need to rethink the role of the expert within Participatory GIS – at the end of the day, we are not suggesting we do away with planning departments or environmental experts.
I don’t recall seeing much about how to define the role of experts, or how to integrate hierarchies of knowledge into Participatory GIS processes – potentially an interesting research topic?

Continuing to reflect on Keen’s The Cult of the Amateur, I can’t fail to notice how Web 2.0 influences our daily lives – from the way we implement projects, to the role of experts and non-experts in the generation of knowledge. Some of the promises of Web 2.0 are problematic – especially the claim for ‘democratisation’.

Although Keen doesn’t discuss this point, Jakob Nielsen’s analysis of ‘Participation Inequality on the Web’ is pertinent here. As Nielsen notes, on Wikipedia 0.003% of users contribute two-thirds of the content, a further 0.2% contribute something, and the remaining 99.8% just use the information. Blogs are supposed to follow a 95-5-0.1 rule (95% just read, 5% post infrequently, 0.1% post regularly). In blogs, this posting inequality is compounded by readership inequalities on the Web (power laws influence this domain, too – the top blogs are read by far more people).
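As a back-of-the-envelope illustration of how such a heavy tail behaves, here is a small sketch with an assumed Zipf exponent – an illustrative value, not one fitted to real blog data:

```python
# Back-of-the-envelope sketch of readership inequality under a Zipf (power-law)
# distribution. The exponent s and the counts are assumptions for illustration,
# not figures fitted to real blog data.
N = 100_000  # number of blogs
s = 1.0      # Zipf exponent (assumed)

weights = [1 / rank ** s for rank in range(1, N + 1)]  # relative readership by rank
total = sum(weights)

top = 100  # the 'head': 0.1% of all blogs
head_share = sum(weights[:top]) / total
print(f"Top {top} of {N:,} blogs ({top / N:.1%}) attract {head_share:.0%} of all reading")
# With s = 1.0 this prints roughly 43%: a tiny head dominates attention.
```

Under these assumptions, one blog in a thousand captures over forty per cent of all reading.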

This aspect of access and influence means that the word ‘democratisation’ is, to quite an extent, a misnomer. If anything, it is a weird laissez-faire democracy, where a few plutocrats rule. Not a democracy of the type that I’d like to live in.

I have just finished reading Andrew Keen’s The Cult of the Amateur, which, together with Paulina Borsook’s Cyberselfish, provides quite a good antidote to the overexcitement of The Long Tail, Wikinomics and a whole range of publications about Web 2.0 that marvel at the ‘democratisation’ capacity of technology. Even if Keen’s and Borsook’s books are seen as dystopian (and in my opinion they are not), I think that, through their popularity, these critical analyses of current online culture are very valuable in encouraging reflection on how technology influences society.

The need for critical reflection on technology and society stems from the fact that most of society seems to accept the ‘common-sense’ perspective that technology is a neutral, ‘value-free’ human activity (‘values’ here meaning guiding principles in life) – that it can be used for good ends or bad ones, but that it does not encapsulate any values in itself.

In contrast, I personally prefer Andrew Feenberg’s analysis in Questioning Technology and Transforming Technology where he suggests that a more complete attitude towards technology must accept that technology encapsulates certain values and that these values should be taken into account when we evaluate the impact of new technologies on our life.

In Feenberg’s terms, we should not separate means from ends, and should understand how certain cultural values influence technological projects and end up integrated in them. For example, Wikipedia’s decision to ‘level the playing field’, so that experts have no more authority in editing content than other contributors, should be seen as an important value judgment – one suggesting that expertise is not important or significant, or that experts cannot be trusted. Such a point of view has an impact on a tool that is widely used, and therefore influences society.
