29 January, 2014
Once upon a time, Streetmap.co.uk was one of the most popular web mapping sites in the UK, competing successfully with its biggest rival at the time, Multimap. Moreover, it was ranked second in The Daily Telegraph’s list of leading mapping sites in October 2000 and described as ‘Must be one of the most useful services on the web – and it’s completely free. Zoom in on any UK area by entering a place name, postcode, Ordnance Survey grid reference or telephone code.’ It is still running and, because of its legacy, sits at around the 1,250th most popular website in the UK (though four years ago it was among the top 350).
So far, nothing especially noteworthy – a website popular a decade ago was replaced by a newer one, Google Maps, which provides better search results and more information, and is the de facto standard for web mapping. Moreover, as early as 2006, Artemis Skaraltidou demonstrated that, of the UK web mapping crop, Streetmap scored lowest on usability, with only MapQuest, which largely ignored the UK, faring worse.
However, recently, while running a practical session introducing User-Centred Design principles to our MSc in GIS students, I have noticed an interesting implication of the changes in the environment of Web Mapping – Streetmap has stopped being usable just because it didn’t bother to update its interaction. By doing nothing, while the environment around it changed, it became unusable, with users failing to perform even the most basic of tasks.
The students explored the mapping offerings from Google, Bing, Here and Streetmap. It was fairly obvious that across this cohort (early to mid 20s), Google Maps was the default against which other systems were compared. It was not surprising to find impressions that Streetmap is ‘very old fashioned’ or ‘archaic’. More interesting was noticing people getting frustrated that the ‘natural’ interaction of zooming in and out using the mouse wheel just didn’t work, or failing to find the zoom in and out buttons. At some point in the past 10 years, people internalised the interaction mode of using the mouse and stopped using the application’s zoom in and out buttons, which explains the design decision in the new Google Maps interface to eliminate the dominant zoom slider from the left side of the map. Of course, the Streetmap interface is also not responsive to touch-screen interactions, which are likewise learned across applications.
I experienced a similar, and somewhat amusing incident during the registration process of SXSW Eco, when I handed over my obviously old laptop at the registration desk to provide some detail, and the woman was trying to ‘pinch’ the screen in an attempt to zoom in. Considering that she was likely to be interacting with tablets most of the day (it was, after all, SXSW), this was not surprising. Interactions are learned and internalised, and we expect to experience them across devices and systems.
So what’s to learn? While this is another example of ‘Jakob’s Law of the Internet User Experience’, which states that ‘users spend most of their time on other sites’, it is very relevant to the many websites that use web mapping APIs to present information – from our own communitymaps.org.uk to the Environment Agency’s What’s in Your Backyard. In all these cases, it is critical to pay attention to the basic map exploration interactions (pan, zoom, search) and make sure that they match common practices across the web. Otherwise, you might end up like Streetmap.
18 March, 2013
The Consumers’ Association Which? magazine is probably not the first place to turn to when you look for usability studies. Especially not if you’re interested in computer technology – for that, there are sources such as PC Magazine on the consumer side, and professional magazines such as Interactions from Association for Computing Machinery (ACM) Special Interest Group on Computer-Human Interaction (SIGCHI).
Over the past few years, Which? has been reviewing, testing and recommending satnavs (also known as Personal Navigation Devices – PNDs). Which? is an interesting case because it reaches over 600,000 households and because of the level of trust that it enjoys. If you look at their methodology for testing satnavs, you’ll find that it does resemble usability testing – click on the image to see the video from Which? about their methodology. The methodology is more about everyday use, and the opinion of the assessors seems to play an important role.
Professionals in geographical information science or human-computer interaction might dismiss the study as unrepresentative, or not fitting their ways of evaluating technologies, but we need to remember that Which? is providing an insight into the experience of the people who are outside our usual professional and social context – people who go to a high street shop or download an app and start using it straightaway. Therefore, it’s worth understanding how they review the different systems and what the experience is like when you try to think like a consumer, with limited technical knowledge and understanding of maps.
There are also aspects that puncture the ‘filter bubble’ of geoweb people – Google Maps is now probably the most used map on the web, but the satnav application using Google Maps was described as ‘bad, useful for getting around on foot, but traffic information and audio instructions are limited and there’s no speed limit or speed camera data’. Waze, the crowdsourced application, received especially low marks, and the magazine noted that it ‘lets users share traffic and road info, but we found its routes and maps are inaccurate and audio is poor’ (both citations from Which?, Nov 2012, p. 38). It is also worth reading their description of OpenStreetMap when discussing map updates, and also the opinions on the willingness to pay for map updates.
There are many ways to receive information about the usability and the nature of interaction with geographical technologies, and some of them, while not traditional, can provide useful insights.
6 June, 2011
It is always nice to announce good news. Back in February, together with Richard Treves at the University of Southampton, I submitted an application to Google’s Faculty Research Award program for a grant to investigate Google Earth Tours in education. We were successful in getting a grant worth $86,883. The project builds on my expertise in usability studies of geospatial technologies, including the use of eye tracking and other usability engineering techniques for GIS, and on Richard’s expertise in Google Earth tours and education, and his longstanding interest in usability issues.
In this joint UCL/Southampton project, UCL will be the lead partner and we will appoint a junior researcher for a year to develop and run experiments that will help us understand the effectiveness of Google Earth Tours in geographical learning; we aim to come up with guidelines for their use. If you are interested, let me know.
Our main contact at Google for the project is Ed Parsons. We were also helped by Tina Ornduff and Sean Askay who acted as referees for the proposal.
The core question that we want to address is “How can Google Earth Tours be used to create an effective learning experience?”
So what do we plan to do? Previous research on Google Earth Tours (GETs) has shown them to be an effective visualization technique for teaching geographical concepts, yet their use in this way is essentially passive. Active learning is a successful educational approach where student activity is combined with instruction to enhance learning. In the proposal we suggest that there is great education value in combining the advantages of the rich visualization of GETs with student activities. Evaluating the effectiveness of this combination is the purpose of the project, and we plan to do this by creating educational materials that consist of GETs and activities and testing them against other versions of the materials using student tests, eye tracking and questionnaires as data gathering techniques.
We believe that by improving the techniques by which spatial data is visualized we are improving spatial information access overall.
A nice aspect of getting the project funded is that it works well with a project led by Claire Ellul and Kate Jones and funded by JISC. The G3 project, or “Bridging the Gaps between the GeoWeb and GIS”, touches on similar aspects, and we will surely share knowledge with them.
For more background on Richard Treves, see his blog (where the same post is published!)
At the beginning of May, I gave a lecture at the UCL Interaction Centre (UCLIC) seminar titled ‘Interacting with Geospatial Technologies – Overview and Research Challenges’. The talk was somewhat similar to the one that I gave at the BCS Geospatial SIG. However, I was trying to answer a question that I was asked during a UCLIC seminar in 2003, when, together with Carolina Tobón, I presented early work on the usability of GIS for e-government applications. During that talk, the discussion was, as it always is at UCLIC, intensive. One core question that remained with me from the discussion was: ‘What makes geospatial technology special, or is it just another case of a complex and demanding information system with which you should expect difficulties and spend time to master?’
Over the years, I have been trying to improve the answer beyond the ‘it’s special because it’s about maps‘ or ‘geospatial information comes in large volumes and requires special handling‘ or similar partial answers. In the book Interacting with Geospatial Technologies different chapters deal with these aspects in detail. During the talk, I tried to cover some of them. In particular, I highlighted the lag of geospatial technologies behind other computing technologies (an indication of complexity), the problems of devices such as SatNavs that require design intervention in the physical world to deal with a design fault (see image), and the range of problems in interfaces of GIS as were discovered in the snapshot study that was carried out by Antigoni Zafiri.
There was an excellent discussion after the presentation ended. Some of the very interesting questions that I think need addressing are the following:
- In the talk, I highlighted that examples of spatial representations exist in non-literate societies, and that, therefore, the situation with computers, where textual information is much more accessible than geographical information, is something that we should consider as odd. The question that was raised was about the accessibility of these representations – how long does it take people from the societies that use them to learn them? Is the knowledge about them considered privileged or held by a small group?
- For almost every aspect of geospatial technology use, there is some parallel elsewhere in the ICT landscape, but it is the combination of issues – such as the need for a base map as a background to add visualisation on top of it, or the fact that end users of geospatial analysis need the GIS operators as intermediaries (and the intermediaries are having problems with operating their tools – desktop GIS, spatial databases etc. – effectively) – that creates the unique combination that researchers who are looking at HCI issues of GIS are dealing with. If so, what can be learned from existing parallels, such as the organisations where intermediaries are used in decision making (e.g. statisticians)?
- The issue of task analysis and considerations of what the user is trying to achieve were discussed. For example, Google Maps makes the task of ‘finding directions from A to B’ fairly easy by using a button on the interface that allows the user to put in the information. To what extent do GIS and web mapping applications help users to deal with more complex, temporally longer and less well-defined tasks? This is a topic that was discussed early on in the HCI (Human-Computer Interaction) and GIS literature in the 1990s, and we need to continue and explore.
In my talk I used a slide about a rude group in Facebook that relates to a specific GIS package. I checked it recently and was somewhat surprised to see that it is still active. I thought that it would go away with more recent versions of the software that should have improved its usability. Clearly there is space for more work to deal with the frustration of the users. Making users happy is, after all, the goal of usability engineering…
31 March, 2011
The G3 Project is a new project led by Claire Ellul and Kate Jones and funded by the JISC geospatial working group. The project’s aim is to create an interactive online mapping tutorial system for students in disciplines that are not familiar with GIS, such as urban design, anthropology and environmental management.
The project provides a template for introducing geographical concepts to new groups of learners. By choosing a discipline-specific scenario, key geographic concepts and functions will be presented to novices in a useful and usable manner, so the learning process is improved. Users will be introduced to freely available geographic data relevant to their particular discipline and will know where to look for more. The G3 Project will create a framework that supports learners and grows their confidence without them having to face the difficult interfaces and complexity of desktop mapping systems, which are likely to create obstacles for students and the feeling that ‘this type of analysis is not for me’.
31 March, 2010
Finally, after 2 years in the making, Interacting with Geospatial Technologies is out. It is the first textbook dedicated to usability and Human-Computer Interaction (HCI) aspects of geographical information technologies. It covers desktop, Web and mobile applications and how they can be designed so they are more effective, efficient, error-free, easy to learn and enjoyable, which is one version of the 5 E’s of usability.
I started thinking about the book in 2004, when I realised that the most recent academic books dedicated to HCI and GIS were published in 1993 and 1995. These are, respectively, David Medyckyj-Scott and Hilary Hearnshaw’s Human Factors in Geographic Information Systems and the collection of papers from the NATO workshop Cognitive Aspects of Human-Computer Interaction for Geographic Information Systems, edited by Tim Nyerges, David Mark, Robert Laurini, and Max Egenhofer. While these books and the collections of papers in them are still valuable, it must be noted that in the early 1990s, Web-based GIS was just starting to appear, desktop GIS was fairly basic, mobile GIS was not even experimental, and GIS trade journals argued about which UNIX workstation was best for GIS.
Apart from these books, the proceedings of COSIT (Conference of Spatial Information Theory) are also valuable sources of academic research on spatial cognition and other principles of geographical and spatial information, and there are also many papers in academic journals about GIS.
However, not much attention was paid to everyday use of geographical information technologies, and no textbook included an introduction in a form accessible to postgraduate students and software developers. So, after complaining in various conferences that there is a clear need for such a book, I started working on it. It was an interesting process to identify suitable authors and encourage them to contribute to the book.
While offering the breadth of several authors who specialise in different aspects of the field, I think the textbook is coherent and consistent, and its style both accessible and readable. The editing process was more active and time-sensitive than is often the case in academic books, to ensure that the textbook is usefully up-to-date. On UCL’s MSc in GIS, a recent course based on the textbook was well received by students.
The book covers the principles and the practical aspects of interaction with geospatial technologies. There are sections about spatial cognition, cartography, user-centred design and usability engineering – here is the table of contents.
So, now you can get your own copy – and any feedback is welcomed.
29 January, 2010
After the publication of the comparison of OpenStreetMap and Google Map Maker coverage of Haiti, Nicolas Chavent from the Humanitarian OpenStreetMap Team contacted me and turned my attention to the UN Stabilization Mission in Haiti’s (known as MINUSTAH) geographical dataset, which is seen as the core set for the post earthquake humanitarian effort, and therefore a comparison with this dataset might be helpful, too. The comparison of the two Volunteered Geographical Information (VGI) datasets of OpenStreetMap and Google Map Maker with this core dataset also exposed an aspect of the usability of geographical information in emergency situations that is worth commenting on.
For the purpose of the comparison, I downloaded two datasets from GeoCommons – the detailed maps of Port-au-Prince and the Haiti road network. Both are reported on GeoCommons as originating from MINUSTAH. I combined them, and then carried out the comparison. As in the previous case, the comparison focused only on the length of the roads, with the hypothesis that, if there is a significant difference in the length of the roads in a given grid square, it is likely that the longer dataset is more complete. The other comparisons between established and VGI datasets give grounds for this hypothesis, although caution must be applied when the differences are small. The following maps show the differences between the MINUSTAH dataset and OpenStreetMap, and between MINUSTAH and the Google Map Maker dataset. I have also reproduced the original map that compares OpenStreetMap and Map Maker, for the purpose of comparison and consistency, as well as for cartographic quality.
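The grid-based comparison described above can be sketched in a few lines of Python. This is a hypothetical simplification, not the code used in the analysis: it attributes each road segment to the grid cell containing its midpoint rather than properly clipping segments at cell boundaries, and it works in planar coordinates.

```python
# Sketch of a grid-based road-length comparison between two datasets.
# Each dataset is a list of polylines; for every grid cell we sum the
# length of road falling in it, then compare the two datasets per cell.
import math
from collections import defaultdict

def segment_length(p1, p2):
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def length_per_cell(polylines, cell_size):
    """Sum polyline length per grid cell, attributing each segment to
    the cell containing its midpoint (a simplification of clipping)."""
    totals = defaultdict(float)
    for line in polylines:
        for p1, p2 in zip(line, line[1:]):
            mid = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
            cell = (int(mid[0] // cell_size), int(mid[1] // cell_size))
            totals[cell] += segment_length(p1, p2)
    return totals

def more_complete(cells_a, cells_b):
    """For each cell, report which dataset has the greater road length
    ('a', 'b' or 'tie') – the proxy for completeness used here."""
    result = {}
    for cell in set(cells_a) | set(cells_b):
        a, b = cells_a.get(cell, 0.0), cells_b.get(cell, 0.0)
        result[cell] = 'a' if a > b else 'b' if b > a else 'tie'
    return result
```

A real analysis would also project coordinates to a metric system and clip segments at cell boundaries, but the per-cell length comparison is the essence of the method.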
The maps show that MINUSTAH does provide fairly comprehensive coverage across Haiti (as expected) and that the volunteered efforts of OpenStreetMap and Map Maker provide further details in urban areas. There are areas that are only covered by one of the datasets, so they all have value.
The final comparison uses the 3 datasets together, with the same criteria as in the previous map – the dataset with the longest length of roads is the one that is considered the most complete.
It is interesting to note the south/north divide between OpenStreetMap and Google Map Maker, with Google Map Maker providing more detail in the north and OpenStreetMap in the south (closer to the earthquake epicentre). When compared over the areas in which there is at least 100 metres of MINUSTAH coverage, OpenStreetMap is, overall, 64.4% complete, while Map Maker is 41.2% complete. Map Maker covers a further 354 square kilometres that are not covered by MINUSTAH or OpenStreetMap, and OpenStreetMap covers a further 1,044 square kilometres that are missing from the other datasets, so there is clearly a benefit in integrating them. The grid that includes the analysis of the integrated datasets in shapefile format is available here, in case it is of any use or if you would like to carry out further analysis or visualise it.
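A completeness figure of the kind quoted above can be computed along these lines. This is a sketch under assumed details – the post does not spell out exactly how the percentages were derived; here, per-cell candidate length is capped at the reference length so the result cannot exceed 100%. The inputs are per-cell road-length totals, such as those produced by a gridding step.

```python
def completeness(candidate, reference, threshold=100.0):
    """Candidate road length as a share of reference road length, over
    grid cells where the reference dataset has at least `threshold`
    metres of road. Per-cell candidate length is capped at the
    reference length, keeping the result in the range 0..1."""
    cells = [c for c, v in reference.items() if v >= threshold]
    ref_total = sum(reference[c] for c in cells)
    cand_total = sum(min(candidate.get(c, 0.0), reference[c]) for c in cells)
    return cand_total / ref_total if ref_total else 0.0
```

With MINUSTAH as the reference, calling this with the OpenStreetMap and Map Maker per-cell totals would yield figures analogous to the 64.4% and 41.2% reported above.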
While working on this comparison, it was interesting to explore the data fields in the MINUSTAH dataset, some of which are included to provide operational information, such as road condition, the length of time that it takes to travel along it, etc. These are the hallmarks of practical and operational geographical information, with details that are directly relevant to end-users in their daily tasks. The other two datasets have been standardised for universal coverage and delivery, and this is apparent in their internal data structure. The Google Map Maker schema is closer to traditional geographical information products in field names and semantics, exposing the internal engineering of the system – for example, including a country code, which is clearly meaningless when you are downloading a single country! OpenStreetMap (as provided by either CloudMade or GeoFabrik) keeps with the simplicity mantra and is fairly basic. Yet the schema is the same in Haiti as in England or any other place. So, just like Google, it takes a system view of the data and its delivery.
This means that, from an end-user perspective, while these VGI data sources were produced in a radically different way to traditional GI products, their delivery is similar to the way in which traditional products were delivered, burdening the user with the need to understand the semantics of the different fields before using the data.
In emergency situations, this is likely to present an additional hurdle for the use of any data, as it is not enough to provide the data for download through GeoCommons, GeoFabrik or Google – it is how it is going to be used that matters. Notice that the maps tell a story in which an end-user who wants to have full coverage of Haiti has to combine three datasets, so the semantic interpretation can be an issue for such a user.
So what should a user-centred design of GI for an emergency situation look like? The general answer is ‘find the core dataset that is used by the first responders, and adapt your data to this standard’. In the case of Haiti, I would suggest that the MINUSTAH dataset is a template for such a standard. You are more likely to find users of GI on the ground who are already exposed to the core dataset and familiar with it. Its fields are relevant and operational, which shows that it is more ‘user-centred’ than the other two. Therefore, it would be beneficial for VGI providers who want to help in an emergency situation to ensure that their data comply with the local de facto standard – the dataset being used on the ground – and bring their schema into line with it.
Of course, this is what GI ontologies are for, to allow for semantic interoperability. The issue with them is that they add at least two steps – define the ontology and figure out the process to translate the dataset that you have acquired to the required format. Therefore, this is something that should be done by data providers, not by end-users when they are dealing with the real situation on the ground. They have more important things to do than to find a knowledge engineer that can understand semantic interoperability…
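At its simplest, the translation step that data providers would take on amounts to remapping fields from the VGI schema onto the core dataset's schema. A minimal sketch follows; all field names here are invented for illustration, and a real mapping would also have to translate attribute values, not just field names.

```python
# Hypothetical sketch: remapping a VGI record's fields onto the schema
# of a locally used core dataset. Field names are invented examples.
FIELD_MAP = {              # VGI field      -> core-dataset field
    "highway": "road_class",
    "surface": "road_condition",
    "name": "road_name",
}

def to_core_schema(record, field_map=FIELD_MAP):
    """Keep only mappable fields, renamed to the core dataset's names;
    fields without a mapping are dropped rather than passed through."""
    return {field_map[k]: v for k, v in record.items() if k in field_map}
```

The point of doing this on the provider side is precisely that the end-user on the ground never has to see the source schema at all.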
21 December, 2009
In March 2009 Ordnance Survey together with the Human Factors group at the University of Nottingham, ran a workshop on the usability of geographic information. Bringing together a new grouping of researchers from across disciplines of Human Factors, HCI, Computer Science and Geographic Information Science, the aim of the workshop was to share perspectives on research challenges for investigating usability of data products – in particular geographic information products. In so doing we wished to help build an interdisciplinary network of contacts in this field and identify priority areas for further investigation.
Findings from the workshop were presented in a paper at AGI2009, and in a report which is available on the Ordnance Survey’s website. These confirmed that there is indeed a clear need to focus on the usability of information, as well as on the interfaces used to access it. The rationale centred on the fact that current research and established methodologies in the field of product usability focus on objects such as devices, and on computer interfaces, with much less focus on the usability of data products such as digital geographic information.
The March 2010 workshop
As with the 2009 workshop, this one day workshop aims to bring together people researching usability of data/information across different disciplines, including Human Factors, HCI, Computer Science, Geographic Information Science.
The objective will be to share case studies on theory and/or application of methods for investigating usability of data or information, in particular geographic data/ information.
We hope the workshop will:
- Identify theoretical frameworks and methodologies, through a range of case studies, for applying usability evaluation to data or information.
- Help to build further an interdisciplinary network of research contacts in this field
- Form the basis for a publication
If you would like to participate…
Please send a short position paper (around 1000 words), based on a case study where you have addressed issues of usability of geographic information, to the contact details below by 29th January 2010.
A workshop agenda and venue details will be sent once we have all position papers.
Support for reasonable travel and accommodation costs may be provided – if you may need assistance please contact me (details below).
Jenny Harding, Ordnance Survey Research
firstname.lastname@example.org Phone: +44 (0)23 8079 2052
This is a call for papers for a workshop on methods and research techniques that are suitable for geospatial technologies. The workshop is planned for the day before GISRUK 2010, and we are aware of the clashes with the AAG 2010 annual meeting, CHI 2010 and the Ergonomics Society Annual Conference. However, if you would like to contribute to the book that the commission is developing but can’t attend the workshop, please send an abstract and inform us that you can’t attend.
In the near future I’ll publish information about another workshop in March 2010 about the usability and Human-Computer Interaction aspects of geographical information itself – see the report from the Ordnance Survey workshop earlier in 2009.
So here is the full call:
Workshop on Methods and Techniques of Use, User and Usability Research in Geo-information Processing and Dissemination
Tuesday 13 April 2010 at University College London
The Commission on Use and User Issues of the International Cartographic Association (ICA) is currently working on a new handbook specifically addressing the application of user research methods and techniques in the geodomain.
In order to share experiences and interesting case studies a workshop is organized by the Commission, in collaboration with UCL, on the day preceding GISRUK 2010.
CALL FOR PAPERS
While there is growing awareness within the research community on the need to develop usability engineering and use and user research methods that are suitable for geographical and spatial information and systems, to date there is a lack of organized and documented experience in this area.
We therefore invite researchers with recent experience with use, user and usability research in the broad geodomain (cartography, GIS, geovisualization, Location Based Services, geographical information, GeoWeb etc.) to present a paper specifically focusing on the research methods and techniques applied, with an aim to develop the body of knowledge for the domain.
To participate, please send an abstract of 1 page A4 at maximum containing:
- A description of the research method(s) and technique(s) applied
- A short description of the case in which they have been applied
- The overall research framework
- Contact details and affiliation of the author(s)
We are also encouraging PhD researchers to submit paper proposals and share experiences from their research. At the workshop there will be ample time for discussing the application of user research methods and techniques. Good papers may be the basis for contributions to the handbook that is planned for publication in 2011.
Abstracts should be submitted on or before 1 December 2009 to the Chairman of the Commission Corné van Elzakker ( email@example.com )
More information is available on the website of the ICA Commission on Use and User Issues and the GISRUK2010 website.
22 October, 2009
While Google wasn’t the first website to implement slippy maps – maps that are based on tiles, download progressively and allow fairly smooth user interaction – it does deserve the credit for popularising them. The first version of Google Maps was a giant leap in terms of public web mapping applications, as described in our paper about Web Mapping 2.0.
In terms of usability, the slippy map increased the affordance of the map: direct manipulation for panning, clear zoom operation through predefined scales, the use of as much screen real estate for the map as possible, and the iconic, simple search box at the top. Though the search wasn’t perfect (see the post about the British Museum test), overall it offered a huge improvement in usability. It is not surprising that it became the most popular web mapping site, and the principles of the slippy map are the de facto standard for web mapping interaction.
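The tiling scheme behind slippy maps is simple to state: at zoom level z, the Web Mercator world is cut into a 2^z by 2^z grid of (typically 256-pixel) tiles, and each tile is addressed by (x, y, z). The predefined zoom scales mentioned above are exactly these discrete levels. A minimal sketch of the standard conversion from longitude/latitude to tile coordinates:

```python
# Standard Web Mercator "slippy map" tile addressing.
import math

def lonlat_to_tile(lon, lat, zoom):
    """Convert a WGS84 longitude/latitude to tile (x, y) at a zoom level.
    x grows eastward from -180 degrees; y grows southward from ~85.05 N."""
    n = 2 ** zoom                                   # tiles per side
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

For example, central London (longitude -0.1278, latitude 51.5074) falls in tile (511, 340) of the zoom-10 grid. Because each zoom step only doubles the grid, a client can fetch the handful of tiles covering the viewport and swap them progressively, which is what makes panning and zooming feel smooth.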
However, in recent months I couldn’t avoid noticing that the quality of the interface has deteriorated. In an effort to cram in more and more functionality (such as the visualisation of terrain, pictures, or StreetView), ease of use has been sacrificed. For example, StreetView uses the icon of a person on top of the zoom scale, which the user is supposed to drag and drop onto the map. It is the only such object in the interface, and it appears on the zoom scale regardless of whether it is relevant or available. When you are viewing the whole of the UK, for example, you are surely not interested in StreetView, and if you zoom to a place that wasn’t surveyed, the icon greys out after a while. There is a blue tinge to indicate where there is some coverage, but the whole interaction with it is very confusing – though it is not difficult to learn.
Even more annoying is that when you zoom to street level on the map, it switches automatically to StreetView, which I found distracting and disorientating.
There are similar issues with Google Earth – compare versions 4 and 5 in terms of ease of use for novice users, and my guess is that most of them will find 4 easier to use. The navigation both above the surface and at surface level is anything but intuitive in version 5. While in version 4 it was clear how to tilt the map, this is not the case in 5.
So maybe I should qualify what I wrote previously. There seems to be a range here: it is not universally correct to say that the new generation of geographical applications is very usable just because it belongs to the class of ‘neogeography’. Maybe, as ‘neogeography’ providers gain experience, they are falling into the trap of adding functionality for its own sake, and are slowly but surely destroying the advantages of their easy-to-use interfaces… I hope not!