AAG 2015 notes – day 3 – Civic Technology, Citizen Science, Crowdsourcing and Mapping

The sessions today covered civic technology, citizen science, and new directions in mapping – open source, crowdsourcing, and big data.

First, Civic technology: governance, equity and inclusion considerations, with Pamela Robinson – Ryerson University (chair), Peter A. Johnson – University of Waterloo, Teresa Scassa – University of Ottawa, and Jon Corbett – University of British Columbia-Okanagan. The discussant was Betsy Donald – Queen’s University.

The panellists’ backgrounds are in participatory mapping (Jon), government use of open and geoweb tools (Peter), law (Teresa), urban planning (Pamela), and geography (Betsy).

First question for the panel: what are the challenges for civic technology in supporting government?

Peter – taking a technology perspective. Looking at the Chicago open data site: it has a featured dataset that you can download as CSV or view on a map, but making sense of it requires specialised knowledge. The ‘problem landlord’ dataset only shows points on the map, with no explanation of what they mean or how the data came about. The focus is on tech-savvy users who will access open data, with an assumption that tech intermediaries will use it for civic purposes. If that is the case, shouldn’t it be city staff who use the data and act as infomediaries? Pamela – looked at civic hackathons (see yesterday). City staff are asked to make a business case for open data and hackathons, which imports business thinking into something that is not about monetisation but about civic engagement. It’s a strange tool for a process whose aims are not financial. There is also pressure to demonstrate a ‘killer app’ outcome, when hackathons are really about bringing together people with tech knowledge and civic-minded people. Teresa – open data is equated with ‘free’: free of charge, free of regulation, free of limitations on use. But there are costs in making the data free in the first place, and governments often give little thought to these costs. Datasets are being opened without due thought to privacy concerns. Is open data just a subsidy to companies that used to pay for it? ‘Free from regulations’ is part of a neoliberal view of removing the bureaucracy that is holding back the market, but some of that regulation exists for social justice reasons – e.g. requirements about accessibility and language, and rules that protect vulnerable people against abuse. So the concept of ‘free’ is not simple. And what happens when government passes data to the private sector – does it circumvent its own regulations, letting the private sector use the data without those protections? Jon – problematised the question.
Consider the relationship between municipal and state government. There are examples in Canada that don’t fit the model – e.g. First Nations governance, where the data question is not that simple. The other issue is the framing of ‘support to government’ – can the open data movement be used to resist and challenge government? Renee – the non-profit Open North used data to cause problems for the city and demonstrated corruption in government – a very tech-savvy organisation. Pamela – some forms of collaboration mean getting out of the way of community organisations to let them do the work. Jon – there can be a high level of cynicism about how the data is going to be used. Access to data is not the same as accessibility. There are issues of scale: large cities have enough capacity, but in smaller cities neither government nor civil society has the capacity to deal with data. Pamela – in Toronto, communities of urbanists come in search of tech support, and the lines blur between being a civil servant and civic engagement in one’s free time. Thomas (audience) – in Chicago, people built a visualisation of the county budget and exposed data about school closures. There are also good applications, such as streamlining the expunging of negative records to allow people to develop new careers (http://www.expunge.io/). There was also the city lands project – http://largelots.org/ – where you can buy a lot for $1, although ownership can become a burden for the people involved. Renee – heterogenise the state: don’t think of local government as ‘them’; it is an organisation in which individual people make decisions about opening data or resisting opening it. It is also worth looking at prior data acquisition – there are examples of extracting data from the city, and of relationships, that predate open data. Betsy – what does open government mean? What about digital divides – gender, age, socio-economic background?
Mike – there is a need to consider a Marxist notion of ‘free’: it is not free in reality, as it just allows people to be consumed into a system with different power relationships. I raised the point that ‘open government’ comes from a view of ‘government as platform’. Renee – ‘civic tech’ is a term that has mutated over the years. Teresa – open government rhetoric is about transparency and accountability, while the agenda is about innovation and market solutions to civic problems.
The second question was: how would we start to evaluate the impact of civic technology? Peter – what metrics will we use? Money (time saved, internal and external benefits)? Eyeballs (counting hits)? Municipal staff want to measure what they do and are looking to justify their work. Jon – we need to evaluate impact and value, and think about scale and the point of evaluation: the scale of interaction and intervention. Should projects be evaluated by the number of people involved? We usually evaluate during the project lifetime but fail to do longer analysis – we need to understand long-term impacts, beyond page views. Teresa – which impact: economic? use? engagement? From a privacy perspective, when government encourages engagement through Google or Twitter, you are actually giving data to Big Data engines, not just the government. There is an erosion of the public/private distinction in services, leading to an erosion of citizen rights and recourse. When transit apps record information, plenty of it leaks to private sector companies that don’t have the same responsibilities and obligations. Pamela – civic for whom? When income distribution within cities is so unequal, we need to understand the digital divide at the city level. Teresa – some of the legal infrastructure for data protection and access to information struggles when the private sector works so closely with government. Betsy – there are groups in big tech companies that were curious about geographers and what they do, but have no idea about social science. Privacy has been given away. Jon – there are windows of opportunity around the edifices of open data – where are we going to end up? Teresa – contracting out to external companies can lead to issues of data ownership, and that requires managing at the contracting stage.

Citizen Science and Geoweb, with Renee Sieber (Chair) with talks covering a range of areas – from cartography to bird watching.

Andrea Minano – University of Waterloo – Geoweb Tools for Climate Change Adaptation: A Case Study in Nova Scotia’s South Shore. Her work explores the link between participation and citizen science. Climate change adaptation is about understanding impacts in a local context; there is a lack of risk awareness, and it is also a political issue. The participatory Geoweb can be used to display and share information online. She carried out research in Nova Scotia, which relies on fisheries and where most people live by the shore. People are aware – they have seen climate change in front of them – and Municipal Climate Change Action Plans (MCCAPs) are being developed. Each of the 5 municipalities she worked with created a plan, and all are concerned with flooding. They had 3D LiDAR data and could visualise flood predictions and return periods, but they didn’t know how to use it or what to do with it. She created and tested two prototypes, using satellite images as a backdrop and making the LiDAR data usable on the web. AdaptNS supports multiple geographic and temporal scales, and allows people to indicate their concerns on the map. She ran a two-hour workshop with 11 participants. People were concerned about critical infrastructure – a single road that might be flooded. The tool helps people understand what climate change means, and connects to adaptation discourse at wider scales.
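A flood screen of the kind AdaptNS visualises can be sketched in a few lines (a hypothetical illustration with made-up elevations, not the project’s actual code): given a grid of LiDAR-derived elevations and a projected flood level, flag the cells at risk.

```python
# Hypothetical sketch of a flood-exposure screen over LiDAR elevations.
# Grid values are elevations in metres above mean sea level (invented data).
elevation_grid = [
    [0.4, 0.9, 1.6, 2.8],
    [0.2, 0.7, 1.2, 2.1],
    [0.1, 0.5, 1.1, 1.9],
]

def flooded_cells(grid, flood_level_m):
    """Return (row, col) of every cell at or below the flood level."""
    return [
        (r, c)
        for r, row in enumerate(grid)
        for c, elev in enumerate(row)
        if elev <= flood_level_m
    ]

# A 1.0 m flood scenario (e.g. storm surge plus sea-level rise):
at_risk = flooded_cells(elevation_grid, 1.0)
print(len(at_risk), "of", sum(len(r) for r in elevation_grid), "cells at risk")
```

Changing `flood_level_m` is what lets a tool like this show different return periods or sea-level-rise scenarios over the same elevation data.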

Jana M. Viel – UW-Milwaukee – Habitat Preferences of the Common Nighthawk (Chordeiles minor) in Cities and Villages in Southeastern Wisconsin. The nighthawk is active at dawn and dusk – a neotropical migrant, it doesn’t do much during the day. They nest on the ground or on flat gravelled roofs, and have experienced a slow population decline over the last 40 years. Maybe there is a lack of roof substrate – not enough gravelled roofs – and volunteers started installing gravelled sections on roofs, with little success. In Wisconsin, researchers have tried to understand the decline with data from the Breeding Bird Survey, the Wisconsin Nightjar Survey, and the Wisconsin Breeding Bird Atlas – but these studies don’t cover urban areas and lack observations at dusk. There is also the limitation that the study must be done in the months when the birds are present. The aim of the research is to help monitoring and improve methodology. The study measured some variables in the field and used other geographic information for the analysis. There was help from both volunteers and organisations, with special volunteer training – easy, because it’s one type of bird that people should recognise. Volunteers’ reasons for participating: love of birds, fun, helping conservation. Half of the participants are retired, and most are educated or from the not-for-profit sector. Some volunteers drop out, but not too many. People used phones to navigate to survey sites and did point counts with paper forms; information was also recorded on eBird. Results – 31,000 survey hours in total, with 1,412 surveys in which 98 nighthawks were detected. Issues: no data, problems with Google Maps, unfamiliarity with technology. In summary – success in carrying out a baseline survey. Needed: a clear research question, a clear protocol, training and resources, an active coordinator, and researchers who update volunteers about the analysis, do outreach, and thank them. Sustainability – who will continue the work, down to answering phone calls – is an issue.

Kevin Sparks, Alexander Klippel & Jan Oliver Wallgrün – The Pennsylvania State University, with David Mark – NCGIA & Department of Geography, University at Buffalo – Assessing Environmental Information Channels for Citizen Science Land Cover Classification. Kevin discussed the goals of COBWEB – a project enabling citizens to collect data and working with them on data quality – and the Geo-Wiki project, which allows people to manage information about land cover. Kevin used the Degree Confluence Project: he collected 770 photos, linked them to a land cover database with a corresponding value for each one, and took 7 samples from each of the 11 land cover classes. He then compared lay participants vs educated lay participants vs experts, asking people to select a class and say how confident they are. One experiment used ground-based photos, another ground plus aerial images; participants were recruited through AMT. There was 45.97% agreement with the National Land Cover Data; when shown the aerial image, this dropped to 42.97%; but when filtering by confidence level, agreement rises to 71.91%. Across variation in participants, interface and stimuli, you see similar patterns that are influenced not by these factors but by the semantic nature of land cover classification. Aerial photos favour more homogenised classes.
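The confidence effect Kevin reported can be illustrated with a toy calculation (the responses below are invented, not the study’s data): overall agreement against a reference classification versus agreement restricted to high-confidence answers.

```python
# Invented crowd responses: (participant label, reference label, confidence 1-5).
responses = [
    ("forest", "forest", 5),
    ("forest", "shrubland", 2),
    ("developed", "developed", 4),
    ("cropland", "grassland", 1),
    ("water", "water", 5),
    ("grassland", "grassland", 4),
    ("shrubland", "forest", 2),
    ("developed", "developed", 5),
    ("water", "wetland", 4),
]

def agreement(items):
    """Fraction of responses matching the reference label."""
    return sum(lab == ref for lab, ref, _ in items) / len(items)

overall = agreement(responses)
confident = agreement([r for r in responses if r[2] >= 4])
print(f"overall: {overall:.2f}, high-confidence only: {confident:.2f}")
```

Filtering by self-reported confidence mirrors the pattern in the talk: agreement among high-confidence answers is noticeably higher than agreement over all responses.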

Robert Edsall – Idaho State University – Case Studies in Citizen-enabled Geospatial Inquiry. Exploring from a cartographic perspective, he is interested in how society interacts with maps, and how maps act as intermediaries in citizen science projects. From the Citizen Science 2015 conference, he noticed a growing understanding of the potential of citizen science projects to be collaborative or co-created. In geography, we have had success with VGI; asking people to develop hypotheses is less developed, although that has happened in PPGIS and PGIS. Participatory GIS and citizen science are parallel efforts in helping shape the environment. Rob is interested in visualisation and visual analytics, and collaborative citizen science does seem a good match for visual analytics: although the tools are sometimes designed for experts, they can be used by citizens. Incorporating serious games seems to work in some cases and attracts citizen scientists (McGonigal 2012). We can think of ‘volunteered geographic analysis’. Two examples: in nature mapping in Jackson Hole, people were invited to analyse their own data, but after a while they disengaged – they didn’t want to expose their data, among other reasons. The second case study is about historical data – images from different collections that are not catalogued or geolocated. The Metadata Games project helps people to locate this information; people can specify ranges and locations. We can enable engaged citizens at higher levels of the analysis.

New Directions in Mapping 2: Open Source, Crowd-sourcing and “Big Data”. With Matthew Zook – University of Kentucky (chair), Sean Gorman – Timbr.io Inc., Andrew Hill – CartoDB, Courtney Claessens – Esri, Randy Meech – Mapzen, and Charlie Lloyd – Mapbox.

Questions: what is the future of the map and the mappable? Just because something can be mapped doesn’t mean it needs to be mapped. The definition of what a map is, and what is mappable, changes. Mapping is becoming so pervasive that it ‘disappears’. There is a fear of a view of the world that is not representative – only of the digital haves. If the future of the map is crowdsourced, what should we do about places that are left out? Historically, the map was always biased.

What does ‘open mapping’ mean? Is open source/FOSS still a real thing, and how do we maintain an open mapping ethos? There was agreement on the panel that open source is here to stay, and a belief that open mapping – where companies and different bodies collaborate to share data openly – will win over proprietary datasets.

How do we address the uneven nature of crowd-sourcing and its impact on what and where is mapped? There is an assumption that people want to map empty areas and that it is less motivating when the map is full (I wonder if that is true). There are issues of what to do if the crowd falsifies information. It is not either/or – we should have a hybrid of government and crowdsourced information together. We need to understand that the community is diverse and the data is diverse: imports, power users, one-timers, locals and people from far away.

How might we push geographers/mappers ‘beyond the geotag’ and consider other (and non-spatial) aspects of data? By extracting just the geo part of data from its context, we lose a lot of important information. We need to tell stories about geography and integrate narratives – to tell real stories more naturally. We also need to consider relational mapping: we are in a network society, extending into different spaces. Adding meaning to mapping has remained difficult – maybe we should promote ‘slow mapping’. People do create maps and get meaning from them. With fast mapping, it’s easy to make a bad map that doesn’t give you any information that helps you understand a place. What are people getting out of maps? What happens when you produce bad maps, and how do you tell that to people?

Happy 10th Birthday, OpenStreetMap!

Today, OpenStreetMap celebrates 10 years of operation, counted from the date of its registration. I heard about the project in its early stages, mostly because I knew Steve Coast when I was studying for my Ph.D. at UCL. As a result, I was also able to secure the first ever research grant focused on OpenStreetMap (and hence Volunteered Geographic Information – VGI), from the Royal Geographical Society in 2005. A lot can be said about being in the right place at the right time!

OSM Interface, 2006 (source: Nick Black)

Having followed the project over this decade, there is much to reflect on – open research questions, things that the academic literature has failed to notice about OSM, or things that we do know about OSM and VGI because of the openness of the project. However, as I was preparing a talk for the INSPIRE conference, I started thinking about the start dates of OSM (2004), TomTom Map Share (2007), Waze (2008), and Google Map Maker (2008). While there are conceptual and operational differences between these projects, as ‘knowledge-based peer production systems’ they are fairly similar: all rely on a large number of contributors; all combine a large group of contributors who each contribute a little with a much smaller group of committed contributors who do the more complex work; and all are about mapping. Yet OSM started 3 years before these other crowdsourced mapping projects, and all of them have more contributors than OSM.
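The shared ‘many contribute little, few do most of the work’ pattern can be illustrated with a synthetic long-tail distribution (illustrative numbers only, not measurements from any of these projects):

```python
# Synthetic edit counts following a Zipf-like long tail, illustrating
# participation inequality in peer-production mapping projects.
n_contributors = 1000
edits = [round(1000 / rank) for rank in range(1, n_contributors + 1)]

total = sum(edits)
top_10_percent = edits[: n_contributors // 10]  # list is already sorted descending
share = sum(top_10_percent) / total
print(f"top 10% of contributors make {share:.0%} of all edits")
```

Under this toy distribution, roughly two-thirds of all edits come from the top tenth of contributors – the kind of skew repeatedly reported for peer-production systems.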

Since OSM is described as the ‘Wikipedia of maps’, the analogy I started to think of was a parallel history in which, in 2001, as Wikipedia starts, Encarta and Britannica look at the upstart and set up their own crowdsourcing operations, so that within 3 years they are up and running. By 2011, Wikipedia continues as a copyright-free encyclopedia with a sizeable community, but Encarta and Britannica have more contributors and more visibility.

Knowing OSM closely, I felt that this is not a fair analogy. While there are organisational and contribution practices that can be used to claim ‘it’s the fault of the licence’ or ‘it’s because of the project’s culture’, and thereby justify this unflattering analogy, I sensed that something else is needed to explain what is going on.

Then, during my holiday in Italy, I was enjoying the offline TripAdvisor app for Florence, which uses OSM for navigation (in contrast to the online app, which uses Google Maps), and an answer emerged. Within the OSM community, from the start, there has been some tension between the ‘map’ and ‘database’ views of the project. Is it about collecting the data to make beautiful maps, or about building a database that can be used for many applications?

Saying that OSM is about the map means the analogy is correct, as it is then very similar to Wikipedia – you want to share knowledge, so you put it online with a system that allows you to display it quickly, with tools that support easy editing and information sharing. If, on the other hand, OSM is about a database, then OSM is about something used at the back-end of other applications, much like a DBMS or an operating system. Although there are tools that help you do things easily and quickly and check the information you’ve entered (e.g. displaying the information as a map), the main goal is building the back-end.

Maybe a better analogy is to think of OSM as the ‘Linux of maps’: an infrastructure project that is expected to have a lot of visibility among the professionals who need it (system managers in the case of Linux, GIS/Geoweb developers for OSM), with a strong community that supports and contributes to it. In the same way that some tech-savvy people know about Linux but most people don’t, I suspect that TripAdvisor offline users don’t notice that they are using OSM – they are just happy to have a map.

The problem with the Linux analogy is that OSM is more than software – it is a database of information about geography from all over the world (and therefore the Wikipedia analogy has its place). So it is somewhere in between. In a way, it provides a demonstration of the common claim in GIS circles that ‘spatial is special’. Geographical information is infrastructure in the same way that operating systems or DBMSs are, but in this case it’s not enough to create an empty shell that can be filled in for a specific instance – a significant amount of base information is needed before you can start building your own application with additional information. This is also the philosophical difference that makes the licensing issues more complex!

In short, both the Linux and Wikipedia analogies are inadequate to capture what OSM is. It has been illuminating and fascinating to follow the project over its first decade, and may it continue successfully for more decades to come.

Assertions on crowdsourced geographic information & citizen science #1

Looking across the range of crowdsourced geographic information activities, some regular patterns are emerging, and it might be useful to start noticing them as a way to think about what is and isn’t possible in this area. Since I don’t like the concept of ‘laws’ – as in Tobler’s first law of geography, stated as ‘Everything is related to everything else, but near things are more related than distant things’ – I will call them assertions. There is also something apt about using the word ‘assertion’ in the context of crowdsourced geographic information, as it echoes Mike Goodchild’s differentiation between asserted and authoritative information. So not laws, just assertions – or even observations.

The first one, is rephrasing a famous quote:

‘You can be supported by a huge crowd for a very short time, or by a few for a long time, but you can’t have a huge crowd all of the time (unless data collection is passive).’

So the Christmas Bird Count can have tens of thousands of participants for a short time, while the number of people who operate weather observation stations will be much smaller. The same is true for OpenStreetMap – for crisis mapping, which is a short-term task, you can get many contributors, but for the regular updating of an area under usual conditions there will be only a few.

The exception to the assertion is passive data collection, where information is collected automatically through the logging of information from a sensor – for example, recording GPS tracks to improve navigation information.

OSM Haiyan

Neogeography and the delusion of democratisation

At the end of 2010, Matt Wilson (University of Kentucky) and Mark Graham (Oxford Internet Institute) started coordinating a special issue of Environment and Planning A dedicated to ‘Situating Neogeography’, asking ‘How might we situate neogeography? What are the various assemblages, networks, ecologies, configurations, discourses, cyborgs, alliances that enable/enact these technologies?’

My response to this call is a paper titled ‘Neogeography and the delusion of democratisation’, which has finally been accepted for publication. Below is an excerpt from the introduction, to give a flavour of the discussion:

“Since the emergence of the World Wide Web (Web) in the early 1990s, claims about its democratic potential and practice are a persistent feature in the discourse about it. While awareness of the potential of ‘anyone, anytime, anywhere’ to access and use information was extolled for a long while (for an early example see Batty 1997), the emergence of Web 2.0 in the mid-2000s (O’Reilly 2005) increased this notion. In the popular writing of authors such as Friedman (2006), these sentiments are amplified by highlighting the ability of anyone to ‘plug into the flat earth platform’ from anywhere and anytime.

Around the middle of the decade, the concept of neogeography appeared and the ability to communicate geographic information over the Web (in what is termed the GeoWeb) gained prominence (see Haklay et al. 2008). Neogeography increased the notion of participation and access to geographic information, now amplified through the use of the political term democratisation. The following citations provide a flavour of the discourse within academic and popular writing – for example, in Mike Goodchild’s declaration that ‘Just as the PC democratised computing, so systems like Google Earth will democratise GIS’ (quoted in Butler 2006), or Turner’s (2006) definition of neogeography as ‘Essentially, Neogeography is about people using and creating their own maps, on their own terms and by combining elements of an existing toolset. Neogeography is about sharing location information with friends and visitors, helping shape context, and conveying understanding through knowledge of place’.  This definition emphasises the wide access to the technology in everyday practice. Similar and stronger statements can be found in Warf and Sui (2010) who clarify that ‘neogeography has helped to foster an unprecedented democratization of geographic knowledge’ (p. 200) and, moreover, ‘Wikification represents a significant step forward in the democratization of geographic information, shifting control over the production and use of GIS data from a handful of experts to large groups of users’ (ibid.). Even within international organisations this seems to be the accepted view as Nigel Snoad, strategy adviser for the communications and information services unit of the United Nations Office for the Coordination of Humanitarian Affairs (OCHA), stated: ‘On the technology side, Google, Microsoft and OpenStreetMap have really democratized mapping’ (cited in Lohr 2011).

However, what is the nature of this democratisation and what are its limits? To what extent do the technologies that mediate the access to, and creation of, geographic information allow and enable such democratisation?

To answer these questions, we need to explore the meaning of democratisation and, more specifically, within the context of interaction between people and technology. According to the Oxford English Dictionary, democratisation is ‘the action of rendering, or process of becoming, democratic’, and democracy is defined as ‘Government by the people; that form of government in which the sovereign power resides in the people as a whole, and is exercised either directly by them (as in the small republics of antiquity) or by officers elected by them. In modern use often more vaguely denoting a social state in which all have equal rights, without hereditary or arbitrary differences of rank or privilege’ [emphasis added]. A more colloquial notion of democratisation, and a much weaker one, is making a process or activity that used to be restricted to an elite or privileged group available to a wider group in society and potentially to all. For example, with mobile telephony now available across the globe, the statement ‘mobile telephony has been democratised’ aims to express the fact that, merely three decades ago, only the rich and powerful members of Western society had access to this technology.

Therefore, it is accepted from the start that the notion of democratisation cited above is more about the potential of neogeography to make the ability to assemble, organise and share geographical information accessible to anyone, anywhere and anytime and for a variety of purposes than about advancing the specific concept of democracy. And yet, it will be wrong to ignore the fuller meaning of the concept. Democratisation has a deeper meaning in respect of making geographic information technologies more accessible to hitherto excluded or marginalised groups in a way that assists them to make a change in their life and environment. Democratisation evokes ideas about participation, equality, the right to influence decision making, support to individual and group rights, access to resources and opportunities, etc. (Doppelt 2006). Using this stronger interpretation of democratisation reveals the limitation of current neogeographic practices and opens up the possibility of considering alternative development of technologies that can, indeed, be considered as democratising.

To explore this juncture of technology and democratisation, this paper relies on Andrew Feenberg’s critical philosophy of technology, especially as explored in his Questioning Technology (1999) and Transforming Technology (2002), which is useful as he addresses issues of democratisation and technology directly. For readers who are not familiar with the main positions within philosophy of technology, a very brief overview – based on Feenberg’s interpretation (1999) – is provided. This will help to explain his specific critique and suggestion for ‘deep democratisation’ of technology.

Equipped with these concepts, attention is turned to the discussion about the democratic potential of Geographic Information Systems (GIS), which appears in early discussions about GIS and society in the 1990s, and especially to the discussions within the literature on (Public) Participatory GIS (PPGIS/PGIS – assumed to be interchangeable here) and critical GIS. As we shall see, discussions about empowerment, marginalisation and governance are central to this literature from its inception and provide the foundations to build a deeper concept of democratisation when considering neogeographic practices.

Based on this historical understanding, the core of the paper explores why it is that neogeographic practices are assumed to be democratising and, more importantly, what the limitations are on their democratic potential. To do that, a hierarchy of ‘hacking’ – that is the artful alteration of technology beyond the goals of its original design or intent – is suggested. Importantly, here ‘hacking’ does not mean the malicious alteration of technology or unauthorised access to computer systems, or the specific culture of technology enthusiasts (‘hacker culture’). The term is used to capture the first and second instrumentation that Feenberg (1996, 2002) describes.  As we shall see, by exploring the ability to alter systems, there is some justification in the democratisation claims of neogeography as it has, indeed, improved the outreach of geographic technologies and opened up the potential of their use in improving democratic processes, but in a much more limited scope and extent. The paper concludes with observations on the utilisation of neogeographic technologies within the participatory process that aim to increase democratisation in its deeper sense.”

The paper’s concepts are based on a talk that I originally gave in 2008 as part of the World University Network seminar on neogeography. A final note on the length of time that some ideas need from first emerging until publication: even with the current imagination of ‘fast-moving technology’, there is value in thinking through an idea over 4 years.

Google Geo applications – deteriorating interfaces?

While Google wasn’t the first website to implement slippy maps – maps that are based on tiles, download progressively and allow fairly smooth user interaction – it does deserve the credit for popularising them. The first version of Google Maps was a giant leap in terms of public web mapping applications, as described in our paper about Web Mapping 2.0.

In terms of usability, the slippy map increased the affordances of the map: direct manipulation for panning, clear zooming through predefined scales, use of as much screen real estate for the map as possible, and the iconic, simple search box at the top. Though the search wasn’t perfect (see the post about the British Museum test), overall it offered a huge improvement in usability. It is not surprising that it became the most popular web mapping site, and the principles of the slippy map are the de facto standard for web mapping interaction.
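The tiling scheme behind the slippy map is well documented: at zoom level z, the Web Mercator world is cut into 2^z × 2^z tiles, and a longitude/latitude pair maps to tile indices with the standard formula used by OpenStreetMap-style tile servers.

```python
import math

def lonlat_to_tile(lon_deg, lat_deg, zoom):
    """Standard slippy-map tile indices (Web Mercator) for a lon/lat and zoom,
    as documented for OpenStreetMap-style tile servers."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Central London at zoom 10:
print(lonlat_to_tile(-0.1278, 51.5074, 10))
```

Tile URLs are then typically of the form base/{z}/{x}/{y}.png, which is why each zoom step quadruples the number of tiles the server must hold.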

However, in recent months I couldn’t avoid noticing that the quality of the interface has deteriorated. In the effort to cram in more and more functionality (such as terrain visualisation, pictures, or StreetView), ease of use has been sacrificed. For example, StreetView uses the icon of a person on top of the zoom scale, which the user is supposed to drag and drop on the map. It is the only such object on the interface, and it appears on the zoom scale regardless of whether it is relevant or available. When you are viewing the whole of the UK, for example, you are surely not interested in StreetView, and if you zoom to a place that wasn’t surveyed, the icon greys out after a while. There is a blue tinge to indicate where there is some coverage, but the whole interaction with it is very confusing – though not difficult to learn.

Even more annoying is that when you zoom to street level on the map, it switches automatically to StreetView, which I found distracting and disorientating.

There are similar issues with Google Earth – compare versions 4 and 5 in terms of ease of use for novice users, and my guess is that most of them will find 4 easier to use. The navigation both above the surface and at surface level is anything but intuitive in version 5. While in version 4 it was clear how to tilt the map, this is not the case in 5.

So maybe I should qualify what I wrote previously. There seems to be a range here, so it is not universally correct to say that the new generation of geographical applications are very usable just because they belong to the class of ‘neogeography’. Maybe, as ‘neogeography’ providers are getting more experienced, they are falling into the trap of adding functionality for the sake of it, and are slowly, but surely, destroying the advantages of their easy-to-use interfaces… I hope not!

Neo and Paleo GIS – is the difference in the usability culture?

At several recent GIS industry and academic conferences, I was not very surprised to see GIS presentations in which the presenter started by talking about ‘usability enhancements’ and ‘we took usability very seriously in this application’ but failed to deliver. In contrast to such statements, the application itself was breaking basic usability guidelines – such as failing to give the user any feedback about what the system is doing, or failing to group related elements together in the interface, among other problems.

Then I came across a report from 1991, which talks about User-Centred Graphical User Interface for GIS and notes that ‘It is not unusual for more than 60% of the code in a complex software system to be dedicated purely to the user interface. This stands in sharp contrast to the 35% dedicated to the user interface in early GISs’. This is still true in spirit, if not in percentage. GIS applications require sophisticated data manipulation, and most of the development effort of GIS vendors or Open Source GIS projects is focused on the information itself and its manipulation. The interface is probably seen as an add-on – the ‘fun’ bit of the development that you leave to the end after cracking all the engineering challenges that make the application work.

What I would argue is that, as a result, GIS as an industry doesn’t have a ‘usability culture’. Compare that to Apple, where usability and interaction with users have been at the centre of what they are doing since they started. Or with e-commerce, which also shows a ‘usability culture’ because, if you fail on usability, there is a direct link to loss of sales. These are examples of organisations and sectors that know usability is important and commit resources to ensuring that their products are usable.

In contrast, in the GIS industry there is a feeling that usability is a ‘nice to have’ element of the development process, so there is no practice of involving usability experts in software development projects. There are relatively few examples of user-centred design in GIS, and they are mostly in research papers, very rarely in practice.

Neogeography is changing this somewhat, since parts of it come from companies and developers who see the value in understanding their users. Maybe the competition between the existing developers of GIS and neogeography companies will cause the former to change and become more serious about usability.

Public geographies and accidental geographers

In the post about the Engaging Geography seminar, I discussed how different levels of engagement with geography can be used to determine whether a person using a system should be considered a ‘public geographer’ or just a consumer of geographical information in a passive and ephemeral way.

Thinking more broadly about geotechnologies, it is appropriate to include the people who are producing many of the everyday geographical representations. Frequently, the people who produce these representations use GIS.

When thinking about the Web, it’s clear that the vast majority of the people involved in public geographies do not have any ‘formal’ geographical background. You might think that, in the case of GIS, because of the barriers to entry, the situation will be different.
This is not so. As Dave Unwin noted in his paper in 2005, many of the people operating GIS are actually ‘accidental geographers’. When you consider the number of GIS users worldwide, it is clear that only a few have gone through formal geographical education beyond basic school geography. Unwin notes that ‘accidental geographers’ have naïve conceptualisations of geography (for example, that it is all about the location of factual objects in space), a lack of understanding of spatial analysis, and sometimes a dismissive attitude towards the academic disciplines of geography or cartography.

Neogeography is putting these accidental geographers in a new light. Some users do indeed see geography as uncomplicated and GIS as ‘something that produces maps’. However, as a person is exposed to systems that deal with geography for a sustained period, she is more likely to start questioning the nature of this geography and the way that it is represented. After a while, these questions will lead to a process of learning about geographical concepts – and the fact that so much information is now available on the Web will certainly help. Sometimes, the commitment to geography might lead to joining organisations such as the AGI and maybe even becoming a Chartered Geographer (GIS).

So, in summary, there is a whole range of commitments and interests in geography, and both accidental geographers and neogeographers can be positioned along a continuum from ignorance to expert knowledge. I think that most will move through this continuum and enjoy the process of developing geographic knowledge.