As noted in the previous post, which focused on the linkage between GIS and Environmental Information Systems, the Eye on Earth Summit took place in Abu Dhabi from 12 to 15 December 2011 and focused on ‘the crucial importance of environmental and societal information and networking to decision-making’. Throughout the summit, two aspects of public access to environmental information were discussed extensively. On the one hand, Principle 10 of the 1992 Rio Declaration, which calls for public access to information, participation in decision making and access to justice, was mentioned frequently, including the need to continue and extend its implementation across the world. On the other, the growing importance of citizen science and crowdsourced environmental information was highlighted as a way to engage the wider public in environmental issues and to contribute to the monitoring and understanding of the environment. The two were not presented or discussed as mutually exclusive approaches to public involvement in environmental decision making, and yet they do not fit together without a snag – so it is worth minding the gap.

As I have noted in several talks over the past three years (e.g. at the Oxford Transport Research Unit, from which the slides above were taken), it is now possible to define three eras of public access to environmental information. During the first era, between the first UN conference on the environment, held in Stockholm in 1972, where the UN Environment Programme (UNEP) was established, and the Earth Summit in Rio in 1992, environmental information was collected by experts, to be analysed by experts, and to be accessed by experts. The public was expected to accept the authoritative conclusions of the experts. During the second period, from the early 1990s until the mid-2000s and the emergence of Web 2.0, the focus turned to providing access to the information that was collected and processed by experts. This top-down delivery of information is at the centre of Principle 10:

‘Environmental issues are best handled with the participation of all concerned citizens, at the relevant level. At the national level, each individual shall have appropriate access to information concerning the environment that is held by public authorities, including information on hazardous materials and activities in their communities, and the opportunity to participate in decision-making processes. States shall facilitate and encourage public awareness and participation by making information widely available. Effective access to judicial and administrative proceedings, including redress and remedy, shall be provided.’

Notice the two emphasised sections, on access to information held by public authorities and on making information widely available: both focus on passive provision of information to the public – there is no expectation that the public will be involved in creating it.

With the growth of the interactive web (or Web 2.0) and the increased awareness of citizen or community science, new modes of data collection started to emerge, in which the information is produced by the public. Air pollution monitoring, noise samples or traffic surveys – all have been carried out independently by communities using cheap, readily available sensors, or in collaboration with scientists and experts. This is the third era of access to environmental information: produced by experts and the public, to be used by both.

Thus, we can identify three eras of access to environmental information: authoritative (1970s-1990s), top-down (1990s-2005) and collaborative (2005 onward).

The collaborative era presents new challenges. As in previous periods, the information needs to meet the required standards and be reliable and valid – which can be challenging for citizen science information. It also needs to be analysed, and many communities don’t have access to the required expertise (see my presentation from the Open Knowledge Foundation Conference in 2008, which deals with this issue). Merging information from citizen science studies with official information is challenging, too. These and other issues must be explored, and – as shown above – the language of Principle 10 might need revision to account for this new era of environmental information.

In October 2007, Francis Harvey commissioned me to write a review article for Geography Compass on Neogeography. The paper was written in collaboration with Alex Singleton at UCL and Chris Parker from the Ordnance Survey.
The paper covers several issues. Firstly, it provides an overview of the developments in Web mapping from the early 1990s to today. Secondly, in a similar way to my Nestoria interview, it explains the reasons for the changes that enabled the explosion of geography on the Web in 2005: GPS availability, Web standards, increased spread of broadband, and a new paradigm in programming APIs. These changes affected the usability of geographic technologies and started a new era in Web mapping. Thirdly, we describe several applications that demonstrate the new wave – the London Profiler, OS OpenSpace and OpenStreetMap. The description of OSM is somewhat truncated, so my IEEE Pervasive Computing paper provides a better discussion.
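To give a flavour of what this new paradigm in programming APIs means in practice, here is a minimal sketch of a map mash-up in Python, using the folium library (a wrapper around the Leaflet/OpenStreetMap stack). The library choice, coordinates and labels are my illustrative assumptions, not something taken from the paper:

    # A minimal mash-up: base-map tiles from OpenStreetMap with a
    # user-supplied data layer on top. Requires `pip install folium`.
    import folium

    # Centre the map on London (illustrative coordinates).
    m = folium.Map(location=[51.5074, -0.1278], zoom_start=12)

    # Overlay one marker; in a real mash-up this point would come from a
    # third-party data source, which is the essence of the mash-up idea.
    folium.Marker(
        [51.5246, -0.1340],
        popup='UCL (illustrative point of interest)',
    ).add_to(m)

    # Write a self-contained HTML page that anyone can open in a browser.
    m.save('mashup.html')

The point of the sketch is that a dozen lines now produce a pannable, zoomable web map – the kind of task that, before 2005, required a dedicated web-mapping server and specialist skills.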
The abstract of the paper is:

‘The landscape of Internet mapping technologies has changed dramatically since 2005. New techniques are being used and new terms have been invented and entered the lexicon such as: mash-ups, crowdsourcing, neogeography and geostack. A whole range of websites and communities from the commercial Google Maps to the grassroots OpenStreetMap, and applications such as Platial, also have emerged. In their totality, these new applications represent a step change in the evolution of the area of Internet geographic applications (which some have termed the GeoWeb). The nature of this change warrants an explanation and an overview, as it has implications both for geographers and the public notion of Geography. This article provides a critical review of this newly emerging landscape, starting with an introduction to the concepts, technologies and structures that have emerged over the short period of intense innovation. It introduces the non-technical reader to them, suggests reasons for the neologism, explains the terminology, and provides a perspective on the current trends. Case studies are used to demonstrate this Web Mapping 2.0 era, and differentiate it from the previous generation of Internet mapping. Finally, the implications of these new techniques and the challenges they pose to geographic information science, geography and society at large are considered.’

The paper is accessible on the Geography Compass website; if you don’t have access to the journal but would like a copy, email me.

An interesting issue that emerges from The Cult of the Amateur concerns Participatory GIS, or PPGIS. As Chris Dunn noted in her recent paper in Progress in Human Geography, the Participatory GIS literature makes many references to the ‘democratisation’ of GIS (together with Renee Sieber’s 2006 review, these two papers are an excellent introduction to PPGIS).

According to the OED, democratisation is ‘the action of rendering, or process of becoming, democratic’, and democracy is defined as ‘Government by the people; that form of government in which the sovereign power resides in the people as a whole, and is exercised either directly by them (as in the small republics of antiquity) or by officers elected by them. In modern use often more vaguely denoting a social state in which all have equal rights, without hereditary or arbitrary differences of rank or privilege.’ [emphasis added].
The final part – a social state without hereditary or arbitrary differences of rank or privilege – is the notion mostly intended when advocates of Web 2.0 use the term, and it seems that in this notion of democratisation, the erasure of hereditary or arbitrary differences is extended to expertise and to hierarchies in the media and in knowledge production. In some areas, Web 2.0 actively erodes the differentiation between experts and amateurs, using mechanisms such as anonymous contributions that hide from the reader any information about who is contributing, what their authority is and why we should listen to them.
As Keen notes, doing away with social structures and equating amateurs with experts is actually not a good thing in the long run.
This brings us back to Participatory GIS – the PPGIS literature discusses the need to ‘level the field’ and to deal with power structures and inequalities in involvement in decision making – and this is exactly what we are trying to achieve in the Mapping Change for Sustainable Communities project. We also know very well from the literature that individuals and groups are willing to invest time and effort in understanding complex issues and, as a result, can become quite expert. For example, the work of Maarten Wolsink on NIMBYs shows that this very local focus is not so parochial after all.
I completely agree with the way Dunn puts it (pp. 627-8):

‘Rather than the ‘democratization of GIS’ through th[e] route [of popularization], it would seem that technologizing of deliberative democracy through Participatory GIS currently offers a more effective path towards individual and community empowerment – an analytical as opposed to largely visual process; an interventionist approach which actively rather than passively seeks citizen involvement; and a community-based as opposed to individualist ethos.’

Yet, what I’m taking from Keen is that we also need to rethink the role of the expert within Participatory GIS – at the end of the day, we are not suggesting we do away with planning departments or environmental experts.
I don’t recall seeing much about how to define the role of experts, or how to integrate hierarchies of knowledge into Participatory GIS processes – potentially an interesting research topic?

Continuing to reflect on Keen’s The Cult of the Amateur, I can’t fail to notice how Web 2.0 influences our daily lives – from the way we implement projects to the role of experts and non-experts in the generation of knowledge. Some of the promises of Web 2.0 are problematic – especially the claim of ‘democratisation’.

Although Keen doesn’t discuss this point, Jakob Nielsen’s analysis of ‘Participation Inequality on the Web’ is pertinent here. As Nielsen notes, on Wikipedia 0.003% of users contribute two-thirds of the content, a further 0.2% contribute something, and 99.8% just use the information. Blogs are thought to follow a 95-5-0.1 rule (95% just read, 5% post infrequently, 0.1% post regularly). In blogs, this posting inequality is compounded by readership inequalities on the Web (power laws operate in this domain, too – the top blogs are read by far more people than the rest).
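To see how a heavy-tailed distribution produces this kind of concentration, here is a small Python sketch; the Zipf exponent, seed and population size are illustrative assumptions of mine, not Nielsen’s data:

    import numpy as np

    # Simulate contribution counts for one million users, drawn from a
    # heavy-tailed (Zipf) distribution; an illustrative assumption.
    rng = np.random.default_rng(seed=42)
    contributions = rng.zipf(a=2.0, size=1_000_000)
    contributions.sort()

    # Share of all contributions produced by the most active 0.1% of users.
    total = contributions.sum()
    top_share = contributions[-1_000:].sum() / total
    print(f'Top 0.1% of users account for {top_share:.0%} of contributions')

With an exponent like this, a fraction of a percent of the simulated users routinely accounts for the majority of the contributions – exactly the shape that Nielsen describes.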

This aspect of access and influence means that the use of the word ‘democratisation’ is, to a large extent, a misnomer. If anything, it is a strange laissez-faire democracy in which a few plutocrats rule – not a democracy of the type that I’d like to live in.

I have just finished reading Andrew Keen’s The Cult of the Amateur, which, together with Paulina Borsook’s Cyberselfish, provides quite a good antidote to the overexcitement of The Long Tail, Wikinomics and a whole range of publications about Web 2.0 that marvel at the ‘democratisation’ capacity of technology. Even if Keen’s and Borsook’s books are seen as dystopian (and in my opinion they are not), I think that, through their popularity, these critical analyses of current online culture are very valuable in encouraging reflection on how technology influences society.

The need for critical reflection on technology and society stems from the fact that most of society seems to accept the ‘common-sense’ perspective that technology is a neutral, ‘value-free’ human activity (values here in the sense of guiding principles in life) – that it can be used for good ends or bad ones, but does not by itself encapsulate any values.

In contrast, I personally prefer Andrew Feenberg’s analysis in Questioning Technology and Transforming Technology, where he suggests that a more complete attitude towards technology must accept that technology encapsulates certain values, and that these values should be taken into account when we evaluate the impact of new technologies on our lives.

In Feenberg’s terms, we should not separate means from ends, and we should understand how certain cultural values influence technological projects and end up integrated in them. For example, Wikipedia’s decision to ‘level the playing field’, so that experts have no more authority in editing content than other contributors, should be seen as an important value judgement, suggesting that expertise is not important or significant, or that experts cannot be trusted. Such a point of view has an impact on a tool that is widely used, and therefore influences society.
