8 July, 2013
The term ‘Citizen Science’ is clearly gaining recognition and use. It is now mentioned in radio and television broadcasts and social media channels, as well as at conferences and workshops. Some of the clearer signs of this growing attention include the discussion of citizen science in policy-oriented conferences such as UNESCO’s World Summit on Information Society (WSIS+10) review meeting discussion papers (see page ), the Eye on Earth user conference (see the talks here) and the launch of the European Citizen Science Association at the recent EU Green Week conference.
Another aspect of the expanding world of citizen science is the emerging questions, from those who are involved in such projects or study them, about the efficacy of the term. As is very common with general terms, some reflections on the accuracy of the term are coming to the fore – Rick Bonney and colleagues suggest using ‘Public Participation in Scientific Research‘ (significantly, Bonney was the first to use ‘Citizen Science’ in 1995); Francois Grey coined ‘Citizen Cyberscience’ to describe projects that depend on the Internet; recently Chris Lintott discussed some doubts about the term in the context of Zooniverse; and Katherine Mathieson asks if Citizen Science is just a passing fad. In our own group, there are also questions about the correct terminology, with Cindy Regalado suggesting a focus on ‘Publicly Initiated Scientific Research (PIScR)‘, and discussion of the meaning of ‘Extreme Citizen Science‘.
One way to explore what is going on is to consider the evolution of the ‘hype’ around citizen science through ‘Gartner’s Hype Cycle‘, which can be seen as a way to describe how technologies are adopted in a world of rapid communication and inflated expectations. Leaving aside Gartner’s own hype, the story that the model tells is that once a new approach (technology) emerges – because it has become possible, or because someone has reconfigured existing elements and claimed that the result is a new thing (e.g. Web 2.0) – it goes through rapid growth in attention and publicity. This continues until it reaches the ‘peak of inflated expectations’, where the expectations from the technology are unrealistic (e.g. that it will revolutionise the way we use our fridges). This must be followed by a slump, as more and more failures come to light and the promises are not fulfilled. At this stage, the disillusionment is so deep that even the useful aspects of the technology are forgotten. However, if the technology passes this stage, then, after a realisation of what is actually possible, it is integrated into everyday life and practices and used productively.
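The shape of the cycle described above can be sketched as a toy piecewise function – an illustration of my own, with arbitrary phase boundaries and values, not Gartner’s data:

```python
# Illustrative sketch only: a toy piecewise model of the hype cycle,
# mapping normalised time (0..1) to "visibility". The phase names follow
# the public description of the cycle; all numbers are arbitrary.

def hype_visibility(t: float) -> float:
    """Toy visibility curve over a technology's lifetime, 0 <= t <= 1."""
    if t < 0.2:                        # innovation trigger: rapid rise
        return 5.0 * t                 # climbs from 0 to 1.0
    elif t < 0.3:                      # peak of inflated expectations
        return 1.0
    elif t < 0.5:                      # trough of disillusionment: sharp fall
        return 1.0 - 4.0 * (t - 0.3)   # drops from 1.0 to 0.2
    elif t < 0.8:                      # slope of enlightenment: slow recovery
        return 0.2 + 1.0 * (t - 0.5)   # recovers from 0.2 to 0.5
    else:                              # plateau of productivity
        return 0.5

PHASES = ["innovation trigger", "peak of inflated expectations",
          "trough of disillusionment", "slope of enlightenment",
          "plateau of productivity"]
```

The point the curve makes is simply that the trough is shallower than the peak is high, and the plateau ends up between the two.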
So does the hype cycle apply to citizen science?
If we look at Gartner’s cycle from last September, crowdsourcing is near the ‘peak of inflated expectations’, and some descriptions of citizen science as scientific crowdsourcing clearly match the same mindset.
There is growing evidence of academic researchers entering citizen science out of opportunism, without paying attention to the commitment and work that are required to carry out such projects. With some, it seems as if they decided that they can join in because someone around them knows how to build a smartphone app or a website that will work like Galaxy Zoo (failing to notice the need for all the social aspects that Arfon Smith highlights in his talks). When you look around at the emerging projects, you can start guessing which projects will succeed or fail by looking at the expertise and approach of the people behind them.
Another cause for concern is the expectations that I noticed in the more policy-oriented events about the ability of citizen science to solve all sorts of issues – from raising awareness to changing behaviour with limited professional involvement, or reducing the resources that are needed for activities such as environmental monitoring – without an understanding that significant, sustained investment is required: a community coordinator, technical support and other aspects are needed here just as much. This concern is heightened by statements that promote citizen science as a mechanism to reduce the costs of research, creating a source of free labour, etc.
On the other hand, it can be argued that the hype cycle doesn’t apply to citizen science because of its history. Citizen science has existed for many years, as Caren Cooper describes in her blog posts. Therefore, conceptualising it as a new technology is wrong, as there are already mechanisms, practices and institutions to support it.
In addition, and unlike the technologies on Gartner’s chart, the academic projects within which citizen science happens benefit from access to what is sometimes termed patient capital, without expectations of quick returns on investment. Even with the increasing expectations of research funding bodies for explanations of how the research will lead to an impact on wider society, they do not expect the impact to be immediate (5-10 years is usually fine), and funding comes in chunks that cover 3-5 years, which provides the breathing space to overcome the ‘trough of disillusionment’ that is likely to happen within the technology sector regarding crowdsourcing.
And yet, I would guess that citizen science will suffer some examples of disillusionment from badly designed and executed projects. To get these projects right, you need a combination of domain knowledge in the specific scientific discipline, science communication to tell the story in an accessible way, technical ability to build mobile and web infrastructure, understanding of user interaction and user experience to build engaging interfaces, and community management ability to nurture and develop your communities – and we can add further skills to the list (e.g. if you want gamification elements, you need experts in games, not amateurish attempts). In short, it needs to be taken seriously, with careful consideration and design. This is not a call for gatekeepers, more a realisation that the successful projects and groups are stating similar things.
Which brings us back to the issue of the definition of citizen science and terminology. I have been following terminology arguments in my own discipline for over 20 years. I have seen people arguing about whether the data storage format for GIS should be raster or vector (answer: it doesn’t matter), arguing whether GIS is a tool or a science, or, unhappy with Geographic Information Science, resolutely calling it geoinformation, geoinformatics, etc. Even in the minute sub-discipline that deals with participation and computerised maps, there are arguments about Public Participation GIS (PPGIS) versus Participatory GIS (PGIS). Most recently, we have been debating the right term for mass contribution of geographic information: volunteered geographic information (VGI), crowdsourced geographic information or user-generated geographic information.
It’s not that terminology and precision in definition are not useful – on the contrary. However, I’ve noticed that in most cases the more inclusive and, importantly, vague, broad-church definition won the day. Broad terminologies, especially when they are evocative (such as citizen science), are especially powerful. They convey a good message and are therefore useful. As long as we don’t try to force a canonical definition, and allow people to decide what they include in the term and to express clearly why what they are doing falls within citizen science, it should be fine. Some broad principles are useful and will help all those who are committed to working in this area to sail through the hype cycle safely.
17 May, 2013
The UCL Urban Laboratory is a cross-disciplinary initiative that links various research interests in urban issues, from infrastructure to the way cities are expressed in art, film and photography. The Urban Laboratory has just published its first Urban Pamphleteer, which aims to ‘confront key contemporary urban questions from diverse perspectives. Written in a direct and accessible tone, the intention of these pamphlets is to draw on the history of radical pamphleteering to stimulate debate and instigate change.’
My contribution to the first pamphleteer, which focused on ‘Future & Smart Cities’, deals with the balance between technology companies, engineers and scientists on the one hand, and the values, needs and wishes of the wider society on the other. In particular, I suggest the potential of citizen science for opening up some of the black boxes of smart cities to wider societal control. Here are the opening and closing paragraphs of my text, titled Beyond quantification: we need a meaningful smart city:
‘When approaching the issue of Smart Cities, there is a need to discuss the underlying assumptions at the basis of Smart Cities and challenge the prevailing thought that only efficiency and productivity are the most important values. We need to ensure that human and environmental values are taken into account in the design and implementation of systems that will influence the way cities operate…
…Although these Citizen Science approaches can potentially develop new avenues for discussing alternatives to the efficiency and productivity logic of Smart Cities, we cannot absolve those with most resources and knowledge from responsibility. There is an urgent need to ensure that the development and use of the Smart Cities technologies that are created is open to democratic and societal control, and that they are not being developed only because the technologists and scientists think that they are possible.’
The pamphleteer is not too long – 32 pages – and includes many thought-provoking pieces from researchers in Geography, Environmental Engineering, Architecture, Computer Science and Art. It can be downloaded here.
As I’ve noted in the previous post, I have just attended the CHI (Computer-Human Interaction) conference for the first time. It’s a fairly big conference, with over 3,000 participants and multiple tracks that have evolved over the 30 years that CHI has been running, including the familiar paper presentations, panels, posters and courses, but also the less familiar ‘interactivity areas’, various student competitions, alt.CHI and Special Interest Group meetings. It’s all fairly daunting, even with all my existing experience of academic conferences. During the GeoHCI workshop I discovered the MyCHI application, which helps in identifying interesting papers and sessions (including social recommendations) and in setting up a conference schedule from these papers. It is a useful and effective app that I used throughout the conference (and I wish something similar could be made available at other large conferences, such as the AAG annual meeting).
With MyCHI in hand, while the fog started to lift and I could see a way through the programme, the trepidation about the relevance of CHI to my interests remained, and even somewhat increased, after a quick search for the words ‘geog’, ‘marginal’ and ‘disadvantage’ returned nothing. The conference video preview (below) also made me somewhat uncomfortable. I have a generally cautious approach to the understanding and development of digital technologies, and a strong dislike of the breathless excitement about new innovations that are not necessarily making the world a better place.
Luckily, after a few more attempts I found papers about ‘environment’, ‘development’ and ‘sustainability’. Moreover, I discovered the special interest groups (SIGs) that are dedicated to HCI for Development (HCI4D) and HCI for Sustainability, and the programme started to build up. The sessions of these two SIGs were an excellent occasion to meet other people who are active in similar topics, and even to learn about the fascinating concept of ‘Collapse Informatics‘, which is clearly inspired by Jared Diamond’s book and explores “the study, design, and development of sociotechnical systems in the abundant present for use in a future of scarcity“.
Beyond the discussions, meeting people with shared interests and seeing that there is scope within CHI for technology analysis and development that matches my approach, several papers and sessions were especially memorable. The studies by Elaine Massung and colleagues, about community activism in encouraging shops to close their doors (and therefore waste less heating energy), and by Kate Starbird, on the use of social media in passing information between first responders during the Haiti earthquake, explored how volunteered, ‘crowd’ information can be used in crises and environmental activism.
Other valuable papers in the area of HCI for development and sustainability include the excellent longitudinal study by Susan Wyche and Laura Murphy on the way mobile charging technology is used in Kenya, a study by Adrian Clear and colleagues about the energy use and cooking practices of university students in Lancaster, a longitudinal study of responses to indoor air pollution monitoring by Sunyoung Kim and colleagues, and an interesting study, by Derek Lomas and colleagues, of the 8-bit, $10 computers that are common in many countries across the world.
The ‘CHI at the Barricades – an activist agenda?‘ panel was one of the high points of the conference, with a showcase of the ways in which researchers in HCI can take a more active role in their research and lead social or environmental change, and a consideration of how interactions that enable or promote such changes can be used to achieve positive outcomes. The discussions that followed the short interventions from the panel covered issues from accessibility to ethics to ways of acting and leading change. Interestingly, while some presenters were comfortable with their activist role, the term ‘action-research’ was not mentioned. It was also illuminating to hear Ben Shneiderman emphasising his view that HCI is about representing and empowering the people who use the technologies that are being developed. His call for ‘activist HCI’ provides a way to interpret ‘universal usability‘ as an ethical and moral imperative.
So despite the early concerns, CHI was a conference worth attending, and the specific jargon of CHI now seems more understandable. I wish the conference website had a big sign: ‘New to CHI? Start here…’
The talk, which is titled ‘Science for everyone by everyone – the re-emergence of citizen science‘ covered the area of citizen science and explained what we are trying to achieve within the Extreme Citizen Science research group.
Because the lunch hour lectures are open to all, I preferred not to assume any prior knowledge of citizen science (or public participation in scientific research), and started by highlighting that public participation in scientific research is not new. After a short introduction to the history, and to the fact that many people are involved in scientific activities in their free time – from bird watching to weather or astronomical observations – and that this has never stopped, I noted that there is nevertheless a notable difference in the attention that has been paid to citizen science in recent years.
Therefore, I covered the trends in education and technology that are ushering in a new era of citizen science: access to information through the internet, use of location-aware mobile devices, growth in web-based systems for social knowledge creation, and an increase in education and the ability to deal with abstract ideas (the Flynn effect is an indicator of this last point). The talk explored the current trends and types of citizen science, and demonstrated a model for extreme citizen science, in which any community, regardless of their literacy, can utilise scientific methods and tools to understand and control their environment. I used examples of citizen science activities from other groups at UCL to demonstrate the range of topics, domains and activities that are now included in this area.
The talk was recorded, and is available on YouTube and below.
Since early 2010, I have had the privilege of being a member of the editorial board of the journal Transactions of the Institute of British Geographers. It is a fascinating position, as the journal covers a wide range of topics in geography, and is also recognised as one of the top journals in the field, so the submissions are usually of high quality. Over the past 3 years, I have followed a range of papers that deal with various aspects of Geographic Information Science (GIScience) from submission to publication, either as a reviewer or as associate editor.
In early 2011, I agreed to coordinate a virtual issue on GIScience. The virtual issue is a collection of papers from the archives of the journal, demonstrating the breadth of coverage and the development of GIScience within the discipline of geography over the years. Virtual issues provide free access to a group of papers for a period of a year, so they can be used for teaching and research.
Editing the virtual issue was a very interesting task – I explored the archives of the journal, going back to papers that appeared in the 1950s and 1960s. When looking for papers that are relevant to GIScience, I came across various papers that relate to geography’s ‘Quantitative Revolution‘. The evolution of the use of computers in geography, and later on the applications of GIS, is covered in many papers, so the selection was a challenge. Luckily, another member of the editorial board, Brian Lees, is also well versed in GIScience, as the editor of the International Journal of GIScience. Together, we made the selection of the papers that are included in the issue. Other papers are not part of the virtual issue but are valuable further reading.
To accompany the virtual issue, I have written a short piece focusing on the nature of GIScience in geography. The piece is titled “Geographic Information Science: tribe, badge and sub-discipline” and explores how the latest developments in technology and practice are integrated and resisted by the core group of people who are active GIScience researchers in geography.
You can access the virtual issue on the Wiley-Blackwell online library, where you will find papers from 1965 to today, with links to further papers that are relevant but not free to access. The list of authors is impressive, including many names that are associated with the development of GIScience over the years, from Torsten Hägerstrand and David Rhind to current researchers such as Sarah Elwood, Agnieszka Leszczynski and Matt Zook.
The virtual issue will be officially launched at the GIScience 2012 conference (and was timed to coincide with it).
As I cannot attend the conference, and as my paper mentions the Twitter-based GeoWebChat (see http://mappingmashups.net/geowebchat/), which is coordinated by Alan McConchie, I am planning to use this medium to run a #geowebchat dedicated to the virtual issue on 18th September 2012, at 4pm EDT / 9pm BST, so those who attend the conference can join at the end of the workshops day.
8 August, 2012
On the 4th and 5th August, Portland, OR, was the gathering place for 300 participants who came to the workshop on Public Participation in Scientific Research. The workshop was timed just before the annual meeting of the Ecological Society of America, so it was not surprising that it focused on citizen science projects that are linked to ecology and the monitoring of natural environments. These projects are some of the longest-running citizen science activities, and are now gaining recognition and attention.
The workshop was organised as a set of thematic talks interlaced with long poster sessions. This way, the workshop included over 180 presentations in a day and a half. That set the scene for a detailed discussion at the end of the second day, exploring the way forward for the field of PPSR/Citizen Science/Civic Science etc., with attention to sharing lessons, developing and supporting new activities, considering codes of ethics, etc.
I presented the last talk of the workshop, describing Extreme Citizen Science and arguing for the potential of public participation to go much deeper in terms of engagement. The presentation is provided below, together with an interview that was conducted with me shortly after it.
And the interview,
22 June, 2012
At the end of 2010, Matt Wilson (University of Kentucky) and Mark Graham (Oxford Internet Institute) started coordinating a special issue of Environment and Planning A dedicated to ‘Situating Neogeography’, asking ‘How might we situate neogeography? What are the various assemblages, networks, ecologies, configurations, discourses, cyborgs, alliances that enable/enact these technologies?’
My response to this call is a paper titled ‘Neogeography and the delusion of democratisation’, which has finally been accepted for publication. I am providing below an excerpt from the introduction, to give a flavour of the discussion:
“Since the emergence of the World Wide Web (Web) in the early 1990s, claims about its democratic potential and practice are a persistent feature in the discourse about it. While awareness of the potential of ‘anyone, anytime, anywhere’ to access and use information was extolled for a long while (for an early example see Batty 1997), the emergence of Web 2.0 in the mid-2000s (O’Reilly 2005) increased this notion. In the popular writing of authors such as Friedman (2006), these sentiments are amplified by highlighting the ability of anyone to ‘plug into the flat earth platform’ from anywhere and anytime.
Around the middle of the decade, the concept of neogeography appeared and the ability to communicate geographic information over the Web (in what is termed the GeoWeb) gained prominence (see Haklay et al. 2008). Neogeography increased the notion of participation and access to geographic information, now amplified through the use of the political term democratisation. The following citations provide a flavour of the discourse within academic and popular writing – for example, in Mike Goodchild’s declaration that ‘Just as the PC democratised computing, so systems like Google Earth will democratise GIS’ (quoted in Butler 2006), or Turner’s (2006) definition of neogeography as ‘Essentially, Neogeography is about people using and creating their own maps, on their own terms and by combining elements of an existing toolset. Neogeography is about sharing location information with friends and visitors, helping shape context, and conveying understanding through knowledge of place’. This definition emphasises the wide access to the technology in everyday practice. Similar and stronger statements can be found in Warf and Sui (2010) who clarify that ‘neogeography has helped to foster an unprecedented democratization of geographic knowledge’ (p. 200) and, moreover, ‘Wikification represents a significant step forward in the democratization of geographic information, shifting control over the production and use of GIS data from a handful of experts to large groups of users’ (ibid.). Even within international organisations this seems to be the accepted view as Nigel Snoad, strategy adviser for the communications and information services unit of the United Nations Office for the Coordination of Humanitarian Affairs (OCHA), stated: ‘On the technology side, Google, Microsoft and OpenStreetMap have really democratized mapping’ (cited in Lohr 2011).
However, what is the nature of this democratisation and what are its limits? To what extent do the technologies that mediate the access to, and creation of, geographic information allow and enable such democratisation?
To answer these questions, we need to explore the meaning of democratisation and, more specifically, within the context of interaction between people and technology. According to the Oxford English Dictionary, democratisation is ‘the action of rendering, or process of becoming, democratic’, and democracy is defined as ‘Government by the people; that form of government in which the sovereign power resides in the people as a whole, and is exercised either directly by them (as in the small republics of antiquity) or by officers elected by them. In modern use often more vaguely denoting a social state in which all have equal rights, without hereditary or arbitrary differences of rank or privilege’ [emphasis added]. A more colloquial notion of democratisation, and a much weaker one, is making a process or activity that used to be restricted to an elite or privileged group available to a wider group in society and potentially to all. For example, with mobile telephony now available across the globe, the statement ‘mobile telephony has been democratised’ aims to express the fact that, merely three decades ago, only the rich and powerful members of Western society had access to this technology.
Therefore, it is accepted from the start that the notion of democratisation cited above is more about the potential of neogeography to make the ability to assemble, organise and share geographical information accessible to anyone, anywhere and anytime and for a variety of purposes than about advancing the specific concept of democracy. And yet, it will be wrong to ignore the fuller meaning of the concept. Democratisation has a deeper meaning in respect of making geographic information technologies more accessible to hitherto excluded or marginalised groups in a way that assists them to make a change in their life and environment. Democratisation evokes ideas about participation, equality, the right to influence decision making, support to individual and group rights, access to resources and opportunities, etc. (Doppelt 2006). Using this stronger interpretation of democratisation reveals the limitation of current neogeographic practices and opens up the possibility of considering alternative development of technologies that can, indeed, be considered as democratising.
To explore this juncture of technology and democratisation, this paper relies on Andrew Feenberg’s critical philosophy of technology, especially as explored in his Questioning Technology (1999) and Transforming Technology (2002), which is useful as he addresses issues of democratisation and technology directly. For readers who are not familiar with the main positions within philosophy of technology, a very brief overview – based on Feenberg’s interpretation (1999) – is provided. This will help to explain his specific critique and suggestion for ‘deep democratisation’ of technology.
Equipped with these concepts, attention is turned to the discussion about the democratic potential of Geographic Information Systems (GIS), which appears in early discussions about GIS and society in the 1990s, and especially to the discussions within the literature on (Public) Participatory GIS (PPGIS/PGIS – assumed to be interchangeable here) and critical GIS. As we shall see, discussions about empowerment, marginalisation and governance are central to this literature from its inception and provide the foundations to build a deeper concept of democratisation when considering neogeographic practices.
Based on this historical understanding, the core of the paper explores why it is that neogeographic practices are assumed to be democratising and, more importantly, what the limitations are on their democratic potential. To do that, a hierarchy of ‘hacking’ – that is the artful alteration of technology beyond the goals of its original design or intent – is suggested. Importantly, here ‘hacking’ does not mean the malicious alteration of technology or unauthorised access to computer systems, or the specific culture of technology enthusiasts (‘hacker culture’). The term is used to capture the first and second instrumentation that Feenberg (1996, 2002) describes. As we shall see, by exploring the ability to alter systems, there is some justification in the democratisation claims of neogeography as it has, indeed, improved the outreach of geographic technologies and opened up the potential of their use in improving democratic processes, but in a much more limited scope and extent. The paper concludes with observations on the utilisation of neogeographic technologies within the participatory process that aim to increase democratisation in its deeper sense.”
The paper’s concepts are based on a talk that I originally gave in 2008 as part of the World University Network seminar on Neogeography. A final note is about the length of time that some ideas need from first emerging until publication – even with the current imagination of ‘fast-moving technology’, there is value in thinking through an idea over 4 years.
17 December, 2011
The Eye on Earth Summit took place in Abu Dhabi on 12-15 December 2011, and focused on ‘the crucial importance of environmental and societal information and networking to decision-making’. The summit was an opportunity to evaluate the development of Principle 10 of the Rio Declaration of 1992, as well as Chapter 40 of Agenda 21, both of which focus on environmental information and decision making. The summit’s many speakers gave inspirational talks – an impressive list including Jane Goodall highlighting the importance of information for education; Mathis Wackernagel updating on developments in the Ecological Footprint; Rob Swan on the importance of Antarctica; Sylvia Earle on how we should protect the oceans; Mark Plotkin, Rebecca Moore and Chief Almir Surui on indigenous mapping in the Amazon; and many others. The white papers that accompany the summit can be found in the Working Groups section of the website, and are very helpful updates on the development of environmental information issues over the past 20 years and on emerging issues.
Interestingly, Working Group 2 on Content and User Needs mentions the conceptual framework of Environmental Information Systems (EIS) that I started developing in 1999. After discussing it at the GIS and Environmental Modelling conference in 2000, I published it as the paper ‘Public access to environmental information: past, present and future’ in the journal Computers, Environment and Urban Systems in 2003.
Discussing environmental information for a week made me revisit the framework and review the changes that have occurred over the past decade.
First, I’ll present the conceptual framework, which is based on 6 assertions. The framework was developed on the basis of a lengthy review, in early 1999, of the available information on environmental information systems (the review was published as CASA Working Paper 7). While synthesising all the information that I had found, some underlying assumptions started to emerge, and by articulating them, putting them together and showing how they were linked, I could make more sense of the information. This helped in answering questions such as ‘Why do environmental information systems receive so much attention from policy makers?’ and ‘Why do GIS appear in so many environmental information systems?’. I have used the word ‘assertions’ because the underlying principles seem to be universally accepted and taken for granted. This is especially true for the 3 core assumptions (assertions 1-3 below).
1. Sound knowledge, reliable information and accurate data are vital for good environmental decision making.
2. Within the framework of sustainable development, all stakeholders should take part in the decision-making processes. A direct result of this is a call for improved public participation in environmental decision making.
3. Environmental information is exceptionally well suited to GIS (and vice versa). GIS development is closely related to developments in environmental research, and GIS output is considered to be highly advantageous in understanding and interpreting environmental data.
4. (Notice that this emerges from combining 1 and 2.) To achieve public participation in environmental decision making, the public must gain access to environmental information, data and knowledge.
5. (Based on 1 and 3.) GIS use and output is essential for good environmental decision making.
6. (Based on all the others.) Public Environmental Information Systems should be based on GIS technologies. Such systems are vital for public participation in environmental decision making.
Intriguingly, the Eye on Earth White Paper notes ‘This is a very “Geospatial” centric view; however it does summarise the broader principles of Environmental Information and its use’. Yet my intention was not to develop a ‘Geospatial’ centric view – I was synthesising what I had found, and the keywords that I used in the search did not include GIS. The framework should therefore be seen as an attempt to explain why GIS is so prominent.
With this framework in mind, I noticed a change over the past decade. Throughout the summit, GIS and ‘Geospatial’ systems were central – they were mentioned and demonstrated many times. I was somewhat surprised by how prominent they were in Sha Zukang’s speech (he is the Under-Secretary-General of the United Nations and Secretary-General of the Rio+20 Summit). They are much more central than they were when I carried out the survey, and I left the summit feeling that, for many speakers, presenters and delegates, it is now expected that GIS will be at the centre of any EIS. This wide acceptance means that initiatives such as the ‘Eye on Earth Network’, which is based on geographic information sharing, are now possible. In the past, because of very different data structures and conceptual frameworks, it was more difficult to suggest such integration. The use of GIS as a lingua franca for people who deal with environmental information is surely helpful in creating an integrative picture of the situation at a specific place, across multiple domains of knowledge.
However, I see cause for concern in the equating of GIS with EIS. As the GIScience literature has discussed over the years, GIS is good at providing snapshots, but less effective at modelling processes or interpolating in both time and space, and, most importantly, it has a specific way of creating and processing information. For example, while GIS can be coupled with system dynamics modelling (which has been used extensively in environmental studies – most notably in ‘Limits to Growth’), it is also possible to run such models and simulations in packages that don’t use geographic information at all – for example, in the STELLA package for system dynamics, or in bespoke models created with dedicated data models and algorithms. Importantly, the issue is not the technical one of coupling software packages such as STELLA or agent-based modelling tools with GIS. Some EIS and environmental challenges might benefit from different people thinking in different ways about various problems and solutions, and not always being forced to consider how a GIS plays a part in them.
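To make the point concrete, a system dynamics model of the kind popularised by ‘Limits to Growth’ can run with no geographic data whatsoever. A minimal sketch (in Python, with purely illustrative parameter values and a hypothetical resource stock) of a single stock-and-flow model:

```python
# Illustrative sketch of a system dynamics stock-and-flow model:
# one 'stock' (a resource) depleted by an 'extraction' flow that
# grows over time. No spatial or geographic data is involved.
def run_model(stock=1000.0, extraction=10.0, growth=0.03, dt=1.0, steps=50):
    """Simple Euler integration; parameter values are made up."""
    history = []
    for _ in range(steps):
        history.append(stock)
        stock = max(stock - extraction * dt, 0.0)  # deplete the stock
        extraction *= (1 + growth * dt)            # demand grows each step
    return history

trajectory = run_model()
print(f"Resource remaining at final step: {trajectory[-1]:.1f}")
```

The model’s state is a handful of scalars evolving in time, which is exactly why such tools need not be organised around geographic information, even when the environmental question itself has a spatial dimension.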
27 November, 2011
This post continues the theme of the previous one, and is also based on the chapter that will appear next year in the book:
The post focuses on the participatory aspect of different Citizen Science modes:
Against the technical, social and cultural aspects of citizen science, we offer a framework that classifies the level of participation and engagement of participants in citizen science activity. While there is some similarity between Arnstein’s (1969) ‘ladder of participation’ and this framework, there is also a significant difference. The main thrust in creating a spectrum of participation is to highlight the power relationships that exist within social processes such as urban planning or in participatory GIS use in decision making (Sieber 2006). In citizen science, the relationship exists in the form of the gap between professional scientists and the wider public. This is especially true in environmental decision making where there are major gaps between the public’s and the scientists’ perceptions of each other (Irwin 1995).
In the case of citizen science, the relationships are more complex, as many of the participants respect and appreciate the knowledge of the professional scientists who are leading the project and can explain how a specific piece of work fits within the wider scientific body of work. At the same time, as volunteers build their own knowledge through engagement in the project, using the resources that are available on the Web and through the specific project to improve their own understanding, they are more likely to suggest questions and move up the ladder of participation. In some cases, participants want to volunteer in a passive way, as is the case with volunteered computing, engaging with and contributing to a scientific study without fully understanding the project. An example of this is the many thousands of people who volunteered for the Climateprediction.net project, where their computers were used to run global climate models. Many would like to feel that they are engaged in one of the major scientific issues of the day, but would not necessarily want to fully understand the science behind it.
Therefore, unlike Arnstein’s ladder, there shouldn’t be a strong value judgement on the position that a specific project takes. At the same time, there are likely benefits in terms of participants’ engagement and involvement in the project to try to move to the highest level that is suitable for the specific project. Thus, we should see this framework as a typology that focuses on the level of participation.
At the most basic level, participation is limited to the provision of resources, and the cognitive engagement is minimal. Volunteered computing relies on many participants that are engaged at this level and, following Howe (2006), this can be termed ‘crowdsourcing’. In participatory sensing, the implementation of a similar level of engagement will have participants asked to carry sensors around and bring them back to the experiment organiser. The advantage of this approach, from the perspective of scientific framing, is that, as long as the characteristics of the instrumentation are known (e.g. the accuracy of a GPS receiver), the experiment is controlled to some extent, and some assumptions about the quality of the information can be used. At the same time, running projects at the crowdsourcing level means that, despite the willingness of the participants to engage with a scientific project, their most valuable input – their cognitive ability – is wasted.
The second level is ‘distributed intelligence’ in which the cognitive ability of the participants is the resource that is being used. Galaxy Zoo and many of the ‘classic’ citizen science projects are working at this level. The participants are asked to take some basic training, and then collect data or carry out a simple interpretation activity. Usually, the training activity includes a test that provides the scientists with an indication of the quality of the work that the participant can carry out. With this type of engagement, there is a need to be aware of questions that volunteers will raise while working on the project and how to support their learning beyond the initial training.
The next level, which is especially relevant in ‘community science’ is a level of participation in which the problem definition is set by the participants and, in consultation with scientists and experts, a data collection method is devised. The participants are then engaged in data collection, but require the assistance of the experts in analysing and interpreting the results. This method is common in environmental justice cases, and goes towards Irwin’s (1995) call to have science that matches the needs of citizens. However, participatory science can occur in other types of projects and activities – especially when considering the volunteers who become experts in the data collection and analysis through their engagement. In such cases, the participants can suggest new research questions that can be explored with the data they have collected. The participants are not involved in detailed analysis of the results of their effort – perhaps because of the level of knowledge that is required to infer scientific conclusions from the data.
Finally, collaborative science is a completely integrated activity, as it is in parts of astronomy where professional and non-professional scientists are involved in deciding on which scientific problems to work and the nature of the data collection so it is valid and answers the needs of scientific protocols while matching the motivations and interests of the participants. The participants can choose their level of engagement and can be potentially involved in the analysis and publication or utilisation of results. This form of citizen science can be termed ‘extreme citizen science’ and requires the scientists to act as facilitators, in addition to their role as experts. This mode of science also opens the possibility of citizen science without professional scientists, in which the whole process is carried out by the participants to achieve a specific goal.
This typology of participation can be used across the range of citizen science activities, and a single project need not be classified in only one category. For example, in volunteer computing projects most of the participants will be at the bottom level, while participants who become committed to the project might move to the second level and assist other volunteers when they encounter technical problems. Highly committed participants might move to a higher level and communicate with the scientists who coordinate the project to discuss the results of the analysis and suggest new research directions.