Following the two previous assertions, namely that:

‘you can be supported by a huge crowd for a very short time, or by few for a long time, but you can’t have a huge crowd all of the time (unless data collection is passive)’ (original post here)


‘All information sources are heterogeneous, but some are more honest about it than others’  (original post here)

The third assertion is about patterns of participation. It is one that I’ve mentioned before, and in some ways it is a corollary of the two assertions above.

‘When looking at crowdsourced information, always keep participation inequality in mind’ 

Because crowdsourced information, whether Volunteered Geographic Information or Citizen Science, is created through a socio-technical process, it is all too easy to forget the social side – especially when you are looking at the information without the metadata of who collected it and when. So when working with OpenStreetMap data, or viewing the distribution of bird species in eBird (below), even though the data source is expected to be heterogeneous, each observation is treated as similar to any other observation and assumed to have been produced in a similar way.

Distribution of House Sparrow

Yet, data is not only heterogeneous in terms of consistency and coverage, it is also highly heterogeneous in terms of contribution. One of the most persistent findings from studies of various systems – for example Wikipedia, OpenStreetMap and even volunteer computing – is that there is a very distinctive heterogeneity in contribution. The phenomenon was termed Participation Inequality by Jakob Nielsen in 2006 and it is summarised succinctly in the diagram below (from the Visual Liberation blog) – a very small number of contributors add most of the content, while most of the people who are involved in using the information will not contribute at all. Even when examining only those who actually contribute, in some projects over 70% contribute only once, with a tiny minority contributing most of the information.
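The shape of this inequality can be made concrete with a small, purely illustrative simulation: drawing per-contributor counts from a heavy-tailed (Pareto-like) distribution and measuring how much of the content the most active 1% produce. The distribution and its parameter are assumptions for the sketch, not measurements from any real project:

```python
import random

random.seed(42)

# Hypothetical heavy-tailed contribution pattern: each of 100,000 users
# gets a contribution count drawn from a Pareto-like distribution.
n_users = 100_000
contributions = [int(random.paretovariate(1.2)) for _ in range(n_users)]

contributions.sort(reverse=True)
total = sum(contributions)
top_1pct = sum(contributions[: n_users // 100])
one_timers = sum(1 for c in contributions if c == 1)

print(f"share of content from the top 1% of contributors: {top_1pct / total:.0%}")
print(f"contributors with exactly one contribution: {one_timers / n_users:.0%}")
```

Running a sketch like this shows the top 1% accounting for a large share of all content while a majority of contributors appear only once – the qualitative pattern that participation-inequality studies report, even though the exact percentages depend entirely on the assumed distribution.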

Participation Inequality

Therefore, when looking at sources of information that were created through such a process, it is critical to remember the nature of contribution. This has far-reaching implications for quality, as it depends on the expertise of the heavy contributors, on their spatial and temporal engagement, and even on their social interactions and practices (e.g. abrasive behaviour towards other participants).

Because of these factors, it is critical to remember the impact and implications of participation inequality for the analysis of the information. Some analyses will be affected by it less, and some much more, but in either case it needs to be taken into account.

Following the last post, which focused on an assertion about crowdsourced geographic information and citizen science, I continue with another observation. As was noted in the previous post, these can be treated as ‘laws’ as they seem to emerge as common patterns from multiple projects in different areas of activity – from citizen science to crowdsourced geographic information. The first assertion was about the relationship between the number of volunteers who can participate in an activity and the amount of time and effort that they are expected to contribute.

This time, I look at one aspect of data quality, which is about consistency and coverage. Here the following assertion applies: 

‘All information sources are heterogeneous, but some are more honest about it than others’

What I mean by that is the ongoing argument about authoritative and crowdsourced information sources (Flanagin and Metzger 2008 frequently come up in this context), which was also at the root of the Wikipedia vs. Britannica debate, the mistrust in citizen science observations, and the constant questioning of whether they can do ‘real research’.

There are many aspects to these concerns, so the assertion deals with the aspects of comprehensiveness and consistency, which are used as a reason to dismiss crowdsourced information when comparing it to authoritative data. However, on closer inspection we can see that all these information sources are fundamentally heterogeneous. Despite all the effort to define precise standards for data collection in authoritative data, heterogeneity creeps in because of budget and time limitations, decisions about what is worth collecting and how, and the clash between reality and the specifications. Here are two examples:

Take one of the Ordnance Survey Open Data sources – the maps present themselves as consistent and covering the whole country in an orderly way. However, dig into the details of the mapping, and you discover that the Ordnance Survey uses different standards for mapping urban, rural and remote areas. Yet the derived products that are generalised and manipulated in various ways, such as Meridian or Vector Map District, do not provide a clear indication of which parts originated from which scale – so the heterogeneity of the source disappears in the final product.

The census is also heterogeneous, and it is a good case of specifications vs. reality. Not everyone fills in the forms and, even with the best efforts of enumerators, it is impossible to collect all the data, so statistical analysis and manipulation of the results are required to produce a well-reasoned assessment of the population. This is expected, even though it is not always understood.

Therefore, even the best information sources that we accept as authoritative are heterogeneous; as I’ve stated, they are just not completely honest about it. The ONS doesn’t release the full original set of data before all the manipulations, nor does it completely disclose all the assumptions that went into reaching the final values. The Ordnance Survey doesn’t tag every line with metadata about the date of collection and scale.

Somewhat counter-intuitively, exactly because crowdsourced information is expected to be inconsistent, we approach it as such and ask questions about its fitness for use. So in that way it is more honest about the inherent heterogeneity.

Importantly, the assertion should not be taken as dismissive of authoritative sources, or as ignoring that the heterogeneity within crowdsourced information sources is likely to be much higher than in authoritative ones. Of course, all the investment in making things consistent and the effort to achieve universal coverage is indeed worth it, and it would be foolish and counterproductive to suggest that such sources of information can be replaced, as has been suggested for the census, or that it is not worth investing in the Ordnance Survey to update the authoritative datasets.

Moreover, when commercial interests meet crowdsourced geographic information or citizen science, the ‘honesty’ disappears. For example, even though we know that Google Map Maker is now used in many parts of the world (see the figure), even in cases when access to vector data is provided by Google, you cannot find out who contributed, when and where. It is also presented as an authoritative source of information.

Despite the risk of misinterpretation, the assertion can be useful as a reminder that the differences between authoritative and crowdsourced information are not as big as they may seem.

Looking across the range of crowdsourced geographic information activities, some regular patterns are emerging and it might be useful to start noticing them as a way to think about what is or is not possible to do in this area. Since I don’t like the concept of ‘laws’ – as in Tobler’s first law of geography, which is stated as ‘Everything is related to everything else, but near things are more related than distant things.’ – I would call them assertions. There is also something nice about using the word ‘assertion’ in the context of crowdsourced geographic information, as it echoes Mike Goodchild’s differentiation between asserted and authoritative information. So not laws, just assertions or even observations.

The first one is a rephrasing of a famous quote:

‘you can be supported by a huge crowd for a very short time, or by few for a long time, but you can’t have a huge crowd all of the time (unless data collection is passive)’

So the Christmas Bird Count can have tens of thousands of participants for a short time, while the number of people who operate weather observation stations will be much smaller. The same is true for OpenStreetMap – for crisis mapping, which is a short-term task, you can get many contributors, but for the regular updating of an area under usual conditions, there will be only a few.

The exception to the assertion is passive data collection, where information is collected automatically through the logging of information from a sensor – for example, the recording of GPS tracks to improve navigation information.

OSM Haiyan

The Spatial Data Infrastructure Magazine is a relatively new e-zine dedicated to the development of spatial data infrastructures around the world. Roger Longhorn, the editor of the magazine, conducted an email interview with me, which is now published.

In the interview, we cover the problematic terminology used to describe a wide range of activities; the need to consider the social and technical aspects, as well as the goals of the participants; and, of course, the role of the information that is produced through crowdsourcing, citizen science and VGI within spatial data infrastructures.

The full interview can be found here.


At the 2012 Annual Meeting of the Association of American Geographers, I presented during the session ‘Information Geographies: Online Power, Representation and Voice’, which was organised by Mark Graham (Oxford Internet Institute) and Matthew Zook (University of Kentucky). For an early morning session on a Saturday, the session was well attended – and the papers in the session were very interesting.

My presentation, titled ‘Nobody wants to do council estates – digital divide, spatial justice and outliers’, was the result of thinking about the nature of social information that is available on the Web, which I partially articulated in a response to a post on the GeoIQ blog. When Mark and Matt asked for an abstract, I provided the following:

The understanding of the world through digital representation (digiplace) and VGI is frequently carried out with the assumption that these are valid, comprehensive and useful representations of the world. A common practice throughout the literature on these issues is to mention the digital divide and, while accepting it as a social phenomenon, either ignore it for the rest of the analysis or expect that it will solve itself over time through technological diffusion. The almost deterministic belief in technological diffusion absolves the analyst from fully confronting the political implication of the divide.

However, what VGI and social media analysis reveals is that the digital divide is part of deep and growing social inequalities in Western societies. Worse still, digiplace amplifies and strengthens them.

In digiplace the wealthy, powerful, educated and mostly male elite is amplified through multiple digital representations. Moreover, the frequent decision of algorithm designers to highlight and emphasise those who submit more media, and the level of ‘digital cacophony’ that more active contributors create, means that a very small minority – arguably outliers in every analysis of normal distribution of human activities – are super empowered. Therefore, digiplace power relationships are arguably more polarised than outside cyberspace due to the lack of social checks and balances. This makes the acceptance of the disproportional amount of information that these outliers produce as reality highly questionable.

The following notes might help in making sense of the slides.

Slide 2 takes us back 405 years to Mantua, Italy, where Claudio Monteverdi has just written one of the very first operas – L’Orfeo – as an after-dinner entertainment piece for Duke Vincenzo Gonzaga. Leaving aside the wonderful music – my personal recommendation is for Emmanuelle Haïm’s performance and I used the opening toccata in my presentation – there is a serious point about history. For a large portion of human history, and as recently as 400 years ago, we knew only about the rich and the powerful. We ignored everyone else because they ‘were not important’.

Slide 3 highlights two points about modern statistics. First, that it is a tool to gain an understanding about the nature of society as a whole. Second, when we look at the main body of society, it is within the first 2 standard deviations of a normalised distribution. The Index of Deprivation of the UK (Slide 4) is an example of this type of analysis. Even though it was designed to direct resources to the most needy, it analyses the whole population (and, by the way, is normalised).

Slide 5 points out that on the Web, and in social media in particular, the focus is on ‘long tail’ distributions. My main issue is not with the pattern but with what it means in terms of analysing the information. This is where participation inequality (Slide 6) matters, and the point of Nielsen’s analysis is that outlets such as Wikipedia (and, as we will see, OpenStreetMap) suffer from even worse inequality than other communication media. Nielsen’s recent analysis in his newsletter (Slide 7) demonstrates how this is playing out on Facebook (FB). Notice the comment ‘these people have no life’ or, as Sherry Turkle put it, they have ‘life on the screen’.

Slides 8 and 9 demonstrate that participation inequality is strongly represented in OpenStreetMap, and we can expect it to play out in FourSquare, Google Map Maker, Waze and other GeoWeb social applications. Slide 10 focuses on other characteristics of the people who are involved in the contribution of content: men, highly educated, aged 20-40. Similar characteristics have been shown in other social media and the GeoWeb by Monica Stephens & Antonella Rondinone, and by many other researchers.

In Slides 11-14, observed spatial biases in OpenStreetMap are noted – concentration on highly populated places, a gap between rich and poor places (using the Index of Deprivation from Slide 4), and differences between rural and urban areas. These differences were also observed in other sources of Volunteered Geographic Information (VGI), such as photo-sharing sites (in Vyron Antoniou’s PhD).

Taken together, participation inequality, demographic bias and spatial bias point to a very skewed group that is producing most of the content that we see on the GeoWeb. Look back at Slide 3, and it is a good guess that this minority falls beyond 2 standard deviations from the centre. They are outliers – not representative of anything other than themselves. Of course, given the large number of people online, and the ability of outliers to ‘shout’ louder than anyone else and converse among themselves, it is tempting to look at them as a population worth listening to. But it is, similarly to the opening point, a look at the rich and powerful (or super enthusiastic) and not the mainstream.
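Since the argument leans on how much of a normal population sits within a given number of standard deviations, the relevant fractions can be computed directly with the standard error function – a small illustrative sketch:

```python
from math import erf, sqrt

# For a normal distribution, the probability of falling within k standard
# deviations of the mean is P(|Z| <= k) = erf(k / sqrt(2)).
def within_k_sigma(k: float) -> float:
    return erf(k / sqrt(2))

for k in (1, 2, 3):
    inside = within_k_sigma(k)
    print(f"within {k} sd: {inside:.2%}, beyond: {1 - inside:.2%}")
```

This gives roughly 95% of a normal population within 2 standard deviations and 99.7% within 3 – so a group sitting beyond those bands is, by construction, a tiny fraction of the whole.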

Strangely, when such a small group controls the economy, we see it as a political issue (Slide 15, which was produced by Mother Jones as part of the response to the Occupy movement). We should be just as concerned when it happens with digital content and sets the agenda of what we see and how we understand the world.

Now to the implication of this analysis, and the use of the GeoWeb and social media to understand society. Slide 17 provides the link to the GeoIQ post that argued that these outliers are worth listening to. They might be, but the issue is what you are trying to find out by looking at the data:

The first option is to ask questions about the resulting data, such as ‘can it be used to update national datasets?’ – accepting the biases in the data collection as they are and exploring whether anything useful comes out of the outcomes (Slides 19-21, from the work of Vyron Antoniou and Thomas Koukoletsos). This should be fine as long as the researchers don’t try to state something general about the way society works from the data. Even so, researchers ought to analyse and point to biases and shortcomings (Slides 11-14 do exactly that).

The second option is to start claiming that we can learn something about social activities (Slides 22-23, from the work of Eric Fischer and Daniel Gayo-Avello, as well as Sean Gorman in the GeoIQ post). In this case, it is wrong to read too much into the data, as Gayo-Avello noted – the outliers’ bias renders the analysis unrepresentative of society. Notice, for example, the huge gap between the social media noise during the Egyptian revolution and the outcomes of the elections, or the political differences that Gayo-Avello noted.

The third option is to find data that is representative (Slide 24, from the MIT Senseable City Lab), which looks at the ‘digital breadcrumbs’ that we leave behind on a large scale – phone calls, SMS, travel cards, etc. This data is representative, but provides observations without context. There is no qualitative or contextual information that comes with it and, because of the biases that are noted above, it is wrong to integrate it with the digital cacophony of the outliers. It is most likely to lead to erroneous conclusions.

Therefore, the understanding of the concept of digiplace (Slide 25) – the ordering of digital representation through software algorithms and GeoWeb portals – is, in fact, double filtered. The provision of content by outliers means that the algorithms will tend to amplify their point of view and biases.  Not only that, digital inequality, which is happening on top of social and economic inequality, means that more and more of our views of the world are being shaped by this tiny minority.

When we add to the mix aspects of digital inequalities (some people can only afford a pay-as-you-go feature phone, while a tiny minority consumes a lot of bandwidth over multiple devices), we should stop talking about the ‘digital divide’ as something that will close over time. This is a sort of imaginary trickle-down theory that has not withstood the test of reality. If anything, the divide grows as the ‘haves’ use multiple devices to shape digiplace in their own image.

This is actually one of the core problems that differentiates two approaches to engagement in data collection. There is the laissez-faire approach to engaging society in collecting information about the world (Slides 27-28, showing OpenStreetMap mapping parties), which does not confront the biases; opposite it, there are participatory approaches (Slides 29-30, showing participatory mapping exercises from the work of Mapping for Change), where the effort goes into making the activity inclusive.

This point about the biases, inequality and influence on the way we understand the world is important to repeat – as it is too often ignored by researchers who deal with these data.

The previous post focused on citizen science as participatory science. This post discusses the meaning of this differentiation. It is the final part of the chapter that will appear in the book:

Sui, D.Z., Elwood, S. and M.F. Goodchild (eds.), 2013. Crowdsourcing Geographic Knowledge. Berlin: Springer. Here is a link to the chapter.

The typology of participation can be used across the range of citizen science activities, and one project should not be classified only in one category. For example, in volunteer computing projects most of the participants will be at the bottom level, while participants that become committed to the project might move to the second level and assist other volunteers when they encounter technical problems. Highly committed participants might move to a higher level and communicate with the scientist who coordinates the project to discuss the results of the analysis and suggest new research directions.

This typology exposes how citizen science integrates and challenges the way in which science discovers and produces knowledge. Questions about the way in which knowledge is produced and truths are discovered are part of the epistemology of science. As noted above, throughout the 20th century, as science became more specialised, it also became professionalised. While certain people were employed as scientists in government, industry and research institutes, the rest of the population – even if they graduated from a top university with top marks in a scientific discipline – were not regarded as scientists or as participants in the scientific endeavour unless they were employed professionally to do so. In rare cases, and following the tradition of ‘gentlemen/women scientists’, wealthy individuals could participate in this work by becoming an ‘honorary fellow’ or affiliated to a research institute that, inherently, brought them into the fold. This separation of ‘scientists’ and ‘public’ was justified by the need to access specialist equipment, knowledge and other privileges such as a well-stocked library. It might be the case that the need to maintain this separation is a third reason that practising scientists shy away from explicitly mentioning the contribution of citizen scientists to their work in addition to those identified by Silvertown (2009).

However, similarly to other knowledge professionals who operate in the public sphere, such as medical experts or journalists, scientists need to adjust to a new environment that is fostered by the Web. Recent changes in communication technologies, combined with the increased availability of open access information and the factors that were noted above, mean that processes of knowledge production and dissemination are opening up in many areas of social and cultural activities (Shirky 2008). Therefore, some of the elitist aspects of scientific practice are being challenged by citizen science, such as the notion that only dedicated, full-time researchers can produce scientific knowledge. For example, surely it should be professional scientists who can solve complex scientific problems, such as the long-standing problem of predicting the protein structures of viruses. Yet, this exact problem was recently solved through a collaboration of scientists working with amateurs who were playing the computer game Foldit (Khatib et al. 2011). Another aspect of the elitist view of science can be witnessed in interactions between scientists and the public, where the assumption is of a unidirectional ‘transfer of knowledge’ from the expert to lay people. Of course, as in the other areas mentioned above, it is a grave mistake to argue that experts are unnecessary and can be replaced by amateurs, as Keen (2007) eloquently argued. Nor is it suggested that, because of citizen science, the need for professionalised science will diminish, as, in citizen science projects, the participants accept the difference in knowledge and expertise of the scientists who are involved in these projects. At the same time, the scientists need to develop respect towards those who help them beyond the realisation that they provide free labour, which was noted above.

Given this tension, the participation hierarchy can be seen to be moving from a ‘business as usual’ scientific epistemology at the bottom, to a more egalitarian approach to scientific knowledge production at the top. The bottom level, where the participants are contributing resources without cognitive engagement, keeps the hierarchical division of scientists and the public. The public is volunteering its time or resources to help scientists while the scientists explain the work that is to be done but without expectation that any participant will contribute intellectually to the project. Arguably, even at this level, the scientists will be challenged by questions and suggestions from the participants and, if they do not respond to them in a sensitive manner, they will risk alienating participants. Intermediaries such as the IBM World Community Grid, where a dedicated team is in touch with scientists who want to run projects and a community of volunteered computing providers, are cases of ‘outsourcing’ the community management and thus allowing, to an extent, the maintenance of the separation of scientists and the public.

As we move up the ladder to a higher level of participation, the need for direct engagement between the scientist and the public increases. At the highest level, the participants are assumed to be on equal footing with the scientists in terms of scientific knowledge production. This requires a different epistemological understanding of the process, in which it is accepted that the production of scientific insights is open to any participant while maintaining scientific standards and practices such as systematic observations or rigorous statistical analysis to verify that the results are significant. The belief that, given suitable tools, many lay people are capable of such endeavours is challenging to some scientists who view their skills as unique. As the case of the computer game that helped in the discovery of new protein formations (Khatib et al. 2011) demonstrated, such collaboration can be fruitful even in cutting-edge areas of science. However, it can be expected that the more mundane and applied areas of science will lend themselves more easily to the fuller sense of collaborative science in which participants and scientists identify problems and develop solutions together. This is because the level of knowledge required in cutting-edge areas of science is so demanding.

Another aspect in which the ‘extreme’ level challenges scientific culture is that it requires scientists to become citizen scientists in the sense that Irwin (1995), Wilsdon, Wynne and Stilgoe (2005) and Stilgoe (2009) advocated (Notice Stilgoe’s title: Citizen Scientists). In this interpretation of the phrase, the emphasis is not on the citizen as a scientist, but on the scientist as a citizen. It requires the scientists to engage with the social and ethical aspects of their work at a very deep level. Stilgoe (2009, p.7) suggested that, in some cases, it will not be possible to draw the line between the professional scientific activities, the responsibilities towards society and a fuller consideration of how a scientific project integrates with wider ethical and societal concerns. However, as all these authors noted, this way of conceptualising and practising science is not widely accepted in the current culture of science.

Therefore, we can conclude that this form of participatory and collaborative science will be challenging in many areas of science. This will not be because of technical or intellectual difficulties, but mostly because of the cultural aspects. This might end up being the most important outcome of citizen science as a whole, as it might eventually catalyse the education of scientists to engage more fully with society.

This post continues the theme of the previous one, and is also based on the chapter that will appear next year in the book:

Sui, D.Z., Elwood, S. and M.F. Goodchild (eds.), 2013. Crowdsourcing Geographic Knowledge. Berlin: Springer. Here is a link to the chapter.

The post focuses on the participatory aspect of different Citizen Science modes:

Against the technical, social and cultural aspects of citizen science, we offer a framework that classifies the level of participation and engagement of participants in citizen science activity. While there is some similarity between Arnstein’s (1969) ‘ladder of participation’ and this framework, there is also a significant difference. The main thrust in creating a spectrum of participation is to highlight the power relationships that exist within social processes such as urban planning or in participatory GIS use in decision making (Sieber 2006). In citizen science, the relationship exists in the form of the gap between professional scientists and the wider public. This is especially true in environmental decision making where there are major gaps between the public’s and the scientists’ perceptions of each other (Irwin 1995).

In the case of citizen science, the relationships are more complex, as many of the participants respect and appreciate the knowledge of the professional scientists who are leading the project and can explain how a specific piece of work fits within the wider scientific body of work. At the same time, as volunteers build their own knowledge through engagement in the project, using the resources that are available on the Web and through the specific project to improve their own understanding, they are more likely to suggest questions and move up the ladder of participation. In some cases, the participants may want to volunteer in a passive way, as is the case with volunteered computing, without fully understanding the project, as a way to engage with and contribute to a scientific study. An example of this is the many thousands of people who volunteered for the project, where their computers were used to run global climate models. Many would like to feel that they are engaged in one of the major scientific issues of the day, but would not necessarily want to fully understand the science behind it.

Levels of Participation in Citizen Science

Therefore, unlike Arnstein’s ladder, there shouldn’t be a strong value judgement on the position that a specific project takes. At the same time, there are likely to be benefits, in terms of participants’ engagement and involvement in the project, in trying to move to the highest level that is suitable for the specific project. Thus, we should see this framework as a typology that focuses on the level of participation.

At the most basic level, participation is limited to the provision of resources, and the cognitive engagement is minimal. Volunteered computing relies on many participants who are engaged at this level and, following Howe (2006), this can be termed ‘crowdsourcing’. In participatory sensing, the implementation of a similar level of engagement would see participants asked to carry sensors around and bring them back to the experiment organiser. The advantage of this approach, from the perspective of scientific framing, is that, as long as the characteristics of the instrumentation are known (e.g. the accuracy of a GPS receiver), the experiment is controlled to some extent, and some assumptions about the quality of the information can be used. At the same time, running projects at the crowdsourcing level means that, despite the willingness of the participants to engage with a scientific project, their most valuable input – their cognitive ability – is wasted.

The second level is ‘distributed intelligence’ in which the cognitive ability of the participants is the resource that is being used. Galaxy Zoo and many of the ‘classic’ citizen science projects are working at this level. The participants are asked to take some basic training, and then collect data or carry out a simple interpretation activity. Usually, the training activity includes a test that provides the scientists with an indication of the quality of the work that the participant can carry out. With this type of engagement, there is a need to be aware of questions that volunteers will raise while working on the project and how to support their learning beyond the initial training.

The next level, which is especially relevant in ‘community science’, is a level of participation in which the problem definition is set by the participants and, in consultation with scientists and experts, a data collection method is devised. The participants are then engaged in data collection, but require the assistance of the experts in analysing and interpreting the results. This method is common in environmental justice cases, and goes towards Irwin’s (1995) call for science that matches the needs of citizens. However, participatory science can occur in other types of projects and activities – especially when considering the volunteers who become experts in data collection and analysis through their engagement. In such cases, the participants can suggest new research questions that can be explored with the data they have collected. However, the participants are not involved in the detailed analysis of the results of their effort – perhaps because of the level of knowledge that is required to infer scientific conclusions from the data.

Finally, collaborative science is a completely integrated activity, as it is in parts of astronomy where professional and non-professional scientists are involved in deciding on which scientific problems to work and the nature of the data collection so it is valid and answers the needs of scientific protocols while matching the motivations and interests of the participants. The participants can choose their level of engagement and can be potentially involved in the analysis and publication or utilisation of results. This form of citizen science can be termed ‘extreme citizen science’ and requires the scientists to act as facilitators, in addition to their role as experts. This mode of science also opens the possibility of citizen science without professional scientists, in which the whole process is carried out by the participants to achieve a specific goal.

This typology of participation can be used across the range of citizen science activities, and a given project need not be classified in only one category. For example, in volunteer computing projects most of the participants will be at the bottom level, while participants who become committed to the project might move to the second level and assist other volunteers when they encounter technical problems. Highly committed participants might move to a higher level still and communicate with the scientist who coordinates the project to discuss the results of the analysis and suggest new research directions.

As part of the Volunteered Geographic Information (VGI) workshop that was held in Seattle in April 2011, Daniel Sui, Sarah Elwood and Mike Goodchild announced that they would be editing a volume dedicated to the topic, published as ‘Crowdsourcing Geographic Knowledge’ (Here is a link to the Chapter in Crowdsourcing Geographic Knowledge).

My contribution to this volume focuses on citizen science, and shows the links between it and VGI. The chapter is currently under review, but the following excerpt discusses different types of citizen science activities, and I would welcome comments:

“While the aim here is not to provide a precise definition of citizen science, a definition and clarification of its core characteristics is unavoidable. Therefore, citizen science is defined here as scientific activities in which non-professional scientists volunteer to participate in the data collection, analysis and dissemination of a scientific project (Cohn 2008; Silvertown 2009). People who participate in a scientific study without playing some part in the study itself – for example, volunteering in a medical trial or participating in a social science survey – are not included in this definition.

While it is easy to identify a citizen science project when the aim of the project is the collection of scientific information, as in the recording of the distribution of plant species, there are cases where the definition is less clear-cut. For example, the process of data collection in OpenStreetMap or Google Map Maker is mostly focused on recording verifiable facts about the world that can be observed on the ground. The tools that OpenStreetMap mappers use – such as remotely sensed images, GPS receivers and map editing software – can all be considered scientific tools. With their attempt to locate observed objects and record them accurately on a map, they follow in the footsteps of surveyors such as Robert Hooke, who also carried out an extensive survey of London using scientific methods – although, unlike OpenStreetMap volunteers, he was paid for his effort. Finally, cases where facts are collected in a participatory mapping activity, such as the one that Ghose (2001) describes, should probably be considered citizen science only if the participants decide to frame them as such. For the purpose of the discussion here, such a broad definition is more useful than a limiting one that tries to reject certain activities.

Notice also that, by definition, citizen science can only exist in a world in which science is socially constructed as the preserve of professional scientists in academic institutions and industry, because, otherwise, any person who is involved in a scientific project would simply be considered a contributor and potentially a scientist. As Silvertown (2009) noted, until the late 19th century, science was mainly developed by people who had additional sources of employment that allowed them to spend time on data collection and analysis. Famously, Charles Darwin joined the Beagle voyage not as a professional naturalist but as a companion to Captain FitzRoy. Thus, in that era, almost all science was citizen science, albeit carried out mostly by affluent gentleman scientists and gentlewomen. While the first professional scientist was likely Robert Hooke, who was paid to work on scientific studies in the 17th century, the major growth in the professionalisation of scientists came mostly in the latter part of the 19th century and throughout the 20th century.

Even with the rise of the professional scientist, the role of volunteers has not disappeared, especially in areas such as archaeology, where it is common for enthusiasts to join excavations, or in natural science and ecology, where they collect and send samples and observations to national repositories. These activities include the Christmas Bird Count, which has been ongoing since 1900, and the British Trust for Ornithology Survey, which has collected over 31 million records since its establishment in 1932 (Silvertown 2009). Astronomy is another area where amateurs and volunteers have been on a par with professionals when observation of the night sky and the identification of galaxies, comets and asteroids are considered (BBC 2006). Finally, meteorological observations have also relied on volunteers since the early days of systematic measurements of temperature, precipitation or extreme weather events (WMO 2001).

This type of citizen science provides the first type of ‘classic’ citizen science – the ‘persistent’ parts of science where the resources, geographical spread and the nature of the problem mean that volunteer involvement sometimes predates the professionalisation and mechanisation of science. These research areas usually require a large but sparse network of observers who carry out their work as part of a hobby or leisure activity. This type of citizen science has flourished in specific enclaves of scientific practice, and the progressive development of modern communication tools has made the process of collating the results from the participants easier and cheaper, while inherently keeping many of the characteristics of the data collection processes close to their origins.

A second set of citizen science activities falls within environmental management and, even more specifically, within the context of environmental justice campaigns. Modern environmental management includes strong technocratic and science-oriented management practices (Bryant & Wilson 1998; Scott & Barnett 2009), and environmental decision making is heavily based on scientific environmental information. As a result, when an environmental conflict emerges – such as a community protest over a noisy local factory or a planned expansion of an airport – the valid evidence needs to be based on scientific data collection. This aspect of environmental justice struggles encourages communities to carry out ‘community science’, in which scientific measurements and analysis are carried out by members of local communities so they can develop an evidence base and set out action plans to deal with problems in their area. A successful example of such an approach is the ‘Global Community Monitor’ method, which allows communities to deal with air pollution issues (Scott & Barnett 2009). Air is sampled using a simple method based on plastic buckets, followed by analysis in an air pollution laboratory, and, finally, the community is provided with instructions on how to understand the results. This activity is termed the ‘Bucket Brigade’ and has been used across the world in environmental justice campaigns. In London, community science was used to collect noise readings in two communities that are affected by airport and industrial activities. The outputs were effective in bringing environmental problems to the policy arena (Haklay, Francis & Whitaker 2008). As in ‘classic’ citizen science, the growth in electronic communication has enabled communities to identify potential methods – e.g. through the ‘Global Community Monitor’ website – as well as to find international standards, regulations and scientific papers that can be used together with the local evidence.
However, the emergence of the Internet and the Web as a global infrastructure has enabled a new incarnation of citizen science: the realisation by scientists that the public can provide free labour, skills, computing power and even funding, together with the growing demands from research funders for public engagement, has motivated scientists to develop and launch new and innovative projects (Silvertown 2009; Cohn 2008). These projects utilise the abilities of personal computers, GPS receivers and mobile phones to double as scientific instruments.

This third type of citizen science has been termed ‘citizen cyberscience’ by Francois Grey (2009). Within it, it is possible to identify three sub-categories: volunteered computing, volunteered thinking and participatory sensing.

Volunteered computing was first developed in 1999, with the foundation of SETI@home (Anderson et al. 2002), which was designed to distribute the analysis of data collected from a radio telescope in the search for extra-terrestrial intelligence. The project utilises the unused processing capacity that exists in personal computers, and uses the Internet to send and receive ‘work packages’ that are analysed automatically and sent back to the main server. Over 3.83 million downloads were registered on the project’s website by July 2002. The system on which SETI@home is based, the Berkeley Open Infrastructure for Network Computing (BOINC), is now used for over 100 projects, covering physics, with the processing of data from the Large Hadron Collider through LHC@home; climate science, with the running of climate models; and biology, in which the shape of proteins is calculated in Rosetta@home.

While volunteered computing requires very little from the participants, apart from installing software on their computers, in volunteered thinking the volunteers are engaged at a more active and cognitive level (Grey 2009). In these projects, the participants are asked to use a website on which information or an image is presented to them. When they register with the system, they are trained in the task of classifying the information. After the training, they are exposed to information that has not yet been analysed, and are asked to carry out classification work. Stardust@home (Westphal et al. 2006), in which volunteers were asked to use a virtual microscope to try to identify traces of interstellar dust, was one of the first projects in this area, together with NASA ClickWorkers, which focused on the classification of craters on Mars. Galaxy Zoo (Lintott et al. 2008), a project in which volunteers classify galaxies, is now one of the most developed, with over 100,000 participants and with a range of applications that are included in the wider Zooniverse set of projects.

Participatory sensing is the final and most recent type of citizen science activity. Here, the capabilities of mobile phones are used to sense the environment. Some mobile phones have up to nine sensors integrated into them, including different transceivers (mobile network, WiFi, Bluetooth), FM and GPS receivers, camera, accelerometer, digital compass and microphone. In addition, they can link to external sensors. These capabilities are increasingly used in citizen science projects, such as Mappiness, in which participants are asked to provide behavioural information (their feeling of happiness) while the phone records their location, to allow different locations to be linked to wellbeing (MacKerron 2011). Other activities include the sensing of air quality (Cuff 2007) or noise levels (Maisonneuve et al. 2010) by using the mobile phone’s location and the readings from the microphone.”
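The excerpt describes the volunteered computing pattern as a cycle of downloading ‘work packages’, analysing them automatically on the volunteer’s machine, and sending the results back to the main server. As a minimal illustrative sketch (not BOINC’s actual protocol – all names such as `WorkPackage`, `fetch_work` and `report_result` are hypothetical), the client-side loop looks roughly like this:

```python
# Illustrative sketch of a BOINC-style volunteered computing loop.
# The real client/server protocol is far more elaborate; every name here
# (WorkPackage, fetch_work, report_result) is a hypothetical stand-in.

from dataclasses import dataclass


@dataclass
class WorkPackage:
    package_id: int
    samples: list  # raw data chunk sent out by the project server


# Stand-ins for the server-side work queue and results store.
SERVER_QUEUE = [WorkPackage(1, [3, 1, 4]), WorkPackage(2, [1, 5, 9])]
SERVER_RESULTS = {}


def fetch_work():
    """The volunteer client asks the server for the next unprocessed chunk."""
    return SERVER_QUEUE.pop(0) if SERVER_QUEUE else None


def analyse(pkg):
    """Placeholder analysis run on the volunteer's spare CPU cycles."""
    return max(pkg.samples)  # e.g. the strongest signal in the chunk


def report_result(pkg, result):
    """The client sends the finished result back to the main server."""
    SERVER_RESULTS[pkg.package_id] = result


# The client loop: download, analyse automatically, upload, repeat
# until the server has no more work to hand out.
while (pkg := fetch_work()) is not None:
    report_result(pkg, analyse(pkg))

print(SERVER_RESULTS)  # {1: 4, 2: 9}
```

The point of the pattern is that the participant’s only contribution is idle hardware: the loop runs unattended, which is exactly why, at this ‘crowdsourcing’ level of engagement, the volunteer’s cognitive ability goes unused.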

At the State of the Map (EU) 2011 conference that was held in Vienna from 15-17 July, I gave a keynote talk on the relationships between the OpenStreetMap (OSM) community and the GIScience research community. These relationships are, of course, especially important for those researchers who work on Volunteered Geographic Information (VGI), due to the major role of OSM in this area of research.

The talk included an overview of what researchers have discovered about OpenStreetMap over the five years since we started to pay attention to OSM. One striking result is that the issue of positional accuracy does not require much more work by researchers. Another important outcome of the research is the understanding that quality is influenced by the number of mappers, and that the data can be used with confidence for mainstream geographical applications when certain conditions are met. These results are both useful and of interest to a wide range of groups, but there remain key areas that require further research – for example, specific facets of quality, community characteristics and how OSM data is used.

Reflecting on the body of research, we can start to form a ‘code of engagement’ for both academics and mappers who are engaged in researching or using OpenStreetMap. One such guideline would be that it is both prudent and productive for any researcher to do some mapping herself, and to understand the process of creating OSM data, if the research is to be relevant and accurate. Other aspects of the proposed ‘code’ are covered in the presentation.

The talk is also available as a video from the TU Wien Matterhorn server.



GIS Research UK (GISRUK) is a long running conference series, and the 2011 instalment was hosted by the University of Portsmouth at the end of April.

During the conference, I was asked to give a keynote talk about Participatory GIS. I decided to cover the background of Participatory GIS in the mid-1990s, and the transition to more advanced web mapping applications from the mid-2000s. Of special importance are the systems that allow user-generated content, and the geographical systems that are now leading to the generation of Volunteered Geographic Information (VGI).

The next part of the talk focused on Citizen Science, culminating with the ideas that are the basis for Extreme Citizen Science.

Interestingly, as in previous presentations, one of the common questions about Citizen Science came up. Professional scientists seem to have a problem with the suggestion that citizens are as capable as scientists in data collection and analysis. While there is acceptance of the concept, the idea that participants can suggest problems, collect data rigorously and analyse it seems to be too radical – or worrying.

What is important to understand is that the ideas of Extreme Citizen Science are not about replacing the role of scientists, but are a call to rethink the roles of the participants and the scientists in cases where Citizen Science is used. It is a way to consider science as a collaborative process of learning and exploration of issues. My own experience is that participants have a lot of respect for the knowledge of the scientists, as long as the scientists have a lot of respect for the knowledge and ability of the participants. The participants would like to learn more about the topic that they are exploring and are keen to know: ‘what does the data that I collected mean?’ At the same time, some of the participants can become very serious about data collection, reading about the specific issues and using the resources that are available online today to learn more. At some point, they become knowledgeable participants, and it is worth seeing them as such.

The slides below were used for this talk, and include links to the relevant literature.

