How many citizen scientists are there in the world?

Since developing the proposal for the Doing It Together Science (DITOs) project, I have been using the “DITOs escalator” model to express the different levels of engagement in science, while also demonstrating that the higher levels have fewer participants. This means that there is potential for people to move between levels of engagement – sometimes towards deeper engagement and sometimes towards lighter engagement, according to life stages, family commitments, etc. This is what the escalator, after several revisions, looks like:

[Figure: the DITOs escalator]

I have an ongoing interest in participation inequality (the observation that very few participants do most of the work) and the way it plays out in and influences citizen science projects. When you start attaching numbers to the different levels of public engagement in science, participation inequality appears in this area, too. Since writing the proposal in 2015, I have been looking for evidence that can support an estimation of the number of participants at each level. During the process of working on a paper that uses the escalator, I did the research to identify sources of information to support these estimations. While the paper is starting its peer review journey, I am putting out the part that relates to these numbers so it can get open peer review here. I have decided to use 2017 as a recent year for which we can carry out the analysis. As for geographical scale, I’m using the United Kingdom, a country with a very active citizen science community, as my starting point.

At the bottom of the escalator, Level 1 considers the whole population, about 65 million people. Because of the impact of science across society, the vast majority, if not all, will have some exposure to science – even if this is only in the form of medical encounters.

However, the bare minimum of engagement is to passively consume information about science through newspapers, websites, and TV and radio programmes (Level 2). We can gauge the number of people at this level from the BBC programmes Blue Planet II and Planet Earth II, both focusing on natural history, with viewing figures of 14 million and about 10 million, respectively. We can, therefore, estimate these “passive consumers” at about 25% of the population.

At the next level is active consumption of science – such as visits to London’s Science Museum (UK visitors in 2017 – about 1.3m) or the Natural History Museum (UK visitors in 2017 – about 2.1m) – so an estimation of participation at 10% of the population seems justified.

Next, we can look at active engagement in citizen science but to a limited degree. Here, the Royal Society for the Protection of Birds (RSPB) annual Big Garden Birdwatch requires the participants to dedicate a single hour in the year. The project attracted about 500,000 participants in 2017, and we can, therefore, estimate participation at this level at about 1% of the population. This should also include about 170,000 people who carried out a single task on Zooniverse and other online projects.

At the fifth level, there are projects that require remote engagement, such as volunteer thinking on the Zooniverse platform, or volunteer computing on the IBM World Community Grid (WCG), in which participants download software to their computer to allow processing that assists scientific research. The number of participants in WCG from the UK in 2017 was about 18,000. In Zooniverse, about 74,000 people carried out more than a single task in 2017, so we can estimate participation at this level at about 0.1% of the population (thanks to Grant Miller, Zooniverse, and Caitlin Larkin, IBM, for these details).

The sixth level requires regular data collection: the British Trust for Ornithology Garden Birdwatch had about 6,500 active participants in 2017 (BTO 2018), while about 5,000 contributed to the biodiversity recording system iRecord (thanks to Tom August, CEH), so it is reasonable to estimate participation at this level at about 0.01% of the population.

The most engaged level (Level 7) includes those who are engaged in DIY science, such as exploring DIY Bio or developing their own sensors. We can estimate that it represents at most 0.001% of the UK population (thanks to Philippe Boeing & Ilia Levantis).

We can see that as the level of engagement increases, the demands on participants increase and the number of participants drops. Not that this is earth-shattering, but what is interesting is that the difference between levels is an order of magnitude. We also know that the UK enjoys all the possible conditions needed to foster citizen science: a long history of citizen science activities, established NGOs and academic institutions that support citizen science, good technological infrastructure (broadband, mobile phone use), a well-educated population (39.1% with tertiary education), etc. So we’re talking about a best-case scenario.

It is also important, already at this point, to note that UNESCO’s estimate of the percentage of the UK population who are active scientists (working in research jobs) is 0.4%, which is larger than the 0.111% in levels 5, 6 and 7 combined.

Let’s try to extrapolate from the UK to the world.

First, how many people can we estimate to have the potential to be citizen scientists? We want them to be connected and educated, with a middle-class lifestyle that gives them leisure time for hobbies and volunteering.

Connectivity gives us a large number – according to the ITU, 3.5 billion people are using the Internet. The estimated size of the global middle class is a bit smaller, at 3.2 billion people. However, we know that participants in most citizen science projects that rely on passive inclusiveness – where everyone is welcome, but without an active effort to reach out to under-represented groups – tend to be people with higher (tertiary) education. There is actually data about this – see the information on Wikipedia about tertiary educational attainment. According to UNESCO’s statistics, there were over 672 million people with a form of tertiary education in 2017. Let’s allow that not everyone in citizen science has tertiary education (which is true), so our potential starting number is 1 billion.

I’ll assume the same proportions as in the UK, ignoring that it represents the best case. So about 250 million of these are passive consumers of science (L2), and 100 million are active consumers (e.g. going to science museums) (L3). We can then have 10 million people who participate in once-a-year events (L4); 1 million who are active in online citizen science (more than a one-off visit or trial) (L5); about 100,000 who are committed participants (mostly nature observers); and about 10,000 DIY bio, makers, and DIY science people (L6 and L7).
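
To make the arithmetic explicit, here is a minimal sketch (purely illustrative, applying the rounded UK shares from the estimates above to the 1 billion baseline):

```python
# Illustrative only: applying the rounded UK shares to a global pool of
# 1 billion potential participants to get the order-of-magnitude figures above.
uk_share_by_level = {
    "L2 passive consumers of science": 0.25,
    "L3 active consumers (e.g. museum visits)": 0.10,
    "L4 once-a-year events": 0.01,
    "L5 online citizen science": 0.001,
    "L6 committed recorders": 0.0001,
    "L7 DIY science": 0.00001,
}
global_pool = 1_000_000_000  # connected, educated potential participants (rounded)

for level, share in uk_share_by_level.items():
    print(f"{level}: ~{share * global_pool:,.0f} people")
```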

Do these numbers make sense? Looking at the visits to science/natural history museums on Wikipedia, Level 3 seems about right. Level 4 looks very optimistic – in addition to the Big Garden Birdwatch, there were about 17,000 people participating in the City Nature Challenge, 73,000 participants in the Christmas Bird Count, and about 888,000 who did a single task on Zooniverse – so it looks like a more realistic number is 3 or 4 million. Level 5 is an underestimate – IBM World Community Grid has 753,000 members, and there are other volunteer computing projects which will make it about 1 million; then there were about 163,000 global Zooniverse contributors (thanks to information from Grant Miller), 130,000 Wikipedians, 50,000 active contributors in OpenStreetMap, and other online projects such as EyeWire. So let’s say that it’s about 1.5 million. At Level 6, the number is again about right – e.g. eBird reports 20,000 birders on their peak day; for the sake of the argument, let’s say that it’s double that – 200,000. Level 7 also seems right, based on estimates of biohacker numbers in Europe.

Now let’s look at the number of scientists globally: in 2013 there were 7.3 million researchers worldwide. With the estimate of “serious” citizen scientists (levels 5, 6 and 7) at about 1.7 million, we can see the issue for crowdsourcing here: the potential community of crowdsourcers (professional researchers) is, at the moment, much bigger than the pool of volunteers.

Something that is important to highlight here is the amazing productivity of citizen scientists in terms of their ability to analyse and collect information, or to invent tools – we know from participation inequality that this tiny group of participants does a huge amount of work: the 50,000 OSM volunteers are mapping the world, the 73,000 Christmas Bird Count participants provided 56,000,000 observations, and consider the attention that the Open Insulin Project has attracted. So numbers are not the only thing that we need to think about.

Moreover, this is not a reason to give up on increasing the number of citizen scientists. Look at the numbers for Google Local Guides – out of 1 billion users, a passive crowdsourcing approach reached 50 million one-time contributors, and 465,000 at the equivalent of levels 5 to 7. Therefore, citizen science has the potential to reach much larger numbers. At the minimum, there is the large cohort of people with tertiary education, with at least 98 million people holding a Master’s or PhD worldwide.

Therefore, to enable a wider and deeper public engagement with science, apart from the obvious point of providing funding, institutional support, and frameworks to scale up citizen science, we can think of an “escalator”-like process, which makes people aware of the various levels and assists them in moving up or down the levels of engagement. For example, due to a change in care responsibilities or life stage, people can become less active for a period of time and then choose to become more active later. With appropriate funding, support, and attention, growing global citizen science should be possible.


Papers from PPGIS 2017 meeting: state of the art and examples from Poland and the Czech Republic

About a year ago, the Adam Mickiewicz University in Poznań, Poland, hosted the PPGIS 2017 workshop (here are my notes from the first day and the second day). Today, four papers from the workshop were published in the journal Quaestiones Geographicae, which was established in 1974 as an annual journal of the Faculty of Geographical and Geological Sciences at the university.

The four papers (with their abstracts) are:

Muki Haklay, Piotr Jankowski, and Zbigniew Zwoliński: SELECTED MODERN METHODS AND TOOLS FOR PUBLIC PARTICIPATION IN URBAN PLANNING – A REVIEW “The paper presents a review of contributions to the scientific discussion on modern methods and tools for public participation in urban planning. This discussion took place in Obrzycko near Poznań, Poland. The meeting was designed to allow for an ample discussion on the themes of public participatory geographic information systems, participatory geographic information systems, volunteered geographic information, citizen science, Geoweb, geographical information and communication technology, Geo-Citizen participation, geo-questionnaire, geo-discussion, GeoParticipation, Geodesign, Big Data and urban planning. Participants in the discussion were scholars from Austria, Brazil, the Czech Republic, Finland, Ireland, Italy, the Netherlands, Poland, the United Kingdom, and the USA. A review of public participation in urban planning shows new developments in concepts and methods rooted in geography, landscape architecture, psychology, and sociology, accompanied by progress in geoinformation and communication technologies.
The discussions emphasized that it is extremely important to state the conditions of symmetric cooperation between city authorities, urban planners and public participation representatives, social organizations, as well as residents”

Jiří Pánek PARTICIPATORY MAPPING IN COMMUNITY PARTICIPATION – CASE STUDY OF JESENÍK, CZECH REPUBLIC “Community participation has entered the 21st century and the era of e-participation, e-government and e-planning. With the opportunity to use Public Participation Support Systems, Computer-Aided Web Interviews and crowdsourcing mapping platforms, citizens are equipped with the tools to have their voices heard. This paper presents a case study of the deployment of such an online mapping platform in Jeseník, Czech Republic. In total, 533 respondents took part in the online mapping survey, which included six spatial questions. Respondents marked 4,714 points and added 1,538 comments to these points. The main aim of the research was to find whether there were any significant differences in the answers from selected groups (age, gender, home location) of respondents. The results show largest differences in answers of various (below 20 and above 20 year) age groups. Nevertheless, further statistical examination would be needed to confirm the visual comparison”.

Edyta Bąkowska-Waldmann, Cezary Brudka, and Piotr Jankowski: LEGAL AND ORGANIZATIONAL FRAMEWORK FOR THE USE OF GEOWEB METHODS FOR PUBLIC PARTICIPATION IN SPATIAL PLANNING IN POLAND: EXPERIENCES, OPINIONS AND CHALLENGES “Geoweb methods offer an alternative to commonly used public participation methods in spatial planning. This paper discusses two such geoweb methods – geo-questionnaire and geo-discussion in the context of their initial applications within the spatial planning processes in Poland. The paper presents legal and organizational framework for the implementation of methods, provides their development details, and assesses insights gained from their deployment in the context of spatial planning in Poland. The analysed case studies encompass different spatial scales ranging from major cities in Poland (Poznań and Łódź) to suburban municipalities (Rokietnica and Swarzędz in Poznań Agglomeration). The studies have been substantiated by interviews with urban planners and local authorities on the use and value of Geoweb methods in public consultations.”

Michał Czepkiewicz, Piotr Jankowski, and Zbigniew Zwoliński: GEO-QUESTIONNAIRE: A SPATIALLY EXPLICIT METHOD FOR ELICITING PUBLIC PREFERENCES, BEHAVIOURAL PATTERNS, AND LOCAL KNOWLEDGE – AN OVERVIEW “Geo-questionnaires have been used in a variety of domains to collect public preferences, behavioural patterns, and spatially-explicit local knowledge, for academic research and environmental and urban planning. This paper provides an overview of the method focusing on the methodical characteristics of geo-questionnaires including software functions, types of collected data, and techniques of data analysis. The paper also discusses broader methodical issues related to the practice of deploying geo-questionnaires such as respondent selection and recruitment, representativeness, and data quality. The discussion of methodical issues is followed by an overview of the recent examples of geo-questionnaire applications in Poland, and the discussion of socio-technical aspects of geo-questionnaire use in spatial planning.”

These papers provide examples of Participatory GIS in Poland and the Czech Republic, which are worth examining, as well as our review of the major themes from the workshop. All the papers are open access.

Identifying success factors in crowdsourced geographic information use in government

A few weeks ago, the Global Facility for Disaster Reduction and Recovery (GFDRR) published an update to the 2014 report on the use of crowdsourced geographic information in government. The 2014 report was very successful – it was downloaded almost 1,800 times from 41 countries around the world in about 3 years (with more than 40 academic references), which showed the interest of researchers and policymakers alike and demonstrated its usefulness. On the basis of this, it was pleasing to be approached by GFDRR about a year ago with a request to update it.

In preparation for this update, we sought comments and reviews from experts and people who had used the report regarding possible improvements and amendments. This feedback helped to surface that the seven key factors that the first report highlighted as shaping the use of VGI in government (namely incentives, aims, stakeholders, engagement, technical aspects, success factors, and problems) have developed both independently and in cross-cutting ways, and that today there is a new reality for the use of VGI in government.

Luckily, in the time between the first report and the beginning of the new project, I learned about Qualitative Comparative Analysis (QCA) at the Giving Time event, and therefore we added Matt Ryan to our team to help us with the analysis. QCA allowed us to take 50 cases, hold an intensive face-to-face team workshop in June last year to code all the cases, and agree on the way we created the input for the QCA. This helped us create multiple models that provide an analysis of the success factors that help explain the cases we deemed successful. We used the fuzzy-set (fuzzy logic) version of QCA, which allowed a more nuanced analysis.
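
For readers unfamiliar with fuzzy-set QCA, the sketch below shows the basic consistency measure that underlies this kind of analysis – the degree to which membership in a condition is a subset of membership in the outcome. The membership scores are made up purely for illustration; this is not the actual coding or the models from the report.

```python
# Fuzzy-set consistency: consistency(X <= Y) = sum(min(x_i, y_i)) / sum(x_i).
# Scores close to 1 indicate that the condition is (almost) always accompanied
# by the outcome across the cases.
def consistency(condition, outcome):
    """Degree to which fuzzy membership in `condition` is a subset of `outcome`."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

# Hypothetical membership scores for five cases in a condition
# (e.g. "strong institutional support") and in the outcome ("successful use of VGI").
condition = [0.9, 0.6, 0.8, 0.3, 1.0]
outcome = [1.0, 0.7, 0.6, 0.4, 0.9]
print(round(consistency(condition, outcome), 2))  # ~0.92 for these made-up scores
```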

Finally, in order to make the report accessible, we created a short version, which provides a policy brief on the success factors, and then the full report with a description of each case study.

It was a pleasure working with the excellent team of researchers on this report: Vyron Antoniou, Hellenic Army Geographic Directorate; Sofia Basiouka, Hellenic Ministry of Culture and Sport; Robert Soden and Vivien Deparday, World Bank, Global Facility for Disaster Reduction & Recovery (GFDRR); Matthew Ryan, University of Southampton; and Peter Mooney, National University of Ireland, Maynooth. We were especially lucky to be helped by Madeleine Hatfield of Yellowback Publishing, who edited the report and made it better structured and much more readable.

The full report, which is titled “Identifying success factors in crowdsourced geographic information use in government” is available here.

And the Policy Brief is available here. 

Citizen Science & Scientific Crowdsourcing – week 5 – Data quality

This week, in the “Introduction to Citizen Science & Scientific Crowdsourcing“, our focus was on data management, to complete the first part of the course (the second part starts in a week’s time since we have a mid-term “Reading Week” at UCL).

The part that I enjoyed developing most was the segment that addresses the data quality concerns that are frequently raised about citizen science and geographic crowdsourcing. Here are the slides from this segment, and below them a rationale for the content and detailed notes.

I’ve written a lot on this blog about data quality, and in many talks that I gave about citizen science and crowdsourced geographic information, the question about data quality is the first one to come up. It is a valid question, and it has led to useful research – for example on OpenStreetMap, and I recall the early conversations, 10 years ago, during a journey to the Association for Geographic Information (AGI) conference about the quality and the longevity potential of OSM.

However, when you are being asked the same question again, and again, and again, at some point, you start considering “why am I being asked this question?”. Especially when you know that it’s been over 10 years since it was demonstrated that the quality is beyond “good enough”, and that there are over 50 papers on citizen science quality. So why is the problem so persistent?

Therefore, the purpose of the segment was to explain the concerns about citizen science data quality and their origin, then to explain a core misunderstanding (that the quality assessment methods used under “scarcity” conditions work under “abundance” conditions), and then to cover the main approaches to ensuring quality (based on my article for the International Encyclopedia of Geography). The aim is to equip the students with a suitable explanation of why you need to approach citizen science projects differently, and then to inform them of the available methods. Quite a lot for 10 minutes!

So here are the notes from the slides:

[Slide 1] When it comes to citizen science, it is very common to hear suggestions that the data is not good enough and that volunteers cannot collect data of a good quality because, unlike trained researchers, we don’t know who they are – a perception that we know little about the people who are involved and therefore don’t know about their abilities. There are also perceptions that, like Wikipedia, it is all very loosely coordinated and that therefore there are no strict data quality procedures. However, we know that even in the Wikipedia case, the scientific journal Nature showed over a decade ago (2005) that Wikipedia results in similar quality to Encyclopaedia Britannica, and we will see that OpenStreetMap produces data of a similar quality to that of professional services.
In citizen science that includes sensing and data collection from instruments, there are also concerns over the quality of the instruments and their calibration – the ability to compare the results with those from high-end instruments.
The opening of the Hunter et al. paper (which offers some solutions) summarises the concerns that are raised over the data.

[Slide 2] Based on conversations with scientists and concerns that appear in the literature, there is also a cultural aspect at play, which is expressed in many ways – with data quality being used as an outlet to express it. This can be similar to the concerns that were raised in The Cult of the Amateur (which we saw in week 2 regarding the critique of crowdsourcing): a wish to protect the position of professional scientists and to avoid the need to change practices. There are also particular concerns when citizen science is connected to activism, as this seems to “politicise” science or make the data suspect – we will see in the next lecture that the story is more complex. Finally, and more kindly, we can also notice that because scientists are used to top-down mechanisms, they find alternative ways of doing data collection and ensuring quality unfamiliar and untested.

[Slide 3] Against this background, it is not surprising to see that checking data quality in citizen science is a popular research topic. Caren Cooper has identified over 50 papers that compare citizen science data with data collected by professionals – as she points out: “To satisfy those who want some nitty gritty about how citizen science projects actually address data quality, here is my medium-length answer, a brief review of the technical aspects of designing and implementing citizen science to ensure the data are fit for intended uses. When it comes to crowd-driven citizen science, it makes sense to assess how those data are handled and used appropriately. Rather than question whether citizen science data quality is low or high, ask whether it is fit or unfit for a given purpose. For example, in studies of species distributions, data on presence-only will fit fewer purposes (like invasive species monitoring) than data on presence and absence, which are more powerful. Designing protocols so that citizen scientists report what they do not see can be challenging which is why some projects place special emphasize on the importance of “zero data.”
It is a misnomer that the quality of each individual data point can be assessed without context. Yet one of the most common way to examine citizen science data quality has been to compare volunteer data to those collected by trained technicians and scientists. Even a few years ago I’d noticed over 50 papers making these types of comparisons and the overwhelming evidence suggested that volunteer data are fine. And in those few instances when volunteer observations did not match those of professionals, that was evidence of poor project design. While these studies can be reassuring, they are not always necessary nor would they ever be sufficient.” (http://blogs.plos.org/citizensci/2016/12/21/quality-and-quantity-with-citizen-science/)

[Slide 4] One way to examine the issue of data quality is to think of the clash between two concepts and systems of thinking about how to address quality issues. We can consider the conditions of standard scientific research as ones of scarcity: limited funding, a limited number of people with the necessary skills, limited laboratory space, and expensive instruments that need to be used in a very specific way – sometimes unique instruments.
The conditions of citizen science, on the other hand, are of abundance – we have a large number of participants with multiple skills, but the cost per participant is low; they bring their own instruments, use their own time, and are also distributed in places that we usually don’t get to (backyards, across the country – we talked about this in week 2). Conditions of abundance are different and require different thinking about quality assurance.

[Slide 5] Here are some of the differences. Under conditions of scarcity, it is worth investing in long training to ensure that the data collection is as good as possible the first time it is attempted, since time is scarce. Also, we would try to maximise the output from each activity that our researcher carries out, and we will put in place procedures and standards to ensure “once & good” or even “once & best” optimisation. We can also force all the people in the study to use the same equipment and software, as this streamlines the process.
On the other hand, under abundance conditions we need to assume that people come with a whole range of skills and that training can be variable – some people will get trained on the activity over a long time, while to start the process we would want people to have light training and join in. We also think of activities differently – e.g. conceiving the data collection as micro-tasks. We might also have multiple procedures and even different ways to record information, to cater for different audiences. We will also need to expect a whole range of instrumentation, sometimes with limited information about the characteristics of the instruments.
Once we understand the new condition, we can come up with appropriate data collection procedures that ensure data quality that is suitable for this context.

[Slide 6] There are multiple ways of ensuring data quality in citizen science data. Let’s briefly look at each one of these. The first 3 methods were suggested by Mike Goodchild and Lina Li in a paper from 2012.

[Slide 7] The first method for quality assurance is crowdsourcing – the use of multiple people who carry out the same work, in effect doing the peer review or replication of analysis that is desirable across the sciences. As Watson and Floridi argued, using the example of Zooniverse, the approaches that are used in crowdsourcing give these methods a stronger claim to accuracy and scientifically correct identification, because they compare multiple observers who work independently.
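
As a simple illustration of the idea (not Zooniverse’s actual pipeline), independent classifications of the same item can be aggregated, with the level of agreement serving as a quality signal:

```python
# Aggregate several independent classifications of one item by majority vote,
# returning the winning label and the share of volunteers who agree with it.
from collections import Counter

def aggregate(classifications):
    counts = Counter(classifications)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(classifications)

# Hypothetical classifications of one galaxy image by five volunteers.
print(aggregate(["spiral", "spiral", "elliptical", "spiral", "spiral"]))  # ('spiral', 0.8)
```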

[Slide 8] The social form of quality assurance uses more and less experienced participants as a way to check the information and ensure that the data is correct. This is fairly common in many areas of biodiversity observation and is integrated into iSpot, but it also exists in other areas, such as mapping, where some information gets moderated (we’ve seen that in Google Local Guides, when a place is deleted).

[Slide 9] Geographical rules are especially relevant to information about mapping and locations. Because we know things about the nature of geography – the most obvious being land and sea in this example – we can use this knowledge to check that the information that is provided makes sense, such as this example of two bumblebees recorded in OPAL in the middle of the sea. While it might be the case that someone saw them while sailing or on some other vessel, we can integrate a rule into our data management system and ask for more details when we get observations in such a location. There are many other such rules – about streams, lakes, slopes and more.
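
A rule like the bumblebee example could be implemented along these lines – a hedged sketch that assumes the shapely library and a set of land polygons that the project would need to supply:

```python
# Geographic plausibility rule: records of a terrestrial species that do not fall
# on land are flagged for follow-up rather than rejected outright.
from shapely.geometry import Point

def on_land(lon, lat, land_polygons):
    """True if the observation falls within one of the supplied land polygons."""
    point = Point(lon, lat)
    return any(poly.contains(point) for poly in land_polygons)

def check_record(record, land_polygons):
    if on_land(record["lon"], record["lat"], land_polygons):
        return "accept"
    return "flag: ask the contributor for more details about the location"
```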

[Slide 10] The ‘domain’ approach is an extension of the geographic one and, in addition to geographical knowledge, uses specific knowledge that is relevant to the domain in which information is collected. For example, in many citizen science projects that involve collecting biological observations, there will be some body of information about species distribution, both spatially and temporally. Therefore, a new observation can be tested against this knowledge, again algorithmically, and this helps to ensure that new observations are accurate. If we see a monarch butterfly within its marked range, we can assume that it will not harm the dataset even if it was a mistaken identification, while an outlier (temporal, geographical, or in other characteristics) should stand out.
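
A simple, hypothetical version of such a domain rule (the range and flight season below are invented purely for illustration) might look like this:

```python
# Domain rule: a new butterfly record is checked against the species' known range
# and flight season; anything outside is flagged as an outlier for expert review.
KNOWN_RANGE = {"min_lat": 49.0, "max_lat": 56.0, "min_lon": -6.0, "max_lon": 2.0}
FLIGHT_SEASON = (5, 9)  # May to September

def plausible(record):
    in_range = (KNOWN_RANGE["min_lat"] <= record["lat"] <= KNOWN_RANGE["max_lat"]
                and KNOWN_RANGE["min_lon"] <= record["lon"] <= KNOWN_RANGE["max_lon"])
    in_season = FLIGHT_SEASON[0] <= record["month"] <= FLIGHT_SEASON[1]
    return in_range and in_season

print(plausible({"lat": 51.5, "lon": -0.1, "month": 7}))  # True - within range and season
print(plausible({"lat": 51.5, "lon": -0.1, "month": 1}))  # False - out of season, flag it
```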

[Slide 11] The ‘instrumental observation’ approach removes some of the subjective aspects of data collection by a human who might make an error, and relies instead on the equipment that the person is using. Because of the increased availability of accurate-enough equipment, such as the various sensors that are integrated into smartphones, many people carry in their pockets mobile computers with the ability to record location, direction, imagery and sound. For example, image files captured on smartphones include the GPS coordinates and a time-stamp in the file, which for the vast majority of people are beyond their ability to manipulate. Thus, the automatic, instrumental recording of information provides evidence for the quality and accuracy of the information. This is where the metadata of the information becomes very valuable, as it provides the necessary evidence.
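
As an illustration (assuming the Pillow imaging library; EXIF handling varies between devices and library versions), the positional and temporal evidence embedded in a photo can be read along these lines:

```python
# Read the time-stamp and GPS position that a smartphone camera embeds in a
# photo's EXIF metadata. Tag numbers follow the EXIF standard:
# 306 = DateTime, 0x8825 = GPSInfo sub-IFD (tags 1/2 = latitude ref/value, 3/4 = longitude).
from PIL import Image

def photo_evidence(path):
    exif = Image.open(path).getexif()
    timestamp = exif.get(306)        # capture date and time, if present
    gps = exif.get_ifd(0x8825)       # GPS sub-IFD (empty if missing)

    def to_degrees(dms, ref):
        deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -deg if ref in ("S", "W") else deg

    lat = to_degrees(gps[2], gps[1]) if gps else None
    lon = to_degrees(gps[4], gps[3]) if gps else None
    return timestamp, lat, lon
```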

[Slide 12] Finally, the ‘process-oriented’ approach brings citizen science closer to traditional industrial processes. Under this approach, the participants go through some training before collecting information, and the process of data collection or analysis is highly structured to ensure that the resulting information is of suitable quality. This can include the provision of standardised equipment, online training or instruction sheets, and a structured data recording process. For example, volunteers who participate in the US Community Collaborative Rain, Hail & Snow network (CoCoRaHS) receive a standardised rain gauge, instructions on how to install it, and online resources to learn about data collection and reporting.

[Slide 13] What is important to be aware of is that these methods are not used alone but in combination. The analysis by Wiggins et al. in 2011 sets out a framework of 17 different mechanisms for ensuring data quality. It is therefore not surprising that, with appropriate design, citizen science projects can provide high-quality data.

Citizen Science for Observing and Understanding the Earth

Since the end of 2015, I’ve been using the following mapping of citizen science activities in a range of talks:

Range of citizen science activities
Explaining citizen science

The purpose of this way of presenting is to provide a way to guide my audience through the landscape of citizen science (see examples on SlideShare). The reason I came up with it is that I have been giving talks about citizen science since 2011. It started with the understanding that I can’t explain extreme citizen science when my audience doesn’t understand what citizen science is, and that turned into general talks on citizen science.

Similarly to Caren Cooper, I have an inclusive approach to citizen science activities, so in talks I covered everything – from birdwatching to DIY science. I felt that this was too much information, so this “hierarchy” provides a map to go through the overview (you can look at our online course to see why it’s not a great typology). It is a very useful way to go through the different aspects of citizen science, while also being flexible enough to adapt – I can switch the “long-running citizen science” fields according to the audience (e.g. marine projects for marine students).

An invitation from Pierre-Philippe Mathieu (European Space Agency) in 2015 was an opportunity to turn this mapping and presentation into a book chapter. The book is dedicated to “Earth Observation Open Science and Innovation” and was edited by Pierre-Philippe and Christoph Aubrecht.

When I got to writing the chapter, I contacted two researchers with further knowledge of citizen science and Earth Observation – Suvodeep Mazumdar and Jessica Wardlaw. I was pleased that they were happy to join me in the effort.

Personally, I’m very pleased that we could include in the chapter the story of the International Geophysical Year (thanks to Alice Bell for this gem), with Moonwatch and Sputnik monitoring.

The book is finally out, it is open access, and you can read our chapter, “Citizen Science for Observing and Understanding the Earth”, for free (as well as all the other chapters). The abstract of the chapter is provided below:

Citizen Science, or the participation of non-professional scientists in a scientific project, has a long history—in many ways, the modern scientific revolution is thanks to the effort of citizen scientists. Like science itself, citizen science is influenced by technological and societal advances, such as the rapid increase in levels of education during the latter part of the twentieth century, or the very recent growth of the bidirectional social web (Web 2.0), cloud services and smartphones. These transitions have ushered in, over the past decade, a rapid growth in the involvement of many millions of people in data collection and analysis of information as part of scientific projects. This chapter provides an overview of the field of citizen science and its contribution to the observation of the Earth, often not through remote sensing but a much closer relationship with the local environment. The chapter suggests that, together with remote Earth Observations, citizen science can play a critical role in understanding and addressing local and global challenges.

 

Citizen Science & Scientific Crowdsourcing – week 2 – Google Local Guides

The first week of the “Introduction to Citizen Science and Scientific Crowdsourcing” course was dedicated to an introduction to the field of citizen science, using history, examples and typologies to demonstrate the breadth of the field. The second week was dedicated to the second half of the course name – crowdsourcing in general, and its utilisation in scientific contexts. In the lecture, after a brief introduction to the concepts, I wanted to use a concrete example that shows maturity in the implementation of commercial crowdsourcing. I also wanted something that is relevant to citizen science and from which many parallels can be drawn, so as to learn lessons. This gave me the opportunity to use Google Local Guides as a demonstration.

My interest in Google Local Guides (GLG) comes from two core aspects of it. As I pointed out in OpenStreetMap studies, I’m increasingly annoyed by claims that OpenStreetMap is the largest Volunteered Geographical Information (VGI) project in the world. It’s not. I guessed that GLG was, and by digging into it, I’m fairly confident that with 50,000,000 contributors (most of whom are, as usual, one-timers), Google has created the largest VGI project around. The contributions fall within my “distributed intelligence” category and are voluntary. The second aspect that makes the project fascinating for me is linked to a talk from 2007 at one of the early OSM conferences about the usability barriers that OSM (or, more generally, VGI) needs to cross to reach a wide group of contributors – basically about user-centred design. The design of GLG is outstanding and shows how much has been learned by Google Maps and, more generally, by Google about crowdsourcing. I had very little information from Google about the project (Ed Parsons gave me several helpful comments on the final slide set), but by experiencing it as a participant who can notice the design decisions and implementation, it is hugely impressive to see how VGI is being implemented professionally.

As a demonstration project, it provides examples of recruitment, nudging participants to contribute, intrinsic and extrinsic motivation, participation inequality, micro-tasks and longer tasks, incentives, basic principles of crowdsourcing such as the “open call” that supports flexibility, location- and context-aware alerts, and much more. Below is the segment from the lecture that focuses on Google Local Guides, and I hope to provide a more detailed analysis in a future post.

The rest of the lecture is available on UCLeXtend.

Chapter in Routledge Handbook of Mapping and Cartography – VGI and Beyond: From Data to Mapping

Hot on the heels of the Routledge Handbook of Environmental Justice is The Routledge Handbook of Mapping and Cartography. The handbook was edited by Alex Kent (Canterbury Christ Church University), who is currently the President of the British Cartographic Society and Editor of The Cartographic Journal, and Peter Vujakovic (also from Canterbury Christ Church University), who edited The Cartographic Journal.

Like the other handbooks, this is an extensive collection of 43 chapters and almost 600 pages about maps and mapping. The chapters provide a vivid demonstration that cartography and map-making are both an art and a science, and that they link to many sciences and practices – from cognitive psychology to geodesy. The list of authors is impressive and includes many of the people who are shaping current cartographic research.

However, with a price tag of £195 for the book, this collection is expensive and suited to university libraries and professional or commercial mapping organisations. The eBook is £35, which makes it much more affordable, though having used the online system, the interface could be better. Luckily, the policy of Routledge permits sharing the chapters on personal websites.

My contribution to the book is a joint chapter, led by Vyron Antoniou, titled VGI and Beyond: From Data to Mapping. The chapter builds on a collaboration between Vyron, myself, and Cristina Capineri during the COST Action on Volunteered Geographic Information (ENERGIC). In the chapter, we look at the concept of Volunteered Geographic Information (VGI) within practices of mapping and cartography, and we attempt to provide an accessible overview of the area. We define what VGI is, look at its advantages and disadvantages in mapping and cartography, and then consider the impacts of VGI on national mapping agencies, the public, and public bodies. The chapter is available here and we would be very happy to hear comments on it.