Citizen Science & Scientific Crowdsourcing – week 5 – Data quality

This week, in the “Introduction to Citizen Science & Scientific Crowdsourcing” course, our focus was on data management, completing the first part of the course (the second part starts in a week’s time, since we have a mid-term “Reading Week” at UCL).

The part that I enjoyed developing most was the segment that addresses the data quality concerns that are frequently raised about citizen science and geographic crowdsourcing. Here are the slides from this segment, and below them a rationale for the content and detailed notes.

I’ve written a lot on this blog about data quality, and in many talks that I have given about citizen science and crowdsourced geographic information, the question about data quality is the first one to come up. It is a valid question, and it has led to useful research – for example on OpenStreetMap – and I recall the early conversations about the quality and longevity potential of OSM 10 years ago, during a journey to the Association for Geographic Information (AGI) conference.

However, when you are asked the same question again, and again, and again, at some point you start to wonder: “why am I being asked this question?”. Especially when you know that it has been over 10 years since it was demonstrated that the quality is beyond “good enough”, and that there are over 50 papers on citizen science data quality. So why is the problem so persistent?

Therefore, the purpose of the segment was to explain the concerns about citizen science data quality and their origin, then to explain a core misunderstanding (that the same quality assessment methods that are used in “scarcity” conditions will work in “abundance” conditions), and then to cover the main approaches to ensuring quality (based on my article for the International Encyclopedia of Geography). The aim is to equip the students with a suitable explanation of why you need to approach citizen science projects differently, and then to inform them of the available methods. Quite a lot for 10 minutes!

So here are the notes from the slides:

[Slide 1] When it comes to citizen science, it is very common to hear suggestions that the data is not good enough and that volunteers cannot collect data of good quality because, unlike trained researchers, we don’t know who they are – a perception that we know little about the people who are involved and therefore cannot judge their abilities. There is also a perception that, like Wikipedia, it is all very loosely coordinated and that therefore there are no strict data quality procedures. However, even in the Wikipedia case, the scientific journal Nature showed over a decade ago (2005) that Wikipedia achieves a similar quality to Encyclopaedia Britannica, and we will see that OpenStreetMap produces data of a similar quality to professional services.
In citizen science projects that include sensing and data collection from instruments, there are also concerns over the quality of the instruments and their calibration – the ability to compare the results with high-end instruments.
The opening of the Hunter et al. paper (which offers some solutions) summarises the concerns that are raised over data quality.

[Slide 2] Based on conversations with scientists and concerns that appear in the literature, there is also a cultural aspect at play, which is expressed in many ways – with data quality being used as an outlet to express it. This can be similar to the concerns that were raised in The Cult of the Amateur (which we’ve seen in week 2 regarding the critique of crowdsourcing): protecting the position of professional scientists and avoiding the need to change practices. There are also special concerns when citizen science is connected to activism, as this seems to “politicise” science or make the data suspicious – we will see in the next lecture that the story is more complex. Finally, and more kindly, we can also notice that because scientists are used to top-down mechanisms, they find alternative ways of collecting data and ensuring quality unfamiliar and untested.

[Slide 3] Against this background, it is not surprising to see that checking data quality in citizen science is a popular research topic. Caren Cooper has identified over 50 papers that compare citizen science data with data collected by professionals – as she points out: “To satisfy those who want some nitty gritty about how citizen science projects actually address data quality, here is my medium-length answer, a brief review of the technical aspects of designing and implementing citizen science to ensure the data are fit for intended uses. When it comes to crowd-driven citizen science, it makes sense to assess how those data are handled and used appropriately. Rather than question whether citizen science data quality is low or high, ask whether it is fit or unfit for a given purpose. For example, in studies of species distributions, data on presence-only will fit fewer purposes (like invasive species monitoring) than data on presence and absence, which are more powerful. Designing protocols so that citizen scientists report what they do not see can be challenging which is why some projects place special emphasize on the importance of “zero data.”
It is a misnomer that the quality of each individual data point can be assessed without context. Yet one of the most common way to examine citizen science data quality has been to compare volunteer data to those collected by trained technicians and scientists. Even a few years ago I’d noticed over 50 papers making these types of comparisons and the overwhelming evidence suggested that volunteer data are fine. And in those few instances when volunteer observations did not match those of professionals, that was evidence of poor project design. While these studies can be reassuring, they are not always necessary nor would they ever be sufficient.” (http://blogs.plos.org/citizensci/2016/12/21/quality-and-quantity-with-citizen-science/)

[Slide 4] One way to examine the issue of data quality is to think of the clash between two concepts and systems of thinking about how to address quality issues. We can consider the conditions of standard scientific research as ones of scarcity: limited funding, a limited number of people with the necessary skills, limited laboratory space, and expensive instruments that need to be used in a very specific way – sometimes unique instruments.
The conditions of citizen science, on the other hand, are ones of abundance – we have a large number of participants with multiple skills, the cost per participant is low, they bring their own instruments, use their own time, and are also distributed in places that we usually don’t get to (backyards, across the country – we talked about this in week 2). Conditions of abundance are different and require different thinking about quality assurance.

[Slide 5] Here are some of the differences. Under conditions of scarcity, it is worth investing in long training to ensure that the data collection is as good as possible the first time it is attempted, since time is scarce. We would also try to maximise the output from each activity that our researcher carries out, and we put procedures and standards in place to ensure “once & good” or even “once & best” optimisation. We can also require all the people in the study to use the same equipment and software, as this streamlines the process.
Under abundance conditions, on the other hand, we need to assume that people come with a whole range of skills and that training can be variable – some people will be trained on the activity over a long period, while, to get the process started, we would want people to receive light training and join in. We also think of activities differently – e.g. conceiving the data collection as micro-tasks. We might also have multiple procedures and even different ways to record information to cater for different audiences. We will also need to expect a whole range of instrumentation, sometimes with limited information about the characteristics of the instruments.
Once we understand the new conditions, we can come up with appropriate data collection procedures that ensure data quality that is suitable for this context.

[Slide 6] There are multiple ways of ensuring data quality in citizen science. Let’s look briefly at each of them. The first three methods were suggested by Mike Goodchild and Linna Li in a paper from 2012.

[Slide 7] The first method of quality assurance is crowdsourcing – the use of multiple people who carry out the same work, in effect performing peer review or replication of the analysis, which is desirable across the sciences. As Watson and Floridi argued, using the example of Zooniverse, the approaches that are used in crowdsourcing give these methods a stronger claim on accuracy and scientifically correct identification because they compare multiple observers who work independently.
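
To make the replication idea concrete, here is a minimal sketch (in Python, with invented labels and thresholds – not the actual Zooniverse aggregation algorithm) of how independent classifications of the same item can be turned into a consensus answer:

```python
from collections import Counter

def consensus(classifications, min_votes=3, min_agreement=0.8):
    """Aggregate independent volunteer classifications of one item.

    classifications: list of labels, one per volunteer, e.g.
        ["spiral", "spiral", "elliptical", "spiral", "spiral"]
    Returns (label, agreement) if enough volunteers agree,
    otherwise None to signal that the item needs more eyes.
    """
    if len(classifications) < min_votes:
        return None  # not enough independent observers yet
    label, votes = Counter(classifications).most_common(1)[0]
    agreement = votes / len(classifications)
    return (label, agreement) if agreement >= min_agreement else None

# Example: five volunteers classify the same galaxy image
print(consensus(["spiral", "spiral", "spiral", "elliptical", "spiral"]))  # ('spiral', 0.8)
```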

[Slide 8] The social form of quality assurance uses more and less experienced participants as a way to check the information and ensure that the data is correct. This is fairly common in many areas of biodiversity observation and is integrated into iSpot, but it also exists in other areas, such as mapping, where some information gets moderated (we’ve seen that in Google Local Guides, when a place is deleted).
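
A minimal sketch of how such social moderation could look in code – the field names and the experience threshold are entirely illustrative, not taken from iSpot or Google Local Guides:

```python
def review(observation, reviewer):
    """Apply an experienced participant's verdict to a provisional record."""
    if reviewer["experience_level"] < 3:          # illustrative threshold for "experienced"
        raise ValueError("reviewer not experienced enough to validate")
    if reviewer["verdict"] == "confirm":
        observation["status"] = "validated"
    else:
        observation["status"] = "rejected"
        observation["reviewer_note"] = reviewer.get("note", "")
    return observation

# A newcomer's record stays provisional until a more experienced participant reviews it
obs = {"species": "Bombus terrestris", "status": "provisional"}
expert = {"experience_level": 5, "verdict": "confirm"}
print(review(obs, expert))   # status becomes 'validated'
```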

[Slide 9] Geographical rules are especially relevant to information about mapping and locations. Because we know things about the nature of geography – the most obvious being the distinction between land and sea in this example – we can use this knowledge to check that the information that is provided makes sense, such as this example of two bumblebees recorded in OPAL in the middle of the sea. While it might be the case that someone saw them while sailing or on some other vessel, we can integrate a rule into our data management system and ask for more details when we get observations in such a location. There are many other such rules – about streams, lakes, slopes and more.
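
As an illustration, here is a minimal sketch of such a rule in Python using shapely; the tiny rectangle standing in for “land” and the flagging message are assumptions for the example, not OPAL’s actual implementation:

```python
from shapely.geometry import Point, Polygon

# Toy "land" polygon in (lon, lat) – in practice this would be a detailed coastline layer.
LAND = Polygon([(-6.0, 49.8), (2.0, 49.8), (2.0, 59.0), (-6.0, 59.0)])

def check_location(lon, lat):
    """Return 'ok' if the point falls on land, otherwise flag it for follow-up."""
    if LAND.contains(Point(lon, lat)):
        return "ok"
    return "flag: location is at sea – ask the recorder for more details"

print(check_location(-0.13, 51.52))  # central London -> 'ok'
print(check_location(-8.00, 54.00))  # point outside the toy land polygon -> flagged
```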

[Slide 10] The ‘domain’ approach is an extension of the geographic one: in addition to geographical knowledge, it uses specific knowledge that is relevant to the domain in which the information is collected. For example, in many citizen science projects that involve collecting biological observations, there will be some body of information about species distribution, both spatial and temporal. A new observation can therefore be tested against this knowledge, again algorithmically, to help ensure that new observations are accurate. If we see a monarch butterfly within the marked area, we can assume that it will not harm the dataset even if the identification was mistaken, while an outlier (temporal, geographical, or in other characteristics) should stand out.
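
A minimal sketch of such a domain check, with an invented (not real) distribution range used purely for illustration:

```python
from datetime import date

# Hypothetical spatial and seasonal ranges – illustrative values, not real distribution data.
KNOWN_RANGE = {
    "monarch butterfly": {
        "bbox": (-125.0, 25.0, -65.0, 50.0),   # (min_lon, min_lat, max_lon, max_lat)
        "months": range(3, 11),                 # assumed observation season: March-October
    },
}

def is_plausible(species, lon, lat, when: date) -> bool:
    """True if the observation falls inside the known spatial and seasonal range."""
    rng = KNOWN_RANGE.get(species)
    if rng is None:
        return True  # no domain knowledge -> nothing to test against
    min_lon, min_lat, max_lon, max_lat = rng["bbox"]
    in_space = min_lon <= lon <= max_lon and min_lat <= lat <= max_lat
    in_season = when.month in rng["months"]
    return in_space and in_season

print(is_plausible("monarch butterfly", -98.0, 39.0, date(2018, 7, 15)))  # True
print(is_plausible("monarch butterfly", -0.1, 51.5, date(2018, 1, 15)))   # False -> outlier, needs review
```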

[Slide 11] The ‘instrumental observation’ approach removes some of the subjective aspects of data collection by a human who might make an error, and relies instead on the equipment that the person is using. Because of the increased availability of accurate-enough equipment, such as the various sensors that are integrated into smartphones, many people carry in their pockets mobile computers that can record location, direction, imagery and sound. For example, image files captured on smartphones include the GPS coordinates and a time-stamp in the file, which the vast majority of people would not know how to manipulate. Thus, the automatic instrumental recording of information provides evidence of the quality and accuracy of the information. This is where the metadata of the information becomes very valuable, as it provides the necessary evidence.
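
As an illustration, here is a minimal sketch that reads the timestamp and GPS position from a photo’s EXIF metadata with the Pillow library; “photo.jpg” is a placeholder, and a real project would add error handling for images without EXIF data:

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def photo_evidence(path):
    """Extract the automatically recorded timestamp and GPS position from a photo."""
    exif_raw = Image.open(path)._getexif() or {}
    exif = {TAGS.get(tag, tag): value for tag, value in exif_raw.items()}
    gps = {GPSTAGS.get(tag, tag): value for tag, value in exif.get("GPSInfo", {}).items()}

    def to_degrees(dms, ref):
        # EXIF stores coordinates as (degrees, minutes, seconds) rationals
        deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -deg if ref in ("S", "W") else deg

    return {
        "taken": exif.get("DateTimeOriginal"),
        "lat": to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]) if "GPSLatitude" in gps else None,
        "lon": to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]) if "GPSLongitude" in gps else None,
    }

print(photo_evidence("photo.jpg"))  # placeholder file name
```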

[Slide 12] Finally, the ‘process oriented’ approach brings citizen science closer to traditional industrial processes. Under this approach, the participants go through some training before collecting information, and the process of data collection or analysis is highly structured to ensure that the resulting information is of suitable quality. This can include the provision of standardised equipment, online training or instruction sheets, and a structured data recording process. For example, volunteers who participate in the US Community Collaborative Rain, Hail & Snow network (CoCoRaHS) receive a standardised rain gauge, instructions on how to install it, and online resources to learn about data collection and reporting.
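
On the data-entry side, the structured recording process can be expressed as simple validation rules; the sketch below uses invented field names and limits, not CoCoRaHS’s actual reporting schema:

```python
REQUIRED = {"station_id", "date", "precipitation_mm"}   # illustrative protocol fields

def validate_report(report: dict) -> str:
    """Accept only reports that follow the structured recording protocol."""
    missing = REQUIRED - report.keys()
    if missing:
        return f"rejected: missing fields {sorted(missing)}"
    value = report["precipitation_mm"]
    if not isinstance(value, (int, float)) or value < 0:
        return "rejected: precipitation must be a non-negative number"
    if value > 500:   # implausibly large daily total -> ask the volunteer to confirm
        return "held for review: unusually high value"
    return "accepted"

print(validate_report({"station_id": "UK-001", "date": "2018-02-08", "precipitation_mm": 4.2}))
```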

[Slide 13] What is important to be aware of is that these methods are not used alone but in combination. The analysis by Wiggins et al. in 2011 provides a framework of 17 different mechanisms for ensuring data quality. It is therefore not surprising that, with appropriate design, citizen science projects can provide high-quality data.


Citizen Science & Scientific Crowdsourcing – week 3 – Participation inequality

One of the aspects that fascinate me about citizen science and crowdsourcing is the nature of participation, and in particular participation inequality. As I noted last week, when you look at large-scale systems, you expect to see it in them (Google Local Guides, for example, exhibits a 95:5:0.005 ratio).

I knew that this phenomenon has been observed many times in Massive Open Online Courses (MOOCs), so I expected it to happen in this course. I’m particularly interested in the dynamic aspect of participation inequality: for example, at the beginning of the “Introduction to Citizen Science and Scientific Crowdsourcing” course, every single person is at exactly the same level of participation – 0. However, within three weeks, we are starting to see the pattern emerge. Here are some of the numbers:

At this point in time, there are 497 people who went through the trouble of accessing UCLeXtend and creating a profile. They are a small group out of the people who saw the blog post (about 1,100) or the tweet about it (about 600 likes, retweets, or clicks on the link). A further 400 people filled in the online form that I set up before the course opened, stating their interest in it.

The course is structured as a set of lectures, each broken into segments of about 10 minutes. Although the annotated slides are available, and it is likely that many people prefer them over listening to a PowerPoint video (it’s better in class!), the rate of viewing of the videos gives an indication of engagement.

Here are our viewing statistics for now:

[Figure: viewing statistics for the course videos]

We can start to see how the sub-tasks (viewing a series of videos) are already creating the inequality – lots of people watch part of the first video and either give up (maybe switching to the notes) or leave it for another time. By part 4 of the first lecture, we are already down to very few views (the “Lecture 3 Part 2” video is the one that I integrated in the previous blog post).

What is interesting to see is how fast participation inequality emerges within the online course; notice that there is now a core of about 5-10 people (about 1% to 2%) who are following the course at the same rate as the 9 students in the face-to-face class. I expect people to also follow the course over a longer period of time, so I wouldn’t read too much into the pattern; I will wait until the end of the course, and a bit after it, to do a full analysis.
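
For that fuller analysis, the per-participant activity counts could be split into the familiar tiers; the sketch below uses invented numbers and thresholds purely to illustrate the calculation:

```python
def participation_tiers(counts):
    """Split per-participant activity counts (e.g. videos watched) into rough tiers."""
    total = len(counts)
    core = sum(1 for c in counts if c >= 10)           # illustrative "core" threshold
    occasional = sum(1 for c in counts if 1 <= c < 10)
    lurkers = sum(1 for c in counts if c == 0)
    return {name: round(100 * n / total, 1)
            for name, n in [("core %", core), ("occasional %", occasional), ("lurkers %", lurkers)]}

# 497 registered participants: a handful follow everything, many watched a little,
# most only registered (numbers invented for illustration, not the course data).
views = [30] * 8 + [3] * 150 + [0] * 339
print(participation_tiers(views))   # {'core %': 1.6, 'occasional %': 30.2, 'lurkers %': 68.2}
```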

When I was considering setting up the course as a hybrid online/offline one, I was expecting this, since the amount of time required to follow the course is nearly 4-5 hours a week – reasonable for an MSc student during a course, but tough for a distance learner (I have huge appreciation for these 10 people who are following!).


Citizen Science & Scientific Crowdsourcing – week 2 – Google Local Guides

The first week of the “Introduction to Citizen Science and Scientific Crowdsourcing” course was dedicated to an introduction to the field of citizen science, using its history, examples and typologies to demonstrate the breadth of the field. The second week was dedicated to the second half of the course name – crowdsourcing in general, and its utilisation in scientific contexts. In the lecture, after a brief introduction to the concepts, I wanted to use a concrete example that shows maturity in the implementation of commercial crowdsourcing. I also wanted something that is relevant to citizen science and from which many parallels can be drawn, so lessons could be learned. This gave me the opportunity to use Google Local Guides as a demonstration.

My interest in Google Local Guides (GLG) comes from two core aspects of it. As I pointed out in OpenStreetMap studies, I’m increasingly annoyed by claims that OpenStreetMap is the largest Volunteered Geographical Information (VGI) project in the world. It’s not. I guessed that GLG was, and by digging into it, I’m fairly confident that with 50,000,000 contributors (of whom most are, as usual, one-timers), Google has created the largest VGI project around. The contributions fall within my “distributed intelligence” category and are voluntary. The second aspect that makes the project fascinating for me is linked to a talk from 2007 at one of the early OSM conferences about the usability barriers that OSM (or, more generally, VGI) needs to cross to reach a wide group of contributors – basically about user-centred design. The design of GLG is outstanding and shows how much has been learned by the Google Maps team, and more generally by Google, about crowdsourcing. I had very little information from Google about the project (Ed Parsons gave me several helpful comments on the final slide set), but by experiencing it as a participant who can notice the design decisions and their implementation, it is hugely impressive to see how VGI is being implemented professionally.

As a demonstration project, it provides examples of recruitment, nudging participants to contribute, intrinsic and extrinsic motivation, participation inequality, micro-tasks and longer tasks, incentives, basic principles of crowdsourcing such as an “open call” that supports flexibility, location- and context-aware alerts, and much more. Below is the segment from the lecture that focuses on Google Local Guides, and I hope to provide a more detailed analysis in a future post.

The rest of the lecture is available on UCLeXtend.

Launching a citizen science course – week 1

Today, I gave the opening lectures of the new UCL course ‘Introduction to Citizen Science and Scientific Crowdsourcing‘. In a way, it was more work than I originally thought, but I also suspected that I was underestimating the effort – so it’s not completely unexpected.

Although I was responsible for the first installation of Moodle, the virtual learning environment, at UCL in 2003, I have not used it in the context of an online course for remote learners. I did experience the development of the Esri Survey123 module with Patrick Rickles and the excellent team at Esri, who did most of the work. It’s actually quite a challenge. Luckily, the e-learning support team at UCL was happy to guide us and set us on an appropriate path for developing the material for the course.

Having the course materialise also closes a part of the original ExCiteS proposal that was left open. Here is what the proposal for Challenging Engineering said: “In the fourth year, the research group will begin to consolidate the technology (with the first PhD students completing their studies) and will develop a further focused research proposal utilising the lessons from Adventure 2… In this year, a module on Citizen Science will be offered for MSc and PhD students at UCL.” The project officially started in September 2011, so the fourth year was 2016 – so launching it in early 2018, within the 2017/2018 academic year, should be considered on time in academic proposal terms!

Compared to things that I’ve done in the past, I have to note that the evolution of what is considered boring technology – e.g. Microsoft PowerPoint (MSPP) – was instrumental to the ability to put this course together. Below you’ll see the opening segment. In actual terms, the extra effort to turn it into online teaching material was not huge – record a voice-over in MSPP, save it as a video, upload it to YouTube, and link it to Moodle (or here). I do hope that we’re getting it right with the course, but we’ll see as we develop it.

The rest of the lecture is available on UCLeXtend.

Online course – Introduction to Citizen Science and Scientific Crowdsourcing

It’s a new year, and just the right time to announce that, starting on 11 January, UCL will run an 11-week hybrid (online and face-to-face) course called “Introduction to Citizen Science and Scientific Crowdsourcing“. The course aims to introduce students to the theory and practice of citizen science and scientific crowdsourcing. The module will explore the history, theoretical foundations, and practical aspects of designing and running citizen science projects, and it will be mostly taught by members of the Extreme Citizen Science group (we have some guests from other organisations!).

The course is running for the first time as part of the M.Sc. programmes of the Department of Geography at UCL, with face-to-face lectures and practical work. In the spirit of citizen science, we’re opening the course up, and it is available on the UCLeXtend website.

The course will run as a hybrid – the material was designed to develop the learning of the students in the class, but it is organised in a way that anyone who wants to join the course remotely can do so. For example, you will be able to follow the lectures online – all the slides and the audio are available on UCLeXtend. The reading material and class-preparation videos are all open access, and in the practicals we are using open source software or websites that you can access regardless of your registration. Of course, you can’t get UCL credits for attending the class if you are just joining remotely – those who attend the class will be assessed through two assignments that will be marked – but there are plenty of reflection questions and discussions in the online course for you to assess your progress and to provide us with feedback on how the course is going. We will dedicate some effort to supporting our distance learners, and you will be able to interact with the students who take the class at UCL, as you will be using the same material and system that they use.

Each week, there will be two lectures and a practical session that will demonstrate some aspects of the issues that were covered during the lectures. Each lecture and the activities that are linked to it are planned to last about an hour.

As preparation for class, we will provide a video or two to watch and two or three pieces of text to read. These are necessary, since the lecture assumes this preparation. The necessary readings are marked “Core Reading”. We also provide “Additional Reading” – these are usually pieces that are discussed in class. Finally, the “Deep Dive” readings expand on the class material and might be used in assignments (if you take the face-to-face course) or to expand your understanding (if you are taking the course remotely).

Below you’ll find an outline of the course and its content:

Date | Content | Lead
11 Jan | Lecture: Historical citizen science, current trends that influence citizen science, and an overview | Muki Haklay
11 Jan | Lecture: Landscape of citizen science – Typologies | Muki Haklay
11 Jan | Practical: Experiencing citizen science – PenguinWatch, Gender and Tech Magazines, and GalaxyZoo | Alex Papadopoulos
18 Jan | Lecture: Crowdsourcing principles and practice | Muki Haklay
18 Jan | Lecture: Scientific crowdsourcing examples (guest lecture TBA) | Muki Haklay
18 Jan | Practical: More complex crowdsourcing – OpenStreetMap and EyeOnAlz | Alice Sheppard
25 Jan | Lecture: User-centred design principles for citizen science technology | Artemis Skarlatidou
25 Jan | Lecture: Online volunteer engagement, management, and care | Alice Sheppard
25 Jan | Practical: Volunteer engagement scenarios | Alice Sheppard
1 Feb | Lecture: User-centred design methods for citizen science technology | Artemis Skarlatidou
1 Feb | Lecture: User-centred design methods for citizen science technology (guest lecture TBA) | Artemis Skarlatidou
1 Feb | Practical: Usability evaluation of citizen science applications – cognitive walkthrough and heuristic evaluation | Alex Papadopoulos
8 Feb | Lecture: Dealing with data in citizen science – quality, management, and sharing | Muki Haklay
8 Feb | Lecture: Practical aspects of data management – technologies and existing systems | Muki Haklay
8 Feb | Practical: Using and analysing citizen science data with the OPAL Data Explorer | Alex Papadopoulos
15 Feb | No class – Reading Week |
22 Feb | Lecture: Citizen science in environmental management and monitoring | Muki Haklay
22 Feb | Lecture: Scales and types of environmental citizen science (guest lecture from Earthwatch TBA) |
22 Feb | Practical: Developing a data collection tool with Esri Survey123 | Alex Papadopoulos
1 Mar | Lecture: Ethics and legal aspects of citizen science | Muki Haklay
1 Mar | Lecture: Introduction to data collection for non-literate participants, Sapelli | Julia Altenbuchner
1 Mar | Practical: Developing a data collection app with Sapelli | Julia Altenbuchner
8 Mar | Lecture: Evaluation of citizen science activities – types and approaches | Cindy Regalado
8 Mar | Lecture: Tools and methods of evaluation and demonstration on projects | Cindy Regalado
8 Mar | Practical: Developing an evaluation framework and plan for a project | Cindy Regalado
15 Mar | Lecture: Policy and organisational aspects of citizen science | Muki Haklay
15 Mar | Lecture: Understanding terminologies and definitions of citizen science | Muki Haklay
15 Mar | Practical: Data collection with Sapelli and evaluation of results | Julia Altenbuchner
22 Mar | Lecture: Theoretical frameworks for citizen science – from Actor-Network Theory to Post-Normal Science | Christian Nold
22 Mar | Lecture: Science and society framing of citizen science – from Alan Irwin to Responsible Research and Innovation | Muki Haklay
22 Mar | Practical: Using iNaturalist or iSpot to collect data in the wild, and preparation for City Nature Challenge 2018 | Muki Haklay


Part of the reason that we can open the course is the support of the UCL Geography department, with additional support from the following bodies:

Natural Environment Research Council (NERC) project “OPENER: Scoping out a national cOmmunity of Practice for public ENgagement with Environmental Research” (NE/R012067/1)

Engineering and Physical Sciences Research Council (EPSRC) projects “Extreme Citizen Science” (EP/I025278/1) and “Challenging RISK: Achieving Resilience by Integrating Societal and Technical Knowledge” (EP/K022377/1)

EU Horizon 2020 projects “Doing It Together science (DITOs)” (Project ID 709443) and “WeGovNow” (Project ID 693514).

European Research Council (ERC) Advanced Grant “Extreme Citizen Science: Analysis and Visualisation” (Project ID 694767)