Contours of Citizen Science paper published in Royal Society Open Science

At the end of 2019, just before the pandemic, I was lucky to be hosted and supported by the Centre for Research and Interdisciplinarity in Paris (CRI-Paris.org). Together with colleagues from ECSA and the European project EU-Citizen.Science, I carried out a survey to help identify the boundaries of citizen science, in terms of the types of activities for which people would answer "yes" to the question "would you call this citizen science?". The study was structured around 50 vignettes, each a short description of about 100 words, like this:

[Example vignette: a short case study describing a crowdfunding activity]

The survey was very successful, attracting about 5100 rating responses. These gave us a good understanding of the different factors that influence decisions about what is, and what isn't, citizen science. The analysis also led to the creation of the ECSA characteristics of citizen science.

The paper was published in the journal "Royal Society Open Science" and you can find it at https://royalsocietypublishing.org/doi/10.1098/rsos.202108. It explains the study and the results.

Here is the abstract:

Citizen science has expanded rapidly over the past decades. Yet, defining citizen science and its boundaries remained a challenge, and this is reflected in the literature—for example in the proliferation of typologies and definitions. There is a need for identifying areas of agreement and disagreement within the citizen science practitioners community on what should be considered as citizen science activity. This paper describes the development and results of a survey that examined this issue, through the use of vignettes—short case descriptions that describe an activity, while asking the respondents to rate the activity on a scale from ‘not citizen science’ (0%) to ‘citizen science’ (100%). The survey included 50 vignettes, of which five were developed as clear cases of not-citizen science activities, five as widely accepted citizen science activities and the others addressing 10 factors and 61 sub-factors that can lead to controversy about an activity. The survey has attracted 333 respondents, who provided over 5100 ratings. The analysis demonstrates the plurality of understanding of what citizen science is and calls for an open understanding of what activities are included in the field.

However, perhaps even more valuable than the paper are the two datasets that we are releasing with the publication. We held them back until now so that reuse can be attributed to the paper, providing a proper citation and a clear record of where the data came from.

First, you'll find the collection of 50 vignettes at https://zenodo.org/record/4281293#.YS_CoY5KguU. The vignettes are organised so that 1-40 each match a different factor that makes them controversial in some form or another, 41-45 were designed to be non-controversial citizen science, and 46-50 were designed to be non-controversially not citizen science (it didn't work out exactly this way, but that's life). The vignettes have already proven very useful in teaching and discussing citizen science, and they have been used in Austria, Germany, and Israel. They are now free to use, translate, and do whatever you want with. I personally think that this set could be increased to about 70 to include other cases of citizen science. Do check them out and consider how they can help you.
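As a quick illustration of that structure, here is a minimal Python sketch that tags each vignette by its role in the survey design. The file name and column names are my own assumptions (the exact layout depends on how you export the Zenodo record), so adjust them to match the actual files.

```python
# Sketch: label the 50 vignettes by their role in the survey design.
# The file name "vignettes.csv" and the columns "vignette_id" and "text"
# are assumptions - check them against the actual Zenodo record.
import pandas as pd

vignettes = pd.read_csv("vignettes.csv")  # hypothetical export of the Zenodo record

def design_role(vignette_id: int) -> str:
    """Map a vignette number to its intended role in the survey design."""
    if 1 <= vignette_id <= 40:
        return "controversial factor"
    if 41 <= vignette_id <= 45:
        return "clear citizen science"
    return "clear not-citizen-science"  # vignettes 46-50

vignettes["design_role"] = vignettes["vignette_id"].apply(design_role)
print(vignettes["design_role"].value_counts())
```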

Second, the full results of the survey, as they came in, are now available at https://zenodo.org/record/4266685#.YS_EBY5KguU. The results that we present in the paper only scratch the surface, and we cut some of the analysis for lack of space. There are about 50,000 words of qualitative responses, and you can find all sorts of things in them, for example the terms that people used to describe the vignettes (e.g. community science, crowdsourcing, etc.). We haven't analysed the responses by people's disciplinary background, among many other things. There are plenty of interesting questions, and if you have a class of graduate students looking for data to explore, or a master's student looking for a project, this might be a good source. We will be happy to support such a student. The dataset can also be useful in the debate about citizen/community science. You can use it and publish papers based on it without asking us (just attribute it correctly). Note that in this dataset you will find the names of the people who contributed some of the responses: we checked their consent twice (during the survey and before releasing the results), and they asked that if their words are used, the words should be attributed to them. This is done to make sure that credit is given properly throughout the research.
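If you want a starting point for exploring the ratings, here is a minimal Python sketch that computes the mean rating per vignette and tallies the free-text terms respondents used. The file name and column names (survey_responses.csv, vignette_id, rating, term_used) are assumptions, so inspect the downloaded Zenodo files first and rename accordingly.

```python
# Sketch: a first look at the survey results from the Zenodo record.
# File and column names are assumptions - inspect the downloaded data first.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # hypothetical export of the Zenodo record

# Mean "is this citizen science?" rating (0-100%) per vignette,
# sorted from least to most citizen-science-like.
mean_ratings = (
    responses.groupby("vignette_id")["rating"]
    .agg(["mean", "count"])
    .sort_values("mean")
)
print(mean_ratings.head(10))

# Most common terms respondents used to describe the activities
# (e.g. "community science", "crowdsourcing").
term_counts = responses["term_used"].dropna().str.lower().value_counts()
print(term_counts.head(20))
```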

The process of writing the paper and carrying out the research was highly collaborative (which is why I use "we" here) and included a diverse group of people: Dilek Fraisl, Bastian Greshake Tzovaras, Margaret Gold, Gerid Hager, Susanne Hecker, Luigi Ceccaroni, Barbara Kieslinger, Uta Wehn, Sasha Woods, Christian Nold, Bálint Balázs, Marzia Mazzonetto, Simone Ruefenacht, Lea Shanley, Katherin Wagenknecht, Alice Motion, Andrea Sforzi, Dorte Riemenschneider, Daniel Dorler, Florian Heigl, Teresa Schaefer, Ariel Lindner, Maike Weißpflug, Lionel Deveaux, Soledad Luna, Monika Mačiulienė, Katrin Vohland, Fredrik Brounéus, Katja Heuer, and Tim Woods.
