Introducing: ECSA Characteristics of Citizen Science

Today the European Citizen Science Association (ECSA) is launching a document that is aimed at helping to identify the types of activities that belong to citizen science. The document “ECSA Characteristics of Citizen Science” comes with an interpretation document, called “ECSA Characteristics of Citizen Science – Explanation Notes“. Together, they are aimed to work with (and build on) the highly successful “ECSA 10 Principles of Citizen Science“. So what are these documents? Why do we need to define the characteristics of citizen science, and who exactly needs them?

I would state from the start – these documents are attempting to make a fuzzy cloud shape out of a sharp-cornered box: to have a box that is fuzzy and leaves a lot of out-of-the-box space. They try to avoid a strict definition of citizen science, while at the same time listing – over five pages – what sort of things you can expect in a citizen science project (which is, at the end of the day, a form of definition). This seems like a very complex way to go about it – so why not just have an “ECSA definition of citizen science”?

The answer to this is “it’s complex”. In a specific context, a definition can be very useful – especially if you need to make decisions. For example, if you are creating a national website for citizen science, you want to be transparent and open about what type of projects will be hosted there – this is why the coordinators of the Austrian platform Österreich forscht set out quality criteria for their platform, and one of the early documents for the EU-Citizen.Science platform deals with criteria for inclusion and exclusion. There is also a need that comes from policymakers and research funders – as citizen science gains more profile, the response to calls such as the “Science with and for Society” programme of Horizon 2020 has increased. When making funding decisions, it is important to be open and transparent about what types of projects will be eligible for funding and support. A clear definition is also important for members of the public who participate in a project, and for scientists. Although this is a fairly small group, for some participants it will be important to know if what they are participating in is bona fide citizen science, and scientists who have heard about citizen science and want to offer a project need to know if what they are offering is, indeed, appropriate. So there are multiple groups, who are not experts in citizen science, who need to know whether a given project should be part of the area of citizen science or not.

However, once the Österreich forscht criteria were published in a journal paper with a call for an international definition of citizen science, criticism followed, with warnings that too narrow a definition can harm citizen science in the long run (I contributed to the response letter). So this created a problem – on the one hand, there is a need for a definition, and on the other hand, we need to avoid a narrow one.

Two opportunities emerged last autumn – I had time as part of a short-term research fellowship at the CRI in Paris, and the EU-Citizen.Science project received requests from policy officers from the European Commission and from the Open Science Policy Platform (OSPP) to come up with a definition.

In order to deal with the conundrum of creating a definition without making it narrow and precise, I suggested an approach that I originally learned from the OpenStreetMap licence change process: instead of starting from the definition, start with case studies and identify all the aspects that you want to include in a definition. Once you have clarity about the plurality and the characteristics of what you want, it is possible to articulate them. The process in OpenStreetMap was lengthy and not without problems, but the case studies approach seemed to work well. The reason to use the case studies (or vignettes) approach for citizen science is that it allows for context. For example, while it seems simple to ask “can a project that pays participants be called citizen science?”, an answer cannot be provided without clarity about the context – in some cases it will be appropriate, while in others it will not. By including context and issues, it can lead to a much clearer understanding of people’s positions. I consulted with the ECSA team about this approach, and we suggested the following methodology in early October 2019:

The development of the characteristics will be carried out through several online/offline workshops – currently about three are envisaged, each of 2 to 3 hours: the first in mid/late October, the second in November, and the final one in December. This process should lead to an early draft that can be shared and commented on at the end of 2019 and the beginning of 2020. To allow wider and deeper discussion, the effort with ECSA is envisaged to continue well into 2020, with scope for discussion during the annual conference in Trieste.

The process of engagement should progressively involve more people, both from within the citizen science practice communities and beyond. Comments should be sought from different relevant stakeholders, including the ECSA working group on national platforms, which is working on criteria for citizen science based on the Austrian experience as well as other working groups.

The suggested process is aimed at defining the “contours of citizen science” or “defining the landscape”, and then developing a set of characteristics on the basis of the process. To identify these contours, we will start by identifying 30-50 types of activities – some of them within citizen science (a bioblitz), some of them clearly outside (a social survey), and some in-between (an interactive exhibit in a science museum). They will all be described in more or less the same way (e.g. project owner, what is happening, what the participants do, how the results are used, payment to participants or payment to participate in the project …). These will be short descriptions (50-70 words). Once the case studies are compiled, they will go into a survey that will include, for each case, a slider from “not citizen science” to “clearly citizen science”, an alternative name, and space for notes. These descriptions will, eventually, be provided in the accompanying document to the characteristics but will not form part of them (similarly to the Robinson 2019 chapter and the 10 Principles).

The cases will be organised in a survey in which respondents see the cases in randomised order. Ideally, a participant can choose to stop when they want – even after 10 cases. They will then be asked a few demographic/professional questions, and about their knowledge of citizen science. The survey will be distributed to people in and outside the citizen science community. At the end of it, the opinions (as expressed in the slider) can define what is clearly inside and outside, and where the edge cases are. Once that is set, common characteristics across the projects will be distilled and set into different groupings. These will then be compiled into a final document.
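The randomised ordering with optional early stopping described above can be sketched roughly as follows. This is a minimal illustration with hypothetical case names – the actual survey ran on its own platform, which handled this internally:

```python
import random

# Hypothetical case-study names for illustration only.
cases = ["bioblitz", "social survey", "interactive museum exhibit",
         "water quality monitoring", "paid data labelling"]

def run_survey(cases, respondent_seed, max_cases=None):
    """Return the cases in a respondent-specific random order.

    max_cases allows a respondent to stop early (e.g. after 10 cases)
    while still having seen an unbiased sample of the full set.
    """
    rng = random.Random(respondent_seed)  # per-respondent ordering
    order = cases[:]                      # copy; master list untouched
    rng.shuffle(order)
    return order if max_cases is None else order[:max_cases]

# Each respondent sees their own randomised order, so every case
# accumulates ratings even when people stop early.
print(run_survey(cases, respondent_seed=42, max_cases=3))
```

Because each respondent gets an independent shuffle, early drop-out does not systematically starve the cases that happen to be listed last.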

This methodology was indeed followed through, with enhancements and improvements. With a working group of 25 people who participated in workshops and contributed their knowledge and connections, it was possible to progress with the design of the study – from identifying the framework and parameters that would be used to construct the case studies, to the design of the survey (which was a vignette study), and the analysis of the results. The survey was run in December 2019 and provided a very rich source of information – 330 people responded, and each case study received, on average, over 100 gradings indicating its degree of citizen science. The working group is currently working on a paper that will share the results.

With so much information, it was possible to identify the different characteristics that people disagree and agree on, and to construct from these a set of characteristics that represent the views of a wide range of people on what is and what isn’t citizen science.
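As a rough illustration of how slider gradings can separate “clearly inside”, “clearly outside”, and edge cases – with entirely hypothetical scores and thresholds, not the working group’s actual analysis:

```python
from statistics import mean, stdev

# Hypothetical slider scores per case
# (0 = not citizen science, 100 = clearly citizen science).
gradings = {
    "bioblitz": [92, 88, 95, 90, 85],
    "social survey": [10, 15, 5, 20, 12],
    "museum exhibit": [40, 70, 30, 65, 55],
}

def classify(scores, low=30, high=70):
    """Classify a case by its mean slider score.

    The thresholds are illustrative; a high standard deviation
    flags cases where respondents disagree.
    """
    m = mean(scores)
    if m >= high:
        return "clearly citizen science"
    if m <= low:
        return "not citizen science"
    return "edge case"

for case, scores in gradings.items():
    print(case, classify(scores), f"spread={stdev(scores):.1f}")
```

The cases with mid-range means and wide spreads are the ones where context matters most – exactly the material from which the characteristics were distilled.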

The characteristics are not without challenges – for example, the survey revealed a strong animosity towards commercially focused citizen science, something for which we will need to find a way to provide support in the future if we want citizen science to be able to sustain activities in the long run.

A lot of work by many people went into these characteristics, and I will be interested to see how they are picked up and used. The OSPP has already endorsed them, which is a start.

The work was supported by the CRI short term fellowship and by my ERC grant ECSAnVis, as well as the EU-Citizen.Science project.



How Does Citizen Science “Do” Governance? Reflections from the DITOs Project

This is the second post about papers in the special collection in the journal “Citizen Science: Theory and Practice” that was dedicated to Policy Perspectives of Citizen Science. The first paper is described in this post.

It is fairly rare to be able to catch an image close to the time when the concept for a paper was hatched, but in the case of the paper “How Does Citizen Science “Do” Governance? Reflections from the DITOs Project“, there is such an image:

[Photo from the DITOs project meeting, 27 June 2018]

The paper emerged from a discussion that Claudia Göbel started during a Doing It Together Science (DITOs) project meeting in Ljubljana in June 2018. Claudia and Aleks (both in the picture, mapping all the connections between project partners), together with Christian and myself, discussed what we can learn from our project about the rationale for policymakers to commission and use citizen science. It starts from the notion that citizen science’s relationship with political processes is more than being a source of data or an object of research policy. DITOs, with its huge variety of events that were aimed both at policymakers and at the public, and across different places and topics, provided a good basis for the analysis. We identified four modes of governance that are relevant to DITOs, and this provided the basis for the paper. The paper can be accessed here. Its abstract is:

Citizen science (CS) is increasingly becoming a focal point for public policy to provide data for decision-making and to widen access to science. Yet beyond these two understandings, CS engages with political processes in a number of other ways. To develop a more nuanced understanding of governance in relation to CS, this paper brings together theoretical analysis by social science researchers and reflections from CS practice. It draws on concepts from Science and Technology Studies and political sciences as well as examples from the “Doing-It-Together Science” (DITOs) project. The paper develops a heuristic of how CS feeds into, is affected by, forms part of, and exercises governance. These four governance modes are (1) Source of information for policy-making, (2) object of research policy, (3) policy instrument, and (4) socio-technical governance. Our analysis suggests that these four dimensions represent different conceptions of how science and technology governance takes place that have not yet been articulated in the CS literature. By reflecting on the DITOs project, the paper shows how this heuristic can enrich CS. Benefits include project organisers better communicating their work and impacts. In its conclusion, the paper argues that focusing on the complexity of governance relations opens up new ways of doing CS regarding engagement methodologies and evaluation. The paper recommends foregrounding the broad range of governance impacts of CS and reflecting on them in cooperation between researchers and practitioners.

How Does Policy Conceptualise Citizen Science? A Qualitative Content Analysis of International Policy Documents

In early December, a special collection of papers in the journal “Citizen Science: Theory and Practice” was dedicated to Policy Perspectives of Citizen Science. I contributed to two papers in this collection. The first one is “How Does Policy Conceptualise Citizen Science? A Qualitative Content Analysis of International Policy Documents“. The paper was led by Susanne Hecker, who organised the writing team in March 2018. Her idea was to identify a cohort of policy documents from across the world and look specifically at the way they use the term “citizen science”. By doing so, the qualitative analysis of the documents can help in revealing the way that the term and the practice are framed within policy circles. A lot of the collaboration and the development of the paper was carried out through online calls, and we met to discuss the paper once (during the ECSA conference in 2018). Susanne and Nina did the analysis of the documents, while Aletta and I provided suggestions for documents and influenced the framework for analysis. Overall, 43 policy documents were analysed – the full paper can be accessed here, and the abstract reads:

Policy and science show great interest in citizen science as a means to public participation in research. To recognize how citizen science is perceived to foster joint working at the science-society-policy interface, a mutual understanding of the term “citizen science” is required. Here, we assess the conceptualisation and strategic use of the term “citizen science” in policy through a qualitative content analysis of 43 international policy documents edited by governments and authorities. Our results show that most documents embrace the diversity of the research approach and emphasize the many benefits that citizen science may provide for science, society, and policy. These include boosting spatio-temporal data collection through volunteers, tapping into distributed knowledge domains, increasing public interest and engagement in research, and enhancing societal relevance of the respective research. In addition, policy documents attribute educational benefits to citizen science by fostering scientific literacy, individual learning, and skill development, as well as by facilitating environmental stewardship. Through active participation, enhanced ownership of research results may improve policy decision-making processes and possibly democratise research as well as public policy processes, although the latter is mentioned only in a few European Union (EU) documents. Challenges of citizen science mentioned in the analysed policy documents relate mainly to data quality and management, to organisational and governance issues, and to difficulties of the uptake of citizen science results into actual policy implementation due to a lack of citizen science alignment with current policy structures and agendas. Interestingly, documents largely fail to address the benefits and challenges of citizen science as a tool for policy development, i.e., citizen science is mainly perceived as only a science tool. 
Overall, policy documents seem to be influenced strongly by the citizen science discourse in the science sector, which indicates a joint advocacy for citizen science.



The role of learning in citizen science and the impact of participation inequality

From August to December I was hosted at the Centre for Research and Interdisciplinarity (CRI) in Paris. This short-term research fellowship had a focus on learning and citizen science. The recording below is from a seminar in November 2019, titled “The role of learning in citizen science and the impact of participation inequality”.

The talk explored how learning is integrated into citizen science in its different modes. As background, it starts with published typologies and identified goals and objectives of learning within citizen science projects. Based on these, it examines different projects – some more contributory (top-down projects, in which scientists set up the project and call on people to join), some collegial (bottom-up projects, in which the community has more control over the project), and the projects of the Extreme Citizen Science group. It ends with questions about the interaction between learning and participation (exploring the implications of participation inequality), and the way citizen science fits within different disciplinary practices.

What can we learn from analysing citizen science training materials?

As part of the EU-Citizen.Science project, UCL is leading the training work package. This means that we coordinate the part of the platform that will help to store and share training material for citizen science projects, and for the field in general (such as the UCL online course). The stay at the Centre for Research and Interdisciplinarity (CRI) in Paris provided an opportunity to work on this issue with two interns from the interdisciplinary undergraduate programme in life sciences. At the beginning of the term, the UCL team, together with Myriam Fockenoy and Morgane Opoix (the student interns), carried out a workshop to decide on the data collection scheme – identifying material, recording it, and checking its content. It was especially helpful that Myriam and Morgane could analyse material in French, which will be useful for the project as a whole. They worked several hours every week, finding material and checking it thoroughly. Additional material was contributed by Earthwatch and by Yaqian Wu from UCL. We ended up with 30 pieces of training material that we looked at and catalogued. Finally, we worked on analysing the material, which led to a short report.

You can read the report here.



Platforms for citizen science

A CRI-Muséum national d’Histoire naturelle workshop, created by Anshu (CRI long-term fellow) and Simon (MNHN) following a meeting of the Galaxy community in Freiburg. I joined the design process, and the workshop was structured so that the museum and the CRI present the systems that are being developed, with scope for a discussion about lessons and collaboration. The details of the workshop are on the CRI website. These are rough notes from the workshop.


Ariel Lindner – since the first major grant of the CRI (Citizen Cyberlab), there has been an interest at the CRI in digital platforms for engagement. At the same time, they received a grant to innovate in education, and since then the CRI has become a centre for learning sciences and research, with a link between learning, digital sciences, and life sciences. The principles are mentorship/empowerment, the right to err, and sharing. For the CRI, open science means transparency and collaboration. A few of the important things for the day: gaps – the distance between the public and research can grow and there is distrust, but on the other hand, kids are going to the streets over an issue that is scientific. There are digital gaps, and lab instrumentation that is ever more complex and not widely available, even within the scientific community. We need to consider how we address these gaps – how a collaborative approach can help us to progress.

I covered the ExCiteS platforms and some of my experience from different collaborative platforms that we developed in ExCiteS. The slides are provided below.

Romain Julliard: citizen science – [big] quality data and [artificial] collective intelligence. The museum has over 15 years of experience, over 15 projects, and over 15,000 active participants a year, all part of the National Museum of Natural History’s role in monitoring biodiversity through citizen science. They see the projects as involving volunteers, scientific experts, and NGO facilitators. There are projects such as Spipoll, which involves photographing insect pollinators – taking such a picture is quite challenging. There is a positive correlation between longevity of engagement and data quality – finding the zone of flow, as in computer games. Identify the skills that are required from the participants and communicate with them. The second lesson is the importance of the social platform and communication among the project participants to improve data quality control – participants “police” each other and guide the process of improving data quality. A comparative study demonstrated that the visibility of data and the ability of participants to learn from each other are critical in terms of following the protocol and producing relevant data. They learned that making data visible to all allows imitation and more homogeneous data. Comments and discussion allow advice, help, and quality control, and there is also improvement made by the contributor through versioning. There are differences from textbook statements: e.g. that data observations should be independent of each other, or that there is a need to train participants in advance. They recommend allowing imitation, letting participants engage with each other, and sharing part of the QA. The project 65 Millions d’observateurs has major funding and is creating a common system for data collection, with a common approach across projects – they are currently working on shared infrastructure for citizen science projects.
One project is an open observatory for all species with over 146 different sub-projects. They are creating a new service unit, MOSaic, with the Sorbonne to provide ongoing skills in technology for citizen science, with over 15 people covering a range of skills.
Simon Bénateau / Galaxy-Bricks: towards collaborative data analysis – creating tools for analysing the data. The tool aims to allow sharing and making errors, and to create communities. Citizen science is diverse – from high school students to experts, working on environmental issues and on organisms, with participants ranging from very little knowledge to quite a high level of expertise. The process includes the Museum’s network, which works through protocols and data with participants, and also involves researchers and partners in the scientific community. It allows for new ways to participate and to ask questions of the data. They also want to help in teaching the scientific approach and data literacy. Choosing Galaxy means that there is an existing development community; it supports sharing of the methodology, it is FAIR and open-source, and it even provides access to high-performance computing. Their aim is to simplify the UI and the process of constructing an analysis workflow, using Scratch to develop an analysis process that is suitable for learning. The process follows the structure of scientific research: set your research question, import data, process data, visualise, carry out statistical tests, and reach conclusions.

Eric Cherel: The Learning Planet – the team at the CRI is trying to build tools that can help a model campus digital infrastructure, from tactile information screens to other tools that can be used elsewhere. There are learning tools that are supposed to empower the community. The project system at the CRI is used to present projects – who you are working with, what you are working on, linking to different tools. The tools are used to create descriptions of projects, from small to large, and help to relate projects to each other. The global project, WeLearn, aims to catalyse learning. It is currently a browser extension – when you come across a source, you mark it as a learning source. The system tries to extract the concepts from the page, also with crowdsourcing, and it creates a global map (currently in French and English). It also creates a profile of the learner, so it might be able to match learners with material. There is a lot of potential to map learning resources on a massive scale. They use a cartography of concepts as a way to present to people their topics and learning. They use Wikipedia to train an ML model as a way to extract concepts, with help from people from data4good. They are linking to EdTech companies to share ontologies and abilities to manage concepts. Integrating the use of smartphones can allow the capture of books and other offline learning resources and events. They aim to add more information to support reflexivity, recommendation, and self-documented learning, and hope to reach out to EdTech, Wikipedia, and open science platforms.

Anshu Bhardwaj: Collaborative Tools to Accelerate Infectious Disease Research. The audiences she aims at are researchers, undergraduates, and industry – they will have some knowledge of the area before joining the project. In particular, she works on drug discovery. TB is an example of the issue of antibiotic resistance. Drug discovery is a complex, risky process with a high attrition rate: it takes 12-15 years from idea to drug, is very expensive, and needs a wide range of skills. Within the pharma industry, failures are not shared; within open-source drug discovery, information and failures are shared, allowing learning. The open innovation model allows for creating a collaborative platform. Sysborg 2.0 is a point of contact for ideas, data, and results, and a peer-review platform that allows for improvement. It provides a project management system and a social network to find peers – there are 13 functions and a social network-type page. There is also a need to manage micro-attribution, to allow recognising small contributions. They created the portal from a range of open-source tools – Galaxy, DoProject, Moodle, etc. – and it includes collaboration with Infosys because of the technical complexity of developing such a project. For each project, they have developed metadata that is recorded in the system, with a flat hierarchy that allows anyone to update information, and version control in case people change information that the project manager wants to revert. They also have OSDDChem – an open chemistry initiative, created because of the complexity of following compounds as they go through the process. The system also helped in recording the structures, the molecules, and different diagrams, and in presenting diagrams in the style of chemistry communication. They have seen self-organisation of groups of students and have also been able to analyse 45,000 publications.
So far, they have integrated 84 PIs with 88 projects and identified 11 compounds that can lead to drugs.

Marc Santolini & Thomas Landrain: Just One Giant Lab – learning and solving together. JOGL is about opening up the process of involvement in research and the design of projects to people outside academia. It also links itself to the SDGs. The background to it is Thomas’s experience of running an open laboratory in Paris (La Paillasse), and the wish to take collaboration out of the physical space. The next stage was to create collaboration online in epidemiological research (with support from Roche). An open science platform can bring people onto a level playing field – specialists, data scientists, patients, etc. There are many problems that are not suitable for business problem-solving, and many people don’t have such an opportunity. We need to consider the agile space of communities that don’t sustain their involvement but need to document and pass their experience forward. The challenge is that we have about 10 million active contributors to science, but 1 billion people with higher education – we need researchers who are not within the formal research system. The existing collaborative research systems (ResearchGate, …) lock in data and outputs and work by exploiting the vanity of contributors, not through collaboration. The idea of JOGL is that researchers/entrepreneurs/civil servants/activists might have their own problems that they need to solve, while on the other hand there are students, patients, and citizens who can contribute and build experience through participation in real projects.

Marc – there is growth in science: increased collaboration and publication. No one can be in control of an area, so there is a need for designed serendipity (from Michael Nielsen). They look at team success, science innovation, open-source communities, and collaborative learning. iGEM is a synthetic biology competition of over 300 teams, where everything is on a wiki lab book network. The analysis looked at features that can help in understanding the competition – for example, team size, experience, and mentorship – but also used network analysis. There is a collaboration core that can predict success.

Bastian Greshake Tzovaras: OpenHumans – sharing very personal data for use in research in a way that protects our privacy. The idea is that there is one system that stores the data safely and securely: GPS location, DNA data, Google search history, and tweets. The first thing it allows is analysing the data with notebooks of the research that comes out of it (e.g. predicting eye colour from 23andMe data) – it allows you to run a shared open notebook on your data without sharing the data itself. The notebooks share only the analysis, not the data. There are also projects that use the data. An example is Dana Lewis’s insulin pumps, which use information about continuous glucose monitoring (Nightscout), with patients controlling their data. Another example is nobism, which works on cluster headaches – they share data with a code academy that knows how to analyse it; some of the reports by the students are shared, along with patient-led experiments. There are big issues of governance and trust. The OpenHumans foundation is a not-for-profit. The community participates in the approval of a project that is proposed on the system, and can discuss it for a long time. The community is also asked to participate in the nomination of the board, which is open to anyone in the community, and there is a mechanism to deal with the community seats.

Valerie Lerouyer: BioLab, a future collaborative and experimental space at the Cité des Sciences et de l’Industrie. BioLab should allow linking people to biology and the environment, aiming for a partnership with INRA towards research on soil and fermentation. The aim is to help with understanding the ecological transition, and to reach different audiences – children and adults. They want people to discover the microscopic world, conduct collaborative research about the ecological transition, and set up participatory projects. The aim is to create a dynamic process, and that is an issue of communication – the central aspects of the plan are an entry point to the right to dialogue, to share results, to research, to find out about things, and to create. They are going to explore living organisms in the park and the canal in different ecosystems, and ask the public to sample from their gardens and their areas. The focus is on microbiology and biotechnologies, and on developing partnerships with secondary schools. They are also thinking about DIY – e.g. fermentation, which is impossible to do in a lab (e.g. kefir), to collect observations from different places. The exhibit will open in April.

Anirudh Krishnakumar: Dynamic Digital Drivers for Open Collaborative Science – MindLogger is a data collection platform aimed at building apps for citizen science without any programming. It allows different kinds of data collection: surveys in which people can create different response options, different types of information (audio, video), and sensor features. It provides different elements – markdown text, slider, date, time range, table counter – allowing people to give information in different ways, e.g. a set of fields that allow data entry. There is an option of active geolocation, but it is actively selected by participants. They want to provide support with a wider library of citizen science projects – so if someone has created a survey, someone else can pick it up. There is a thought about integrating MindLogger with the ETH Zurich/Citizen Cyberlab SDG toolkit. They would like to see different use cases and experimentation with the tool.

Joel Chevrier: Look at your hand when you write. Joel recently started researching the neuromotor aspects of handwriting in children. He is using sensors – the interest is in how you can measure movement with accelerometers, with some examples of assessing and understanding movements. You can teach the system different gestures, and the system learns the link between colour and letter. The system is linked to the Centre Pompidou. The fact that we can work with devices can also help in providing more accuracy in assessing the way people move (e.g. for patients with motor issues). Research questions include the degree to which we can use movement, and the monitoring of grasping actions, to understand the handwriting of children.

Some general insights: the use of open-source libraries is valuable, but there is a need to pay special attention to software packages that are used outside your discipline, and to consider where the knowledge of how to use them will come from. There is a clear need for a community manager, and for someone who will continue to encourage activities with the system. OpenHumans is a good example that is based on minimal development. Using APIs is a good way to interact, rather than relying on tight integration and complex connections.

The workshop was supported by my short term fellowship at the CRI in Paris.


ActEarly – outline paper published

ActEarly is a new project, which started in September. The project is a five-year “city collaboratory” in Bradford and Tower Hamlets, researching the early promotion of good health and wellbeing. It is part of a set of projects funded under the UK Prevention Research Partnership (UKPRP) scheme, which brings together an alliance of funders, including multiple research councils, charities, and government bodies. The consortium involved in ActEarly is quite extensive, and the framework of the project and an explanation of what it aims to achieve is now published in an open-access paper.



ActEarly includes an explicit participatory element, and citizen science is an integral part of the research. You can find out more in the paper.

The paper abstract is: Economic, physical, built, cultural, learning, social and service environments have a profound effect on lifelong health. However, policy thinking about health research is dominated by the ‘biomedical model’ which promotes medicalisation and an emphasis on diagnosis and treatment at the expense of prevention. Prevention research has tended to focus on ‘downstream’ interventions that rely on individual behaviour change, frequently increasing inequalities. Preventive strategies often focus on isolated leverage points and are scattered across different settings. This paper describes a major new prevention research programme that aims to create City Collaboratory testbeds to support the identification, implementation and evaluation of upstream interventions within a whole system city setting. Prevention of physical and mental ill-health will come from the cumulative effect of multiple system-wide interventions. Rather than scatter these interventions across many settings and evaluate single outcomes, we will test their collective impact across multiple outcomes with the goal of achieving a tipping point for better health. Our focus is on early life (ActEarly) in recognition of childhood and adolescence being such critical periods for influencing lifelong health and wellbeing.

You can access the paper here.

You can also see the role of citizen science and community engagement in the logic model of the project:


Published: Citizen science and the United Nations Sustainable Development Goals

Back in October 2018, I reported on the workshop at the International Institute for Applied Systems Analysis (IIASA) about non-traditional data approaches and the Sustainable Development Goals. The outcome of this workshop has now been published in Nature Sustainability. The writing process was coordinated by Dr Linda See of IIASA, through a distributed process in which multiple teams of workshop participants worked on different parts (for example, I helped coordinate the section “Citizen science for new goals and targets”). The final outcome provides a comprehensive analysis of citizen science as a data source for monitoring and implementing the Sustainable Development Goals (SDGs).


You can read the full paper here, and share it, as it is open access (in contrast to other Nature Sustainability papers; funding for this was provided by Steffen Fritz’s group at IIASA).

The abstract of the paper is: Traditional data sources are not sufficient for measuring the United Nations Sustainable Development Goals. New and non-traditional sources of data are required. Citizen science is an emerging example of a non-traditional data source that is already making a contribution. In this Perspective, we present a roadmap that outlines how citizen science can be integrated into the formal Sustainable Development Goals reporting mechanisms. Success will require leadership from the United Nations, innovation from National Statistical Offices and focus from the citizen-science community to identify the indicators for which citizen science can make a real contribution.

The UNEP team that participated in the writing provided a blog post that explains why the paper is a valuable contribution to the discussion on the SDGs (they also integrated a great music video within it!).

EU Research & Innovation Days 2019 – reflections


The previous post is more of a summary of the conference; this one is aimed at capturing my reflections from these three days of a (fairly high-level) science event. This wasn’t a typical event, and it somewhat felt like the farewell action of Carlos Moedas (the outgoing commissioner), aimed at getting the research community that is linked to EU funding on board with the vision he set for Horizon Europe.

But as I pointed out, while it was great to see that, in terms of participation, the gender balance in science is getting better (at a guess, I would estimate 30% or more female participants), this conference was attended mostly by middle-aged, affluent, white participants. As one of the speakers in the sessions about science policy pointed out, we need to have conversations with people who don’t look like us but who will be impacted by the research and the investment. These people (and their representation in some form of civil society, youth organisations, etc.) were missing from the rooms.


A second reflection is that the conference provided a perfect parable for the problem of not involving research participants in the process, and of using (a form of) algorithmic governance. On the second day, around lunchtime, access to the first floor, where a lot of the sessions were held, was blocked by the staff on site. Announcements were made asking people who had finished upstairs to leave, to allow others to go up – however, the rooms were actually not full, nor was the outside area.

So what was going on? This is what it looked like: the site is post-industrial, and there are restrictions on how many people can be in each area for safety purposes, which the conference had to monitor. The way they decided to do it was by having stewards scan the QR codes on participants’ badges. However, the scanning was done without any explanation of why it was being done and how it was linked to safety, so it felt like you were being scanned when you entered a room, when you left it, when you went upstairs, and when you went downstairs. Now, (some) scientists are very happy to devise methods to monitor and analyse the movement of big crowds, but don’t feel that this applies to them – and it did feel intrusive. So my guess is that by around lunchtime there were plenty of ghost participants on the first floor – counted in, but not out – and no mechanism was in place to adjust the calculation to the reality of rooms that were not full and outside areas that were empty. So no matter what reality said, the counting indicated full capacity, and therefore people were stopped, causing frustration. You can imagine that if the purpose of the data collection had been made clear to participants as they entered, the situation might have been averted (and of course many other solutions are possible technically). It was strange to see how a mini example of bad science impacted the conference itself!
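The “counted in, but not out” failure mode is simple enough to sketch in a few lines of code. This is a purely illustrative toy, not the conference’s actual system – the event names and capacity figure are invented for the example:

```python
CAPACITY = 100  # hypothetical safety limit for the floor

def counted_occupancy(events, exits_scanned=False):
    """Occupancy as seen by the counting system.

    events: a sequence of "in" / "out" scan events.
    If exits are never scanned, the count only ever goes up,
    producing 'ghost participants'.
    """
    count = 0
    for e in events:
        if e == "in":
            count += 1
        elif exits_scanned:  # an "out" only registers if it is scanned
            count -= 1
    return count

def actual_occupancy(events):
    """Ground truth: every entry and exit changes real occupancy."""
    return sum(1 if e == "in" else -1 for e in events)

# Morning: 90 people enter the floor; 60 of them leave for lunch.
events = ["in"] * 90 + ["out"] * 60

print(actual_occupancy(events))   # 30 – the floor is mostly empty
print(counted_occupancy(events))  # 90 – the system thinks it is near capacity
```

With `exits_scanned=True` both numbers agree at 30 – which is the point: the fix is not better maths but a data-collection process that participants understand and cooperate with.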


A third reflection is on the variety of ways in which citizen science is understood in policy circles, and how valuable it can be to have a clearer set of characteristics to help newcomers.

It was also interesting to hear, in the session about policy advice in a complex world, one of the participants say: “I’m a physicist, and I think that science can only be made by experts, and it is going to change with the whole community participating – how are we going to give advice? An increase of the noise?”. There are multiple understandings and interpretations, and it was great to hear Karel Luyben, in the Open Science session, seeing a role for people outside academia not only in data collection but also in analysis, and in using the results of open science, and more.


The final point is something that I am now calling “deficit model bingo”. I have written before that the most common questions after introducing citizen science are about data quality, and then motivation. But I have also realised lately that when I am talking with people about a potential new project, the deficit model comes along quite regularly. If you are not familiar with it, as Wikipedia puts it: “the model attributes public scepticism or hostility to science and technology to a lack of understanding, resulting from a lack of information. It is associated with a division between experts who have the information and non-experts who do not. The model implies that communication should focus on improving the transfer of information from experts to non-experts.” At some point, the scientists will start setting out that what they need to do is to educate the public. What is especially odd about this is that there is no notion that the public continues to become more and more educated – just look at this graph from Eurostat. Some European countries have over 50% of the population with tertiary education. How much more education do these experts think we need before people see the world the way that they see it?

So this is a thread that I put at the end, especially when there is an effort to work with policymakers but I don’t see the same effort to create material that is suitable for a much wider range of stakeholders. For example, in scientific assessments there is regularly a “summary for decision-makers”, but where is the “summary for the educated public” or the “summary for civil society organisations”, etc.? For me, part of the issue that people face with the acceptance of science is not that people are not educated – exactly the opposite. Filter bubbles and other issues are important, and there are plenty of other mechanisms that affect people (it was great to hear talks about values, ideologies, etc. as part of how people use scientific information), but it is interesting how fast scientists – even those who have surely heard about the issue with the deficit model – default to it.