The previous post is more of a summary of the conference, but this one is aimed at capturing my reflections from these three days of a (fairly high-level) science event. This wasn’t a typical event, and it somewhat felt like Carlos Moedas’ (the outgoing commissioner) farewell action as a commissioner, to get the research community that is linked to EU funding on board with the vision that he set for Horizon Europe.
But as I pointed out, while it was great to see that in terms of participation the gender balance in science is getting better (at a guess, I would estimate 30% or more female participants), this conference was mostly made up of middle-aged, affluent, white participants. As one of the speakers in the sessions about science policy pointed out, we need to have conversations with people who don’t look like us but will be impacted by the research and the investment. These people (and their representation in some form of civil society, youth organisations etc.) were missing in the rooms.
A second reflection is that the conference provided a perfect parable for the problem of not involving research participants in the process, and of using (a form of) algorithmic governance. On the second day, around lunchtime, staff on site blocked access to the first floor, where many of the sessions were held. Announcements asked people who had finished upstairs to leave the area to allow others to go up; however, the rooms were actually not full, nor was the outside area.
So what was going on? This is what it looked like: the site is post-industrial and there are restrictions on how many people can be in each area for safety purposes, so the conference had to monitor occupancy. The way they decided to do it was by having stewards scan the QR codes on participants’ badges. However, the scanning was done without an explanation of why it was needed and how it was linked to safety, so it felt like you were being scanned when you entered a room, when you left it, when you went upstairs, and when you went downstairs. Now, (some) scientists are very happy to devise methods to monitor and analyse the movement of big crowds, but don’t feel that this applies to them, and it did feel intrusive. So my guess is that by around lunchtime there were plenty of ghost participants on the first floor – counted in, but not out – and no mechanism was in place to adjust the calculation to the reality of not-full rooms and empty outside areas. So no matter what reality said, the counting indicated that capacity had been reached, and therefore people were stopped, causing frustration. You can imagine that if the purpose of the data collection had been made clear to participants as they entered, the situation might have been averted (and of course many other solutions are possible technically). It was strange to see how a mini example of bad science was impacting the conference itself!
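The ghost-participant effect described above can be sketched in a few lines of code. This is a hypothetical illustration (the conference’s actual system is unknown to me): a naive counter only sees scanned events, so anyone who leaves without being scanned out remains “present” forever, and the reported occupancy drifts away from reality.

```python
def occupancy(events):
    """Naive occupancy counter that only sees scanned events.

    events: iterable of "in", "out", or "out_unscanned".
    An "out_unscanned" exit happens in reality, but the scan never
    reaches the counter, leaving a ghost participant behind.
    """
    count = 0
    for event in events:
        if event == "in":
            count += 1
        elif event == "out":
            count -= 1
        # "out_unscanned": the person left, but the system never learns it
    return count

# 100 people enter the floor; all 100 leave, but 40 of them
# walk out without having their badge scanned.
events = ["in"] * 100 + ["out"] * 60 + ["out_unscanned"] * 40
print(occupancy(events))  # prints 40 - ghosts still "occupy" the floor
```

With enough accumulated ghosts, the counter crosses the safety threshold even though the floor is empty, and the stewards start turning people away. A counter that was periodically reconciled against a head count, or that simply explained itself to participants so they cooperated with exit scanning, would not have this failure mode.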
A third reflection is on the variety of ways citizen science is understood in policy circles, and how valuable it could be to have a clearer set of characteristics to help newcomers – e.g. this
It was also interesting to hear, in the session about policy advice in a complex world, one of the participants say “I’m a physicist, and I think that science can only be made by experts, and it is going to change with the whole community participating – how are we going to give advice? An increase in the noise?”. There are multiple understandings and interpretations, and it was great to hear Karel Luyben in the Open Science session describe a role for people outside academia not only in data collection but also in analysis, in using the results of open science, and more.
The final point is something that I am now calling the “deficit model bingo”. I’ve written before that the most common questions after introducing citizen science are about data quality, and then motivation. But I have also realised lately that when I’m talking with people about a potential new project, the deficit model comes along quite regularly. If you’re not familiar with it, as Wikipedia puts it: “the model attributes public scepticism or hostility to science and technology to a lack of understanding, resulting from a lack of information. It is associated with a division between experts who have the information and non-experts who do not. The model implies that communication should focus on improving the transfer of information from experts to non-experts.” At some point, the scientists will start setting out that what they need to do is to educate the public. What is especially odd about this is that there is no notion that the public continues to become more and more educated – just look at this graph from Eurostat. Some European countries have over 50% of the population with tertiary education. How much more education do such experts think we need for people to see the world the way that they see it?
So this is a thought that I will put at the end: there is an effort to work with policymakers, but I don’t see the same effort to create material that is suitable for a much wider range of stakeholders. For example, in scientific assessments there is regularly a “summary for decision-makers”, but where is the “summary for the educated public” or the “summary for civil society organisations” etc.? For me, part of the issue that people have with the acceptance of science is not that people are uneducated – exactly the opposite. Filter bubbles and other issues are important, and there are plenty of other mechanisms that impact people (it was great to hear talks about values, ideologies etc. as part of how people use scientific information), but it is interesting how fast scientists – even those who surely have heard about the issues with the deficit model – default to it.