Participatory [Citizen] Science

‘Citizen Science as Participatory Science’ is one of the most popular posts that I have published here. The post is the core section of a chapter that was published in 2013 (the post itself was written in 2011). For the first European Citizen Science Association conference I was asked to give a keynote on the second day of the conference, which I have titled ‘Participatory Citizen Science’, to match the overall theme of the conference, which is ‘Citizen Science – Innovation in Open Science, Society and Policy’. The abstract of the talk:

In the inaugural ECSA conference, we are exploring the intersection of innovation, open science, policy and society and the ways in which we can establish new collaborations for a common good. The terms participation and inclusion are especially important if we want to fulfil the high expectations of citizen science as a harbinger of open science. In the talk, the conditions for participatory citizen science will be explored – the potential audience of different areas and activities of citizen science, and the theoretical frameworks, methodologies and techniques that can be used to make citizen science more participatory. The challenges of participation include designing projects and activities that fit with participants’ daily life and practices, their interests, skills, the resources that they have, self-beliefs and more. Using lessons from the EU FP7 projects EveryAware and Citizen Cyberlab, and the UK EPSRC projects Extreme Citizen Science and Street Mobility, the boundaries of participatory citizen science will be charted.

As always, there is a gap between the abstract and the talk itself – as I started exploring the issues of participatory citizen science, some questions about the nature of participation came up, and I was trying to discuss them. Here are the slides:

After opening with an acknowledgement of the people who work with us (and funded us), the talk turns to the core issue – the term participation.

Sherry Arnstein with Harry S Truman (image by George Arnstein)

Type ‘participation’ into Google Scholar, and the top paper, with over 11,000 citations, is Sherry Rubin Arnstein’s ‘A ladder of citizen participation’. In her ladder, Arnstein offered 8 levels of participation – from manipulation to citizen control. Her focus was on political power and the ability of the people who are impacted by decisions to participate in and influence them. Knowingly simplified, the ladder focuses on political power relationships, and it might be this simple presentation and structure that explains its lasting influence.

Since its emergence, other researchers have developed their own versions of participation ladders – for example Wiedemann and Femers (1993), here from a talk I gave in 2011:

These ladders come with baggage: a strong value judgement that the top is good, and the bottom is minimal (in the version above) or worse (in Arnstein’s version). The WeGovNow! project is part of a range of ongoing activities that use digital tools to increase participation and move between rungs in these concepts of participation, with an inherent assumption about the importance of high engagement.

Levels of Citizen Science 2011

At the beginning of 2011, I found myself creating a ladder of my own. Influenced by the ladders that I learned from, the ‘levels of citizen science’ make an implicit value judgement in which ‘extreme’ at the top is better than crowdsourcing. However, the more I have learned about citizen science, and had time to reflect on what participation means and who should participate and how, the more I feel that this strong value judgement is wrong, and that a simple ladder can’t capture the nature of participation in citizen science.

There are two characteristics that demonstrate the complexity of participation particularly well: the levels of education of participants in citizen science activities, and the way participation inequality (AKA the 90-9-1 rule) shapes the time and effort that participants invest in citizen science activities.

We can look at them in turn, by examining citizen science projects against the general population. We start with levels of education – across the EU28 countries, we are now approaching 27% of the population with tertiary education (university). There is wide variability, with the UK at 37.6%, France at 30.4%, Germany at 23.8%, Italy at 15.5%, and Romania at 15%. This is part of a global trend – with about 200 million students in tertiary education across the world, of which about 2.5 million (about 1.25%) are studying to a doctoral level.

However, if we look at citizen science projects, we see a different picture: in OpenStreetMap, 78% of participants hold tertiary education, with 8% holding doctoral-level degrees. In Galaxy Zoo, 65% of participants hold tertiary education and 10% doctoral-level degrees. In Transcribe Bentham (TB), 97% of participants have tertiary education and 24% hold doctoral-level degrees. What we see here is much higher participation by people with higher degrees – well above their expected rate in the general population.
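To make the over-representation concrete, a few lines of Python can compare the project figures above against rough population baselines (the ~27% EU tertiary rate and the ~1.25% doctoral share of students mentioned earlier – treated here as crude baselines for illustration, not exact comparators):

```python
# Over-representation of degree holders in citizen science projects,
# relative to rough population baselines (figures from the text above).
baseline = {"tertiary": 0.27, "doctoral": 0.0125}

projects = {
    "OpenStreetMap":      {"tertiary": 0.78, "doctoral": 0.08},
    "Galaxy Zoo":         {"tertiary": 0.65, "doctoral": 0.10},
    "Transcribe Bentham": {"tertiary": 0.97, "doctoral": 0.24},
}

for name, shares in projects.items():
    ratios = {k: shares[k] / baseline[k] for k in baseline}
    print(f"{name}: tertiary x{ratios['tertiary']:.1f}, "
          f"doctoral x{ratios['doctoral']:.1f}")
```

Even on this crude comparison, doctoral-level participants in Transcribe Bentham are over-represented by more than an order of magnitude.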

The second aspect, participation inequality, has been observed in OpenStreetMap volunteer mapping activities, in iSpot – both in the community of those who capture information and those who help classify the species – and even in the offline conservation volunteering activities of the Trust for Conservation Volunteers. In short, it is a very persistent aspect of citizen science activities.
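A minimal sketch can illustrate what participation inequality looks like: model per-user contributions with a Zipf-like distribution (the i-th most active user contributes proportionally to 1/i – an assumption for illustration, not fitted to any of the projects above) and look at how much of the total the most active users account for:

```python
# Sketch of participation inequality (the '90-9-1' pattern):
# a Zipf-like long tail of per-user contribution counts.
N = 1000  # hypothetical number of volunteers
contributions = [1.0 / rank for rank in range(1, N + 1)]
total = sum(contributions)

def share_of_top(fraction):
    """Share of all contributions made by the top `fraction` of users."""
    top = int(N * fraction)
    return sum(contributions[:top]) / total

print(f"top  1% of users: {share_of_top(0.01):.0%} of contributions")
print(f"top 10% of users: {share_of_top(0.10):.0%} of contributions")
```

Under this assumption, a tiny core of users accounts for a large share of all activity, with a long tail of occasional contributors – the pattern repeatedly observed in the projects listed above.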

For the sake of the analysis, let’s look at citizen science projects that require high skills from participants and significant engagement (like TB), those that require high skills but not necessarily demanding participation (as many Zooniverse projects do), then the low-skills/high-engagement projects (e.g. our work with non-literate groups), and finally low-skills/low-engagement projects. There are clear benefits for participation in each and every block of this classification:

high skills/high engagement: These provide a way to include highly valuable effort, with the participants acting as virtual research assistants. There is a significant time investment by them, and opportunities for deeper engagement (writing papers, analysis).

high skills/low engagement: The high skills might contribute to data quality and allow the use of disciplinary jargon, with opportunities for lighter or deeper engagement to match time/effort constraints.

low skills/high engagement: Such activities provide an opportunity for education, awareness raising, increased science capital, and other skills. They require support and facilitation but show high potential for inclusiveness.

low skills/low engagement: Here we have an opportunity for active engagement with science with limited effort; there is also a potential for family/cross-generational activities, and outreach to marginalised groups (as the Open Air Laboratories programme has done).

In short – in each type of project, there are important societal benefits for participation, and it’s not only the ‘full inclusion at the deep level’ that we should focus on.

Interestingly, across these projects and levels, people are motivated by science as a joint human activity of creating knowledge that is shared.

So what can we say about participation in citizen science? Well, it’s complex. There are cases where the effort is exploited, and we should guard against that, but outside these cases the picture is much more complex.

The talk moves on to suggest a model of allowing people to adjust their participation in citizen science through an ‘escalator’ that we are aiming to conceptually develop in DITOs.

Finally, with this understanding of participation, we can understand better the link to open science, open access and the need of participants to potentially analyse the information.

Securing funding and balancing efforts: a tale of 21 research applications

EU H2020 Participants Portal

The last 3 months brought a gradual sigh of relief for the Extreme Citizen Science group (ExCiteS), Mapping for Change (MfC), and for me. As the UCL Engineering website announced, the ExCiteS group, which I co-direct, secured funding through 3 research grants from the European Union’s Horizon 2020 programme (H2020), with enough funding to continue our work for the next 3 years, which is excellent. As usual in publicity, UCL celebrates successes, not the work that led to them. However, the effort of securing funding has implications, and it is worth reflecting on them – despite the fact that we are in the success camp. While the criticism of the application process for European projects on the ROARS website is a bit exaggerated, it does give a good context for this post. In what follows I cover the context for the need to apply for funding, look at the efforts, successes and failures from mid 2014 to early 2016 (mostly failures), and then look at the implications.

This is not a piece to boast about success or moan about failure, but I find writing a useful way to reflect, and I wanted to take stock of the research application process. I hope that it will help in communicating what the process of securing funding looks like for an interdisciplinary, research-intensive group.

Background & context 

The background is that the ExCiteS group started at the end of 2011, with a large group of PhD students – as is common in early-stage research groups. With the support of a UK Engineering and Physical Sciences Research Council (EPSRC) award, which is about to end soon, it was possible to start the group. With additional funding from European Union (EU) and EPSRC projects, including EveryAware (2011-2014), Citizen Cyberlab (2012-2015), Challenging Risk (2013-2018), and Cap4Access (2014-2016), it was possible to develop the activities of ExCiteS & MfC. This is evident in the software packages that are emerging from our work – Sapelli, GeoKey, and a new version of Community Maps – the methodologies for using these tools within participatory processes, the academic and non-academic outputs, and the fact that people know about our work.

However, it was clear since 2011 that 2015 would be a crunch point, when we would need funding to allow members of the group to move from PhD students to postdoctoral researchers (postdocs). The financial implication of funding a postdoc is about three times the funding for a PhD student. In addition, while in earlier years members of the group (regardless of their career stage) participated in writing research proposals – and helped win them (e.g. Citizen Cyberlab) – when people are writing up their PhD theses it is inappropriate to expect them to invest a significant amount of time in research applications. Finally, all our funding comes through research projects – we don’t have other sources of income.

Research Applications – effort, successes, failures 

UK Research Councils system (Je-S)

So it was very clear that 2015 was going to be full of research applications. To give an idea of how many, and of the work that was involved, I’m listing them here – more or less in order of effort. I’m providing more details on the successful applications but only partial details on the failed ones – mostly because I didn’t check with the coordinators or the partners to see if they would allow me to do so.

We started in mid 2014, when we began working on the first version of what is now DITOs. Coordinating an EU H2020 project proposal with 11 partners meant that between May and September 2014 we invested an estimated 6 or 7 person months within the group in preparing it. We submitted it in early October, only to be disappointed in early March 2015 when we heard that although we scored high (13/15), we wouldn’t be funded – only 1 project out of the 19 that applied was funded. We then resurrected the proposal in July 2015, dedicated a further 5 person months, resubmitted it, and won funding in competition with 56 other proposals – of which only 2 were funded.

The next major investment was in a first-stage proposal to the Citizen Observatories call of H2020. ExCiteS coordinated one proposal, and MfC participated in another. The process required an outline submission and then a full proposal. We worked on the proposal from December 2014 to April 2015, and it wasn’t a huge surprise to discover that 47 proposals were submitted to the first stage, of which 11 progressed to the second. The one coordinated by ExCiteS, with an investment of about 5 person months, scored 7/10, so it didn’t progress to the second stage. MfC also invested 2.5 person months in another proposal, as a partner. This proposal passed the first stage, but failed in the second.

Participating as a major partner in a proposal is also a significant effort, especially in H2020 projects in which there are multiple partners. The collaborative effort of MfC and ExCiteS in the proposal that emerged as WeGovNow! required about 4 person months. The proposal was submitted twice – first in July 2015 to a call for “Collective Awareness Platforms for Sustainability and Social Innovation”, which received 193 proposals of which 22 were funded, and then again in December 2015 to a call for “Meeting new societal needs by using emerging technologies in the public sector”, to which only 2 proposals were submitted (you can be lucky sometimes!).

The proposal for the European Research Council (ERC) was developed between May and June 2015, with about 3 person months of effort – and was luckily successful. It competed with 1,953 applications in total (423 in the social sciences), of which 277 (59 in the social sciences) were successful – about a 14% success rate.

Another fellowship proposal, in response to an EPSRC call, passed the first round but failed at the interview stage (where 2 out of 5 candidates were selected). This one was developed from May 2015 and failed in February 2016, after an effort of about 2.5 person months.

We also developed an Economic and Social Research Council (ESRC) responsive-mode proposal, which means that we applied to the general funds and not to a specific call. We collaborated with colleagues at the Institute of Education from January 2015 to July 2015, with an effort of about 2.5 person months, but we learned that it was unsuccessful in March 2016.

Another 2 person months were dedicated to an ESRC call for methodological research, for which 65 applications were submitted, of which 6 were funded – our proposal ranked 22nd out of about 65. In parallel, I had a small part in another proposal for the same call, which was ranked 56th.

We invested a month in an unsuccessful application to the Wellcome Trust Science Learning+ call in July 2014.

Less time was spent on proposals where we had a smaller role – a failed H2020 ICT proposal in April 2015, or another H2020 proposal about Integrating Society in Science and Innovation in September 2015. This also includes a successful proposal to the Climate and Development Knowledge Network (CDKN). Because of all the other proposals, information such as the description of our activities, CVs and other bits was ready and adjusted quite easily.

ExCiteS and MfC also participated in an EU LIFE proposal – this was funding for applied activities, with a very low funding level of only 50%, so there was a need to think carefully about which add-on activities could be used for it. However, as the proposal failed, it wasn’t an issue.

Along the way, there were also small parts in an application to the Wellcome Trust in early 2015 (failed), in an EPSRC programme grant (a large grant with many partners) that was organised at UCL and to which we dedicated time from June 2014 to February 2015 (ditto), an outline for the Leverhulme Trust (ditto), an ERC research proposal (ditto), and finally a COST Action application for a research network on Citizen Science (which was successful!).

So let’s summarise all these proposals – success, failure, effort – in one table. Lines where the funder is marked in bold indicate that we coordinated the proposal:

 #   Funder       Effort (months)   Outcome
 1   H2020        7                 Failure
 2   H2020        5                 Success
 3   H2020        5                 Failure
 4   H2020        2.5               Failure
 5   H2020        4                 Failure
 6   H2020        1                 Success
 7   ERC          3                 Success
 8   EPSRC        2.5               Failure
 9   ESRC         2.5               Failure
10   ESRC         2                 Failure
11   ESRC         0.5               Failure
12   Wellcome     1                 Failure
13   H2020        0.25              Failure
14   H2020        0.25              Failure
15   CDKN         0.25              Success
16   EU LIFE      0.5               Failure
17   Wellcome     0.5               Failure
18   EPSRC        0.5               Failure
19   Leverhulme   0.5               Failure
20   ERC          0.25              Failure
21   COST         0.5               Success

So what?

We applied to lots of funders and mechanisms – fellowships, calls for proposals, and open calls for research ideas. We applied to UK funders and to the EU. As we are working in an interdisciplinary area, we applied to social science as well as engineering and Information and Communication Technologies (ICT) calls, and in between these areas. In a third of the cases we led the proposal, but in the rest we joined proposals that were set up by others. So the first point to notice is that we didn’t fixate on one source, mechanism or role.

As the table shows, we’re not doing badly. Out of the 7 proposals that we led, 2 succeeded (about 30%), and among the 14 in which we participated, 3 succeeded (about 20%). The overall success rate is about a quarter. Over about 18 months, a group of about 10 people invested circa 40 person months in securing future funding for the next 3 years (about 20% of their time), which doesn’t sound excessive.
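The headline numbers can be checked directly from the table above (the outcomes and per-proposal efforts below are copied from it, in row order):

```python
# Outcomes (S = success, F = failure) and person-month efforts,
# copied row by row from the table above.
outcomes = ["F", "S", "F", "F", "F", "S", "S", "F", "F", "F", "F",
            "F", "F", "F", "S", "F", "F", "F", "F", "F", "S"]
efforts = [7, 5, 5, 2.5, 4, 1, 3, 2.5, 2.5, 2, 0.5,
           1, 0.25, 0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.25, 0.5]

successes = outcomes.count("S")
print(f"{successes}/{len(outcomes)} proposals funded "
      f"({successes / len(outcomes):.0%}), "
      f"total effort ~{sum(efforts):.1f} person months")
# 5 of 21 funded, roughly a quarter, for ~40 person months of writing.
```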

However, the load was not spread equally, so some people spent a very significant amount of their time on proposals. I was involved in almost all of these 21 proposals during this period, much more in those that we led, and in some of those where we participated as a partner I was the only person in the group who worked on the proposal. It was increasingly challenging to keep submitting and working on proposals with so much uncertainty and the very long gap between submission and results (look above, and you’ll see that it can be up to 9 months). Because of the uncertainty about success, and an assumption that only 20% will be successful at best (that’s 4 wasted proposals for every successful one), I felt that I needed to keep going, but there were moments when I thought that it was a doomed effort.

There is also the issue of morale – as should be obvious from the fact that we only announced the successes recently. As the failures mounted up during the second half of 2015, it was harder to be cheerful. Because of the long gap between proposal submission and result that I mentioned, the future of the group is unknown for a significant period, and that influences people’s decisions about staying or leaving, and how to use the funds that we do have.

Implications

Leaving aside that by early 2016 it became hard to find the energy to be involved in more proposal writing, there is an issue about how interdisciplinary research groups are funded. While we can apply to more funding opportunities, the responses from the failures indicated that it’s tough to convince disciplinary evaluators that the work being done is important. This meant that we knew all along that we needed to apply more. Maybe it was a coincidence, but the EU funding evaluations seemed more open to these ideas than the UK funders.

Second, such a high number of applications takes time from other research activities (e.g. check my publications in 2014-2015). Applications, with all the effort associated with them, are not seen as an academic output, so all the effort of writing the text, proofing it and revising it is frequently wasted when a proposal fails.

Third, all these proposals burn social capital, ‘business capital’, and cash reserves – e.g. hiring a consultant to help with an H2020 proposal or covering the costs of meetings, asking for letters of support from business partners, raising hopes and making links with partners only to write at the end that we won’t be working together beyond the proposal. There are also negotiations with the Head of Department on the level of support from the university, and requests for help from research facilitators, financial administrators and other people at the university.

Fourth, considering how much effort, experience, support – and luck – is needed to secure research funding, I’m not surprised that some people are so despondent about their chances of doing so. All the above is the result of a large team, and I would argue that the key to keeping up the stamina is team spirit and having a clear goal on why the hell you want the funding in the first place (in our case, we want to materialise Extreme Citizen Science).

Finally, looking at the number of submissions, the rankings and the general success rate of applications in the areas that we applied to (about 15% or less), I have concerns that under such conditions there is a ‘crowding out’ situation, in which groups that have better resources around them (e.g. the institutional infrastructure at UCL, our internal experience) make it harder for new entrants or smaller groups. At a higher funding rate, we could have secured the funding with fewer proposals, at which point we wouldn’t continue to apply, and thereby allow others to secure funding.

Epilogue

I have no plans for another period like the one that led to the current results. I am incredibly grateful to have had such a level of success, which is down to the institution that I’m in, the hard work, the evolving experience in preparing proposals and, always, luck. It is very possible that this post would have counted 19 failures, so we’re very grateful to all the people who evaluated our proposals positively and gave us the funding.

Back to the funding: with all the successes, in people terms, we’ve secured funding for the 10 people that I’ve mentioned for 3 years, with a further 6 PhD students joining us over that period. There are still other people in the group who will need funding soon, so we will probably put the accumulated knowledge and experience to use soon.

New PhD Opportunity: Human Computer Interaction and Spatial Data Quality for Online Civic Engagement

We have a new scholarship opening at the Extreme Citizen Science group for a PhD student who will research Human Computer Interaction and Spatial Data Quality for Online Civic Engagement. The studentship is linked to and contextualised by the European Union H2020-funded project WeGovNow!. This project will focus on the use of digital technologies for effectively supporting civic society, whereby citizens are partners, as opposed to customers, in the delivery of public services. By integrating a set of innovative technologies from European partners in Germany, Italy, and Greece to create a citizen engagement platform, the project explores the use of digital tools for citizen reporting, e-participation, and communication between citizens and local government. Building on previous research and technology development, the project will include a programme of innovation in technology and service delivery. More information is available on the UCL ExCiteS blog.

Source: New PhD Opportunity

Extreme Citizen Science in Esri ArcNews

The winter edition of Esri ArcNews (which, according to Mike Gould of Esri, is printed in as many copies as Forbes) includes an article on the activities of the Extreme Citizen Science group in supporting indigenous groups in mapping. The article highlights the Geographical Information Systems (GIS) aspects of the work, and mentions many members of the group.

You can read it here: http://www.esri.com/esri-news/arcnews/winter16articles/mapping-indigenous-territories-in-africa

Citizen Cyberlab – notes from final review (26-27 January, Geneva)

Every project ends, eventually. The Citizen Cyberlab project was funded through the seventh framework programme of the European Union (EU FP7 in short), and ran from September 2012 to November 2015. Today marks the final review of the project, with all the project’s partners presenting the work that they’ve done during the project.

The project had technical elements throughout its work, with platforms (technologies that provide a foundation for citizen science projects), tools (technologies that support projects directly by being part of what volunteers use), and pilots – projects that use the technologies from Citizen Cyberlab, as well as from other sources, to carry out citizen science projects. Beyond the platforms, tools and pilots, the project used all these elements as the background for a detailed understanding of creativity and learning in citizen cyberscience, which relies on Information and Communication Technologies (ICT). The evaluation of the pilots and technologies was therefore aimed at illuminating this question.

This post summarises some of the major points from the project. The project produced a system to develop and share research ideas (ideaweave.io), a framework for scientific games (RedWire.io) accompanied by tools to measure and observe the actions of gamers (RedMetrics.io), a system for sharing computation resources through virtual machines (the CitizenGrid platform), a framework to track user actions across systems (CCLTracker), a platform for community mapping (GeoKey), and mobile data collection tools (EpiCollect+).

Some of the systems that used these platforms and tools include Mapping for Change Community Maps, CERN Virtual Atom Smasher, and UNITAR GeoTag-X.

The RedWire platform supports the development of games and the mixing of code between projects (borrowing concepts from synthetic biology into computing!), and as the system encourages open science, even data from the different games can be mixed to create new ones. The integrated player behaviour tracking (done with RedMetrics) is significant for the use of games in research. The analytics data is open, so there is a need to take care of privacy issues. An example of the gaming platform is Hero.Coli – a game about synthetic biology.

The GeoKey platform, which was developed at UCL ExCiteS, is now integrated with Community Maps and ArcGIS Online, and can receive data through Sapelli, EpiCollect or other HTML5 apps (as the air quality app on Google Play shows). The system is progressing and includes an installation package that makes it easier to deploy. Within a year, there are about 650 users on the system, plus further anonymous contributions, and over 60 mini-sites, many of them ported from the old system. The system has already been translated into Polish and Spanish.
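To give a flavour of what "receiving data from HTML5 apps" involves, here is a hedged sketch of how a client might submit an observation to a GeoKey-style REST service as GeoJSON. The server address, project id, endpoint path and attribute keys are illustrative assumptions, not the documented GeoKey API – consult the GeoKey documentation for the real endpoints and authentication flow:

```python
# Sketch: POSTing a GeoJSON observation to a GeoKey-like REST endpoint.
# All names below (server, project id, endpoint, field keys) are hypothetical.
import json
import urllib.request

SERVER = "https://geokey.example.org"   # hypothetical GeoKey instance
PROJECT_ID = 42                         # hypothetical project

# An air-quality style observation: a diffusion tube location plus attributes.
contribution = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [-0.1340, 51.5246]},  # lon, lat
    "properties": {"category": "diffusion_tube", "no2_ug_m3": 38.5},
}

def submit(server, project_id, feature, token):
    """POST a GeoJSON feature to the (assumed) contributions endpoint."""
    url = f"{server}/api/projects/{project_id}/contributions/"
    req = urllib.request.Request(
        url,
        data=json.dumps(feature).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    return urllib.request.urlopen(req)  # returns the HTTP response
```

The point is the shape of the exchange – a GeoJSON feature with project-specific attributes, sent over authenticated HTTP – which is what allows Sapelli, EpiCollect or any HTML5 app to act as a data-collection front end for the same backend.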

CitizenGrid is a platform that improves volunteer computing and allows access to resources in a simplified manner, with the launching of virtual machines through a single link. It can use shared resources from volunteers, or cloud computing.

The IdeaWeave system is a social network that supports the development of ideas and projects, and the sharing of information about these projects. The final system supports challenges, badges and awards. It also adds project blogging and the ability to vote on proposals.

EpiCollect+ is a new implementation of EpiCollect, which was supposed to be device independent through HTML5. There were issues with many APIs, and this led to discovering limitations in different mobile platforms. There are many applications.

The Virtual Atom Smasher application at CERN was redesigned with the use of learning analytics, which showed that many people who start engaging with it don’t go through the learning elements and then find the interface confusing, so the restructuring was geared towards this early learning process. The process helps people to understand theoretical and experimental physics principles. The system is available at test4theory.cern.ch. After participants log in, they go through a questionnaire to establish what they know, then go through video and interactive elements that help them understand the terminology needed to use the interface effectively, and the rest of the process supports asking questions in forums, finding further information through links, and more. Some of the side projects that were developed from Virtual Atom Smasher include the TooTR framework, which supports creating web-based tutorials that include videos and interactive parts. During the project, they attracted 790 registered participants, of whom 43 spent more than 12 hours with the game. Now the game is gaining attention from more scientists, who are seeing that it is worthwhile to engage with citizen science. The project is fusing volunteer computing and volunteer thinking.

GeoTag-X provides a demonstrator for volunteer thinking, and was developed by UNITAR. It allows the capturing of relevant imagery and pictures from disaster or conflict situations, and it supports UNITAR humanitarian operations. The team wanted to assess whether the system is useful. They have 549 registered volunteers, with 362 completing at least one task. GeoTag-X engaged with the humanitarian geo community – for example with GISCorps, UN Volunteers Online, and Humanity Road.

The Synthetic Biology pilot included the development of a MOOC that explains the principles of the area, the game Hero.coli, and a new spectrometer that will be produced at very large scale in India.

Our own Extreme Citizen Science pilots focused on projects that use Cyberlab technology – in particular air quality monitoring, in which we used GeoKey and EpiCollect to record the locations of diffusion tubes and the street context in which they were installed. In addition, we included the use of Public Lab technology for studying the environment, and playshops to explore the exposure to science.

The research into learning and creativity showed that there is plenty of learning of the ‘on topic’ material and the mechanics of the citizen science activity, with a small minority showing deep engagement and active learning. There is a variety of learning: personal development – from self-confidence to identity and cultural change; generic knowledge and skills; and finally project-specific aspects. The project provides a whole set of methods for exploring citizen science learning: checklists that can be used to help design for it, surveys, interviews, analysis of blogs, user analytics, and lab studies. Some of the interesting findings include: in GeoTag-X, even a complex interface was learned quite quickly, and connecting emotionally to the humanitarian issue and to participation can predict learning. The Virtual Atom Smasher demonstrated that participants learned about the work of scientists and science (e.g. the extensive use of statistics). In SynBio4All, there was plenty of learning of organisational skills, lab work and scientific communication, and deeper contact with science – all through the need to be involved in a more significant way. The ExCiteS pilots show involvement and emotional learning, and evidence for community ‘hands-on’ situated learning with high engagement of participants. There are examples of personal development, scientific literacy, community organisation, hosting workshops and other skills. One of the major achievements of this study is a general survey, which had 925 complete responses and 2,500 partial ones, from volunteers across citizen science (80 projects). Clusters show that 25% learn about technology and science skills, 21% learn about the topic and scientific skills, about 20% learn about science skills but also some collaboration and communication, and 13% show pure on-topic learning.
In citizen science, a high percentage learn from project documentation; next, about 20% learn through the project itself and some from documentation; about 17% learn from the project and external documentation; and a further group learns through discussion. Most feel that they learn (86%). Learning is not an initial motivation, but it becomes an important factor, as does learning about a new area of science. Highly engaged volunteers take on specific and varied roles – translators, community managers, event organisers, etc.

On the creativity side, interviews provided the richest source of information on creativity and how it is integrated into citizen science. Interviews with 96 volunteers provided one of the biggest qualitative studies in citizen science. Motivations include curiosity, interest in science and a desire to contribute to research. Volunteers sustained participation due to continued interest, ability and time. The reasons for different audience compositions are task time, geography and subject matter. A lab study showed that citizen cyberscience results are related to immersion in the game. There is also evidence that people are multi-tasking – they have plenty of distractions from engagement in any given online project. The key findings about creativity include examples in the analysis of images and geotagging in GeoTag-X. In the Virtual Atom Smasher, adjusting parameters was seen as creative, while in SynBio4All the creation of games, or the creation of the MOOC, were examples of creativity. In ExCiteS there are photos, drawings, sculptures and blog posts, and with air quality we have seen examples of newsletters, t-shirts, or the creation of maps. There are routes through motivations, learning and creativity, and we might need to look at models for the people who lead projects. To support creativity, face-to-face collaboration is important, as is allowing entry-level activities for volunteers and providing multiple methods for volunteers to give feedback.

In terms of engagement, we carried out ThinkCamp events, linking to existing online communities and working through engagement and participation. Interestingly, analysis of Twitter showed a following from fellow researchers and practitioners in citizen science.

Citizen Cyberlab will now continue as an activity of the University of Geneva – so watch this space!


New publication: Citizen Science and the Nexus (water, energy, food, population)

Under the leadership of Roger Fradera of the Centre for Environmental Policy at Imperial College London, I was involved as a co-author on a ‘thinkpiece’ about citizen science and the nexus. If you haven’t come across the term, the ‘nexus’ refers to the linkage of food, energy, water and the environment, which together pose a major challenge for the future.

The paper is now published:

Fradera, R., Slawson, D., Gosling, L., Geoghegan, H., Lakeman-Fraser, P., Makuch, K., Makuch, Z., Madani, K., Martin, K., Slade, R., Moffat, A. and Haklay, M. Exploring the nexus through citizen science, Nexus Network Think Piece Series, Paper 010, November 2015

The paper explores the background of citizen science, and then suggests a few recommendations in the context of the nexus, including:

  • Inclusivity: a co-created citizen science approach is likely to be more appropriate both to address the more complex nexus issues and to engage all sectors of society.
  • Engagement: Citizen science practitioners and nexus scientists should explore developing citizen science programmes with multi-scale engagement of citizens, for example programmes focusing on a nexus issue that combine local, citizen-led or co-created projects.
  • Barriers: Research is needed to understand the motivations, attitudes and willingness to change behaviours across all nexus stakeholders, and to better understand and find solutions to barriers.

The work was funded under the ESRC Nexus Network initiative.

Giving time – randomised experiments on volunteering and citizen social science

As the event blurb explained, “the Giving Time experiments were led by a team from four UK universities, who wanted to know whether sharing information about how others have volunteered could help to improve volunteering… this was about giving time – and whether volunteers can be nudged. The methodology was randomised controlled trials (RCTs) in real-life field settings involving university student volunteers, Parish Councils, National Trust volunteers, and housing association residents. The research was funded by the Economic and Social Research Council (ESRC).” The discussion of RCTs and citizen science in the same event was bound to generate interesting points.

In the first session, Prof Peter John (UCL) discussed the research challenges of large scale RCTs with volunteers and volunteering organisations. Peter covered the principles of Randomised Controlled Trials (RCTs) – using randomness when trying something: assuming that two random groups will behave the same if you leave them alone, you do things only to one group and observe the results. Start with a baseline, randomly allocate to programme and control groups, and then compare the outcomes. Because the outcomes are tied to random allocation, they provide unbiased estimates of the impact of the intervention. A key distinguishing feature of RCTs is the need to deliver an intervention and the research at the same time. He suggests a 10-step process: assessment of fit for RCTs, recruitment of the partner organisations in which the work will be carried out, selecting a site, deciding the treatment, specifying the control, calculating the sample size, developing the procedure for random allocation, collecting data on the subjects, preparing the research plans, and assessing ethical principles. The things that can go wrong include: loss of subjects – people drop out along the way; failed randomisation – in deciding who will be included in the process; treatment not given or modified; interference between treatment and control – when the groups meet; unavoidable confounds – when something comes along in policy or the media and policy changes; poor quality data – uncertainty about what the data mean and what is going on; loss of cooperation with partners; and unexpected logistical challenges.
Giving Time was the first set of RCTs on volunteering – volunteering is more complex than giving money, and the question is whether behavioural methods can bring about changes in the process. Working with the volunteering sector was challenging, as organisations don’t have detailed records of volunteers that can be used to develop RCTs, although there was willingness to participate in experiments and it was quite interesting to work with such organisations. There was a high level of attrition among those staying in the study – from getting people to be interested until they actually do something, just getting volunteers to volunteer is hard. Is it possible to make it easier and get better quality data? RCTs require changes in organisational practices, although if they are information based they are not hugely costly. It is possible to design trials that are sensitive to organisational practice and can be used quickly in decision making. There are also issues with data protection, and the need for a clear data sharing agreement.
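To make the core logic concrete – random allocation to treatment and control, then comparing group outcomes to get an unbiased effect estimate – here is a toy simulation in Python. Every number in it (baseline rate, effect size, sample size) is invented for illustration, and the code is a sketch of the difference-in-means idea, not the actual Giving Time methodology.

```python
import random
import statistics

def run_rct(subjects, baseline=0.30, treat_effect=0.15, seed=42):
    """Toy RCT: randomly allocate each subject to treatment or control,
    simulate a binary outcome (e.g. 'volunteered again': 1 or 0), and
    estimate the treatment effect as the difference in group means."""
    rng = random.Random(seed)
    # Random allocation: each subject has an equal chance of either group
    allocation = {s: rng.choice(["treatment", "control"]) for s in subjects}
    # Simulated outcome: the treatment group has a higher success probability
    outcomes = {}
    for s, group in allocation.items():
        p = baseline + (treat_effect if group == "treatment" else 0.0)
        outcomes[s] = 1 if rng.random() < p else 0
    treated = [outcomes[s] for s in subjects if allocation[s] == "treatment"]
    control = [outcomes[s] for s in subjects if allocation[s] == "control"]
    # Difference in means: an unbiased estimate of the treatment effect
    effect = statistics.mean(treated) - statistics.mean(control)
    return allocation, effect

subjects = [f"volunteer_{i}" for i in range(2000)]
allocation, estimated_effect = run_rct(subjects)
print(f"Estimated effect: {estimated_effect:.3f}")
```

With 2,000 simulated subjects the estimate lands close to the true effect of 0.15; the pitfalls Peter lists (attrition, failed randomisation, interference between groups) are exactly what this idealised sketch leaves out.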

Against this background, the second session, Towards ‘Extreme Citizen Social Science’ – or volunteering as a tool for both social action and enquiry, explored a contrasting approach. The session description already set out the challenge: “For many, the scale of engagement with volunteers undertaken through Giving Time brings to mind related questions about the role of citizens in formal research – and then of course Citizen Science – or perhaps ‘Citizen Social Science’? At the same time we see the emergence of “Extreme Citizen Science” aimed at stimulating debate and challenging power relationships through citizen involvement in large scale scientific investigations. Extreme citizen science often starts from natural and physical sciences and has citizen researchers working with formal researchers to define the central research questions, and methods of investigation. But what is the potential for Extreme Citizen Social Science – characterised by being large scale, focused on social science questions, exploiting digital technology, having a high degree of participant control, and orientated towards influencing policy?”

Liz Richardson (Manchester University) gave her view on the citizen social science approach. She does a lot of participatory research, where you need to explore with participants what it is acceptable to do with them. We can solve problems in a better way if we have conversations on a wide knowledge base in science – e.g. a rough guide to spotting bad science. Liz compared her experience to early memories of the RSPB Big Garden Birdwatch – the natural sciences version of citizen science, part of which is access to back gardens and wide-area research. She also reflected on her participation in Zooniverse and the confusion about what the science is there – e.g. why do scientists ask which direction a wildebeest is looking? There are different classifications of levels of engagement in citizen science, such as Haklay 2013 and a version in the book Community Research for Participation – from low participation to a high level. In citizen social science, a basic example is the 2011 big class survey on the BBC – just giving and sharing information, more like crowdsourcing. Another, more complex example is Christian Nold’s emotional maps, where people responded to arousal measurements – part of an evolution in visualising information and sharing mapping. The app MapLocal is used in local planning and in the sharing of information by community members. Groups can also collect data and analyse it – they then work with social scientists on how to make sense of the data that they collected (work carried out with the White Rock Trust in Hastings). It is not research that is done alone, but research that is integrated and leads to a change – it is community consultation. An example is a game in Boston, Participatory Chinatown, and an example of community-led action research is the Morris Justice Project, carried out with support from academics.

I gave a presentation about extreme citizen science, positioning it within a social science context (similar to my talk for the Institute for Global Prosperity), with some pointers to the underlying social theory – especially how the approach that we take contrasts with some behaviour-change approaches that take methodological individualism for granted.

Jemma Mouland (Family Mosaic) provided the provider’s point of view. She is head of research at a large social housing provider with about 45,000 tenants; they have done a project with Liz, and she explained it from the provider’s perspective. Family Mosaic is looking at community involvement and decision making – what affects tenants in their daily lives, and where does the housing provider come in? How can it work more collaboratively with the residents? They ran a citizen science project around the meaning of community, through the Giving Time project. They sent an email to recruit people to become citizen scientists – of the 8,000 people who received the message, 82 were interested and 13 were eventually involved. They provided the material to carry out workshops, but did not instruct people how to carry out the research. That led to 50 responses – although participants were only asked to get at least 3, so some people moved well beyond that. They also got the citizen scientists to analyse the data, and the residents interpreted the data that they had gathered. The results from the survey showed different definitions of community, with an active minority; barriers included time and articulating the benefits (‘why should I do it?’). The residents felt that it was great, but they weren’t sure about doing it again – acting on behalf of the provider can be an issue, as can the feeling that all familiar contacts were already used. The issue of skills is also interesting – the project gave very little training, and it could be more effective to train people further. For Family Mosaic, the data was not ground-breaking, but it proved that collaboration can work and has potential – it gave evidence that this can work for the organisation.

So, *can* volunteers be nudged? Turning the spotlight on the future of nudge techniques, Professor Gerry Stoker (Southampton University) argued that the reasons for the lack of success of the interventions were the use of the wrong tool and the significant difference between donating money and donating time. Nudge comes with a set of ideas drawn from behavioural economics – we use short-cuts and tricks to make decisions, and we do what others do – and government followed it as a way to influence and work with people and change their behaviour. There are multiple doubts about nudge. First, nudge assumes fast thinking, but giving time happens in slow thinking mode – donating money is closer to type 1 (fast thinking) and volunteering closer to type 2 (slow thinking). Second, humans are not just cognitive misers – there are degrees of fast and slow thinking. Almost all nudging techniques are about compliance. Nudge is also naive and overly promotional, and there are issues when the topic is controversial. The individual focus misses the social – changing people’s minds requires persuasion. Complexity also makes clear answers harder to find – there are questions of internal and external validity, and very complex models of causality. There are ironic politics to nudge and experiments – they are allowed space only at the margins of policy making. We need to recognise that nudge is one tool among others, to be used side by side with them and in combination with structural or institutional change and wider strategies of behaviour change – and the other techniques are not without their own problems and issues.

In the discussion, the need was raised for methodologies that are responsive to the local situation and context. A question is how you nudge communities, rather than working at the individual level.

The final talk before the panel discussion was Volunteers will save us – volunteering as a panacea, presented by Dr Justin Davis-Smith (National Council for Voluntary Organisations), on the state of volunteering in 2015. Volunteering can enable social transformations – e.g. ex-offenders being released into volunteering roles, which helps avoid reoffending. Another success is involving people who are far from the job market, who gain employable skills through volunteering. Research also shows that volunteers have better mental health and wellbeing, while not volunteering has a negative impact on wellbeing. Some volunteering can be based on prescription (e.g. Green Gyms). Volunteers are engaged in public services, such as special constables. Social capital is also improved through volunteering. The replacement value is £40Bn, and since the other impacts of volunteering are not being quantified, the full value is estimated at £200Bn. So will volunteers save us?
However, while volunteering is cost effective, it is not without cost, and it requires investment that is difficult to make. The discussion about the engagement of volunteers in public services pits volunteers against paid labour, instead of framing it as co-production, and there are unhealthy dynamics with paid staff if volunteering is only seen as a cost-saving measure. The vast majority of volunteering effort is provided by a small group (work on the ‘civic core’ by the Third Sector Research Centre was mentioned). The search for the panacea is therefore complex. Over 15 years of effort in different forms of volunteering promotion, there has been only a 5% change in the amount of volunteering that people report. Some of the nudge mechanisms didn’t work – there is a lot of evidence to show that campaigns on volunteering don’t work well, and people react negatively to them. A barrier to volunteering is lack of time, and the concern that getting involved will demand more and more of people’s time. Reflecting those time constraints, micro-volunteering can work.

The final panel explored issues of the co-production of research and the opportunities to work with volunteering organisations – many social service providers do want access to research, but find it difficult to start the process.