At the beginning of the Challenging Risk project, the project team considered that before going out and developing participatory tools to engage communities in earthquake and fire preparedness, we should first check what was already available.
To achieve that, we commissioned Enrica Verrucci to help us with the review, and later on other members of the team – including Patrick Rickles, David Rush, and Gretchen Fagg – updated the information. We then thought about developing a paper from the review, and an interesting interdisciplinary discussion ensued, with different potential emphases and structures suggested. It took us several iterations until we agreed that the best way to communicate the purpose of the review was by linking the use of digital technologies to behaviour change, with the guidance of Gabriela Perez-Fuentes and Helene Joffe, the psychology experts on the team.
The resulting paper has just been published in Natural Hazards. It is the first paper from the project that involves all the groups taking part in it. Here is the abstract:
“Natural or human-made hazards may occur at any time. Although one might assume that individuals plan in advance for such potentially damaging events, the existing literature indicates that most communities remain inadequately prepared. In the past, research in this area has focused on identifying the most effective ways to communicate risk and elicit preparedness by means of public hazard education campaigns and risk communication programmes. Today, web- and mobile-based technologies are offering new and far-reaching means to inform communities on how to prepare for or cope with extreme events, thus significantly contributing to community preparedness. Nonetheless, their practical efficacy in encouraging proactive hazard preparedness behaviours is not yet proven. Building on behaviour change interventions in the health field and looking in particular at earthquakes and fire hazards, the challenging RISK team has reviewed the currently active websites, Web, and mobile applications that provide information about earthquake and home fire preparedness. The review investigates the type of information provided, the modality of delivery, and the presence of behaviour change techniques in their design. The study proves that most of the digital resources focus on a single hazard and fail to provide context-sensitive information that targets specific groups of users. Furthermore, behaviour change techniques are rarely implemented in the design of these applications and their efficacy is rarely systematically evaluated. Recommendations for improving the design of Web- and mobile-based technologies are made so as to increase their effectiveness and uptake for a multi-hazard approach to earthquake and home fire preparedness.”
‘Citizen Science as Participatory Science’ is one of the most popular posts that I have published here. The post is the core section of a chapter that was published in 2013 (the post itself was written in 2011). For the first European Citizen Science Association conference, I was asked to give a keynote on the second day, which I titled ‘Participatory Citizen Science’ to match the overall theme of the conference, ‘Citizen Science – Innovation in Open Science, Society and Policy’. The abstract of the talk:
In the inaugural ECSA conference, we are exploring the intersection of innovation, open science, policy and society and the ways in which we can establish new collaborations for a common good. The terms participation and inclusion are especially important if we want to fulfil the high expectations from citizen science as a harbinger of open science. In the talk, the conditions for participatory citizen science will be explored – the potential audience of different areas and activities of citizen science, and the theoretical frameworks, methodologies and techniques that can be used to make citizen science more participatory. The challenges of participation include designing projects and activities that fit with participants’ daily life and practices, their interests and skills, as well as the resources that they have, their self-beliefs, and more. Using lessons from EU FP7 projects such as EveryAware and Citizen Cyberlab, and UK EPSRC projects Extreme Citizen Science and Street Mobility, the boundaries of participatory citizen science will be charted.
As always, there is a gap between the abstract and the talk itself – as I started exploring the issues of participatory citizen science, some questions about the nature of participation came up, which I tried to discuss. Here are the slides:
After opening with acknowledgements to the people who work with us (and who funded us), the talk turns to the core issue – the term participation.
Type ‘participation’ into Google Scholar, and the top paper, with over 11,000 citations, is Sherry Rubin Arnstein’s ‘A ladder of citizen participation’. In her ladder, Arnstein offered 8 levels of participation – from manipulation to citizen control. Her focus was on political power and the ability of the people who are impacted by decisions to participate in and influence them. Knowingly simplified, the ladder focuses on political power relationships, and it might be this simple presentation and structure that explains its lasting influence.
Since its publication, other researchers have developed their own versions of participation ladders – for example, Wiedemann and Femers (1993), here from a talk I gave in 2011:
These ladders come with baggage: a strong value judgement that the top is good, and the bottom is minimal (in the version above) or worse (in Arnstein’s version). The WeGovNow! project is part of a range of ongoing activities that use digital tools to increase participation and move between rungs in these concepts of participation, with an inherent assumption about the importance of high engagement.
At the beginning of 2011, I found myself creating a ladder of my own. Influenced by the ladders that I learned from, the ‘levels of citizen science’ make an implicit value judgement in which ‘extreme’ at the top is better than crowdsourcing. However, the more I have learned about citizen science, and had time to reflect on what participation means, who should participate, and how, the more I feel that this strong value judgement is wrong and that a simple ladder can’t capture the nature of participation in citizen science.
There are two characteristics that demonstrate the complexity of participation particularly well: the levels of education of participants in citizen science activities, and the way participation inequality (AKA the 90-9-1 rule) shapes the time and effort that participants invest in citizen science activities.
We can look at them in turn, by examining citizen science projects against the general population. We start with levels of education: across the EU28 countries, we are now approaching 27% of the population with tertiary (university) education. There is wide variability, with the UK at 37.6%, France at 30.4%, Germany at 23.8%, Italy at 15.5%, and Romania at 15%. This is part of a global trend – about 200 million students are studying in tertiary education across the world, of which about 2.5 million (about 1.25%) are studying at doctoral level.
However, if we look at citizen science projects, we see a different picture: in OpenStreetMap, 78% of participants hold tertiary education, with 8% holding doctoral-level degrees. In Galaxy Zoo, 65% of participants hold tertiary education and 10% doctoral-level degrees. In Transcribe Bentham (TB), 97% of participants have tertiary education and 24% hold doctoral-level degrees. What we see here is much greater participation by people with higher degrees – well above their expected rate in the general population.
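A quick back-of-the-envelope sketch (Python, just for illustration, using the percentages quoted above and the ~27% EU28 baseline) makes the over-representation concrete:

```python
# Over-representation of tertiary-educated participants in citizen science
# projects, relative to the EU28 baseline of ~27% quoted above.
baseline_tertiary = 27.0  # % of EU28 population with tertiary education

projects = {
    "OpenStreetMap": 78,
    "Galaxy Zoo": 65,
    "Transcribe Bentham": 97,
}

for name, pct in projects.items():
    ratio = pct / baseline_tertiary
    print(f"{name}: {pct}% tertiary-educated, ~{ratio:.1f}x the EU28 baseline")
```

Even the least skewed of the three (Galaxy Zoo) has participants with degrees at roughly two and a half times their rate in the general population; for doctoral degrees the skew is far larger still.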
The second aspect, participation inequality, has been observed in OpenStreetMap volunteer mapping activities, in iSpot – both in the community of those who capture information and those who help classify the species – and even in the offline conservation volunteering activities of the Trust for Conservation Volunteers. In short, it is a very persistent aspect of citizen science activities.
For the sake of the analysis, let’s look at citizen science projects that require high skills from participants and significant engagement (like TB); those that require high skills but not necessarily demanding participation (as many Zooniverse projects do); low-skills/high-engagement projects (e.g. our work with non-literate groups); and finally low-skills/low-engagement projects. There are clear benefits of participation in each and every block of this classification:
high skills/high engagement: These provide a way to include highly valuable effort, with the participants acting as virtual research assistants. There is a significant time investment by them, and opportunities for deeper engagement (writing papers, analysis).
high skills/low engagement: The high skills might contribute to data quality and allow the use of disciplinary jargon, with opportunities for lighter or deeper engagement to match time/effort constraints.
low skills/high engagement: Such activities provide an opportunity for education, awareness raising, increased science capital, and other skills. They require support and facilitation, but show high potential for inclusiveness.
low skills/low engagement: Here we have an opportunity for active engagement with science with limited effort; there is also potential for family/cross-generational activities and outreach to marginalised groups (as Open Air Laboratories has done).
In short – in each type of project, there are important societal benefits for participation, and it’s not only the ‘full inclusion at the deep level’ that we should focus on.
Interestingly, across these projects and levels, people are motivated by science as a joint human activity of creating knowledge that is shared.
So what can we say about participation in citizen science? Well, it’s complex. There are cases where the effort is exploited, and we should guard against that, but outside these cases we are left with a much more complex picture.
The talk moves on to suggest a model for allowing people to adjust their participation in citizen science through an ‘escalator’ that we are aiming to develop conceptually in DITOs.
Finally, with this understanding of participation, we can better understand the link to open science and open access, and the potential need of participants to analyse the information.
Thanks to an invitation from John Hammersley of Overleaf, I gave a talk at the #FuturePub 7 event, which was dedicated to “New Developments in Scientific Collaboration Tech”.
The evening was structured around 7 very short talks (about 5 minutes each), so my slides are a very short introduction to citizen science (at this event, I would say that about 10 of the 60 people who participated said that they hadn’t heard the term), a note on the societal and technical aspects of the field, a few examples that might be of interest to the audience, and finally pointers to why there are links between open science and citizen science, as well as the relationships between the field and the current state of scientific publication.
Some of the relevant tweets from the event about this talk:
The last 3 months were a gradual sigh of relief for the Extreme Citizen Science group (ExCiteS), Mapping for Change (MfC), and for me. As the UCL Engineering website announced, the ExCiteS group, which I co-direct, secured funding through 3 research grants from the European Union’s Horizon 2020 programme (H2020) – enough to continue our work for the next 3 years, which is excellent. As usual in publicity, UCL celebrates successes, not the work that led to them. However, there are implications to the effort of securing funding, and it is worth reflecting on them – despite the fact that we are in the success camp. While the criticism of the application process for European projects on the ROARS website is a bit exaggerated, it does give good context for this post. In what follows, I cover the context for the need to apply for funding, look at the efforts, successes and failures from mid-2014 to early 2016 (mostly failures), and then look at the implications.
This is not a piece to boast about success or moan about failure; rather, I find writing a useful way to reflect, and I wanted to take stock of the research application process. I hope that it will help in communicating the process of securing funding for an interdisciplinary, research-intensive group.
Background & context
The background is that the ExCiteS group started at the end of 2011 with a large group of PhD students – as is common in early-stage research groups. With the support of a UK Engineering and Physical Sciences Research Council (EPSRC) award, which is about to end soon, it was possible to start the group. With additional funding from European Union (EU) and EPSRC projects, including EveryAware (2011-2014), Citizen Cyberlab (2012-2015), Challenging Risk (2013-2018), and Cap4Access (2014-2016), it was possible to develop the activities of ExCiteS & MfC. This is evident in the software packages that are emerging from our work – Sapelli, GeoKey, and a new version of Community Maps – the methodologies for using these tools within participatory processes, the academic and non-academic outputs, and the fact that people know about our work.
However, it was clear since 2011 that 2015 would be a crunch point, when we would need funding to allow members of the group to move from being PhD students to post-doctoral researchers (postdocs). The financial implication of funding a postdoc is about three times the funding for a PhD student. In addition, while in earlier years members of the group (regardless of their career stage) participated in writing research proposals – and helped win them (e.g. Citizen Cyberlab) – when people are writing up their PhD theses it is inappropriate to expect them to invest a significant amount of time in research applications. Finally, all our funding comes through research projects – we don’t have other sources of income.
Research Applications – effort, successes, failures
So it was very clear that 2015 was going to be full of research applications. To give an idea of how many, and of the work that was involved, I’m listing them here – more or less in order of effort. I’m providing more details on successful applications but only partial details on the failed ones – mostly because I didn’t check with the coordinators or the partners to see whether they would allow me to do so.
We started in mid-2014, when we began working on the first version of what is now DITOs. Coordinating an EU H2020 project proposal with 11 partners meant that between May and September 2014 we invested an estimated 6 or 7 person-months within the group in preparing it. We submitted it in early October, only to be disappointed in early March 2015 when we heard that although we scored highly (13/15), we wouldn’t be funded – only 1 of the 19 projects that applied was funded. We then resurrected the proposal in July 2015, dedicated a further 5 person-months, resubmitted it, and won funding after competing with 56 other proposals – of which only 2 were funded.
The next major investment was in a first-stage proposal to the Citizen Observatories call of H2020. ExCiteS coordinated one proposal, and MfC participated in another. The process required an outline submission and then a full proposal. We worked on the proposal from December 2014 to April 2015, and it wasn’t a huge surprise to discover that 47 proposals were submitted to the first stage, of which 11 progressed to the second. The one coordinated by ExCiteS, with an investment of about 5 person-months, scored 7/10, so it didn’t progress to the second stage. MfC also invested 2.5 person-months in another proposal as a partner; this one passed the first stage but failed in the second.
The proposal for the European Research Council (ERC) was developed between May and June 2015, with about 3 person-months of effort – and was luckily successful. It competed with 1953 applications in total (423 in the social sciences), of which 277 (59 in the social sciences) were successful – about a 14% success rate.
Another fellowship proposal, in response to an EPSRC call, passed the first round but failed at the interview stage (where 2 out of 5 candidates were selected). This one was developed from May 2015 and failed in February 2016, after an effort of about 2.5 person-months.
We also developed an Economic and Social Research Council (ESRC) responsive-mode proposal, which means that we applied to the general funds and not to a specific call. We collaborated with colleagues at the Institute of Education from January 2015 to July 2015, with an effort of about 2.5 person-months, but we learned in March 2016 that it was unsuccessful.
Another 2 person-months were dedicated to an ESRC call for methodological research, for which 65 applications were submitted and 6 were funded, with our proposal ranked 22nd. In parallel, I had a small part in another proposal for the same call, which was ranked 56th.
We invested a month in an unsuccessful application to the Wellcome Trust Science Learning+ call in July 2014.
Less time was spent on proposals where we had a smaller role – a failed H2020 ICT proposal in April 2015, and another H2020 proposal on Integrating Society in Science and Innovation in September 2015. This also includes a successful proposal to the Climate and Development Knowledge Network (CDKN). Because of all the other proposals, information such as the description of our activities, CVs, and other bits was ready and adjusted quite easily.
ExCiteS and MfC also participated in an EU LIFE proposal – this was funding for applied activities, with a very low funding level of only 50%, so there was a need to think carefully about which add-on activities could be used for it. However, as the proposal failed, this wasn’t an issue.
Along the way, there were also small parts in an application to the Wellcome Trust in early 2015 (failed), in an EPSRC programme grant (a large grant with many partners) that was organised at UCL and to which we dedicated time from June 2014 to February 2015 (ditto), an outline for the Leverhulme Trust (ditto), an ERC research proposal (ditto), and finally a COST Action application for a research network on Citizen Science (which was successful!).
So let’s summarise all these proposals – successes, failures, and effort – in one table. Lines where the funder is marked in bold indicate that we coordinated the proposal:
We applied to lots of funders and mechanisms – fellowships, calls for proposals, and open calls for research ideas. We applied to UK funders and to the EU. As we are working in an interdisciplinary area, we applied to social science as well as engineering and Information and Communication Technologies (ICT) funding, and in between these areas. In a third of the cases we led the proposal, but in the rest we joined proposals that were set up by others. So the first point to notice is that we didn’t fixate on one source, mechanism or role.
As the table shows, we’re not doing badly. Of the 7 proposals that we led, 2 succeeded (about 29%), and of the 14 that we participated in, 3 succeeded (about 21%). The overall success rate is about a quarter. Over about 18 months, a group of about 10 people invested circa 40 person-months in securing future funding (about 20% of their time) for the next 3 years, which doesn’t sound excessive.
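The tally above can be checked with a few lines of arithmetic (a sketch in Python using only the counts stated in the text):

```python
# Success rates from the proposal tally described above:
# 7 proposals led (2 won), 14 joined as partner (3 won).
led_total, led_won = 7, 2
partner_total, partner_won = 14, 3

led_rate = led_won / led_total                              # ~29%
partner_rate = partner_won / partner_total                  # ~21%
overall = (led_won + partner_won) / (led_total + partner_total)  # ~24%

print(f"led: {led_rate:.0%}, partner: {partner_rate:.0%}, overall: {overall:.0%}")
```

So the 'about a quarter' overall figure is 5 successes out of 21 submissions, against field-wide success rates closer to 15%.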
However, the load was not spread equally, so some people spent a very significant amount of their time on proposals. I was involved in almost all of these 21 proposals during this period – much more in those that we led – and in some of those where we participated as a partner, I was the only person in the group who worked on the proposal. It was increasingly challenging to keep submitting and working on proposals with so much uncertainty and the very long gap between submission and results (look above, and you’ll see that it can be up to 9 months). Because of the uncertainty about success, and an assumption that only 20% would be successful at best (that’s 4 wasted proposals for every successful one), I felt that I needed to keep going, but there were moments when I thought it was a doomed effort.
There is also the issue of morale – as should be obvious from the fact that we only announced the successes recently, as the failures mounted up during the second part of 2015, it became harder to be cheerful. Because of the long gap between proposal submission and result that I mentioned, the future of the group is unknown for a significant period, and that influences people’s decisions about staying or leaving, and how to use the funds that we do have.
Leaving aside that by early 2016 it became hard to find the energy to be involved in more proposal writing, there is an issue with how interdisciplinary research groups are funded. While we can apply to more funding opportunities, the responses to the failures indicated that it’s tough to convince disciplinary evaluators that the work being done is important. This meant that we knew all along that we would need to apply more. Maybe it was a coincidence, but the EU funding evaluations seemed more open to these ideas than UK funders.
Second, such a high number of applications takes time away from other research activities (e.g. check my publications in 2014-2015). Applications, with all the effort associated with them, are not seen as an academic output, so all the effort of writing, proofing and revising the text is frequently wasted when a proposal fails.
Third, all these proposals burn social capital, ‘business capital’, and cash reserves – e.g. hiring a consultant to help with an H2020 project or covering the costs of meetings, asking for letters of support from business partners, and raising hopes and making links with partners, only to write at the end that we won’t be working together beyond the proposal. There are also negotiations with the Head of Department on the level of support from the university, and requests for help from research facilitators, financial administrators and other people at the university.
Fourth, considering how much effort, experience, support – and luck – is needed to secure research funding, I’m not surprised that some people are so despondent about their chances of doing so. All the above is the result of a large team, and I would argue that the key to keeping up the stamina is team spirit and having a clear goal about why the hell you want the funding in the first place (in our case, we want to materialise Extreme Citizen Science).
Finally, looking at the number of submissions, the rankings, and the general success rate of applications in the areas that we applied to (about 15% or less), I have concerns that under such conditions there is a ‘crowding out’ situation, in which groups with better resources around them (e.g. the institutional infrastructure at UCL, our internal experience) make it harder for new entrants or smaller groups. At a higher funding rate, we could have secured the funding with fewer proposals, at which point we wouldn’t have continued to apply, thereby allowing others to secure funding.
I have no plans for another period like the one that led to the current results. I am incredibly grateful to have had such a level of success, which is down to the institution that I’m in, hard work, the evolving experience in preparing proposals and, always, luck. It is very possible that this post could have counted 19 failures, so we’re very grateful to all the people who evaluated our proposals positively and gave us the funding.
Back to the funding: with all the successes, in people terms, we’ve secured funding for the 10 people that I’ve mentioned for 3 years, with a further 6 PhD students joining us over that period. There are still other people in the group who will need funding soon, so we will probably put the accumulated knowledge and experience to use again soon.
My contribution to the discussion is based on previous thoughts on environmental information and public use of it. Inherently, I see the relationships between environmental decision-making, information, and information systems as something that needs to be examined through the prism of the long history that links them. This way we can make sense of the current trends. These three areas have been deeply linked throughout the history of the modern environmental movement since the 1960s (hence the Apollo 8 Earth image at the beginning), and the Christmas message from the crew, with its reference to Genesis (see below), helped in making the message stronger.
To demonstrate the way this triplet evolved, I use texts from official documents – the Stockholm 1972 declaration, Rio 1992 Agenda 21, etc. They are fairly consistent in their belief in the power of information systems to solve environmental challenges. The core aspects of environmental technophilia are summarised in slide 10.
This leads to environmental democracy principles (slide 11) and the assumptions behind them (slide 12). While information may be open, that doesn’t mean that it’s useful or accessible to members of the public. This was true when raw air monitoring observations were released as open data in 1997 (before anyone knew the term), and although we have better tools (e.g. Google Earth), there are consistent challenges in making information meaningful – what do you do with an Environment Agency DSM if you don’t know what it is or how to use a GIS? How do you interpret Global Forest Watch analysis about change in tree cover in your area if you are not used to interpreting remote sensing data (a big data analysis and algorithmic governance example)? I therefore return to the hierarchy of technical knowledge and the ability to use information (in slide 20) that I covered in ‘Neogeography and the delusion of democratisation’, and look at how the opportunities and barriers have changed over the years in slide 21.
The last slides show that despite all the technical advancements, we can have situations such as the water contamination in Flint, Michigan, which demonstrates that some of the problems from the 1960s that were supposed to be solved – well monitored, with clear regulations and processes – have come back because of negligence and a lack of appropriate governance. This is not going to be solved with information systems, although citizen science has a role to play in dealing with governmental failure. This whole sorry mess, and the re-emergence of air quality as a Western-world environmental problem, is a topic for another discussion…
The EU Joint Research Centre in Ispra has recently released the recording of a talk by Alan Irwin, given as part of the STS “Contro Corrente” series of seminars on 15 October 2015, with Jerome Ravetz and Silvio Funtowicz (famous for their post-normal science) as discussants. The talk, titled Citizen Science and Scientific Citizenship: same words, different meanings?, uses the two keynotes at the Citizen Science Association 2015 conference (by Chris Filardi and Amy Robinson) as a starting point for a discussion about the relationship of citizen science to scientific citizenship.
If you are interested in the wider place of citizen science within the scientific enterprise, this seminar is an opportunity to hear from 3 people who have thought about this for a long time (and whose work influenced my thinking). It’s very much worth spending the time to follow the whole discussion.
Two very valuable points from Irwin’s talk are, first, the identification ‘that the defining characteristics of citizen science is its location at the point where public participation and knowledge production – or societal context and epistemology – meet‘.
Secondly, the identification that scientific citizenship has the following characteristics: a focus on sociotechnical futures, specifically asking questions about the relationship between knowledge and democracy; a highlighting of the political economy of knowledge; and the changing nature of citizenship as practised engagement.
Also valuable is the linkage of knowledge, power, and justice and how these play out in citizen science in its different forms.
I’ll admit that I was especially interested in the way that my model of participation in citizen science was used in this seminar. However, having a blog is also an opportunity to respond to some of the points that were discussed in the seminar!
Second, Funtowicz commented that the equivalent of ‘extreme citizen science’ in Arnstein’s ladder does not reach a very high level of participation. I disagree. Arnstein’s top level is ‘Citizen Control’, where ‘have-not citizens obtain the majority of decision-making seats, or full managerial power’. If in a citizen science project we shift to a more equal mode of knowledge production, where the project is shaped by all participants – especially marginalised ones – and the scientists work as facilitators in the service of the community, aren’t we at the same place?
Every project ends, eventually. The Citizen Cyberlab project was funded through the Seventh Framework Programme of the European Union (EU FP7 for short), and ran from September 2012 to November 2015. Today marks the final review of the project, with all the project’s partners presenting the work that they have done during the project.
The project had technical elements throughout its work, with platforms (technologies that provide a foundation for citizen science projects), tools (technologies that support projects directly by being part of what volunteers use), and pilots – projects that used the technologies from Citizen Cyberlab, as well as from other sources, to carry out citizen science projects. Beyond the platforms, tools and pilots, the project used all these elements as the background for a detailed understanding of creativity and learning in citizen cyberscience, which relies on Information and Communication Technologies (ICT). The evaluation of the pilots and technologies was therefore aimed at illuminating this question.
This post summarises some of the major points from the project. The project produced a system to develop and share research ideas (ideaweave.io); a framework for scientific games (RedWire.io), accompanied by tools to measure and observe the actions of gamers (RedMetrics.io); a system for sharing computation resources through virtual machines (the CitizenGrid platform); a framework to track user actions across systems (CCLTracker); a platform for community mapping (GeoKey); and mobile data collection tools (EpiCollect+).
The RedWire platform supports the development of games and the mixing of code between projects (borrowing concepts from synthetic biology into computing!), and as the system encourages open science, even data from the different games can be mixed to create new ones. The integrated player-behaviour tracking (done with RedMetrics) is significant for the use of games in research. The analytics data is open, so there is a need to take care of privacy issues. An example of the gaming platform is Hero.Coli – a game about synthetic biology.
The GeoKey platform, which was developed at UCL ExCiteS, is now integrated with Community Maps and ArcGIS Online, and can receive data through Sapelli, EpiCollect or other HTML5 apps (as the air quality app on Google Play shows). The system is progressing and includes an installation package that makes it easier to deploy. Within a year, there are about 650 users on the system, plus further anonymous contributions, and over 60 mini-sites, many of them ported from the old system. The system has already been translated into Polish and Spanish.
CitizenGrid is a platform that improves volunteer computing and allows access to resources in a simplified manner, with the launching of virtual machines through a single link. It can use shared resources from volunteers, or cloud computing.
IdeaWeave is a social network that supports the development of ideas and projects, and the sharing of information about them. The final system supports challenges, badges and awards, and also adds project blogging and the ability to vote on proposals.
EpiCollect+ is a new implementation of EpiCollect, designed to be device-independent through HTML5. There were issues with many APIs, which led to uncovering limitations in the different mobile platforms. It is already used in many applications.
The Virtual Atom Smasher application at CERN was redesigned with the use of learning analytics, which showed that many people who start engaging with it skip the learning elements and then find the interface confusing, so the restructuring was geared towards this early learning process. The process helps people to understand theoretical and experimental physics principles. The system is available at test4theory.cern.ch. After participants log in, they go through a questionnaire to establish what they know, then through video and interactive elements that help them understand the terminology needed to use the interface effectively; the rest of the process supports asking questions in forums, finding further information through links, and more. Among the side projects that grew out of Virtual Atom Smasher is the TooTR framework, which supports the creation of web-based tutorials that include videos and interactive parts. During the project, Virtual Atom Smasher attracted 790 registered participants, of whom 43 spent more than 12 hours with the game. The game is now gaining attention from more scientists, who are seeing that it is worthwhile to engage with citizen science. The project fuses volunteer computing and volunteer thinking.
GeoTag-X, developed by UNITAR, provides a demonstrator for volunteer thinking. It allows the capturing of relevant imagery and pictures from disaster or conflict situations, and supports UNITAR humanitarian operations. The team wanted to assess whether the system is useful: it has 549 registered volunteers, with 362 completing at least one task. GeoTag-X also engaged with the humanitarian geo community – for example, GISCorps, UN Volunteers Online, and Humanity Road.
The Synthetic Biology pilot included the development of a MOOC that explains the principles of the area, the game Hero.Coli, and a new spectrometer that will be produced at very large scale in India.
Our own Extreme Citizen Science pilots focused on projects that use cyberlab technology – in particular air quality monitoring, in which we used GeoKey and EpiCollect to record the locations of diffusion tubes and the street context in which they were installed. In addition, we included the use of Public Lab technology for studying the environment, and playshops to explore exposure to science.
The research into learning and creativity showed that there is plenty of learning about the topic and the mechanics of citizen science, with a small minority showing deep engagement with active learning. There is a variety of learning: personal development – from self-confidence to identity and cultural change; generic knowledge and skills; and, finally, project-specific aspects. The project provides a whole set of methods for exploring citizen science learning: checklists that can help in designing for it, surveys, interviews, blog analysis, user analytics, and lab studies. Some of the interesting findings include: in GeoTag-X, even a complex interface was learnt quite quickly, and connecting emotionally to the humanitarian issue and to participation can predict learning. Virtual Atom Smasher demonstrated that participants learned about the work of scientists and about science (e.g. the extensive use of statistics). In SynBio4All, there was plenty of learning of organisational skills, lab work and scientific communication, and deeper contact with science – all through the need to be involved in a more significant way. The ExCiteS pilots show involvement and emotional learning, and evidence for community 'hands-on' situated learning with high engagement of participants. There are examples of personal development, scientific literacy, community organisation, workshop hosting and other skills. One of the major achievements of this study is a general survey, which had 925 complete responses and 2,500 partial ones from volunteers across citizen science (80 projects). The clusters show that 25% learn about technology and science skills; 21% learn about the topic and scientific skills; about 20% learn mostly science skills, with some collaboration and communication; and 13% show pure on-topic learning.
In citizen science, a high percentage learn from project documentation; about 20% learn through the project itself and somewhat from documentation; about 17% learn from the project and external documentation; and a further group learns through discussion. Most volunteers feel that they learn (86%). Learning is not an initial motivation, but it becomes an important factor, as does learning about a new area of science. Highly engaged volunteers take on specific and varied roles – translators, community managers, event organisers and so on.
On the creativity side, interviews provided the richest source of information on creativity and how it is integrated into citizen science. Interviews with 96 volunteers provided one of the biggest qualitative studies in citizen science. Motivations include curiosity, interest in science and the desire to contribute to research, while participation is sustained by continued interest, ability and time. The reasons for different audience compositions are task time, geography and subject matter. A lab study showed that citizen cyberscience results are related to immersion in the game. There is also evidence that people are multi-tasking – they have plenty of distractions from engagement in any given online project. The key findings about creativity include examples in the analysis of images and geotagging in GeoTag-X; in Virtual Atom Smasher, adjusting parameters was seen as creative, while in SynBio4All the creation of games and of the MOOC were examples of creativity. In the ExCiteS pilots there are photos, drawings, sculptures and blog posts, and with air quality we've seen examples of newsletters, t-shirts and map making. There are routes through motivations, learning and creativity, and we might need to look at models for people who lead projects. To support creativity, face-to-face collaboration is important, as is allowing entry-level participation and providing multiple methods for volunteers to give feedback.
In terms of engagement, we carried out ThinkCamp events, linked to existing online communities, and worked through engagement and participation. Interestingly, analysis of Twitter showed a following of fellow researchers and practitioners in citizen science.
Citizen Cyberlab will now continue as an activity of the University of Geneva – so watch this space!