New publication: Citizen science to foster innovation in open science, society and policy

The previous post described the opening chapter of “Citizen Science: Innovation in Open Science, Society and Policy“, which, apart from the first 7 pages, follows a fairly standard pattern for introduction chapters – an overview of the sections, an explanation of the logic behind the organisation and ordering of the chapters, and a description of the case studies in the book.

The concluding chapter, on the other hand, was created with special effort to make it a synthesis and analysis of the themes that emerge from the book. The chapter “Citizen science to foster innovation in open science, society and policy” was created in a joint effort by the editorial team in the following way: first, we asked the lead author of each chapter to agree with their co-authors on 3 to 5 bullet points that summarise the main messages of the chapter. These points are intended as a quick reference for readers, with more focused information than an abstract. You can find these “Highlights” in each of the chapters (though not in the case studies).

These highlights also served another purpose – as a starting point for the synthesis. We copied all the highlights into a Google Document and then, in mid-September 2017, with all the chapters completed and ready for the final stage of production, Aletta, Susanne, Anne, and I joined two online workshops in which we discussed the themes and collaboratively moved the bullet points around so that we could gather them under common headings (science, society, science-policy interface, technology, science communication and education, and organisational/institutional). With the bullet points grouped, we started composing paragraphs from this “raw material” – it is fascinating to follow the versions of the Google Document and see the sections emerging in a short period of time.

As with the rest of the book, we were fortunate that Susanne, the lead editor, is also a very talented science communicator with a very good eye for graphic design. The final chapter includes pictograms that represent the different audiences for the recommendations emerging from the book – policymakers, researchers, educators, etc. (see example below). The effort by Aletta and Susanne on this chapter produced an excellent synthesis of the joint output of 121 authors – a meaningful way to conclude the book. The end result can be found here.

Page in book with icons


GSF-NESTI Open Science & Scientific Excellence workshop – researchers, participants, and institutional aspects

The Global Science Forum – National Experts on Science and Technology Indicators (GSF-NESTI) Workshop on “Reconciling Scientific Excellence and Open Science” (for which you can see the full report here) asked the question “What do we want out of science and how can we incentivise and monitor these outputs?”. In particular, the objective of the workshop was “to explore what we want out of public investment in science in the new era of Open Science and what might be done from a policy perspective to incentivise the production of desired outputs”, with the aim of exploring the overarching questions of:
1. What are the desirable (shorter-term) outputs and (longer-term) impacts that we expect from Open Science and what are potential downsides?
2. How can scientists and institutions be incentivised to produce these desirable outcomes and manage the downsides?
3. What are the implications for science monitoring and assessment mechanisms?

The session that I was asked to contribute to focused on Societal Engagement: “The third pillar of Open Science is societal engagement. Ensuring open access to scientific information and data, as considered in the previous sessions, is one way of enabling societal engagement in science. Greater access to the outputs of public research for firms is expected to promote innovation. However, engaging with civil society more broadly to co-design and co-produce research, which is seen as essential to addressing many societal challenges, will almost certainly require more pro-active approaches.
Incentivising and measuring science’s engagement with society is a complex area that ranges across the different stages of the scientific process, from co-design of science agendas and citizen science through to education and outreach. There are many different ways in which scientists and scientific institutions engage with different societal actors to informing decision-making and policy development at multiple scales. Assessing the impact of such engagement is difficult and is highly context and time-dependent“.

For this session, the key questions were:

  • “What do we desire in terms of short and long-term outputs and impacts from societal engagement?
  • How can various aspect of scientific engagement be incentivised and monitored?
  • What are the necessary skills and competencies for ‘citizen scientists’ and how can they be developed and rewarded?
  • How does open science contribute to accountability and trust?
  • Can altmetrics help in assessing societal engagement?”

In my talk, I decided to address the first three questions by reflecting on my personal experience (the story of a researcher trying to balance the concepts of “excellence” and “societal engagement”), then considering the experience of participants in citizen science projects, and finally the institutional perspective.


I started my presentation [Slide 3] with my early experiences in public engagement with environmental information (and participants’ interest in creating environmental information) during my PhD research, 20 years ago. This was a piece of research that set me on the path of societal engagement and open science – for example, the data that we were showing was not accessible to the general public at the time, and I was investigating how the processes that follow the Aarhus Convention, and the use of digital mapping information in GIS, can increase public engagement in decision making. This research received a small amount of funding from UCL, and later from the ESRC, but not a significant amount.

I then secured an academic position in 2001, and it took until 2006 [Slide 4] to develop new systems – for example, this London Green Map was developed shortly after the Google Maps API became available, and while it was one of the first participatory GIS applications on top of this novel API, it was inherently unfunded (and was done as an MSc project). Most of my funded work at this early stage of my career had no link to participatory mapping and citizen science. This was also true for the research into OpenStreetMap [Slide 5], which started around 2005 and, apart from a small grant from the Royal Geographical Society, was not part of the main funding that I secured during the period.

The first significant funding specifically for my work came in 2007-8, about 6 years into my academic career [Slide 6]. Importantly, it came because the people who organised a bid for the Higher Education Innovation Fund (HEIF) realised that they were weak in the area of community engagement and that my work in participatory mapping fitted into their plans. This became a pattern, with people approaching me with a “community engagement problem” – a signal that awareness of societal engagement was starting to grow, although in terms of budget and place in the projects it remained at the edge of the planning process. By 2009, the investment had led to the development of a community mapping system [Slide 7] and the creation of Mapping for Change, a social enterprise that is dedicated to this area.

Fast forward to today [Slides 8-10], and I’m involved in creating software for participatory mapping with non-literate participants, supporting the concept of extreme citizen science. In terms of “scientific excellence”, this development towards a mapping system that anyone, regardless of literacy, can use [Slide 11] is funded as “challenging engineering” by the EPSRC and as “frontier research” by the ERC, showing that it is possible to fully integrate scientific excellence and societal engagement – answering the “reconciling” issue in the workshop. A prototype is being used with ZSL to monitor illegal poaching in Cameroon [Slide 12], demonstrating the potential impact of such research.

It is also important to demonstrate the challenges of developing societal impact by looking at the development of Mapping for Change [Slide 13]. Because it was one of the first knowledge-based social enterprises that UCL established, setting it up was not simple – despite sympathy from senior management, it did not easily fit within the spin-off mechanisms of the university. However, by engaging in efforts to secure further funding – for example, through a cross-university social enterprise initiative – it was possible to support the cultural transformation at UCL.

There are also issues with the reporting of the impact of societal engagement [Slide 14] – Mapping for Change was reported among the REF 2014 impact case studies. From the university’s perspective, using such cases is attractive; however, if you recall that this research is mostly done with limited funding and resources, the reporting is an additional burden that does not come with appropriate resources. This lack of resources is demonstrated by Horizon 2020 which, despite all the declarations on the importance of citizen science and societal engagement, dedicated only 0.60% of its budget to Science with and for Society [Slide 15].

Participant experience

Alice Sheppard presenting her escalator

We now move on to the experience of participants in citizen science projects, noting that we need to be careful about indicators and measurements.

We start by pointing to the wide range of activities that include public engagement in science [Slides 17-18] and the need to provide people with the ability to move into deeper or lighter engagement at different life stages and with different interests. We also see that as engagement deepens, the number of people who participate drops (this is part of participation inequality).

For specific participants, we need to remember that citizen science projects try to achieve multiple goals – from increasing awareness, to having fun, to getting good scientific data [Slide 19] – and this complicates what we are assessing in each project and the ability to have generic indicators that hold true for all projects. There are also multiple types of learning that participants can gain from citizen science [Slide 20], including personal development, as well as attraction and rejection factors that influence engagement and enquiry [Slide 21]. This can also be demonstrated through a personal journey – in this example, Alice Sheppard’s journey from someone with an interest in science to a citizen science researcher [Slide 22].

However, we should not look only at the individual participant, but also at the communal level. An example of this is provided by the noise monitoring app in the EveryAware project [Slide 23] (importantly, EveryAware was part of Future and Emerging Technologies – part of the top excellence programme of EU funding). The application was used by communities around Heathrow to signal their experience and to influence future developments [Slide 24]. Another example of communal-level impact is in Putney, where the work with Mapping for Change led to a change in the type of buses in the area [Slide 25].

In summary [Slide 26], we need to pay attention to the multiplicity of goals, objectives, and outcomes of citizen science activities. We also need to be realistic – not everyone will become an expert, and we shouldn’t expect mass transformation. At the same time, we shouldn’t assume it will never happen and give up. It won’t happen without funding (including for participants and people who dedicate significant time).

Institutional aspects

DITOs bus in Birmingham

The linkage of citizen science to other aspects of open science comes through participants’ right to see the outcomes of work that they have volunteered to contribute to [Slide 28]. Participants are often highly educated and can also access open data and analyse it. They are motivated by contributing to science, so a commitment to open access publication is necessary. This and other aspects of open science and citizen science are covered in the DITOs policy brief [Slide 29]. A very important recommendation from the brief is the recognition that “Targeted actions are required. Existing systems (funding, rewards, impact assessment and evaluation) need to be assessed and adapted to become fit for Citizen Science and Open Science.”

We should also pay attention to recommendations such as those in the League of European Research Universities (LERU) report from 2016 [Slide 30]. In particular, there are recommendations for universities (such as setting up a single contact point) and for funders (such as setting criteria to evaluate citizen science properly). There are various mechanisms that allow universities to provide an entry point for communities that need support. One such mechanism is the “science shop”, which provides a place where people can approach the university with an issue that concerns them and identify researchers who can work with them. Science shops require coordination and funding for the students who do their internships with community groups. Science shops and centres for citizen science are a critical part of opening up universities and making them more accessible [Slide 31].

Universities can also contribute to open science, open access, and citizen science through learning – for example, with the MOOC that we run at UCL, designed to train researchers in the area of citizen science and crowdsourcing [Slide 32].

In summary, we can see that citizen science is an area that is expanding rapidly. It has multifaceted aspects for researchers, participants, and institutions, and care should be taken when considering how to evaluate these activities and how to provide indicators about them – mixed methods are needed to evaluate and monitor them.

There are significant challenges of recognition: recognition as valid, excellent research; sustainable institutional support; and the most critical indicator – funding. The current models, in which such activities are hardly funded at all (<1% at NERC, for example), show that funders still have a journey to make between what they state and what they do.


Reflection on the discussion: after attending the workshop and hearing about open access, open data, and citizen science, I left realising that “societal engagement” is a very challenging aspect of the open science agenda – and citizen science practitioners should be aware of that. My impression is that with open access, as long as the payment is covered (by the funder or the institution), and as long as the outlet is perceived as high quality, scientists will be happy to publish openly. The same can be said about open data – as long as funders are willing to cover the costs and provide mechanisms and support for skills, for example through libraries, we can potentially have progress there too (although over-protectiveness of data by individual scientists and groups is an issue).

However, citizen science opens up challenges and fears about expertise, and perceptions that it risks current practices, societal status, etc. – especially when considering the very hierarchical nature of scientific work, from the very local level of academic job rankings to the big names who set the agenda within a specific field. These cultural aspects are more challenging.

In addition, there seems to be a misunderstanding of what citizen science is, conflating it with more traditional public engagement, plus a view that it can do fine by being integrated into existing research programmes. I would not expect to see major change without a clear signal – significant funding over a period of time – indicating to scientists that the only way to unlock such funding is through societal engagement. This is not exactly “moonshot”-type funding – it is more a case of “pursue any science that you want, but open it up”. This might lead to the necessary cultural change.

OECD Open Science and Scientific Excellence Workshop – Paris

The OECD organised and hosted a Global Science Forum (GSF) and National Experts on Science and Technology Indicators (NESTI) Workshop on “Reconciling Scientific Excellence and Open Science: What do we want out of science and how can we incentivise and monitor these outputs?” (9 April 2018, OECD). In agreement with the OECD Secretariat, the information here is not attributed to anyone specific (here is the blog post about my own presentation).

The workshop opened with the point that speaking about reconciling open science and scientific excellence seems contradictory. Scientific excellence has been based on the value of publications, but the digital transformation and the web have changed things – from elite access to a library where outputs are held, to availability to everyone over the web, with citizens now able to access data. We also need to look at the future – opening up even more, which is positive, but there are challenges in measuring this, and in the impact of different bibliometrics and other indicators.

This opening up is happening quickly, and we need to understand the transformation and then think about the statistical aspects of this information. There is an effort to develop a roadmap for the integration of open science across science policy initiatives.

The area is fairly complex: excellence, how science is changing, and incentivising and measuring science are all tightly related to each other. Some of the fundamental questions: what do we want from science – only excellence, or other things as well? How can we incentivise the academic community to move in the direction of open science, and what does the science policy community need to do about it? The national statistical communities and the Global Science Forum are two important groups that can influence this, in terms of both policy and the measurement of impacts and processes.

The meeting looked at open science, publishing, open data, and engagement with society, as well as indicators and measurement.

The slides from all the talks are available here. 

Session 1. Scientific excellence through open science or vice versa? What is excellence and how can it be operationalised in the evidence and policy debate?

Paula Stephan (Georgia State University, USA) addressed the challenges facing science – a lack of risk-taking, and a lack of career opportunities for early career scientists in their research. She discussed the factors behind this – especially short-term bibliometrics – and how open science can help in dealing with these issues.

The original rationale for government support for science is the high risk associated with basic research. Competitive selection procedures reduce risk and lead to safer options for securing funding (including at the NIH or ERC). James Rothman, who won the Nobel Prize in Physiology or Medicine, noted that in the 1970s there was a much higher tolerance of risk, which allowed him to explore things for 5 years before he started being productive. Concerns about these aspects were raised by the AAAS in its 2008 ARISE report, while NASA and DARPA have become much more risk-averse.

In addition, there is a lack of career opportunities for ECRs – the number of PhDs is growing, but the number of research positions is declining, both in industry and academia. Positions are scarce, and working in universities is an alternative career. Because the scarce jobs and research applications are assessed on short citation windows, a high-impact journal paper is critical for career development – postdocs are desperate to get a Nature or Science paper. Assessment of novel papers (papers that combine references never before made together) showed that only 11% of papers are novel, and that highly novel papers are associated with risk: a disproportionate concentration at the top and bottom of the citation distribution, and citation outside the field. The more novel a paper is, the less likely it is to appear in a high-ranking journal. Bibliometrics thus discourage researchers from taking the risk of writing novel papers.

Open science provides an opportunity here – citizen science offers new ways of addressing some of these issues, e.g. through crowdfunding to accommodate risky research. In addition, publication in open access can support these novel paper strategies.

Richard Gold (McGill University, Montreal, Canada) looked at why institutions choose open science – the costs of research are increasing exponentially, but this is not enough and there are requests for yet more funding. Productivity is declining, measured by the number of papers per unit of investment, and firms are narrowing the focus of their research.

We can therefore consider Open Science partnerships – OA publications, open data, and no patents on co-created outputs – as a potential way to address these challenges. These can be centred on academic and not-for-profit research centres, generally around the basic understanding of scientific issues, with data at the centre. Institutions look at this as a partial solution – decreasing duplication, since there is no need to replicate work; providing quality through many eyes; and providing synergies because there is a more diverse set of partners. It can increase productivity because data can be used in different fields, drawing on wider networks of ideas and the ability to search through a pool of ideas. Across fields, we can see more researchers but fewer outputs. In patent applications, we see that the 1950s were the recent peak of novelty in terms of linking unrelated fields, and this has been dropping since.

An alternative to this is a system like the Structural Genomics Consortium – attracting philanthropic and industrial funding. There is also a citizen science aspect – the ability to shape the research agenda in addition to providing the data. Secondly, the data can be used within their communities – patients and indigenous groups are more willing to be involved. Open science better engages and empowers patients in the process – it is easier to get consent.

Discussion: during the selection of projects, bibliometric indicators need to be removed from applications and from funding decisions. We need people to read the research ideas, and we need to move away from funding only a single person as the first author – we need to incentivise and support teams. We also need to think about how to deal with the impact of research beyond the original work (someone might use a dataset that was produced through open science for a publication, rather than the person who did the work).

There is a sense that the lack of risk-taking is an issue, but there is a need to measure it and show whether it is happening. Many scientists are self-censoring their work, and there is a need to document this. The global redistribution of people concerns which areas people concentrate on – e.g. between physics and agriculture.

Session 2 – Open access publication and dissemination of scientific information

Rebecca Lawrence (Faculty of 1000) described how F1000 is aiming to develop a different model of publication – separating publication from evaluation. Publication happens because of funders, and researchers evaluate each other according to where they publish. There are all sorts of manipulations: overselling, p-value fishing, creative outliers, plagiarism, non-publication by journals that don’t want low-impact papers, and more. There is a growing call for a move towards open access publication – e.g. the Open Science Policy Platform, the European Open Science Cloud, principles such as DORA and FAIR (Findable, Accessible, Interoperable, Reusable), and an increase in pre-print sources. There is also a new range of ways in which science is being organised – how to make it sustainable in areas that aren’t receiving much funding, the use of pre-print services, and exploring the funding of peer review.

F1000 is focused on the speed of sharing findings. The model was developed with Wellcome and the Gates Foundation, creating a platform that is controlled by funders, institutions, and researchers; in this model, publishers are service providers. F1000 supports a wide range of outputs: research articles, data, software, methods, case studies. They check the paper technically: is the data behind it accessible, and has it been published before? Publication is followed by completely open peer review – you can see who is reviewing and what was done by the author. Within the article, you can see the stage of the research – even before peer review – making the paper a living document. There are usually 14 days between submission and publication, and usually a month including review. The peer review here is transparent and the reviewers are cited, which is good for ECRs looking to gain experience.

Indicators need to take into account career levels and culture (technical and reflective), not only fields, and to consider different structures – individual, group, institution. We need open metrics, badges that tell you what you are looking for, and also qualitative measures – traditional publications can curate articles.

Vincent Tunru (Flockademic, Netherlands) explored the issue of incentivising open science. Making science more inclusive means enabling more people to contribute to the scientific process. Open access can become the goal instead of the means of becoming more inclusive. If the information is free, people can read the results of publicly funded research, but there is a barrier to publishing research within the OA model – publication costs should be much lower: other areas (music, news) have seen costs go down because of the internet. In some disciplines, there is a culture of sharing pre-prints and getting feedback before submission to journals – places like arXiv already do this work. The primary value of submission to a journal is the credentialing, and high-ranking journals can create scarcity to justify the demand; Nature Scientific Reports is overtaking PLOS ONE because of that. We need to decouple credentialing from specific journals. Different measures of excellence are possible, but we need to consider how we do it today – assuming that reviewers and editors are the ones who decide what excellence means. We need to focus on inclusivity and affordability. [See Vincent’s blog post here]

Kim Holmberg (University of Turku, Finland) focused on altmetrics. Robert Merton pointed out as early as the 1950s that the referencing system is not only about finding work that wasn’t known before but also about recognising other researchers. This leads to how the journal impact factor and the h-index became part of research assessment; these have been used more and more in research evaluation, especially in the past 15 years. Earlier research has pointed out many flaws in them. In addition, they fail to take into account the complexity of scientific activities, nor do they tell you anything about the societal impact of research. One way to look at this complexity is the Open Science Career Assessment Matrix (OS-CAM).

We can think about the traces that people leave online as they go through the research process – discussing research ideas, collecting data, analysing, disseminating results. These traces can become altmetrics – another view of research activities. It is not just social media: the aim is to expand the view of what impact is about. With altmetrics we can analyse the networks that a researcher is involved in, and that can give insights into new ways in which the researcher interacts with society. Citations show that a paper has been used by another researcher, while altmetrics can indicate how it has been disseminated and discussed among a wider audience. But there are still lots of questions about the meaning and applicability of altmetrics.

There are reports from the Mutual Learning Exercise (europa.eu/!bj48Xg) looking at altmetrics, incentives, and rewards for open science activities. For instance, in the area of career and research evaluation, researchers need specific training and education about open science, while in the area of evolving authorship, ways of identifying and rewarding peer review and the publishing of negative results need to be developed. Implementation of open science needs to guarantee long-term sustainability and to reward role models who can demonstrate this new approach to doing science. The roadmap from the MLE suggests a process for this implementation.

Discussion: there is the issue of finding a good researcher within a group of researchers, and publications are one way to see their ideas – but the link to open science, and how it can help with this, is unclear. In any case, finding a good researcher does not happen through all these metrics – it is a human problem, not only a metric one. Will originality be captured by these systems? Publication is only a small part of research activity – in every domain there is a need to change and reduce publication volume, rather than expect someone to read the same paper again and again (after each revision). Attention is the scarce resource that needs to be managed and organised – we cannot assume that more readers will find a way to filter the information.

The response to this pointed out that because research funding is public, we should encourage publishing as much as possible so that others can find the information, but we need good tools for searching and evaluating research so that it can be found.

Another point of confusion – the desire to see the link between open access publication and open science. Open access can exist within the publish-or-perish structure: what is it in OA that offers an alternative to the closed publishing structure, and how can it lead us to different insights into researchers’ activities? In response, it was pointed out that it is important to understand the difference between Open Access and Open Science (OA = openly available research publications; OS = all the activities and efforts that open up the whole research process, including the publishing of research results).

There is growing pressure on people to become media savvy, and that means taking time away from research.

Altmetrics were originally thought of as a tool that can help researchers find interesting and relevant research, not necessarily as a tool for evaluation (http://altmetrics.org/manifesto/).


Session 3. Open research data: good data management and data access

Simon Hodson (CODATA) – Open Science and FAIR data. The reconciling element in the case for open science is the light that it shines on data, making it useful: it allows reuse, reproducibility, and replicability, which complement each other. CODATA is part of the International Council for Science, focusing on capacity building, policy, and coordination. The case for open science: good scientific practice depends on communicating the evidence. In the past, a table or a graph that summarised some data was an easy way of sharing information, but as data and analysis have grown, we need to change the practice of sharing results. The publication of “Science as an Open Enterprise” (2012) pointed out that failure to report the data underlying the science is seen as malpractice. Secondly, open data practices transform certain areas of research – genomics, remote sensing in Earth systems science. Can we replicate this in other research areas? Finally, can we foster innovation and reuse of data and findings within and outside the academic system, making them available to the public at large?

Open science has multiple elements – it is not only open access and open data. We need data to be interoperable and reusable; it should be available for machine learning, and there should be open discussion. There are perceptions of reproducibility of research, but also changes in attitudes. We need to think about culture – how scientific communities established their practices. Different research areas take very different approaches – e.g. in biomedical research data sharing is open, but in social science there is little experience of data sharing and reuse, and researchers don't see the benefits. There is a need for a sociology-of-science analysis of these changes. Some of the major changes – the meetings about genome research that produced the Bermuda principles and the Fort Lauderdale agreement – came about because of specific pressures. There is significant investment in creating data that is used only once – e.g. Hubble. Why is data from small experiments not open for reuse? We need to find ways of making this happen.

The FAIR principles allow data to be reusable. FAIR came from OECD work, the Royal Society report of 2012, and the G8 statement. What we need to address: skills, the limits of sharing, and the need to clarify guidelines for openness. We need standards and skills, and we need to reward data stewardship. We need to see citation of data. There is a need for new incentives – past cultural change happened when prominent people in a field set up an agreement.
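To make the discussion of FAIR a little more concrete, here is a toy sketch (not any official FAIR tooling, and all field names are my own illustrative choices) of the kind of metadata that supports each principle: a persistent identifier (Findable), a resolvable access point (Accessible), a standard format (Interoperable), and an explicit licence (Reusable).

```python
def fair_gaps(record):
    """Return hints for illustrative FAIR-supporting fields that are missing."""
    required = {
        "identifier": "Findable: persistent identifier (e.g. a DOI)",
        "access_url": "Accessible: retrievable via a standard protocol",
        "format": "Interoperable: open, standard data format",
        "license": "Reusable: explicit usage licence",
    }
    # A field counts as missing if absent or empty.
    return [hint for field, hint in required.items() if not record.get(field)]

# A hypothetical dataset record that is nearly, but not fully, FAIR-ready:
dataset = {
    "identifier": "doi:10.1234/example",              # hypothetical DOI
    "access_url": "https://repo.example.org/datasets/42",
    "format": "text/csv",
    "license": None,                                  # no licence declared yet
}

print(fair_gaps(dataset))
```

Running this flags only the missing licence – the kind of appraisal step that, as the session notes suggest, needs skills and rewards to become routine.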

Fiona Murphy (Fiona Murphy Mitchell Consulting, UK) is working in the area of data publishing, and provided the perspective of someone who is exploring how to practise open science. There are cultural issues: why share, with whom, what are the rewards, and what is the risk? There are technical issues: how is it done, and what are the workflows, tools, capacity, and time investment? There are also issues of roles and responsibilities, and of whose problem it is to organise the data.

Examples of projects: SHARC – a Research Data Alliance group on sharing rewards and credit – is international and multi-stakeholder, and aims to grow the capacity to share data. The group is working on a White Paper with recommendations. The main issues are standards for metrics: they need to be transparent, address reputation, and capture impact on a wider area. Also, what would be the costs of non-sharing? There are different standards in terms of policies; persistent identifiers and the ability to reproduce results are also needed. Equality of access to services is needed – how to manage peer-to-peer sharing, and how it is integrated into promotion and rewards. The way to explore this is by carrying out pilot projects to understand side effects. There is also a need to develop ethical standards.

The Belmont Forum Data Publishing Policy is looking at making data accessibility part of a digital publication, and at developing consistency of message so that researchers will know what they are facing. There are lots of issues – some standard wording is emerging, along with capturing multiple datasets, clarifying licensing, etc.

We can also think about how scholarly practice would have developed if the current technology had been in place from the start – scholarlycommons.org is suggesting principles for how “born digital” scientific practice should evolve. As part of this approach to thinking about the commons, they have created decision trees to help with the project. Working as an open scientist is a challenge today – for example, things such as developing decision-tree software are proving difficult for someone trying to act as a completely open scientist. It's a busy space, and there is a gulf between high-level policy and principles and their delivery.

Jeff Spies (Center for Open Science, Virginia) [via video-link] covered open research data, urgent problems, and incremental solutions, looking at the strategies that are the most impactful (which is a different question from the Center for Open Science's usual one). We need to broaden the definition of data – we need context: more than just the data itself or the metadata, context is critical for assessment and for metascience work. We can think of a knowledge graph – more than the semantic information of the published text: the relationships between people, places, data, methods, software, and so on. But the situation with incentives is that, from a psychological perspective, the reward for specific publications is so strong that it focuses attention on what is publishable. Rates of retraction go up as impact factor goes up. There is urgency, and a risk of lock-in as publishers try to capture the whole life-cycle of research. The problem is that culture change is very slow, and we need to protect the data – funders and policymakers are the ones who can make a difference. Researchers don't have the capacity to curate data, but libraries have the expertise and focus to provide a resource for that. One potential lever: asking researchers to link to their institutions' promotion policies, which would force universities to share them – and, if the policies mention data sharing, push universities to change.
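The "knowledge graph" idea above can be sketched in a few lines: research artefacts and people stored as subject–predicate–object triples, so that the context around a paper (who made it, with which data and software) is queryable alongside the text itself. This is my own minimal illustration, not any Center for Open Science tool, and all identifiers are hypothetical.

```python
# Hypothetical triples linking a paper to its wider research context.
triples = [
    ("paper:42", "authored_by", "person:alice"),
    ("paper:42", "uses_dataset", "data:survey-2017"),
    ("paper:42", "uses_software", "sw:analysis-toolkit"),
    ("data:survey-2017", "curated_by", "person:bob"),
]

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given fields; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    ]

# Everything recorded about paper:42 beyond its published text:
print(query(subject="paper:42"))
```

Even this toy version shows why such a graph supports metascience: one query surfaces the authorship, data, and software relationships that a PDF alone hides.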

Discussion: there is concern about the ability of researchers to deal with data. There is a problem of basic data literacy.

The problem with making data FAIR is that it costs around 10% of a project's budget, and it is not always clear where it is useful and where it is not enough or too much – just organising the data with the librarians is not enough, as data requires a lot of domain knowledge. There are significant costs. However, in the same way that the total cost of science includes the effort of peer review and of getting to publication (whether through subscription or publication fees), we should also pay for data curation. There is a need for appraisal, and for deciding how data and its processing will be handled.

We need to think about the future use of data – just as with natural history specimens, we can never know in advance what will be needed. Questions about the meaning of data are very important – it's not only specimens but also photographs, and not necessarily digital.

Libraries can adapt and can earn respect – they are experts in curation and archiving.

Session 4. Societal engagement 

Kazuhiro Hayashi (NISTEP, Tokyo, Japan) – open science as social engagement in Japan. His background is in science and technology: he has been involved in an open access journal, is keen on altmetrics, and is now involved in open science policy. He plays multiple roles, both top-down and bottom-up – from working in the G7 science expert group on open science to creating software and journals. He is involved in citizen science through a NISTEP journal and lectures, and in altmetrics, a multi-stakeholder workshop, and Future Earth. He showcased several studies:

Citizen science – the funding for science in Japan comes mainly from the state, and funded researchers have a difficult time doing public engagement, in contrast to spontaneous, “wild researchers”. He suggests a more symmetrical system that also creates independent researchers, who get their budget from business and publish in online journals. Wild researchers are based on crowdfunding and rely on the engagement of citizens. From his experience, he recognises a new relationship between citizens and scientists: new research styles, new career paths, and new funding. Negative aspects of citizen science include populism in crowdfunding – a project needs to be popular, which is not suitable for every topic. There is also a need for a new scheme that includes early-career researchers, and a potential for misuse and plagiarism because of a lack of data and science literacy.

Altmetrics – he contributed to the NISO Altmetrics Initiative working group. Altmetrics are difficult to define, and current altmetrics scores in the Japanese literature are closely related to Maslow's hierarchy of needs. There are plenty of institutional repositories, and access to journal articles through repositories is more social – the readers are non-researchers, who would otherwise not reach journal websites. There is a need to look at social impact – at mentions and network analysis – but this is difficult to analyse, and we need to look at the flow of data across the web.

Multi-stakeholder workshop – considering the future of open science and society, together with environmental sciences and informatics. One outcome is to think about erasing the influence of participants' different socio-economic statuses, about co-development of data infrastructure, and about action for social transformation. Capacity building is important, and we need to see how open science and transdisciplinary work co-evolve. Social engagement is very time-consuming and needs to be funded, and it needs to be open to creative activities by citizens and scientists. We should think about new relationships between science and society, and use tentative indicators to transform society and culture – creating a future of open science and society, moving from “publish or perish” to “share or perish”. Japan will have two citizen science sessions at the Japan Open Science Summit on 18-19 June 2018.

Muki Haklay (UCL, London, UK) [see my separate blog post]

Cecilia Cabello Valdes (Foundation for Science and Technology, Madrid, Spain) – societal engagement in open science. The foundation aims to promote the link between science and society, originally with the goal of increasing the interest of Spanish citizens in science. They manage calls and fund different activities (about €3,250K, across more than 200 projects). They run activities such as FameLab – events to promote science and technology in an open way. Their science news agency, SiNC, addresses the lack of awareness of scientific research – its papers are taken up by the general media, with over 1,000 journalists using the information. They run summer science camps, with 1,920 funded students in 16 universities. They also manage the national museum of science and technology (Muncyt), where they share the history of science and technology in Spain – a unique type of science museum.

In citizen science, they have done a lot of work on public awareness of science and technology, and on maintaining public support for science investment. More recently they created a council of foundations for science – there was little awareness among social foundations, which had invested in cultural activities but not in science. Three foundations are involved with the council, and they have direct contact with the minister to develop this area of funding. The second initiative is crowdfunding for science – they help carry out campaigns that create activities; it is also a tool of engagement.

Outreach is difficult – the council supports policymakers so that the general public becomes aware of the issues. So there are challenges: what needs to transform, and how do we measure it? One of the roles of the council is to incentivise policymakers to understand what they want to achieve, and then to have indicators that help show whether the goals are achieved. They participated in the process of policy recommendations about open science, and then translated that into action – for policymakers and society. FECYT also provides resources: access to WoS/Scopus, evaluation of journals, a standardised CV for researchers, and open science support. Finally, they participate in studies that measure science and its results.

Discussion: Science Shops – are there examples that link to maker spaces? Yes, there are examples of activities such as Public Lab, but also the Living Knowledge network.

Many societal engagement activities are not open science – they treat society as a separate entity, and there is a struggle in making citizen science into open science: the data remains closed. What aspects lend themselves to both open science and citizen science? There are many definitions and different ways to define the two, but, for example, the need to access publications, participation in the analysis of open data, and the production of open data are all examples of overlap.

Part of the discussion was about sharing knowledge: does it imply that a researcher is like anyone else, or is there a big difference between the scientific community and everyone else? The effort is not recognised in society – and if you removed the prestige, would anyone still want to participate in science?

On public interest – why do citizens want to participate in research? Citizens want the results of public research to help people improve their quality of life; science should address social problems.

On participation levels: Precipita is a new crowdfunding project – funds are not matched, but they provide technical help, and promotion is through campaigns run with different institutions.

Should citizen science democratise science? This is controversial – but when information became more accessible, as with Gutenberg, abilities increased. We need to make citizen science a way to increase access to science.

At the moment, open science sits in separate pockets, and we need to find a way to integrate these things together. There is a package that needs to be supported as a whole – access, data, and public engagement – and we need to focus on all of them.

Citizen science needs to be integrated into all of science, and it needs to produce results.

Session 5. Scientific Excellence re-visited

David Carr (Wellcome Trust, London, UK) – Wellcome is committed to making its research outputs available, seeing this as part of good research practice. As a funder, it has had a long-standing policy on open access to publications (since 2005) and other research outputs. The costs of carrying out public engagement and of open access publication should be part of the funding framework, and reviewers are asked to recognise and value a wide range of research outputs. There is still a need to think about reward and assessment structures, about sustaining the infrastructures that are needed, and about creating data specialists and managing the process of increasing their number. There are concerns in the research community about open access. Wellcome has established an open research team, looking at funder-led and community-led activities as well as policy leadership. It now has the WellcomeOpenResearch.org publishing platform, which uses the F1000 platform, and it also ran the Open Science Prize. On policy leadership, it supports, for example, the San Francisco DORA (Declaration on Research Assessment). It is also looking at changes to application forms to encourage other forms of outputs, and then providing guidance to staff, reviewers, and panel members. It celebrates with applicants when they do open research, and informs them about the criteria and options. It also carries out efforts to evaluate whether open science actually delivers on its promises, through projects in different places – e.g. the McGill project.

Learning from the Arava Long-Term Socio-Ecological Research workshop

The Eilot region, near Eilat in Israel, is considered locally as a remote part of the Negev desert (it is about a 3.5-hour drive from the population centres of Tel Aviv). It is an arid desert with a very sparse population – about 4,000 people who live in communal settlements, mostly kibbutzim, in an area of 2,650 sq km (about the area of Luxembourg). This is a very challenging place for Western-style human habitation, in an area with a fragile desert ecosystem. The region, and the Arava Institute at the centre of it, provided the stage for a workshop of the Long-Term Socio-Ecological Research (LTSER) network, with participants from the European network and supported by the eLTER H2020 project. LTER and LTSER are place-based research activities that are led from an ecological perspective, with the latter integrating social aspects as an integral part of its inquiry and research framework.

The workshop ran from 4-8 March on location, which allowed immersion in the issues of the place, as well as the exchange of experiences and views across the different “platforms” (the coordination bodies for the different sites that are used for LTSER research). While the people who are involved in the network were mostly familiar with one another, I was the external guest – invited to provide some training and insight into the way citizen science can be used in this type of research.

It's been over 21 years since I was last in this place – which I visited several times from my childhood to my late 20s. With long experience of living in the UK, I felt like an outsider-insider – I can understand many of the cultural aspects while, at the same time, bringing the thinking and practice that has been shaped through my work at UCL over this period. In disciplinary terms, too, I was an outsider-insider – I'm interested in ecology, and through citizen science I am linked to many people and activities in this area of research; however, I'm not an ecologist (leaving aside what exactly my discipline is). Because of that, I am aware of ecologists' frameworks, research questions, and issues (e.g. limited funding and marginalisation in science and research policy), which helped me in understanding the discussion and participating in it.

Visiting the area, discussing the social and ecological aspects, and progressing on a range of concepts, brought up several reflections that I’m outlining here:

First is the challenge of sustainability and sustainable development in such an area. It was quite telling that the head of the region, who is an active scientist, pointed out that they want progressive development, and not exactly sustainable development. As we visited and travelled through the area, we saw the challenges of achieving sustainability: limited demographic growth and economic development must ensure that the high quality of life the communities have carved out in a hostile environment can continue. This means attracting younger people who want to be part of a specific kibbutz community (the average age in the current settlements is quite high); bringing in commercial activities that match the characteristics of the area without altering them hugely – such as renewable energy (the area already receives 70% of its energy from renewables during the day), agriculture (the area is a large producer of Medjool dates), and tourism (a new airport is about to be finished for flights from Europe); and doing all this while paying attention to the environmental and natural aspects of the area.

Second, the importance of the cultural shaping of the human-environment relationship in the area. The social organisation, the focus on agriculture (in addition to the dates there is an important milk dairy), and a strong belief in the power of technology to offer solutions to emerging problems stood out as major drivers of the way things happen. Each kibbutz has a specific culture, which influences its social and operational characteristics, so each makes collective decisions according to its specific organisation, and this has an environmental impact. For example, with the increase in heat due to climate change, Yotveta, with a big herd of milk cows maintained in desert conditions, must be facing tougher challenges – and we heard that Ketura made the decision not to maintain their herd. The impact here is an increasing use of water to cool the cows, not to mention the need to bring the feed from outside – I'd guess through Eilat port, which is a short drive away. Agriculture is important both in the general ethos of the kibbutz movement and as a significant source of economic income – and at the moment the dates are suitable in terms of the income that they provide. The way technological optimism is integrated into this vision is especially interesting, and was pointed out by several participants. Several local presenters (some of them decision makers) mentioned that the region wants to be a “silicon valley of renewable energy”, and there is already rapid development of various solar energy schemes in different settlements, a research centre, and the remains of a Better Place battery replacement station – but clearly nothing on the scale of, say, the Masdar Institute, or anything similar in terms of R&D effort, so it is not clear what stands behind this phrase. It seems more like a beacon of energy independence of some sort, and the provision of energy to the nearby city of Eilat as a source of income.
The local presentations and discussions showed a strong “frontier” conceptualisation of personal and collective roles, and this comes first in terms of the relationship with the environment. The result is odd – organic date palms planted next to fragile sand dunes, and issues with waste management…

Third, it was not surprising to hear about citizen science activities in the area, including a recent winter bird survey that was initiated by several environmental bodies, which used Esri Survey123 forms to collect data at several specific sites, provided the participants with shelter and food during a weekend, and had excellent results. The area is perfect for citizen science activities – it has a highly educated population, large areas of nature reserves, very good mobile connectivity even off the roads, and environmental awareness (even if actions are contradictory). It is also a critical place for migrating birds, and there is a small visitor and research park near Eilat. At least from the point of view of LTSER, there is potential for a range of activities that can cater for locals and for tourists.

Fourth, it was interesting to have discussions about citizen science that moved well beyond concerns over data quality (although I did have some of those too – as expected!). Amongst ecologists, the term citizen science is familiar, though not the full range of possibilities and issues. There were many questions about potential cross-site projects, recruitment and maintaining work with participants, creating new projects, and even using the results from citizen science in policy processes and gaining legitimacy.

Fifth, and something that I think is worth exploring further – I couldn't escape the thought that it would be very interesting to compare the kibbutz social and cultural organisation over time with open source and open knowledge projects. A concern that we heard throughout the visit is the need for demographic growth, but with very specific and testing conditions for anyone who wants to join – beyond the challenging environmental conditions. There is a fascinating mix of strong ideological motivations (settling the desert, living in communal settings, doing agriculture in the desert) with actions that are about comfort and quality of life, and, as a result, concern about the ageing of the core population, many of them from the founding generation. I can see parallels with open knowledge projects such as OpenStreetMap, or citizen science projects, where you hear two contradictory statements at once – a wish to bring more people on board, combined with a strong demand for commitment and practical barriers to entry, which as a result create a stable core community which slowly ages…

The workshop was summarised graphically by Aya Auerbach.

New book: European Handbook of Crowdsourced Geographic Information

COST ENERGIC is a network of researchers across Europe (and beyond) who are interested in researching crowdsourced geographic information, also known as Volunteered Geographic Information (VGI). The acronym stands for ‘Co-Operation in Science & Technology’ (COST) through the ‘European Network Researching Geographic Information Crowdsourcing’ (ENERGIC). I have written about this programme before, through events such as Twitter chats, meetings, summer schools, and publications. We started our activities in December 2012, and now, 4 years later, the funding is coming to an end.

One of the major outcomes of the COST ENERGIC network is an edited book dedicated to research on VGI. Following the openness of the field, in which many researchers use open sources to analyse locations, places, and movement, we decided that the publication should be open access – free to download and reuse. To achieve that, we approached Ubiquity Press, who specialise in open access academic publishing, and set up a process of organising the writing of short and accessible chapters from across the spectrum of research interests and topics covered by members of the network. Dr Haosheng Huang (TU Wien) volunteered to assist with the editing and management of the process. The chapters then went through internal peer review, and another cycle of peer review following Ubiquity Press's own process, so the book is thoroughly checked!

The book includes 31 chapters with relevant information about applications of VGI and citizen science, management of data, examples of projects, and high-level concepts in this area.

The book is now available for download here. Here is the description of the book:

This book focuses on the study of the remarkable new source of geographic information that has become available in the form of user-generated content accessible over the Internet through mobile and Web applications. The exploitation, integration and application of these sources, termed volunteered geographic information (VGI) or crowdsourced geographic information (CGI), offer scientists an unprecedented opportunity to conduct research on a variety of topics at multiple scales and for diversified objectives.
The Handbook is organized in five parts, addressing the fundamental questions:

  • What motivates citizens to provide such information in the public domain, and what factors govern/predict its validity?
  • What methods might be used to validate such information?
  • Can VGI be framed within the larger domain of sensor networks, in which inert and static sensors are replaced or combined by intelligent and mobile humans equipped with sensing devices?
  • What limitations are imposed on VGI by differential access to broadband Internet, mobile phones, and other communication technologies, and by concerns over privacy?
  • How do VGI and crowdsourcing enable innovative applications to benefit human society?

Chapters examine how crowdsourcing techniques and methods, and the VGI phenomenon, have motivated a multidisciplinary research community to identify both fields of applications and quality criteria depending on the use of VGI. Besides harvesting tools and storage of these data, research has paid remarkable attention to these information resources, in an age when information and participation is one of the most important drivers of development.
The collection opens questions and points to new research directions, in addition to the findings that each of the authors demonstrates. Despite rapid progress in VGI research, this Handbook also shows that there are technical, social, political and methodological challenges that require further studies and research.


Science Foo Camp 2016

Science Foo Camp (SciFoo) is an invitation-based science unconference organised by O'Reilly Media, Google, Nature, and Digital Science. Or, to put it another way, a weekend event (from Friday evening to Sunday afternoon) where 250 scientists, science communicators and journalists, technology people from areas that relate to science, artists, and ‘none of the above’ come and talk about their interests, other people's interests, and new ideas, in a semi-structured way.

As this is an invitation-only event, when I got the invitation I wasn't sure if it was real – only to replace this feeling with excitement after checking some of the information about it (on Wikipedia and other sites). I was also a little bit concerned after noticing how many of the participants are from traditional natural science disciplines, such as physics, computer science, neuroscience, chemistry, and engineering (‘impostor syndrome‘). However, the journey into citizen science since 2010 and the first Citizen Cyberscience Summit has led me to fascinating encounters at ecological conferences, with physicists and environmental scientists, synthetic biologists, epidemiologists, and experimental physicists, in addition to links to Human-Computer Interaction researchers, educational experts, environmental policy makers, and many more. So I hoped that I could also communicate with the scientists who come to SciFoo.

I was especially looking forward to seeing how the unconference is organised and run. I have experienced unconferences (e.g. WhereCampEU in 2010, and parts of State of the Map) and organised the Citizen Cyberscience Summits in 2012 and 2014, where we meshed up a formal academic conference with an unconference. I was intrigued to see how it works when the O'Reilly Media team runs it, as they popularised the approach.

The event itself ran from the evening of Friday to early afternoon on Sunday, with a very active 45 hours in between.

The opening of the event included the following information (from Sarah Winge, Cat Allman, Chris DiBona, Daniel Hook, and Tim O'Reilly): the Foo Camp is an opportunity for a bunch of really interesting people to get together and tell each other interesting stories – talk about the most interesting story that you've got. The main outputs are new connections between people. It is an opportunity to recharge and to get new ideas – helping each person to recharge using someone else's battery. The ground rules include: go to sessions outside your field of expertise – an opportunity to see the world from a different perspective; be as extroverted as you can possibly be – don't sit with people that you know, as you'll have a better weekend talking to different people. The aim is to make a conference that consists mostly of breaks – it's totally OK to spend time not in a session; and the law of two feet – it's OK to come and go from sessions. It's a DIY event. There are interesting discussions between competitors, commercial or academic – so it is OK to say that part of a conversation will be kept confidential.

The expected scramble to suggest sessions and fill the board led to a very rich programme with huge variety – 110 sessions over a day and a half, ranging from ‘Origami Innovations’ and ‘Are there Global Tipping Points?’ to ‘Growth Hacking, Rare Disease R&D’ and ‘What do we know about the universe? And what don't we know?’. Multiple sessions explored open science (open collaborations, reproducibility, open access publication), issues with science protocols, increasing engagement in science, gender, and social justice, side by side with designer babies, geoengineering, life extension, artificial intelligence, and much more.

In addition, several curated sessions of lightning talks (5-minute rapid presentations by participants) provided a flavour of the extent of the areas that participants cover. For example, Carrie Partch talked about understanding how circadian cycles work – including the phenomenon of social jet-lag, with people sleeping much more at weekends to compensate for lack of sleep during the weekdays. And Elaine Chew demonstrated her mathematical analysis of different music performances and her work as a concert pianist.

I followed the advice from Sarah and started conversations with different people during meals, on the bus to and from SciFoo, or during coffee breaks. Everyone around was doing the same – it was wonderful to see people all around introducing themselves and starting to talk about what they are doing. I found myself learning about research on common drugs that can extend the life of mice, brain research with amputees, and discussing how to move academic publications to open access (but somehow ending up with the impact of the Cold War on investment in science).

I organised a session about citizen science, crowdsourcing and open science, in which the discussion included questions about science with monks in Tibet and patients’ active involvement in research about their conditions. I joined two other sessions: ‘Making Science Communication Thrilling for the Lay Person‘ with Elodie Chabrol (who runs Pint of Science) and Adam Davidson; and ‘Science Communication: What? What? How? Discuss‘ with Suze Kundu, Jen Gupta, Simon Watt & Sophie Meekings. Plenty of ideas (and even a sub-hashtag to get responses to specific questions) came from these sessions, but also a realisation of the challenges that early career academics face in developing their skills in this area, with discouraging remarks from more senior academics and potential career risks – so we also dedicated time to thinking about appropriate mechanisms to support public engagement activity.

Another fantastic discussion was led by Kevin Esvelt about ‘Better than nature: ethics of ecological engineering‘ – which involves gene editing with techniques such as CRISPR, with potentially far-reaching impacts on ecological systems. This session demonstrated how valuable it is to have an interdisciplinary conference where the expertise of the people in the room ranges from geoengineering to ecology and ethics. It was also a mini-demonstration of Responsible Research and Innovation (RRI) in action, where potential directions of scientific research are discussed with a range of people with different backgrounds and knowledge.

The amount of input, encounters and discussion at SciFoo is overwhelming, and the social activities after the sessions (including singing and sitting by the ‘fire’) are part of the fun – though it was a very exhausting 40 hours.

Because SciFoo invitees include a whole group of people from science communication, and because SciFoo coincided with Caren Cooper’s stint running the @IamSciComm Twitter account, where she discussed the overlap between citizen science and science communication, I paid attention to this overlap during the meeting. The good news is that many of the scientists had some idea of what citizen science is. I always check that people know the term before explaining my work, so it’s great to see that the term is gaining traction. The less good news is that it is still categorised under ‘science communication’, and maybe a useful session would have been ‘What is the problem that scientists have with citizen science?’.


For me, SciFoo raised the question of the value of interdisciplinary meetings and how to make them work. The list of organisers, the location, the exclusiveness and the mystery of the invitation (several people, including me, wondered ‘It’s great being here, but how did they find out about my work?’) all make it possible to gather such an eclectic collection of researchers. While it’s obvious that the list is well curated with consideration of research areas, expertise, background, academic career stage, and diversity, the end result and the format open up the possibility of creative and unexpected meetings (e.g. during lunch). My own experience is that achieving anything that approaches such a mix of disciplines in a common ‘bottom-up’ academic conference is very challenging and needs a lot of work. The Citizen Cyberscience Summits, the ECSA conference, or the coming Citizen Science Association conference are highly interdisciplinary in terms of the traditional academic areas from which participants come – but they require convincing people to submit papers and come to the conference. Usually, the interdisciplinary event is an additional commitment on top of participants’ disciplinary focus, and this creates a special challenge. Maybe it would be possible to achieve similar interdisciplinary meetings by getting endorsements from multiple disciplinary societies, or support from bodies with a wide remit like the Royal Society and the Royal Academy of Engineering.

Another thought is that the model of reaching out to people and convincing them that it is worth their while to come to such a meeting might also work better in allowing mixing, as open calls are affected by ‘self-deselection’, where people decide that the conference is not for them (e.g. getting active participants to a citizen science conference, or ensuring that papers come from all flavours of citizen science).

Another delightful aspect was noticing how the unconference format worked with people who (mostly) hadn’t experienced it before – the number of slots and opportunities was enough for people to mostly put their sessions forward. Despite the call for people to be extroverts, less confident people will prepare their ideas more slowly and can end up outside the grid. It was nice to see how some places in the grid were blocked off during the early stages, and then released for ideas that came up during breaks, or for sessions that were proposed more slowly and didn’t secure a spot. There might also be value in restricting people to one session at first, and then progressing to more. What are the steps required to make an unconference format inclusive at the session-setting stage?

In contrast to the approach in academic meetings of controlling the number of parallel sessions (to ensure enough people show up to each session), SciFoo has so many that most sessions are held with a small group of about 10 to 20 people. This makes them more valuable and suitable for exploratory discussions – which worked well in the sessions that I attended. In a way, at its best, SciFoo is many short brainstorming sessions which leave you wishing you could discuss for longer.

If you get an invitation (and being flattered is part of the allure of SciFoo), it is worth going on the wiki, giving a bit of a description of yourself and thinking about a session that you’d like to propose – a ‘+1’ can help you get a feeling of whether people will be interested in it. Think about a catchy title that includes keywords, and remember that you are talking to intelligent lay people from outside your discipline, so prepare to explain some core principles for the discussion in 5 minutes or so. Don’t dedicate the time to telling people only about your research – think of an issue that bothers you to some degree and that you want to explore (for me it was the connection between citizen science and open science), and consider that you’ll have one hour to discuss it.

Follow the advice – say hello to everyone and have great conversations during breaks, and don’t go to sessions if the conversation is more interesting. Another take on the meeting is provided by Bjoern Brembs on his blog – he is the person with whom I had the open access conversation (and I’m still unsure how we ended up at the Cold War). Also remember to enjoy the experience, sit by the ‘fire’ and talk about things other than science!

 

 

ECSA2016 ThinkCamp Challenge: how can Overleaf support collaborative writing between academics and citizen scientists?

Overleaf, ThinkCamp Challenge, collaborative writing – lots of jargon for a title – so let’s start by explaining these terms, and then I’ll cover what happened (that’s the abstract).

Background – what are Overleaf, ThinkCamp, and Challenge? (Introduction)

Overleaf is a scientific technology company that offers a collaborative environment for writing scientific papers. Overleaf is based on LaTeX – typesetting software that is popular in many disciplines: Computer Science, Physics, Mathematics, Statistics, Engineering, Economics, Linguistics and other fields. Importantly, Overleaf simplifies the scientific writing process by providing the templates that scientific journals use, support for collaboration, the ability to add comments, and other tools that make it easy to write academic papers. LaTeX is complex to use, and Overleaf aims to ease the process of learning and using it in academic writing. Overleaf was a sponsor of the European Citizen Science Association conference ThinkCamp, so together with them we developed a challenge. Let’s explain what a ThinkCamp is before turning to the challenge.
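To give a flavour of what LaTeX looks like, here is a minimal, purely illustrative article skeleton – the kind of structure that Overleaf’s journal templates fill in for you (the title, authors and section text are invented for this example):

```latex
\documentclass{article} % a journal template would supply its own document class

\title{An Illustrative Citizen Science Paper}
\author{First Author \and Second Author}

\begin{document}
\maketitle

\begin{abstract}
A short summary of the paper goes here.
\end{abstract}

\section{Introduction}
Body text goes here; LaTeX takes care of numbering, cross-references
and layout, which is what makes journal templates possible.

\end{document}
```

The markup is exactly why LaTeX has a learning curve – and why an environment that provides the boilerplate and compiles it for you lowers the barrier.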

A ThinkCamp is a type of open event associated with the ‘unconference’ approach, which in our context means taking part of an academic conference and opening it up to anyone who wants to step forward and explore a topic that came up during the conference, or that they have been working on for a while. In a ThinkCamp in particular, the activity is structured around discussion/exploration groups that are given space to write, draw and share ideas. The themes are called ‘Challenges’. Some of the themes are offered in advance by people who are coming to the conference, and there is usually space for people to suggest their own ideas on the day. The day starts with a one-minute description of each challenge. Even with the planned challenges, those who proposed them can’t say much about them in that time, and they rely on the collective intelligence of those who are interested in the topic to explore it. In effect, a ThinkCamp is multiple brainstorming and idea-generation events happening in the same space. People can move between groups, drop in and out, and contribute as little or as much as they want. A Challenge can be physical or require programming, but it can also be purely discussion-based. For the ECSA 2016 ThinkCamp, the conference organisers invited the local Berlin grassroots science and maker communities to collaborate with conference attendees on a number of citizen science Challenges.

What was the challenge? (Methodology)

For this specific challenge, we defined it as ‘The Overleaf Collaborative Writing Challenge – How can Overleaf support collaborative writing between academics and citizen scientists?‘. The focus here is on scientific papers that come out of a citizen science project. It is now becoming more common to include citizen scientists as co-authors of a paper. However, can they have more direct involvement in the process of writing, so they are more involved in the scientific process? This was the ‘research question’ (more accurately, the idea) for the session.

We had a table with a poster that included information about the challenge, and held two sessions, each of about an hour and a half. In each session, about 6 to 8 people joined me, with one person staying for both sessions (Artemis Skarlatidou), and other people joining for part or all of the discussion (among them Alison Parker, Avinoam Baruch, Berk Anbaroglu, Christian Nold, Denise Gameiro, Jon Van Oast, Julia Aletebuchner, Libby Helpburn, Lotta Tomasson, Sultan Kocaman, and surely several other people).

Although we looked briefly at the Overleaf system at the beginning of the discussion, it quickly expanded to the core issues of collaboration between scientists and citizen scientists on writing papers together.

What did we talk about? (Results)

I attempted to facilitate the discussion while allowing people to raise their points and discuss them at length. As usual, some discussion points led to other discussion points. Over the three hours, we filled about four flip-chart pages, which are shown below (Figure 1).

Figure 1: Flip-chart pages of discussion points (click to enlarge)

So what did we discuss?

We refined our problem, and decided that our assumption is a situation where a scientist initiates the paper and leads the process of writing, but in collaboration with citizen scientists. Of course, papers that are led by citizen scientists are very important, but as with many prototyping activities, we wanted to start with a scenario that makes the problem less hard – at least one member of the team will know what is expected in terms of the publication process. There are many citizen scientists who already publish (e.g. in astronomy or biological recording – see Diary of a Citizen Scientist, which in its last pages describes the scientific outcome of the author’s work), but we are talking about the general case. I still recall how daunting the first paper feels, and I also know how special it feels to have the first paper published (it’s one of the precious things about working with PhD students), so let’s assume that we’re talking about a first paper, with someone helping.

The topmost issue is explaining to citizen scientists why a peer-reviewed paper is a worthwhile effort – some websites and systems (e.g. Public Lab research notes) offer alternatives to academic publication – but peer review can increase the value of the work in terms of policy impact, authority and other aspects. What are the exact reasons for people to join in? This is something that we need to understand better.

We started with the components of a paper – introduction, literature review, methodology, results – and the need to understand why they are there and how to make sense of them. There is an AAAS website that helps in learning how to read an academic paper. Some tips are also available in other places – and the fact that there is so much material online teaching people how to read scholarly articles tells you that it’s not a trivial task! We can also research and identify material on library websites that teaches undergraduate students how to read and write scientific papers, and choose the best resources for citizen scientists. We need to indicate that some effort is required, but also chunk the learning material. Pop-ups and context-specific help attached to each section of the paper would help, and, as Overleaf already does, the sections can come with placeholder text in place.

Once people have learned the aim of the project and the components of an academic paper, we need a way for them to show which part they would like to contribute to – maybe they want to comment on the methodology and not on other parts (so we might have a matrix linking people with parts of the paper). Further discussion led to the main insight of the session: we can split the roles that are needed in academic paper writing, and allow people to decide what they want to do. The roles include: authoring text, fact checking, reference checking, chart and graph design, map design, translation, checking for comprehension, proofreading, reviewing, checking the statistics for mistakes, and possibly more. We can think of a system to match skills and tasks – like PeerWith – but there are problems: first, we should do it inside the project, and be careful not to slip into exploitation or undermining freelance editors, proofreaders, graphic designers, etc. There is, of course, a huge advantage to engaging people from within the project – they will do the work from a much more informed position. Consider projects with many thousands of volunteers (OpenStreetMap, Zooniverse, BOINC) – it would be possible to link the multiple skills of participants to the many scientists who are involved in different projects and might want to work collaboratively on papers. Under these conditions, we will have major issues of trust on all sides, and of confidence among the citizen scientists that they can contribute. We need interface nudges and support to overcome these. We need to clearly communicate the aspects of each role, and the compensation and benefits (e.g. authorship, payment?).
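As a thought experiment, the matrix linking people with roles could be sketched in a few lines of code – all the names, role labels and data below are invented for illustration, not part of any existing system:

```python
# Illustrative sketch: matching volunteers' declared skills to the
# paper-writing roles discussed above. All data here is made up.

ROLES = ["authoring", "fact checking", "reference checking",
         "chart design", "translation", "proofreading"]

# Each volunteer declares the roles they are willing and able to take on.
volunteers = {
    "Alice": {"proofreading", "translation"},
    "Bob": {"chart design", "fact checking"},
    "Carol": {"authoring", "reference checking", "proofreading"},
}

def match_roles(volunteers, roles):
    """Return a mapping from each role to the volunteers able to take it."""
    return {role: sorted(name for name, skills in volunteers.items()
                         if role in skills)
            for role in roles}

matches = match_roles(volunteers, ROLES)
print(matches["proofreading"])  # → ['Alice', 'Carol']
print(matches["fact checking"])  # → ['Bob']
```

Even a toy version like this makes the open questions visible: empty lists flag roles nobody has volunteered for, and a real system would add trust, availability and mentoring on top.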

Back to the process of writing the different sections of the paper: we can give elements of training to contributors, according to how much they want to commit and how much time they’ve got. It probably makes sense to do micro-training with expanding levels of information.

We need to consider how we open up papers and material that sit behind a paywall, to allow citizen scientists to be involved in a meaningful way.

We can also consider a gradual process, where there is a pre-writing stage in which we agree on the narrative, order, and images that will be used – we can use accessible language to sort out the list, e.g. ‘what is the problem?’ (for the introduction), ‘what do we know?’ (for the literature review), or ‘what have we done?’ (for the methodology). We can think of the paper as the final object, and have a structure to support its development through sub-objects.
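The translation between accessible questions and formal section names could be as simple as a lookup table – the wording below is the plain-language phrasing from the discussion, and the structure is purely a sketch:

```python
# Illustrative mapping from plain-language prompts to paper sections,
# as discussed in the pre-writing stage above.
SECTION_PROMPTS = {
    "What is the problem?": "Introduction",
    "What do we know?": "Literature review",
    "What have we done?": "Methodology",
}

print(SECTION_PROMPTS["What do we know?"])  # → Literature review
```

A writing tool could present contributors with the questions and quietly file their text into the corresponding formal section.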

The second major insight of the session was the introduction of a role for science communication experts, as facilitators between citizen scientists and scientists. The process will need a lot of communication, and we need to link to tools for managing chats (instant messaging), calls and maybe video. The volunteers need to be mentored and get feedback, so their skills improve.

We explored what each side brings to the equation. Citizen scientists bring skills and knowledge, and gain experience in writing a paper and a scientific publication with their name on it. Science communicators bring translation between scientists and citizen scientists, and the ability to explain why a paper is valuable, what the parts of the paper are, and why things happen the way they do; they gain by being employed in an active role in the process. Scientists benefit from having lots of help with their paper, and in return they need to act as mentors and cover the publication fees (assuming open access).

What next? (Discussion and conclusions)

We realised that this is a complex process that will need plenty of effort to make it happen, but that it is possible to facilitate it with Web tools. There are plenty of open issues, and it might be an idea to develop a small research/public engagement project on the basis of these ideas. If you have ideas, comments and suggestions – please help us!