OECD Open Science and Scientific Excellence Workshop – Paris

The OECD organised and hosted a Global Science Forum (GSF) and National Experts on Science and Technology Indicators (NESTI) workshop on "Reconciling Scientific Excellence and Open Science: What do we want out of science and how can we incentivise and monitor these outputs?" (9 April 2018, OECD). In agreement with the OECD Secretariat, the information here is not attributed to anyone specific (here is the blog post about my own presentation).

The workshop opened with the observation that speaking about "reconciling" open science and scientific excellence implies that the two are contradictory. Scientific excellence has been based on the value of publications, but the digital transformation and the web have changed things – from elite access to a library where outputs are held to outputs that are available to everyone over the web, with citizens accessing data directly. We also need to look at the future – opening up even more is positive, but there are challenges in measurement, and in the impact of different bibliometrics and other indicators.

Openness is happening quickly, and we need to understand the transformation and then think about the statistical aspects of this information. There is an effort to develop a roadmap for integrating open science across science policy initiatives.

The area is fairly complex: excellence, how science is changing, and how to incentivise and measure science are all tightly related to each other. Some of the fundamental questions: what do we want from science? Only excellence, or other things? How can we incentivise the academic community to move in the direction of open science, and what does the science policy community need to do about it? The national statistical communities and the Global Science Forum are two important groups that can influence this, both in terms of policy and in measuring the impacts and processes.

The meeting is looking at open science, publishing, open data, and engagement with society, as well as indicators and measurement.

The slides from all the talks are available here. 

Session 1. Scientific excellence through open science or vice versa? What is excellence and how can it be operationalised in the evidence and policy debate?

Paula Stephan (Georgia State University, USA) addressed two challenges in science – a lack of risk-taking, and a lack of career opportunities for early career scientists in research – the factors behind them, especially short-term bibliometrics, and how open science can help in dealing with these issues.

The original rationale for government support of science is the high risk associated with basic research. Yet competitive selection procedures reduce risk-taking and lead applicants towards safer options to secure funding (including at NIH or the ERC). James Rothman, who won the Nobel Prize in Physiology or Medicine, pointed out that in the 1970s there was tolerance for a much higher level of risk, which allowed him to explore things for five years before he started being productive. Concerns about these aspects were raised in the American Academy of Arts and Sciences' 2008 ARISE report, and agencies such as NASA and DARPA have become much more risk-averse.

In addition, there is a lack of career opportunities for ECRs – the number of PhDs is growing, but the number of research positions is declining, both in industry and academia. Positions are scarce, and working in universities has itself become the "alternative career". Because scarce jobs and research applications are assessed on short citation windows, a high-impact journal paper is critical for career development – postdocs are desperate to get a Nature or Science paper. An assessment of novel papers (papers that combine references never before cited together) showed that only 11% of papers are novel, and high novelty is associated with risk: a disproportionate concentration at both the top and the bottom of the citation distribution, and citation from outside the field. The more novel a paper is, the less likely it is to appear in a high-ranking journal. Bibliometrics therefore discourage researchers from taking risks with novel papers.
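As a rough sketch of how such a novelty measure can be operationalised (this is an illustrative simplification, not the actual method of the study cited in the talk), a paper can be scored by counting the reference pairs it contains that have never been cited together before:

```python
from itertools import combinations

def novel_pairs(paper_refs, seen_pairs):
    """Count reference pairs in a paper that have never been co-cited
    before (a simplified proxy for 'novelty'), and record the paper's
    pairs in seen_pairs for scoring subsequent papers."""
    pairs = {tuple(sorted(p)) for p in combinations(paper_refs, 2)}
    new = pairs - seen_pairs
    seen_pairs |= pairs
    return len(new)

seen = set()
print(novel_pairs(["refA", "refB", "refC"], seen))  # 3: all pairs are new
print(novel_pairs(["refA", "refB", "refD"], seen))  # 2: (refA, refB) seen before
```

In the real bibliometric studies the combinations are typically of cited journals rather than individual papers, and counts are normalised against a random baseline; the sketch only shows the core idea of "never before made together".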

Open science offers opportunities here – citizen science provides new ways of addressing some of these issues, e.g. crowdfunding to accommodate risky research. In addition, publication in open access can support these novel-paper strategies.

Richard Gold (McGill University, Montreal, Canada) looked at why institutions choose open science. The costs of research are increasing exponentially, yet this is not enough and there are requests for more funding. Productivity is declining, measured by the number of papers per investment, and firms are narrowing their research focus.

We can, therefore, consider Open Science partnerships – OA publications, open data, and no patents on co-created outputs – as a potential way to address these challenges. Such partnerships can be centred on academic and not-for-profit research centres, generally aimed at a basic understanding of scientific issues, with data at the centre. Institutions see this as a partial solution: it decreases duplication since there is no need to replicate, provides quality through many eyes, and creates synergies through a more diverse set of partners. It can increase productivity because data can be used in different fields, drawing on wider networks of ideas and the ability to search through a pool of ideas. Across fields we see more researchers but fewer outputs. In patent applications, the 1950s were the recent peak in novelty in terms of linking unrelated fields, and this has been dropping since.

An alternative to this is a system like the Structural Genomics Consortium – attracting philanthropic and industrial funding. There is also a citizen science aspect – the ability to shape the research agenda in addition to providing data. Secondly, the data can be used with the relevant communities – patients and indigenous groups are more willing to be involved. Open science better engages and empowers patients in the process, making it easier to get consent.

Discussion: during the selection of projects, bibliometric indicators need to be removed from applications and funding decisions. People are needed to read the research ideas, and there is a need to move away from funding only a single person as first author – teams need to be incentivised and supported. We also need to think about how to deal with the downstream impact of research, not only the original research (someone might use a dataset produced through open science for a publication, rather than the person who did the work).

There is a sense that the lack of risk-taking is an issue, but there is a need to measure it and show whether it is happening. Many scientists are self-censoring their work, and there is a need to document this. The global redistribution of researchers is about which areas people concentrate on – e.g. between physics and agriculture.

Session 2 – Open access publication and dissemination of scientific information

Rebecca Lawrence (Faculty of 1000) described how F1000 aims to develop a different model of publication by separating publication from evaluation. Publication exists because of funders, while researchers evaluate each other by where they publish. There are all sorts of manipulations: overselling, p-value fishing, creative outliers, plagiarism, non-publication by journals that don't want low-impact papers, and more. There is a growing call to move towards open access publication – e.g. the Open Science Policy Platform, the European Open Science Cloud, principles such as DORA and FAIR (Findable, Accessible, Interoperable, Reusable), and an increase in pre-print sources. There is also a new range of ways in which science is being organised – how to make it sustainable in areas that aren't receiving much funding, the use of pre-print services, and exploring the funding of peer review. F1000 is concerned with the speed of sharing findings. The model was developed with Wellcome and the Gates Foundation, creating a platform that is controlled by funders, institutions, or researchers; in this model, publishers are service providers. F1000 supports a wide range of outputs: research articles, data, software, methods, case studies. They check each paper technically: is the data behind it accessible, and has it been published before? Publication is followed by completely open peer review – you can see who is reviewing and what was done by the author. Within the article, you can see the stage of the research, even before peer review, making the paper a living document – usually 14 days pass between submission and publication, and usually a month including review. Peer review here is transparent and reviewers are cited, which is good for ECRs wanting to gain experience.

Indicators need to take into account career levels and culture (technical and reflective), not only fields, and to consider different structures – individual, group, institution. We need open metrics, badges that tell you what you are looking at, and also qualitative measures – traditional publications can curate articles.

Vincent Tunru (Flockademic, Netherlands) explored the issue of incentivising open science and making science more inclusive – enabling more people to contribute to the scientific process. Open access can become the goal instead of the means of becoming more inclusive. If the information is free, people can read the results of publicly funded research, but there is a barrier to publishing research within the OA model – publication costs should be much lower: other areas (music, news) have seen costs fall because of the internet. In some disciplines there is a culture of sharing pre-prints and getting feedback before submission to journals, with places like arXiv doing the work. The primary value of submission to a journal is the credentialing, and high-level journals can create scarcity to justify the demand – Nature Scientific Reports is overtaking PLOS ONE because of that. We need to decouple credentialing from specific journals. Different measures of excellence are possible, but we need to consider how we do it today – assuming that reviewers and editors are the ones who decide what excellence means. The focus needs to be on inclusivity and affordability. [See Vincent's blog post here]

Kim Holmberg (University of Turku, Finland) focused on altmetrics. Robert Merton pointed out as early as the 1950s that the referencing system is about finding work that wasn't known before, but also about recognition of other researchers. That leads to how the journal impact factor and the h-index became part of research assessment. These have been used more and more in research evaluation, especially in the past 15 years, yet earlier research has pointed out many flaws with them. In addition, they fail to take into account the complexity of scientific activities, nor do they tell you anything about the societal impact of research. One way to look at this complexity is the Open Science Career Assessment Matrix (OS-CAM).
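The h-index mentioned above has a deceptively simple definition – the largest h such that the researcher has at least h papers with at least h citations each – which is part of why it is so easily (over)used. A minimal sketch of the computation:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how much the single number hides: it says nothing about the kind of output, the field's citation culture, or any societal impact – exactly the flaws raised in the talk.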

We can think about the traces that people leave online as they go through the research process – discussing research ideas, collecting data, analysing, disseminating results. These traces can become altmetrics – another view of research activities. It is not just social media: the aim is to expand the view of what impact is about. With altmetrics we can analyse the networks that a researcher is involved in, which can give insights into new ways of interaction between researchers and society. Citations show that a paper has been used by another researcher, while altmetrics can indicate how it has been disseminated and discussed among a wider audience. But there are still lots of open questions about the meaning and applicability of altmetrics.

There are reports from the Mutual Learning Exercise (europa.eu/!bj48Xg) looking at altmetrics, incentives, and rewards for open science activities. For instance, in the area of career and research evaluation, researchers need specific training and education about open science; in the area of evolving authorship, ways of identifying and rewarding peer review and the publishing of negative results need to be developed. Implementation of open science needs to guarantee long-term sustainability and reward role models who can demonstrate this new approach to doing science. The roadmap from the MLE suggests a process for this implementation.

Discussion: there is the issue of identifying a good researcher within a group, and publications are a way to see their ideas, but the link to open science and how it can help with that is unclear. Finding a good researcher does not happen through all these metrics – it's a human problem, not only a metric one. Will originality be captured by these systems? Publication is only a small part of research activity – in every domain there is a need to change and reduce publication, not to assume that someone will read the same paper again and again (after each revision). Attention is the scarce resource that needs to be managed and organised – we cannot assume that readers will find a way to filter ever more information.

The response to this pointed out that because research funding is public, we should encourage publishing as much as possible so that others can find the information, but we need good tools for searching and evaluating research so it can be found.

Another confusion – the link between open access publication and open science. Open access can exist within the publish-or-perish structure, so what is it in OA that offers an alternative to the closed publishing structure? How can it lead us to different insights into researchers' activities? In response, it was pointed out that it is important to understand the difference between Open Access and Open Science (OA = openly available research publications; OS = all activities and efforts that open up the whole research process, including the publishing of research results).

There is growing pressure on people to become media savvy, and that means taking time away from research.

Altmetrics were originally thought of as a tool to help researchers find interesting and relevant research, not necessarily for evaluation (http://altmetrics.org/manifesto/).

Session 3. Open research data: good data management and data access

Simon Hodson (CODATA) – Open Science and FAIR data. On the reconciling element: the case for open science is the light it shines on data, making it useful – it allows reuse, reproducibility, and replicability, and these very much match each other. CODATA is part of the International Council for Science, focusing on capacity building, policy, and coordination. The case for open science: good scientific practice depends on communicating the evidence. In the past, a table or a graph summarising some data was an easy way of sharing information, but as data and analysis have grown, we need to change the practice of sharing results. Publications such as "Science as an Open Enterprise" (2012) point out that failure to report the data underlying the science is seen as malpractice. Secondly, open data practices have transformed certain areas of research – genomics, remote sensing in earth systems science. Can we replicate this in other research areas? Finally, can we foster innovation and the reuse of data and findings within and outside the academic system, making them available to the public at large?

Open science has multiple elements – it is not only open access and open data. We need data to be interoperable and reusable, available for machine learning, and open for discussion. There are perceptions of a reproducibility problem in research, but also a change in attitudes. We need to think about culture – how scientific communities have established their practices. Different research areas take very different approaches – e.g. in biomedical research data is open, while in social science there is little experience of data sharing and reuse and the benefits are not seen. There is a need for a sociology-of-science analysis of these changes. Some of the major changes – the meetings about genome research in Bermuda and the Fort Lauderdale agreement – came about because of specific pressures. There is significant investment in creating data that is used only once – e.g. Hubble. Why is data from small experiments not open to reuse? We need to find ways of making this happen.

The FAIR principles allow data to be reusable. FAIR emerged from OECD work, the Royal Society's 2012 report, and the G8 statement. What we need to address: skills, the limits of sharing, and clearer guidelines for openness. We need standards and skills, and we need to reward data stewardship; we need to see citation of data. There is a need for new incentives – past cultural changes happened when prominent people in the field set up agreements.
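To make the four principles concrete, a minimal dataset description touching each FAIR aspect might look something like the sketch below. The field names, URLs, and the DOI are hypothetical illustrations, not taken from any formal metadata standard:

```python
# Hypothetical minimal metadata record illustrating the FAIR principles.
record = {
    # Findable: a globally unique, persistent identifier plus rich metadata
    "identifier": "https://doi.org/10.1234/example-dataset",  # hypothetical DOI
    "title": "Example survey dataset",
    "keywords": ["open science", "survey"],
    # Accessible: retrievable over a standard, open protocol
    "access_url": "https://repository.example.org/datasets/42",  # hypothetical
    "protocol": "HTTPS",
    # Interoperable: open formats and shared vocabularies
    "format": "text/csv",
    "vocabulary": "schema.org/Dataset",
    # Reusable: a clear licence and provenance information
    "license": "CC-BY-4.0",
    "provenance": "Collected 2018 by the project team; methods in README",
}

# A record only supports FAIR data if each aspect is actually filled in:
required = ["identifier", "access_url", "format", "license"]
missing = [key for key in required if not record.get(key)]
print("missing fields:", missing)  # []
```

Real repositories use formal schemas (e.g. DataCite metadata for DOIs) rather than ad-hoc dictionaries; the point of the sketch is only that each principle maps to concrete, checkable fields.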

Fiona Murphy (Fiona Murphy Mitchell Consulting, UK) works in the area of data publishing and provided the perspective of someone who is exploring how to practise open science. There are cultural issues: why share, with whom, what are the rewards, and what are the risks? Technical issues: how is it done, and what are the workflows, tools, capacity, and time investment? And there are issues of roles and responsibilities – whose problem is it to organise the data?

Examples of projects: SHARC – a Research Data Alliance interest group on sharing rewards and credit – is international and multi-stakeholder, aiming to grow the capacity to share data. The group is working on a White Paper of recommendations. The main issues are standards for metrics: they need to be transparent, address reputation, and capture impact on a wider area – and what are the costs of non-sharing? There are different standards in terms of policies; persistent identifiers and the ability to reproduce are also needed. Equality of access to services is needed – how to manage peer-to-peer sharing, and how it is integrated into promotion and rewards. The way to explore this is by carrying out pilot projects to understand side effects. There is also a need to develop ethical standards.

The Belmont Forum Data Publishing Policy looks at making data accessibility part of a digital publication, developing consistency of message so that researchers will know what they are facing. There are lots of issues – some standard wording is emerging, along with capturing multiple datasets, clarifying licensing, etc.

We can also think about what would have emerged if the current system had been designed from scratch – scholarlycommons.org suggests principles for how "born digital" scientific practice should evolve. As part of this approach to thinking about the commons, they have created decision trees to help with the project. Working as an open scientist is a challenge today – for example, they needed to develop decision tree software, and other things are proving challenging for acting as a completely open scientist. It's a busy space, and there is a gulf between high-level policy and principles and their delivery.

Jeff Spies (Center for Open Science, Virginia) [via video link] covered open research data, urgent problems, and incremental solutions, looking at the strategies that are the most impactful (which may differ from what the Center for Open Science itself does). We need to broaden the definition of data – we need context, more than just the data itself or the metadata – as it is critical for assessment and for metascience work. We can think of a knowledge graph – more than the semantic information of the published text: the relationships between people, places, data, methods, software… But the incentive situation is such that, from a psychological perspective, the reward for specific publications is so strong that it focuses attention on what is publishable. Retraction rates go up as impact factor goes up. There is urgency here, and a lock-in risk: publishers are trying to capture the whole life-cycle of research. The problem is that culture change is very slow, and we need to protect the data – funders and policymakers can make a difference. Researchers don't have the capacity to curate data, but libraries have the resources and focus for that. One potential lever: researchers could be asked to link to promotion policies, which would force universities to share them, and the policies could mention data sharing (as a way to push universities to change).

Discussion: there is concern about the ability of researchers to deal with data. There is a problem of basic data literacy.

Making data FAIR costs about 10% of a project's budget, and the question is where it is useful, and where it is not enough or too much – just organising the data with librarians is not enough, as data requires a lot of domain knowledge. There are significant costs. However, in the same way that the total costs of science include the effort of peer review and of getting to publication (either subscription or publication fees), we should also pay for data curation. There is a need for appraisal and decisions on how data and processes will be handled.

We need to think about future uses of data – as with natural history specimens, we can never know what will be needed. Questions about the meaning of data are very important – it's not only specimens but also photographs, and not necessarily digital.

Libraries can adapt and can gain respect – they are experts in curation and archiving.

Session 4. Societal engagement 

Kazuhiro Hayashi (NISTEP, Tokyo, Japan) spoke on open science as social engagement in Japan. He comes from science and technology studies, has been involved in open access journals, is keen on altmetrics, and is now involved in open science policy. He plays multiple roles, top-down and bottom-up – from working in the G7 science expert group on open science to creating software and journals – and is involved in citizen science through the NISTEP journal and lectures, in altmetrics, in multi-stakeholder workshops, and in Future Earth. He presented several case studies:

Citizen science – the funding system for science in Japan comes mainly from the state, and funded researchers have a difficult time doing public engagement; alongside them are spontaneous, "wild" researchers. He suggests a more symmetrical system – also creating independent researchers who get their budget from business and publish in online journals. Wild researchers are based on crowdfunding and rely on the engagement of citizens. From his experience, he recognises a new relationship between citizens and scientists: new research styles, career paths, and funding. Negative aspects of citizen science include populism in crowdfunding – research needs to be popular, but some topics are not suitable for the crowd. A new scheme for ECRs is also needed, and it must include them. There is also potential for misuse and plagiarism because of a lack of data and science literacy.

Altmetrics – he contributed to the NISO Altmetrics Initiative working group. Altmetrics are difficult to define, and current altmetric scores in the Japanese literature are closely related to Maslow's hierarchy of needs. There are plenty of institutional repositories, and access to journal articles through repositories is more social – readers are non-researchers, while researchers would go to journal websites. There is a need to look at social impact – at mentions and network analysis – but it is difficult to analyse, and there is a need to look at the flow of data across the web.

Multi-stakeholder workshops – considering the future of open science and society, with environmental sciences and informatics. One outcome is to think about erasing the influence of different socio-economic statuses on participants, the co-development of data infrastructure, and action for social transformation. Capacity building is important, and we need to see how open science and transdisciplinary work co-evolve. Social engagement is very time-consuming and needs to be funded, and it needs to be open to creative activities by citizens and scientists. We should think about new relationships between science and society, and use tentative indicators to transform society and culture – creating a future of open science and society, moving from "publish or perish" to "share or perish". Japan will have two citizen science sessions at the Japan Open Science Summit on 18-19 June 2018.

Muki Haklay (UCL, London, UK) [see my separate blog post]

Cecilia Cabello Valdes (Foundation for Science and Technology – FECYT, Madrid, Spain) spoke on societal engagement in open science. The foundation aims to promote the link between science and society, originally with the goal of increasing the interest of Spanish citizens in science. They manage calls and fund different activities (about 3,250K EUR, more than 200 projects). They run activities such as FameLab – events to promote science and technology in an open way. The science news agency SiNC addresses the lack of awareness of scientific research – its stories are taken up by the general media, with over 1,000 journalists using the information. They run summer science camps, with 1,920 students funded in 16 universities. They also manage the national museum of science and technology (MUNCYT), which shares the history of science and technology in Spain and is a unique type of science museum.

In citizen science, they have done a lot of work on public awareness of science and technology, and on keeping public support for science investment. More recently they created a council of foundations for science – there was a lack of awareness among social foundations, which had been investing in cultural activities but not in science. Three foundations are involved with the council, and they have direct contact with the minister to develop this area of funding. The second initiative is crowdfunding for science – they help to carry out campaigns that support activities; it is also a tool of engagement.

Outreach is difficult – the council supports policymakers, and the general public is aware of the issues. So there are challenges – how do we transform this, and how do we measure it? One of the roles of the council is to incentivise policymakers to understand what they want to achieve and then have indicators to see whether the goals are being met. They participated in the process of policy recommendations about open science, and then translate that into action – for policymakers and society. FECYT also provides resources: access to WoS/Scopus, evaluation of journals, a standardised CV of researchers, and open science support. Finally, they participate in studies that look at the measurement of science and its results.

Discussion: Science Shops – are there examples that link to maker spaces? Yes, there are examples of activities such as Public Lab, but also the Living Knowledge network.

Many societal engagement activities are not open science – they treat society as a separate entity, and there is a struggle in making citizen science into open science, as the data remains closed. Which aspects lend themselves to both open science and citizen science? There are many definitions and different ways to define the two, but, for example, the need to access publications, participation in the analysis of open data, or the production of open data are all examples of the overlap.

Part of the discussion was about sharing knowledge, and the claim that a researcher is like anyone else – is there a big difference between the scientific community and everyone else? The effort is not recognised in society, and if you remove the prestige, would no one want to participate in science?

On public interest – why do citizens want to participate in research? Citizens want the results of public research to help people improve their quality of life; science should address social problems.

On how many people participate – Precipita is a new project; funds are not matched, they provide the technical help, and promotion is through campaigns via different institutions.

Should citizen science democratise science? This is controversial – when information became more accessible, as with Gutenberg, ability increased. We need to make citizen science a way to increase access to science.

How do we integrate open science across its different pockets? We need to find a way to integrate these things together. There is a package that needs to be supported as a whole – access, data, and public engagement – and we need to focus on it.

Citizen science needs to be integrated into all of science, and it needs to produce results.

Session 5. Scientific Excellence re-visited

David Carr (Wellcome Trust, London, UK): Wellcome is committed to making its research outputs available, seeing this as part of good research practice. As a funder, they have had a long-standing policy on open access to publications (since 2005) and other research outputs. The costs of carrying out public engagement and of open access publication should be part of the funding framework, and reviewers are asked to recognise and value a wide range of research outputs. There is still a need to think about reward and assessment structures, about sustaining the infrastructures that are needed, and about creating data specialists and managing the process of increasing their numbers. There are concerns in the research community about open access. Wellcome established an open research team, looking at funder-led and community-led activities, as well as policy leadership. They now have the WellcomeOpenResearch.org publishing platform, which uses the F1000 platform, and they also ran the Open Science Prize. On policy leadership, they support e.g. the San Francisco DORA (Declaration on Research Assessment). They are also looking at changes to application forms to encourage other forms of outputs, along with guidance to staff, reviewers, and panel members. They celebrate with applicants when they do open research, and inform them about the criteria and options. They also carry out efforts to evaluate whether open science delivers on its promises, through projects in different places – e.g. the McGill project.


Published by

mukih

Professor of GIScience, University College London
