Algorithmic Governance Workshop (NUI Galway)

Algorithmic Governance Workshop (source: Niall O Brolchain)

The workshop ‘Algorithmic Governance’ was organised as an intensive one-day event of discussion and development of research needs. As the organisers, Dr John Danaher and Dr Rónán Kennedy, put it:

‘The past decade has seen an explosion in big data analytics and the use of algorithm-based systems to assist, supplement, or replace human decision-making. This is true in private industry and in public governance. It includes, for example, the use of algorithms in healthcare policy and treatment, in identifying potential tax cheats, and in stopping terrorist plotters. Such systems are attractive in light of the increasing complexity and interconnectedness of society; the general ubiquity and efficiency of ‘smart’ technology, sometimes known as the ‘Internet of Things’; and the cutbacks to government services post-2008.
This trend towards algorithmic governance poses a number of unique challenges to effective and legitimate public-bureaucratic decision-making. Although many are already concerned about the threat to privacy, there is more at stake in the rise of algorithmic governance than this right alone. Algorithms are step-by-step computer coded instructions for taking some input (e.g. tax return/financial data), processing it, and converting it into an output (e.g. recommendation for audit). When algorithms are used to supplement or replace public decision-making, political values and policies have to be translated into computer code. The coders and designers are given a set of instructions (a project ‘spec’) to guide them in this process, but such project specs are often vague and underspecified. Programmers exercise considerable autonomy when translating these requirements into code. The difficulty is that most programmers are unaware of the values and biases that can feed into this process and fail to consider how those values and biases can manifest themselves in practice, invisibly undermining fundamental rights. This is compounded by the fact that ethics and law are not part of the training of most programmers. Indeed, many view the technology as a value-neutral tool. They consequently ignore the ethical ‘gap’ between policy and code. This workshop will bring together an interdisciplinary group of scholars and experts to address the ethical gap between policy and code.’
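
The input-process-output framing in the quote can be made concrete with a deliberately simplistic, hypothetical audit-flagging sketch (the fields and the 20% threshold are invented for illustration). It shows how a value-laden policy choice ends up quietly embedded in code:

```python
# Hypothetical sketch of a toy audit-recommendation algorithm.
# The threshold and the choice of fields are policy decisions that
# a programmer has encoded - exactly the 'gap' between policy and
# code that the workshop addresses.

def recommend_audit(tax_return):
    """Return True if the return should be flagged for audit."""
    declared = tax_return["declared_income"]
    reported = tax_return["third_party_reported_income"]
    # A policy value hidden in code: what counts as a 'significant' gap?
    discrepancy = (reported - declared) / reported if reported else 0.0
    return discrepancy > 0.2  # 20% is an arbitrary, value-laden choice

flagged = recommend_audit(
    {"declared_income": 30_000, "third_party_reported_income": 50_000}
)
print(flagged)  # a 40% discrepancy exceeds the 20% threshold -> True
```

Nothing in a project ‘spec’ saying “flag suspicious returns” determines that threshold; the coder does.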

The workshop was structured around three sessions of short presentations of about 12 minutes each, with immediate discussion, followed by a workshop to develop research ideas emerging from the sessions. This very long post contains my notes from the meeting. These are my takes, not necessarily those of the presenters. For another summary of the day, check John Danaher’s blog post.

Session 1: Perspective on Algorithmic Governance

Professor Willie Golden (NUI Galway), ‘Algorithmic Governance: Old or New Problem?’, focused on an information science perspective. We need to consider the history – an R.O. Mason paper from 1971 already questioned the balance between the decision-making that should be done by humans and the part that should be done by the system. The issue is the level of assumptions that are being integrated into the information system. Today the amount of data that is being collected, and the assumptions about what it does in the world, keep growing, but we need to remain sceptical about the value of the actionable information. Algorithms need managers too: Davenport, in HBR 2013, pointed out that the questions decision-makers ask before and after the processing are critical to the effective use of data analysis systems. In addition, people are very concerned about data – we are complicit in handing over a lot of data as consumers, and the Internet of Things (IoT) will reveal much more. Deborah Estrin’s 2014 CACM viewpoint – small data, where n = me – highlighted the importance of health information: the monitoring of personal information can provide a baseline on you. However, this information can be handed over to health insurance companies, and the question is what control you have over it. Another aspect is Artificial Intelligence – Turing in the 1950s proposed the famous ‘Turing test’ for AI. In the past 3-4 years it has become much more visible. The difference is that AI systems learn, which raises the question of how you monitor a thing that learns, changes over time, and gets better. AI doesn’t have self-awareness, as Davenport noted in ‘Just How Smart Are Smart Machines?’ (2015), which argues that machines can be more accurate than humans in analysing images. We may need to be more proactive than we used to be.

Dr Kalpana Shankar (UCD), ‘Algorithmic Governance – and the Death of Governance?’, focused on digital curation/data sustainability and their implications for governance. We invest in data curation as a socio-technical practice, but need to explore what it does and how effective current practices are. What are the implications if we don’t do the ‘data labour’ needed to maintain data, to avoid ‘data tumbleweed’? We are selecting data sets and preserving them for the short and long term. There is an assumption that ‘data is there’ and that it doesn’t need special attention. The choices people make about which data sets to preserve will influence the patterns of what appears later and the directions of research. Downstream, there are all sorts of business arrangements to make data available and to preserve it – the decisions shape the disciplines and discourses around it. For example, preserving census data influenced many of the social sciences and directed them towards certain types of questions. Data archives influenced the social science disciplines – e.g. using large data sets while dismissing ethnographic and qualitative data. We need to get into the governance of data institutions and how it influences the information that is stored and shared. What the role of curating data is when data becomes open is another question. An example of the complexity is provided by a study of a system used by an NGO for ‘match making’ of refugees to mentors: the system is from 2006, the last update of its job classification is from 2011, but the organisation that uses the system cannot afford to update it – with impacts on those who are affected by the system.

Professor John Morison (QUB), ‘Algorithmic Governmentality’. From a law perspective, there is an issue of techno-optimism. He is interested in e-participation and participation in government. There are issues of open and big data, where we are given a vision of open and accountable government and growth in democratisation – e.g. the social media revolution, or opening government through data. We see a fantasy of abundance, and there are also new feedback loops – technological solutionism that answers problems in politics with technical fixes: simplistic solutions to complex issues. For example, in research into cybersecurity there are expectations of creating code as a scholarly output. Big Data has different creators (from Google to national security bodies) and they don’t have the same goals. There are also issues of technological authoritarianism as a tool of control. Algorithmic governance requires us to engage with its epistemology, ontology and mode of governance. We need to consider the impact on democracy – the AI approach argues for democratisation through the ‘N = all’ argument. Leaving aside the ability to ingest all the data, it seems to assume that subjects are no longer viewed as individuals but as aggregates that can be manipulated and acted upon. In algorithmic governance there is a false emancipation through the promise of inclusiveness; instead, it responds to predictions that are created from data analysis. The analysis claims to be a scientific way to respond to social needs. Ideas of individual agency disappear. Here we can use Foucault’s analysis of power to understand agency. Finally, we also see government without politics – making subjects and objects amenable to action. There is no selfhood, just a group prediction. This transcends and obviates many aspects of citizenship.

Niall O’Brolchain (Insight Centre), ‘The Open Government’. There is a difference between government and governance. The eGov unit at the Insight Centre of Data Analytics in Galway acts as an Open Data Institute node and is part of the Open Government Partnership (OGP). The OGP involves 66 countries and aims to promote transparency, empower citizens, fight corruption, and harness new technologies to strengthen governance. It started in 2011 and now involves 1,500 people, with ministerial-level involvement. The OGP has a set of principles, with eligibility criteria that involve civil society and government on equal terms – the aim is to provide information so that it increases civic participation, requires the highest standards of professional integrity throughout administration, and increases access to new technologies for openness and accountability. It is generally considered that the benefits of technology outweigh the disadvantages for citizenship. Grand challenges: improving public services, increasing public integrity, public resources, safer communities, and corporate accountability. Not surprisingly, corporate accountability is one of the weakest.

Discussion:

Using the Foucault framework, the question is about the potential for resistance that is created as power increases. There are cases to discuss around hacktivism and the use of technologies. There is an issue with the ability to resist power – e.g. the passing of details between companies based on prediction. The issue is not about who uses the data and how they control it. Sometimes resistance requires the approaches that illegal actors use to hide their tracks.
A challenge for the workshop is that the area is so wide that we need to focus on specific aspects – e.g. the use of systems in government while the technology is changing, or interoperability. There are overlaps between environmental democracy and open data, with many similar actors – and with much more buy-in from government and officials. Technological change has also made things easier for governments (e.g. Mexico releasing environmental data under the OGP).
Sovereignty is also an issue – it has been lost to technology and corporations over recent years, and indeed corporate accountability is noted in the OGP framework as an area that needs more attention.
There is also an issue about information that is not allowed to exist – absences and silences are important. There are issues of consent – network effects prevent real options for consent, and therefore society and academics can force businesses to behave socially in a specific way. Keeping information and attributing it to individuals is the crux of the matter and where governance should come in. You have to communicate over the internet about who you are, but that doesn’t mean we can’t dictate to corporations what they are allowed to do and how they use it. We can also consider privacy by design.

Session 2: Algorithmic Governance and the State

Dr Brendan Flynn (NUI Galway), ‘When Big Data Meets Artificial Intelligence, Will Governance by Algorithm be More or Less Likely to Go to War?’. By looking at autonomous weapons we can learn about algorithmic governance in general. Algorithmic decision support systems have a role to play in a very narrow scope – to do what stock market systems do: identify very dangerous responses quickly and stop them. In terms of politics, many things will continue as before. One thing that comes from military systems is that there is always a ‘human in the loop’ – and that is sometimes the problem. There will be HCI issues with making decisions quickly based on algorithms, and things can go very wrong. There are false-positive cases, such as the USS Vincennes, which used a decision support system in the decision to shoot down a passenger plane. Decision taking is limited by decision shaping, which is handed more and more to algorithms. There are issues with the way military practice understands command responsibility in the Navy, which sets a very high standard of responsibility for failure. There is a need to see how to interpret information from black boxes regarding false positives and false negatives. We can use this extreme example to learn about civic cases, and to demand high standards from officials. If we do apply some version of command responsibility to those who use algorithms in governance, it becomes possible to put responsibility on the user of the algorithm and not only on the creators of the code.

Dr Maria Murphy (Maynooth), ‘Algorithmic Surveillance: True Negatives’. We all know that algorithmic interrogation of data for crime prevention is becoming commonplace, in government and in companies alike, and we know that the decisions can be about life and death. When considering surveillance, there are many issues – consider the probability of wrongly classifying someone as a potential terrorist or extremist. In human rights law we can use the concept of private life, and algorithmic processing can challenge it. Article 8 of the European Convention on Human Rights is not absolute and can be limited in specific cases – and the European Court of Human Rights asks for justifications from governments, to show that they follow the guidelines. Surveillance regulations need to explicitly identify the types of people and crimes that are open to observation; you can’t say that everyone is open to surveillance. Specific keywords can be judged, but what about AI and machine learning, where the creator can’t know what will come out? There is also a need to show proportionality to prevent social harm. False positives matter in these algorithms: because terrorism is so rare, there is a high risk of bad impacts from attempts to prevent terrorism or crime. Under the assumption that more data is better data, we are left with a problem of generalised surveillance, which is seen as highly problematic. Interestingly, the court does see a lot of potential in these technologies and their uses.
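
The base-rate problem behind this false-positive argument can be illustrated with a small Bayes-style calculation (all the numbers below are hypothetical, chosen only to show the effect): even a very accurate classifier produces overwhelmingly wrong flags when the target behaviour is extremely rare.

```python
# Hypothetical numbers: a screening algorithm with 99% sensitivity
# and a 1% false-positive rate, applied to a population in which
# only 1 in 100,000 people is an actual threat.
population = 1_000_000
prevalence = 1 / 100_000
sensitivity = 0.99          # P(flagged | threat)
false_positive_rate = 0.01  # P(flagged | no threat)

threats = population * prevalence                               # 10 people
true_positives = threats * sensitivity                          # ~9.9
false_positives = (population - threats) * false_positive_rate  # ~10,000

# Precision: of everyone flagged, what fraction is actually a threat?
precision = true_positives / (true_positives + false_positives)
print(f"{precision:.4f}")  # ~0.001: roughly 999 of every 1,000 flags are wrong
```

This is why ‘generalised surveillance’ of a whole population is so problematic: the rarer the behaviour, the more the flagged group consists of innocent people.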

Professor Dag Wiese Schartum (University of Oslo), ‘Transformation of Law into Algorithm’. His focus was on how algorithms are created, thinking about this within government systems. They are the bedrock of our welfare systems – which is the way they appear in law. Algorithms are a form of decision-making: first general decisions about what should be regarded, and then individual decisions. The raw material is the legal decision-making process, which is transformed into algorithms and then into computer code. Programmers do have autonomy when translating requirements into code – the Norwegian experience shows close work with legal experts to implement the code. You can think of an ideal model of the transformation of a system into algorithms: it exists within a domain – a service or authority of a government – and is done for the purpose of addressing decision-making. The process is a qualification of legal sources, and interpretations that are done in natural language, which then turn into a specification of rules, which in turn is expressed in a formal language used for programming and modelling. There are iterations throughout the process; the system is tested, goes through a process of confirming the specification, and then gets into use. It’s too complex to test every aspect of it, but once the specifications are confirmed, it is used for decision-making. In terms of research, we need to understand the transformation process in different agencies – overall organisation, model of system development, competences, and degree of law-making effects. The challenge is the need to reform the system: adapting to political and social change over time. The system needs to be flexible in its design to allow openness rather than rigidity.
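
The transformation chain described here – legal source, natural-language interpretation, rule specification, formal language – can be sketched with an invented welfare rule (the rule text, field names, and income limit are all hypothetical, not from any Norwegian statute). The comments mark where interpretation decisions get baked in:

```python
# Hypothetical legal source: "A person is entitled to the benefit if
# they are resident, aged 18 or over, and their income does not exceed
# the prescribed limit."
#
# Interpretation decisions the expert/programmer team must make:
# - what counts as 'resident'?
# - income over which period?
# - is the limit inclusive ('does not exceed')?
INCOME_LIMIT = 25_000  # the 'prescribed limit', set elsewhere in regulation

def entitled(person):
    """Formalised rule specification, after interpretation."""
    return (
        person["resident"]                    # read as: registered residency
        and person["age"] >= 18               # 'aged 18 or over' is unambiguous
        and person["income"] <= INCOME_LIMIT  # 'does not exceed' read as inclusive
    )

# A person exactly at the limit: entitled only because of the inclusive reading.
print(entitled({"resident": True, "age": 18, "income": 25_000}))  # True
```

Each comment marks a point where the natural-language law underdetermines the code – which is why testing against the confirmed specification, and flexibility for later reform, matter so much.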

Heike Felzmann (NUI Galway), ‘The Imputation of Mental Health from Social Media Contributions’, came from a philosophy and psychology background. Algorithms can access different sources – blogs, social media – and this personal data is being used for mood analysis, which can lead to observations about mental health. By 2013 there were examples of identifying affective disorders, and the research doesn’t consider the ethical implications. The data used includes content; individual metadata such as the time of online activities, length of contributions, and typing speed; network characteristics; and biosensing such as voice and facial expressions. Some ethical challenges include: contextual integrity (Nissenbaum 2004/2009) – privacy expectations are context-specific, not constant rules; lack of vulnerability protection – analysis of mental health breaches people’s right to protect their health information; potential negative consequences, with impacts on employment, insurance, etc.; and finally the irrelevance of consent – some studies included consent during development, but what about applying the systems in the world? We see no informed consent, no opt-out, no content-related vulnerability protections, no duty of care or risk mitigation, no feedback, and an unlimited number of participants. All these contrast with the practices in Human Subjects Research guidelines.

Discussion:

In terms of surveillance, we should think about self-surveillance, in which citizens provide the details of surveillance themselves. Surveillance is not only negative – modern approaches are not used only for negative reasons. There is a hoarding mentality in the military-industrial complex.
The area of command responsibility received attention, with discussion of liability and different ways in which courts are treating military versus civilian responsibility.

Panel 3: Algorithmic Governance in Practice

Professor Burkhard Schafer (Edinburgh), ‘Exhibit A – Algorithms as Evidence in Legal Fact Finding’. The discussion of legal aspects can easily go back to 1066 – you can go through a whole history, and there are many links from medieval law to today. As a regulatory tool, there is the issue of the rules of proof. Legal scholars don’t focus enough on the importance of evidence and how to understand it. Regulation of technology is not about the law but about the implementation on the ground, for example in the case of data protection legislation. In a recent NESTA meeting, there was a discussion about the implications of Big Data – using personal data is not the only issue. For example, a citizen science project that shows low exposure to emissions could lead to a decision that the location in which the citizens monitored their area is the perfect location for a polluting activity – so harming the people who collected the data. This is not, strictly, a data protection case. How can a citizen object to the ‘computer says no’ syndrome? What are the minimum criteria for challenging such a decision? What are the procedural rules of fairness? Having a meaningful cross-examination is difficult in such cases. Courts are sometimes happy to use computer models, and at other times reluctant to accept them. There are issues about the burden of proof for systems (e.g. showing that an ATM was working correctly when fraud occurred). DNA tests rely on computer modelling, but on systems that are proprietary and closed. Many algorithms are hidden for business confidentiality, and these issues are being explored. One approach is to rely on open source tools; replication is another way of ensuring the results; escrow ownership of the model by a third party is another option; and there is also the possibility of questioning software in natural language.

Dr Aisling de Paor (DCU), ‘Algorithmic Governance and Genetic Information’. There is an issue in law, with massive applications of genetic information. There is rapid technological advancement in many settings – genetic testing, pharma and many other aspects – with indications of behavioural traits, disability, and more, and there are competing rights and interests. The advances are rapid: use in health care, and the technology is becoming cheaper (already below $1,000). In commercial settings, genetic information is used in insurance, and it is valuable for economy and efficiency in medical settings; there is also a focus on personalised medicine. A lot of the concerns are about the misuse of algorithms – for example, predictive assumptions about impacts on behaviour and health. The current state of predictability is limited, especially regarding the environmental influences on the expression of genes. There are conflicting rights – efficiency and economic benefits set against human rights, e.g. the right to privacy. There is also the right to non-discrimination – making decisions on the basis of probability may be deemed discriminatory. There are wider societal and public policy concerns – the possible creation of a genetic underclass and the potential to exacerbate societal stigma about disability, disease and difference. We need to identify the gaps between law, policy and code, and to consider use, commercial interests and the potential abuses.

Anthony Behan (IBM, but in a personal capacity), ‘Ad Tech, Big Data and Prediction Markets: The Value of Probability’. Advertising is a very useful use case for considering what happens in such governance processes – specifically, what happens in the 200 milliseconds of an ad placement, which is the standard on the internet. The process of real-time bidding is becoming standardised. It starts from a click: the publisher invokes an API and gives information about the interaction from the user, based on their cookie and various IDs. A Supply Side Platform (SSP) opens an auction. On the demand side, there are advertisers that want to push content to people – by age group, demographic, day and time, and with objectives such as click-through rates. The Demand Side Platforms (DSPs) look at the SSPs; each SSP is connected to hundreds of DSPs, and complex relationships exist between these systems. The DSPs compute a probability score that the user will engage in the way they want, and offer how much it is worth to them – all in micropayments. The data management platform (DMP) is important for improving the bidding: information about users/platform/context at specific times and places helps to guess how people tend to behave. The advertising economy of the internet is based on this structure. We get abstractions of intent – the more privacy was invaded to understand personality and intent, the less advertisers were interested in a specific person and the more in the probability and the aggregate. People are viewed as current identity and current intent, and it’s all about mathematics – there is a huge number of transactions, and the inventory becomes more valuable. The interactions will become more diverse with the Internet of Things. The internet became a ‘data farm’ – we started with a concept that people are valuable, and moved to a view that data is valuable and a question of how we can extract it from people. Advertising feeds into the whole commerce element.
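
The auction step described above can be sketched in a few lines (a simplified, hypothetical model: the probabilities, values, and the second-price rule are illustrative assumptions, not details from the talk). Each DSP prices the impression from an estimated engagement probability, and the SSP picks a winner:

```python
# Simplified sketch of a real-time-bidding auction: an SSP collects
# bids from DSPs, each of which prices the impression from an
# estimated probability of engagement (e.g. a click).

def dsp_bid(p_click, value_per_click):
    """A DSP's expected value for the impression (hypothetical pricing)."""
    return p_click * value_per_click

def run_auction(bids):
    """Second-price auction: the highest bidder wins but pays the
    second-highest bid (a common RTB design)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

bids = {
    "dsp_a": dsp_bid(0.02, 500),  # 2% click probability, 500 units per click
    "dsp_b": dsp_bid(0.01, 800),
    "dsp_c": dsp_bid(0.05, 150),
}
winner, price = run_auction(bids)
print(winner, price)  # dsp_a's expected value (10.0) wins, at the second price
```

Note that no bidder cares who the user is, only the probability attached to them – the ‘abstraction of intent’ the talk describes.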

I’ll blog about my talk ‘Algorithmic Governance in Environmental Information (or How Technophilia Shapes Environmental Democracy) later.

Discussion:

There are issues with genetics and eugenics. Eugenics fell out of favour because of problems with the science, and the new genetics claims much more predictive power. In neuroscience there are similar issues with brain scans, which are based on insufficient scientific evidence. There is an issue with discrimination – we shouldn’t assume that it’s only negative; we need to think about unjustified discrimination, as there are different semantics to the word. There are also issues with institutional information infrastructure.

New PhD Opportunity: Human Computer Interaction and Spatial Data Quality for Online Civic Engagement

We have a new scholarship opening at the Extreme Citizen Science group for a PhD student who will research Human Computer Interaction and Spatial Data Quality for Online Civic Engagement. The studentship is linked to, and contextualised by, the European Union H2020-funded project WeGovNow!. The project will focus on the use of digital technologies for effectively supporting civic society, whereby citizens are partners, as opposed to customers, in the delivery of public services. By integrating a set of innovative technologies from European partners in Germany, Italy, and Greece to create a citizen engagement platform, the project explores the use of digital tools for citizen reporting, e-participation, and communication between citizens and local government. Building on previous research and technology development, the project will include a programme of innovation in technology and service delivery. More information is on the UCL ExCiteS blog.

Source: New PhD Opportunity

Living Maps Review launched today

Living Maps Review is a new online journal about maps, map making and thinking about mapping (I’m on the editorial board of the journal). As the launch email describes:

“map making as a democratic medium for visual artists, writers, social researchers and community activists. The journal has its roots in the highly successful series of seminars, walks and learning events presented by the Livingmaps network over the past two years across London. Many of the contributions to the first issue are drawn from material presented at those events.”
The journal is free, but you need to register to the website and access it here: http://livingmaps.review/journal/index.php/LMR

Image:Eric Fischer Personal Geography of 2014. Creative Commons licence (CC BY 2.0)

Extreme Citizen Science in Esri ArcNews

The winter edition of Esri ArcNews (which, according to Mike Gould of Esri, is printed in as many copies as Forbes) includes an article on the activities of the Extreme Citizen Science group in supporting indigenous groups in mapping. The article highlights the Geographical Information Systems (GIS) aspects of the work and mentions many members of the group.

You can read it here: http://www.esri.com/esri-news/arcnews/winter16articles/mapping-indigenous-territories-in-africa

Citizen Cyberlab – notes from final review (26-27 January, Geneva)

Every project ends, eventually. The Citizen Cyberlab project was funded through the seventh framework programme of the European Union (EU FP7 in short), and ran from September 2012 to November 2015. Today marks the final review of the project, with all the project’s partners presenting the work that they have done during the project.

The project had technical elements throughout its work: platforms (technologies that provide a foundation for citizen science projects), tools (technologies that support projects directly by being part of what volunteers use), and pilots – projects that use the technologies from Citizen Cyberlab, as well as from other sources, to carry out citizen science projects. In addition to the platforms, tools and pilots, the project used all these elements as the background for a detailed understanding of creativity and learning in citizen cyberscience, which relies on Information and Communication Technologies (ICT). The evaluation of the pilots and technologies was aimed at illuminating this question.

This post summarises some of the major points from the project. The project produced a system to develop and share research ideas (ideaweave.io), a framework for scientific games (RedWire.io) accompanied by tools to measure and observe the actions of gamers (RedMetrics.io), a system for sharing computation resources through virtual machines (the CitizenGrid platform), a framework to track user actions across systems (CCLTracker), a platform for community mapping (GeoKey), and mobile data collection tools (EpiCollect+).

Some of the systems that used these platforms and tools include Mapping for Change Community Maps, CERN Virtual Atom Smasher, and UNITAR Geotag-X.

The RedWire platform supports the development of games and the mixing of code between projects (borrowing concepts from synthetic biology into computing!), and, as the system encourages open science, even data from the different games can be mixed to create new ones. The integrated player-behaviour tracking (done with RedMetrics) is significant for the use of games in research. The analytics data is open, so there is a need to take care of privacy issues. An example of the gaming platform is Hero.Coli – a game about synthetic biology.

The GeoKey platform, which was developed at UCL ExCiteS, is now integrated with Community Maps and ArcGIS Online, and can receive data through Sapelli, EpiCollect or other HTML5 apps (as the air quality app on Google Play shows). The system is progressing and includes an installation package that makes it easier to deploy. Within a year, there are about 650 users on the system, plus further anonymous contributions, and over 60 mini-sites, many of them ported from the old system. The system has already been translated into Polish and Spanish.

CitizenGrid is a platform that improves volunteer computing and allows access to resources in a simplified manner, with the launching of virtual machines through a single link. It can use shared resources from volunteers, or cloud computing.

The IdeaWeave system is a social network to support the development of ideas and projects, and to share information about these projects. The final system supports challenges, badges and awards. The team also added project blogging and the ability to vote on proposals.

EpiCollect+ is a new implementation of EpiCollect that was intended to be device-independent through HTML5. There were issues with many APIs, which led to finding limitations in the different mobile platforms. There are many applications.

The Virtual Atom Smasher application at CERN was redesigned using learning analytics, which showed that many people who start engaging with it don’t go through the learning elements and then find the interface confusing, so the restructuring was geared towards this early learning process. The process helps people to understand theoretical and experimental physics principles. The system is at test4theory.cern.ch. After participants log in, they go through a questionnaire to establish what they know, and then through video and interactive elements that help them understand the terminology that is needed to use the interface effectively; the rest of the process supports asking questions in forums, finding further information through links, and more. Side projects that were developed from Virtual Atom Smasher include the TooTR framework, which supports creating web-based tutorials that include videos and interactive parts. During the project, they attracted 790 registered participants, of whom 43 spent more than 12 hours with the game. The game is now gaining attention from more scientists, who are seeing that it is worthwhile to engage with citizen science. The project is fusing volunteer computing and volunteer thinking.

GeoTag-X provides a demonstrator for volunteer thinking, and was developed by UNITAR. It allows the capturing of relevant imagery and pictures from disaster or conflict situations, and supports UNITAR humanitarian operations. The team wanted to assess whether the system is useful. They have 549 registered volunteers, with 362 completing at least one task. GeoTag-X engaged with the humanitarian geo community – for example with GISCorps, UN Volunteers Online, and Humanity Road.

The synthetic biology pilot included the development of a MOOC that explains the principles of the area, the game Hero.coli, and a new spectrometer that will be produced at very large scale in India.

Our own Extreme Citizen Science pilots focused on projects that use Cyberlab technology – in particular air quality monitoring, in which we used GeoKey and EpiCollect to record the locations of diffusion tubes and the street context in which they were installed. In addition, we included the use of Public Lab technology for studying the environment, and playshops to explore exposure to science.

The research into learning and creativity showed that there is plenty of learning about the topic and about the mechanics of citizen science itself, with a small minority showing deep engagement with active learning. There is a variety of learning: personal development – from self-confidence to identity and cultural change; generic knowledge and skills; and finally project-specific aspects. The project provides a whole set of methods for exploring citizen science: checklists that can be used to help design for citizen science learning, surveys, interviews, analysis of blogs, user analytics, and lab studies. Some of the interesting findings include: in GeoTag-X, even a complex interface was learnt quite quickly, and connecting emotionally to the humanitarian issue and to participation can predict learning. The Virtual Atom Smasher demonstrated that participants learned about the work of scientists and about science itself (e.g. the extensive use of statistics). In SynBio4All, there was plenty of learning of organisational skills, lab work, and scientific communication, and deeper contact with science – all through the need to be involved in a more significant way. The ExCiteS pilots show involvement and emotional learning, and evidence for community ‘hands on’ situated learning with high engagement of participants. There are examples of personal development, scientific literacy, community organisation, hosting workshops, and other skills. One of the major achievements of this study is a general survey, which had 925 complete responses and 2,500 partial ones from volunteers across citizen science (80 projects). The clusters show that 25% learn about technology and science skills; 21% learn about the topic and scientific skills; about 20% learn about science skills with some collaboration and communication; and 13% show pure on-topic learning.
In citizen science, a high percentage learn from project documentation; next, about 20% learn through the project and some from documentation; about 17% learn from the project and external documentation; and a further group learns through discussion. Most feel that they learn (86%). Learning is not an initial motivation, but becomes an important factor, as does learning about a new area of science. Highly engaged volunteers take on specific and varied roles – translators, community managers, event organisers, etc.
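As a rough illustration of how such learner profiles can be derived, the sketch below tallies the learning categories that volunteers self-report in a survey and turns them into percentages. The category names and responses here are hypothetical – the actual study used a far larger sample and a proper cluster analysis:

```python
from collections import Counter

def learning_profile(responses):
    """Tally self-reported learning categories across survey responses
    and return the share (%) of volunteers reporting each category."""
    counts = Counter(category for r in responses for category in r)
    total = len(responses)
    return {cat: round(100 * n / total) for cat, n in counts.items()}

# Hypothetical responses: each volunteer lists the kinds of learning they report
responses = [
    {"on-topic", "science skills"},
    {"technology", "science skills"},
    {"on-topic"},
    {"communication", "science skills"},
]
profile = learning_profile(responses)
# e.g. profile["science skills"] == 75 (3 of 4 volunteers)
```

This simple tally only measures how common each category is; the clustering reported in the survey additionally groups volunteers by which *combinations* of categories they report together.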

On the creativity side, interviews provided the richest source of information on creativity and how it is integrated into citizen science. Interviews with 96 volunteers provided one of the biggest qualitative studies in citizen science. The motivations were curiosity, interest in science, and a desire to contribute to research. Volunteers sustained participation due to continued interest, ability, and time. The reasons for different audience compositions are task time, geography, and subject matter. A lab study showed that citizen cyberscience results are related to immersion in the game. There is also evidence that people are multi-tasking – they have plenty of distractions from engagement in any given online project. The key findings about creativity include examples in the analysis of images and geotagging in GeoTag-X; in the Virtual Atom Smasher, adjusting parameters was seen as creative, while in SynBio4All the creation of games, or the creation of the MOOC, were examples of creativity. In ExCiteS there are photos, drawings, sculptures, and blog posts, and with air quality we’ve seen examples of newsletters, t-shirts, and map making. There are routes through motivations, learning, and creativity, and we might need to look at models for people who lead projects. To support creativity, face-to-face collaboration is important, as is allowing entry-level participation for volunteers and providing multiple methods for volunteers to give feedback.

In terms of engagement, we carried out ThinkCamp events, linking to existing online communities and working through engagement and participation. Interestingly, analysis of Twitter showed a following from fellow researchers and practitioners in citizen science.

The Citizen Cyberlab will now continue as an activity of the University of Geneva – so watch this space!


New publication: Citizen Science and the Nexus (water, energy, food, population)

Under the leadership of Roger Fradera of the Centre for Environmental Policy at Imperial College London, I was involved as a co-author on a ‘thinkpiece’ about citizen science and the nexus. If you haven’t come across the term, ‘nexus’ is the linkage of food, energy, water and the environment as a major challenge for the future.

The paper is now published:

Fradera, R., Slawson, D., Gosling, L., Geoghegan, H., Lakeman-Fraser, P., Makuch, K., Makuch, Z., Madani, K., Martin, K., Slade, R., Moffat, A. and Haklay, M., Exploring the nexus through citizen science, Nexus Network Think Piece Series, Paper 010, November 2015

The paper explores the background of citizen science, and then suggests a few recommendations in the context of the nexus, including:

  • Inclusivity: a co-created citizen science approach is likely to be more appropriate both to address the more complex nexus issues and to engage all sectors of society.
  • Engagement: Citizen science practitioners and nexus scientists should explore developing citizen science programmes with multi-scale engagement of citizens, for example programmes focusing on a nexus issue that combine local, citizen-led or co-created projects.
  • Barriers: Research is needed to understand the motivations, attitudes and willingness to change behaviours across all nexus stakeholders, and to better understand and find solutions to barriers.

The work was funded under the ESRC Nexus Network initiative.

Standards and Recommendations for Citizen Science (University of Zurich)

Following a short project headed by Daniel Wyler of the University of Zürich in collaboration with the League of European Research Universities, two draft documents aimed at universities and research funders were developed. The documents can be found here, and there is scope to comment and suggest changes on them over the next month. The university organised a one-day workshop to discuss the findings of the work and the need for guidelines and standards.

The opening remarks came from Michael Hengartner (President, University of Zurich), highlighting the commitment of the university to openness as a secular university and one of the first in Switzerland to be open to women. Switzerland has a long tradition of participatory democracy, though it also creates challenges (e.g. participation in Horizon 2020). Citizen science is a way for Swiss scientists to take advantage of the strong tradition of participatory democracy and very strong universities. There is also early involvement in citizen science – for example through the University of Geneva (Citizen Cyberscience Centre) in collaboration with CERN and UNITAR. The reports are the result of the Citizen Science Initiative Switzerland (CSI). One of the initiatives of CSI is the standards for excellence in citizen science and the policy recommendations. They are creating a citizen science centre in Zürich, with infrastructure to facilitate and support citizen science across the world.

Next came a short note from Alice Sheppard (citizen scientist, Galaxy Zoo), who shared her experience as a citizen scientist who became lead forum moderator at Galaxy Zoo. She came to citizen science by accident: her background is environmental science, and she was frustrated by the lack of public engagement in her studies. In 2007 she became involved in Galaxy Zoo, it became an obsession, and from being an active lead volunteer she became lead forum moderator. Different people have different skills and abilities to teach each other, and the collaboration between volunteers started to find new things: one-offs and accidental findings – that is the way ordinary citizens, without much specific science training, found new things and started their own projects. In Galaxy Zoo there is the safe space of the forum, which was well behaved and allowed questioning of many issues and explorations. They then started to have meetups and gatherings, and opportunities to join in on projects. They discovered classes of astronomical objects, and appear in a book by Michael Nielsen. Lay people can do science and build new tools. Galaxy Zoo treated volunteers as collaborators, wrote regular blog posts that recognised volunteers’ work, gave recognition on the page, and encouraged a safe, civilised space on the internet and the freedom to find new things. Becoming a professional scientist remains a challenge: the general public are very capable and want to join in, but people who want to become professional scientists experience difficulties – gaining degrees, writing academically – so there is a need to open new routes into science.

Following Alice, I gave an overview of Citizen Science (slides below)

Next was a talk by Michael Pocock (Centre for Ecology and Hydrology) about thoughtful enthusiasm for citizen science – instead of just enthusiasm, having a more careful and reflective approach. There are many projects falling under the title of citizen science, but provocatively he argues that there is no such thing as citizen science: science is science – it should be judged as such, and shouldn’t have special treatment. Secondly, there is a problem with ‘citizen’ – it should be people or participants – and it is a term of convenience for types of approaches which have common attributes. Citizen science needs ‘real’ science with excellent engagement – it is not about a compromise between the two but about merging the two. In terms of the Shirk et al. typology of contributory/collaborative/co-created citizen science, the Biological Records Centre in which he works uses multiple methods. A very important type of project is enthusiasm-led, where the volunteers lead the project completely, with professionals providing support and tools. Citizen science has a long-standing history of activity in ecology and wildlife. Even for the BRC it is very diverse – across many taxa. It is possible to enthuse people about a very wide range of topics and not only popular species. The UK has 70,000 volunteers a year, ranging from occasional recorders to non-professional experts. The work leads to high-impact papers and understanding of issues such as climate change. They also contribute to evidence-based policy. Citizen science has diversity, with analysis of many projects showing that they span a full range, from mass participation to systematic monitoring, and from simple to elaborate approaches. Citizen science is like a toolbox – you need to match the appropriate type and approach of citizen science to the issue.
Be careful of being carried away by hype – a project can become too big to fail and lack critical evaluation, so we would like to see thoughtful reflection on where citizen science should be used. Universities offer cutting-edge research, societal impact, new technology, enthusiastic researchers, and innovation. There is also hypothesis-led citizen science, such as the conker tree project, and there is value in short-term projects at a small scale. There needs to be the integrity to close a project and finish it well, and a need to preserve the fun and, to some extent, the anarchy that is common in citizen science.

Next came the policy overview, in Open Science: From Vision to Action, with Jean-Claude Burgelman (Head of Unit Science Policy, Foresight and Data, DG R&I at the European Commission). The Commissioner’s view is open innovation, open science, and openness to the world. Open science is a systemic change in the modus operandi of science and research, affecting the research cycle and its stakeholders: from the usual closed cycle of science to open publication, review, blogs, open data, open annotations, workflows, code, and pre-print services – new ecosystems of services and standards. We see major companies getting involved in different ways in the new tools (e.g. Elsevier and Mendeley). Its key drivers are digital technology, the exponential growth of data, more researchers, and an increase in scientific production. There are plenty of things happening at once: open source software, collaborative knowledge production, Creative Commons, open innovation, MOOCs, etc. We need to use this openness to increase transparency and networked collaboration – gaining trustworthiness from the public. Citizen science is a way to link science and society and be responsive to its needs. The public consultation for Science 2.0 drew many responses, leading to the selection of the term ‘open science’. 47% agree that citizen science is part of open science – the lowest response came from scientists – while 80% argue that the change is driven by digital technologies. The barriers to open science are quality assurance, lack of credit, infrastructure, and awareness of the benefits. Interestingly, less than 70% were concerned about ethical and privacy issues. Respondents’ view was that open science will make science more reliable, efficient, and faster, leading to wider innovation, while crowdfunding is not seen as an important indicator of open science.
In terms of policy, there has been policy on open access to publications, data, infrastructure, and framework conditions – with the need to ensure that it is bottom-up and stakeholder-driven, not a top-down solution from Brussels. The open science policy that was decided has five blocks: fostering open science, removing barriers, developing infrastructures (the open science cloud), open access to publications and data, and socio-economic drivers. Fostering open science means promoting best practices, research integrity, citizen science and similar areas, and establishing an open science forum. Open access to publications and data will also be mainstreamed in Horizon 2020. The open cloud for science is challenging – it requires governance, a data and service layer, and an infrastructure layer. The policy forum includes a working group on citizen science. Citizen science is important – but should be seen as part of the wider open science landscape.

Another view of processes that are happening at the policy level was provided by Claudia Göbel (European Citizen Science Association, Museum für Naturkunde Berlin) in Citizen Science Associations as Agents of Professionalisation, using the Socientize framework and looking at the meso and macro scales. We are seeing growth in national (Austria, Germany) and international associations – the Citizen Science Association (CSA), the Australian Citizen Science Association (ACSA), and the European Citizen Science Association (ECSA). ECSA has 84 members from 22 countries – both organisations and individual members – about 66% of which are science organisations, with four important hubs: Germany, Spain, Italy, and the UK – though that depends on the history of ECSA and how its network has grown over the past five years. ECSA has started to set up some of its key documents, including the ECSA strategy – part of its activity is to be a think tank for citizen science, sharing knowledge and skills across the field and maintaining international links. Many of the members are involved in ecology and biodiversity, and therefore there is a link to addressing sustainability through citizen science, and to developing participatory methods for cooperation, empowerment, and impact. ECSA is also developing memoranda of understanding with ACSA and CSA. An interesting joint response from the associations came to the Nature editorial on citizen science. The capacity-building working group has launched the ten principles of citizen science, trying to identify good practice within a flexible concept. Responding to policy documents can be challenging within a volunteer-based organisation. ECSA has an important effort in environmental policy, and in Responsible Research and Innovation. In the Socientize framework, ECSA is located at the meso level, doing exchange and capacity building – the multiplier effect. In the university sector, it is often a specific research group, museum, or sub-organisation that is a member of ECSA.
It is also an example of innovation in citizen science and of new mechanisms, structures, and processes for an area. What we are seeing is a process of professionalisation – fostering learning and action, providing information, services, and expertise, and creating a community of peers, standards, and quality – and the associations will play a role in the field as a whole.

This was followed by a panel discussion which was moderated by Mike Martin (Gerontopsychology, University of Zurich) with myself, Lidia Borrell-Damián (Director Research and Innovation, European University Association), Jennifer Shirk (Field Development Coordinator, Citizen Science Association), Josep Perelló (OpenSystems Research and Complex Lab Barcelona, Universitat de Barcelona), Effy Vayena (Epidemiology, Biostatistics & Prevention Institute, University of Zurich), François Grey (Citizen Cyberscience Centre, University of Geneva), Dirk Helbing (Computational Social Science, ETH Zurich), and Alice Sheppard (Galaxy Zoo).

The afternoon was dedicated to two workshops. The first, Policy Recommendations for Funding, was moderated by Jean-Claude Burgelman, who noted that, as a policy maker, defining everything as citizen science – calling any informal participation in science citizen science – is not useful for policy making. Some of the recommendations that the people in the room made include: there is a need to be clear about innovation and the sharing of intellectual property. There is a need to ensure that there are clear benefits to citizen scientists – a commitment to professional training, and opportunities that are opened to them. Every research institution should develop a policy on open science and, as part of that, citizen science. There is a need for data management plans. Software development and infrastructure are often not well covered in usual funding. Citizen science requires a social infrastructure that is not part of the current rewarding of scientists and organisations. Citizen science can be used as a demonstrator area for open science – open data, open access, open source – as a way to transform the field. We need to consider how to work at local, regional, national, and European levels, and we also need action to increase participation in citizen science across Europe. There is also an issue of a ‘right to data’ that should allow people access to their own data. There is a need to define parameters for high-quality science research, and the document should be usable outside this specific context. The quality of the science needs to be equivalent to that of professional scientists; the localness of citizen science is an issue that limits academic interest – there isn’t enough recognition of the local aspects and interests.

The second workshop looked at Standards for Citizen Science, moderated by Kevin Schawinski (Astrophysics, ETH Zurich), and included some of the following points: do we need standards and rules, or should we wait and let them emerge over time? Maybe begin with guidelines, and then let them evolve over time. Citizen scientists need to be involved in setting the standards and working through them. Standards can be used in multiple ways, as a reference to allow people to see how things should work. Good principles can express an aspiration to excellence. The quality of the research is multi-faceted – one can consider the outcomes (the goals of the project) and evaluate the process through which they were achieved. Acknowledging citizen scientists in scientific outcomes can be challenging – some people want to be named and some don’t. There are also many differing practices of authorship and participation between scientific fields. It is worth asking the people who participate what they want.

Conclusions: Results and Next Steps was set out by Daniel Wyler and Katrien Maes (Chief Policy Officer, League of European Research Universities). ‘Citizens are not organised’, so the feedback on the documents has so far come from the more institutional partners – there is a need to engage with the public much more. The general view is that it is worth considering guidelines and principles for universities – they can help funders to fund projects and put citizen science in focus. The guidelines should include parameters about different levels of participation and engagement. Acknowledgement is an issue that depends on the scientific field, and the guidelines should allow for variation in practices. There is an issue with judging and assessing citizen science completely differently – we should ensure it is valued in a similar way. For medical research, there is a need to consider how to approach personal data. There should be a single point of entry where people can get support for education and training.

From LERU’s perspective, the papers are important to put citizen science on the map and raise attention. There isn’t just one citizen science, so there is plenty of information and awareness-raising required to make universities aware of the opportunities. For universities, the paper will need to take a narrower view of citizen science – especially integrating it with the open science agenda and with the activities of research universities. The aim is guidelines and principles, not regulations and strict rules, as these would not be appropriate for the field.