Securing funding and balancing efforts: a tale of 21 research applications

EU H2020 Participants Portal

The last 3 months were a gradual sigh of relief for the Extreme Citizen Science group (ExCiteS), Mapping for Change (MfC), and for me. As the UCL Engineering website announced, the ExCiteS group, which I co-direct, secured funding through 3 research grants from the European Union’s Horizon 2020 programme (H2020) – enough to continue our work for the next 3 years, which is excellent. As is usual in publicity, UCL celebrates successes, not the work that led to them. However, the effort of securing funding has implications, and it is worth reflecting on them – despite the fact that we are in the success camp. While the criticism of the application process for European projects on the ROARS website is a bit exaggerated, it does give a good context for this post. In what follows I cover the context for the need to apply for funding, look at the efforts, successes and failures from mid 2014 to early 2016 (mostly failures), and then look at the implications.

This is not a piece to boast about success or moan about failure, but I find writing a useful way to reflect, and I wanted to take stock of the research application process. I hope that it will help in communicating what the process of securing funding for an interdisciplinary, research-intensive group involves.

Background & context 

The background is that the ExCiteS group started at the end of 2011, with a large group of PhD students – as is common for early-stage research groups. With the support of a UK Engineering and Physical Sciences Research Council (EPSRC) award, which is about to end, it was possible to start the group. With additional funding from European Union (EU) and EPSRC projects, including EveryAware (2011-2014), Citizen Cyberlab (2012-2015), Challenging Risk (2013-2018), and Cap4Access (2014-2016), it was possible to develop the activities of ExCiteS & MfC. This is evident in the software packages that are emerging from our work – Sapelli, GeoKey, and a new version of Community Maps – the methodologies for using these tools within participatory processes, the academic and non-academic outputs, and the fact that people know about our work.

However, it was clear since 2011 that 2015 would be a crunch point, when we would need funding to allow members of the group to move from being PhD students to post-doctoral researchers (postdocs). The financial implication of funding a postdoc is about three times the funding for a PhD student. In addition, while in earlier years members of the group (regardless of their career stage) participated in writing research proposals – and helped win them (e.g. Citizen Cyberlab) – when people are writing up their PhD theses it is inappropriate to expect them to invest a significant amount of time in research applications. Finally, all our funding comes through research projects – we don’t have other sources of income.

Research Applications – effort, successes, failures 

UK Research Councils system (Je-S)

So it was very clear that 2015 was going to be full of research applications. To give an idea of how many there were and the work that was involved, I’m listing them here – more or less in order of effort. I’m providing more details on the successful applications but only partial details on the failed ones – mostly because I didn’t check with the coordinators or the partners to see whether they would allow me to do so.

We started in mid 2014, when we began working on the first version of what is now DITOs. Coordinating an EU H2020 project proposal with 11 partners means that between May and September 2014 we invested an estimated 6 or 7 person months within the group in preparing it. We submitted it in early October, only to be disappointed in early March 2015 when we heard that although we scored highly (13/15), we would not be funded – only 1 project out of the 19 that applied was funded. We then resurrected the proposal in July 2015, dedicated a further 5 person months to it, resubmitted it, and won funding after competition with 56 other proposals – of which only 2 were funded.

The next major investment was in a first-stage proposal to the Citizen Observatories call of H2020. ExCiteS coordinated one proposal, and MfC participated in another. The process required an outline submission and then a full proposal. We worked on the proposal from December 2014 to April 2015, and it wasn’t a huge surprise to discover that 47 proposals were submitted to the first stage, of which 11 progressed to the second. The one coordinated by ExCiteS, with an investment of about 5 person months, scored 7/10, so it didn’t progress to the second stage. MfC also invested 2.5 person months in another proposal, as a partner. This proposal passed the first stage, but failed in the second.

Participating as a major partner in a proposal is also a significant effort, especially in H2020 projects in which there are multiple partners. The collaborative effort of MfC and ExCiteS in the proposal that emerged as WeGovNow! required about 4 person months. The proposal was submitted twice – first in July 2015 to a call for “Collective Awareness Platforms for Sustainability and Social Innovation”, which received 193 proposals of which 22 were funded, and then again in December 2015 to a call for “Meeting new societal needs by using emerging technologies in the public sector”, to which only 2 proposals were submitted (you can be lucky sometimes!).

The proposal for the European Research Council (ERC) was developed between May and June 2015, with about 3 person months of effort – and luckily it was successful. It competed with 1,953 applications in total (423 in the social sciences), of which 277 (59 in the social sciences) were successful – about a 14% success rate.

Another fellowship proposal, in response to an EPSRC call, passed the first round but failed at the interview stage (where 2 out of 5 candidates were selected). This one was developed from May 2015 and failed in February 2016, after an effort of about 2.5 person months.

We also developed an Economic and Social Research Council (ESRC) responsive-mode proposal, which means that we applied to the general funds and not to a specific call. We collaborated with colleagues at the Institute of Education from January 2015 to July 2015, with an effort of about 2.5 person months, but we learned in March 2016 that it was unsuccessful.

Another 2 person months were dedicated to an ESRC call for methodological research, to which 65 applications were submitted, of which 6 were funded; our proposal ranked 22nd out of about 65. In parallel, I had a small part in another proposal for the same call, which was ranked 56th.

We invested a month in an unsuccessful application to the Wellcome Trust Science Learning+ call in July 2014.

Less time was spent on proposals where we had a smaller role – a failed H2020 ICT proposal in April 2015, or another H2020 proposal about Integrating Society in Science and Innovation in September 2015. This also includes a successful proposal to the Climate and Development Knowledge Network (CDKN). Because of all the other proposals, information such as the description of our activities, CVs and other bits was ready and adjusted quite easily.

ExCiteS and MfC also participated in an EU LIFE proposal – this was funding for applied activities, with a very low level of funding (only 50%), so there was a need to think carefully about which add-on activities could be used for it. However, as the proposal failed, it wasn’t an issue.

Along the way, there were also small parts in an application to the Wellcome Trust in early 2015 (failed), in an EPSRC programme grant (a large grant with many partners) that was organised at UCL and to which we dedicated time from June 2014 to February 2015 (ditto), an outline for the Leverhulme Trust (ditto), an ERC research proposal (ditto), and finally a COST Action application for a research network on Citizen Science (which was successful!).

So let’s summarise all these proposals – successes, failures, and effort – in one table. Lines where the funder is marked in bold indicate that we coordinated the proposal:

Funder Effort (Months) Success/Failure
1 H2020 7 Failure
2 H2020 5 Success
3 H2020 5 Failure
4 H2020 2.5 Failure
5 H2020 4 Failure
6 H2020 1 Success
7 ERC 3 Success
8 EPSRC 2.5 Failure
9 ESRC 2.5 Failure
10 ESRC 2 Failure
11 ESRC 0.5 Failure
12 Wellcome 1 Failure
13 H2020 0.25 Failure
14 H2020 0.25 Failure
15 CDKN 0.25 Success
16 EU LIFE 0.5 Failure
17 Wellcome 0.5 Failure
18 EPSRC 0.5 Failure
19 Leverhulme 0.5 Failure
20 ERC 0.25 Failure
21 COST 0.5 Success

So what?

We applied to lots of funders and mechanisms – fellowships, calls for proposals, and open calls for research ideas. We applied to UK funders and to the EU. As we are working in an interdisciplinary area, we applied to the social sciences as well as engineering, Information and Communication Technologies (ICT), and areas in between. In a third of the cases we led the proposal, but in the rest we joined proposals that were set up by others. So the first point to notice is that we didn’t fixate on one source, mechanism or role.

As the table shows, we’re not doing badly. Out of the 7 proposals that we led, 2 succeeded (about 30%), and among the 14 that we participated in, 3 succeeded (about 20%). The overall success rate is about a quarter. Over about 18 months a group of about 10 people invested circa 40 person months in securing future funding for the next 3 years (about 20% of the group’s time), which doesn’t sound excessive.
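As a quick sanity check, the figures above can be tallied directly from the table. A minimal Python sketch (the effort figures and outcomes are copied from the 21 table rows; the funder labels are only for reference):

```python
# Each tuple: (funder, effort in person months, funded?) - rows 1-21 from the table above.
proposals = [
    ("H2020", 7, False), ("H2020", 5, True), ("H2020", 5, False),
    ("H2020", 2.5, False), ("H2020", 4, False), ("H2020", 1, True),
    ("ERC", 3, True), ("EPSRC", 2.5, False), ("ESRC", 2.5, False),
    ("ESRC", 2, False), ("ESRC", 0.5, False), ("Wellcome", 1, False),
    ("H2020", 0.25, False), ("H2020", 0.25, False), ("CDKN", 0.25, True),
    ("EU LIFE", 0.5, False), ("Wellcome", 0.5, False), ("EPSRC", 0.5, False),
    ("Leverhulme", 0.5, False), ("ERC", 0.25, False), ("COST", 0.5, True),
]

total_effort = sum(effort for _, effort, _ in proposals)   # total person months invested
successes = sum(1 for _, _, funded in proposals if funded) # number of funded proposals

print(f"{len(proposals)} proposals, {total_effort} person months, "
      f"{successes} funded ({successes / len(proposals):.0%})")
# -> 21 proposals, 39.5 person months, 5 funded (24%)
```

So the table adds up to just under 40 person months and 5 successes out of 21 – the “about a quarter” overall rate mentioned above.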

However, the load was not spread equally, so some people spent a very significant amount of their time on proposals. I was involved in almost all of these 21 proposals during this period – much more in those that we led – and in some of those where we participated as a partner, I was the only person in the group who worked on the proposal. It was increasingly challenging to keep submitting and working on proposals with so much uncertainty and the very long gap between submission and results (look above, and you’ll see that it can be up to 9 months). Because of the uncertainty about success, and an assumption that only 20% would be successful at best (that’s 4 wasted proposals for every successful one), I felt that I needed to keep going, but there were moments when I thought that it was a doomed effort.

There is also the issue of morale – as should be obvious from the fact that we announced only the successes recently. As the failures mounted up during the second half of 2015, it was harder to be cheerful. Because of the long gap between proposal submission and result that I mentioned, the future of the group is unknown for a significant period, and that influences people’s decisions about staying or leaving, and how to use the funds that we do have.

Implications

Leaving aside that by early 2016 it became hard to find the energy to be involved in more proposal writing, there is an issue about how interdisciplinary research groups are funded. While we can apply to more funding opportunities, the responses to the failures indicated that it’s tough to convince disciplinary evaluators that the work being done is important. This means that we knew all along that we needed to apply more. Maybe it was a coincidence, but the EU funding evaluations seem more open to these ideas than UK funders.

Second, such a high number of applications takes time away from other research activities (e.g. check my publications in 2014-2015). Applications, with all the effort associated with them, are not seen as an academic output, so all the effort of writing the text, proofing it and revising it is frequently wasted when a proposal fails.

Third, all these proposals burn social capital, ‘business capital’, and cash reserves – e.g. paying a consultant to help with an H2020 proposal or covering the costs of meetings, asking for letters of support from business partners, raising hopes and making links with partners only to write at the end that we won’t be working together beyond the proposal. There are also negotiations with the Head of Department on the level of support from the university, and requests for help from research facilitators, financial administrators and other people at the university.

Fourth, considering how much effort, experience, support – and luck – is needed to secure research funding, I’m not surprised that some people are so despondent about their chances of doing so. All the above is the result of a large team, and I would argue that the key to keeping up the stamina is team spirit and having a clear goal on why the hell you want the funding in the first place (in our case, we want to materialise Extreme Citizen Science).

Finally, looking at the number of submissions, the rankings and the general success rate of applications in the areas that we applied to (about 15% or less), I have concerns that under such conditions there is a ‘crowding out’ situation, in which groups that have better resources around them (e.g. the institutional infrastructure at UCL, our internal experience) make it harder for new entrants or smaller groups. At a higher funding rate, we could have secured the funding with fewer proposals, at which point we would have stopped applying, and therefore allowed others to secure funding.

Epilogue

I have no plans for another period like the one that led to the current results. I am incredibly grateful to have had such a level of success, which is down to the institution that I’m in, the hard work, the evolving experience in preparing proposals and, always, luck. It is very possible that this post would have counted 19 failures, so we’re very grateful to all the people who evaluated our proposals positively and gave us the funding.

Back to the funding: with all the successes, in people terms, we’ve secured funding for the 10 people that I’ve mentioned for 3 years, with a further 6 PhD students joining us over that period. There are still other people in the group who will need funding soon, so we will probably put the accumulated knowledge and experience to use again soon.

New Paper: Extreme Citizen Science – a new approach (in Portuguese)

One of the advantages of working in a multi-disciplinary and culturally diverse group is that I can become a co-author of papers in languages that I do not speak. Carolina Comandulli, who is doing her PhD research in the Brazil/Peru border area, led the writing of a paper on ‘Extreme Citizen Science’ – we collaborated on the writing in English, and then the paper was translated into Portuguese. You can find it on the ICMBIO website; the abstract, translated back into English, is provided below.

Carolina Comandulli, Michalis Vitos, Gillian Conquest, Julia Altenbuchner, Matthias Stevens, Jerome Lewis, Muki Haklay

Biodiversity conservation is an issue of concern across the world. In recent decades, hundreds of protected areas have been created to ensure the preservation of the planet’s biodiversity. A large number of protected areas are inhabited by communities that depend on the use of their natural resources not only for their survival, but also for their social and cultural reproduction. In many cases, local populations have been directly responsible for the sustainable management of these complex ecosystems for centuries. Citizen Science initiatives – understood as the participation of amateurs, volunteers and enthusiasts in scientific projects – have engaged the public in scientific production and in biodiversity monitoring projects, but have limited this participation to data collection, and have typically taken place in affluent locations, excluding populations that are non-literate or have limited literacy and that live in remote areas. Traditional peoples and communities know the environmental aspects of the areas they inhabit, which can be beneficial for the successful management and monitoring of biodiversity. Therefore, when it comes to monitoring and protecting biodiversity in areas inhabited by human populations, their involvement is central and can lead to a scenario in which all the parties involved benefit. Extreme Citizen Science (ExCiteS) is an interdisciplinary research group created in 2011 at University College London with the aim of advancing the current set of Citizen Science practices. The idea is to enable any community, anywhere in the world – from marginalised groups living on the outskirts of urban areas to hunter-gatherer groups in the Amazon forest – to start a Citizen Science project to deal with their own issues.
This paper presents the various aspects that make Citizen Science ‘extreme’ in the work of the ExCiteS group, by setting out its theories, methods and tools, and the current case studies that involve traditional communities around the world. Finally, it highlights the group’s main concern, which is to make participation truly effective, and suggests how biodiversity monitoring initiatives can be carried out collaboratively, bringing benefits to all the actors involved.

Introducing “Doing It Together Science” – an EU citizen science project

The full details of the new project are over on the Extreme Citizen Science blog (link below), so here is a two-line summary. Doing It Together Science (DITOs) is a three-year programme to increase public participation in scientific research and innovation across Europe. The project includes 11 partners and is coordinated by the UCL Extreme Citizen Science group. DITOs aims to enable people to contribute to science at a level of participation suitable for them, whether that is using a crowdsourcing app to log air quality or working in a citizen bio-hacking lab.

Source: Introducing “Doing It Together Science” – an EU citizen science project

ERC Advanced Grant: Extreme Citizen Science: Analysis and Visualisation

Now that the press release by the European Research Council (ERC) is out, it’s time to share the great news: the Extreme Citizen Science group has secured €2.5m from the ERC to continue our journey towards Intelligent Maps. Building on the work that we’ve done with the support of the EPSRC in Extreme Citizen Science, and the development of Sapelli, we now have the base funding to continue the work for the next 5 years.

This is a summary of the project:

The challenge of Extreme Citizen Science is to enable any community, regardless of literacy or education, to initiate, run, and use the results of a local citizen science activity, so they can be empowered to address and solve issues that concern them. Citizen Science is understood here as the participation of members of the public in a scientific project, from shaping the question, to collecting the data, analysing it and using the knowledge that emerges from it. Over the past 4 years, the Extreme Citizen Science programme at UCL has demonstrated that non-literate people and those with limited technical literacy can participate in formulating research questions and collecting the data that is important to them. Extreme Citizen Science: Analysis and Visualisation (ECSAnVis)* takes the next ambitious step – developing geographical analysis and visualisation tools that can be used, successfully, by people with limited literacy, in a culturally appropriate way. At the core of the proposal is the imperative to see technology as part of socially embedded practices and culture and to avoid ‘technical fixes’.

The development of novel, socially and culturally accessible Geographic Information System (GIS) interface and underlying algorithms, will provide communities with tools to support them to combine their local environmental knowledge with scientific analysis to improve environmental management. In an exciting collaboration with local indigenous partners on case studies in critically important, yet fragile and menaced ecosystems in the Amazon and the Congo-basin, our network of anthropologists, ecologists, computer scientists, designers and electronic engineers will develop innovative hardware, software and participatory methodologies that will enable any community to use this innovative GIS.

The research will contribute to the fields of geography, geographic information science, anthropology, development, agronomy and conservation.

* ECSAnVis can be pronounced EXANVIS, but it’s not the best acronym in the world, so we’re going to use Intelligent Maps to say what this project is about!

 

Algorithmic Governance and its Discontents

Continuing with relevant posts from the Algorithmic Governance workshop, one of the speakers at the workshop, Anthony Behan, explores Algorithmic Governance and its Discontents on his blog, and in particular he points out that

In a comprehensive and packed agenda, politics barely got a mention – but that too needs considerable discussion.

John Danaher has done some initial work to address this challenge. He combines List’s ‘logical space of democracy‘ as a politics within which ‘collective decision procedures’ are agreed, with a four-component model of algorithmic decision-making, being ‘sensing-processing-execution-learning’. Adopting a basic premise that each component can be automated or human, the logic extends to a matrix of options within which a collective decision procedure can be agreed. It is a very useful abstract framework, though I would add a number of additional points.

Read more….

 

Algorithmic governance in environmental information (or how technophilia shape environmental democracy)

These are the slides from my talk at the Algorithmic Governance workshop (for which there are lengthy notes in the previous post). The workshop explored the many ethical, legal and conceptual issues in the transition to Big Data and algorithm-based decision-making.

My contribution to the discussion is based on previous thoughts on environmental information and the public use of it. Inherently, I see the relationships between environmental decision-making, information, and information systems as something that needs to be examined through the prism of the long history that links them. This way we can make sense of the current trends. These three areas have been deeply linked throughout the history of the modern environmental movement since the 1960s (hence the Apollo 8 Earth image at the beginning), and the Christmas message from the crew, with its reference to Genesis (see below), helped in making the message stronger.

To demonstrate the way this triplet evolved, I’m using texts from official documents – the Stockholm 1972 declaration, the Rio 1992 Agenda 21, etc. They are fairly consistent in their belief in the power of information systems to solve environmental challenges. The core aspects of environmental technophilia are summarised in slide 10.

This leads to environmental democracy principles (slide 11) and the assumptions behind them (slide 12). While information is open, that doesn’t mean it’s useful or accessible to members of the public. This was true when raw air monitoring observations were released as open data in 1997 (before anyone knew the term), and although we have better tools (e.g. Google Earth), there are consistent challenges in making information meaningful – what do you do with an Environment Agency DSM if you don’t know what it is or how to use a GIS? How do you interpret a Global Forest Watch analysis about change in tree cover in your area if you are not used to interpreting remote sensing data (a big data analysis and algorithmic governance example)? I therefore return to the hierarchy of technical knowledge and ability to use information (in slide 20) that I covered in ‘Neogeography and the delusion of democratisation’, and look at how the opportunities and barriers have changed over the years in slide 21.

The last slides show that despite all the technical advancement, we can have situations such as the water contamination in Flint, Michigan, which demonstrates that some of the problems from the 1960s that were supposed to be solved – well monitored, with clear regulations and processes – have come back because of negligence and a lack of appropriate governance. This is not going to be solved with information systems, although citizen science has a role to play in dealing with this governmental failure. This whole sorry mess, and the re-emergence of air quality as a Western-world environmental problem, is a topic for another discussion…

Algorithmic Governance Workshop (NUI Galway)

Algorithmic Governance Workshop (source: Niall O Brolchain)

The workshop ‘Algorithmic Governance’ was organised as an intensive one-day discussion and development of research needs. As the organisers, Dr John Danaher and Dr Rónán Kennedy, put it:

‘The past decade has seen an explosion in big data analytics and the use  of algorithm-based systems to assist, supplement, or replace human decision-making. This is true in private industry and in public governance. It includes, for example, the use of algorithms in healthcare policy and treatment, in identifying potential tax cheats, and in stopping terrorist plotters. Such systems are attractive in light of the increasing complexity and interconnectedness of society; the general ubiquity and efficiency of ‘smart’ technology, sometimes known as the ‘Internet of Things’; and the cutbacks to government services post-2008.
This trend towards algorithmic governance poses a number of unique challenges to effective and legitimate public-bureaucratic decision-making. Although many are already concerned about the threat to privacy, there is more at stake in the rise of algorithmic governance than this right alone. Algorithms are step-by-step computer coded instructions for taking some input (e.g. tax return/financial data), processing it, and converting it into an output (e.g. recommendation for audit). When algorithms are used to supplement or replace public decision-making, political values and policies have to be translated into computer code. The coders and designers are given a set of instructions (a project ‘spec’) to guide them in this process, but such project specs are often vague and underspecified. Programmers exercise considerable autonomy when translating these requirements into code. The difficulty is that most programmers are unaware of the values and biases that can feed into this process and fail to consider how those values and biases can manifest themselves in practice, invisibly undermining fundamental rights. This is compounded by the fact that ethics and law are not part of the training of most programmers. Indeed, many view the technology as a value-neutral tool. They consequently ignore the ethical ‘gap’ between policy and code. This workshop will bring together an interdisciplinary group of scholars and experts to address the ethical gap between policy and code.’

The workshop was structured around 3 sessions of short presentations of about 12 minutes each, with an immediate discussion, followed by a workshop session to develop research ideas emerging from the presentations. This very long post contains my notes from the meeting. These are my takes, not necessarily those of the presenters. For another summary of the day, check John Danaher’s blog post.

Session 1: Perspective on Algorithmic Governance

Professor Willie Golden (NUI Galway), ‘Algorithmic governance: Old or New Problem?’, focused on an information science perspective. We need to consider the history – an R.O. Mason paper from 1971 already questioned the balance between the decision-making that should be done by humans and the part that should be done by the system. The issue is the level of assumptions that are being integrated into the information system. Today the amount of data being collected, and the assumptions about what it does in the world, are growing, but we need to remain sceptical about the value of the actionable information. Algorithms need managers too – Davenport, in HBR 2013, pointed out that the questions asked by decision-makers before and after the processing are critical to the effective use of data analysis systems. In addition, people are very concerned about data – we’re complicit in handing over a lot of data as consumers, and the Internet of Things (IoT) will reveal much more. Deborah Estrin, in CACM 2014, provided a viewpoint – ‘small data, where n = me’ – in which she highlighted the importance of health information: the monitoring of personal information can provide a baseline on you. However, this information can be handed over to health insurance companies, and the question is what control you have over it. Another aspect is Artificial Intelligence – Turing in the 1950s proposed the famous ‘Turing test’ for AI. In the past 3-4 years, AI has become much more visible. The difference is that AI learns, which raises the question of how you can monitor a thing that learns and changes over time as it gets better. AI doesn’t have self-awareness, as Davenport noted in ‘Just How Smart Are Smart Machines?’ (2015), which argues that machines can be more accurate than humans in analysing images. We may need to be more proactive than we used to be.

Dr Kalpana Shankar (UCD), ‘Algorithmic Governance – and the Death of Governance?’, focused on digital curation/data sustainability and the implications for governance. We invest in data curation as a socio-technical practice, but we need to explore what it does and how effective current practices are. What are the implications if we don’t do the ‘data labour’ needed to maintain data, to avoid ‘data tumbleweed’? We are selecting data sets and preserving them for the short and long term. There is an assumption that ‘the data is there’ and that it doesn’t need special attention. The choices that people make in preserving data sets will influence the patterns of what appears later and the directions of research. Downstream, there are all sorts of business arrangements for making data available and preserving it – the decisions shape the disciplines and discourses around it. For example, preserving census data influenced many of the social sciences and directed them towards certain types of questions. Data archives influenced the social science disciplines – e.g. towards using large data sets and dismissing ethnographic and qualitative data. We need to look into the governance of data institutions and how it influences the information that is stored and shared. What the role of curating data is once data becomes open is another question. An example of the complexity is provided by a study of a system for ‘match making’ refugees to mentors, which is used by an NGO: the system dates from 2006 and the job classification it uses was last updated in 2011, but the organisation that uses the system cannot afford to update it, and there are impacts on those who are affected by the system.

Professor John Morison (QUB), ‘Algorithmic Governmentality’. From a law perspective, there is an issue of techno-optimism. He is interested in e-participation and participation in government. There are issues of open and big data, where we are given a vision of open and accountable government and a growth in democratisation – e.g. the social media revolution, or opening up government through data. We see a fantasy of abundance, and there are also new feedback loops – technological solutionism offering technical fixes to problems in politics; simplistic solutions to complex issues. For example, in research into cybersecurity there are expectations of creating code as a scholarly output. Big Data has different creators (from Google to national security bodies) and they don’t have the same goals. There are also issues of technological authoritarianism as a tool of control. Algorithmic governance requires engaging with epistemology, ontology and governance. We need to consider the impact on democracy – the AI approach argues for democratisation through the ‘n = all’ argument. Leaving aside the ability to ingest all the data, it seems to assume that subjects are no longer viewed as individuals but as aggregates that can be manipulated and acted upon. In algorithmic governance there is a false emancipation through the promise of inclusiveness; instead, it responds to predictions that are created from data analysis. The analysis claims to be a scientific way of responding to social needs, and ideas of individual agency disappear. Here we can use Foucault’s analysis of power to understand agency. Finally, we also see government without politics – making subjects and objects amenable to action. There is no selfhood, just group prediction. This transcends and obviates many aspects of citizenship.

Niall O’Brolchain (Insight Centre), ‘The Open Government’. There is a difference between government and governance. The eGov unit in the Insight Centre for Data Analytics in Galway acts as an Open Data Institute node and is part of the Open Government Partnership (OGP). The OGP involves 66 countries and aims to promote transparency, empower citizens, fight corruption, and harness new technologies to strengthen governance. It started in 2011 and now involves 1,500 people, with ministerial-level involvement. The OGP has a set of principles, with eligibility criteria that involve civil society and government on equal terms – the aim is to provide information so that it increases civic participation, requires the highest standards of professional integrity throughout administration, and increases access to new technologies for openness and accountability. It is generally considered that the benefits of technology for citizenship outweigh the disadvantages. The grand challenges are improving public services, increasing public integrity, managing public resources, safer communities, and corporate accountability. Not surprisingly, corporate accountability is one of the weakest.

Discussion:

Using the Foucault framework, one question is about the potential for resistance that is created as power increases. There are cases to discuss around hacktivism and the use of technologies. There is an issue of the ability to resist power – e.g. the passing of details between companies based on predictions. The issue is also about who uses the data and how they control it. Sometimes there is a need to adopt the approaches that illegal actors use to hide their tracks in order to resist it.
A challenge for the workshop is that the area is so wide that we need to focus on specific aspects – e.g. the use of systems in government while the technology is changing, or interoperability. There are overlaps between environmental democracy and open data, with many similar actors – and with much more buy-in from governments and officials. Technological change has also made release easier for governments (e.g. Mexico releasing environmental data under the OGP).
Sovereignty is also an issue – it has been lost to technology and corporations over recent years, and indeed corporate accountability is noted in the OGP framework as an area that needs more attention.
There is also an issue about information that is not allowed to exist; absences and silences are important. There are issues of consent – network effects remove the option of consent, and therefore society and academics can push businesses to behave in a specific way socially. Keeping information and attributing it to individuals is the crux of the matter and where governance should come in. You have to communicate over the internet about who you are, but that doesn’t mean we can’t dictate to corporations what they are allowed to do with it and how to use it. We can also consider privacy by design.

Session 2: Algorithmic Governance and the State

Dr Brendan Flynn (NUI Galway), ‘When Big Data Meets Artificial Intelligence will Governance by Algorithm be More or Less Likely to Go to War?’. By looking at autonomous weapons we can learn about algorithmic governance in general. Algorithmic decision-support systems have a role to play within a very narrow scope – to do what the stock market does: identify very dangerous responses quickly and stop them. In terms of politics, many things will continue as before. One lesson from military systems is that there is always a ‘human in the loop’ – and that is sometimes the problem. There will be HCI issues with making decisions quickly based on algorithms, and things can go very wrong. There are false-positive cases, such as the USS Vincennes, which used a decision-support system in the decision to shoot down a passenger plane. Decision taking is limited by decision shaping, which is handed more and more to algorithms. There are issues with the way military practice understands command responsibility in the Navy, which sets a very high standard of responsibility for failure. There is a need to see how to interpret information from black boxes regarding false positives and false negatives. We can use this extreme example to learn about civic cases, where we need to hold officials to high standards. If we apply some version of command responsibility to those who use algorithms in governance, it becomes possible to place responsibility not only on the user of the algorithm but also on the creators of the code.

Dr Maria Murphy (Maynooth), ‘Algorithmic Surveillance: True Negatives’. We all know that algorithmic interrogation of data for crime prevention is becoming commonplace, in companies as well as governments. We know that decisions can be about life and death. When considering surveillance, there are many issues – consider the probability of wrongly assuming someone to be a potential terrorist or extremist. In human rights law we can use the concept of private life, which algorithmic processing can challenge. Article 8 of the European Convention on Human Rights is not absolute and can be limited in specific cases – and the ECHR asks governments for justifications, to show that they follow the guidelines. Surveillance regulations need to explicitly identify the types of people and crimes that are open to observation; you can’t say that everyone is open to surveillance. Specific keywords can be judged, but what about AI and machine learning, where even the creator can’t know what will come out? There is also a need to show proportionality in preventing social harm. As for false positives in algorithms: because terrorism is so rare, there is a serious risk of a bad impact on the prevention of terrorism or crime. Under the assumption that more data is better data, we are left with generalised surveillance, which is seen as highly problematic. Interestingly, the ECHR does see a lot of potential in technologies and in their use.
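The base-rate problem behind the false-positives point can be made concrete with a small calculation. This is a minimal sketch with illustrative numbers of my own, not figures from the talk:

```python
# Illustrative base-rate calculation: even an accurate classifier
# produces overwhelmingly false positives when the target class is rare.
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """P(actual positive | flagged), via Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers: 1 in 100,000 people is a genuine threat, the
# system catches 99% of them and wrongly flags 1% of everyone else.
ppv = positive_predictive_value(1e-5, 0.99, 0.01)
print(f"{ppv:.4%}")  # roughly 0.1% of flagged people are actual threats
```

Under these assumed rates, more than 99.9% of the people the system flags are innocent, which is the core of the proportionality argument.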

Professor Dag Weise Schartum (University of Oslo), ‘Transformation of Law into Algorithm’. His focus was on how algorithms are created, thinking about this within government systems, which are the bedrock of our welfare systems – and that is the way they appear in law. Algorithms are a form of decision-making: first general decisions about what should be regarded, and then the making of individual decisions. Decisions are translated into computer code: the raw material is the legal decision-making process, which is transformed into algorithms. Programmers do have autonomy when translating requirements into code – the Norwegian experience shows close work with legal experts to implement the code. You can think of an ideal model of transforming a system into algorithms, which exists within a domain – a service or authority of a government – and is done for the purpose of decision-making. The process runs from the qualification of legal sources, through interpretations done in natural language, to a specification of rules, and then into a formal language that is used for programming and modelling. There are iterations throughout the process; the system is tested, goes through a process of confirming the specification, and then gets into use. It is too complex to test every aspect, but once the specifications are confirmed, the system is used for decision-making. In terms of research, we need to understand the transformation process in different agencies – the overall organisation, the model of system development, competences, and the degree of law-making effects. The challenge is the need to reform the system: adapting to political and social change over time. The design needs to make the system flexible, allowing openness rather than rigidity.
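The transformation pipeline described here can be illustrated with a toy example. The benefit rule, thresholds and field names below are entirely invented for illustration; the point is that interpretation choices get frozen into code:

```python
# Hypothetical sketch of law-to-algorithm transformation: a legal rule
# interpreted in natural language, specified formally, then programmed.
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    annual_income: float
    resident_years: int

def eligible_for_benefit(a: Applicant) -> bool:
    """Formalised version of an invented rule: 'adult residents on a
    low income qualify'. The thresholds, and what counts as income or
    residence, are exactly where programmer autonomy enters."""
    return a.age >= 18 and a.annual_income < 20_000 and a.resident_years >= 3

print(eligible_for_benefit(Applicant(30, 15_000, 5)))  # True
print(eligible_for_benefit(Applicant(30, 25_000, 5)))  # False
```

Once such a specification is confirmed and deployed, every ambiguity resolved by the programmer becomes, in effect, a binding interpretation of the law.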

Heike Felzman (NUI Galway), ‘The Imputation of Mental Health from Social Media Contributions’, coming from a philosophy and psychology background. Algorithms can access different sources – blogs, social media – and this personal data is used for mood analysis, which can lead to observations about mental health. Since 2013 there have been examples of identifying affective disorders, and the research doesn’t consider the ethical implications. The data being used includes content, individual metadata such as the time of online activities, the length of contributions, and typing speed, as well as network characteristics and biosensing such as voice and facial expressions. Some ethical challenges include: first, contextual integrity (Nissenbaum 2004/2009) – privacy expectations are context specific, not constant rules. Second, lack of vulnerability protection – analysis of mental health breaches people’s right to protect their health information. Third, potential negative consequences, with impacts on employment, insurance, etc. Finally, the irrelevance of consent – some studies included consent during development, but what about applying the system in the world? We see no informed consent, no opt-out, no content-related vulnerability protections, no duty of care or risk mitigation, no feedback, and an unlimited number of participants. All of this is in contrast to the practices in Human Subjects Research guidelines.

Discussion:

In terms of surveillance, we should think about self-surveillance, in which citizens provide the details of surveillance themselves. Surveillance is not only negative – modern approaches are not used solely for negative reasons. There is also a hoarding mentality in the military-industrial complex.
The area of command responsibility received attention, with discussion of liability and the different ways in which courts treat military versus civilian responsibility.

Panel 3: Algorithmic Governance in Practice

Professor Burkhard Schafer (Edinburgh), ‘Exhibit A – Algorithms as Evidence in Legal Fact Finding’. The discussion of legal aspects can easily go back to 1066 – you can go through a whole history, and there are many links from medieval law to today. As a regulatory tool, there is the issue of the rules of proof. Legal scholars don’t focus enough on the importance of evidence and how to understand it. Regulation of technology is not just about the law but about implementation on the ground, for example with data protection legislation. In a recent NESTA meeting, there was a discussion of the implications of Big Data – the use of personal data is not the only issue. For example, a citizen science project that shows low exposure to emissions in a given area could lead to a decision that the location where citizens monitored is the perfect place for a polluting activity – harming the very people who collected the data. Strictly speaking, this is not a data protection case. How can a citizen object to the ‘computer says no’ syndrome? What are the minimum criteria for challenging such a decision? What are the procedural rules of fairness? Having a meaningful cross-examination is difficult in such cases. Courts are sometimes happy to accept and use computer models, and at other times reluctant to take them. There are issues about the burden of proof from systems (e.g. showing that an ATM was working correctly when a fraud occurred). DNA tests rely on computer modelling, but on systems that are proprietary and closed. Many algorithms are hidden for business confidentiality, and these issues are being explored. One approach is to rely on open source tools; replication is another way of ensuring the results; escrow of the model with a third party is yet another option. Finally, there is the possibility of questioning software in natural language.

Dr Aisling de Paor (DCU), ‘Algorithmic Governance and Genetic Information’. There is an issue in law, with massive applications of genetic information. There is rapid technological advancement in many settings – genetic testing, pharma and many other areas – giving indications of behavioural traits, disability, and more, and there are competing rights and interests. The advances are used in health care, and the technology is becoming cheaper (already below $1,000). In commercial settings, genetic information is used in insurance, and it is valuable for economy and efficiency in medical settings; there is also a focus on personalised medicine. A lot of the concerns are about the misuse of algorithms – for example, predictive assumptions about impacts on behaviour and health. The current state of predictability is limited, especially regarding the environmental impacts on the expression of genes. There are conflicting rights – efficiency and economic benefits set against human rights, e.g. the right to privacy. There is also the right to non-discrimination – making decisions on the basis of probability may be deemed discriminatory. There are wider societal and public policy concerns – the possible creation of a genetic underclass and the potential to exacerbate societal stigma about disability, disease and difference. We need to identify the gaps between law, policy and code, and to consider uses, commercial interests and potential abuses.

Anthony Behan (IBM, but in a personal capacity), ‘Ad Tech, Big Data and Prediction Markets: The Value of Probability’. Advertising is a very useful use case for considering what happens in such governance processes. What happens in the 200 milliseconds of an ad auction, which is the standard on the internet? The process of real-time bidding is becoming standardised. It starts with a click – the publisher invokes an API and gives information about the interactions of the user based on their cookie, along with various IDs. The Supply Side Platform (SSP) opens an auction. On the demand side, there are advertisers that want to push content to people – by age group, demographic, day, time, and objectives such as click-through rates. The Demand Side Platform (DSP) looks at the SSPs, and each SSP is connected to hundreds of DSPs; complex relationships exist between these systems. DSPs compute a probability score that the user will engage in the way the advertiser wants, and offer how much it is worth to them – all in micropayments. The data management platform (DMP) is important for improving the bidding: information about users, platforms and context at specific times and places helps to guess how people tend to behave. The advertising economy of the internet is based on this structure. We get abstractions of intent – the more privacy was invaded to understand personality and intent, the less interest there was in a specific person and the more in probabilities and aggregates. People are viewed as a current identity and a current intent, and it’s all about mathematics – there is a huge volume of transactions, and the inventory becomes more valuable. The interactions become more diverse with the Internet of Things. The internet has become a ‘data farm’ – we started with the concept that people are valuable, and moved to the view that data is valuable and to asking how we can extract it from people. Advertising extends into the whole commerce element.
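The SSP/DSP flow described above can be sketched in a few lines. All names, scores and prices here are invented for illustration, and real exchanges are far more elaborate (many run a variant of a second-price auction, which is what this sketch assumes):

```python
# Minimal sketch of a real-time bidding auction, with invented data.
def run_auction(bid_request, dsps):
    """SSP side: collect bids from DSPs for one impression, pick a winner."""
    bids = [(dsp["name"], dsp["bid"](bid_request)) for dsp in dsps]
    bids = [(name, price) for name, price in bids if price > 0]  # drop no-bids
    if not bids:
        return None
    bids.sort(key=lambda b: b[1], reverse=True)
    winner, top = bids[0]
    # Second-price rule: the winner pays just above the runner-up's bid.
    clearing = bids[1][1] + 0.01 if len(bids) > 1 else top
    return winner, round(clearing, 2)

# Each DSP scores the request (cookie, context) into an expected value.
dsps = [
    {"name": "dsp_a", "bid": lambda req: 2.50 if req["segment"] == "sports" else 0},
    {"name": "dsp_b", "bid": lambda req: 1.80},
]
print(run_auction({"segment": "sports"}, dsps))  # ('dsp_a', 1.81)
```

The point of the sketch is the economics: each DSP's bid encodes a probability-weighted guess about the user, so the auction trades in aggregates and likelihoods rather than in the person themselves.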

I’ll blog about my talk ‘Algorithmic Governance in Environmental Information (or How Technophilia Shapes Environmental Democracy) later.

Discussion:

There are issues with genetics and eugenics. Eugenics fell out of favour because of scientific problems, and the new genetics claims much greater predictive power. In neuroscience there are issues with brain scans, which are used despite insufficient scientific evidence. There is an issue with discrimination – we shouldn’t assume that it is only negative; we need to think about unjustified discrimination, as there are different semantics to the word. There are also issues with institutional information infrastructures.