New paper: Usability and interaction dimensions of participatory noise and ecological monitoring

The EveryAware book provided an opportunity to communicate the results of research that Dr Charlene Jennett led, together with two Masters students, Joanne (Jo) Summerfield and Eleonora (Nora) Cognetti, with me as an additional advisor. The research was linked to EveryAware, since Nora explored the user experience of WideNoise, the citizen science noise monitoring app used in the project. There is also a link to the Citizen Cyberlab project, since Jo was looking at the field experience in ecological observation, in particular during a BioBlitz. The chapter provides a Human-Computer Interaction (HCI) perspective on the way technology is used in citizen science projects. You can download the paper here, and the proper citation for the chapter is:

Jennett, C., Cognetti, E., Summerfield, J. and Haklay, M. 2017. Usability and interaction dimensions of participatory noise and ecological monitoring. In Loreto, V., Haklay, M., Hotho, A., Servedio, V.C.P, Stumme, G., Theunis, J., Tria, F. (eds.) Participatory Sensing, Opinions and Collective Awareness. Springer. pp.201-212.

The official version of the paper is on Springer site here.

New Paper: The Three Eras of Environmental Information: The Roles of Experts and the Public

Since the first Eye on Earth conference in 2011, I have been thinking that we’re moving to a new era in the relationship between experts and the public, in terms of access to environmental information and its production. I also gave a talk about this issue at the Wilson Center in 2014. The three eras can be summarised as ‘information for experts, by experts’, ‘information for experts and the public, by experts, and in experts’ language’, and ‘information for experts and the public, by experts and the public, in multiple forms’.

Finally, as part of a book that summarises the outcomes of the EveryAware project, I’ve written a chapter that explores the three eras of environmental information and provides a more detailed account of each of them. You can access the paper here, and it should be cited as:

Haklay, M., 2017, The Three Eras of Environmental Information: The Roles of Experts and the Public, In Loreto, V., Haklay, M., Hotho, A., Servedio, V.C.P, Stumme, G., Theunis, J., Tria, F. (eds.) Participatory Sensing, Opinions and Collective Awareness. Springer. pp.163-179.

The book includes many other chapters, and I’ll put several of them online later in the year. You can find the book on the Springer site.

New paper: Using crowdsourced imagery to detect cultural ecosystem services: a case study in South Wales, UK

Map showing the numbers of contributors for all three photo-sharing platforms across all grid units of the study area. Numbers in parentheses in the legend indicate the number of grid units in the specified range.

Gianfranco Gliozzo, who is completing his Engineering Doctorate at the Extreme Citizen Science group, has written up his first case study and published it in ‘Ecology and Society’. It should be cited as:
Gliozzo, G., N. Pettorelli, and M. Haklay. 2016. Using crowdsourced imagery to detect cultural ecosystem services: a case study in South Wales, UK. Ecology and Society 21(3):6. http://dx.doi.org/10.5751/ES-08436-210306

The paper went through many iterations and took its time, but it is finally out. The abstract is provided below, and the paper, which is open access, can be found here.

The paper explores the role of crowdsourced imagery, building on work that Vyron Antoniou did in 2010 on understanding the geographical aspects of multiple photo-sharing websites. Gianfranco demonstrates how such information can be used to address the policy issue of assessing the cultural benefit of open and protected spaces, known as cultural ecosystem services.

Within ecological research and environmental management, there is currently a focus on demonstrating the links between human well-being and wildlife conservation. Within this framework, there is a clear interest in better understanding how and why people value certain places over others. We introduce a new method that measures cultural preferences by exploring the potential of multiple online georeferenced digital photograph collections. Using ecological and social considerations, our study contributes to the detection of places that provide cultural ecosystem services. The degree of appreciation of a specific place is derived from the number of people taking and sharing pictures of it. The sequence of decisions and actions taken to share a digital picture of a given place includes the effort to travel to the place, the willingness to take a picture, the decision to geolocate the picture, and the action of sharing it through the Internet. Hence, the social activity of sharing pictures leaves digital proxies of spatial preferences, with people sharing specific photos considering the depicted place not only “worth visiting” but also “worth sharing visually.” Using South Wales as a case study, we demonstrate how the proposed methodology can help identify key geographic features of high cultural value. These results highlight how the inclusion of geographical user-generated content, also known as volunteered geographic information, can be very effective in addressing some of the current priorities in conservation. Indeed, the detection of the most appreciated nonurban areas could be used for better prioritization, planning, and management.
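The counting idea at the heart of the method – using the number of distinct people who share geotagged photos of a place as a proxy for its cultural value – can be sketched in a few lines of Python. This is a toy illustration only, not the paper’s actual pipeline; the record format and the one-degree grid size are assumptions:

```python
from collections import defaultdict

def contributors_per_cell(photos, cell_size=1.0):
    """Count distinct contributors per grid cell from geotagged photo records.

    `photos` is an iterable of (user_id, lon, lat) tuples; `cell_size` is the
    grid resolution in degrees. Returns {(cell_x, cell_y): contributor_count}.
    """
    cells = defaultdict(set)
    for user_id, lon, lat in photos:
        cell = (int(lon // cell_size), int(lat // cell_size))
        cells[cell].add(user_id)  # a set de-duplicates repeat photos by one user
    return {cell: len(users) for cell, users in cells.items()}

# Example: three photos by two users in the same cell count as 2 contributors.
photos = [("alice", -3.2, 51.5), ("alice", -3.1, 51.6), ("bob", -3.4, 51.4)]
print(contributors_per_cell(photos))  # {(-4, 51): 2}
```

Counting contributors rather than photos matters: one enthusiast uploading hundreds of pictures of the same hill should not outweigh many people each sharing one.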

A review of volunteered geographic information quality assessment methods

One of the joys of academic life is the opportunity to participate in summer schools – you get a group of researchers, from PhD students to experienced professors, to a nice place in the Italian countryside, and for a week the group focuses on a topic – discussing, demonstrating and trying it out. The Vespucci Institute in 2014, which was dedicated to citizen science and Volunteered Geographic Information (VGI), is an example of this. Such activities are more than a summer retreat – tangible academic outputs emerge from such workshops, demonstrating that valuable work is done!

During the summer school in 2014, Hansi Senaratne suggested writing a review of VGI data quality approaches, and together with Amin Mobasheri and Ahmed Loai Ali (all PhD students) started to develop it. Cristina Capineri and I, as summer school organisers and the chair & vice-chair of the COST ENERGIC network (respectively), gave advice to the group and helped them develop a paper aimed at one of the leading journals of Geographic Information Science (GIScience) – the International Journal of GIScience (IJGIS).

Hansi presenting at the Vespucci summer school

The paper went through the usual peer review process and, with a huge effort from Hansi, Amin & Ahmed, made it all the way to publication. It is now out. The paper is titled ‘A review of volunteered geographic information quality assessment methods‘ and is accessible through the journal’s website. The abstract is provided below, and if you want the pre-print version, you can download it from here.

With the ubiquity of advanced web technologies and location-sensing hand held devices, citizens regardless of their knowledge or expertise, are able to produce spatial information. This phenomenon is known as volunteered geographic information (VGI). During the past decade VGI has been used as a data source supporting a wide range of services, such as environmental monitoring, events reporting, human movement analysis, disaster management, etc. However, these volunteer-contributed data also come with varying quality. Reasons for this are: data is produced by heterogeneous contributors, using various technologies and tools, having different level of details and precision, serving heterogeneous purposes, and a lack of gatekeepers. Crowd-sourcing, social, and geographic approaches have been proposed and later followed to develop appropriate methods to assess the quality measures and indicators of VGI. In this article, we review various quality measures and indicators for selected types of VGI and existing quality assessment methods. As an outcome, the article presents a classification of VGI with current methods utilized to assess the quality of selected types of VGI. Through these findings, we introduce data mining as an additional approach for quality handling in VGI.

The Participatory City & Participatory Sensing – new paper

The Participatory City is a new book, edited by Yasminah Beebeejaun, which came out in March and will be launched on the 1st of June. The book gathers 19 chapters that explore the concept of participation in cities of all shapes and sizes. As Yasminah notes, concern about participation started in the 1960s and has never left urban studies – be it in anthropology, geography, urban planning, history or sociology.

The book is structured around short chapters of about eight pages, with colour images that illustrate the topic of each chapter. This makes the book very accessible – and suitable for reading while commuting in a city. The chapters take you on a tour around many places in the world: from London, Berlin and Bangalore, to Johannesburg, Mexico City and small towns in Pennsylvania and Lancashire (and a few other places). It also explores multiple scales – from participation in global negotiations about urban policy at the UN, to the way immigrants negotiate a small area in central Dublin, as well as discussions of master-planning in several places, including London and Mexico City.

The book demonstrates the multi-faceted aspects of participation: from political power, to gender, environmental justice, indigenous rights, skills, expertise and the use of scientific information for decision making. Each chapter provides a concrete example of the participatory issue that it covers, and by doing so makes the concept that is being addressed easy to understand.

Not surprisingly, many of the success stories in the book’s chapters are minor, temporary and contingent on a set of conditions that allowed them to happen. Together, the chapters demonstrate that participation, and the demand for representation and rights to the city, are not a futile effort, and that it is possible to change things.

With a price tag that is reasonable, though not cheap (€28, about £21), this is a highly recommended book that charts the aspects of urban participation in the early part of the 21st century, especially in demonstrating the challenges for meaningful participation in the face of technological developments, social and economic inequalities, and governance approaches that emphasise markets over other values.

My contribution to the book is titled ‘Making Participatory Sensing Meaningful’, in which I examine how the concept of participatory sensing mutated over the years to mean any form of crowdsourced sensing. I then look at our experience of participatory sensing at Heathrow to suggest the conditions that enable participatory sensing that matches the expectations of participatory processes, as well as the limitations and challenges. You can find the paper here, and the proper citation for it is:

Haklay, M., 2016, Making Participatory Sensing Meaningful, in Beebeejaun, Y. (Ed.) The Participatory City, Jovis, pp. 154-161.

 

ECSA2016 ThinkCamp Challenge: how can Overleaf support collaborative writing between academics and citizen scientists?

Overleaf, ThinkCamp Challenge, collaborative writing – lots of jargon for a title – so let’s start by explaining these terms, and then I’ll cover what happened (that’s the Abstract).

Background – what are Overleaf, ThinkCamp, and Challenge? (Introduction)

Overleaf is a scientific technology company that offers a collaborative environment for writing scientific papers. Overleaf is based on LaTeX – typesetting software that is popular in many disciplines: Computer Science, Physics, Mathematics, Statistics, Engineering, Economics, Linguistics and other fields. Importantly, Overleaf simplifies the scientific writing process by providing templates that scientific journals use, support for collaboration, the ability to add comments, and other tools that make it easy to write academic papers. LaTeX is complex to use, and Overleaf aims to facilitate the process of learning and using it in academic writing. Overleaf was a sponsor of the European Citizen Science Association conference ThinkCamp, so together with them we developed a challenge. Let’s explain what a ThinkCamp is before turning to the challenge.
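For readers who have not come across LaTeX, a minimal document looks roughly like this (the title, authors and citation key below are placeholders, and Overleaf’s journal templates add much more structure on top of this skeleton):

```latex
% A minimal LaTeX article skeleton, of the kind that journal
% templates elaborate on. Names and packages here are illustrative.
\documentclass{article}
\title{Collaborative Writing in Citizen Science}
\author{A. Scientist \and A. Citizen-Scientist}
\begin{document}
\maketitle
\begin{abstract}
A short summary of the paper goes here.
\end{abstract}
\section{Introduction}
Body text, with citations like \cite{example2016} resolved from a bibliography file.
\end{document}
```

The markup-based approach is exactly what makes LaTeX daunting for newcomers, which is why tools like Overleaf, with live preview and ready-made templates, lower the barrier.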

A ThinkCamp is a type of open event associated with the ‘unconference’ approach, which in our context means taking a part of an academic conference and opening it up to anyone who wants to step forward and explore a topic that came up during the conference, or that they have been working on for a while. In a ThinkCamp specifically, the activity is structured around discussion/exploration groups that are given space to write, draw and share ideas. The themes are called ‘Challenges’. Some of the themes are offered in advance by people who are coming to the conference, and there is usually space for people to suggest their ideas on the day. The day starts with a one-minute description of each challenge. Even with the planned challenges, those who proposed them can’t say much about them, and they are looking for the collective intelligence of those who are interested in the topic to explore it. In effect, a ThinkCamp is multiple brainstorming and idea generation events happening in the same space. People can move between groups, drop in and out, and contribute as little or as much as they want. A challenge can be physical or require programming, but can also be purely based on discussion. For the ECSA 2016 ThinkCamp, the conference organisers invited the local Berlin grassroots science & maker communities to collaborate with conference attendees on a number of Citizen Science Challenges.

What was the challenge? (Methodology)

For this specific challenge, we defined it as ‘The Overleaf Collaborative Writing Challenge – How can Overleaf support collaborative writing between academics and citizen scientists?‘. The focus here is on scientific papers that come out of a citizen science project. It is now becoming more common to include citizen scientists as co-authors of a paper. However, can they have more direct involvement in the process of writing, so that they are more involved in the scientific process? This was the ‘research question’ (more accurately, the idea) for the session.

We had a table and two sessions, each of about an hour and a half. In each session, about 6 or 8 people joined me, with one person staying for both sessions (Artemis Skarlatidou), and other people joining for parts or the whole discussion (among them Alison Parker, Avinoam Baruch, Berk Anbaroglu, Christian Nold, Denise Gameiro, Jon Van Oast, Julia Aletebuchner, Libby Helpburn, Lotta Tomasson, Sultan Kocaman, and surely several other people). The table had a poster, which included information about the challenge.

Although we looked briefly at the Overleaf system at the beginning of the discussion, it expanded very quickly to the core issues of collaboration between scientists and citizen scientists on writing papers together.

What did we talk about? (Results)

I attempted to facilitate the discussion while allowing people to raise their points and discuss them at length. As usual, some discussion points led to other discussion points. Over the three hours, we filled about 4 flip-chart pages, which are provided below (Figure 1).

Figure 1: Flip-charts of discussion points (click to enlarge)

So what did we discuss?

We refined our problem, and decided that our assumption is a situation where a scientist initiates the paper and leads the process of writing, but in collaboration with citizen scientists. Of course, papers that are led by citizen scientists are very important, but as with many prototyping activities, we wanted to start with a scenario that makes the problem less hard – at least one member of the team will know what is expected in terms of the publication process. There are many citizen scientists who already publish (e.g. in astronomy or biological recording – see Diary of a Citizen Scientist, which in its last pages describes the scientific outcome of the author’s work), but we’re talking about the general case. I still recall how daunting the first paper feels, and I also know how special it feels to have the first paper published (it’s one of the precious things about working with PhD students), so let’s assume that we’re talking about a first paper, with someone helping.

The topmost issue is to explain to citizen scientists why a peer-reviewed paper is a worthwhile effort – some websites and systems (e.g. Public Lab research notes) offer alternatives to academic publication – however, peer review can increase the value of the work in terms of policy impact, authority and other aspects. What are the exact reasons for people to join in? This is something that we need to understand more.

We started with the components of a paper: introduction, literature review, methodology, results… and the need to understand why they are there and how to understand them. The AAAS website helps in learning how to read an academic paper, and some tips are also available in other places – the fact that there is so much material online to teach people how to read scholarly articles tells you that it’s not a trivial task! For this, we can also research and identify material on library websites that teaches undergraduate students how to read and write scientific papers, and choose the best resources for citizen scientists. We need to indicate that some effort is required, but also chunk the learning material. We could have pop-ups and context-specific help for each section of the paper and, as Overleaf already does, have the sections with placeholders in place.

Once people have learned the aim of the project and the components of an academic paper, we need a way for people to show which part they would like to contribute to – maybe they want to comment on the methodology and not on other parts (so we might have a matrix linking people with parts of the paper). Further discussion led to the main insight of the discussion: we can split the roles that are needed in academic paper writing, and allow people to decide what they want to do. The roles include: authoring text, fact checking, reference checking, chart and graph design, map design, translation, checking for comprehension, proofreading, reviewing, checking the statistics for mistakes, and possibly more. We can think of a system to match skills and tasks – like PeerWith – but there are problems: first, we should do it inside the project, and be careful not to get into exploitation and undermining freelance editors, proofreaders, graphic designers, etc. There is, of course, a huge advantage in engaging people from within the project – they will do the work from a much more informed position. Consider projects with many thousands of volunteers (OpenStreetMap, Zooniverse, BOINC) – it is possible to link the multiple skills of participants to the many scientists who are involved in different projects and might want to work collaboratively on papers. Under these conditions, we will have major issues of trust on all sides, and of confidence among the citizen scientists that they can contribute. We need interface nudges and support to overcome these. We need to clearly communicate the aspects of each role, and the compensation & benefits (e.g. authorship, payment?).
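The skills-to-tasks matrix we discussed could take many forms; the toy sketch below shows one way to think about it, matching volunteers to paper-writing roles by their declared skills. The role names and skill labels here are illustrative assumptions, not a design for a real system:

```python
# Each role maps to the set of skills it requires (illustrative only).
ROLES = {
    "authoring text": {"writing"},
    "reference checking": {"attention to detail"},
    "chart and graph design": {"design", "data"},
    "translation": {"languages"},
    "proofreading": {"writing", "attention to detail"},
}

def match_roles(volunteer_skills, roles=ROLES):
    """Return the roles whose required skills are all covered by the volunteer."""
    skills = set(volunteer_skills)
    # Set inclusion (needed <= skills) checks that every required skill is present.
    return sorted(role for role, needed in roles.items() if needed <= skills)

print(match_roles({"writing", "attention to detail"}))
# ['authoring text', 'proofreading', 'reference checking']
```

A real system would of course need richer profiles, self-assessed confidence levels, and the trust and mentoring mechanisms discussed above, but the core matching step is simple.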

Back to the process of writing the different sections of the paper: we can give elements of training to contributors, according to how much they want to commit and how much time they’ve got. It probably makes sense to do micro-training with expanding levels of information.

We need to consider how we open up papers and material that sit behind a pay-wall to allow citizen scientists to be involved in a meaningful way.

We can also consider a gradual process, where there is a pre-writing stage in which we agree on the narrative, order, and images that will be used – we can use accessible language to sort out the list – e.g. ‘what is the problem?’ (for the introduction); ‘what do we know?’ (literature review); or ‘what have we done?’ (for the methodology). We can think of the paper as the final object, and have a structure to support its development through sub-objects.

The second major insight of the session was the introduction of a role for science communication experts, as facilitators between citizen scientists and scientists. The process will need a lot of communication, and we need to link to tools for managing chats (instant messaging), calls and maybe video. The volunteers need to be mentored and get feedback, so they can improve their skills.

We explored what each side brings to the equation. Citizen scientists bring skills and knowledge, and they gain experience in writing a paper and having a scientific publication with their name on it. Science communicators bring translation between scientists and citizen scientists, and the ability to explain why a paper is valuable, what its parts are, and why things happen the way they do; they gain by being employed in an active role in the process. Scientists benefit from having lots of help on their paper, and they need to act as mentors and cover the publication fees (assuming open access).

What next? (discussion and conclusions)

We realised that this is a complex process that will need plenty of effort to make it happen, but that it is possible to facilitate it with Web tools. There are plenty of open issues, and it might be an idea to develop a small research/public engagement project on the basis of these ideas. If you have ideas, comments and suggestions – please help us!

Being philosophical about crowdsourced geographic information

This is a post by Renée Sieber and myself, providing a bit of background on why we wrote the paper “The epistemology(s) of volunteered geographic information: a critique” – in addition to what I’ve written about it in this blog post.

Geo: Geography and Environment

By Renée Sieber (McGill University, Canada) and Muki Haklay (University College London, UK)

Our recent paper, The epistemology(s) of volunteered geographic information: a critique, started from a discussion we had about changes within the geographic information science (GIScience) research communities over the past two decades. We’ve both been working in the area of participatory geographic information systems (GIS) and critical studies of GIScience since the late 1990s, engaging with people from all walks of life around the information that is available in GIS. Many times we’d work together with people to create new geographic information and maps. Our goal was to help reflect their point of view of the world and their knowledge of local conditions, rather than always aiming for universal rules and principles. For example, the image below is from a discussion with the community in Hackney Wick, London, where individuals collaborated to…
