Citizen Science & Scientific Crowdsourcing – week 3 – Participation inequality

One of the aspects that fascinates me about citizen science and crowdsourcing is the nature of participation, and in particular participation inequality. As I noted last week, when you look at large-scale systems, you expect to see it in them (Google Local Guides, for example, exhibits a 95:5:0.005 ratio).

I knew that this phenomenon had been observed many times in Massive Open Online Courses (MOOCs), so I expected it to happen in this course. I'm particularly interested in the dynamic aspect of participation inequality: for example, at the start of the “introduction to citizen science and scientific crowdsourcing” course, every single person is at exactly the same level of participation – 0. However, within three weeks, we are starting to see the pattern emerge. Here are some of the numbers:

At this point in time, there are 497 people who went to the trouble of accessing UCLeXtend and creating a profile. They are a small group among the people who saw the blog post (about 1,100) or the tweet about it (about 600 likes, retweets or clicks on the link). A further 400 people filled in the online form that I set up before the course opened and stated their interest in it.

The course is structured as a set of lectures, each broken into segments of about 10 minutes. Although the annotated slides are available, and it is likely that many people prefer them to listening to a PowerPoint video (it's better in class!), the rate of viewing of the videos gives an indication of engagement.

Here are our viewing statistics for now:


We can start seeing how the sub-tasks (viewing a series of videos) are already creating the inequality – lots of people watch part of the first video and either give up (maybe switching to the notes) or leave it for another time. By part 4 of the first lecture, we are already down to very few views (the “Lecture 3 Part 2” video is the one that I integrated in the previous blog post).

What is interesting to see is how fast participation inequality emerges within the online course: there is now a core of about 5-10 people (about 1% to 2%) who are following the course at the same rate as the 9 students in the face-to-face class. I expect people to also follow the course over a longer period of time, so I wouldn't read too much into the pattern; I will wait until the end of the course, and a bit after it, to do a full analysis.
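As a back-of-envelope illustration, the funnel from awareness to core participation can be computed from the approximate numbers above. This is only a sketch: the stage labels and the choice of 10 as the upper estimate of the core group are my own assumptions, not data exported from the course platform.

```python
# Rough engagement funnel, using the approximate figures from this post.
# Stage names and numbers are illustrative assumptions, not platform data.
funnel = [
    ("Saw the blog post (approx.)", 1100),
    ("Registered on UCLeXtend", 497),
    ("Core participants (upper estimate)", 10),
]

top = funnel[0][1]  # top of the funnel: people who saw the blog post
for stage, n in funnel:
    print(f"{stage}: {n} ({100 * n / top:.1f}% of the top of the funnel)")

# Share of registered users who form the core group – roughly the
# 1-2% figure mentioned above.
core_share = 100 * 10 / 497
print(f"Core share of registered users: {core_share:.1f}%")
```

Even with these crude numbers, the drop from registration to sustained participation (about 2% of registered users) is consistent with the participation-inequality ratios seen in much larger systems.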

When I was considering setting up the course as a hybrid online/offline one, I was expecting this, since the amount of time required to follow the course is nearly 4-5 hours a week – reasonable for an MSc student during a course, but tough for a distance learner (I have huge appreciation for these 10 people who are following!).




Citizen Science & Scientific Crowdsourcing – week 2 – Google Local Guides

The first week of the “Introduction to Citizen Science and Scientific Crowdsourcing” course was dedicated to an introduction to the field of citizen science, using its history, examples and typologies to demonstrate the breadth of the field. The second week was dedicated to the second half of the course name – crowdsourcing in general, and its use in scientific contexts. In the lecture, after a brief introduction to the concepts, I wanted to use a concrete example that shows maturity in the implementation of commercial crowdsourcing. I also wanted something relevant to citizen science from which many parallels can be drawn, so that lessons can be learned. This gave me the opportunity to use Google Local Guides as a demonstration.

My interest in Google Local Guides (GLG) comes from two of its core aspects. As I have pointed out in OpenStreetMap studies, I'm increasingly annoyed by claims that OpenStreetMap is the largest Volunteered Geographical Information (VGI) project in the world. It's not. I guessed that GLG was, and by digging into it, I'm fairly confident that with 50,000,000 contributors (of whom most are, as usual, one-timers), Google has created the largest VGI project around. The contributions fall within my “distributed intelligence” category and are voluntary. The second aspect that makes the project fascinating for me is linked to a talk from 2007 at one of the early OSM conferences about the usability barriers that OSM (or VGI more generally) needs to cross to reach a wide group of contributors – basically, about user-centred design. The design of GLG is outstanding and shows how much has been learned by the Google Maps team, and more generally by Google, about crowdsourcing. I had very little information from Google about the project (Ed Parsons gave me several helpful comments on the final slide set), but by experiencing it as a participant who can notice the design decisions and their implementation, it is hugely impressive to see how VGI is being implemented professionally.

As a demonstration project, it provides examples of recruitment, nudging participants to contribute, intrinsic and extrinsic motivation, participation inequality, micro-tasks and longer tasks, incentives, basic principles of crowdsourcing such as the “open call” that supports flexibility, location- and context-aware alerts, and much more. Below is the segment from the lecture that focuses on Google Local Guides; I hope to provide a more detailed analysis in a future post.

The rest of the lecture is available on UCLeXtend.

Defining principles for mobile apps and platforms development in citizen science

Core concepts of apps, platforms and portals for citizen science

In December 2016, ECSA and the Natural History Museum in Berlin organised a workshop on analysing apps, platforms, and portals for citizen science projects. The report from that workshop, with additions from a second workshop held in April 2017, has now evolved into an open peer-review paper in RIO Journal.

The workshops and the paper came to life thanks to the effort of Soledad Luna and Ulrike Sturm from the Berlin Museum.

RIO is worth noticing: it is “The Research Ideas and Outcomes (RIO) journal”, and what it is trying to offer is a way to publish outputs of the whole research cycle – from project proposals to data, methods, workflows, software, project reports and the rest. In our case, the workshop report is now open for comments and suggestions. I'll be interested to see if there will be any…

The abstract reads:

Mobile apps and web-based platforms are increasingly used in citizen science projects. While extensive research has been done in multiple areas of studies, from Human-Computer Interaction to public engagement in science, we are not aware of a collection of recommendations specific for citizen science that provides support and advice for planning, design and data management of mobile apps and platforms that will assist learning from best practice and successful implementations. In two workshops, citizen science practitioners with experience in mobile application and web-platform development and implementation came together to analyse, discuss and define recommendations for the initiators of technology based citizen science projects. Many of the recommendations produced during the two workshops are applicable to non-mobile citizen science project. Therefore, we propose to closely connect the results presented here with ECSA’s Ten Principles of Citizen Science.

and the paper can be accessed here. 

Crowdsourced: navigation & location-based services

Once you switch the smartphone off from email and social media networks, you notice better when and how you're being crowdsourced. By this, I mean that the use of applications to contribute data is sometimes clearer as the phone becomes less of a communication technology and more of an information technology (though most of the time it is information and communication mixed together).

During last year's summer break, while switching off, I was able to notice three types of crowdsourcing that were happening as I was mostly using my phone for tourism. The set of applications that I used during the period was mostly Google Maps, Waze, and Sygic for navigation (Sygic is an offline satnav app that uses local storage, especially useful as you move away from signal – as in the case of travelling to Yosemite above). I also used TripAdvisor to find restaurants or plan visits, and on a tour of Hollywood, we downloaded a guided tour by “PocketGuide Audio Travel Guide”. There were also the usual searches on Google to find the locations of swimming pools, train schedules, boat tours and all the other things that you do when travelling.

The first type of being crowdsourced was one that I knew was there but couldn't notice: every action that I carried out with the phone – searches, visiting websites and everything else I did as long as the phone was on. I know that the traces are there, but I have no idea who is collecting them – I can be quite certain that the phone company and Google were getting information, but I can't tell for sure – so that's hidden crowdsourcing.

The second type is being passively crowdsourced – I know that it happened, and I can sort of guess what information is being recorded, but no action was needed on my part to make it happen. Checking the map on the way back to the airport at the end of the journey showed how geolocated images and information are being put on the map. Google Maps assumed that I visited known locations, mostly commercial ones, along the way. It was especially funny to stay in a suburban place that happened to be the registered address of a business: every time we went there, Google insisted that we were visiting the business (which doesn't physically exist in the place). At least with this passive crowdsourcing, I am knowingly contributing information – and since I'm benefiting from the navigation guidance of Google Maps as I drive along, it is a known transaction (regardless of the power relationship and fairness).

The third and final type was active crowdsourcing. This was when I was aware of what I was contributing, to which system, and more or less how. When I provided an image to Google Local Guides, I knew that it would be shared (I am, though, hugely surprised by how many times it has been viewed – but I'll write about Local Guides in a different post). I was also actively contributing to TripAdvisor about a place near Venice Beach. A certain surprise also came from Waze, which, on a day of experiencing Los Angeles' famous traffic, gave me the message that I was “one of the top contributors on this route” after 3 reports. Of course, I can't tell if this message is real, but if 3 reports are enough to make you a top contributor, the number of reporting participants must be very low.

A few other observations: it was interesting to see how embedded the assumption is that you will be online all the time – Google Maps suggested downloading a route when it had information that part of the journey would not be covered by mobile signal, but the application didn't behave well with offline data (Sygic did). Both Waze and Google Maps behaved very erratically when I passed a blocked slip road and didn't follow their navigation guidance: for quite a distance, they continued to suggest that I turn around and use the blocked road.

The other aspect that was very apparent is the built-in methodological individualism of all these apps. Even though I was only with my partner, we found the PocketGuide Audio Travel Guide awkward to use – we wanted to experience the tour (which is interesting) together, but the app is just difficult to use when you try to go with other people and discuss things together. This is partly how phones are conceived, but touring is not only a solitary experience…


New paper: Usability and interaction dimensions of participatory noise and ecological monitoring

The EveryAware book provided an opportunity to communicate the results of research that Dr Charlene Jennett led, together with two Masters students, Joanne (Jo) Summerfield and Eleonora (Nora) Cognetti, with me as an additional advisor. The research was linked to EveryAware, since Nora explored the user experience of WideNoise, the citizen science noise-monitoring app that was used in the project. There is also a link to the Citizen Cyberlab project, since Jo was looking at the field experience in ecological observation, and in particular during a BioBlitz. The chapter provides a Human-Computer Interaction (HCI) perspective on the way technology is used in citizen science projects. You can download the paper here, and the proper citation for the chapter is:

Jennett, C., Cognetti, E., Summerfield, J. and Haklay, M. 2017. Usability and interaction dimensions of participatory noise and ecological monitoring. In Loreto, V., Haklay, M., Hotho, A., Servedio, V.C.P, Stumme, G., Theunis, J., Tria, F. (eds.) Participatory Sensing, Opinions and Collective Awareness. Springer. pp.201-212.

The official version of the paper is on Springer site here.

New paper: Digital engagement methods for earthquake and fire preparedness

At the beginning of the Challenging Risk project, the project team decided that before we went out and developed participatory tools to engage communities in earthquake and fire preparedness, we should check what was already available.

To achieve that, we commissioned Enrica Verrucci to help us with the review, and later on other members of the team updated the information – including Patrick Rickles, David Rush, and Gretchen Fagg. We then thought about developing a paper from the review, and an interesting interdisciplinary discussion ensued, with different potential emphases and structures suggested. It took us several iterations until we agreed that the best way to communicate the purpose of the review was by linking the use of digital technologies to behaviour change, with the guidance of Gabriela Perez-Fuentes and Helene Joffe, who are the psychology experts on the team.

The resulting paper has just been published in Natural Hazards. It is the first paper from the project that covers all the groups involved in the project. Here is the abstract:

“Natural or human-made hazards may occur at any time. Although one might assume that individuals plan in advance for such potentially damaging events, the existing literature indicates that most communities remain inadequately prepared. In the past, research in this area has focused on identifying the most effective ways to communicate risk and elicit preparedness by means of public hazard education campaigns and risk communication programmes. Today, web- and mobile-based technologies are offering new and far-reaching means to inform communities on how to prepare for or cope with extreme events, thus significantly contributing to community preparedness. Nonetheless, their practical efficacy in encouraging proactive hazard preparedness behaviours is not yet proven. Building on behaviour change interventions in the health field and looking in particular at earthquakes and fire hazards, the challenging RISK team has reviewed the currently active websites, Web, and mobile applications that provide information about earthquake and home fire preparedness. The review investigates the type of information provided, the modality of delivery, and the presence of behaviour change techniques in their design. The study proves that most of the digital resources focus on a single hazard and fail to provide context-sensitive information that targets specific groups of users. Furthermore, behaviour change techniques are rarely implemented in the design of these applications and their efficacy is rarely systematically evaluated. Recommendations for improving the design of Web- and mobile-based technologies are made so as to increase their effectiveness and uptake for a multi-hazard approach to earthquake and home fire preparedness.”

You can find the paper (which is Open Access) on the journal's website.

You can find the list of websites and apps here.

UCL Institute for Global Prosperity Talk: Extreme Citizen Science – Current Developments

The slides below are from a talk that I gave today at the UCL Institute for Global Prosperity.

The abstract for the talk is:

With a growing emphasis on civil society-led change in diverse disciplines, from International Development to Town Planning, there is an increasing demand to understand how institutions might work with the public effectively and fairly.

Extreme Citizen Science is a situated, bottom-up practice that takes into account local needs, practices and culture and works with broad networks of people to design and build new devices and knowledge creation processes that can transform the world.

In this talk, I discussed the work of the UCL Extreme Citizen Science group within the wider context of developments in the field of citizen science. I covered the work that ExCiteS has already done, what it is currently developing, and its plans for the future.