27 August, 2013
An interview by Prof Anthony Costello of UCL Institute of Global Health, discussing the growth in citizen science today.
8 July, 2013
The term ‘Citizen Science’ is clearly gaining more recognition and use. It now gets mentioned in radio and television broadcasts and social media channels, as well as at conferences and workshops. Some of the clearer signs of the growing attention include discussion of citizen science in policy-oriented conferences such as UNESCO’s World Summit on the Information Society (WSIS+10) review meeting discussion papers (see page ), the Eye on Earth users conference (see the talks here), and the launch of the European Citizen Science Association at the recent EU Green Week conference.
Another aspect of the expanding world of citizen science is the emerging questions, from those who are involved in such projects or study them, about the efficacy of the term. As is very common with general terms, some reflections on the accuracy of the term are coming to the fore – so Rick Bonney and colleagues suggest using ‘Public Participation in Scientific Research‘ (significantly, Bonney was the first to use ‘Citizen Science’, in 1995); Francois Grey coined ‘Citizen Cyberscience’ to describe projects that are dependent on the Internet; recently Chris Lintott discussed some doubts about the term in the context of Zooniverse; and Katherine Mathieson asks if Citizen Science is just a passing fad. In our own group, there are also questions about the correct terminology, with Cindy Regalado’s suggestion to focus on ‘Publicly Initiated Scientific Research (PIScR)‘, and discussion of the meaning of ‘Extreme Citizen Science‘.
One way to explore what is going on is to consider the evolution of the ‘hype’ around citizen science through ‘Gartner’s Hype Cycle‘, which can be seen as a way to think about how technologies are adopted in a world of rapid communication and inflated expectations. Leaving aside Gartner’s own hype, the story that the model tries to tell is that once a new approach (technology) emerges – because it has become possible, or because someone has reconfigured existing elements and claimed that it’s a new thing (e.g. Web 2.0) – it goes through rapid growth in attention and publicity. This continues until it reaches the ‘peak of inflated expectations’, where the expectations of the technology are unrealistic (e.g. that it will revolutionise the way we use our fridges). This must be followed by a slump, as more and more failures come to light and the promises are not fulfilled. At this stage, the disillusionment is so deep that even the useful aspects of the technology are forgotten. However, if it passes this stage, then after the realisation of what is possible, the technology is integrated into everyday life and practices and is used productively.
So does the hype cycle apply to citizen science?
If we look at Gartner’s cycle from last September, crowdsourcing is near the ‘peak of inflated expectations’, and some descriptions of citizen science as scientific crowdsourcing clearly match the same mindset.
There is growing evidence of academic researchers entering citizen science out of opportunism, without paying attention to the commitment and work that is required to carry out such projects. With some, it seems as if they decided that they can join in because someone around them knows how to make an app for smartphones or a website that will work like Galaxy Zoo (failing to notice all the social aspects that Arfon Smith highlights in his talks). When you look around at the emerging projects, you can start guessing which will succeed or fail by looking at the expertise and approach of the people behind them.
Another cause for concern is the expectation, which I noticed at the more policy-oriented events, that citizen science can solve all sorts of issues – from raising awareness to behaviour change with limited professional involvement – or that it will reduce the resources needed for activities such as environmental monitoring, without an understanding that significant sustained investment is required: a community coordinator, technical support and other aspects are needed here just as much. This concern is heightened by statements that promote citizen science as a mechanism to reduce the costs of research, creating a source of free labour, etc.
On the other hand, it can be argued that the hype cycle doesn’t apply to citizen science because of history. Citizen science has existed for many years, as Caren Cooper describes in her blog posts. Therefore, conceptualising it as a new technology is wrong, as there are already mechanisms, practices and institutions to support it.
In addition, and unlike the technologies on Gartner’s chart, the academic projects within which citizen science happens benefit from access to what is sometimes termed patient capital, without expectations of quick returns on investment. Even with the increasing expectations of research funding bodies for explanations of how the research will lead to an impact on wider society, they do not expect the impact to be immediate (5-10 years is usually fine), and funding comes in chunks that cover 3-5 years, which provides the breathing space to overcome the ‘trough of disillusionment’ that is likely to occur within the technology sector regarding crowdsourcing.
And yet, I would guess that citizen science will suffer some examples of disillusionment from badly designed and executed projects. To get these projects right you need a combination of domain knowledge in the specific scientific discipline, science communication to tell the story in an accessible way, technical ability to build mobile and web infrastructure, understanding of user interaction and user experience to build engaging interfaces, and community management ability to nurture and develop your communities – and we can add further skills to the list (e.g. if you want gamification elements, you need experts in games, not to do it amateurishly). In short, it needs to be taken seriously, with careful consideration and design. This is not a call for gatekeepers, more a realisation that the successful projects and groups are stating similar things.
Which brings us back to the issue of the definition of citizen science and terminology. I have been following terminology arguments in my own discipline for over 20 years. I have seen people arguing about whether the data storage format for GIS should be raster or vector (answer: it doesn’t matter), arguing whether GIS is a tool or a science, or, unhappy with Geographic Information Science, resolutely calling it geoinformation, geoinformatics, etc. Even in the minute sub-discipline that deals with participation and computerised maps there are arguments about Public Participation GIS (PPGIS) versus Participatory GIS (PGIS). Most recently, we have been debating the right term for mass contribution of geographic information: volunteered geographic information (VGI), crowdsourced geographic information or user-generated geographic information.
It’s not that terminology and precision in definition are not useful – on the contrary. However, I’ve noticed that in most cases the more inclusive and, importantly, vague, broad-church definition won the day. Broad terminologies, especially when they are evocative (such as citizen science), are especially powerful. They convey a good message and are therefore useful. As long as we don’t try to force a canonical definition, and allow people to decide what they include in the term and to express clearly why what they are doing falls within citizen science, it should be fine. Some broad principles are useful and will help all those committed to working in this area to sail through the hype cycle safely.
29 April, 2013
CHI (Computer-Human Interaction) is the premier conference in the calendar of Human-Computer Interaction (HCI) studies. While the first paper dealing with geographic technologies at this conference was presented in 1991 (it was about User Interfaces for Geographic Information Systems, by Andrew Frank, and presented at a special interest group meeting), geography did not receive much attention from HCI researchers in general, though the growth of location-based technologies has made it a growing area in recent years. As I noted elsewhere, HCI did receive interest within GIScience over the years, with more attention paid to spatial cognition and fundamental aspects of knowledge representation, but unfortunately less to interaction design and exploration of user studies.
This sort of loose coupling between GIScience and HCI is also reflected in personal histories. I was aware of CHI and its importance for over 15 years, but I never managed to attend one – until now. When Brent Hecht invited me to join a CHI workshop proposal on Geographic HCI (GeoHCI), I jumped at the opportunity. The process of working together with HCI researchers on coordinating and curating a workshop led to mutual learning about the priorities and working practices of the two different research communities – in the tone and style of position papers, reviews and ways of organising a meeting. The response to the call for position papers was overwhelming and demonstrated the interest from both the geography and HCI communities in finding opportunities to converse and share ideas.
The workshop itself was excellent, with coverage of many topics that are being actively researched in Geography and GIScience – the papers and presentations covered crowdsourced/volunteered geographic information, use of geographic information in crisis situations, participatory mapping and citizen science, concepts of place and space, personal memories, and of course many interactions with maps.
My own talk focused on Geography and HCI, exploring geography’s point of view when approaching computing environments to represent and communicate geographical knowledge. I used human geography, and particularly the concept of space/place, to highlight the contribution that geography can make – for example, in understanding the multiplicity of interpretations of place, using both David Harvey’s critique of spatial science’s understanding of place and Doreen Massey’s relational-geography description of places as ‘stories so far’ in ‘For Space‘ as clear examples of different conceptualisations of what places are.
One particular point that I highlighted follows the first chapter of Introducing Human Geographies, in which a differentiation is made between Geography as ‘writing the Earth’ – looking at human-nature relationships in the wider sense – versus ‘writing the World’ – looking at society-space relationships. For the HCI audience I described it by rephrasing Don Norman’s differentiation, as ‘Geography in the world‘, which is about the way people interact with the physical environment around them, versus ‘Geography in the head‘, which is the cultural, personal and social understanding of the place where they are and how they want to shape their personal activities, memories and interactions. Of course, Geography in the world is easier to represent in computers than Geography in the head, and my personal view is that too much emphasis is placed on the first type.
Another part of the presentation focused on the importance of Cartography for geographical technologies, and why issues of map scale, media and task context are very important when designing geographic applications – for example, the value of paper as a medium, and the understanding that maps are more about context than about ‘you are here’.
My position paper is available here. My presentation is provided below.
In my view, the workshop was very valuable in opening new conversations. I now have a better understanding of the context in which HCI researchers at Google, Yahoo! and Pitney Bowes Business Insight consider geography and what problems they face. The issue of place and the need to explore platial information came up several times, and we also experienced the multi-sensory engagement with place that is difficult to capture in digital forms. Most importantly, this was an experience in understanding the language and ways of expression that can help in bridging the two communities.
26 November, 2012
I’ve been using 37Signals’ Basecamp for over 5 years now. I’m involved in many projects with people from multiple departments and organisations. In the first large project that I ran, in 2007 – Mapping Change for Sustainable Communities – Basecamp was recommended to us by Nick Black (just before he co-founded CloudMade), so we started using it. Since then, it has been used for 33 projects and activities, ranging from coordinating the writing of an academic paper to running a large multidisciplinary group. In some projects it was used a lot; in others it didn’t work as well. As with any other information system, its use depends on the needs and habits of different users and not only on the tool itself.
It is generally an excellent tool for organising messages, information and documents about projects and activities, and acts well as a repository of project-related information – but project management software is not what this post is about.
I’m sure that, in the scheme of things, we are a fairly small user of Basecamp. Therefore, I was somewhat surprised to receive a card from 37Signals.
I’m a fairly passive user of Basecamp as far as 37Signals are concerned – I’m pleased with what it does, but I have not contacted them with requests or anything like that. So getting this hand-written card was a very nice touch from a company that could very easily have written the code to send me an email with the same information – but that wouldn’t have the same emotional impact.
As Sherry Turkle notes in her recent book, human contact is valuable and appreciated. This is an important and often undervalued aspect of communication and interaction – the analogue channels are there and can be very effective. This blog post, praising 37Signals for making this small effort, is an example of why it is worth doing.
19 December, 2011
As noted in the previous post, which focused on the linkage between GIS and Environmental Information Systems, the Eye on Earth Summit took place in Abu Dhabi on 12 to 15 December 2011, and focused on ‘the crucial importance of environmental and societal information and networking to decision-making’. Throughout the summit, two aspects of public involvement were discussed extensively. On the one hand, Principle 10 of the Rio Declaration from 1992, which calls for public access to information, participation in decision making and access to justice, was frequently mentioned, including the need to continue and extend its implementation across the world. On the other, the growing importance of citizen science and crowdsourced environmental information was highlighted as a way to engage the wider public in environmental issues and contribute to the monitoring and understanding of the environment. They were not presented or discussed as mutually exclusive approaches to public involvement in environmental decision making, and yet they do not fit together without a snag – so it is worth minding the gap.
As I have noted in several talks over the past 3 years (e.g. at the Oxford Transport Research Unit, from which the slides above were taken), it is now possible to define 3 eras of public access to environmental information. During the first era, between the first UN environmental conference, held in Stockholm in 1972, where the UN Environment Programme (UNEP) was established, and the Earth Summit in Rio in 1992, environmental information was collected by experts, to be analysed by experts, and to be accessed by experts. The public was expected to accept the authoritative conclusions of the experts. In the second period, between the 1990s and the mid-2000s and the emergence of Web 2.0, the focus turned to the provision of access to the information that was collected and processed by experts. This top-down delivery of information is at the centre of Principle 10:
‘Environmental issues are best handled with participation of all concerned citizens, at the relevant level. At the national level, each individual shall have appropriate access to information concerning the environment that is held by public authorities, including information on hazardous materials and activities in their communities, and the opportunity to participate in decision-making processes. States shall facilitate and encourage public awareness and participation by making information widely available. Effective access to judicial and administrative proceedings, including redress and remedy, shall be provided’
Notice the two emphasised sections, which focus on passive provision of information to the public – there is no expectation that the public will be involved in creating it.
With the growth of the interactive web (or Web 2.0) and the increased awareness of citizen or community science, new modes of data collection started to emerge in which the information is produced by the public. Air pollution monitoring, noise samples or traffic surveys – all have been carried out independently by communities using cheap available sensors, or in collaboration with scientists and experts. This is a third era of access to environmental information: produced by experts and the public, to be used by both.
Thus, we can identify 3 eras of access to environmental information: authoritative (1970s-1990s), top-down (1990s-2005) and collaborative (2005 onward).
The collaborative era presents new challenges. As in previous periods, the information needs to be of the required standard, reliable and valid, which can be challenging for citizen science information. It also needs to be analysed, and many communities don’t have access to the required expertise (see my presentation from the Open Knowledge Foundation conference in 2008, which deals with this issue). Merging information from citizen science studies with official information is challenging. These and other issues must be explored, and – as shown above – the language of Principle 10 might need revision to account for this new era of environmental information.
In March 2008, I started comparing OpenStreetMap in England to the Ordnance Survey Meridian 2 dataset, as a way to evaluate the completeness of OpenStreetMap coverage. The rationale behind the comparison is that Meridian 2 represents a generalised geographic dataset that is widely used in national-scale spatial analysis. At the time the study started, it was not clear that OpenStreetMap volunteers could create highly detailed maps, as can now be seen on the ‘Best of OpenStreetMap‘ site. Yet even today, Meridian 2 provides a minimum threshold for OpenStreetMap when the question of completeness is asked.
So far, I have carried out 6 evaluations, comparing the two datasets in March 2008, March 2009, October 2009, March 2010, September 2010 and March 2011. While the work on the statistical analysis and verification of the results continues, Oliver O’Brien helped me take the results of the analysis for Britain and turn them into an interactive online map that can help in exploring the progression of the coverage over the various time periods.
Notice that the visualisation shows the total length of all road objects in OpenStreetMap, so it does not discriminate between roads, footpaths and other types of objects. This is the most basic level of completeness evaluation, and it is fairly coarse.
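For readers who wonder what this kind of length-based completeness evaluation involves, here is a minimal sketch (not the actual analysis code, which worked on the full national datasets): sum the length of road objects per grid tile for each dataset, then compare the totals. The function names, the midpoint-based tile assignment and the sample data are all illustrative assumptions; real road geometries would come from the two datasets in a projected coordinate system such as the British National Grid.

```python
import math
from collections import defaultdict

def segment_length(p1, p2):
    # Planar distance; adequate for projected coordinates (e.g. British National Grid).
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def length_per_tile(roads, tile_size=1000):
    """Total road length per tile. Each road is a list of (x, y) vertices in metres."""
    totals = defaultdict(float)
    for road in roads:
        for p1, p2 in zip(road, road[1:]):
            # Assign each segment to the tile of its midpoint - a simplification;
            # a rigorous version would clip segments at tile boundaries.
            mx, my = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
            tile = (int(mx // tile_size), int(my // tile_size))
            totals[tile] += segment_length(p1, p2)
    return totals

def completeness_ratio(osm_roads, meridian_roads, tile_size=1000):
    """OSM length divided by Meridian length per tile; values above 1 suggest
    OSM holds more linear features there (recall OSM also includes footpaths)."""
    osm_t = length_per_tile(osm_roads, tile_size)
    mer_t = length_per_tile(meridian_roads, tile_size)
    return {t: osm_t.get(t, 0.0) / mer_t[t] for t in mer_t if mer_t[t] > 0}
```

Because OSM counts footpaths and other object types while Meridian 2 contains only generalised roads, a ratio above 1 does not by itself prove better road coverage – which is exactly why this is described above as a coarse, basic level of evaluation.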
The application allows you to browse the results and zoom to a specific location, and, as Oliver integrated the Ordnance Survey Street View layer, it also allows you to see what information is missing from OpenStreetMap.
Finally, note that for the periods before September 2010, the coverage is for England only.
Some details on the development of the map are available on Oliver’s blog.
Following successful funding for the European Union FP7 EveryAware and the EPSRC Extreme Citizen Science activities, the department of Civil, Environmental and Geomatic Engineering at UCL is inviting applications for a postdoctoral position and 3 PhD studentships. Please note that these positions are open to students from any EU country.
These positions are in the ‘Extreme Citizen Science’ (ExCiteS) research group. The group’s activities focus on the theory, methodologies, techniques and tools that are needed to allow any community to start its own bottom-up citizen science activity, regardless of the level of literacy of the users. Importantly, Citizen Science is understood in the widest sense, including perceptions and views – so participatory mapping and participatory geographic information are integral parts of the activities.
The research themes that the group explores include Citizen Science and Citizen Cyberscience; Community and participatory mapping/GIS; Volunteered Geographic Information (OpenStreetMap, Green Mapping, Participatory GeoWeb); Usability of geographic information and geographic information technology, especially with non-expert users; GeoWeb and mobile GeoWeb technologies that facilitate Extreme Citizen Science; and identifying scientific models and visualisations that are suitable for Citizen Science.
Research Associate in Extreme Citizen Science – a 2-year, postdoctoral research associate position commencing 1 May 2011.
The research associate will lead the development of an ‘Intelligent Map’ that allows non-literate users to upload data securely, and the system should allow the users to visualise their information together with data from other users. Permissions need to be developed in accordance with cultural sensitivities. As uploaded data from multiple users sharing the same system accumulates over time, repeating patterns will begin to emerge that indicate particular environmental trends.
The role will also include some general project-management duties, guiding the PhD students who are working on the project. Travel to Cameroon to the forest communities that we are working with is necessary.
Complete details about this post and application procedure are available on the UCL jobs website.
PhD Studentship – understanding citizen scientists’ motivations, incentives and group organisation – a 3.5-year fully funded studentship. We are looking for applicants with a good honours degree (1st Class or 2:1 minimum), and an MA or MSc in anthropology, geography, sociology, psychology or related discipline. The applicant needs to be familiar with quantitative and qualitative research methods, and be able to work with a team that will include programmers and human-computer interaction experts who will design systems to be used in citizen science projects. Travel will be required as part of the project. A willingness to live for short periods in remote forest locations in simple lodgings, eating local food, will be necessary. French language skills are desirable.
The research itself will focus on motivations, incentives and understanding of the needs and wishes of participants in citizen science projects. We will specifically focus on engagement of non-literate people in such projects and need to understand how the process – from data collection to analysis – can be made meaningful and useful for their everyday life. The research will involve using quantitative methods to analyse large-scale patterns of engagement in existing projects, as well as ethnographic and qualitative study of participants. The project will include working with non-literate forest communities in Cameroon as well as marginalised communities in London.
Complete details about this post and application procedure are available on the UCL jobs website.
PhD Studentship in geographic visualisation for non-literate citizen scientists – a 3.5-year fully funded studentship. The applicant should possess a good honours degree (1st Class or 2:1 minimum), and an MSc in computer science, human-computer interaction, electronic engineering or related discipline. In addition, they need to be familiar with geographic information and software development, and be able to work with a team that will include anthropologists and human-computer interaction experts who will design systems to be used in citizen science projects. Travel will be required as part of the project. A willingness to live for short periods in remote forest locations in simple lodgings, eating local food, will be necessary. French language skills are desirable.
Complete details about this post and application procedure are available on the UCL jobs website.
In addition, we offer a PhD Studentship on How interaction design and mobile mapping influences participation in Citizen Science, which is part of the EveryAware project and is also open to any EU citizen.
10 November, 2010
These are the slides from the presentation that I gave to the BCS Geospatial SG.
The talk abstract is:
Here is some useful party trivia: as a form of human communication, maps pre-date text by thousands of years – some early spatial depictions are 25,000 years old, whereas writing emerged only about 5,000 years ago. When it comes to computing, the reverse is true: the first wide use of computing dates from the early 1950s, whereas the first effort to create a GIS only started in 1966. There are good reasons for this, chief among them the complexity of handling geographical information in digital computers. An adverse impact of this challenge is that for many years geospatial technology developers focused on functionality and not on the interaction with end-users. The result of this focus is that while word processors and spreadsheets became popular in the early 1980s, only with the emergence of ‘Web Mapping 2.0’ in 2005 did GIS and geospatial technologies become more popular, albeit far from universally usable.
The talk covered interaction and user aspects of geospatial technologies, pointing to issues that permeate the usability and usefulness of geographical information itself (e.g. why is the ESRI shapefile a popular format despite its drawbacks?), the programming of geospatial technology (e.g. why did OGC WMS not spark the mashup revolution, while the Google Maps API did?) and the interaction of end users with desktop and web-based GIS.
And the talk happened on the same day that the excellent Third Workshop on the Usability of Geographic Information was running at the Ordnance Survey.
21 October, 2010
One issue that remained open in the studies on the relevance of Linus’ Law to OpenStreetMap was that the previous studies looked at areas with more than 5 contributors, and the link between the number of users and the quality was not conclusive – although the quality was above 70% at this number of contributors and beyond.
Now, as part of writing up the GISRUK 2010 paper for journal publication, we had an opportunity to fill this gap, to some extent. Vyron Antoniou has developed a method to evaluate positional accuracy on a larger scale than we have managed so far. The methodology uses the geometric position of Ordnance Survey (OS) Meridian 2 road intersections to evaluate positional accuracy. Although Meridian 2 is created by applying a 20-metre generalisation filter to the centrelines of the OS Roads Database, this generalisation process does not affect the positional accuracy of node points, so their accuracy is the best available. An algorithm was developed for the identification of corresponding nodes between Meridian 2 and OSM, and the average positional error was calculated for each square kilometre in England. With this data, which provides an estimated positional accuracy for an area of over 43,000 square kilometres, it was possible to estimate the contribution that additional users make to the quality of the data.
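The core of such a node-matching evaluation can be sketched in a few lines. This is not Vyron’s actual algorithm – the matching threshold, the brute-force nearest-neighbour search and the tile bookkeeping below are my illustrative assumptions – but it shows the principle: for each reference intersection, find the nearest OSM node, accept the pair if it is plausibly the same junction, and average the offsets per square-kilometre tile.

```python
import math
from collections import defaultdict

def mean_positional_error(ref_nodes, osm_nodes, max_dist=20.0, tile_size=1000):
    """For each reference (e.g. Meridian 2) intersection, find the nearest OSM
    node; if it lies within max_dist metres, treat the pair as the same junction
    and record the offset. Returns the mean error per tile (1 km squares by
    default). Coordinates are assumed to be in metres in a projected system."""
    errors = defaultdict(list)
    for rx, ry in ref_nodes:
        # Brute-force nearest neighbour; a real implementation over millions of
        # nodes would use a spatial index (grid or k-d tree) instead.
        best = None
        for ox, oy in osm_nodes:
            d = math.hypot(ox - rx, oy - ry)
            if best is None or d < best:
                best = d
        if best is not None and best <= max_dist:
            tile = (int(rx // tile_size), int(ry // tile_size))
            errors[tile].append(best)
    return {t: sum(v) / len(v) for t, v in errors.items()}
```

The acceptance threshold matters: too tight and genuinely matching but poorly digitised junctions are dropped, biasing the error downwards; too loose and unrelated junctions are paired, which is one reason the algorithm and datasets require the further testing mentioned below.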
As can be seen in the chart below, positional accuracy remains fairly level when the number of users is 13 or more – as we have seen in previous studies. Up to 13 users, on the other hand, each additional contributor considerably improves the dataset’s quality. In grey you can see the maximum and minimum values, so the shaded area represents the possible range of positional accuracy results. Interestingly, as the number of users increases, positional accuracy seems to settle close to 5m, which is somewhat expected when considering the sources of the information – GPS receivers and aerial imagery. However, this is an aspect of the analysis that clearly requires further testing of the algorithm and the datasets.
It is encouraging to see that the results of the analysis are significantly correlated. For the full dataset the correlation is weak (-0.143) but significant at the 0.01 level (2-tailed). However, for the average values for each number of contributors (the blue line in the graph), the correlation is strong (-0.844) and significant at the 0.01 level (2-tailed).
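The correlations reported above are standard Pearson coefficients between the number of contributors and the positional error. For readers who want to reproduce this kind of check on their own data, a minimal sketch (the sample data in the usage note is invented for illustration):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length sequences.
    A value near -1 means error falls as the number of contributors rises."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)
```

For example, `pearson_r([1, 2, 3, 4], [8, 6, 4, 2])` returns -1.0 for a perfectly linear negative relationship. Significance testing (the 2-tailed p-values quoted above) needs the sample size as well as r, and in practice would be done with a statistics package rather than by hand.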
An important caveat is that the number of tiles with more than 10 contributors is fairly small, so that is another aspect that requires further exploration. Moreover, spatial data quality is not just positional accuracy, but also attribute accuracy, completeness, currency and other properties. We can expect that these will exhibit similar behaviour to positional accuracy, but this requires further studies – as always.
However, as this is a large-scale analysis that adds to the evidence from the small-scale studies, it is becoming highly likely that Linus’ Law does affect the quality of OSM data, and possibly of other so-called Volunteered Geographic Information (VGI) sources, and that there is a diminishing gain in positional accuracy once the number of contributors passes about 10 or so.