30 June, 2014
Today marks the publication of the report ‘Crowdsourced Geographic Information in Government’. The report is the result of a collaboration that started in the autumn of last year, when the World Bank Global Facility for Disaster Reduction and Recovery (GFDRR) asked us to carry out a study of the way crowdsourced geographic information is used by governments. The identification of barriers and success factors was especially needed, since GFDRR invests in projects across the world that use crowdsourced geographic information to help in disaster preparedness, through activities such as the Open Data for Resilience Initiative. By providing an overview of the factors that can help those who implement such projects, whether in governments or in the World Bank, we can increase the chances of successful implementations. To develop the ideas of the project, Robert Soden (GFDRR) and I ran a short workshop during State of the Map 2013 in Birmingham, which helped in shaping the details of the project plan, as well as in some preliminary information gathering. The project team included myself, Vyron Antoniou, Sofia Basiouka, and Robert Soden (GFDRR). Later on, Peter Mooney (NUIM) and Jamal Jokar (Heidelberg) volunteered to help us – demonstrating the value of research networks such as COST ENERGIC, which linked us.
The general methodology that we decided to use was to identify case studies from across the world, at different scales of government (national, regional, local) and in different domains (emergency, environmental monitoring, education). We expected that, with a large group of case studies, it would be possible to analyse common patterns and, hopefully, reach conclusions that could assist future projects. The analysis would also help to identify common barriers and challenges.
We paid special attention to information flows between the public and the government, looking at cases where the government absorbed information provided by the public, as well as cases where two-way communication took place.
Originally, we aimed to ‘crowdsource’ the collection of the case studies. We identified the information needed for the analysis by using a few case studies that we already knew about, and constructed the way in which they would be represented in the final report. After building these ‘seed’ case studies, we opened the questionnaire to other people who would submit case studies. Unfortunately, developing a case study proved to be too much effort, and we received only a small number of submissions through the website. However, throughout the study we continued to look out for cases and to gather the information needed to compile them. By the end of April 2014 we had identified about 35 cases, but found clear and useful information for only 29 (all of which are described in the report). The cases range from basic mapping to citizen science. The analysis workshop was especially interesting, as it was carried out over a long Skype call, with members of the team in Germany, Greece, the UK, Ireland and the US (Colorado) working together using the collaborative editing functionality of Google Docs. This approach proved successful and allowed us to complete the report.
29 March, 2014
Thursday marked the launch of The Conservation Volunteers (TCV) report on volunteering impact, which summarised a three-year project that explored motivations, changes in pro-environmental behaviour, wellbeing and community resilience. The report is worth a read, as it goes beyond the direct impact of TCV activities on the local environment and demonstrates how involvement in environmental volunteering can have multiple benefits. In a way, it adds ingredients to a more holistic understanding of ‘green volunteering’.
One of the interesting aspects of the report is the longitudinal analysis of volunteers’ motivations (copied here from the report). The comparison draws on 784 baseline surveys, 202 second surveys and 73 third surveys, carried out with volunteers while they were involved with TCV. The second survey was taken after 4 volunteering sessions, and the third after 10 sessions.
The results of the surveys are interesting in the context of online activities (e.g. citizen science or VGI) because they provide an example of an activity that happens offline – in green spaces such as local parks, community gardens and the like. Moreover, the people participating come from all walks of life: previous analysis of TCV data demonstrated that they recruit volunteers from across the socio-economic spectrum. So here is an activity that can be compared to online volunteering. This is valuable because, if the patterns in the TCV data turn out to be similar, we can understand online volunteering as part of general volunteering, rather than assuming that technology changes everything.
The graph above attracted my attention because of its similarities to Nama Budhathoki’s work on the motivations of OpenStreetMap volunteers. First, there is a difference between the reasons that influence people who join just one session and those who stay involved for a longer time. Secondly, social and personal development aspects become more important over time.
There is a clear need to continue exploring the data – especially because the numbers surveyed at each period are different – but this is an interesting finding, and there is surely more to explore. Some of it will be explored by Valentine Seymour in ExCiteS, who is working with TCV as part of her PhD.
It is also worth listening to the qualitative observations by volunteers, as expressed in the video that opened the event, which is provided below.
26 June, 2013
CHI 2013 and the GeoHCI workshop highlighted to me the importance of understanding media for maps. During CHI, the ‘Paper Tab’ demonstration used E Ink displays to explore interaction across multiple displays. I found the interactions unintuitive – they do not map very well to what you would expect to do with paper, and are therefore a source of confusion, especially if the displays are eventually mixed with paper on a desk. It is, nonetheless, an interesting exploration.
E Ink displays are very interesting in terms of their potential use for mapping. The image below shows one of the early prototypes of maps designed specifically for the Kindle – or, more accurately, for the E Ink technology at the heart of the Kindle. From the point of view of the usability of geographical information technologies, E Ink is especially interesting, for several reasons.
First, the resolution of the Kindle display is especially high (close to 170 pixels per inch) when the size of the screen is considered. The Apple Retina display provides even better resolution, and in colour, which makes maps on the iPad interesting too, as they are starting to approach the resolution that we are familiar with from paper maps (usually between 600 and 1200 dots per inch). Resolution matters especially when displaying maps because users need to see the context of the location that they are exploring. Think of the physiology of scanning a map: capturing more information in one screen helps in understanding the relationships between different features. Notice that when the resolution is high but the screen area is limited (for example on a smartphone), the constraint on the area that can be displayed is quite severe, and that reduces the usability of the map – scrolling requires you to keep in memory where you came from.
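To make the interaction between screen size and map scale concrete, here is a rough, illustrative calculation. The display sizes, aspect ratios and scales below are assumptions for the sake of the example, not measured specifications of any particular device:

```python
import math

def ground_extent_km(diagonal_in, aspect, scale_denominator):
    """Width and height (km) of the ground area shown on a display of the
    given diagonal (inches) and aspect ratio, at map scale 1:scale_denominator."""
    w_ratio, h_ratio = aspect
    diag_ratio = math.hypot(w_ratio, h_ratio)
    width_in = diagonal_in * w_ratio / diag_ratio
    height_in = diagonal_in * h_ratio / diag_ratio
    inch_to_km = 2.54 / 100 / 1000  # inches -> cm -> m -> km
    return (width_in * scale_denominator * inch_to_km,
            height_in * scale_denominator * inch_to_km)

# An assumed 6-inch 4:3 e-reader screen versus a larger paper sheet,
# both at an illustrative 1:10,000 scale
for name, diag in [("6-inch e-reader", 6.0), ("14-inch paper sheet", 14.0)]:
    w, h = ground_extent_km(diag, (4, 3), 10000)
    print(f"{name}: {w:.2f} km x {h:.2f} km of ground at 1:10,000")
```

The e-reader screen covers only a little over one square kilometre of ground at this scale, which illustrates why a small, high-resolution screen still forces the scrolling (and memory load) described above.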
Secondly, E Ink displays can easily be read even in direct sunlight, because they are reflective and do not use a backlight. This makes them very useful outdoors, where other display types do not perform well.
Thirdly, they use less energy, so a map can be left on display for a long time and used as a reference, whereas with most active displays (e.g. a smartphone) continuous use causes rapid battery drain.
On the downside, E Ink refresh rates are slow, so these displays are more suitable for static than for dynamic, interactive display.
During the summers of 2011 and 2012, several MSc students at UCL explored the potential of E Ink for mapping in detail. Nat Evatt (whose map is shown above) worked on the cartographic representation and showed that it is possible to create highly detailed and readable maps, even within the limitation of the 16 levels of grey that are available. A surprising aspect of his findings was that, while some maps are available in the Amazon Kindle store (the most likely place for e-book maps), they appear to have been simply converted to shades of grey without careful attention to the device, which reduces their usability.
The work of Bing Cui and Xiaoyan Yu (a collaboration between MSc students at UCLIC and GIScience) included a survey in the field (luckily on a fairly sunny day, near the Tower of London), which explored which scales work best in terms of navigation and readability. The work shows that maps at a scale of 1:4000 are effective – and considering that with E Ink the best user experience comes from minimising the number of refreshes, that could be a useful guideline for e-book map designers.
The previous post focused on citizen science as participatory science. This post discusses the meaning of this differentiation. It is the final part of the chapter that will appear in the book:
The typology of participation can be used across the range of citizen science activities, and one project should not be classified only in one category. For example, in volunteer computing projects most of the participants will be at the bottom level, while participants that become committed to the project might move to the second level and assist other volunteers when they encounter technical problems. Highly committed participants might move to a higher level and communicate with the scientist who coordinates the project to discuss the results of the analysis and suggest new research directions.
This typology exposes how citizen science integrates and challenges the way in which science discovers and produces knowledge. Questions about the way in which knowledge is produced and truths are discovered are part of the epistemology of science. As noted above, throughout the 20th century, as science became more specialised, it also became professionalised. While certain people were employed as scientists in government, industry and research institutes, the rest of the population – even if they graduated from a top university with top marks in a scientific discipline – were not regarded as scientists or as participants in the scientific endeavour unless they were employed professionally to do so. In rare cases, and following the tradition of ‘gentlemen/women scientists’, wealthy individuals could participate in this work by becoming an ‘honorary fellow’ or affiliated to a research institute that, inherently, brought them into the fold. This separation of ‘scientists’ and ‘public’ was justified by the need to access specialist equipment, knowledge and other privileges such as a well-stocked library. It might be the case that the need to maintain this separation is a third reason that practising scientists shy away from explicitly mentioning the contribution of citizen scientists to their work in addition to those identified by Silvertown (2009).
However, similarly to other knowledge professionals who operate in the public sphere, such as medical experts or journalists, scientists need to adjust to a new environment that is fostered by the Web. Recent changes in communication technologies, combined with the increased availability of open access information and the factors that were noted above, mean that processes of knowledge production and dissemination are opening up in many areas of social and cultural activities (Shirky 2008). Therefore, some of the elitist aspects of scientific practice are being challenged by citizen science, such as the notion that only dedicated, full-time researchers can produce scientific knowledge. For example, surely it should be professional scientists who can solve complex scientific problems such as long-standing protein-structure prediction of viruses. Yet, this exact problem was recently solved through a collaboration of scientists working with amateurs who were playing the computer game Foldit (Khatib et al. 2011). Another aspect of the elitist view of science can be witnessed in interaction between scientists and the public, where the assumption is of unidirectional ‘transfer of knowledge’ from the expert to lay people. Of course, as in the other areas mentioned above, it is a grave mistake to argue that experts are unnecessary and can be replaced by amateurs, as Keen (2007) eloquently argued. Nor is it suggested that, because of citizen science, the need for professionalised science will diminish, as, in citizen science projects, the participants accept the difference in knowledge and expertise of the scientists who are involved in these projects. At the same time, the scientists need to develop respect towards those who help them beyond the realisation that they provide free labour, which was noted above.
Given this tension, the participation hierarchy can be seen to be moving from a ‘business as usual’ scientific epistemology at the bottom, to a more egalitarian approach to scientific knowledge production at the top. The bottom level, where the participants are contributing resources without cognitive engagement, keeps the hierarchical division of scientists and the public. The public is volunteering its time or resources to help scientists while the scientists explain the work that is to be done but without expectation that any participant will contribute intellectually to the project. Arguably, even at this level, the scientists will be challenged by questions and suggestions from the participants and, if they do not respond to them in a sensitive manner, they will risk alienating participants. Intermediaries such as the IBM World Community Grid, where a dedicated team is in touch with scientists who want to run projects and a community of volunteered computing providers, are cases of ‘outsourcing’ the community management and thus allowing, to an extent, the maintenance of the separation of scientists and the public.
As we move up the ladder to a higher level of participation, the need for direct engagement between the scientist and the public increases. At the highest level, the participants are assumed to be on equal footing with the scientists in terms of scientific knowledge production. This requires a different epistemological understanding of the process, in which it is accepted that the production of scientific insights is open to any participant while maintaining scientific standards and practices such as systematic observations or rigorous statistical analysis to verify that the results are significant. The belief that, given suitable tools, many lay people are capable of such endeavours is challenging to some scientists who view their skills as unique. As the case of the computer game that helped in the discovery of new protein formations (Khatib et al. 2011) demonstrated, such collaboration can be fruitful even in cutting-edge areas of science. However, it can be expected that the more mundane and applied areas of science will lend themselves more easily to the fuller sense of collaborative science in which participants and scientists identify problems and develop solutions together. This is because the level of knowledge required in cutting-edge areas of science is so demanding.
Another aspect in which the ‘extreme’ level challenges scientific culture is that it requires scientists to become citizen scientists in the sense that Irwin (1995), Wilsdon, Wynne and Stilgoe (2005) and Stilgoe (2009) advocated (Notice Stilgoe’s title: Citizen Scientists). In this interpretation of the phrase, the emphasis is not on the citizen as a scientist, but on the scientist as a citizen. It requires the scientists to engage with the social and ethical aspects of their work at a very deep level. Stilgoe (2009, p.7) suggested that, in some cases, it will not be possible to draw the line between the professional scientific activities, the responsibilities towards society and a fuller consideration of how a scientific project integrates with wider ethical and societal concerns. However, as all these authors noted, this way of conceptualising and practising science is not widely accepted in the current culture of science.
Therefore, we can conclude that this form of participatory and collaborative science will be challenging in many areas of science. This will not be because of technical or intellectual difficulties, but mostly because of the cultural aspects. This might end up being the most important outcome of citizen science as a whole, as it might eventually catalyse the education of scientists to engage more fully with society.
27 November, 2011
This post continues the theme of the previous one, and is also based on the chapter that will appear next year in the book:
The post focuses on the participatory aspect of different Citizen Science modes:
Against the technical, social and cultural aspects of citizen science, we offer a framework that classifies the level of participation and engagement of participants in citizen science activity. While there is some similarity between Arnstein’s (1969) ‘ladder of participation’ and this framework, there is also a significant difference. The main thrust in creating a spectrum of participation is to highlight the power relationships that exist within social processes such as urban planning or in participatory GIS use in decision making (Sieber 2006). In citizen science, the relationship exists in the form of the gap between professional scientists and the wider public. This is especially true in environmental decision making where there are major gaps between the public’s and the scientists’ perceptions of each other (Irwin 1995).
In the case of citizen science, the relationships are more complex, as many of the participants respect and appreciate the knowledge of the professional scientists who are leading the project and can explain how a specific piece of work fits within the wider scientific body of work. At the same time, as volunteers build their own knowledge through engagement in the project, using the resources that are available on the Web and through the specific project to improve their own understanding, they are more likely to suggest questions and move up the ladder of participation. In some cases, the participants want to volunteer in a passive way, as with volunteered computing, engaging with and contributing to a scientific study without fully understanding it. An example of this is the many thousands of people who volunteered for the Climateprediction.net project, where their computers were used to run global climate models. Many would like to feel that they are engaged in one of the major scientific issues of the day, but would not necessarily want to fully understand the science behind it.
Therefore, unlike Arnstein’s ladder, there shouldn’t be a strong value judgement on the position that a specific project takes. At the same time, there are likely benefits in terms of participants’ engagement and involvement in the project to try to move to the highest level that is suitable for the specific project. Thus, we should see this framework as a typology that focuses on the level of participation.
At the most basic level, participation is limited to the provision of resources, and the cognitive engagement is minimal. Volunteered computing relies on many participants that are engaged at this level and, following Howe (2006), this can be termed ‘crowdsourcing’. In participatory sensing, a similar level of engagement would see participants asked to carry sensors around and bring them back to the experiment organiser. The advantage of this approach, from the perspective of scientific framing, is that, as long as the characteristics of the instrumentation are known (e.g. the accuracy of a GPS receiver), the experiment is controlled to some extent, and some assumptions about the quality of the information can be used. At the same time, running projects at the crowdsourcing level means that, despite the willingness of the participants to engage with a scientific project, their most valuable input – their cognitive ability – is wasted.
The second level is ‘distributed intelligence’ in which the cognitive ability of the participants is the resource that is being used. Galaxy Zoo and many of the ‘classic’ citizen science projects are working at this level. The participants are asked to take some basic training, and then collect data or carry out a simple interpretation activity. Usually, the training activity includes a test that provides the scientists with an indication of the quality of the work that the participant can carry out. With this type of engagement, there is a need to be aware of questions that volunteers will raise while working on the project and how to support their learning beyond the initial training.
The next level, which is especially relevant in ‘community science’ is a level of participation in which the problem definition is set by the participants and, in consultation with scientists and experts, a data collection method is devised. The participants are then engaged in data collection, but require the assistance of the experts in analysing and interpreting the results. This method is common in environmental justice cases, and goes towards Irwin’s (1995) call to have science that matches the needs of citizens. However, participatory science can occur in other types of projects and activities – especially when considering the volunteers who become experts in the data collection and analysis through their engagement. In such cases, the participants can suggest new research questions that can be explored with the data they have collected. The participants are not involved in detailed analysis of the results of their effort – perhaps because of the level of knowledge that is required to infer scientific conclusions from the data.
Finally, collaborative science is a completely integrated activity, as it is in parts of astronomy where professional and non-professional scientists are involved in deciding on which scientific problems to work and the nature of the data collection so it is valid and answers the needs of scientific protocols while matching the motivations and interests of the participants. The participants can choose their level of engagement and can be potentially involved in the analysis and publication or utilisation of results. This form of citizen science can be termed ‘extreme citizen science’ and requires the scientists to act as facilitators, in addition to their role as experts. This mode of science also opens the possibility of citizen science without professional scientists, in which the whole process is carried out by the participants to achieve a specific goal.
This typology of participation can be used across the range of citizen science activities, and one project should not be classified only in one category. For example, in volunteer computing projects most of the participants will be at the bottom level, while participants that become committed to the project might move to the second level and assist other volunteers when they encounter technical problems. Highly committed participants might move to a higher level and communicate with the scientist who coordinates the project to discuss the results of the analysis and suggest new research directions.
6 June, 2011
It is always nice to announce good news. Back in February, together with Richard Treves at the University of Southampton, I submitted an application to Google’s Faculty Research Award program for a grant to investigate Google Earth Tours in education. We were successful in getting a grant worth $86,883. The project builds on my expertise in usability studies of geospatial technologies, including the use of eye tracking and other usability engineering techniques for GIS, and on Richard’s expertise in Google Earth tours and education, as well as his longstanding interest in usability issues.
In this joint UCL/Southampton project, UCL will be the lead partner, and we will appoint a junior researcher for a year to develop and run experiments that will help us understand the effectiveness of Google Earth Tours in geographical learning; we also aim to come up with guidelines for their use. If you are interested, let me know.
Our main contact at Google for the project is Ed Parsons. We were also helped by Tina Ornduff and Sean Askay who acted as referees for the proposal.
The core question that we want to address is “How can Google Earth Tours be used to create an effective learning experience?”
So what do we plan to do? Previous research on Google Earth Tours (GETs) has shown them to be an effective visualization technique for teaching geographical concepts, yet their use in this way is essentially passive. Active learning is a successful educational approach in which student activity is combined with instruction to enhance learning. In the proposal we suggest that there is great educational value in combining the rich visualization of GETs with student activities. Evaluating the effectiveness of this combination is the purpose of the project, and we plan to do this by creating educational materials that consist of GETs and activities, and testing them against other versions of the materials using student tests, eye tracking and questionnaires as data-gathering techniques.
We believe that by improving the techniques by which spatial data is visualized we are improving spatial information access overall.
A nice aspect of getting the project funded is that it fits well with a project led by Claire Ellul and Kate Jones and funded by JISC. The G3 project, or “Bridging the Gaps between the GeoWeb and GIS”, touches on similar aspects, and we will surely share knowledge with them.
For more background on Richard Treves, see his blog (where the same post is published!)
In March 2008, I started comparing OpenStreetMap in England to Ordnance Survey Meridian 2, as a way to evaluate the completeness of OpenStreetMap coverage. The rationale behind the comparison is that Meridian 2 represents a generalised geographic dataset that is widely used in national-scale spatial analysis. At the time the study started, it was not clear that OpenStreetMap volunteers could create highly detailed maps, as can now be seen on the ‘Best of OpenStreetMap’ site. Yet even today, Meridian 2 provides a minimum threshold for OpenStreetMap when the question of completeness is asked.
So far, I have carried out six evaluations, comparing the two datasets in March 2008, March 2009, October 2009, March 2010, September 2010 and March 2011. While the work on the statistical analysis and verification of the results continues, Oliver O’Brien helped me take the results of the analysis for Britain and turn them into an interactive online map that helps in exploring the progression of the coverage over the various time periods.
Notice that the visualisation shows the total length of all road objects in OpenStreetMap, so it does not discriminate between roads, footpaths and other types of object. This is the most basic level of completeness evaluation, and it is fairly coarse.
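The length-based comparison can be sketched in a few lines of code. This is a minimal, pure-Python illustration of the idea only: the segment coordinates and the 1 km grid-cell size are made up for the example, and the real study worked on the full national datasets with a proper GIS workflow (including splitting segments at cell boundaries, which this sketch crudely approximates by assigning each segment to the cell of its midpoint):

```python
import math
from collections import defaultdict

def length_per_cell(segments, cell_size):
    """Sum segment lengths per grid cell. Each segment is ((x1, y1), (x2, y2))
    in projected metres; a segment is assigned to the cell of its midpoint,
    which is a crude approximation for short segments."""
    totals = defaultdict(float)
    for (x1, y1), (x2, y2) in segments:
        cell = (int(((x1 + x2) / 2) // cell_size),
                int(((y1 + y2) / 2) // cell_size))
        totals[cell] += math.hypot(x2 - x1, y2 - y1)
    return totals

# Hypothetical road segments (metres) standing in for the two datasets
osm = [((0, 0), (900, 0)), ((1200, 100), (1200, 800))]
reference = [((0, 0), (1000, 0)), ((1200, 0), (1200, 900)), ((2100, 0), (2900, 0))]

ref_cells = length_per_cell(reference, 1000)   # 1 km cells
osm_cells = length_per_cell(osm, 1000)
for cell, ref_len in sorted(ref_cells.items()):
    ratio = osm_cells.get(cell, 0.0) / ref_len
    print(f"cell {cell}: OSM/reference length ratio = {ratio:.2f}")
```

A ratio near (or above) 1 suggests the cell is at least as complete as the generalised reference, while a low ratio flags an area where coverage is likely missing; aggregating such ratios over time is what the interactive map visualises.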
The application allows you to browse the results and to zoom to a specific location, and, as Oliver integrated the Ordnance Survey Street View layer, it lets you see what information is missing from OpenStreetMap.
Finally, note that for the periods before September 2010, the coverage is for England only.
Some details on the development of the map are available on Oliver’s blog.