Thursday marked the launch of The Conservation Volunteers (TCV) report on volunteering impact, which summarised a three-year project that explored motivations, changes in pro-environmental behaviour, wellbeing and community resilience. The report is worth a read, as it goes beyond the direct impact of TCV activities on the local environment and demonstrates how involvement in environmental volunteering can have multiple benefits. In a way, it adds ingredients to a more holistic understanding of ‘green volunteering’.
One of the interesting aspects of the report is the longitudinal analysis of volunteers’ motivations (copied here from the report). The comparison is based on 784 baseline surveys, 202 second surveys and 73 third surveys, which were completed by volunteers while they were involved with TCV. The second survey was taken after 4 volunteering sessions, and the third after 10 sessions.

The results of the surveys are interesting in the context of online activities (e.g. citizen science or VGI) because they provide an example of an activity that happens offline – in green spaces such as local parks, community gardens and the like. Moreover, the people who participate come from all walks of life, as previous analysis of TCV data demonstrated that they recruit volunteers across the socio-economic spectrum. So here is an activity that can be compared to online volunteering. This is valuable because, if the patterns in the TCV data are similar, we can understand online volunteering as part of general volunteering and not assume that technology changes everything.

The graph above attracted my attention because of its similarities to Nama Budhathoki’s work on the motivation of OpenStreetMap volunteers. First, there is a difference between the reasons that influence people who join just one session and those who are involved for a longer time. Secondly, social and personal development aspects become more important over time.

There is a clear need to explore the data further – especially because the numbers surveyed at each period are different – but this is an interesting finding, and there is surely more to explore. Some of it will be explored by Valentine Seymour in ExCiteS, who is working with TCV as part of her PhD.

It is also worth listening to the qualitative observations by volunteers, as expressed in the video that opened the event, which is provided below.

TCV Volunteer Impacts from The Conservation Volunteers on Vimeo.

Some ideas take a long time to mature into a form that you are finally happy to share. This is an example of such an idea.

I became interested in the philosophy of technology during my PhD studies, and have continued to explore it since. During this journey, I found a lot of inspiration in, and links to, Andrew Feenberg’s work – for example, in my paper about neogeography and the delusion of democratisation. The links are mostly due to Feenberg’s attention to ‘hacking’, or appropriating technical systems for functions and activities outside what their designers or producers intended.

In addition to Feenberg, I became interested in the work of Albert Borgmann, because he explicitly analysed GIS, dedicating a whole chapter to it in Holding on to Reality. In particular, I was intrigued by his formulation of the Device Paradigm and the notion of Focal Things and Practices, which are linked to information systems in Holding on to Reality, where three forms of information are presented – natural information, cultural information and technological information. It took me some time to see that these five concepts are linked, with technological information being a demonstration of the trouble with the Device Paradigm, while natural and cultural information are part of focal things and practices (more on these concepts below).

I first used Borgmann’s analysis as part of a ‘Conversations Across the Divide’ session in 2005, which focused on complexity and emergence. In a joint contribution with David O’Sullivan about ‘complexity science and Geography: understanding the limits of narratives’, I used Borgmann’s classification of information. Later on, we tried to turn it into a paper, but in the end David wrote a much better analysis of complexity and geography, while the attempt to focus mostly on the information concepts was not fruitful.

The next opportunity to revisit Borgmann came in 2011, in an AAG pre-conference workshop on VGI, where I explored the links between the Device Paradigm, focal practices and VGI. By 2013, when I was invited to the ‘Thinking and Doing Digital Mapping’ workshop organised by the ‘Charting the Digital’ project, I was able to articulate the link between all five elements of Borgmann’s approach in my position paper. This week, I came back to the topic in a seminar in the Department of Geography at the University of Leicester. Finally, I feel that I can link them in a coherent way.

So what is it all about?

Within the areas of VGI and citizen science, there is a tension between the different goals of the projects and the identification of practices in terms of what they mean for the participants – are we using people as a ‘platform for sensors’, or are we dealing with fuller engagement? Borgmann’s ideas can help in understanding the difference. He argues that modern technologies tend to adopt the myopic ‘Device Paradigm’, in which a specific interpretation of efficiency, productivity and a reductionist view of human actions take precedence over ‘Focal Things and Practices’ that bring people together in a way meaningful to human life. In Holding On to Reality (1999), he differentiates three types of information: natural, cultural and technological. Natural information is defined as information about reality: for example, scientific information on the movement of the earth or the functioning of a cell. This is information that was created in order to understand the functioning of reality. Cultural information is information that is used to shape reality, such as engineering design plans. Technological information is information as reality, and leads to decreased human engagement with fundamental aspects of reality. Significantly, these categories do not relate to the common usage of the words ‘natural’, ‘cultural’ and ‘technological’; rather, they describe the changing relationship between information and reality at different stages of socio-technical development.

When we explore geographical information in general, we can see that some of it is technological information – for example, SatNavs and the way they communicate with the people who use them, or virtual globes that claim to be a representation of reality, with ‘current clouds’ and all. The paper map, on the other hand, provides a conduit to the experience of hiking and walking through the landscape, and is part of cultural information.

Things are especially interesting with VGI and citizen science, where information and practices need to be analysed in a more nuanced way. In some cases, the practices can become focal to the participants – for example in iSpot, where the experience of identifying a species in the field is also linked to the experiences of the amateurs and experts who discuss the classification. It is an activity that brings people together. On the other hand, crowdsourcing projects that grab information from SatNav devices demonstrate the Device Paradigm, with the potential of reducing a meaningful holiday journey to ‘getting from A to B in the shortest time’. The slides below go through the ideas and then explore the implications for GIS, VGI and citizen science.

Now for the next stage – turning this into a paper…

Following the two previous assertions, namely that:

‘You can be supported by a huge crowd for a very short time, or by few for a long time, but you can’t have a huge crowd all of the time (unless data collection is passive)’ (original post here)

And

‘All information sources are heterogeneous, but some are more honest about it than others’  (original post here)

The third assertion is about patterns of participation. It is one that I’ve mentioned before, and in some ways it is a corollary of the two assertions above.

‘When looking at crowdsourced information, always keep participation inequality in mind’ 

Because crowdsourced information – whether Volunteered Geographic Information or citizen science – is created through a socio-technical process, it is all too easy to forget the social side, especially when you are looking at the information without the metadata of who collected it and when. So when working with OpenStreetMap data, or viewing the distribution of bird species in eBird (below), even though the data source is expected to be heterogeneous, each observation is treated as similar to the others and assumed to be produced in a similar way.

Distribution of House Sparrow

Yet the data is not only heterogeneous in terms of consistency and coverage; it is also highly heterogeneous in terms of contribution. One of the most persistent findings from studies of various systems – for example Wikipedia, OpenStreetMap and even volunteer computing – is that there is a very distinctive heterogeneity in contribution. The phenomenon was termed ‘participation inequality’ by Jakob Nielsen in 2006, and it is summarised succinctly in the diagram below (from the Visual Liberation blog) – a very small number of contributors add most of the content, while most of the people involved in using the information will not contribute at all. Even when examining only those who actually contribute, in some projects over 70% contribute only once, with a tiny minority contributing most of the information.

Participation Inequality

Therefore, when looking at sources of information that were created through such a process, it is critical to remember the nature of contribution. This has far-reaching implications for quality, as it is dependent on the expertise of the heavy contributors, on their spatial and temporal engagement, and even on their social interactions and practices (e.g. abrasive behaviour towards other participants).
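The shape of this inequality can be illustrated with a small sketch. This is a toy power-law model of contribution counts (an assumption for illustration, not real OpenStreetMap or Wikipedia data), in which the contributor at rank r makes roughly 1/r as many contributions as the top contributor:

```python
# Illustrative sketch of participation inequality, assuming a Zipf-like
# (1/rank) distribution of contribution counts. Not real project data.

def contribution_shares(n_contributors=1000, top_fraction=0.1):
    # Contributor at rank r contributes ~1/r of the top contributor's volume.
    counts = [1.0 / rank for rank in range(1, n_contributors + 1)]
    total = sum(counts)
    top_n = int(n_contributors * top_fraction)
    # Share of all content produced by the top `top_fraction` of contributors.
    return sum(counts[:top_n]) / total

share = contribution_shares()
print(f"Top 10% of contributors produce {share:.0%} of the content")
```

Even this mild model puts roughly two-thirds of the content in the hands of the top tenth of contributors; empirical distributions in the projects mentioned above are typically far more skewed.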

Because of these factors, it is critical to remember the impact and implications of participation inequality for the analysis of the information. There will be some analyses on which it has less impact, and some where it has a major one. In either case, it needs to be taken into account.

Following the last post, which focused on an assertion about crowdsourced geographic information and citizen science, I continue with another observation. As noted in the previous post, these can be treated as ‘laws’, as they seem to emerge as common patterns from multiple projects in different areas of activity – from citizen science to crowdsourced geographic information. The first assertion was about the relationship between the number of volunteers who can participate in an activity and the amount of time and effort that they are expected to contribute.

This time, I look at one aspect of data quality: consistency and coverage. Here the following assertion applies:

‘All information sources are heterogeneous, but some are more honest about it than others’

What I mean by that is the ongoing argument about authoritative and crowdsourced information sources (Flanagin and Metzger 2008 frequently come up in this context), which was also at the root of the Wikipedia vs. Britannica debate, the mistrust in citizen science observations, and the constant questioning of whether they can do ‘real research’.

There are many aspects to these concerns, so the assertion deals with comprehensiveness and consistency, which are used as a reason to dismiss crowdsourced information when comparing it to authoritative data. However, on closer look we can see that all these information sources are fundamentally heterogeneous. Despite all the effort to define precise standards for data collection in authoritative data, heterogeneity creeps in because of budget and time limitations, decisions about what is worth collecting and how, and the clash between reality and the specifications. Here are two examples:

Take one of the Ordnance Survey Open Data sources – the maps present themselves as consistent and covering the whole country in an orderly way. However, dig into the details of the mapping, and you discover that the Ordnance Survey uses different standards for mapping urban, rural and remote areas. Yet the derived products that are generalised and manipulated in various ways, such as Meridian or Vector Map District, do not provide a clear indication of which parts originated from which scale – so the heterogeneity of the source disappears in the final product.

The census is also heterogeneous, and it is a good case of specifications vs. reality. Not everyone fills in the forms, and even with the best efforts of enumerators it is impossible to collect all the data; therefore statistical analysis and manipulation of the results are required to produce a well-reasoned assessment of the population. This is expected, even though it is not always understood.

Therefore, even the best information sources that we accept as authoritative are heterogeneous; as I’ve stated, they are just not completely honest about it. The ONS doesn’t release the full original set of data before all the manipulations, nor completely disclose all the assumptions that went into reaching the final values. The Ordnance Survey doesn’t tag every line with metadata about the date of collection and the scale.

Somewhat counter-intuitively, exactly because crowdsourced information is expected to be inconsistent, we approach it as such and ask questions about its fitness for use. So in that way it is more honest about the inherent heterogeneity.

Importantly, the assertion should not be taken as dismissive of authoritative sources, or as ignoring that the heterogeneity within crowdsourced information sources is likely to be much higher than in authoritative ones. Of course, all the investment in making things consistent and the effort to get universal coverage is worth it, and it would be foolish and counterproductive to suggest that such sources of information can be replaced, as has been suggested for the census, or that it’s not worth investing in the Ordnance Survey to update the authoritative datasets.

Moreover, when commercial interests meet crowdsourced geographic information or citizen science, the ‘honesty’ disappears. For example, even though we know that Google Map Maker is now used in many parts of the world (see the figure), even in cases where access to vector data is provided by Google, you cannot find out who contributed, when and where. It is also presented as an authoritative source of information.

Despite the risk of misinterpretation, the assertion can be useful as a reminder that the differences between authoritative and crowdsourced information are not as big as they may seem.

The Spatial Data Infrastructure Magazine (SDIMag.com) is a relatively new e-zine dedicated to the development of spatial data infrastructures around the world. Roger Longhorn, the editor of the magazine, conducted an email interview with me, which is now published.

In the interview, we cover the problematic terminology used to describe a wide range of activities; the need to consider the social and technical aspects, as well as the goals of the participants; and, of course, the role of the information that is produced through crowdsourcing, citizen science and VGI within spatial data infrastructures.

The full interview can be found here.

 

At the 2012 Annual Meeting of the Association of American Geographers, I presented in the session ‘Information Geographies: Online Power, Representation and Voice’, which was organised by Mark Graham (Oxford Internet Institute) and Matthew Zook (University of Kentucky). For an early morning session on a Saturday, it was well attended – and the papers in the session were very interesting.

My presentation, titled ‘Nobody wants to do council estates – digital divide, spatial justice and outliers’, was the result of thinking about the nature of social information that is available on the Web, which I partially articulated in a response to a post on the GeoIQ blog. When Mark and Matt asked for an abstract, I provided the following:

The understanding of the world through digital representation (digiplace) and VGI is frequently carried out with the assumption that these are valid, comprehensive and useful representations of the world. A common practice throughout the literature on these issues is to mention the digital divide and, while accepting it as a social phenomenon, either ignore it for the rest of the analysis or expect that it will solve itself over time through technological diffusion. The almost deterministic belief in technological diffusion absolves the analyst from fully confronting the political implication of the divide.

However, what VGI and social media analysis reveals is that the digital divide is part of deep and growing social inequalities in Western societies. Worse still, digiplace amplifies and strengthens them.

In digiplace the wealthy, powerful, educated and mostly male elite is amplified through multiple digital representations. Moreover, the frequent decision of algorithm designers to highlight and emphasise those who submit more media, and the level of ‘digital cacophony’ that more active contributors create, mean that a very small minority – arguably outliers in every analysis of a normal distribution of human activities – are super-empowered. Therefore, digiplace power relationships are arguably more polarised than outside cyberspace due to the lack of social checks and balances. This makes the acceptance of the disproportionate amount of information that these outliers produce as reality highly questionable.

The following notes might help in making sense of the slides.

Slide 2 takes us back 405 years to Mantua, Italy, where Claudio Monteverdi had just written one of the very first operas – L’Orfeo – as an after-dinner entertainment piece for Duke Vincenzo Gonzaga. Leaving aside the wonderful music – my personal recommendation is Emmanuelle Haïm’s performance, and I used the opening toccata in my presentation – there is a serious point about history. For a large portion of human history, and as recently as 400 years ago, we knew only about the rich and the powerful. We ignored everyone else because they ‘were not important’.

Slide 3 highlights two points about modern statistics. First, it is a tool to gain an understanding of the nature of society as a whole. Second, when we look at the main body of society, it is within the first 2 standard deviations of a normalised distribution. The Index of Deprivation of the UK (Slide 4) is an example of this type of analysis. Even though it was designed to direct resources to the most needy, it analyses the whole population (and, by the way, is normalised).
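The ‘2 standard deviations’ point can be made concrete with a line of maths: for a normal distribution, the fraction of the population within k standard deviations of the mean is erf(k/√2). A quick check in plain Python (a worked example for the statistics above, not part of the slides):

```python
import math

def fraction_within(k):
    # Fraction of a normal distribution lying within k standard
    # deviations of the mean: erf(k / sqrt(2)).
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"within {k} SD: {fraction_within(k):.1%}")
# within 1 SD: 68.3%
# within 2 SD: 95.4%
# within 3 SD: 99.7%
```

So ‘the main body of society’ within 2 standard deviations covers about 95% of the population – which is exactly why the remaining few per cent are treated as outliers later in the argument.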

Slide 5 points out that on the Web, and in social media in particular, the focus is on ‘long tail’ distributions. My main issue is not with the pattern but with what it means in terms of analysing the information. This is where participation inequality (Slide 6) matters, and the point of Nielsen’s analysis is that outlets such as Wikipedia (and, as we will see, OpenStreetMap) suffer from even worse inequality than other communication media. Nielsen’s recent analysis in his newsletter (Slide 7) demonstrates how this is playing out on Facebook (FB). Notice the comment ‘these people have no life’ or, as Sherry Turkle put it, they got life on the screen.

Slides 8 and 9 demonstrate that participation inequality is strongly represented in OpenStreetMap, and we can expect it to play out in Foursquare, Google Map Maker, Waze and other GeoWeb social applications. Slide 10 focuses on other characteristics of the people who contribute content: men, highly educated, aged 20-40. Similar characteristics have been shown in other social media and the GeoWeb by Monica Stephens & Antonella Rondinone, and by many other researchers.

Slides 11-14 note observed spatial biases in OpenStreetMap – concentration on highly populated places, a gap between rich and poor places (using the Index of Deprivation from Slide 4), and differences between rural and urban areas. These differences were also observed in other sources of Volunteered Geographic Information (VGI), such as photo-sharing sites (in Vyron Antoniou’s PhD).

Taken together, participation inequality, demographic bias and spatial bias point to a very skewed group that produces most of the content that we see on the GeoWeb. Look back at Slide 3, and it is a good guess that this minority falls beyond the first 2 standard deviations of the centre. They are outliers – not representative of anything other than themselves. Of course, given the large number of people online and the ability of outliers to ‘shout’ louder than anyone else, and to converse among themselves, it is tempting to look at them as a population worth listening to. But it is, similarly to the opening point, a look at the rich and powerful (or super enthusiastic) and not the mainstream.

Strangely, when such a small group controls the economy, we see it as a political issue (Slide 15, which was produced by Mother Jones as part of the response to the Occupy movement). We should be just as concerned when it happens with digital content and sets the agenda of what we see and how we understand the world.

Now to the implication of this analysis, and the use of the GeoWeb and social media to understand society. Slide 17 provides the link to the GeoIQ post that argued that these outliers are worth listening to. They might be, but the issue is what you are trying to find out by looking at the data:

The first option is to ask questions about the resulting data, such as ‘can it be used to update national datasets?’ – accepting the biases in the data collection as they are, and exploring whether anything useful comes out of the outcomes (Slides 19-21, from the work of Vyron Antoniou and Thomas Koukoletsos). This should be fine as long as the researchers don’t try to state something general about the way society works from the data. Even so, researchers ought to analyse and point to biases and shortcomings (Slides 11-14 do exactly that).

The second option is to start claiming that we can learn something about social activities (Slides 22-23, from the work of Eric Fischer and Daniel Gayo-Avello, as well as Sean Gorman in the GeoIQ post). In this case, it is wrong to read too much into the data – as Gayo-Avello noted – because the outliers’ bias renders the analysis unrepresentative of society. Notice, for example, the huge gap between the social media noise during the Egyptian revolution and the outcomes of the elections, or the political differences that Gayo-Avello noted.

The third option is to find data that is representative (Slide 24, from the MIT Senseable City Lab), which looks at the ‘digital breadcrumbs’ that we leave behind on a large scale – phone calls, SMS, travel cards, etc. This data is representative, but provides observations without context. There is no qualitative or contextual information that comes with it and, because of the biases that are noted above, it is wrong to integrate it with the digital cacophony of the outliers. It is most likely to lead to erroneous conclusions.

Therefore, the understanding of the concept of digiplace (Slide 25) – the ordering of digital representation through software algorithms and GeoWeb portals – is, in fact, double filtered. The provision of content by outliers means that the algorithms will tend to amplify their point of view and biases.  Not only that, digital inequality, which is happening on top of social and economic inequality, means that more and more of our views of the world are being shaped by this tiny minority.

When we add to the mix aspects of digital inequality (some people can only afford a pay-as-you-go feature phone, while a tiny minority consumes a lot of bandwidth over multiple devices), we should stop talking about the ‘digital divide’ as something that will close over time. This is some sort of imaginary trickle-down theory that does not withstand the test of reality. If anything, the divide grows as the ‘haves’ use multiple devices to shape digiplace in their own image.

This is actually one of the core problems that differentiates two approaches to engagement in data collection. There is the laissez-faire approach to engaging society in collecting information about the world (Slides 27-28, showing OpenStreetMap mapping parties), which does not confront the biases; opposite it are participatory approaches (Slides 29-30, showing participatory mapping exercises from the work of Mapping for Change), where the effort goes into making the activity inclusive.

This point about the biases, the inequality and the influence on the way we understand the world is important to repeat, as it is too often ignored by researchers who deal with these data.

As part of the Volunteered Geographic Information (VGI) workshop that was held in Seattle in April 2011, Daniel Sui, Sarah Elwood and Mike Goodchild announced that they would be editing a volume dedicated to the topic, published as ‘Crowdsourcing Geographic Knowledge‘ (here is a link to the chapter in Crowdsourcing Geographic Knowledge).

My contribution to this volume focuses on citizen science, and shows the links between it and VGI. The chapter is currently under review, but the following excerpt discusses different types of citizen science activities, and I would welcome comments:

“While the aim here is not to provide a precise definition of citizen science, a clarification of its core characteristics is unavoidable. Therefore, it is defined here as scientific activities in which non-professional scientists volunteer to participate in the data collection, analysis and dissemination of a scientific project (Cohn 2008; Silvertown 2009). People who participate in a scientific study without playing some part in the study itself – for example, volunteering in a medical trial or participating in a social science survey – are not included in this definition.

While it is easy to identify a citizen science project when the aim of the project is the collection of scientific information, as in the recording of the distribution of plant species, there are cases where the definition is less clear-cut. For example, the process of data collection in OpenStreetMap or Google Map Maker is mostly focused on recording verifiable facts about the world that can be observed on the ground. The tools that OpenStreetMap mappers use – such as remotely sensed images, GPS receivers and map-editing software – can all be considered scientific tools. In their attempt to locate observed objects and record them accurately on a map, they follow in the footsteps of surveyors such as Robert Hooke, who also carried out an extensive survey of London using scientific methods – although, unlike OpenStreetMap volunteers, he was paid for his effort. Finally, cases where facts are collected in a participatory mapping activity, such as the one that Ghose (2001) describes, should probably be considered citizen science only if the participants decide to frame them as such. For the purpose of the discussion here, such a broad definition is more useful than a limiting one that tries to reject certain activities.

Notice also that, by definition, citizen science can only exist in a world in which science is socially constructed as the preserve of professional scientists in academic institutions and industry, because otherwise any person who is involved in a scientific project would simply be considered a contributor and potentially a scientist. As Silvertown (2009) noted, until the late 19th century, science was mainly developed by people who had additional sources of employment that allowed them to spend time on data collection and analysis. Famously, Charles Darwin joined the Beagle voyage not as a professional naturalist but as a companion to Captain FitzRoy. Thus, in that era, almost all science was citizen science, albeit carried out mostly by affluent gentlemen and gentlewomen scientists. While the first professional scientist was likely Robert Hooke, who was paid to work on scientific studies in the 17th century, the major growth in the professionalisation of scientists came mostly in the latter part of the 19th century and throughout the 20th.

Even with the rise of the professional scientist, the role of volunteers has not disappeared, especially in areas such as archaeology, where it is common for enthusiasts to join excavations, or in natural science and ecology, where they collect and send samples and observations to national repositories. These activities include the Christmas Bird Count, which has been ongoing since 1900, and the British Trust for Ornithology surveys, which have collected over 31 million records since the Trust’s establishment in 1932 (Silvertown 2009). Astronomy is another area where amateurs and volunteers have been on a par with professionals when observation of the night sky and the identification of galaxies, comets and asteroids are considered (BBC 2006). Finally, meteorological observations have also relied on volunteers since the start of systematic measurements of temperature, precipitation and extreme weather events (WMO 2001).

This provides the first, ‘classic’ type of citizen science – the ‘persistence’ parts of science, where the resources, geographical spread and the nature of the problem mean that volunteers sometimes predate the professionalisation and mechanisation of science. These research areas usually require a large but sparse network of observers who carry out their work as part of a hobby or leisure activity. This type of citizen science has flourished in specific enclaves of scientific practice, and the progressive development of modern communication tools has made the process of collating the results from the participants easier and cheaper, while keeping many of the characteristics of the data collection processes close to their origins.

A second set of citizen science activities sits within environmental management and, more specifically, within the context of environmental justice campaigns. Modern environmental management includes strong technocratic and science-oriented management practices (Bryant & Wilson 1998; Scott & Barnett 2009), and environmental decision-making is heavily based on scientific environmental information. As a result, when an environmental conflict emerges – such as a community protest over a noisy local factory or a planned expansion of an airport – the valid evidence needs to be based on scientific data collection. This aspect of the environmental justice struggle encourages communities to carry out ‘community science’, in which scientific measurements and analysis are carried out by members of local communities so they can develop an evidence base and set out action plans to deal with problems in their area. A successful example of such an approach is the ‘Global Community Monitor’ method of allowing communities to deal with air pollution issues (Scott & Barnett 2009). This is performed through a simple method of sampling air using plastic buckets, followed by analysis in an air pollution laboratory, and, finally, the community being provided with instructions on how to understand the results. This activity is termed the ‘Bucket Brigade’, and has been used across the world in environmental justice campaigns. In London, community science was used to collect noise readings in two communities that are impacted by airport and industrial activities. The outputs were effective in bringing environmental problems to the policy arena (Haklay, Francis & Whitaker 2008). As in ‘classic’ citizen science, the growth in electronic communication has enabled communities to identify potential methods – e.g. through the ‘Global Community Monitor’ website – as well as to find international standards, regulations and scientific papers that can be used together with the local evidence.
However, the emergence of the Internet and the Web as a global infrastructure has enabled a new incarnation of citizen science: the realisation by scientists that the public can provide free labour, skills, computing power and even funding, together with the growing demand from research funders for public engagement, has motivated scientists to develop and launch new and innovative projects (Silvertown 2009; Cohn 2008). These projects utilise the ability of personal computers, GPS receivers and mobile phones to double as scientific instruments.

This third type of citizen science has been termed ‘citizen cyberscience’ by Francois Grey (2009). Within it, it is possible to identify three sub-categories: volunteered computing, volunteered thinking and participatory sensing.

Volunteered computing was first developed in 1999 with the launch of SETI@home (Anderson et al. 2002), which was designed to distribute the analysis of data collected from a radio telescope in the search for extraterrestrial intelligence. The project utilises the unused processing capacity that exists in personal computers, and uses the Internet to send and receive ‘work packages’ that are analysed automatically and sent back to the main server. Over 3.83 million downloads were registered on the project’s website by July 2002. The system on which SETI@home is based, the Berkeley Open Infrastructure for Network Computing (BOINC), is now used for over 100 projects, covering physics (processing data from the Large Hadron Collider through LHC@home), climate science (running climate models in Climateprediction.net) and biology (calculating the shape of proteins in Rosetta@home).
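The work-package pattern described above can be sketched in a few lines. This is a hypothetical illustration of the general idea – split the data, let each volunteer machine analyse a chunk independently, collate the results – and not BOINC’s actual API; all function names and the placeholder analysis are invented for the example.

```python
# Hypothetical sketch of the volunteered-computing pattern: a server splits a
# dataset into independent 'work packages', volunteers' idle machines analyse
# them, and the results are collated back on the server.

from queue import Queue

def make_work_packages(data, size):
    """Split the raw data into independent chunks ('work packages')."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def volunteer_process(package):
    """Stand-in for the analysis each volunteer's computer performs
    (e.g. searching a chunk of radio-telescope signal for a pattern)."""
    return sum(package)  # trivial placeholder analysis

def run_project(data, package_size=4):
    pending = Queue()
    for pkg in make_work_packages(data, package_size):
        pending.put(pkg)
    results = []
    while not pending.empty():  # each iteration stands for one volunteer's work unit
        results.append(volunteer_process(pending.get()))
    return results

# Example: 10 readings split into packages of 4, analysed independently.
print(run_project(list(range(10))))  # [6, 22, 17]
```

The key property the sketch shows is that the packages carry no dependencies on one another, which is what lets the real systems tolerate volunteers joining, leaving or returning results late.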

While volunteered computing requires very little from the participants apart from installing software on their computers, in volunteered thinking the volunteers are engaged at a more active and cognitive level (Grey 2009). In these projects, the participants are asked to use a website on which information or an image is presented to them. When they register with the system, they are trained in the task of classifying the information. After the training, they are exposed to information that has not yet been analysed and are asked to carry out classification work. One of the first projects in this area was Stardust@home (Westphal et al. 2006), in which volunteers were asked to use a virtual microscope to try to identify traces of interstellar dust, together with NASA ClickWorkers, which focused on the classification of craters on Mars. Galaxy Zoo (Lintott et al. 2008), a project in which volunteers classify galaxies, is now one of the most developed, with over 100,000 participants and a range of applications that are included in the wider Zooniverse set of projects (see http://www.zooniverse.org/).
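The train-then-classify workflow can be sketched as follows. This is a simplified, hypothetical illustration – admitting a volunteer once they classify enough known items correctly, then combining many volunteers’ labels by majority vote – and not the actual mechanism of any specific project; the names, threshold and galaxy labels are invented for the example.

```python
# Hypothetical sketch of the volunteered-thinking workflow: volunteers are
# trained on items with known answers, then classify unseen items, and the
# project aggregates many independent classifications of each item.

from collections import Counter

def passes_training(volunteer_answers, gold_standard, threshold=0.8):
    """Admit a volunteer only if they classify enough known items correctly."""
    correct = sum(a == g for a, g in zip(volunteer_answers, gold_standard))
    return correct / len(gold_standard) >= threshold

def consensus(classifications):
    """Combine independent volunteer labels for one item by majority vote."""
    label, _count = Counter(classifications).most_common(1)[0]
    return label

# A volunteer who gets 2 of 3 training items right falls below the threshold.
print(passes_training(["spiral", "elliptical", "spiral"],
                      ["spiral", "elliptical", "elliptical"]))  # False

# Three volunteers' labels for one galaxy image are reduced to a consensus.
print(consensus(["spiral", "spiral", "elliptical"]))  # spiral
```

Aggregating several independent classifications per item, rather than trusting any single volunteer, is what allows these projects to produce data of usable quality from an untrained crowd.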

Participatory sensing is the final and most recent type of citizen science activity. Here, the capabilities of mobile phones are used to sense the environment. Some mobile phones have up to nine sensors integrated into them, including different transceivers (mobile network, WiFi, Bluetooth), FM and GPS receivers, a camera, an accelerometer, a digital compass and a microphone, and they can also link to external sensors. These capabilities are increasingly used in citizen science projects, such as Mappiness, in which participants are asked to provide behavioural information (their feeling of happiness) while the phone records their location, allowing different locations to be linked to wellbeing (MacKerron 2011). Other activities include sensing air quality (Cuff 2007) or noise levels (Maisonneuve et al. 2010) by using the mobile phone’s location together with readings from the microphone.
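The essence of participatory sensing is the coupling of a sensor reading with the time and place it was taken, so that readings contributed by many volunteers can later be mapped and aggregated. The sketch below illustrates this with a hypothetical noise-mapping record; the field names, coordinates and decibel values are invented for the example and do not reflect any particular app’s data model.

```python
# Hypothetical sketch of a participatory-sensing record of the kind a
# noise-mapping phone app might upload: each sample couples a microphone-derived
# reading with a timestamp and a GPS position.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SensorSample:
    timestamp: datetime
    lat: float        # from the phone's GPS receiver
    lon: float
    noise_db: float   # level derived from the microphone

def mean_noise(samples):
    """Aggregate many volunteers' readings into a single local estimate."""
    return sum(s.noise_db for s in samples) / len(samples)

samples = [
    SensorSample(datetime(2011, 7, 15, 9, 0, tzinfo=timezone.utc),
                 51.5246, -0.1340, 62.0),
    SensorSample(datetime(2011, 7, 15, 9, 5, tzinfo=timezone.utc),
                 51.5247, -0.1338, 66.0),
]
print(round(mean_noise(samples), 1))  # 64.0
```

In a real project the aggregation would of course be spatial and temporal rather than a flat mean, but the record structure – reading plus time plus location – is the common core.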

At the State of the Map (EU) 2011 conference, held in Vienna from 15 to 17 July, I gave a keynote talk on the relationships between the OpenStreetMap (OSM) community and the GIScience research community. These relationships are especially important for researchers working on Volunteered Geographic Information (VGI), due to the major role of OSM in this area of research.

The talk included an overview of what researchers have discovered about OpenStreetMap over the five years since we started to pay attention to it. One striking result is that the issue of positional accuracy does not require much more work by researchers. Another important outcome of the research is the understanding that quality is impacted by the number of mappers, and that the data can be used with confidence for mainstream geographical applications when certain conditions are met. These results are both useful and of interest to a wide range of groups, but key areas remain that require further research – for example, specific facets of quality, community characteristics and how OSM data is used.

Reflecting on the body of research, we can start to form a ‘code of engagement’ for both academics and mappers who are engaged in researching or using OpenStreetMap. One such guideline would be that it is both prudent and productive for any researcher to do some mapping herself, and to understand the process of creating OSM data, if the research is to be relevant and accurate. Other aspects of the proposed ‘code’ are covered in the presentation.

The talk is also available as a video from the TU Wien Matterhorn server.

In March 2008, I started comparing OpenStreetMap in England to the Ordnance Survey Meridian 2, as a way to evaluate the completeness of OpenStreetMap coverage. The rationale behind the comparison is that Meridian 2 represents a generalised geographic dataset that is widely used in national-scale spatial analysis. At the time the study started, it was not clear that OpenStreetMap volunteers could create highly detailed maps, as can now be seen on the ‘Best of OpenStreetMap‘ site. Yet even today, Meridian 2 provides a minimum threshold for OpenStreetMap when the question of completeness is asked.

So far, I have carried out six evaluations, comparing the two datasets in March 2008, March 2009, October 2009, March 2010, September 2010 and March 2011. While the work on the statistical analysis and verification of the results continues, Oliver O’Brien helped me take the results of the analysis for Britain and turn them into an interactive online map that can help in exploring the progression of the coverage over the various time periods.

Notice that the visualisation shows the total length of all road objects in OpenStreetMap, so it does not discriminate between roads, footpaths and other types of objects. This is the most basic level of completeness evaluation, and it is fairly coarse.
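A much simplified sketch of this kind of length-based comparison is shown below: sum the length of all road objects per grid cell in each dataset and take the OSM-to-Meridian ratio, so a value of 1 or more suggests OSM meets the Meridian 2 threshold in that cell. This is only an illustration of the principle, not the actual analysis pipeline; the cell identifiers and lengths are invented for the example.

```python
# Hypothetical sketch of a length-based completeness comparison between two
# road datasets, aggregated over grid cells.

from collections import defaultdict

def total_length_per_cell(segments):
    """segments: iterable of (cell_id, length_km) pairs for one dataset."""
    totals = defaultdict(float)
    for cell, length in segments:
        totals[cell] += length
    return totals

def completeness_ratio(osm_segments, meridian_segments):
    """Ratio of OSM road length to Meridian road length in each Meridian cell."""
    osm = total_length_per_cell(osm_segments)
    meridian = total_length_per_cell(meridian_segments)
    return {cell: osm.get(cell, 0.0) / length
            for cell, length in meridian.items() if length > 0}

osm = [("SU12", 8.0), ("SU12", 4.0), ("SU13", 2.0)]
meridian = [("SU12", 10.0), ("SU13", 5.0)]
print(completeness_ratio(osm, meridian))  # {'SU12': 1.2, 'SU13': 0.4}
```

The coarseness noted above is visible even in this toy version: a cell can exceed the threshold because OSM contains footpaths and other objects that Meridian 2 omits, without the road network itself being complete.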

The application allows you to browse the results and to zoom to a specific location, and, as Oliver integrated the Ordnance Survey Street View layer, to see what information is missing from OpenStreetMap.

Finally, note that for the periods before September 2010, the coverage is for England only.

Some details on the development of the map are available on Oliver’s blog.

This post reviews the two books about OpenStreetMap that appeared late in 2010: OpenStreetMap: Using and Enhancing the Free Map of the World (by F. Ramm, J. Topf & S. Chilton, 386 pages, £25) and OpenStreetMap: Be Your Own Cartographer (by J. Bennett, 252 pages, £25). The review was written by Thomas Koukoletsos, with some edits by me. It first covers the Ramm et al. book, and then compares it to Bennett’s. It is fairly detailed, so if you want to see the recommendation, scroll all the way down.

OpenStreetMap: Using and Enhancing the Free Map of the World is a comprehensive guide to OpenStreetMap (OSM), aimed at a wide range of readers, from those unfamiliar with the project to those who want to use its information and tools and integrate them with other applications. It is written in accessible language, starting from the basics and presenting things in an appropriate order for the reader to be able to follow, slowly building the necessary knowledge.

Part I, the introduction, comprises three chapters. It presents the OSM project generally, pointing to other chapters wherever further details are provided later on. This includes how the project started, a short description of its main interface, how to export data, and some of its related services, such as OpenStreetBugs and OpenRouteService. It concludes with a reference to mapping parties and the OSM Foundation. This gives all the necessary information for someone new to OSM to get a general idea, without becoming too technical.

Part II, addressing OSM contributors, follows with chapter 4 focusing on how GPS technology is used for OSM. The balance between the technical detail and accessibility continues, so all the necessary information for mapping is presented in an easily digested way even for those not familiar with mapping science. The following chapter covers the whole mapping process using a very comprehensive case study, through which the reader understands how to work in the field, edit and finally upload the collected data. Based on this overview, the next chapter is slightly more technical, describing the data model followed by OSM. The information provided is necessary to understand how the OSM database is structured.

Chapter 7 moves on to details, describing which objects need to be mapped and how this can be done by using tags. The examples provided help the user to move from simpler to more complicated representations. The importance of this chapter, however, is in emphasising that, although the proposed tagging framework is not compulsory, it is wise to follow it, as this increases consistency in the OSM database. The chapter ends with a suggestion of mapping priorities, from ‘very important’ objects and attributes to ‘luxury’ ones. Chapter 8 continues with map features, covering all the other proposed mapping priorities. The split between the two chapters guides the user gradually from the most important features to those covered by expert OSM users, as otherwise mapping might have been far too difficult a task for new participants.

Chapter 9 describes Potlatch, the most popular online editor. The description is simple and complete, and by the end the user is ready to contribute to the OSM database. The next chapter covers JOSM, an offline editor designed for advanced users, which is more powerful than Potlatch but more difficult to use – although the extensive instructions make this tool almost as easy to use as Potlatch. Chapter 11 concludes the review of editors by providing basic information on five other editors, suitable for desktop or mobile use. Chapter 12 presents some of the tools for mappers, designed to handle OSM data or perform quality assurance tests. Among the capabilities described are viewing data in layers, monitoring changes in an area, viewing roads with no names, etc. The second part ends, in chapter 13, with a description of the OSM licensing framework, giving the reader a detailed view of which sources of data should be avoided when updating OSM, in order to prevent copyright violations.

Part III of Ramm et al. is far more technical, beginning with how to use OSM on web pages. After providing the necessary information on tiling used for the OSM map (Mapnik and Tiles@Home servers), chapter 14 moves on to the use of OSM with Google Maps or with OpenLayers. Code is provided to assist the learning process. Chapter 15 provides information on how to download data, including the ability to download only changes and update an already downloaded version, explained further in a following chapter.

The next three chapters dive into cartographic issues, with chapter 16 starting with Osmarender, which helps in visualising OSM data. With the help of many examples, the reader is shown how this tool can be used to render maps, and how to customise visualisation rules to create a personal map style. Chapter 17 continues with Mapnik, a more efficient tool than Osmarender for large datasets; its efficiency is the result of reading the data from a PostgreSQL database. A number of other tools need to be installed for Mapnik; however, they are all listed with basic installation instructions. The chapter concludes with performance tips, with an example of layers used according to the zoom level so that rendering is faster. The final renderer, described in chapter 18, is Kosmos. It is a more user-friendly application than the previous two, and the only one with a Graphical User Interface (GUI). The rules used to transform OSM data into a map come from the wiki pages, so anyone in need of a personal map style will have to create a wiki page. There is a description of a tiling process using Kosmos, as well as of exporting and printing options. The chapter concludes by mentioning Maperitive, the successor to Kosmos, due to be released shortly.

Chapter 19 is devoted to mobile use of OSM. After explaining the basics of navigation and route planning, there is a detailed description of how to create and install OSM data on Garmin GPS receivers. Additional applications for various types of devices are briefly presented (iPhones, iPods, Android), as well as other routing applications. Chapter 20 closes the third part of the book with an extensive discussion on licence issues of OSM data and its derivatives. The chapter covers the CC-BY-SA licence framework, as well as a comprehensive presentation of the future licence, without forgetting to mention the difficulties of such a change.

Part IV is the most technical part, aimed at those who want to integrate OSM into their own applications. Chapter 21 reveals how OSM works, beginning with the OSM subversion repository, where the software for OSM is managed. Chapter 22 explains how the OSM Application Programming Interface (API) works. Apart from the basic data handling modes (create, retrieve, update or delete objects and GPS tracks), other methods of access are described, as well as how to work with changesets. The chapter ends with OAuth, a method that allows third-party applications to authenticate with OSM without handling the user’s credentials. Chapter 23 continues with XAPI, a different API that, although it offers only read requests and its data may be a few minutes old, allows more complex queries, returns more data than the standard API (e.g. historic versions) and supports RSS feeds from selected objects. Next, the Name Finder and Nominatim search engines, used for gazetteer purposes, are covered. Lastly, GeoNames is mentioned: although not part of OSM, it can be used in combination with other OSM tools.

Chapter 24 presents Osmosis, a tool to filter and convert OSM data. Apart from reading and writing XML files, this tool is also able to read from and write to PostgreSQL and MySQL databases. The chapter also describes how to create and process change files in order to continually update a local dataset or database from the OSM server. Chapter 25 moves deeper into more advanced editing, presenting the basics of large-scale and other automated changes. As such changes can affect many people and their contributions, the chapter begins with ‘a note of caution’, advising that, although power editing is available to everyone, those whose data is to be changed should be contacted and consulted first.

Chapter 26 focuses on imports and exports, including some of the programs that are used for specific data types. The final chapter presents a rather more detailed overview of how to run an OSM server as well as a tile server, covering the requirements and installation. There is also a presentation of the API schema, and alternatives to the OSM API are mentioned.

The book ends with the appendix, consisting of two parts, covering geodesy basics, and specifically geographic coordinates, datum definition and projections; and information on local OSM communities for a few selected countries.

Overall, the book is accessible and comprehensive.

Now, we turn to the second book (Bennett), reviewing it by focusing on the differences between the two books.

Chapters 1 and 2 give a general description of the OSM project and correspond to the first three chapters of Ramm et al. The history of OSM is more detailed here. The main OSM web page description does not include related websites but, on the other hand, it does describe how to use the slippy map as well as how to interact with data. The chapters also focus on the social aspect of the project, briefly presenting more details on a user’s account (e.g. personalisation of the user’s profile by adding a user photo, home location to enable communication with other users in the area or notification of local events).

Chapter 3 corresponds to chapters 4 and 5 of the first book. There is a more detailed description of how GPS works, as well as of how to configure the receiver; however, the other ways of mapping are less detailed. A typical mapping example and a more comprehensive description of the types of GPS devices suitable for OSM contribution, which are provided in Ramm et al., are missing.

Chapter 4 corresponds to chapters 6, 7 and 8 of the first book. Some less important aspects, such as the history of the data model, are missing. However, Ramm et al. is much more detailed on how to map objects, classifying them according to their importance and providing practical examples, while this chapter provides only a brief description of tags. Both books succeed in communicating the significance of following the wiki suggestions when it comes to tagging, despite the ‘any tags you like’ freedom. An interesting point, missing from the first book, is the importance of avoiding tagging for the renderer, explained here with the use of a comprehensive example.

Chapter 5 describes the editors Potlatch, JOSM and Merkaartor, corresponding to chapters 9, 10 and 11 of Ramm et al. Having the three editors in one chapter allows for a comparison table between them, giving a much quicker insight. A practical example with a GPS trace file helps in understanding the basic operations of these editors. More attention is given to Potlatch, while the other two editors are described only briefly. No other editors are described or mentioned.

Chapter 6 provides a practical example of using the three editors and shows how to map objects, which was covered in chapters 6, 7 and 8 in the first book. While the first book is more detailed and includes a wider range of mapping cases, here the reader becomes more familiar with the editors and learns how to provide the corresponding information. In addition to the material in the first book, here we have an example of finding undocumented tags and using OSMdoc.

Chapter 7 corresponds to chapter 12 of the first book, with a detailed description of the four basic tools to check OSM data for errors. However, Ramm et al. offers a broader view by mentioning or briefly describing seven other error-checking tools.

Chapter 8 deals with map production, similar to chapters 2, 16 and 18 of Ramm et al. The Osmarender tool is described in detail in both books. The Kosmos renderer, however, is described in much more detail here, although it is no longer developed. The chapter’s summary is very useful, as it briefly presents the three rendering tools and compares them. What is missing from this book, however, is a description of Mapnik (chapter 17 of Ramm et al.) and of the use of tiling in web mapping.

Chapter 9 corresponds to chapters 15, 22 and 23 of Ramm et al. Regarding planet files, Bennett provides a description of a way to check a planet file’s integrity, which can be useful for automating data integration processes. Moving on to OSM’s API, this book confines itself to describing ways of retrieving data from OSM, unlike the first book, which also includes operations to create, update or delete data. XAPI, however, is covered in more detail in this book, including how to filter data. The chapter’s summary, with its brief description and comparison of the ways to access data, is helpful. On the other hand, Ramm et al. briefly describes additional APIs and web services that are not covered here.

Chapter 10 matches chapter 24 of the first book. In both cases Osmosis is described in detail, with examples of how to filter data. The first book includes a more complete description of command-line options, classified according to the data streams (entity or change). This book, on the other hand, explains more fully how to access data based on a predefined polygon, and how to create and use a customised one. The first book mentions additional tasks, such as ‘log progress’, ‘report integrity’, ‘buffer’ and ‘sort’, while here only the latter is used in an example. An advantage of Bennett’s book, however, is that the use of Osmosis with a PostgreSQL database – including how to update data and how to automate a database update procedure – is explained more comprehensively.

The last chapter talks about future aspects of OSM. The OSM licence and its future development is explained in a comprehensive way, corresponding to the end of chapter 20 of the first book, with the use of some good examples to show where the present OSM licence is problematic. However, throughout Bennett’s book, licence issues are not covered as well as in Ramm et al. (chapters 13, 20), and the reader needs to reach the end of the book to understand what is allowed and what is not with the OSM data. Moving on, MapCSS, a common stylesheet language for OSM, is explained in detail, while in the first book it is simply mentioned at the end of chapter 9 during a discussion of Potlatch 2. The book ends with Mapzen POI collector for iPhone, covered in chapter 11 of the first book.

When compared to the first book, what is missing here is the use of OSM for navigation in mobile devices (chapter 19), large-scale editing (chapter 25), writing or finding software for OSM (chapter 21) and how to run an OSM server (chapter 27). Another drawback is the lack of coloured images; in some cases (e.g. chapter 7 – the NoName layer) it is difficult to understand them.

So which book is for me?

Both books deal with more or less the same information, as shown by the comparison and sequence of their chapters.

Although there are areas where the two books are complementary, in most cases Ramm et al. provides a better understanding of the matters discussed, taking a broader and more extensive view. It addresses a wide range of readers, from those unfamiliar with OSM to advanced programmers who want to utilise it elsewhere, and is written with a progressive build-up of knowledge, which helps the learning process. It also benefits from a dedicated website where updates are provided. Bennett’s book, on the other hand, would be comparatively more difficult to read for someone who has not heard of OSM, as well as for those who need to use it but are not programming experts. There is a hidden assumption that the reader is fairly technically literate. It suffers somewhat from not being introductory enough, while at the same time not being in-depth and detailed.

As the two books are sold at a similar price point, we liked the Ramm et al. book much more and would recommend it to our students.
