Some ideas take a long time to mature into a form that you are finally happy to share. This is an example of such a thing.

I got interested in the area of Philosophy of Technology during my PhD studies, and have continued to explore it since. During this journey, I found a lot of inspiration in, and links to, Andrew Feenberg’s work – for example, in my paper about neogeography and the delusion of democratisation. The links are mostly due to Feenberg’s attention to ‘hacking’, or appropriating technical systems for functions and activities that fall outside what their designers or producers intended.

In addition to Feenberg, I became interested in the work of Albert Borgmann, because he explicitly analysed GIS, dedicating a whole chapter to it in Holding On to Reality. In particular, I was intrigued by his formulation of the Device Paradigm and the notion of Focal Things and Practices, which are linked to the three forms of information presented in Holding On to Reality – natural information, cultural information and technological information. It took me some time to see that these five concepts are linked, with technological information being a demonstration of the trouble with the Device Paradigm, while natural and cultural information are part of focal things and practices (more on these concepts below).

I first used Borgmann’s analysis as part of the ‘Conversations Across the Divide‘ session in 2005, which focused on Complexity and Emergence. In a joint contribution with David O’Sullivan about ‘complexity science and Geography: understanding the limits of narratives’, I used Borgmann’s classification of information. Later on, we tried to turn it into a paper, but in the end David wrote a much better analysis of complexity and geography, while the attempt to focus mostly on the information concepts was not fruitful.

The next opportunity to revisit Borgmann came in 2011, in an AAG pre-conference workshop on VGI, where I explored the links between the Device Paradigm, Focal Practices and VGI. By 2013, when I was invited to the ‘Thinking and Doing Digital Mapping‘ workshop organised by the ‘Charting the Digital‘ project, I was able to articulate the link between all five elements of Borgmann’s approach in my position paper. This week, I was able to come back to the topic in a seminar in the Department of Geography at the University of Leicester. Finally, I feel that I can link them in a coherent way.

So what is it all about?

Within the areas of VGI and Citizen Science, there is a tension between the different goals of the projects and the identification of practices in terms of what they mean for the participants – are we using people as ‘platforms for sensors’, or are we dealing with fuller engagement? Borgmann’s ideas can help in understanding the difference. He argues that modern technologies tend to adopt the myopic ‘Device Paradigm’, in which a specific interpretation of efficiency and productivity and a reductionist view of human actions take precedence over ‘Focal Things and Practices’ that bring people together in a way that is meaningful to human life. In Holding On to Reality (1999), he differentiates three types of information: natural, cultural and technological. Natural information is defined as information about reality: for example, scientific information on the movement of the earth or the functioning of a cell. This is information that was created in order to understand the functioning of reality. Cultural information is information that is used to shape reality, such as engineering design plans. Technological information is information as reality, and it leads to decreased human engagement with fundamental aspects of reality. Significantly, these categories do not relate to the common usage of the words ‘natural’, ‘cultural’ and ‘technological’; rather, they describe the changing relationship between information and reality at different stages of socio-technical development.

When we explore geographical information in general, we can see that some of it is technological information – for example, satnavs and the way they communicate with the people who use them, or virtual globes that claim to be a representation of reality, ‘current clouds’ and all. The paper map, on the other hand, provides a conduit to the experience of hiking and walking through the landscape, and is part of cultural information.

Things are especially interesting with VGI and Citizen Science, where information and practices need to be analysed in a more nuanced way. In some cases, the practices can become focal to the participants – for example in iSpot, where the experience of identifying a species in the field is also linked to the experiences of the amateurs and experts who discuss the classification. It is an activity that brings people together. On the other hand, crowdsourcing projects that grab information from satnav devices demonstrate the Device Paradigm, with the potential of reducing a meaningful holiday journey to ‘getting from A to B in the shortest time’. The slides below go through these ideas and then explore the implications for GIS, VGI and Citizen Science.

Now for the next stage – turning this into a paper…

Following the two previous assertions, namely that:

‘you can be supported by a huge crowd for a very short time, or by a few for a long time, but you can’t have a huge crowd all of the time (unless data collection is passive)’ (original post here)

And

‘All information sources are heterogeneous, but some are more honest about it than others’  (original post here)

The third assertion is about patterns of participation. It is one that I’ve mentioned before, and in some ways it is a corollary of the two assertions above.

‘When looking at crowdsourced information, always keep participation inequality in mind’ 

Because crowdsourced information, whether Volunteered Geographic Information or Citizen Science, is created through a socio-technical process, it is all too easy to forget the social side – especially when you are looking at the information without the metadata of who collected it and when. So when working with OpenStreetMap data, or viewing the distribution of bird species in eBird (below), even though the data source is expected to be heterogeneous, each observation is treated as similar to any other observation and assumed to have been produced in a similar way.

[Image: eBird map of the distribution of the House Sparrow]

Yet the data is not only heterogeneous in terms of consistency and coverage; it is also highly heterogeneous in terms of contribution. One of the most persistent findings from studies of various systems – for example Wikipedia, OpenStreetMap and even volunteer computing – is that there is a very distinctive heterogeneity in contribution. The phenomenon was termed ‘participation inequality’ by Jakob Nielsen in 2006, and it is summarised succinctly in the diagram below (from the Visual Liberation blog) – a very small number of contributors add most of the content, while most of the people who are involved in using the information will not contribute at all. Even when examining only those who actually contribute, in some projects over 70% contribute only once, with a tiny minority contributing most of the information.

[Image: participation inequality diagram]

Therefore, when looking at sources of information that were created through such a process, it is critical to remember the nature of the contribution. This has far-reaching implications for quality, as quality depends on the expertise of the heavy contributors, on their spatial and temporal engagement, and even on their social interactions and practices (e.g. abrasive behaviour towards other participants).

Because of these factors, it is critical to remember the impact and implications of participation inequality when analysing the information. Some analyses will be affected less than others, but in either case it needs to be taken into account.
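To make the skew concrete, here is a toy sketch in Python (the contribution counts are invented and roughly power-law shaped, not data from any real project) that measures how much of the content comes from the most active contributors:

```python
# Toy illustration of participation inequality. We simulate 10,000
# contributors with heavy-tailed (Pareto) contribution counts - invented
# numbers, not real project data - and measure the concentration.
import random

random.seed(42)
contributions = [int(random.paretovariate(1.2)) for _ in range(10_000)]
contributions.sort(reverse=True)

total = sum(contributions)
top_one_percent = contributions[: len(contributions) // 100]

print(f"Share of content from the top 1% of contributors: "
      f"{sum(top_one_percent) / total:.0%}")
print(f"Contributors with exactly one contribution: "
      f"{sum(1 for c in contributions if c == 1) / len(contributions):.0%}")
```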

Following the last post, which focused on an assertion about crowdsourced geographic information and citizen science, I continue with another observation. As was noted in the previous post, these can be treated as ‘laws’, as they seem to emerge as common patterns from multiple projects in different areas of activity – from citizen science to crowdsourced geographic information. The first assertion was about the relationship between the number of volunteers who can participate in an activity and the amount of time and effort that they are expected to contribute.

This time, I look at one aspect of data quality: consistency and coverage. Here the following assertion applies:

‘All information sources are heterogeneous, but some are more honest about it than others’

What I mean by that is the ongoing argument about authoritative versus crowdsourced information sources (Flanagin and Metzger 2008 frequently come up in this context), which was also at the root of the Wikipedia vs. Britannica debate, and the mistrust of citizen science observations and the constant questioning of whether citizen scientists can do ‘real research’.

There are many aspects to these concerns, so the assertion deals with comprehensiveness and consistency, which are used as reasons to dismiss crowdsourced information when comparing it to authoritative data. However, on closer inspection we can see that all these information sources are fundamentally heterogeneous. Despite all the effort to define precise standards for data collection in authoritative data, heterogeneity creeps in because of budget and time limitations, decisions about what is worth collecting and how, and the clash between reality and the specifications. Here are two examples:

Take one of the Ordnance Survey Open Data sources – the maps present themselves as consistent and covering the whole country in an orderly way. However, dig into the details of the mapping and you discover that the Ordnance Survey uses different standards for mapping urban, rural and remote areas. Yet the derived products that are generalised and manipulated in various ways, such as Meridian or Vector Map District, do not provide a clear indication of which parts originated from which scale – so the heterogeneity of the source disappears in the final product.

The census is also heterogeneous, and it is a good case of specifications vs. reality. Not everyone fills in the forms, and even with the best efforts of the enumerators it is impossible to collect all the data; therefore, statistical analysis and manipulation of the results are required to produce a well-reasoned assessment of the population. This is expected, even though it is not always understood.

Therefore, even the best information sources that we accept as authoritative are heterogeneous – but, as I’ve stated, they are just not completely honest about it. The ONS doesn’t release the full original set of data before all the manipulations, nor does it completely disclose all the assumptions that went into reaching the final values. The Ordnance Survey doesn’t tag every line with metadata about the date of collection and the scale.

Somewhat counter-intuitively, exactly because crowdsourced information is expected to be inconsistent, we approach it as such and ask questions about its fitness for use. So in that way it is more honest about the inherent heterogeneity.

Importantly, the assertion should not be taken as dismissive of authoritative sources, or as ignoring that the heterogeneity within crowdsourced information sources is likely to be much higher than in authoritative ones. Of course all the investment in making things consistent and the effort to achieve universal coverage is worth it, and it would be foolish and counterproductive to suggest that such sources of information can simply be replaced, as has been suggested for the census, or that it’s not worth investing in the Ordnance Survey to keep the authoritative datasets up to date.

Moreover, when commercial interests meet crowdsourced geographic information or citizen science, the ‘honesty’ disappears. For example, even though we know that Google Map Maker is now used in many parts of the world (see the figure), even in cases where access to vector data is provided by Google, you cannot find out who contributed, when and where. It is also presented as an authoritative source of information.

Despite the risk of misinterpretation, the assertion can be useful as a reminder that the differences between authoritative and crowdsourced information are not as big as they may seem.

Looking across the range of crowdsourced geographic information activities, some regular patterns are emerging, and it might be useful to start noticing them as a way to think about what is and is not possible to do in this area. Since I don’t like the concept of ‘laws’ – as in Tobler’s first law of geography, which is stated as ‘Everything is related to everything else, but near things are more related than distant things.’ – I would call them assertions. There is also something nice about using the word ‘assertion’ in the context of crowdsourced geographic information, as it echoes Mike Goodchild’s differentiation between asserted and authoritative information. So not laws, just assertions or even observations.

The first one is a rephrasing of a famous quote:

‘you can be supported by a huge crowd for a very short time, or by a few for a long time, but you can’t have a huge crowd all of the time (unless data collection is passive)’

So the Christmas Bird Count can have tens of thousands of participants for a short time, while the number of people who operate weather observation stations will be much smaller. The same is true for OpenStreetMap – for crisis mapping, which is a short-term task, you can get many contributors, but for the regular updating of an area under usual conditions there will be only a few.

The exception to the assertion is the case of passive data collection, where information is collected automatically through logging from a sensor – for example, the recording of GPS tracks to improve navigation information.

[Image: OpenStreetMap crisis mapping response to Typhoon Haiyan]

The Consumers’ Association’s Which? magazine is probably not the first place to turn when looking for usability studies. Especially not if you’re interested in computer technology – for that, there are sources such as PC Magazine on the consumer side, and professional magazines such as Interactions from the Association for Computing Machinery (ACM) Special Interest Group on Computer-Human Interaction (SIGCHI).

And yet…

Over the past few years, Which? has been reviewing, testing and recommending satnavs (also known as Personal Navigation Devices – PNDs). Which? is an interesting case because it reaches over 600,000 households and because of the level of trust that it enjoys. If you look at their methodology for testing satnavs, you’ll find that it does resemble usability testing – click on the image to see the video from Which? about their methodology. The methodology focuses on everyday use, and the opinions of the assessors seem to play an important role.

[Link to the Which? satnav testing video]

Professionals in geographical information science or human-computer interaction might dismiss the study as unrepresentative, or not fitting their ways of evaluating technologies, but we need to remember that Which? is providing an insight into the experience of the people who are outside our usual professional and social context – people who go to a high street shop or download an app and start using it straightaway. Therefore, it’s worth understanding how they review the different systems and what the experience is like when you try to think like a consumer, with limited technical knowledge and understanding of maps.

There are also aspects that puncture the ‘filter bubble‘ of geoweb people – Google Maps is now probably the most used map on the web, but the satnav application using Google Maps was described as ‘bad, useful for getting around on foot, but traffic information and audio instructions are limited and there’s no speed limit or speed camera data‘. Waze, the crowdsourced application, received especially low marks, and the magazine noted that it ‘lets users share traffic and road info, but we found its routes and maps are inaccurate and audio is poor‘ (both citations from Which? Nov 2012, p. 38). It is also worth reading their description of OpenStreetMap when discussing map updates, and their opinions on the willingness to pay for map updates.

There are many ways to receive information about the usability and the nature of interaction with geographical technologies, and some of them, while not traditional, can provide useful insights.

The Spatial Data Infrastructure Magazine (SDIMag.com) is a relatively new e-zine dedicated to the development of spatial  data infrastructures around the world. Roger Longhorn, the editor of the magazine, conducted an email interview with me, which is now published.

In the interview, we cover the problematic terminology used to describe a wide range of activities; the need to consider the social and technical aspects as well as the goals of the participants; and, of course, the role of the information that is produced through crowdsourcing, citizen science and VGI within spatial data infrastructures.

The full interview can be found here.


At the State of the Map (EU) 2011 conference, held in Vienna from 15-17 July, I gave a keynote talk on the relationships between the OpenStreetMap (OSM) community and the GIScience research community. Of course, these relationships are especially important for those researchers who are working on Volunteered Geographic Information (VGI), due to the major role of OSM in this area of research.

The talk included an overview of what researchers have discovered about OpenStreetMap over the five years since we started to pay attention to it. One striking result is that the issue of positional accuracy does not require much more work by researchers. Another important outcome of the research is the understanding that quality is influenced by the number of mappers, and that the data can be used with confidence for mainstream geographical applications when certain conditions are met. These results are both useful and of interest to a wide range of groups, but there remain key areas that require further research – for example, specific facets of quality, community characteristics and how OSM data is used.

Reflecting on the body of research, we can start to form a ‘code of engagement’ for both academics and mappers who are engaged in researching or using OpenStreetMap. One such guideline would be that it is both prudent and productive for any researcher to do some mapping herself and to understand the process of creating OSM data, if the research is to be relevant and accurate. Other aspects of the proposed ‘code’ are covered in the presentation.

The talk is also available as a video from the TU Wien Matterhorn server.


In March 2008, I started comparing OpenStreetMap in England to the Ordnance Survey Meridian 2, as a way to evaluate the completeness of OpenStreetMap coverage. The rationale behind the comparison is that Meridian 2 represents a generalised geographic dataset that is widely used in national-scale spatial analysis. At the time the study started, it was not clear that OpenStreetMap volunteers could create highly detailed maps, such as those that can now be seen on the ‘Best of OpenStreetMap‘ site. Yet even today, Meridian 2 provides a minimum threshold for OpenStreetMap when the question of completeness is asked.

So far, I have carried out six evaluations, comparing the two datasets in March 2008, March 2009, October 2009, March 2010, September 2010 and March 2011. While the work on the statistical analysis and verification of the results continues, Oliver O’Brien helped me take the results of the analysis for Britain and turn them into an interactive online map that can help in exploring the progression of the coverage over the various time periods.

Notice that the visualisation shows the total length of all road objects in OpenStreetMap, so it does not discriminate between roads, footpaths and other types of objects. This is the most basic level of completeness evaluation, and it is fairly coarse. A minimal sketch of this style of grid-based comparison is shown below.
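The sketch is purely illustrative rather than the actual analysis pipeline: the file names, the layer contents and the use of the British National Grid (EPSG:27700) are all assumptions.

```python
# Hypothetical sketch: compare total road length per 1 km grid cell in
# two road datasets. File names and layer contents are placeholders.
import geopandas as gpd
from shapely.geometry import box

osm = gpd.read_file("osm_roads.shp").to_crs(epsg=27700)        # British National Grid
meridian = gpd.read_file("meridian2_roads.shp").to_crs(epsg=27700)

CELL = 1_000  # cell size in metres
minx, miny, maxx, maxy = meridian.total_bounds

def length_per_cell(roads):
    """Sum the clipped road length (metres) in every grid cell.
    A naive double loop, kept for clarity; a spatial join is faster."""
    lengths = {}
    y = miny
    while y < maxy:
        x = minx
        while x < maxx:
            cell = box(x, y, x + CELL, y + CELL)
            lengths[(x, y)] = roads.clip(cell).length.sum()
            x += CELL
        y += CELL
    return lengths

osm_lengths = length_per_cell(osm)
meridian_lengths = length_per_cell(meridian)

# A ratio near (or above) 1 suggests OSM matches or exceeds Meridian 2
# in that cell; a ratio well below 1 suggests incomplete OSM coverage.
for cell, m_len in meridian_lengths.items():
    if m_len > 0:
        print(cell, osm_lengths[cell] / m_len)
```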

The application allows you to browse the results and zoom to a specific location, and since Oliver integrated the Ordnance Survey Street View layer, you can see what information is missing from OpenStreetMap.

Finally, note that for the periods before September 2010, the coverage is for England only.

Some details on the development of the map are available on Oliver’s blog.

This post reviews the two books about OpenStreetMap that appeared late in 2010: OpenStreetMap: Using and Enhancing the Free Map of the World (by F. Ramm, J. Topf & S. Chilton, 386 pages, £25) and OpenStreetMap: Be Your Own Cartographer (by J. Bennett, 252 pages, £25). The review was written by Thomas Koukoletsos, with some edits by me. The review first covers the Ramm et al. book and then compares it to Bennett’s. It is fairly detailed, so if you just want to see the recommendation, scroll all the way down.

OpenStreetMap: Using and Enhancing the Free Map of the World is a comprehensive guide to OpenStreetMap (OSM), aimed at a wide range of readers, from those unfamiliar with the project to those who want to use its information and tools and integrate them with other applications. It is written in accessible language, starting from the basics and presenting things in an appropriate order for the reader to be able to follow, slowly building the necessary knowledge.

Part I, the introduction, covers three chapters. It presents the OSM project in general terms, while pointing to other chapters wherever further details are provided later on. This includes how the project started, a short description of its main interface, how to export data, and some of its related services, such as OpenStreetBugs and OpenRouteService. It concludes with a reference to mapping parties and the OSM Foundation. This gives all the necessary information for someone new to OSM to get a general idea, without becoming too technical.

Part II, addressing OSM contributors, follows with chapter 4 focusing on how GPS technology is used for OSM. The balance between technical detail and accessibility continues, so all the necessary information for mapping is presented in an easily digestible way, even for those not familiar with mapping science. The following chapter covers the whole mapping process using a very comprehensive case study, through which the reader understands how to work in the field, edit and finally upload the collected data. Based on this overview, the next chapter is slightly more technical, describing the data model used by OSM. The information provided is necessary to understand how the OSM database is structured.

Chapter 7 moves on to details, describing which objects need to be mapped and how this can be done using tags. The examples provided help the user move from simpler to more complicated representations. The importance of this chapter, however, is in emphasising that, although the proposed tagging framework is not compulsory, it is wise to follow it, as this increases the consistency of the OSM database. The chapter ends with a suggestion of mapping priorities, from ‘very important’ objects and attributes to ‘luxury’ ones. Chapter 8 continues with map features, covering all other proposed mapping priorities. The split between the two chapters guides the user gradually from the most important features to those covered by expert OSM users; otherwise mapping might have been far too difficult a task for new participants.

Chapter 9 describes Potlatch, the most popular online editor. The description is simple and complete, and by the end the user is ready to contribute to the OSM database. The next chapter refers to JOSM, an offline editor designed for advanced users, which is more powerful than Potlatch but more difficult to use – although the extensive instructions make the use of this tool almost as easy as Potlatch. Chapter 11 concludes the review of editors by providing basic information on five other editors, suitable for desktop or mobile use. Chapter 12 presents some of the tools for mappers, designed to handle OSM data or perform quality assurance tests. Among the capabilities described are viewing data in layers, monitoring changes in an area, viewing roads with no names, etc. The second part ends, in chapter 13, with a description of the OSM licensing framework, giving the reader a detailed view of which sources of data should be avoided when updating OSM, to protect it from copyright violations.

Part III of Ramm et al. is far more technical, beginning with how to use OSM on web pages. After providing the necessary information on the tiling used for the OSM map (Mapnik and Tiles@Home servers), chapter 14 moves on to the use of OSM with Google Maps or with OpenLayers. Code is provided to assist the learning process. Chapter 15 provides information on how to download data, including the ability to download only changes and update an already downloaded version, which is explained further in a later chapter.

The next three chapters dive into cartographic issues, with chapter 16 starting with Osmarender, which helps in visualising OSM data. With the help of many examples, the reader is shown how this tool can be used to render maps, and how to customise visualisation rules to create a personal map style. Chapter 17 continues with Mapnik, a more efficient tool than Osmarender for large datasets; its efficiency is the result of reading the data from a PostgreSQL database. A number of other tools need to be installed for Mapnik; however, they are all listed with basic installation instructions. The chapter concludes with performance tips, including an example of layers used according to the zoom level so that rendering is faster. The final renderer, described in chapter 18, is Kosmos. It is a more user-friendly application than the previous two, and the only one with a Graphical User Interface (GUI). The rules used to transform OSM data into a map come from the wiki pages, so anyone in need of a personal map style will have to create a wiki page. There is a description of a tiling process using Kosmos, as well as of exporting and printing options. The chapter concludes by mentioning Maperitive, the successor to Kosmos, to be released shortly.
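As a taste of what rendering with Mapnik looks like in practice, here is a minimal sketch using Mapnik’s Python bindings. It is not an example from the book, and it assumes that a Mapnik XML stylesheet (here called style.xml) has already been prepared:

```python
# Minimal sketch of rendering a map with Mapnik's Python bindings.
# Assumes a Mapnik XML stylesheet ("style.xml") defining the data
# sources and drawing rules; the file names are placeholders.
import mapnik

m = mapnik.Map(800, 600)            # output size in pixels
mapnik.load_map(m, "style.xml")     # stylesheet: layers, rules, symbolisers
m.zoom_all()                        # fit the view to the combined layer extent
mapnik.render_to_file(m, "map.png", "png")
```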

Chapter 19 is devoted to mobile use of OSM. After explaining the basics of navigation and route planning, there is a detailed description of how to create and install OSM data on Garmin GPS receivers. Additional applications for various types of devices are briefly presented (iPhones, iPods, Android), as well as other routing applications. Chapter 20 closes the third part of the book with an extensive discussion on licence issues of OSM data and its derivatives. The chapter covers the CC-BY-SA licence framework, as well as a comprehensive presentation of the future licence, without forgetting to mention the difficulties of such a change.

Part IV is the most technical part, aimed at those who want to integrate OSM into their own applications. Chapter 21 reveals how OSM works, beginning with the OSM subversion repository, where the software for OSM is managed. Chapter 22 explains how the OSM Application Programming Interface (API) works. Apart from the basic data handling modes (create, retrieve, update or delete objects and GPS tracks), other methods of access are described, as well as how to work with changesets. The chapter ends with OAuth, a method that allows OSM authentication through third-party applications without handing over the user’s credentials. Chapter 23 continues with XAPI, a different API that, although it offers only read requests and its data may be a few minutes old, allows more complex queries, returns more data than the standard API (e.g. historic versions) and provides RSS feeds from selected objects. Next, the Name Finder and Nominatim search engines for gazetteer purposes are covered. Lastly, GeoNames is mentioned, which, although not part of the OSM family, can be used in combination with other OSM tools.
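To give a flavour of the read side of the API that chapter 22 describes, here is a minimal sketch of a request for all data in a small bounding box. It is not an example from the book; the bounding box is arbitrary, and the endpoint shown is the API 0.6 ‘map’ call:

```python
# Minimal sketch of a read-only request to the OSM API (version 0.6).
# The bounding box (west, south, east, north) is an arbitrary example.
import requests

bbox = "-0.14,51.52,-0.13,51.53"  # a small area of central London
resp = requests.get(
    "https://api.openstreetmap.org/api/0.6/map",
    params={"bbox": bbox},
    timeout=60,
)
resp.raise_for_status()

# The response is OSM XML: the nodes, ways and relations in the box.
with open("extract.osm", "wb") as f:
    f.write(resp.content)
```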

Chapter 24 presents Osmosis, a tool to filter and convert OSM data. Apart from enabling reading and writing of XML files, the tool is also able to access PostgreSQL and MySQL databases for read and write purposes. The chapter also describes how to create and process change files in order to continually update a local dataset or database from the OSM server. Chapter 25 moves deeper into more advanced editing, presenting the basics of large-scale or other automated changes. As such changes can affect many people and their contributions, the chapter begins with ‘a note of caution’, noting that, although power editing is available to everyone, those whose data is about to be changed should be contacted and consulted first.
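To give a flavour of the kind of filtering Osmosis supports, the sketch below drives it from Python to extract just the road network from a local extract. This is not an example from the book: the file names are placeholders, and only the basic, well-documented Osmosis flags are used.

```python
# Hypothetical sketch: run Osmosis from Python to keep only ways tagged
# as highways, plus the nodes they reference. File names are placeholders.
import subprocess

subprocess.run(
    [
        "osmosis",
        "--read-xml", "file=extract.osm",            # read a local OSM XML file
        "--tag-filter", "accept-ways", "highway=*",  # keep only road ways
        "--used-node",                               # keep the nodes those ways use
        "--write-xml", "file=highways.osm",          # write the filtered result
    ],
    check=True,
)
```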

Chapter 26 focuses on imports and exports, including some of the programs that are used for specific data types. The final chapter presents a rather more detailed overview of how to run an OSM server as well as a tile server, covering the requirements and installation. There is also a presentation of the API schema, and alternatives to the OSM API are mentioned.

The book ends with an appendix in two parts: one covering geodesy basics – specifically geographic coordinates, datum definitions and projections – and one with information on local OSM communities in a few selected countries.

Overall, the book is accessible and comprehensive.

Now we turn to the second book (Bennett), focusing on the differences between the two books.

Chapters 1 and 2 give a general description of the OSM project and correspond to the first three chapters of Ramm et al. The history of OSM is more detailed here. The description of the main OSM web page does not include related websites but, on the other hand, it does describe how to use the slippy map as well as how to interact with data. The chapters also focus on the social aspect of the project, briefly presenting more details on a user’s account (e.g. personalising the user’s profile by adding a photo, or setting a home location to enable communication with other users in the area and notification of local events).

Chapter 3 corresponds to chapters 4 and 5 of the first book. There is a more detailed description of how GPS works, as well as of how to configure the receiver; however, the other ways of mapping are less detailed. A typical mapping example and a more comprehensive description of the types of GPS devices suitable for OSM contribution, which are provided in Ramm et al., are missing.

Chapter 4 corresponds to chapters 6, 7 and 8 of the first book. Some less important aspects are missing, such as the history of the data model. However, Ramm et al. is much more detailed on how to map objects, classifying them according to their importance and providing practical examples of how to do it, while in this chapter only a brief description of tags is provided. Both books succeed in communicating the significance of following the wiki suggestions when it comes to tagging, despite the ‘any tags you like’ freedom. An interesting point, which is missing from the first book, is the importance of avoiding tagging for the renderer, explained here with the use of a comprehensive example.

Chapter 5 describes the editors Potlatch, JOSM and Merkaartor, corresponding to chapters 9, 10 and 11 of Ramm et al. Having the three editors in one chapter allows for a comparison table between them, giving a much quicker insight. A practical example with a GPS trace file helps in understanding the basic operations of these editors. More attention is given to Potlatch, while the other two editors are described only briefly. No other editors are described or mentioned.

Chapter 6 provides a practical example of using the three editors and shows how to map objects, which was covered in chapters 6, 7 and 8 in the first book. While the first book is more detailed and includes a wider range of mapping cases, here the reader becomes more familiar with the editors and learns how to provide the corresponding information. In addition to the material in the first book, here we have an example of finding undocumented tags and using OSMdoc.

Chapter 7 corresponds to chapter 12 of the first book, with a detailed description of the four basic tools to check OSM data for errors. However, Ramm et al. offers a broader view by mentioning or briefly describing seven other error-checking tools.

Chapter 8 deals with map production, similar to chapters 2, 16 and 18 of Ramm et al. The Osmarender tool is described in detail in both books. The Kosmos renderer, however, is described in much more detail here, although it is no longer developed. The chapter’s summary here is very useful, as it briefly presents the three rendering tools and compares them. What is missing from this book, however, is a description of Mapnik (chapter 17 of Ramm et al.) and of the use of tiling in web mapping.

Chapter 9 corresponds to chapters 15, 22 and 23 of Ramm et al. Regarding planet files, Bennett provides a description of a way to check a planet file’s integrity, which can be useful for automating data integration processes. Moving on to OSM’s API, this book is confined to describing ways of retrieving data from OSM, unlike the first book, which also covers operations to create, update or delete data. XAPI, however, is covered in more detail in this book, including how to filter data. The brief description and comparison of the ways to access data in this chapter’s summary is helpful. On the other hand, Ramm et al. briefly describes additional APIs and web services that are not covered here.

Chapter 10 matches chapter 24 of the first book. In both cases Osmosis is described in detail, with examples of how to filter data. The first book includes a more complete description of command-line options, classified according to the data streams (entity or change). This book, on the other hand, is more explanatory on how to access data based on a predefined polygon, further explaining how to create and use a customised one. The first book mentions additional tasks, such as ‘log progress’, ‘report integrity’, ‘buffer’ and ‘sort’, while here only the latter is used in an example. An advantage of Bennett’s book, however, is that the use of Osmosis with a PostgreSQL database, as well as how to update data and how to automate a database update procedure, is explained more comprehensively and extensively.

The last chapter talks about future aspects of OSM. The OSM licence and its future development are explained in a comprehensive way, corresponding to the end of chapter 20 of the first book, with the use of some good examples to show where the present OSM licence is problematic. However, throughout Bennett’s book, licence issues are not covered as well as in Ramm et al. (chapters 13 and 20), and the reader needs to reach the end of the book to understand what is and is not allowed with OSM data. Moving on, MapCSS, a common stylesheet language for OSM, is explained in detail, while in the first book it is simply mentioned at the end of chapter 9 during a discussion of Potlatch 2. The book ends with the Mapzen POI collector for iPhone, covered in chapter 11 of the first book.

When compared to the first book, what is missing here is the use of OSM for navigation in mobile devices (chapter 19), large-scale editing (chapter 25), writing or finding software for OSM (chapter 21) and how to run an OSM server (chapter 27). Another drawback is the lack of coloured images; in some cases (e.g. chapter 7 – the NoName layer) it is difficult to understand them.

So which book is for me?

Both books deal with more or less the same information, as shown by the comparison and sequence of chapters.

Although there are areas where the two books are complementary, in most cases Ramm et al. provides a better understanding of the matters discussed, taking a broader and more extensive view. It addresses a wide range of readers, from those unfamiliar with OSM to advanced programmers who want to use it elsewhere, and is written with a progressive build-up of knowledge, which helps the learning process. It also benefits from a dedicated website where updates are provided. Bennett’s book, on the other hand, would be comparatively more difficult to read for someone who has not heard of OSM, as well as for those who need to use it but are not programming experts. There is a hidden assumption that the reader is fairly technically literate. It suffers somewhat from not being introductory enough, while at the same time not being in-depth and detailed.

As the two books are sold at a similar price point, we liked the Ramm et al. book much more and would recommend it to our students.

Following successful funding for the European Union FP7 EveryAware project and the EPSRC Extreme Citizen Science activities, the Department of Civil, Environmental and Geomatic Engineering at UCL is inviting applications for a postdoctoral position and three PhD studentships. Please note that these positions are open to students from any EU country.

These positions are in the ‘Extreme Citizen Science’ (ExCiteS) research group. The group’s activities focus on the theory, methodologies, techniques and tools that are needed to allow any community to start its own bottom-up citizen science activity, regardless of the level of literacy of the users. Importantly, Citizen Science is understood in the widest sense, including perceptions and views – so participatory mapping and participatory geographic information are integral parts of the activities.

The research themes that the group explores include Citizen Science and Citizen Cyberscience; Community and participatory mapping/GIS; Volunteered Geographic Information (OpenStreetMap, Green Mapping, Participatory GeoWeb); Usability of geographic information and geographic information technology, especially with non-expert users;  GeoWeb and mobile GeoWeb technologies that facilitate Extreme Citizen Science; and identifying scientific models and visualisations that are suitable for Citizen Science.

The positions that are opening now are part of an effort to extend Dr Jerome Lewis’ research with forest communities (see BBC Report and report on software development):

Research Associate in Extreme Citizen Science – a 2-year, postdoctoral research associate position commencing 1 May 2011.

The research associate will lead the development of an ‘Intelligent Map’ that allows non-literate users to upload data securely, and the system should allow the users to visualise their information together with data from other users. Permissions need to be developed in accordance with cultural sensitivities. As uploaded data from multiple users sharing the same system increases over time, repeating patterns will begin to emerge that indicate particular environmental trends.

The role will also include some general project-management duties, guiding the PhD students who are working on the project. Travel to Cameroon to the forest communities that we are working with is necessary.

Complete details about this post and application procedure are available on the UCL jobs website.

PhD Studentship – understanding citizen scientists’ motivations, incentives and group organisation – a 3.5-year fully funded studentship. We are looking for applicants with a good honours degree (1st Class or 2:1 minimum), and an MA or MSc in anthropology, geography, sociology, psychology or related discipline. The applicant needs to be familiar with quantitative and qualitative research methods, and be able to work with a team that will include programmers and human-computer interaction experts who will design systems to be used in citizen science projects. Travel will be required as part of the project. A willingness to live for short periods in remote forest locations in simple lodgings, eating local food, will be necessary. French language skills are desirable.

The research itself will focus on motivations, incentives and understanding of the needs and wishes of participants in citizen science projects. We will specifically focus on engagement of non-literate people in such projects and need to understand how the process – from data collection to analysis – can be made meaningful and useful for their everyday life. The research will involve using quantitative methods to analyse large-scale patterns of engagement in existing projects, as well as ethnographic and qualitative study of participants. The project will include working with non-literate forest communities in Cameroon as well as marginalised communities in London.

Complete details about this post and application procedure are available on the UCL jobs website.

PhD Studentship in geographic visualisation for non-literate citizen scientists – a 3.5-year fully funded studentship. The applicant should possess a good honours degree (1st Class or 2:1 minimum), and an MSc in computer science, human-computer interaction, electronic engineering or related discipline. In addition, they need to be familiar with geographic information and software development, and be able to work with a team that will include anthropologists and human-computer interaction experts who will design systems to be used in citizen science projects. Travel will be required as part of the project. A willingness to live for short periods in remote forest locations in simple lodgings, eating local food, will be necessary. French language skills are desirable.

Complete details about this post and application procedure are available on the UCL jobs website.

In addition, we offer a PhD Studentship on How interaction design and mobile mapping influences participation in Citizen Science, which is part of the EveryAware project and is also open to any EU citizen.
