On the 4th and 5th of August, Portland, OR, was the gathering place for 300 participants who came to the workshop on Public Participation in Scientific Research. The workshop was timed just before the annual meeting of the Ecological Society of America, so it was not surprising that it focused on citizen science projects linked to ecology and the monitoring of natural environments. These projects are some of the longest-running citizen science activities, and they are now gaining recognition and attention.

The workshop was organised as a set of thematic talks interlaced with long poster sessions. This way, the workshop included over 180 presentations in a day and a half. That set the scene for a detailed discussion at the end of the second day, exploring the way forward for the field of PPSR/Citizen Science/Civic Science, with attention to sharing lessons, developing and supporting new activities, considering codes of ethics, and so on.

I presented the last talk of the workshop, describing Extreme Citizen Science and arguing for the potential of public participation to go much deeper in terms of engagement. The presentation is provided below, together with an interview that was conducted with me shortly afterwards.

And the interview,

 

Over the Air 2012 was a wonderful event – a 36-hour event dedicated to mobile development, held at Bletchley Park. This year, citizen science was a theme of the event. The final talk was given by Francois Grey from the Citizen Cyberscience Centre. Francois’ interest is in volunteer computing – the type of citizen science where people donate the unused cycles of their computers through software such as BOINC – as well as the wider range of citizen science projects. Based on his experience of talking with scientists around the world about citizen science, he developed the seven myths of citizen science, which he covered in his talk (see below). He suggests these as points of view that scientists express when citizen science is suggested to them. They are:

  1. It doesn’t produce real science
  2. It doesn’t work for my science – it is too complex to engage people in it
  3. Nobody will be interested in my area of science
  4. You can’t trust the results from ordinary people if you involve them in something other than automatic processing
  5. Volunteer computing is hugely wasteful in energy terms when compared to computer clusters
  6. It doesn’t really engage people in science
  7. One day we will run out of volunteers

Interestingly, the myths cover the practice of science (energy consumption, validation), social practices (the number of volunteers) and the educational aspects of science (interest, engagement). It is worth thinking about these myths and what they mean for various projects – and remembering that they are based on scientists’ views.

At the 2012 Annual Meeting of the Association of American Geographers, I presented during the session ‘Information Geographies: Online Power, Representation and Voice’, which was organised by Mark Graham (Oxford Internet Institute) and Matthew Zook (University of Kentucky). For an early morning session on a Saturday, the session was well attended – and the papers in the session were very interesting.

My presentation, titled ‘“Nobody wants to do council estates” – digital divide, spatial justice and outliers’, was the result of thinking about the nature of social information that is available on the Web, which I partially articulated in a response to a post on the GeoIQ blog. When Mark and Matt asked for an abstract, I provided the following:

The understanding of the world through digital representation (digiplace) and VGI is frequently carried out with the assumption that these are valid, comprehensive and useful representations of the world. A common practice throughout the literature on these issues is to mention the digital divide and, while accepting it as a social phenomenon, either ignore it for the rest of the analysis or expect that it will solve itself over time through technological diffusion. The almost deterministic belief in technological diffusion absolves the analyst from fully confronting the political implications of the divide.

However, what VGI and social media analysis reveals is that the digital divide is part of deep and growing social inequalities in Western societies. Worse still, digiplace amplifies and strengthens them.

In digiplace the wealthy, powerful, educated and mostly male elite is amplified through multiple digital representations. Moreover, the frequent decision of algorithm designers to highlight and emphasise those who submit more media, and the level of ‘digital cacophony’ that more active contributors create, mean that a very small minority – arguably outliers in every analysis of a normal distribution of human activities – are super empowered. Therefore, digiplace power relationships are arguably more polarised than those outside cyberspace, due to the lack of social checks and balances. This makes the acceptance of the disproportionate amount of information that these outliers produce as reality highly questionable.

The following notes might help in making sense of the slides.

Slide 2 takes us back 405 years to Mantua, Italy, where Claudio Monteverdi has just written one of the very first operas – L’Orfeo – as an after-dinner entertainment piece for Duke Vincenzo Gonzaga. Leaving aside the wonderful music – my personal recommendation is for Emmanuelle Haïm’s performance, and I used the opening toccata in my presentation – there is a serious point about history. For a large portion of human history, and as recently as 400 years ago, we knew only about the rich and the powerful. We ignored everyone else because they ‘were not important’.

Slide 3 highlights two points about modern statistics. First, it is a tool to gain an understanding of the nature of society as a whole. Second, when we look at the main body of society, it falls within the first two standard deviations of a normal distribution. The Index of Deprivation of the UK (Slide 4) is an example of this type of analysis. Even though it was designed to direct resources to the most needy, it analyses the whole population (and, by the way, is normalised).
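
To make the point in Slide 3 concrete, here is a minimal sketch (my illustration, not part of the talk) of how much of a normally distributed population falls within a given number of standard deviations of the mean, using the standard erf-based formula for the normal distribution.

```python
# Illustrative sketch: share of a normally distributed population that falls
# within k standard deviations of the mean (the 'main body' of Slide 3).
from math import erf, sqrt


def share_within(k: float) -> float:
    """Cumulative share of a normal distribution within +/- k standard deviations."""
    return erf(k / sqrt(2))


for k in (1, 2, 3):
    print(f"within {k} standard deviation(s): {share_within(k):.1%}")
# Prints roughly 68.3%, 95.4% and 99.7% - so anything beyond two or three
# standard deviations is, by definition, a very small minority.
```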

Slide 5 points out that on the Web, and in social media in particular, the focus is on ‘long tail’ distributions. My main issue is not with the pattern but with what it means for analysing the information. This is where participation inequality (Slide 6) matters, and the point of Nielsen’s analysis is that outlets such as Wikipedia (and, as we will see, OpenStreetMap) suffer from even worse inequality than other communication media. Nielsen’s recent analysis in his newsletter (Slide 7) demonstrates how this plays out on Facebook (FB). Notice the comment ‘these people have no life’ – or, as Sherry Turkle put it, they have got life on the screen.
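
To give a feel for what this participation inequality looks like, here is a small, hypothetical simulation (not Nielsen’s data or method): per-contributor activity is drawn from a heavy-tailed Pareto distribution, with the shape parameter chosen purely for illustration, and we measure how much of the total content comes from the most active 1% of contributors.

```python
# Hypothetical illustration of participation inequality: draw per-contributor
# activity from a heavy-tailed (Pareto) distribution and measure the share of
# content produced by the most active 1% of contributors.
import random

random.seed(42)
ALPHA = 1.2  # assumed shape parameter; smaller alpha means a heavier tail

contributions = sorted(
    (random.paretovariate(ALPHA) for _ in range(100_000)), reverse=True
)
total = sum(contributions)
top_1_percent = contributions[: len(contributions) // 100]

print(f"top 1% of contributors produce {sum(top_1_percent) / total:.0%} of the content")
```

Under these assumed parameters, the top 1% typically accounts for close to half of all contributions – the kind of skew that the slides describe for Wikipedia and OpenStreetMap.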

Slides 8 and 9 demonstrate that participation inequality is strongly represented in OpenStreetMap, and we can expect it to play out in FourSquare, Google Map Maker, Waze and other GeoWeb social applications. Slide 10 focuses on other characteristics of the people who contribute content: male, highly educated, aged 20-40. Similar characteristics have been shown in other social media and the GeoWeb by Monica Stephens & Antonella Rondinone, and by many other researchers.

In slides 11-14, observed spatial biases in OpenStreetMap are noted – a concentration on highly populated places, a gap between rich and poor places (using the Index of Deprivation from Slide 4), and differences between rural and urban areas. These differences were also observed in other sources of Volunteered Geographic Information (VGI), such as photo-sharing sites (in Vyron Antoniou’s PhD).

Taken together, participation inequality, demographic bias and spatial bias point to a very skewed group that is producing most of the content that we see on the GeoWeb. Look back at Slide 3, and it is a good guess that this minority falls outside the first two standard deviations from the centre. They are outliers – not representative of anything other than themselves. Of course, given the large number of people online and the ability of outliers to ‘shout’ louder than anyone else, and to converse among themselves, it is tempting to treat them as a population worth listening to. But this, as with the opening point, is a look at the rich and powerful (or super enthusiastic) and not the mainstream.

Strangely, when such a small group controls the economy, we see it as a political issue (Slide 15, which was produced by Mother Jones as part of the response to the Occupy movement). We should be just as concerned when it happens with digital content and sets the agenda of what we see and how we understand the world.

Now to the implications of this analysis, and the use of the GeoWeb and social media to understand society. Slide 17 provides the link to the GeoIQ post that argued that these outliers are worth listening to. They might be, but the issue is what you are trying to find out by looking at the data:

The first option is to ask questions about the resulting data, such as ‘can it be used to update national datasets?’ – accepting the biases in the data collection as they are and exploring whether anything useful comes out of the outcomes (Slides 19-21, from the work of Vyron Antoniou and Thomas Koukoletsos). This should be fine as long as the researchers don’t try to state something general about the way society works from the data. Even so, researchers ought to analyse and point to the biases and shortcomings (Slides 11-14 do exactly that).

The second option is to start claiming that we can learn something about social activities (Slides 22-23, from the work of Eric Fischer and Daniel Gayo-Avello, as well as Sean Gorman in the GeoIQ post). In this case, it is wrong to read too much into the data, as Gayo-Avello noted – the outliers’ bias renders the analysis unrepresentative of society. Notice, for example, the huge gap between the social media noise during the Egyptian revolution and the outcomes of the elections, or the political differences that Gayo-Avello noted.

The third option is to find data that is representative (Slide 24, from the MIT Senseable City Lab), which looks at the ‘digital breadcrumbs’ that we leave behind on a large scale – phone calls, SMS, travel cards, etc. This data is representative, but provides observations without context. There is no qualitative or contextual information that comes with it and, because of the biases that are noted above, it is wrong to integrate it with the digital cacophony of the outliers. It is most likely to lead to erroneous conclusions.

Therefore, the understanding of the concept of digiplace (Slide 25) – the ordering of digital representation through software algorithms and GeoWeb portals – is, in fact, double filtered. The provision of content by outliers means that the algorithms will tend to amplify their point of view and biases.  Not only that, digital inequality, which is happening on top of social and economic inequality, means that more and more of our views of the world are being shaped by this tiny minority.

When we add aspects of digital inequality to the mix (some people can only afford a pay-as-you-go feature phone, while a tiny minority consumes a lot of bandwidth over multiple devices), we should stop talking about the ‘digital divide’ as something that will close over time. This is a sort of imaginary trickle-down theory that does not withstand the test of reality. If anything, the divide grows as the ‘haves’ use multiple devices to shape digiplace in their own image.

This is actually one of the core problems that differentiates two approaches to engagement in data collection. There is the laissez-faire approach to engaging society in collecting information about the world (Slides 27-28, showing OpenStreetMap mapping parties), which does not confront the biases; opposite it, there are participatory approaches (Slides 29-30, showing participatory mapping exercises from the work of Mapping for Change), where the effort goes into making the activity inclusive.

This point about the biases, inequality and influence on the way we understand the world is important to repeat – as it is too often ignored by researchers who deal with these data.

The London Citizen Cyberscience Summit ran in the middle of February, from the 16th (Thursday) to the 18th (Saturday). It marked the launch of the UCL Extreme Citizen Science (ExCiteS) group, while providing an opportunity for people who are interested in different aspects of citizen science to come together, discuss, share ideas, consider joint projects and learn from one another. The original idea for the summit, when the first organisational meeting took place in October last year, was to set a programme that would include academics who research citizen science or develop citizen science projects; practitioners and enthusiasts who are developing technologies for citizen science activities; and people who are actively engaged in citizen science. Therefore, we included a mix of talks, workshops and hack days and started approaching speakers who would cover the range of interests, backgrounds and knowledge.

The announcement about the summit came out only in late December, so it was somewhat surprising to see the level of interest in the topic of citizen science. Considering that the previous summit, in 2010, attracted about 60 or 70 participants, it was pleasing to see that the second summit attracted more than 170 people.

To read about what happened at the summit, there is plenty of material online. Nature News reported it as ‘Citizen science goes extreme‘. The New Scientist blog post discussed the ‘Intelligent Maps’ project of ExCiteS in ‘Interactive maps help pygmy tribes fight back‘, which was also covered by the BBC World Service Newshour programme (around 50 minutes in) and the Canadian CBC Science Shift programme. Le Monde also reported on ‘Un laboratoire de l’extrême‘.

Another report in New Scientist focused on the Public Laboratory for Open Technology and Science (PLOTS) development of a thermal flashlight, in ‘Thermal flashlight “paints” cold rooms with colour‘. The China Dialogue article ‘Scientists and Citizens‘ provided a broader review of the summit.

In terms of blogs, there are summaries on the GridCast blog (including some video interviews), and summaries by one of the speakers, Andrea Wiggins, of day 1, day 2 and day 3. Nicola Triscott from the Arts Catalyst provides another account of the summit and its links to her Arctic Perspective Initiative. Another participant, Célya Gruson-Daniel, discussed the summit in French at MyScienceWork, which also provided a collection of social media from the first day at http://storify.com/mysciencework/london-citizen-cyberscience-summit-16-18th-februar.

The talks are available to view again on the LiveStream account of ExCiteS at http://www.livestream.com/excites and there are also summaries on the ExCiteS blog http://uclexcites.wordpress.com/ and on the conference site http://cybersciencesummit.org/blog/. Flickr photos from MyScienceWork and UCL Engineering (where the image on the right is from) are also available.

For me, the highlights of the summit included the impromptu integration of different projects. Ellie D’Hondt and Matthias Stevens from BrusSense and NoiseTube used the opportunity of the PLOTS balloon mapping demonstration to extend it to noise mapping; Darlene Cavalier from SciStarter discussed with the Open Knowledge Foundation people how to use data about citizen science projects; and the people behind Xtribe at the University of Rome considered how their application could be used for Intelligent Maps – all synergies, new connections and new experimentation that the summit enabled. The enthusiasm of the people who came contributed significantly to its success (as did the hard work of the ExCiteS team).

Especially interesting, because of the wide-ranging overview of examples and case studies, is how the activity is conceptualised in different ways across the spectrum from DIY citizen science to structured observations that are managed by professional scientists. This is also apparent in the reports about the summit. I have commented in earlier blog posts about the need to understand citizen science as a different way of producing scientific knowledge. What might be helpful is a clear ‘code of ethics’ or ‘code of conduct’ for scientists who are involved in such projects. As Francois Taddei highlighted in his talk at the summit, there is a need to value the shared learning among all the participants, and not to keep the rigid hierarchies of university academics and the public in place. There is also a need to allow the creativity, exploration and development of ideas that we have seen during the summit to blossom – but this can only happen when all the sides involved are open to such a process.

As noted in the previous post, which focused on the linkage between GIS and Environmental Information Systems, the Eye on Earth Summit took place in Abu Dhabi on 12-15 December 2011 and focused on ‘the crucial importance of environmental and societal information and networking to decision-making’. Throughout the summit, two aspects of public access to environmental information were discussed extensively. On the one hand, Principle 10 of the Rio Declaration of 1992, which calls for public access to information, participation in decision making and access to justice, was frequently mentioned, including the need to continue and extend its implementation across the world. On the other, the growing importance of citizen science and crowdsourced environmental information was highlighted as a way to engage the wider public in environmental issues and contribute to the monitoring and understanding of the environment. They were not presented or discussed as mutually exclusive approaches to public involvement in environmental decision making, and yet they do not fit together without a snag – so it is worth minding the gap.

As I have noted in several talks over the past 3 years (e.g. at the Oxford Transport Research Unit, from which the slides above were taken), it is now possible to define three eras of public access to environmental information. During the first era, between the first UN environmental conference, held in Stockholm in 1972, at which the UN Environment Programme (UNEP) was established, and the Earth Summit in Rio in 1992, environmental information was collected by experts, to be analysed by experts, and to be accessed by experts. The public was expected to accept the authoritative conclusions of the experts. During the second period, from the 1990s until the mid-2000s and the emergence of Web 2.0, the focus turned to the provision of access to information that was collected and processed by experts. This is the top-down delivery of information that is at the centre of Principle 10:

‘Environmental issues are best handled with participation of all concerned citizens, at the relevant level. At the national level, each individual shall have appropriate access to information concerning the environment that is held by public authorities, including information on hazardous materials and activities in their communities, and the opportunity to participate in decision-making processes. States shall facilitate and encourage public awareness and participation by making information widely available. Effective access to judicial and administrative proceedings, including redress and remedy, shall be provided’

Notice the two emphasised sections, which focus on the passive provision of information to the public – there is no expectation that the public will be involved in creating it.

With the growth of the interactive Web (or Web 2.0), and the increased awareness of citizen or community science, new modes of data collection started to emerge, in which the information is produced by the public. Air pollution monitoring, noise samples and traffic surveys have all been carried out independently by communities using available cheap sensors, or in collaboration with scientists and experts. This is a third era of access to environmental information: produced by experts and the public, to be used by both.

Thus, we can identify three eras of access to environmental information: authoritative (1970s-1990s), top-down (1990s-2005) and collaborative (2005 onwards).

The collaborative era presents new challenges. As in previous periods, the information needs to be at the required standards, reliable and valid. This can be challenging for citizen science information. It also needs to be analysed, and many communities don’t have access to the required expertise (see my presentation from the Open Knowledge Foundation Conference in 2008, which deals with this issue). Merging information from citizen science studies with official information is challenging. These and other issues must be explored, and – as shown above – the language of Principle 10 might need revision to account for this new era of environmental information.

The Eye on Earth Summit took place in Abu Dhabi on 12-15 December 2011 and focused on ‘the crucial importance of environmental and societal information and networking to decision-making’. The summit was an opportunity to evaluate the development of Principle 10 of the Rio Declaration of 1992, as well as Chapter 40 of Agenda 21, both of which focus on environmental information and decision making. The summit’s many speakers gave inspirational talks – an impressive list including Jane Goodall highlighting the importance of information for education; Mathis Wackernagel updating on developments in the Ecological Footprint; Rob Swan on the importance of Antarctica; Sylvia Earle on how we should protect the oceans; Mark Plotkin, Rebecca Moore and Chief Almir Surui on indigenous mapping in the Amazon; and many others. The white papers that accompanied the summit can be found in the Working Groups section of the website, and are very helpful updates on the development of environmental information issues over the past 20 years and on emerging issues.

Interestingly, Working Group 2 on Content and User Needs mentions the conceptual framework of Environmental Information Systems (EIS) which I started developing in 1999. After discussing it at the GIS and Environmental Modelling conference in 2000, I published it as the paper ‘Public access to environmental information: past, present and future’ in the journal Computers, Environment and Urban Systems in 2003.

Discussing environmental information for a week made me revisit the framework and review the changes that have occurred over the past decade.

First, I’ll present the conceptual framework, which is based on six assertions. The framework was developed on the basis of a lengthy review, in early 1999, of the available information on environmental information systems (the review was published as CASA Working Paper 7). While synthesising all the information that I had found, some underlying assumptions started to emerge, and by articulating them, putting them together and showing how they were linked, I could make more sense of the information. This helped in answering questions such as ‘Why do environmental information systems receive so much attention from policy makers?’ and ‘Why do GIS appear in so many environmental information systems?’. I used the word ‘assertions’ because the underlying principles seem to be universally accepted and taken for granted. This is especially true for the three core assumptions (assertions 1-3 below).

The framework offers the following assertions:

  1. Sound knowledge, reliable information and accurate data are vital for good environmental decision making.
  2. Within the framework of sustainable development, all stakeholders should take part in the decision making processes. A direct result of this is a call for improved public participation in environmental decision making.
  3. Environmental information is exceptionally well suited to GIS (and vice versa). GIS development is closely related to developments in environmental research, and GIS output is considered to be highly advantageous in understanding and interpreting environmental data.
  4. (Notice that this emerges from combining 1 and 2) To achieve public participation in environmental decision making, the public must gain access to environmental information, data and knowledge.
  5. (Based on 1 and 3) GIS use and output is essential for good environmental decision making.
  6. (Based on all the others) Public Environmental Information Systems should be based on GIS technologies. Such systems are vital for public participation in environmental decision making.

Intriguingly, the Eye on Earth White Paper notes ‘This is a very “Geospatial” centric view; however it does summarise the broader principles of Environmental Information and its use’. Yet my intention was not to develop a ‘Geospatial’-centric view – I was synthesising what I had found, and the keywords that I used in the search did not include GIS. Therefore, the framework should be seen as an attempt to explain why GIS is so prominent.

With this framework in mind, I have noticed a change over the past decade. Throughout the summit, GIS and ‘Geospatial’ systems were central – they were mentioned and demonstrated many times. I was somewhat surprised by how prominent they were in Sha Zukang’s speech (he is the Under-Secretary-General of the United Nations and Secretary-General of the Rio+20 Summit). They are much more central than they were when I carried out the survey, and I left the summit feeling that, for many speakers, presenters and delegates, it is now expected that GIS will be at the centre of any EIS. This wide acceptance does mean that initiatives such as the ‘Eye on Earth Network’, which is based on geographic information sharing, are now possible. In the past, because of the very different data structures and conceptual frameworks, it was more difficult to suggest such integration. The use of GIS as a lingua franca for people who deal with environmental information is surely helpful in creating an integrative picture of the situation at a specific place, across multiple domains of knowledge.

However, I see cause for concern in equating GIS with EIS. As the GIScience literature has discussed over the years, GIS is good at providing snapshots, but less effective at modelling processes or interpolating in both time and space, and, most importantly, it has a specific way of creating and processing information. For example, while GIS can be coupled with system dynamics modelling (which has been used extensively in environmental studies – most notably in ‘Limits to Growth’), it is also possible to run such models and simulations in packages that don’t use geographic information – for example, in the STELLA package for system dynamics, or in bespoke models created with dedicated data models and algorithms. Importantly, the issue is not the technical one of coupling software packages such as STELLA or agent-based modelling tools with GIS. Some EIS and environmental challenges might benefit from different people thinking in different ways about various problems and solutions, and not always being forced to consider how a GIS plays a part in them.

At the State of the Map (EU) 2011 conference that was held in Vienna from 15-17 July, I gave a keynote talk on the relationships between the OpenStreetMap  (OSM) community and the GIScience research community. Of course, the relationships are especially important for those researchers who are working on volunteered Geographic Information (VGI), due to the major role of OSM in this area of research.

The talk included an overview of what researchers have discovered about OpenStreetMap over the five years since we started to pay attention to OSM. One striking result is that the issue of positional accuracy does not require much more work by researchers. Another important outcome of the research is the understanding that quality is influenced by the number of mappers, and that the data can be used with confidence for mainstream geographical applications when certain conditions are met. These results are both useful and of interest to a wide range of groups, but there remain key areas that require further research – for example, specific facets of quality, community characteristics and how OSM data is used.

Reflecting on the body of research, we can start to form a ‘code of engagement’ for both academics and mappers who are engaged in researching or using OpenStreetMap. One such guideline would be that it is both prudent and productive for any researcher to do some mapping herself, and to understand the process of creating OSM data, if the research is to be relevant and accurate. Other aspects of the proposed ‘code’ are covered in the presentation.

The talk is also available as a video from the TU Wien Matterhorn server.

 

 

GIS Research UK (GISRUK) is a long running conference series, and the 2011 instalment was hosted by the University of Portsmouth at the end of April.

During the conference, I was asked to give a keynote talk about Participatory GIS. I decided to cover the background of Participatory GIS in the mid-1990s, and the transition to more advanced Web Mapping applications from the mid-2000s. Of special importance are the systems that allow user-generated content, and the geographical types of systems that are now leading to the generation of Volunteer Geographic Information (VGI).

The next part of the talk focused on Citizen Science, culminating with the ideas that are the basis for Extreme Citizen Science.

Interestingly, as in previous presentations, one of the common questions about Citizen Science came up. Professional scientists seem to have a problem with the suggestion that citizens are as capable as scientists in data collection and analysis. While there is acceptance of the concept, the idea that participants can suggest problems, collect data rigorously and analyse it seems to be too radical – or worrying.

What is important to understand is that the ideas of Extreme Citizen Science are not about replacing the role of scientists, but are a call to rethink the roles of the participants and the scientists in cases where Citizen Science is used. It is a way to consider science as a collaborative process of learning and exploration of issues. My own experience is that participants have a lot of respect for the knowledge of the scientists, as long as the scientists have a lot of respect for the knowledge and ability of the participants. The participants would like to learn more about the topic that they are exploring and are keen to know: ‘what does the data that I collected mean?’ At the same time, some of the participants can become very serious about data collection, reading about the specific issues and using the resources that are available online today to learn more. At some point, they become knowledgeable participants, and it is worth seeing them as such.

The slides below were used for this talk, and include links to the relevant literature.

The slides below are from my presentation at State of the Map 2010 in Girona, Spain. While the conference is about OpenStreetMap, the presentation covers a range of spatially implicit and explicit crowdsourcing projects, as well as activities that we carried out in Mapping for Change. Together, these show that, unlike in other crowdsourcing activities, geography (and place) both limits and motivates contribution.

In many ways, OpenStreetMap is similar to other open source and open knowledge projects, such as Wikipedia. These similarities include the patterns of contribution and the importance of participation inequality, in which a small group of participants contributes very significantly while a very large group contributes only occasionally; the general demographics of participants, with strong representation from educated young males; and the temporal patterns of engagement, in which some participants go through a peak of activity and lose interest, while a small group joins and continues to invest its time and effort to help the progress of the project. These aspects have been identified by researchers who explored volunteering and leisure activities and crowdsourcing, as well as those who explored commons-based peer production networks (Benkler & Nissenbaum 2006).

However, OpenStreetMap is a project about geography, and deals with the shape of features and information about places on the face of the Earth. Thus, the emerging question is ‘what influence does geography have on OSM?’ Does geography make some fundamental changes to the basic principles of crowdsourcing, or should OSM be treated simply as ‘Wikipedia for maps’?

In the presentation, which is based on my work as well as the work of Vyron Antoniou and Nama Budhathoki, we argue that geography plays a ‘tyrannical’ role in OSM and other projects based on crowdsourced geographical information, shaping the nature of these projects beyond what is usually accepted.

The first influence of geography is on motivation. A survey of OSM participants shows that specific geographical knowledge, acquired at first hand, and the wish to use this knowledge and see it mapped well, are important factors in participation in the project. We found that participants are driven to mapping activities by their desire to represent the places they care about and to fix errors on the map. Both of these motives require local knowledge.

A second influence is on the accuracy and completeness of coverage, with places that are highly populated, and therefore have a larger pool of potential participants, showing better coverage than suburban areas of well-mapped cities. Furthermore, there is an ongoing discussion within the OSM community about the value of mapping without local knowledge and the impact of such action on the willingness of potential contributors to fix errors and contribute to the map.

A third, and somewhat surprising, influence is the impact of mapping places that the participants haven’t visited or can’t visit, such as Haiti after the earthquake or Baghdad in 2007. Despite the willingness of participants to join in and help in the data collection process, the details that can be captured without being on the ground are fairly limited, even when multiple sources such as Flickr images, Google Street View and paper maps are used. The details are limited to what was captured at a certain point in time and to the limitations of the sensing device, so the mapping is, by necessity, incomplete.

We will demonstrate these and other aspects of what we termed ‘the tyranny of place’ and its impact on what can be covered by OSM without much effort and which locations will not be covered without a concentrated effort that requires some planning.

On the 23rd March 2010, UCL hosted the second workshop on the usability of geographic information, organised by Jenny Harding (Ordnance Survey Research), Sarah Sharples (Nottingham) and myself. This workshop extended the range of topics that we covered in the first one, on which we reported during the AGI conference last year. This time, we had about 20 participants and it was an excellent day, covering a wide range of topics – from a presentation by Martin Maguire (Loughborough) on the visualisation and communication of climate change data, to a discussion by Johannes Schlüter (Münster) on the use of XO computers with schoolchildren, to a talk by Richard Treves (Southampton) on the impact of Google Earth tours on learning. Especially interesting was the combination of sound and other senses in the work of Nick Bearman (UEA) and Paul Kelly (Queen’s University Belfast).

Jenny’s introduction highlighted the different aspects of GI usability, from those that are specific to data to issues with application interfaces. The integration of data with the software that creates the user experience in GIS was discussed throughout the day, and it is one of the reasons that the usability of the information itself is important in this field. The Ordnance Survey is currently running a project to explore how it can integrate usability into the design of its products – Michael Brown’s presentation discussed the development of a survey as part of this project. The integration of data and application was also central to Philip Robinson’s (GE Energy) presentation on the use of GI by utility field workers.

My presentation focused on some preliminary thoughts based on an analysis of the OpenStreetMap and Google Map communities’ responses to the earthquake in Haiti at the beginning of 2010. The presentation discussed a set of issues that, if explored, will provide insights that are relevant beyond the specific case and that can illuminate issues relevant to the daily production and use of geographic information – for example, the very basic metadata that was provided on portals such as GeoCommons, and what users can do to evaluate the fitness for use of a specific dataset (see also Barbara Poore’s (USGS) discussion of the metadata crisis).

Interestingly, the day after giving this presentation I had a chance to discuss GI usability with Map Action volunteers who gave a presentation in GEO-10 . Their presentation filled in some gaps, but also reinforced the value of researching GI usability for emergency situations.

For a detailed description of the workshop and abstracts – see this site. All the presentations from the conference are available on SlideShare and my presentation is below.
