A citizens observatory is a concept that evolved in EU policy circles, describing the combination of participatory community monitoring, technology and governance structures that are needed to monitor/observe/manage an environmental issue. About two years ago, the EU FP7 funded 5 citizens observatory projects covering areas from water management to biodiversity monitoring. A meeting in Brussels was an opportunity to review their progress and consider the wider implications of citizen science as it stands now. The meeting was organised and coordinated by the group in the Directorate General Research and Innovation that is responsible for Earth Observations (hence the observatory concept!). The following are my notes from the meeting.

They are very long, and I'm sure that they are incoherent in places!

The meeting was opened by Kurt Vandenberghe (Director Environment, DG R&I). He suggested that citizens observatories contribute to transparency in governance – for example, ensuring that monitoring is done in the correct place and not, as happens in some member states, where monitoring stations are placed where they will yield 'useful' or 'acceptable' but unrepresentative results: "Transparency is a driver in intrinsic ethical behaviour". There is also value in having citizens' input complement satellite data. It can help in engaging the public in the co-design of environmental applications and in addressing environmental challenges. Examples of such participation are provided by Marine LitterWatch and NoiseWatch from the EEA, and the development of apps and technology can lead to new business opportunities. The concept of earth observations is about creating a movement of earth observers who collect information, but also about allowing citizens to participate in environmental decision-making. This can lead to societal innovation towards a sustainable and smart society. From the point of view of the Commission's DG R&I, they are planning to invest political and financial capital in developing this vision of observatories. The new call for citizens observatories demonstrators focuses on citizens' participation in monitoring land use and land cover in rural and remote areas. Data collected through observatories should complement those coming from other sources. The Commission aims to continue the investment in future years – citizen science is seen as both a business opportunity and a societal value. A successful set of projects that ends by showing that citizens observatories are possible is not enough – they want to see the creation of a mass movement, to maximise the capital invested in citizens observatories, and to optimise the framework conditions that allow citizens observatories to be taken up by member states, extended, implemented and allowed to flourish. Some of the open questions include: how to provide access to the data for those that collected it? How can we ensure that we reach out across society to new groups that are not currently involved in monitoring activities? How can we deal with security and privacy issues regarding the information in citizens observatories? The day is an opportunity for co-creation and for considering new ways to address the issue of citizens observatories from a cross-disciplinary perspective – "Citizen science as a new way to manage the global commons".

Next, a quick set of presentations of the FP7 projects:

WeSenseIt (Fabio Ciravegna) is a project that focuses on citizens' involvement in water resources – citizens have a new role in the information chain of water-related decisions, and participants are expected to become part of the decision-making. In this project, a citizens observatory is seen as a science method, an environment in which to implement collaboration, and as infrastructure. They are working in Doncaster (UK), Vicenza (Italy) and Delft (The Netherlands). In WeSenseIt, they recognise that different cultures and different ways of doing things are part of such systems. A major question is: who are the citizens? In the UK, ordinary people; in Italy, civil protection officials and volunteers; while in the Netherlands water and flood management is a highly structured and organised activity. They have used a participatory design approach, working on the issue of governance and understanding how the citizens observatory should be embedded in the existing culture and processes. They are creating a portal for citizens and another for decision makers. The role of the citizens' portal is to assist with data acquisition, with areas and equipment that citizens can deploy – weather, soil moisture, etc. On the decision makers' portal, there is the possibility of providing surveillance information (with low-cost cameras etc.), opportunistic sensing and participatory sensing – e.g. a smart umbrella – while combining all this information to be used together. WeSenseIt created a hybrid network that is aimed at providing information to decision makers and citizens. After two years, they can demonstrate that their approach can work: in Vicenza they used the framework to develop action to deal with flood preparedness. They also started to work with large events to assist in the organisation and support the control room, so in Torino they are also starting to get involved in helping to run an event with up to 2 million people.

Omniscientis (Philippe Ledent) – the Omniscientis project (which ended in September) focused on odour monitoring using different sensors – human and electronic. Odour can be a severe nuisance; in Wallonia and France there are concerns about motorways, factories, livestock and waste facilities. Odour is difficult to measure and quantify and complex to identify, mainly because it is about human perception and not only the measurement of chemicals in the air. In too many regulations and discussions about odour, citizens were considered passive, or as victims. The Omniscientis project provided an opportunity for participants to be active in the monitoring. The project took a multi-stakeholder approach (farmers, factory operators, local residents etc.). They created an odour management information system, OdoMIS, within the concept of a living lab, combining information from sensors, industry, NGOs, experts, and citizens. They created an app, OdoMap, that provides opportunities for participants to contribute observations, but also to see what other people measured and to access further information. They created a chemical sensor array (e-nose), and citizens helped in assessing the concentrations that they sensed. This was linked to a computationally intensive dispersion model. They ran a pilot around a pig farm in Austria to validate the model, and another near a pulp and paper mill. The evolution of citizens' participation was important for the project, and people collected measurements for almost a year, with over 5,000 measurements. As a result, they would like to link odour sources, citizens and authorities to work on the area together. They used actor-network theory to enrol participants in the process, with a strong user-centred design (UCD) element.

COBWEB (Chris Higgins) has been working on a generic crowdsourcing infrastructure, with data that can support policy formation, while addressing data quality issues and using open standards. They aim to encapsulate metadata and OGC standards to ensure that the information is interoperable. They would like to create a toolkit that can be used in different contexts and scenarios. They focus on the biosphere reserve network across Europe. They carried out a lot of co-design activities from the start, with stakeholder engagement: they are doing co-design with 7 organisations in Wales – the Woodland Trust, RSPB, Snowdonia National Park, and others. This led to a different focus and interest for each organisation – from dolphins to birds. They hope to see greater buy-in because of that.

Citi-Sense (Alena Bartonova) focuses on air quality. The objective of Citi-Sense is to explore whether people can participate in environmental governance. They are running empowerment initiatives in three settings – urban quality, schools, and public spaces. In the urban context they measure pollution, meteorological observations, noise, health biomarkers and UV exposure – they looked at technologies from mobile sensors to static sensors that can be compared to compliance monitoring. In schools, they engage the school children, looking at sensors that are installed at the school and also at indoor air quality data. There are co-design activities with students. In public spaces they focused on acoustic sensing, and discovered that phones are not suitable, so they went to external sensors (we discovered the same problem with phones in EveryAware). They are working in 9 cities, focusing on the sensors, data and services platform, but also exploring how to engage people in a meaningful way. The first two years focused on technical aspects. They are now moving to look much more at the engagement part, but they need to consider how to roll it out. They are developing apps and also considering how to improve air quality apps. They would like a sustainable infrastructure.

Citclops (Luigi Ceccaroni) originally aimed 'to create community participatory governance methods aided by social media streams', but this was an unclear goal that the project partners found confusing! So they are dealing with the marine environment: asking people to take pictures of the marine environment and, through the app, facilitating visual monitoring of it (the app is available for anyone to download) – they are helping people to assess visually the quality of water bodies. There is an official way of defining the colour of sea water, which they use in the project, also comparing ground observations with satellite information. The project included the design of DIY devices to allow the measurement of water opacity. Finally, they are exploring water fluorescence: they designed and 3D-printed a device that can be used with smartphones to measure fluorescence, as this helps in understanding the concentration of chlorophyll and can be associated with remote sensing information. Citizen science is a way to complement official data – such as the data from the Water Framework Directive.

After a break and demonstrations from some of the projects, the first round-table of the day started, which included executives from environmental protection agencies across Europe.

[From @ScotlandEuropa: strategic views on Citizens Observatories]

[I've lost my notes, so below is a summary of the session edited from Valentine Seymour's notes]

The chair of the session (Gilles Ollier) highlighted the following issues as significant for considering the role of citizen science: are we doing something useful/usable? Valuable? And sustainable?
James Curran (Scottish Environment Protection Agency) noted that SEPA has taken citizen science to the core of its business. He highlighted issues of growth, jobs and investment, and the need for sustainable growth, and noted that citizen science contributes to these goals very well – as the Chinese proverb says, "Involve me and I will understand". SEPA has been promoting mobile applications to detect invasive species and environmental damage. The Riverfly project is an example of engaging people in monitoring water quality through invertebrate sampling, and of how important it was for the Water Framework Directive (WFD) to include public participation. There is a need to provide accessible information, to work with others collaboratively, to measure behavioural change, and for public engagement.

Laura Burke's (Ireland EPA) main statement was that citizen science does not replace governmental and official scientific monitoring, but should be seen as complementary. There are three main issues or areas to consider: terminology (the spectrum of the term citizen science), the need to think about the long-term sustainable future of citizen science projects, and acknowledging the synergies between projects.

Hugo de Groof (DG Env) noted the importance of access to information and the Open Access Directive that has been passed. In terms of governance, we need to follow 5 main principles: 1) accountability, 2) transparency, 3) participation, 4) effectiveness and efficiency, and finally 5) respect. Raymond Feron, from the Dutch Ministry of Infrastructure and the Environment, emphasised that there is a social change emerging. [End of Valentine's notes]

The issue of operationalisation received attention – there are different projects, but how far are we from large-scale deployment? Colin Chapman (Welsh Government): maturity varies from case to case across observatory projects and across issues. Technologies are still maturing; there is a need to respond to issues and to mobilise resources to address the issues that citizens bring up. A systems approach to ecosystem management is also a factor in considering how to integrate observatories. There is too much reliance on macro modelling. A question for policy bodies is "can we incentivise citizens to collect data across policy areas?" – for example, with invasive species, we can use the information in different areas, from flood modelling to biodiversity management. David Stanners (EEA) noted that citizens observatories are vulnerable at this point in time; there is a lack of stability, and there are examples of projects that didn't last. There are some inter-linkages, but not an ecology of observatories, with interconnectedness and the ability to survive. They need better linkage with policy, but this doesn't exist across the board and there are no direct policy elements. The integration of citizens observatories is a fantastic opportunity at EU level – as environmental issues are supposed to be very visible. Raymond Feron noted that government might have issues in keeping pace with citizens' actions. Government organisations need to learn how to integrate citizens observatories, to learn to reuse parts, and to integrate research programmes with implementation strategy. James Curran also stated that working with anglers and other stakeholders can increase trust. In terms of quality and relevance, citizen science data is no different from other data. Laura Burke noted that no government has all the answers, and trust issues should be presented as such. We need to move away from the concept of one organisation with a solution to any given problem. David Stanners raised the issue of truth seeking. Within the Copernicus programme, there are opportunities to support services with citizen science.

Following the points of view from the panellists, questions about trust, and about finding ways to include people without access to technology, were raised by the audience. The panellists agreed that from the point of view of policy makers the concept of citizens observatories is obvious, but there is a need to make citizens observatories and citizen science activities sustainable and well-integrated into government activities. Interestingly, James Curran noted that the issue of data quality from citizen science is not a major obstacle, essentially because environmental authorities are used to making risk-based decisions. There was willingness to work with intermediaries to reach out to under-represented groups. David Stanners called for cross-cutting meta-studies to understand the citizens observatories landscape.

The next series of presentations covered citizen science activities that are not part of the citizens observatories projects.

NoiseWatch/Marine LitterWatch (David Stanners, EEA) – NoiseWatch was developed by the EEA and provides a modelling element, a measurement element, and a citizen rating element. He argued that dB is not a good measure, as noise is a perception issue and not just about measurement. NoiseWatch received an award at the Geospatial World Forum. It became global although it wasn't promoted as such, with uptake in India and China, and UNEP is considering taking it over and maintaining it. The sustainability of NoiseWatch is a challenge for the EEA, and it might be more suitable for a global platform such as UNEP Live. NoiseWatch is seen as complementing existing monitoring stations because there are so few of them. When analysing the sources of the measurements, NoiseWatch gets a lot of observations from roads, with 21% from industrial noise – in total almost 195,000 measurements. Another application is Marine LitterWatch, which provides a way for people to share information about the state of beaches. The application is more complex as it is embedded in a data collection protocol, and David argued that it is 'closer to citizen science'. The EEA got almost 7,500 measurements from 144 events that used it, and they are developing it further.

LakeWiki (Juhani Kettunen, who was not present) is an initiative that focuses on monitoring Finnish lakes. It was launched by SYKE and is aimed at allowing local communities to take care of their lakes, record information and build long-term observations. It is a simple platform, recording information such as ice break-up, but it is aimed at letting locals write about the lake, maintain observation sites, upload pictures, announce local events and write in discussion forums – there are 1,400 sites. [This project is also noted in the COST Energic network]

Raymond Feron presented a programme in the Netherlands called the Digital Delta initiative: a partnership between research, the public and government, involving IBM, TU Delft and government bodies. It is trying to make water data available to everyone. The focus of the system is to allow re-use of information; the government is trying to do things more efficiently, shorten time to market and improve the quality of decisions, while also improving citizen participation. There are ideas of increasing export to new places. The public is involved in dyke monitoring because they can do things locally easily.

I gave a talk about Mapping for Change's air quality studies, which I hope to discuss in a different post.

Claudia Goebel followed with a report on ECSA (see my report on the ECSA meeting).

Antonio Parodi, from the CIMA Foundation, discussed the DRIHM project, which is mostly focused on meteorological information. Meteorology has a very long history of observations, going back to 300 BC, with plenty of reliance on observed patterns of events. Informal folklore was used in early meteorological citizen science, and from the Middle Ages there are examples of gathering information to deal with flash floods. Within the project they created a volunteer thinking platform to support the classification of thunderstorms, with Cb-TRAM providing monitoring observations of thunderstorms. Interestingly, a follow-on question explored the link between extreme events (last year's floods) and the role of the research project in providing relevant information.

The Socientize project was presented by Francisco Sanz, covering areas of digital science.

There was also a presentation from the SciCafe 2.0 project, including a mention of the European Observatory for Crowd-Sourcing. Another tool from the project is the Citizens' Say tool.

The final panel explored the challenges of citizen science (I was part of this panel). The people on the panel were Jaume Piera (CITCLOPS); Arne Berre (CITI-SENSE); Bart De Lathouwer (COBWEB); Philippe Valoggia (OMNISCIENTIS); Uta Wehn (WeSenseIt); Susanne Lützenkirchen (City of Oslo); and myself.

Susanne noted that the City of Oslo has developed some apps, such as one on safe routes to schools – people can record their experience of their routes to school – and the city is interested in more citizen science and citizens observatories.

On strategies for sustaining engagement over time – Uta noted that the co-design process is very important, as is governance analysis to understand the processes and the local needs (in WeSenseIt). The observatories need to consider who the partners are – it is critical that authorities see the value of observatories and provide feedback. Jaume suggested identifying points in the project that give participants the feeling that they are part of the process, allowing people to participate in different ways, making people aware that they are part of the activities and that they are valued, and showing the need for long-term observations. Susanne pointed out that in Oslo there isn't any simple answer to the question of who the citizens are – in some cases it is a specific group, and in more complex designs you sometimes need to think about who chooses the participants and how representative they are.

In WeSenseIt, they have privacy and consent settings, adhering to the rules of social media; there is an issue with data that comes from other sources and how it is going to be reused. In general, Uta noted that WeSenseIt would like to try to make the data as open as possible.

Data preservation is an issue – how was the data handled, if we assume that there are probably 500 projects or more in Europe, which is the estimate of Max Craglia (JRC, who chaired the session)? For citizens observatories, we need to consider the individual data, and there is sometimes concern about releasing unvalidated data. Bart pointed out that COBWEB is taking care of privacy and security of data: they are storing information about observers and there are privacy rules. Privacy legislation is local, and needs to follow the information. Citizens see the benefit in what they collected, which matters for the sustainability of commitment. It is important to work with existing social structures in a way that provides opportunities for empowerment. Views about ownership of data were also raised.

In terms of integration, synergy and interoperability of the citizen-centred projects – interoperability is a critical topic; the data need to be standardised, and the metadata (the most boring topic in the world) needs to be dealt with and collected at the right level. There is a good foundation in GEOSS and OGC standards, so we can consider how to do it.

What is the role of scientists? There are partners who focus on dealing with the data and augmenting it with additional information, and there is a role in managing the data. The link to policy also requires an understanding of uncertainty. The science-policy discourse is about what is considered as evidence. There is an embracing of citizen science in environment agencies (which was demonstrated in the first panel), but there is a need for an honest discussion about what happens to the data, and about the degree to which citizens can participate in decision-making. Relevance and legitimacy are critical to this understanding.

There was also a call for accepting the uncertainty in the data – which is an integral part of citizen science data. David Stanners emphasised the need for legitimacy of the information that is coming from citizens observatories, as part of the trust that people put in contributing to them.

The final comments came from Andrea Tilche (Head of Unit, Climate Action and Earth Observation, DG R&I). The Commission recognises that citizens observatories are not a replacement for institutional monitoring schemes (although he mentioned that maybe they will be in the future). The potential of engaging users is tremendous, and the conference demonstrated the energy and scale of activities that can be included in this area. The ownership of information needs to be taken into account. We need to link and close the gaps between scientists and policy makers. We need to create a market around the observatories – it can't be done only through projects that disappear; there is a need for a market of citizens observatories and for business models. In the new call, they want to see the projects generate credible business processes. Citizens observatories will need to demonstrate that they can raise funding from other sources.

The Association of American Geographers is coordinating an effort to create an International Encyclopedia of Geography. Plans started in 2010, with an aim to see the 15-volume project published in 2015 or 2016. Interestingly, this shows that publishers and scholars still see the value in creating subject-specific encyclopedias. On the other hand, the weird decision by Wikipedians that Geographic Information Science doesn't exist outside GIS shows that geographers need a place to define their practice by themselves. You can find more information about the AAG International Encyclopedia project in an interview with Doug Richardson from 2012.

As part of this effort, I was asked to write an entry on 'Volunteered Geographic Information, Quality Assurance' as a short piece of about 3,000 words. To do this, I looked around for mechanisms that are used in VGI and in citizen science. These are covered in OpenStreetMap studies and similar work in GIScience, and in the area of citizen science there are reviews such as the one by Andrea Wiggins and colleagues on mechanisms to ensure data quality in citizen science projects, which clearly demonstrated that projects use multiple methods to ensure data quality.

Below you’ll find an abridged version of the entry (but still long). The citation for this entry will be:

Haklay, M., Forthcoming. Volunteered geographic information, quality assurance. in D. Richardson, N. Castree, M. Goodchild, W. Liu, A. Kobayashi, & R. Marston (Eds.) The International Encyclopedia of Geography: People, the Earth, Environment, and Technology. Hoboken, NJ: Wiley/AAG

In the entry, I have identified 6 types of mechanisms that are used to ensure quality assurance when the data has a geographical component, in either VGI or citizen science. If I have missed a type of quality assurance mechanism, please let me know!

Here is the entry:

Volunteered geographic information, quality assurance

Volunteered Geographic Information (VGI) originates outside the realm of professional data collection by scientists, surveyors and geographers. Quality assurance of such information is important for people who want to use it, as they need to identify if it is fit-for-purpose. Goodchild and Li (2012) identified three approaches for VGI quality assurance: a 'crowdsourcing' approach that relies on the number of people that edited the information, a 'social' approach that is based on gatekeepers and moderators, and a 'geographic' approach which uses broader geographic knowledge to verify that the information fits into existing understanding of the natural world. In addition to the approaches that Goodchild and Li identified, there are also a 'domain' approach that relates to the understanding of the knowledge domain of the information, an 'instrumental observation' approach that relies on technology, and a 'process oriented' approach that brings VGI closer to industrialised procedures. First we need to understand the nature of VGI and the source of concern with quality assurance.

While the term volunteered geographic information (VGI) is relatively new (Goodchild 2007), the activities that this term describes are not. Another relatively recent term, citizen science (Bonney 1996), which describes the participation of volunteers in collecting, analysing and sharing scientific information, provides the historical context. While the term is relatively new, the collection of accurate information by non-professional participants turns out to have been an integral part of scientific activity since the 17th century and likely before (Bonney et al. 2013). Therefore, when approaching the question of quality assurance of VGI, it is critical to see it within the wider context of scientific data collection, and not to fall into the trap of novelty and consider it to be without precedent.

Yet this integration needs to take into account the insights that have emerged within geographic information science (GIScience) research over the past decades. Within GIScience, it is the body of research on spatial data quality that provides the framing for VGI quality assurance. Van Oort's (2006) comprehensive synthesis of various quality standards identifies the following elements of spatial data quality discussions:

  • Lineage – a description of the history of the dataset.
  • Positional accuracy – how well the coordinate value of an object in the database relates to the reality on the ground.
  • Attribute accuracy – how correct the additional attributes are, as objects in a geographical database are represented not only by their geometrical shape but also by additional attributes.
  • Logical consistency – the internal consistency of the dataset.
  • Completeness – how many objects that are expected to be found in the database are missing, as well as an assessment of excess data that should not be included.
  • Usage, purpose and constraints – a fitness-for-purpose declaration that should help potential users in deciding how the data should be used.
  • Temporal quality – a measure of the validity of changes in the database in relation to real-world changes, and also the rate of updates.

While some of these quality elements might seem independent of a specific application, in reality they can only be evaluated within a specific context of use. For example, when carrying out an analysis of street-lighting in a specific part of town, the question of completeness becomes specifically about the recording of all street-light objects within the bounds of the area of interest; whether the dataset is complete for another part of the settlement is irrelevant for the task at hand. The scrutiny of information quality within a specific application, to ensure that it is good enough for the needs, is termed 'fitness for purpose'. As we shall see, fitness-for-purpose is a central issue with respect to VGI.

To understand the reason that geographers are concerned with quality assurance of VGI, we need to recall the historical development of geographic information, and especially the historical context of geographic information systems (GIS) and GIScience development since the 1960s. For most of the 20th century, geographic information production became professionalised and institutionalised. The creation, organisation and distribution of geographic information was done by official bodies, such as national mapping agencies or national geological bodies, who were funded by the state. As a result, the production of geographic information became an industrial scientific process in which the aim is to produce a standardised product – commonly a map. Due to financial, skills and process limitations, products were engineered carefully so they could be used for multiple purposes. Thus, a topographic map can be used for navigation but also for urban planning and for many other purposes. Because the products were standardised, detailed specifications could be drawn up, against which the quality elements could be tested and quality assurance procedures could be developed. This was the backdrop to the development of GIS, and to the conceptualisation of spatial data quality.

The practices of centralised, scientific and industrialised geographic information production lend themselves to quality assurance procedures that are deployed through organisational or professional structures, and this explains the perceived challenges with VGI. Centralised practices also supported employing people with a focus on quality assurance, for example going to the field with a map and testing that it complies with the specification that was used to create it. In contrast, most of the collection of VGI is done outside organisational frameworks. The people who contribute the data are not employees, and seemingly cannot be put into training programmes, asked to follow quality assurance procedures, or expected to use standardised equipment that can be calibrated. The lack of coordination and top-down forms of production raises questions about ensuring the quality of the information that emerges from VGI.

Considering quality assurance within VGI requires understanding some underlying principles that are common to VGI practices and differentiate it from organised and industrialised geographic information creation. For example, some VGI is collected under conditions of scarcity or abundance in terms of data sources, number of observations or the amount of data that is being used. As noted, the conceptualisation of geographic data collection before the emergence of VGI was one of scarcity, where data is expensive and complex to collect. In contrast, in many applications of VGI the situation is one of abundance. For example, in applications that are based on micro-volunteering, where the participant invests very little time in a fairly simple task, it is possible to give the same mapping task to several participants and statistically compare their independent outcomes as a way to ensure the quality of the data. Another way in which abundance operates as a framework is in the development of software for data collection. While in previous eras there was typically one application used for data capture and editing, in VGI there is a need to consider multiple applications, as different designs and workflows can appeal to, and be suitable for, different groups of participants.

Another underlying principle of VGI is that, since the people who collect the information are not remunerated or in contractual relationships with the organisation that coordinates data collection, a more complex relationship between the two sides is required, with consideration of incentives, motivations to contribute and the tools that will be used for data collection. Overall, VGI systems need to be understood as socio-technical systems in which the social aspect is as important as the technical part.

In addition, VGI is inherently heterogeneous. In large-scale data collection activities, such as the census of population, there is a clear attempt to capture all the information about the population over a relatively short time and in every part of the country. In contrast, because of its distributed nature, VGI will vary across space and time, with some areas and times receiving more attention than others. An interesting example has been shown at temporal scales, where some citizen science activities exhibit a 'weekend bias', as these are the days when volunteers are free to collect more information.

Because of the difference in the organisational settings of VGI, a different approach to quality assurance is required, although, as noted, such approaches have in general been used in many citizen science projects. Over the years, several approaches have emerged, and these include 'crowdsourcing', 'social', 'geographic', 'domain', 'instrumental observation' and 'process oriented'. We now turn to describe each of these approaches.

The 'crowdsourcing' approach builds on the principle of abundance. Since there is a large number of contributors, quality assurance can emerge from repeated verification by multiple participants. Even in projects where the participants actively collect data in an uncoordinated way, such as the OpenStreetMap project, it has been shown that with enough participants actively collecting data in a given area, the quality of the data can be as good as authoritative sources. The limitation of this approach is when local knowledge or verification on the ground ('ground truth') is required. In such situations, the 'crowdsourcing' approach will work well in central, highly populated or popular sites where there are many visitors, and therefore the probability that several of them will be involved in data collection rises. Even so, it is possible to encourage participants to record less popular places through a range of suitable incentives.
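To make the mechanics concrete, here is a minimal sketch in Python of how repeated, independent contributions of the same micro-task can be reduced to a quality-assured value. It is an illustration only – the function, the minimum number of contributors and the agreement threshold are my own assumptions, not the practice of any specific project.

```python
from collections import Counter

def consensus_label(labels, min_contributors=3, min_agreement=0.6):
    """Reduce repeated, independent contributions of one micro-task
    (e.g. classifying a single feature) to a quality-assured value.
    Returns the majority label, or None when there is too little
    redundancy or agreement to trust the result."""
    if len(labels) < min_contributors:
        return None  # not enough contributors to verify
    label, count = Counter(labels).most_common(1)[0]
    return label if count / len(labels) >= min_agreement else None

# Five volunteers classified the same feature:
print(consensus_label(["pond", "pond", "lake", "pond", "pond"]))  # -> pond
```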

The 'social' approach also builds on the principle of abundance in terms of the number of participants, but with a more detailed understanding of their knowledge, skills and experience. In this approach, some participants are asked to monitor and verify the information that was collected by less experienced participants. The social method is well established in citizen science programmes such as bird watching, where some participants who are more experienced in identifying bird species help to verify observations by other participants. To deploy the social approach, there is a need for a structured organisation in which some members are recognised as more experienced, and are given the appropriate tools to check and approve information.

The 'geographic' approach uses known geographical knowledge to evaluate the validity of the information that is received from volunteers. For example, by using existing knowledge about the distribution of streams in a river network, it is possible to assess whether the mapping of a new river that was contributed by volunteers is comprehensive or not. A variation of this approach is the use of recorded information, even if it is out of date, to verify the information by comparing how much of what is already known also appears in a VGI source. Geographic knowledge can potentially be encoded in software algorithms.
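As a minimal sketch of how such a geographic check could be encoded, assuming the known river network is simplified to a list of vertex coordinates and using an illustrative distance threshold of my own choosing:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def plausible_stream_point(obs, known_river_vertices, max_km=2.0):
    """Accept a contributed 'stream' observation (lat, lon) only if it
    falls within max_km of some vertex of the known river network."""
    return any(haversine_km(obs[0], obs[1], lat, lon) <= max_km
               for lat, lon in known_river_vertices)
```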

The 'domain' approach is an extension of the geographic one, and in addition to geographical knowledge it uses specific knowledge that is relevant to the domain in which the information is collected. For example, in many citizen science projects that involve collecting biological observations, there will be some body of information about species distribution, both spatially and temporally. Therefore, a new observation can be tested against this knowledge, again algorithmically, to help ensure that new observations are accurate.
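A sketch of the same idea for species records follows; the species table, bounding box and season are entirely hypothetical values used for illustration:

```python
# Hypothetical domain knowledge: a species' plausible bounding box and season.
SPECIES_KNOWLEDGE = {
    "swallow": {
        "bbox": (35.0, -10.0, 60.0, 25.0),  # (min_lat, min_lon, max_lat, max_lon)
        "months": range(3, 11),             # recorded March to October
    },
}

def domain_check(species, lat, lon, month):
    """Accept an observation only if it fits the species' known spatial
    range and seasonal pattern; otherwise flag it for expert review."""
    info = SPECIES_KNOWLEDGE.get(species)
    if info is None:
        return "unknown species - manual review"
    min_lat, min_lon, max_lat, max_lon = info["bbox"]
    in_range = min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
    in_season = month in info["months"]
    return "accept" if in_range and in_season else "flag for review"

print(domain_check("swallow", 51.5, -0.1, 12))  # winter record -> flag for review
```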

The 'instrumental observation' approach removes some of the subjective aspects of data collection by a human who might make an error, and relies instead on the equipment that the person is using. Because of the increase in the availability of accurate-enough equipment, such as the various sensors that are integrated in smartphones, many people keep in their pockets mobile computers with the ability to record location, direction, imagery and sound. For example, image files that are captured on smartphones include the GPS coordinates and a time-stamp in the file, which for the vast majority of people are beyond their ability to manipulate. Thus, the automatic instrumental recording of information provides evidence for the quality and accuracy of the information.
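For instance, the GPS position and time-stamp can be read straight out of a photo's EXIF header and attached to the VGI record as instrument-recorded evidence. A sketch using the Pillow imaging library, assuming a reasonably recent Pillow version and a JPEG that actually carries GPS tags:

```python
# Requires the Pillow library (pip install Pillow); assumes a recent
# version, where EXIF rational values can be converted with float().
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def photo_evidence(path):
    """Extract the instrument-recorded timestamp and GPS position from a
    smartphone photo, as supporting evidence for a VGI observation."""
    exif = Image.open(path)._getexif() or {}
    named = {TAGS.get(tag, tag): value for tag, value in exif.items()}
    gps = {GPSTAGS.get(tag, tag): value
           for tag, value in named.get("GPSInfo", {}).items()}

    def to_degrees(dms):  # (degrees, minutes, seconds) -> decimal degrees
        d, m, s = (float(x) for x in dms)
        return d + m / 60 + s / 3600

    lat = lon = None
    if "GPSLatitude" in gps and "GPSLongitude" in gps:
        lat = to_degrees(gps["GPSLatitude"])
        lon = to_degrees(gps["GPSLongitude"])
        if gps.get("GPSLatitudeRef") == "S":
            lat = -lat
        if gps.get("GPSLongitudeRef") == "W":
            lon = -lon
    return {"taken": named.get("DateTimeOriginal"), "lat": lat, "lon": lon}
```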

Finally, the 'process oriented' approach brings VGI closer to traditional industrial processes. Under this approach, the participants go through some training before collecting information, and the process of data collection or analysis is highly structured to ensure that the resulting information is of suitable quality. This can include the provision of standardised equipment, online training or instruction sheets, and a structured data recording process. For example, volunteers who participate in the US Community Collaborative Rain, Hail & Snow network (CoCoRaHS) receive a standardised rain gauge, instructions on how to install it, and online resources to learn about data collection and reporting.

Importantly, these approaches are not used in isolation, and in any given project it is likely that a combination of them will be in operation. Thus, an element of training and guidance to users can appear in a downloadable application that is distributed widely, and therefore the method used in such a project will be a combination of the process oriented and crowdsourcing approaches. Another example is the OpenStreetMap project, which in general provides only limited guidance to volunteers in terms of the information that they collect or the location in which they collect it. Yet a subset of the information that is collected in the OpenStreetMap database, about wheelchair access, is collected through the highly structured process of the WheelMap application, in which the participant is required to select one of four possible settings that indicate accessibility. Another subset of the information, which is recorded for humanitarian efforts, follows the social model, in which tasks are divided between volunteers using the Humanitarian OpenStreetMap Team (H.O.T.) task manager, and the data that is collected is verified by more experienced participants.

The final, and critical, point for quality assurance of VGI that was noted above is fitness-for-purpose. In some VGI activities the information has a direct and clear application, in which case it is possible to define specifications for the quality assurance elements that were listed above. However, one of the core aspects that was noted above is the heterogeneity of the information that is collected by volunteers. Therefore, before using VGI for a specific application there is a need to check its fitness for this specific use. While this is true for all geographic information, and even so-called 'authoritative' data sources can suffer from hidden biases (e.g. lack of updates of information in rural areas), the situation with VGI is that variability can change dramatically over short distances – so while the centre of a city will be mapped by many people, a deprived suburb near the centre will not be mapped and updated. There are also limitations caused by the instruments in use – for example, the GPS positional accuracy of the smartphones in use. Such aspects should also be taken into account, ensuring that the quality assurance is itself fit-for-purpose.

References and Further Readings

Bonney, Rick. 1996. Citizen Science – a lab tradition. Living Bird, Autumn 1996.
Bonney, Rick, Shirk, Jennifer and Phillips, Tina B. 2013. Citizen Science. In Encyclopaedia of Science Education. Berlin: Springer-Verlag.
Goodchild, Michael F. 2007. Citizens as sensors: the world of volunteered geography. GeoJournal, 69(4), 211–221.
Goodchild, Michael F. and Li, Linna. 2012. Assuring the quality of volunteered geographic information. Spatial Statistics, 1, 110–120.
Haklay, Mordechai. 2010. How good is volunteered geographical information? A comparative study of OpenStreetMap and Ordnance Survey datasets. Environment and Planning B: Planning and Design, 37(4), 682–703.
Sui, Daniel, Elwood, Sarah and Goodchild, Michael F. (eds). 2013. Crowdsourcing Geographic Knowledge. Berlin: Springer-Verlag.
Van Oort, Pepijn A.J. 2006. Spatial Data Quality: From Description to Application. PhD Thesis, Wageningen: Wageningen Universiteit, p. 125.

Once upon a time, Streetmap.co.uk was one of the most popular web mapping sites in the UK, competing successfully with its biggest rival at the time, Multimap. Moreover, it was ranked second in The Daily Telegraph's list of leading mapping sites in October 2000 and described as: 'Must be one of the most useful services on the web – and it's completely free. Zoom in on any UK area by entering a place name, postcode, Ordnance Survey grid reference or telephone code.' It's still running, and because of its legacy it's around the 1,250th most popular website in the UK (though 4 years ago it was among the top 350).

Streetmap 2014

So far, nothing is especially noteworthy – a popular website a decade ago was replaced by a newer website, Google Maps, which provides better search results and more information, and is the de facto standard for web mapping. Moreover, already in 2006 Artemis Skarlatidou demonstrated that, of the UK web mapping crop, Streetmap scored lowest on usability, with only MapQuest, which largely ignored the UK, being worse.

However, recently, while running a practical session introducing user-centred design principles to our MSc GIS students, I noticed an interesting implication of the changes in the web mapping environment – Streetmap has stopped being usable simply because it didn't bother to update its interaction model. By doing nothing while the environment around it changed, it became unusable, with users failing to perform even the most basic of tasks.

The students explored the mapping offerings from Google, Bing, Here and Streetmap. It was fairly obvious that across this cohort (early to mid 20s), Google Maps was the default against which other systems were compared. It was not surprising to find impressions that Streetmap is 'very old fashioned' or 'archaic'. However, it was more interesting to notice people getting frustrated that the 'natural' interaction of zooming in and out using the mouse wheel just didn't work, or failing to find the zoom in and out buttons. At some point in the past 10 years, people internalised the interaction mode of using the mouse and stopped using the zoom in and out buttons in the application, which explains the design decision in the new Google Maps interface to eliminate the once-dominant zoom slider from the left side of the map. Of course, the Streetmap interface is also not responsive to touch-screen interactions, which are likewise learned across applications.

I experienced a similar and somewhat amusing incident during the registration process at SXSW Eco, when I handed over my obviously old laptop at the registration desk to provide some details, and the woman tried to 'pinch' the screen in an attempt to zoom in. Considering that she was likely interacting with tablets most of the day (it was, after all, SXSW), this was not surprising. Interactions are learned and internalised, and we expect to experience them across devices and systems.

So what's to learn? While this is another example of Jakob's Law of Internet User Experience, which states that 'users spend most of their time on other sites', it is very relevant to many websites that use web mapping APIs to present information – from our own communitymaps.org.uk to the Environment Agency's What's in Your Backyard. In all these cases, it is critical to pay attention to the basic map exploration interactions (pan, zoom, search) and make sure that they match common practices across the web. Otherwise, you might end up like Streetmap.

During the symposium 'The Future of PGIS: Learning from Practice?', which was held at ITC-University of Twente on 26 June 2013, I gave a talk titled ''Keeping the spirit alive' – preservation of participatory GIS values in the Geoweb', which explored what the important values in participatory GIS are and how they translate to the Geoweb, volunteered geographic information and current interests in crowdsourcing. You can watch the talk below.


To see the rest of the presentations from the day, see https://vimeo.com/album/2475389; details of the event are available at http://www.itc.nl/Pub/Events-Conferences/2013/2013-June/Participatory-GIS-Symposium.html


CHI 2013 and the GeoHCI workshop highlighted to me the importance of understanding media for maps. During CHI, the 'PaperTab' demonstration used E Ink displays to demonstrate multiple-display interaction. I found the interactions non-intuitive, not mapping very well to what you would expect to do with paper, and so a source of confusion – especially when the displays will eventually be mixed with papers on a desk. Anyhow, it is an interesting exploration.

E Ink displays are very interesting in terms of their potential use for mapping. The image below shows one of the early prototypes of maps that are designed specifically for the Kindle or, more accurately, for the E Ink technology that is at the heart of the Kindle. From the point of view of the usability of geographical information technologies, E Ink is especially interesting, for several reasons.

Kindle map

First, the resolution of the Kindle display is especially high (close to 170 pixels per inch) when the size of the screen is considered. The Apple Retina display provides even better resolution, and in colour, which makes maps on the iPad also interesting, as they are starting to get closer to the resolution that we are familiar with from paper maps (which is usually between 600 and 1200 dots per inch). The reason that resolution matters especially when displaying maps is that users need to see the context of the location that they are exploring. Think of the physiology of scanning the map, and the fact that capturing more information in one screen can help in understanding the relationships of different features. Notice that when the resolution is high but the screen area is limited (for example, the screen of a smartphone), the limitations on the area that is displayed are quite severe, and that reduces the usability of the map – scrolling requires you to maintain in your memory where you came from.
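The 'close to 170' figure can be recovered from the panel geometry: pixels per inch is simply the diagonal pixel count divided by the physical diagonal. A quick check in Python, using the 600×800 pixel, 6-inch panel of the early Kindles:

```python
from math import hypot

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal pixel count over the physical diagonal."""
    return hypot(width_px, height_px) / diagonal_inches

print(round(ppi(600, 800, 6)))  # -> 167, i.e. 'close to 170 PPI'
```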

Secondly, E Ink displays can easily be read even in direct sunlight, because they are reflective and do not use a backlight. This makes them very useful for outdoor use, where other displays don't perform very well.

Thirdly, they use less energy and can be used for long-term display of a map while it is being used as a reference, whereas with most active displays (e.g. smartphones) continuous use will cause a rapid battery drain.

On the downside, E Ink refresh rates are slow, so they are more suitable for static display than for dynamic and interactive display.

During the summers of 2011 and 2012, several MSc students at UCL explored the potential of E Ink for mapping in detail. Nat Evatt (whose map is shown above) worked on the cartographic representation and showed that it is possible to create highly detailed and readable maps even with the limitation of the 16 levels of grey that are available. The surprising aspect that he found is that while some maps are available in the Amazon Kindle store (the most likely place for e-book maps), it looks like the maps were just converted to shades of grey without careful attention to the device, which reduces their usability.

The work of Bing Cui and Xiaoyan Yu (in a case of collaboration between MSc students at UCLIC and GIScience) included a survey in the field (luckily on a fairly sunny day near the Tower of London), exploring which scales work best in terms of navigation and readability. The work shows that maps at a scale of 1:4000 are effective – and considering that with E Ink the best user experience is when the number of refreshes is minimised, that could be a useful guideline for e-book map designers.

As I've noted in the previous post, I have just attended the CHI (Computer-Human Interaction) conference for the first time. It's a fairly big conference, with over 3,000 participants and multiple tracks that have evolved over the 30 years that CHI has been going, including the familiar paper presentations, panels, posters and courses, but also the less familiar 'interactivity areas', various student competitions, alt.CHI and Special Interest Group meetings. It's all fairly daunting, even with all my existing experience of academic conferences. During the GeoHCI workshop I discovered the MyCHI application, which helps in identifying interesting papers and sessions (including social recommendations) and in setting up a conference schedule from these papers. It is a useful and effective app that I used throughout the conference (and I wish that something similar could be made available at other large conferences, such as the AAG annual meeting).

With MyCHI in hand, while the fog started to lift and I could see a way through the programme, the trepidation about the relevance of CHI to my interests remained and even somewhat increased after a quick search for the words 'geog', 'marginal' and 'disadvantage' returned nothing. The conference video preview (below) also made me somewhat uncomfortable. I have a generally cautious approach to the understanding and development of digital technologies, and a strong dislike of breathless excitement about new innovations that are not necessarily making the world a better place.

Luckily, after a few more attempts, I found papers about 'environment', 'development' and 'sustainability'. Moreover, I discovered the special interest groups (SIGs) that are dedicated to HCI for Development (HCI4D) and HCI for Sustainability, and the programme started to build up. The sessions of these two SIGs were an excellent occasion to meet other people who are active in similar topics, and even to learn about the fascinating concept of 'Collapse Informatics', which is clearly inspired by Jared Diamond's book and explores "the study, design, and development of sociotechnical systems in the abundant present for use in a future of scarcity".

Beyond the discussions, meeting people with shared interests and seeing that there is scope within CHI for technology analysis and development that matches my approach, several papers and sessions were especially memorable. The studies by Elaine Massung and colleagues, about community activism in encouraging shops to close their doors (and therefore waste less heating energy), and by Kate Starbird, on the use of social media in passing information between first responders during the Haiti earthquake, explored how volunteered 'crowd' information can be used in crisis situations and environmental activism.
Exploring a map next to Père Lachaise
Other valuable papers in the area of HCI for development and sustainability include the excellent longitudinal study by Susan Wyche and Laura Murphy on the way mobile charging technology is used in Kenya, a study by Adrian Clear and colleagues about the energy use and cooking practices of university students in Lancaster, a longitudinal study of responses to indoor air pollution monitoring by Sunyoung Kim and colleagues, and an interesting study of the 8-bit, $10 computers that are common in many countries across the world, by Derek Lomas and colleagues.

'CHI at the Barricades – an activist agenda?' was one of the high points of the conference, with a showcase of the ways in which researchers in HCI can take a more active role in their research and lead social or environmental change, and of how interactions that enable or promote such changes can be used to achieve positive outcomes. The discussions that followed the short interventions from the panel covered issues from accessibility to ethics to ways of acting and leading change. Interestingly, while some presenters were comfortable with their activist role, the term 'action research' was not mentioned. It was also illuminating to hear Ben Shneiderman emphasising his view that HCI is about representing and empowering the people who use the technologies that are being developed. His call for 'activist HCI' provides a way to interpret 'universal usability' as an ethical and moral imperative.

It was good to see the work of the Citizen Sort team reach the finals of the student game competition, and to hear about their development of citizen science games.

So despite the early concerns, CHI was a conference worth attending, and the specific jargon of CHI now seems more understandable. I wish there was a big sign on the conference website: 'New to CHI? Start here…'

CHI (Computer-Human Interaction) is the premier conference in the calendar of Human-Computer Interaction (HCI) studies. While the first paper dealing with geographic technologies at this conference was presented in 1991 (it was about user interfaces for geographic information systems, by Andrew Frank, and presented at a special interest group meeting), geography did not receive much attention from HCI researchers in general, though the growth of location-based technologies has made it a growing area in recent years. As I noted elsewhere, HCI did receive interest within GIScience over the years, with more attention paid to spatial cognition and fundamental aspects of knowledge representation, but unfortunately less to interaction design and the exploration of user studies.

This sort of loose coupling between GIScience and HCI is also reflected in personal histories. I had been aware of CHI and its importance for over 15 years, but I never managed to attend one – until now. When Brent Hecht invited me to join a CHI workshop proposal on Geographic HCI (GeoHCI), I jumped at the opportunity. The process of working together with HCI researchers on coordinating and curating a workshop led to mutual learning about the priorities and working practices of the two different research communities – in the tone and style of position papers, reviews and ways of organising a meeting. The response to the call for position papers was overwhelming and demonstrated the interest from both the geography and HCI communities in finding opportunities to converse and share ideas.

The workshop itself was excellent, with coverage of many topics that are being actively researched in geography and GIScience – the papers and presentations covered crowdsourced/volunteered geographic information, the use of geographic information in crisis situations, participatory mapping and citizen science, concepts of place and space, personal memories, and of course many interactions with maps.

My own talk focused on geography and HCI, exploring the point of view of geography when approaching computing environments to represent and communicate geographical knowledge. I used human geography, and particularly the concept of space/place, to highlight the contribution that geography can make – for example, in understanding the multiplicity of interpretations of place, using both David Harvey's critique of spatial science's understanding of place and Doreen Massey's relational-geography description of places as 'stories so far' in 'For Space' as clear examples of different conceptualisations of what places are.

One particular point that I highlighted follows the first chapter of Introducing Human Geographies, in which a differentiation is made between geography as 'writing the Earth' – looking at the human-nature relationship in the wider sense – versus 'writing the World' – looking at society-space relationships. For the HCI audience, I described it by rephrasing Don Norman's differentiation as 'geography in the world', which is about the way people interact with the physical environment around them, versus 'geography in the head', which is the cultural, personal and social understanding of the place where they are and how they want to shape their personal activities, memories and interactions. Of course, geography in the world is easier to represent in computers than geography in the head, and my personal view is that too much emphasis is placed on the first type.

Another part of the presentation focused on the importance of cartography for geographical technologies, and why issues of map scale, media and task context are very important when designing geographic applications – for example, the value of paper as a medium, and understanding that maps are more about context than about 'you are here'.

My position paper is available here. My presentation is provided below.

In my view, the workshop was very valuable in opening new conversations. I now have a better understanding of the context in which HCI researchers at Google, Yahoo! and Pitney Bowes Business Insight consider geography and what problems they have. The issue of place, and the need to explore platial information, came up several times, and we also experienced the multi-sensory engagement with place that is difficult to capture in digital form. Most importantly, this was an experience in understanding the language and ways of expression that can help in bridging the two communities.
