The Association of American Geographers is coordinating an effort to create an International Encyclopedia of Geography. Plans started in 2010, with the aim of seeing the 15-volume project published in 2015 or 2016. Interestingly, this shows that publishers and scholars still see the value in creating subject-specific encyclopedias. On the other hand, the odd decision by Wikipedians that Geographic Information Science doesn’t exist outside GIS shows that geographers need a place to define their practice by themselves. You can find more information about the AAG International Encyclopedia project in an interview with Doug Richardson from 2012.

As part of this effort, I was asked to write an entry on ‘Volunteered Geographic Information, Quality Assurance’ as a short piece of about 3000 words. To do this, I looked around for mechanisms that are used in VGI and in citizen science. These are covered in OpenStreetMap studies and similar work in GIScience, and in the area of citizen science there are reviews, such as the one by Andrea Wiggins and colleagues, of mechanisms to ensure data quality in citizen science projects, which clearly demonstrated that projects use multiple methods to ensure data quality.

Below you’ll find an abridged (but still long) version of the entry. The citation for this entry will be:

Haklay, M., Forthcoming. Volunteered geographic information, quality assurance. in D. Richardson, N. Castree, M. Goodchild, W. Liu, A. Kobayashi, & R. Marston (Eds.) The International Encyclopedia of Geography: People, the Earth, Environment, and Technology. Hoboken, NJ: Wiley/AAG

In the entry, I have identified six types of mechanisms that are used to ensure quality when the data has a geographical component, in either VGI or citizen science. If I have missed a type of quality assurance mechanism, please let me know!

Here is the entry:

Volunteered geographic information, quality assurance

Volunteered Geographic Information (VGI) originates outside the realm of professional data collection by scientists, surveyors and geographers. Quality assurance of such information is important for people who want to use it, as they need to identify whether it is fit for purpose. Goodchild and Li (2012) identified three approaches to VGI quality assurance: a ‘crowdsourcing’ approach that relies on the number of people who have edited the information, a ‘social’ approach that is based on gatekeepers and moderators, and a ‘geographic’ approach which uses broader geographic knowledge to verify that the information fits into existing understanding of the natural world. In addition to the approaches that Goodchild and Li identified, there is also a ‘domain’ approach that relates to understanding the knowledge domain of the information, an ‘instrumental observation’ approach that relies on technology, and a ‘process-oriented’ approach that brings VGI closer to industrialised procedures. First, we need to understand the nature of VGI and the source of concern with quality assurance.

While the term volunteered geographic information (VGI) is relatively new (Goodchild 2007), the activities that this term describes are not. Another relatively recent term, citizen science (Bonney 1996), which describes the participation of volunteers in collecting, analysing and sharing scientific information, provides the historical context. While the term is relatively new, the collection of accurate information by non-professional participants has been an integral part of scientific activity since the 17th century and likely before (Bonney et al. 2013). Therefore, when approaching the question of quality assurance of VGI, it is critical to see it within the wider context of scientific data collection and not to fall into the trap of novelty and assume that it is without precedent.

Yet, this integration needs to take into account the insights that have emerged within geographic information science (GIScience) research over the past decades. Within GIScience, it is the body of research on spatial data quality that provides the framing for VGI quality assurance. Van Oort’s (2006) comprehensive synthesis of various quality standards identifies the following elements of spatial data quality discussions:

  • Lineage – a description of the history of the dataset.
  • Positional accuracy – how well the coordinate value of an object in the database relates to the reality on the ground.
  • Attribute accuracy – how well the additional attributes of an object (beyond its geometrical shape) match reality.
  • Logical consistency – the internal consistency of the dataset.
  • Completeness – how many objects that are expected to be found in the database are missing, as well as an assessment of excess data that should not be included.
  • Usage, purpose and constraints – a fitness-for-purpose declaration that should help potential users decide how the data should be used.
  • Temporal quality – a measure of the validity of changes in the database in relation to real-world changes, and also the rate of updates.
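As a simple illustration of how one of these elements can be quantified, positional accuracy is often summarised as the root-mean-square error between contributed coordinates and a trusted reference. The sketch below is illustrative only – the function names are my own, not part of any standard:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def positional_rmse(observed, reference):
    """Root-mean-square positional error over matched pairs of (lat, lon) points."""
    errors = [haversine_m(o[0], o[1], t[0], t[1]) for o, t in zip(observed, reference)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

Whether a given RMSE is acceptable is, of course, exactly the fitness-for-purpose question discussed below.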

While some of these quality elements might seem independent of a specific application, in reality they can only be evaluated within a specific context of use. For example, when carrying out analysis of street lighting in a specific part of town, the question of completeness becomes specific to the recording of all street-light objects within the bounds of the area of interest; whether the dataset is complete for another part of the settlement is irrelevant for the task at hand. The scrutiny of information quality within a specific application, to ensure that it is good enough for the needs, is termed ‘fitness for purpose’. As we shall see, fitness for purpose is a central issue with respect to VGI.

To understand the reason that geographers are concerned with quality assurance of VGI, we need to recall the historical development of geographic information, and especially the historical context of geographic information systems (GIS) and GIScience development since the 1960s. For most of the 20th century, geographic information production became professionalised and institutionalised. The creation, organisation and distribution of geographic information was done by official bodies, such as national mapping agencies or national geological bodies, which were funded by the state. As a result, the production of geographic information became an industrial scientific process in which the aim is to produce a standardised product – commonly a map. Due to financial, skills and process limitations, products were engineered carefully so that they could be used for multiple purposes. Thus, a topographic map can be used for navigation but also for urban planning and for many other purposes. Because the products were standardised, detailed specifications could be drawn up, against which the quality elements could be tested and quality assurance procedures could be developed. This was the backdrop to the development of GIS, and to the conceptualisation of spatial data quality.

The practices of centralised, scientific and industrialised geographic information production lend themselves to quality assurance procedures that are deployed through organisational or professional structures, and they explain the perceived challenges with VGI. Centralised practices also supported employing people with a focus on quality assurance, for example going out to the field with a map and testing that it complies with the specifications that were used to create it. In contrast, most of the collection of VGI is done outside organisational frameworks. The people who contribute the data are not employees and seemingly cannot be put into training programmes, asked to follow quality assurance procedures, or expected to use standardised equipment that can be calibrated. The lack of coordination and of top-down forms of production raises questions about ensuring the quality of the information that emerges from VGI.

Considering quality assurance within VGI requires understanding some underlying principles that are common to VGI practices and differentiate it from organised and industrialised geographic information creation. For example, some VGI is collected under conditions of scarcity or abundance in terms of data sources, number of observations, or the amount of data that is being used. As noted, the conceptualisation of geographic data collection before the emergence of VGI was one of scarcity, where data is expensive and complex to collect. In contrast, in many applications of VGI the situation is one of abundance. For example, in applications that are based on micro-volunteering, where the participant invests very little time in a fairly simple task, it is possible to give the same mapping task to several participants and statistically compare their independent outcomes as a way to ensure the quality of the data. Another way in which abundance serves as a framework is in the development of software for data collection. While in previous eras there would typically be a single application for data capture and editing, in VGI there is a need to consider multiple applications, as different designs and workflows can appeal to, and be suitable for, different groups of participants.

Another underlying principle of VGI is that, since the people who collect the information are not remunerated or in contractual relationships with the organisation that coordinates data collection, a more complex relationship between the two sides is required, with consideration of incentives, motivations to contribute, and the tools that will be used for data collection. Overall, VGI systems need to be understood as socio-technical systems in which the social aspect is as important as the technical part.

In addition, VGI is inherently heterogeneous. In large-scale data collection activities such as the census of population, there is a clear attempt to capture all the information about the population over a relatively short time and in every part of the country. In contrast, because of its distributed nature, VGI will vary across space and time, with some areas and times receiving more attention than others. An interesting example has been shown at temporal scales, where some citizen science activities exhibit a ‘weekend bias’, as these are the days when volunteers are free to collect more information.

Because of the differences in the organisational settings of VGI, different approaches to quality assurance are required, although, as noted, such approaches have in general been used in many citizen science projects. Over the years several approaches have emerged, and these include ‘crowdsourcing’, ‘social’, ‘geographic’, ‘domain’, ‘instrumental observation’ and ‘process-oriented’. We now turn to describe each of these approaches.

The ‘crowdsourcing’ approach builds on the principle of abundance. Since there is a large number of contributors, quality assurance can emerge from repeated verification by multiple participants. Even in projects where participants actively collect data in an uncoordinated way, such as the OpenStreetMap project, it has been shown that with enough participants actively collecting data in a given area, the quality of the data can be as good as authoritative sources. The limitation of this approach arises when local knowledge or verification on the ground (‘ground truth’) is required. In such situations, the crowdsourcing approach works well in central, highly populated or popular sites where there are many visitors, and therefore the probability that several of them will be involved in data collection rises. Even so, it is possible to encourage participants to record less popular places through a range of suitable incentives.
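To illustrate the statistical side of this approach, a minimal majority-vote check over repeated, independent contributions might look like the following. This is a hedged sketch, not a method prescribed by any project; the function name and the 60% agreement threshold are illustrative assumptions:

```python
from collections import Counter

def crowd_consensus(labels, min_agreement=0.6):
    """Return the majority label when a sufficient share of contributors agree.

    labels: independent classifications of the same feature by different people.
    Returns None when there are no labels or agreement falls below the
    threshold, signalling that the task should go to more contributors.
    """
    if not labels:
        return None
    label, count = Counter(labels).most_common(1)[0]
    return label if count / len(labels) >= min_agreement else None
```

In a micro-volunteering workflow, features for which `crowd_consensus` returns None would simply be re-issued to further participants until agreement is reached.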

The ‘social’ approach also builds on the principle of abundance in terms of the number of participants, but with a more detailed understanding of their knowledge, skills and experience. In this approach, some participants are asked to monitor and verify the information that was collected by less experienced participants. The social method is well established in citizen science programmes such as bird watching, where participants who are more experienced in identifying bird species help to verify observations by other participants. To deploy the social approach, there is a need for a structured organisation in which some members are recognised as more experienced and are given the appropriate tools to check and approve information.

The ‘geographic’ approach uses known geographical knowledge to evaluate the validity of the information that is received from volunteers. For example, by using existing knowledge about the distribution of streams feeding a river, it is possible to assess whether the mapping of a new river contributed by volunteers is comprehensive or not. A variation of this approach is the use of recorded information, even if it is out of date, to verify the information by comparing how much of what is already known also appears in a VGI source. Geographic knowledge can potentially be encoded in software algorithms.

The ‘domain’ approach is an extension of the geographic one: in addition to geographical knowledge, it uses specific knowledge that is relevant to the domain in which the information is collected. For example, in many citizen science projects that involve collecting biological observations, there will be some body of information about species distribution, both spatially and temporally. A new observation can therefore be tested against this knowledge, again algorithmically, helping to ensure that new observations are accurate.
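A hypothetical sketch of such an algorithmic domain check, testing a species observation against a small, invented knowledge base of spatial and seasonal ranges (the species, bounding box and months below are made up for illustration):

```python
# Hypothetical knowledge base: bounding box (min_lat, min_lon, max_lat, max_lon)
# and the months in which sightings are expected.
SPECIES_RANGES = {
    "common swift": {"bbox": (35.0, -10.0, 60.0, 30.0), "months": {5, 6, 7, 8}},
}

def check_observation(species, lat, lon, month):
    """Flag observations that fall outside the known range for expert review."""
    known = SPECIES_RANGES.get(species)
    if known is None:
        return "unknown species"
    min_lat, min_lon, max_lat, max_lon = known["bbox"]
    if not (min_lat <= lat <= max_lat and min_lon <= lon <= max_lon):
        return "outside spatial range"
    if month not in known["months"]:
        return "outside seasonal range"
    return "plausible"
```

In practice a flagged observation would not be discarded automatically but routed to an experienced participant, combining this approach with the social one.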

The ‘instrumental observation’ approach removes some of the subjective aspects of data collection by a human, who might make an error, and relies instead on the equipment that the person is using. Because of the increased availability of accurate-enough equipment, such as the various sensors that are integrated into smartphones, many people carry in their pockets mobile computers with the ability to record location, direction, imagery and sound. For example, image files captured on smartphones include the GPS coordinates and a time-stamp, which the vast majority of people have no ability to manipulate. Thus, the automatic instrumental recording of information provides evidence for the quality and accuracy of the information.
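A minimal sketch of automated checks that could be run over instrument-recorded metadata, such as the coordinates and time-stamp embedded in a smartphone image. The record structure and the survey-period rule are illustrative assumptions, not part of any specific VGI system:

```python
from datetime import datetime, timezone

def validate_capture_metadata(record, survey_start, survey_end):
    """Run basic automated checks on instrument-recorded location and time.

    record is a dict with optional 'lat', 'lon' and 'timestamp' keys, as
    might be extracted from a smartphone photograph's metadata. Returns a
    list of problems; an empty list means the record passed.
    """
    problems = []
    lat, lon = record.get("lat"), record.get("lon")
    if lat is None or lon is None:
        problems.append("no GPS fix recorded")
    elif not (-90 <= lat <= 90 and -180 <= lon <= 180):
        problems.append("coordinates out of range")
    ts = record.get("timestamp")
    if ts is None:
        problems.append("no timestamp")
    elif not (survey_start <= ts <= survey_end):
        problems.append("timestamp outside survey period")
    return problems
```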

Finally, the ‘process-oriented’ approach brings VGI closer to traditional industrial processes. Under this approach, the participants go through some training before collecting information, and the process of data collection or analysis is highly structured to ensure that the resulting information is of suitable quality. This can include the provision of standardised equipment, online training or instruction sheets, and a structured data recording process. For example, volunteers who participate in the US Community Collaborative Rain, Hail & Snow network (CoCoRaHS) receive a standardised rain gauge, instructions on how to install it, and online resources to learn about data collection and reporting.

Importantly, these approaches are not used in isolation, and in any given project a combination of them is likely to be in operation. Thus, an element of training and guidance to users can appear in a downloadable application that is distributed widely, so the method used in such a project will be a combination of the process-oriented and crowdsourcing approaches. Another example is the OpenStreetMap project, which in general provides only limited guidance to volunteers in terms of the information that they collect or the locations in which they collect it. Yet a subset of the information in the OpenStreetMap database, about wheelchair access, is collected through the highly structured process of the WheelMap application, in which the participant is required to select one of four possible settings that indicate accessibility. Another subset of the information, recorded for humanitarian efforts, follows the social model, in which tasks are divided between volunteers using the Humanitarian OpenStreetMap Team (HOT) task manager, and the data that is collected is verified by more experienced participants.

The final and critical point for quality assurance of VGI, as noted above, is fitness for purpose. In some VGI activities the information has a direct and clear application, in which case it is possible to define specifications for the quality assurance elements that were listed above. However, one of the core aspects noted above is the heterogeneity of the information that is collected by volunteers. Therefore, before using VGI for a specific application there is a need to check its fitness for that specific use. While this is true for all geographic information, and even so-called ‘authoritative’ data sources can suffer from hidden biases (e.g. lack of updates in rural areas), with VGI the variability can change dramatically over short distances – so while the centre of a city will be mapped by many people, a deprived suburb near the centre will not be mapped and updated. There are also limitations caused by the instruments in use – for example, the GPS positional accuracy of the smartphones in use. Such aspects should also be taken into account, ensuring that the quality assurance is itself fit for purpose.

References and Further Readings

Bonney, Rick. 1996. Citizen science – a lab tradition. Living Bird, Autumn 1996.
Bonney, Rick, Shirk, Jennifer, and Phillips, Tina B. 2013. Citizen science. In Encyclopedia of Science Education. Berlin: Springer-Verlag.
Goodchild, Michael F. 2007. Citizens as sensors: the world of volunteered geography. GeoJournal, 69(4), 211–221.
Goodchild, Michael F., and Li, Linna. 2012. Assuring the quality of volunteered geographic information. Spatial Statistics, 1, 110–120.
Haklay, Mordechai. 2010. How good is volunteered geographical information? A comparative study of OpenStreetMap and Ordnance Survey datasets. Environment and Planning B: Planning and Design, 37(4), 682–703.
Sui, Daniel, Elwood, Sarah, and Goodchild, Michael F. (eds). 2013. Crowdsourcing Geographic Knowledge. Berlin: Springer-Verlag.
Van Oort, Pepijn A.J. 2006. Spatial Data Quality: From Description to Application. PhD thesis, Wageningen: Wageningen Universiteit, p. 125.

The three days of the Royal Geographical Society (with IBG), or RGS-IBG, annual conference are always valuable, as they provide an opportunity to catch up with the current themes in (mostly human) geography. While I spend most of my time in an engineering department, I also like to keep my ‘geographer identity’ up to date, as this is the discipline with which I feel most affiliated.

Since last year’s announcement that the conference would focus on ‘Geographies of Co-Production’, I had been looking forward to it, as this topic relates to many themes of my research work. Indeed, the conference was excellent – from the opening session to the last one that I attended (a discussion about the co-production of co-production).

Just before the conference, the Participatory Geographies Research Group ran a training day, in which I ran a workshop on participatory mapping. It was good to see the range of people that came to the workshop, many of them in the early stages of their research careers and wanting to use participatory methods in their research.

In the opening session on Tuesday night, Uma Kothari raised a very important point about the risk of institutions blaming the participants if a solution that was developed with them fails. There is a need to ensure that bodies like the World Bank and other funders don’t escape their responsibilities for support as a result of participatory approaches. Another excellent discussion came from Keri Facer, who analysed the difficulties of interdisciplinary research based on her experience from the ‘Connected Communities’ project. Noticing and negotiating the multiple dimensions of difference between research teams is critical for the co-production of knowledge.

By the end of this session, and as was demonstrated throughout the conference, it became clear that there are many different notions of ‘co-production of knowledge’ – sometimes it is about two researchers working together, for others it is about working with policy makers or civil servants, and for yet another group it means an inclusive knowledge production with all the people who can be impacted by a policy or research recommendation. Moreover, there was even a tension between types of inclusiveness – should it be based on simple openness (‘if you want to participate, join’), on representation of people within the group, or should there be an active effort towards inclusiveness? The fuzziness of the concept proved to be very useful, as it led to many discussions about ‘what does co-production mean?’ as well as ‘what does co-production do?’.

Two GIS education sessions were very good (see Patrick’s summary on the ExCiteS blog), and I found Nick Tate and Claire Jarvis’s discussion about the potential of a virtual community of practice (CoP) for GIScience professionals especially interesting. An open question that was left at the end of the session was about the value of generic expertise (GIScience) versus the way it is used in a specific area. In other words, do we need a CoP to share the way we use the tools and methods, or is it about situated knowledge within a specific domain?

ECR panel (source: Keri Facer)

The Chair’s Early Career panel was, for me, the best session of the conference. Maria Escobar-Tello, Naomi Millner, Hilary Geoghegan and Saffron O’Neil discussed their experience of working with policy makers, participants, communities and universities. Maria explored the enjoyment of working at the speed of policy making in DEFRA, which also brings with it major challenges in formulating and doing research. Naomi discussed the Productive Margins project, which involved redesigning community engagement, and also noted what looks like very interesting reading: the e-book Problems of Participation: Reflections on Authority, Democracy, and the Struggle for Common Life. Hilary demonstrated how she has integrated her enthusiasm for enthusiasm into her work, while showing how knowledge is co-produced at the boundaries between amateurs and professionals, citizens and scientists. Hilary recommended another important resource – the review Towards Co-production in Research with Communities (especially the diagram/table on page 9). Saffron completed the session with her work on climate change adaptation and the co-production of knowledge with scientists and communities. Her research on community-based climate change visualisation is noteworthy, and suggests ways of engaging people through photos that they take around their homes.

In another session, which focused on mapping, the Connected Communities project appeared again in the work of Chris Speed, Michelle Bastian & Alex Hale on participatory local food mapping in Liverpool, and in the lovely website that resulted from their project, Memories of Mr Seel’s Garden. It is interesting to see how methods travel across disciplines and to reflect on what insights should be integrated in future work (while also resisting a feeling of ‘this is naive, you should have done this or that’!).

On the last day of the conference, the sessions on ‘the co-production of data-based living’ included a lot to contemplate. Rob Kitchin discussed and critiqued smart-city dashboards, highlighting that data is not neutral and that it is sometimes used to decontextualise the city from its history and to exclude non-quantified and sensed forms of knowledge (his new book, The Data Revolution, is just out). Agnieszka Leszczynski continued to develop her exploration of the mediating qualities of techno-social-spatial interfaces, in which the experience of being in a place is intermingled with the experience of the data that you consume and produce in it. Matt Wilson drew a parallel between the quantified self and the quantified city, suggesting the concept of ‘self-city-nation’ and the tensions between statements of collaboration and sharing within proprietary commercial systems that aim at extracting profit from these actions. Also interesting was Ewa Luger’s discussion of the meaning of ‘consent’ within the Internet of Things project ‘Hub of All Things’, and the degree to which it is ignored by technology designers.

The highlight of the last day for me was the presentation by Rebecca Lave on ‘Critical Physical Geography’. This is the idea that it is necessary to combine scientific understanding of hydrology and ecology with social theory. It is also useful in alerting geographers who deal with human geography to the physical conditions that influence life in specific places. This approach encourages people who are involved in research to ask questions about knowledge production – for example, the social justice aspects of access to models, when corporations can have access to weather or flood models that are superior to what is available to the rest of society.

Overall, Wendy Larner’s decision to focus the conference on the co-production of knowledge was timely and created a fantastic conference. It is best to complete this post with her statement on the RGS website:

The co-production of knowledge isn’t entirely new and Wendy is quick to point out that themes like citizen science and participatory methods are well established within geography. “What we are now seeing is a sustained move towards the co-production of knowledge across our entire discipline.”

 

Today, OpenStreetMap celebrates 10 years of operation, as counted from the date of registration. I heard about the project when it was in its early stages, mostly because I knew Steve Coast when I was studying for my Ph.D. at UCL. As a result, I was also able to secure the first ever research grant that focused on OpenStreetMap (and hence Volunteered Geographic Information – VGI), from the Royal Geographical Society in 2005. A lot can be said about being in the right place at the right time!

OSM Interface, 2006 (source: Nick Black)

Having followed the project during this decade, there is much to reflect on – such as open research questions, things that the academic literature has failed to notice about OSM, or the things that we do know about OSM and VGI because of the openness of the project. However, as I was preparing the talk for the INSPIRE conference, I started to think about the start dates of OSM (2004), TomTom Map Share (2007), Waze (2008) and Google Map Maker (2008). While there are conceptual and operational differences between these projects, in terms of ‘knowledge-based peer production systems’ they are fairly similar: all rely on a large number of contributors; all combine a large group of contributors who contribute little with a much smaller group of committed contributors who do the more complex work; and all are about mapping. Yet OSM started three years before these other crowdsourced mapping projects, and all of them have more contributors than OSM.
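One way to make this ‘small committed core, large casual periphery’ pattern concrete is to compute the share of edits made by the most active contributors. The sketch below is a generic illustration with invented numbers, not actual statistics from any of these projects:

```python
def top_contributor_share(edit_counts, top_fraction=0.1):
    """Fraction of all edits made by the most active top_fraction of contributors."""
    counts = sorted(edit_counts, reverse=True)
    k = max(1, round(len(counts) * top_fraction))  # at least one contributor
    total = sum(counts)
    return sum(counts[:k]) / total if total else 0.0
```

With a typical heavy-tailed distribution of contributions, the top 10% of contributors can account for the large majority of all edits.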

Since OSM is described as the ‘Wikipedia of maps’, the analogy that I was starting to think of was that it’s a bit like a parallel history in which, in 2001, as Wikipedia starts, Encarta and Britannica look at the upstart and set up their own crowdsourcing operations, so that within three years they are up and running. By 2011, Wikipedia continues as a copyright-free encyclopedia with a sizable community, but Encarta and Britannica have more contributors and more visibility.

Knowing OSM closely, I felt that this is not a fair analogy. While there are some organisational and contribution practices that can be used to claim that ‘it’s the fault of the licence’ or ‘it’s because of the project’s culture’, and therefore to justify this unflattering analogy, I sensed that there is something else that should be used to explain what is going on.

Then, during my holiday in Italy, I was enjoying the offline TripAdvisor app for Florence, which uses OSM for navigation (in contrast to Google Maps, which is used in the online app), and an answer emerged. Within the OSM community, from the start, there was some tension between the ‘map’ and ‘database’ views of the project. Is it about collecting the data so that beautiful maps can be produced, or is it about building a database that can be used for many applications?

Saying that OSM is about the map means that the analogy is correct, as it is very similar to Wikipedia – you want to share knowledge, so you put it online with a system that allows you to display it quickly, with tools that support easy editing and information sharing. If, on the other hand, OSM is about a database, then OSM is about something that is used at the back-end of other applications, much like a DBMS or an operating system. Although there are tools that help you to do things easily and quickly and to check the information that you’ve entered (e.g. displaying the information as a map), the main goal is the building of the back-end.

Maybe a better analogy is to think of OSM as the ‘Linux of maps’, which means that it is an infrastructure project, expected to have a lot of visibility among the professionals who need it (system managers in the case of Linux, GIS/Geoweb developers for OSM), with a strong community that supports and contributes to it. In the same way that some tech-savvy people know about Linux but most people don’t, I suspect that offline TripAdvisor users don’t notice that they use OSM; they are just happy to have a map.

The problem with the Linux analogy is that OSM is more than software – it is indeed a database of information about geography from all over the world (and therefore the Wikipedia analogy has its place). It is therefore somewhere in between. In a way, it provides a demonstration of the common claim in GIS circles that ‘spatial is special’. Geographical information is infrastructure in the same way that operating systems or DBMSs are, but in this case it is not enough to create an empty shell that can be filled in for the specific instance; there is a need for a significant amount of base information before you are able to start building your own application with additional information. This is also the philosophical difference that makes the licensing issues more complex!

In short, both the Linux and Wikipedia analogies are inadequate to capture what OSM is. It has been illuminating and fascinating to follow the project over its first decade, and may it continue successfully for many more decades to come.

The Vespucci initiative has been running for over a decade, bringing together participants from a wide range of academic backgrounds and experiences to explore, in a ‘slow learning’ way, various aspects of geographic information science research. The Vespucci Summer Institutes are week-long summer schools, most frequently held at Fiesole, a small town overlooking Florence. This year, the focus of the first summer institute was on crowdsourced geographic information and citizen science.

The workshop was supported by COST ENERGIC (a network that links researchers in the area of crowdsourced geographic information, funded by the EU research programme), the EU Joint Research Centre (JRC), Esri and our Extreme Citizen Science research group. The summer school included about 30 participants and facilitators, ranging from master’s students who are about to start their PhD studies to established professors who came to learn and share knowledge. This is a common feature of Vespucci Institutes, and the funding from the COST network allowed more early career researchers to participate.

Apart from the pleasant surroundings, Vespucci Institutes are characterised by relaxed yet detailed discussions that can be carried over long lunches and coffee breaks, as well as team work in small groups on a task that each group presents at the end of the week. Moreover, the programme is very flexible, so changes and adaptations in response to the requests of the participants and to the general progression of the learning are part of the process.

This is the second time that I am participating in Vespucci Institutes as a facilitator, and in both cases it was clear that participants take the goals of the institute seriously, and make the most of the opportunities to learn about the topics that are explored, explore issues in depth with the facilitators, and work with their groups beyond the timetable.

The topics that were covered in the school were designed to provide a holistic overview of geographical crowdsourcing or citizen science projects, especially in the area where these two types of activities meet. This can be when a group of citizens wants to collect and analyse data about local environmental concerns, when oceanographers want to work with divers to record water temperature, or when details that emerge from social media are used to understand cultural differences in the understanding of border areas. These are all examples that were suggested by participants from projects that they are involved in. In addition, citizen participation in flood monitoring and water catchment management, sharing information about local food, and exploring the quality of spatial information that can be used by wheelchair users also came up in the discussion. The crossover between the two areas provided a common ground for the participants to explore issues that are relevant to their research interests.

The holistic aspect mentioned above was a major goal for the school – to consider the tools that are used to collect information, engaging and working with the participants, managing the data that they provide, and ensuring that it is useful for other purposes. To start the process, after introducing the topics of citizen science and volunteered geographic information (VGI), the participants learned about data collection activities, including noise mapping, OpenStreetMap contribution, bird watching, and balloon and kite mapping. As can be expected, the balloon mapping raised a lot of interest and excitement, and this exercise in local mapping was linked to OpenStreetMap later in the week.

The experience with data collection provided the context for discussions about data management and interoperability and the design aspects of citizen science applications, as well as more detailed presentations from the participants about their work and research interests. With all these details, the participants were ready to work on their group task: to suggest a research proposal in the area of VGI or citizen science. Each group of 5 participants explored the issues that they agreed on – two groups focused on citizen science projects, another two focused on data management and sustainability, and the final group explored the area of perception mapping in a more social-science-oriented project.

Some of the most interesting discussions were initiated at the request of the participants, such as the exploration of the ethical aspects of crowdsourcing and citizen science. This was possible because of the flexibility of the programme.

Now that the institute is over, it is time to build on the connections that started during the wonderful week in Fiesole, and see how the network of Vespucci alumni develops the ideas that emerged this week.


Today marks the publication of the report ‘Crowdsourced Geographic Information in Government‘. The report is the result of a collaboration that started in the autumn of last year, when the World Bank Global Facility for Disaster Reduction and Recovery (GFDRR) requested a study of the way crowdsourced geographic information is used by governments. The identification of barriers and success factors was especially needed, since GFDRR invests in projects across the world that use crowdsourced geographic information to help in disaster preparedness, through activities such as the Open Data for Resilience Initiative. By providing an overview of the factors that can help those who implement such projects, either in governments or in the World Bank, we can increase the chances of successful implementations. To develop the ideas of the project, Robert Soden (GFDRR) and I ran a short workshop during State of the Map 2013 in Birmingham, which helped in shaping the details of the project plan as well as some preliminary information gathering. The project team included myself, Vyron Antoniou, Sofia Basiouka, and Robert Soden (GFDRR). Later on, Peter Mooney (NUIM) and Jamal Jokar (Heidelberg) volunteered to help us – demonstrating the value of research networks such as COST ENERGIC, which linked us.

The general methodology that we decided to use was the identification of case studies from across the world, at different scales of government (national, regional, local) and in different domains (emergency, environmental monitoring, education). We expected that with a large group of case studies it would be possible to analyse common patterns and hopefully reach conclusions that can assist future projects, as well as to identify common barriers and challenges.

We paid special attention to information flows between the public and the government, looking at cases where the government absorbed information that was provided by the public, and also at cases where two-way communication happened.

Originally, we were aiming to ‘crowdsource’ the collection of the case studies. We identified the information that is needed for the analysis by using a few case studies that we knew about, and constructed the way in which they would be represented in the final report. After constructing these ‘seed’ case studies, we opened the questionnaire to other people who could submit case studies. Unfortunately, the development of a case study proved to be too much effort, and we received only a small number of submissions through the website. However, throughout the study we continued to look out for cases and gather the information needed to compile them. By the end of April 2014 we had identified about 35 cases, but found clear and useful information for only 29 (which are all described in the report). The cases range from basic mapping to citizen science. The analysis workshop was especially interesting, as it was carried out over a long Skype call, with members of the team in Germany, Greece, the UK, Ireland and the US (Colorado) working together using Google Docs’ collaborative editing functionality. This approach proved successful and allowed us to complete the report.

You can download the full report from the UCL Discovery repository,

or download a high-resolution copy for printing, and find much more information about the project on the Crowdsourcing and Government website.

On the last day of the INSPIRE conference, I attended a session about apps and applications, and the final plenary, which focused on the knowledge-based economy and the role of INSPIRE within it. Below are some notes from the talks, including my interpretations and comments.

Debbie Wilson from the Ordnance Survey highlighted the issues that the OS is facing in designing next-generation products from an information architecture point of view. She noted that the core large-scale product, MasterMap, has been around for 14 years and has been provided in GML all the way through. The client base in the UK is now used to it and happy with it (when it was introduced there was a short period of adjustment, which I recall, but I assume that by now everything is routine). Lots of small-scale products are becoming open and are also provided as linked data. The user community is more savvy – they want the Ordnance Survey to push data to them, and to access the data through existing or new services, not just to be given the datasets without further interaction. They want to see ease of access and use across multiple platforms. The OS is considering moving away from provision of data towards online services as the main way for people to get access to the data. The OS is investing heavily in mobile apps for leisure, but is also helping the commercial sector to develop apps that are based on OS data and tools. For example, the OS Locate app provides mechanisms to work worldwide, so it is not only for the UK. They are also putting effort into creating APIs and SDKs – such as OS OnDemand – and allowing local authorities to update their address data. There is also a focus on cloud-based applications – such as applications to support government activities during emergencies. On the information architecture side, they are moving from product to content. The OS will continue to maintain content that is product-agnostic and to run the internal systems for a long period of 10 to 20 years, so they need to decouple outward-facing services from the internal representation. The OS needs to be flexible to respond to different needs – e.g. in file formats this will be GML, RDF and ontologies, but also CSV and GeoJSON. Managing the rules between the various formats is a challenging task, and different representations of the same thing – for example 3D and 2D representations – are another challenge.
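The idea of keeping one product-agnostic internal representation while serving several outward formats can be sketched in a few lines. This is only an illustration – the feature record and its fields are invented, not an OS data structure:

```python
import csv
import io
import json

# One internal, format-agnostic record (hypothetical fields for illustration).
feature = {"id": "osgb1000012345", "type": "Building", "lon": -0.1276, "lat": 51.5074}

def to_geojson(f):
    """Serialise the internal record as a GeoJSON Feature."""
    return json.dumps({
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [f["lon"], f["lat"]]},
        "properties": {"id": f["id"], "type": f["type"]},
    })

def to_csv(f):
    """Serialise the same record as a CSV row with a header."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "type", "lon", "lat"])
    writer.writeheader()
    writer.writerow(f)
    return buf.getvalue()
```

The point of the decoupling is that each outward format is derived from the internal record, so the internal systems can evolve on a 10-20 year cycle while the serialisers change with user demand.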

Didier Leibovici presented work based on the COBWEB project, discussing quality assurance for crowdsourced data. In crowdsourcing there are issues with the quality of both the authoritative and the crowdsourced data. The COBWEB project is part of a set of 5 citizen observatories exploring air quality, noise, water quality, water management, flooding, land cover, and odour perception and nuisance; they can be seen at http://www.citizen-obs.eu. COBWEB is focusing on the infrastructure and management of the data. The pilot studies in COBWEB look at land use/land cover, species and habitat observations, and flooding. They mix sensors in the environment with data arriving in different formats, and the way to manage it is to validate the data, approve its quality and make sure that it is compliant with needs. The project involves designing an app and then encouraging people to collect the data, and there can be a lack of connection to other sources of data. The issues that they highlight are quality/uncertainty, accuracy, trust and relevance. One of the core questions is ‘does crowdsourced data need QA/QC that is different from any other data?’ (my view: yes, but depending on the trade-offs in terms of engagement and process). They see a role for crowdsourcing in NSDI, with real-time data capture QA and post-collection QA (they do both), and there is also re-use and conflation of data sources. QA is aimed at knowing what is collected – there are multiple ways to define the participants, which means different ways of involving people, and this has implications for QA. They suggest a stakeholder quality model with principles such as vagueness, ambiguity, judgement, reliability, validity, and trust. There is a paper in AGILE 2014 about their framework.
The framework suggests that the people who build the application develop the QA/QC process with a workflow authoring tool, supported by an ontology, and then run it as a web processing service. The temporality of the data needs to be considered in the metadata, as does how to update the metadata on data quality.
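To make the idea of real-time QA on crowdsourced observations concrete, here is a rough sketch of the kind of automated rule such a workflow might run. The field names, bounding box and thresholds are my own invention for illustration, not COBWEB's:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical study-area bounding box (lon/lat) and staleness threshold.
STUDY_AREA = {"min_lon": -5.7, "max_lon": -2.6, "min_lat": 51.3, "max_lat": 53.5}
MAX_AGE = timedelta(hours=24)

def qa_check(obs, now=None):
    """Return a list of QA flags for one crowdsourced observation dict."""
    now = now or datetime.now(timezone.utc)
    flags = []
    # Geographic plausibility: is the point inside the study area?
    if not (STUDY_AREA["min_lon"] <= obs["lon"] <= STUDY_AREA["max_lon"]
            and STUDY_AREA["min_lat"] <= obs["lat"] <= STUDY_AREA["max_lat"]):
        flags.append("outside_study_area")
    # Temporality: is the observation recent enough to be useful?
    if now - obs["timestamp"] > MAX_AGE:
        flags.append("stale_observation")
    # Instrumental quality: reported GPS accuracy worse than 100 m.
    if obs.get("accuracy_m", 0) > 100:
        flags.append("poor_positional_accuracy")
    return flags
```

In a workflow-authoring setup, rules like these would be chained into a pipeline and exposed as a web processing service, with the resulting flags written back into the metadata.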

Patrick Bell considered the use of smartphone apps – in a project of the BGS and the EU JRC, they reviewed existing applications. The purpose of the survey was to explore what national geological organisations can learn from the shared experience of developing smartphone apps – especially in the geological sector. Who is doing the development work, and which partnerships are created? What barriers are perceived, and what is the role of the INSPIRE directive within the development of these apps? They also tried to understand who the users are. There are 33 geological survey organisations in the EU, and they received responses from 16 of them. They found 23 different apps. From the BGS, iGeology (http://www.bgs.ac.uk/igeology/home.html) provides access to geological maps and gives access to subsidence and radon risk with in-app payment. The BGS also has soil information in the MySoil app, which allows people to get some data for free, and also offers the ability to add information and do citizen science. iGeology 3D adds AR to display a view of the geological map locally. aFieldWork is a way to capture information in the harsh environment of Greenland. GeoTreat provides information about sites with special value that is relevant to tourists or geology enthusiasts. From BRGM, i-infoTerre provides geological information to a range of users with an emphasis on professional ones, while i-infoNappe tells you about the groundwater level. The Italian organisation developed Maps4You, combining geology with hiking routes in the Emilia-Romagna region. The Czech Geological Survey provides data in ArcGIS Online.

The apps deal with a wide range of topics, among them geohazards, coastline, fossils and shipwrecks. The apps mostly provide map data and 3D, data collection and tourism. Many organisations that are not developing anything stated a lack of interest or priority to do so, as well as a lack of skills. They see Android as the most important platform; all the apps are free, but some use in-app purchases. The apps are updated on a yearly basis. About 50% develop their apps in house, and most work in partnerships when developing apps. Some focus on web apps that work on mobile platforms or on cross-platform frameworks, but these are not as good as native apps, though the latter are more difficult to develop and maintain. Many use the Esri SDK, and they use open licences. Mostly there is a lack of promotion of reusing the tools – most organisations serve data. The barriers are supporting multiple platforms, software development skills, a lack of reusable software, and limited support for reuse across communities, with a heavy focus on data delivery; OGC and REST services are used to deliver data to the apps. Most respondents suggested no direct link to INSPIRE, but the principles of INSPIRE are at the basis of these applications.

Timo Aarnio presented the OSKARI platform for releasing open data to end users (http://www.oskari.org/). It offers role-based security layers with authenticated users and four levels of permissions – viewing, viewing on embedded maps, publishing, and downloading. The development of Oskari started in 2011; it is used by 16 member organisations, and the core team runs from the National Land Survey of Finland. It is used in the Arctic SDI, ELF and the Finnish Geoportal – and in lots of embedded maps. The end-user features allow searching metadata and searching map layers by data provider or INSPIRE theme. There is drag-and-drop for layers and customisation of features in WFS. Sharing is also possible, with users uploading shapefiles. There is also printing functionality that produces PNG or PDF, and embedded maps, so you can create a map and then embed it in your web page. The data sources that they support are OGC web services – WMS, WMTS, WFS, CSW – as well as ArcGIS REST, data import for shapefiles and KML, and JSON for thematic maps. Spatial analysis is provided with the OGC Web Processing Service, offering six basic methods – buffer, aggregate, union, intersect, union of analysed layers, and area and sector. They are planning to add thematic maps and more advanced spatial analysis methods, and to improve mobile device support. 20-30 people work on Oskari, with 6 people at the core of it.
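The four permission levels form a natural ordering (each level implies the ones below it), so a role-based check can be sketched as follows. This is my guess at how such layer security might be wired, not Oskari's actual code; the roles and layer names are invented:

```python
# Ordered permission levels, least to most privileged (illustrative only).
LEVELS = ["view", "view_embedded", "publish", "download"]

# Hypothetical role grants: each role maps a layer to its highest allowed level.
ROLE_GRANTS = {
    "guest": {"background_map": "view"},
    "member_org": {"background_map": "publish", "cadastre": "download"},
}

def is_allowed(role, layer, action):
    """True if the role's grant on the layer covers the requested action."""
    granted = ROLE_GRANTS.get(role, {}).get(layer)
    if granted is None:
        return False  # no grant at all for this layer
    # A grant at level N covers every action at level <= N.
    return LEVELS.index(action) <= LEVELS.index(granted)
```

Treating the levels as an ordered ladder keeps the policy table small: each (role, layer) pair stores one level instead of four booleans.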

The final session focused on knowledge based economy and the link to INSPIRE.

Andrew Trigg provided the perspective of HMLR on fuelling the knowledge-based economy with open data. The Land Registry deals with 24 million titles, with 5 million property transactions a year. They have provided open access to individual titles since 1990, and INSPIRE and the open data agenda have been important to the transition that they went through in the last 10 years. Their mission now includes an explicit reference to the management and reuse of land and property data, and this is important in terms of how the organisation defines itself. In the UK context there is a shift to open data through initiatives such as INSPIRE, the Open Government Partnership, the G8 Open Data Charter (open by default) and national implementation plans. HMLR needs to be INSPIRE compliant, but in addition they have to deal with the Public Data Group, the outcomes of the Shakespeare review, and a commitment to a national information infrastructure. As a result, HMLR now lists 150 datasets, though some are not open due to the need to protect against fraud and other factors. INSPIRE was the first catalyst to indicate that HMLR needed to change practices, and it allowed the people in the organisation to drive changes, secure resources and invest in infrastructure. It was also important in highlighting to the board of the organisation that data would become important, and it was a driver for improving quality before releasing data. The parcel data is available for use without registration; there have been 30,000 downloads of the index polygons by people who can potentially use them. They aim to release everything that they can by 2018.

The challenges that HMLR experienced include data identification, infrastructure, governance, data formats and others. But the most important for the knowledge-based economy are awareness, customer insight, benefit measurement and sustainable finance. HMLR invested effort in promoting the reuse of their data; however, because there is no registration, there is no customer insight and no relationships are being developed with end users – a voluntary registration process might be an opportunity to develop such relations. Evidence is growing that few people are using the data because they have low confidence in the commitment to keep providing it and to guarantee stability in its format, which they would need in order to build applications on top of it; that will require building trust, and knowing who got the data is critical here, too. Finally, sustainable finance is a major issue – HMLR is not allowed to cross-finance from other areas of activity, so they have to charge for some of their data.

Henning Sten Hansen from Aalborg University talked about the role of education. The talk was somewhat critical of the corporatisation of higher education, while also accepting some of its aspects, so what follows might misrepresent his views, though I think he tried mostly to raise questions. Henning started by noting that knowledge workers are defined by the OECD as people who work autonomously and reflectively, use tools effectively and interactively, and work well in heterogeneous groups (so are capable of communicating and sharing knowledge). The Danish government’s current paradigm is to move from the ‘welfare society’ to the ‘competitive society’, so the economic aspects of education are seen as important, as is the contribution to the enterprise sector, with expectations that students will learn to be creative and entrepreneurial. The government requires more efficiency and performance from higher education and as a result reduces the autonomy of individual academics. There is also an expectation of certain impacts from academic research, with an emphasis on STEM for economic growth, governance support from social science, and the humanities expected to contribute to creativity and social relationships. Commercialisation is highlighted, pushing patenting, research parks and commercial spin-offs. There is also a lot of corporate-style behaviour in the university sector – universities are sometimes managed as firms and thought of as a consumer product. He sees a problem in today’s strange focus on the opinion that you can measure everything with numbers alone. The ‘Google dream’ is also invoked – assuming that anyone from any country can create a global company. However, researchers who need time to develop their ideas more deeply – such as Niels Bohr, who didn’t publish much or secure funding – wouldn’t survive in the current system. But is there a link between education and success?
The LEGO founder didn’t have any formal education [though with this example, as with Bill Gates and Steve Jobs, strangely their businesses employ lots of PhDs – so there is a confusion between the person who starts a business and the realisation of it]. He then moved from this general context to INSPIRE. Geoinformation plays a strong role in e-governance and in the private sector, with the increasing importance of location-based services. In this context, projects such as GI-N2K (Geographic Information Need to Know) are important. This is a pan-European project to take the body of knowledge that was formed in the US and adapt it to current needs. They have already identified major gaps between the supply side (what people are being taught) and the demand side – there are 4 areas that are covered on the supply side, but the demand side wants wider areas to be covered. They aim to develop a new BoK for Europe and to facilitate knowledge exchange between institutions. He concluded that higher education is without doubt a prerequisite for the knowledge economy, but the link to innovation is unclear. Challenges remain: highly educated people crowd out the job market and do routine work that does not match their skills; the relationship to entrepreneurship and innovation, and the knowledge needed to implement ideas, are unclear; and there is the question of the impact of increased control of universities on innovation and education – and of how to respond quickly to market demands for skills when the time scales are so different.

Giacomo Martirano provided the perspective of a micro-enterprise (http://www.epsilon-italia.it/IT/) in southern Italy. They are involved in INSPIRE across different projects – GeoSmartCities, Smart-Islands and SmeSpire – so lots of R&D funding from the EU. They are also involved in providing GIS services in their very local environment. From the perspective of an SME, he sees barriers that are organisational, technical and financial. They have seen many cases of misalignment of technical competencies between different organisations, which means that they can’t participate fully in projects, as well as misalignment of the technical abilities of clients and suppliers, and heterogeneity in client organisation culture, which adds challenges. The financial management of projects and payment arrangements create problems for SMEs joining in, because of their sensitivity to cash flow. They have experienced cases where contracts were awarded at a price sometimes 40% below the reference one. There is a need to invest more and more time with less aware partners and clients. When moving to the next generation of INSPIRE, there is a need to engage micro-SMEs in the discussion – ‘don’t leave us alone’ – as the market is unfair. There is also a risk that member states, once the push for implementation is reduced and the EU driver is gone, will not continue to invest. His suggestion is to progress and think of INSPIRE as a service – SDI as a Service can allow SMEs to join in. There is a need for cooperation between small and big players in the market.

Andrea Halmos (public services unit, DG CONNECT), covering e-government, noted her realisation that INSPIRE is more than ‘just environmental information’. From DG CONNECT’s view, ICT enables open government, and the aim of the Digital Agenda for Europe is to empower citizens and businesses, strengthen the internal market, highlight efficiency and effectiveness, and put in place the recognised pre-conditions. One focus is the effort to put public services in digital format and provide them in a cross-border way. The principles are to be user-centred, with transparency and cross-border support – they have used life events for the design. There are specific activities in sharing identity details, procurement, patient prescriptions, business, and justice. They see these projects as building blocks for new services that work in different areas. They face challenges such as the financial crisis, but also the challenges of new technologies and social media, as well as more open data. So what is next for public administration? They need to deal with customers through open data, open processes and open services – with importance given to transparency, collaboration and participation (http://www.govloop.com/profiles/blogs/three-dimensions-of-open-government). The services are open for others to join in, allowing third parties to create different public services. There are analogies with opening decision-making processes and supporting collaboration with people – it might increase trust in, and the accountability of, government. The public service needs to collaborate with third parties to create better or new services. ICT is only an enabler – you need to deal with human capital, organisational issues, cultural issues, processes and business models; it even questions the role of government and what it needs to do in the future. There is also the governance question – what is the public value that is created at the end? Can government become a platform that others use to create value?
They are focusing on societal challenges. Comments on their framework proposals are welcome – it is available at http://ec.europa.eu/digital-agenda/en/news/vision-public-services

After these presentations, and when Alessandro Annoni (who was chairing the panel) completed the first round of questions, I was bothered that in all these talks about the knowledge-based economy only the government and the private sector were mentioned as actors, and even when discussing the development of new services on top of the open data and services, the expectation was only for the private sector to act. I therefore asked about the role of the third sector and civil society within INSPIRE and the visions that the different speakers presented. I even gave the example of mySociety – mainly to demonstrate that third-sector organisations have a role to play.

To my astonishment, Henning, Giacomo, Andrea and Alessandro answered this question by, first, not treating civil society as organisations but mostly as individual citizens – a framing that allows commercial bodies, large and small, to act, but gives citizens no clear role in coming together and acting. Secondly, the four of them saw the role of citizens only as providers of data and information – such as the reporting in FixMyStreet. Moreover, each one repeated that despite being low-quality data, it is useful in some ways. For example, Alessandro highlighted OSM mapping in Africa as a case where you accept it because there is nothing else (really?!?), whereas in other places it should be used only when it is needed, because of the quality issue – for example, in an emergency situation when it is timely.

Apart from yet another repetition of dismissing citizen-generated environmental information on the false argument of data quality (see Caren Cooper’s post on this issue), the views that were presented in the talks helped me in crystallising some of my thoughts about the conference.

As one would expect, because the participants are civil servants, on stage and in presentations they follow the main line of the decision makers for whom they work, and therefore you could hear the official line about efficiency, managing to do more with reduced budgets and investment, emphasising economic growth and a very narrow definition of the economy that matters. Different views were expressed during breaks.

The degree to which citizens are not included in the picture was unsurprising given the mode of thinking expressed in the conference about the aims of information as ‘economic fuel’. While the tokenism of improving transparency, or even empowering citizens, appeared on some slides and in discussions, citizens are not explicitly included in a meaningful and significant way in the consideration of the services or in the visions of ‘government as platform’. They are perceived as customers or service users. The lessons that were learned in environmental policy areas in the 1980s and 1990s – to provide an explicit role for civil society, NGOs and social enterprises within the process of governance and decision making – are missing. Maybe this is because a thriving civil society needs active government investment (community centres need to be built, and someone needs to be employed to run them), so it doesn’t match the goals of those who are using austerity as a political tool.

Connected to that is the fact that although, again at the tokenism level, INSPIRE is about environmental applications, the implementation is now all driven by a narrow economic argument. As with citizenship issues, environmental aspects are marginalised at best, or ignored.

The comment about data quality, and some responses to my talk, reminded me of Ed Parsons’ commentary from 2008 about the UK GIS community’s reaction to Web Mapping 2.0/Neogeography/GeoWeb. Six years on from that, the people who are carrying out the most important geographic information infrastructure project currently under way – and it is progressing well by the look of it – seem somewhat resistant to trends that are happening around them. Within the core area that INSPIRE is supposed to handle (environmental applications), citizen science has the longest history and is already used extensively. VGI is no longer new, and crowdsourcing as a source of actionable information now has a decade of history and more behind it. Yet, at least in the presentations and the talks, citizens and civil-society organisations have very little role unless they are controlled and marshalled.

Despite all this critique, I have to end on a positive note. It has been a while since I’ve been at a GIS conference that includes the people who work in government and other large organisations, so I found the conference very interesting as a way to reconnect and learn about the nature of geographic information management at this scale. It was also good to see how individuals champion the use of GeoWeb tools, and the degree to which people are doing user-centred design.

Opening geodata is an interesting issue for the INSPIRE directive. INSPIRE was set up before the hype of Government 2.0 grew and the pressure to open data became apparent, so it was not explicitly designed with these aspects in mind. Therefore, the way in which the organisations that are implementing INSPIRE are dealing with the provision of open and linked data is bound to bring up interesting challenges.

Dealing with open and linked data was the topic that I followed on the second day of the INSPIRE 2014 conference. The notes below are my interpretation of some of the talks.

Tina Svan Colding discussed the Danish attempt to estimate the value (mostly economic) of open geographic data. The study was done in collaboration with Deloitte, and they started with a theory of change – expectations that they would see increased demand from existing customers and new ones. The next assumption was that there would be new products, companies and lower prices, and that this would lead to efficiency and better decision making across the public and private sector, but also increase transparency to citizens. In short, they are trying to capture the monetary value, with a bit on the side. They used statistics and interviews with key people in the public and private sector, and followed that with a wider survey – all with existing users of the data. The number of users of their data increased from 800 to over 10,000 within a year. The Danish system requires users to register to get the data, so these are bulk numbers, but it also means they could contact users to ask further questions. Among the new users, many are citizens (66%) and NGOs (3%). A further 6% are in the public sector, which in principle had access in the past, but the improved accessibility of the data made it usable to new people in this sector. In the private sector, construction, utilities and many other companies are using the data. The environmental bodies are aiming to use the data in new ways, to make environmental consultation more engaging to their audience (is this another deficit-model assumption – that people don’t engage because it’s difficult to access data?). Issues that people experienced include accessibility for users who don’t know how to use GIS and other datasets. They also identified requests for further data releases. In the public sector, 80% identified potential for savings with the data (though that is the type of expectation that they live within!).

Roope Tervo, from the Finnish Meteorological Institute, talked about the implementation of their open data portal. Their methodology kept users very much in mind and is a nice example of a user-centred data application. They hold a lot of data – from meteorological observations to air quality data (of course, it all depends on the role of the institute). They chose to use WFS as the download service, with GML as the data format and coverage data in meteorological formats (e.g. GRIB). He showed that the selection of data models (all of which can be compatible with the legislation) can have very different outcomes in file size and in the complexity of parsing the information. Nice to see that they considered user needs – though not formally. They created an open source JavaScript library that makes it easy to use the data – going beyond just releasing the data to supporting how it is used. They issue API keys based on registration, and had to limit the number of requests per day, with the same limit for the view service. After a year, they have 5,000 users and 100,000 data downloads per day, and the numbers are increasing – slowly. They are considering how to help clients with the complex data models.
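To make the WFS/GML combination concrete, here is a minimal sketch of what a client for such a service might look like: building a WFS 2.0 GetFeature request for a stored query, and pulling parameter name/value pairs out of a GML response. The endpoint, stored-query name and response element names are modelled loosely on FMI's open data service but should be treated as illustrative assumptions, not the institute's actual API.

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Hypothetical endpoint -- the real service requires a registered API key.
BASE = "https://opendata.fmi.fi/wfs"

def getfeature_url(stored_query, **params):
    """Build a WFS 2.0 GetFeature request URL for a stored query."""
    query = {"service": "WFS", "version": "2.0.0",
             "request": "GetFeature", "storedquery_id": stored_query}
    query.update(params)
    return BASE + "?" + urlencode(query)

# A trimmed GML response in the general shape such services return;
# the namespace and element names are illustrative.
SAMPLE_GML = """<wfs:FeatureCollection
    xmlns:wfs="http://www.opengis.net/wfs/2.0"
    xmlns:BsWfs="http://xml.fmi.fi/schema/wfs/2.0">
  <wfs:member>
    <BsWfs:BsWfsElement>
      <BsWfs:ParameterName>t2m</BsWfs:ParameterName>
      <BsWfs:ParameterValue>12.3</BsWfs:ParameterValue>
    </BsWfs:BsWfsElement>
  </wfs:member>
</wfs:FeatureCollection>"""

def parse_observations(gml_text):
    """Extract {parameter: value} pairs from a GML feature collection."""
    ns = {"BsWfs": "http://xml.fmi.fi/schema/wfs/2.0"}
    root = ET.fromstring(gml_text)
    return {el.find("BsWfs:ParameterName", ns).text:
            float(el.find("BsWfs:ParameterValue", ns).text)
            for el in root.iter("{http://xml.fmi.fi/schema/wfs/2.0}BsWfsElement")}
```

The amount of namespace plumbing needed even for this toy response illustrates his point about data-model choices driving parsing complexity – and why a helper library on top of the raw service matters.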

Panagiotis Tziachris was exploring the clash between 'heavy duty' and complex INSPIRE standards and the usual lightweight approaches that are common in open data portals (I think he meant the ones in the commercial sector that allow some reuse of data). This is a project of 13 Mediterranean regions in Spain, Italy, Slovenia, Montenegro, Greece, Cyprus and Malta. The HOMER project (website http://homerproject.eu/) used different mechanisms, including hackathons, to share knowledge and experience between more experienced players and those that are new to the area, and found them to be a good way to share practical knowledge between partners. This is an interesting use of a purposeful hackathon within a known group of people in a project, and I think it can be useful in other cases. Interestingly, on the legal side, they had to go beyond the usual documents that are provided in an EU consortium: in order to allow partners to share information, they created a memorandum of understanding, which was needed to deal with IP and similar issues. They also adopted common open data practices – such as the CKAN API, which is widely used in open data websites. They noticed a separation between central administration and local or regional administration – the competency of the more local organisations (municipality or region) is sometimes limited because knowledge sits elsewhere (in central government), or the levels are at different stages of implementation, and disagreements about releasing the data can arise. Another issue is that open data is sometimes provided through regional portals while a different organisation at the national level (an environment ministry or cadastre body) is responsible for INSPIRE. The lack of capabilities at different governmental levels adds to the challenges of setting up open data systems.
Sometimes open data legislation is only about the final stage of the process and not about how to get there, while INSPIRE is all about the preparation and not about the release of data – this also creates a mismatch.
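The CKAN API mentioned above is part of why CKAN portals feel lightweight to reuse: a dataset search is a single JSON-over-HTTP call. The sketch below builds a `package_search` request (CKAN Action API v3) and parses a response; the portal URL and dataset names are made-up placeholders, and the response is a trimmed offline sample rather than a live call.

```python
import json
from urllib.parse import urlencode

def package_search_url(portal, query, rows=10):
    """Build a CKAN Action API v3 package_search request URL."""
    return f"{portal}/api/3/action/package_search?" + urlencode(
        {"q": query, "rows": rows})

# A trimmed response in the shape CKAN returns; the values are invented.
SAMPLE = json.dumps({"success": True,
                     "result": {"count": 2,
                                "results": [{"name": "land-cover-2012"},
                                            {"name": "river-basins"}]}})

def dataset_names(response_text):
    """Return the dataset names from a package_search response."""
    body = json.loads(response_text)
    if not body.get("success"):
        raise RuntimeError("CKAN request failed")
    return [pkg["name"] for pkg in body["result"]["results"]]
```

Compared with the layered XML of an INSPIRE download service, this is the "clash of cultures" in miniature: one GET request, one JSON parse.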

Adam Iwaniak discussed how "over-engineering" makes the INSPIRE directive inoperable or irrelevant to users, on the basis of his experience in Poland. He asks "what are the user needs?" and demonstrated the point by noting that after half a term of teaching students about the importance of metadata, when it came to actively searching for metadata in an assignment, the students didn't use any of the specialist portals but just Google. Based on this and similar experiences, he suggested creating a thesaurus that describes the keywords and features in the products, so that searching can follow user needs. Of course, the implementation is more complex, and he therefore suggests an approach that works within the semantic web and uses RDF definitions, making the data searchable and indexable in search engines so it can be found. The core message was to adapt the delivery of information to the way the user is most likely to search for it – metadata is relevant when the producer makes sure that a Google search finds it.

Jesus Estrada Vilegas from the SmartOpenData project http://www.smartopendata.eu/ discussed the implementation of some ideas that can work within the INSPIRE context while providing open data – in particular, a Spanish and Portuguese data-sharing effort. Within the project, they are providing access to the data by harmonising it and then making it linked data. Not all the data is open, and the focus of their pilot is agroforestry land management. They are testing delivery of the data in both INSPIRE-compliant formats and the internal organisation format to see which is more efficient and useful. INSPIRE is a good point from which to start developing linked data, but there is also a need to compare it with other ways of linking the data.

Massimo Zotti talked about linked open data from Earth observations in the context of business activities, since he works in a company that provides software for data portals. He explored the business model of open data, INSPIRE and the Copernicus programme. Data that comes from Earth observation can be turned into information – for example, identifying the parts of the soil that get sealed and don't allow water to be absorbed, or information about forest fires or floods. These are the bits of useful information that are needed for decision making. Once the information exists, it is possible to identify increases in land use or other aspects that can inform policy. However, we need to notice that dealing with open data means a lot of work is put into bringing datasets together. The standardisation of data transfer, and the development of approaches that help with machine-to-machine analysis, are important for this aim. Fused datasets become more useful and relevant to the knowledge production process, and a dashboard approach to displaying the information and the processing can help end users access the linked data 'cloud'. Standardisation of data is very important to facilitate such automatic analysis, and having standard ontologies is also necessary. From my view, this is not so much a business model as a typical account of operations in the Earth observation area, where a lot of energy is spent justifying that the data can be useful and important for decision making – but without quantifying the effort that is required to go through the process, or the speed at which results can be achieved (will the answer come in time for the decision?). A member of the audience also raised the point that the assumption that machine-to-machine automatic models will produce valuable information all by themselves is questionable.

Maria Jose Vale talked about the Portuguese experience in delivering open data. The organisation that she works in deals with cadastre and land use information, and she also discussed activities of the SmartOpenData project. She described the principles of open data that they considered: data must be complete, primary, timely, accessible and processable; data formats must be well known; the data should be permanent; and usage costs must be addressed properly. For good governance you need to know the quality of the data and the reliability of its delivery over time, so having automatic ways for the data to propagate to users fits within these principles. The benefits of open data that she identified are mostly technical, but also economic (and these are mentioned many times – though you need evidence similar to the Danish case to prove them!). The issues or challenges of open data include how to deal with fuzzy data when releasing it (my view: tell people that it needs cleaning); safety, as there are both national and personal issues; financial sustainability for the producers of the data; rates of update; and addressing user and government needs properly. In a case study that she described, they looked at land use and land cover changes to assess changes in river use in a river watershed. They needed about 15 datasets for the analysis and used information from CORINE land cover from different years. For example, they have seen forest that changed to woodland because of fire, which influences water quality too. Data interoperability and linking data allow the integrated modelling of the evolution of the watershed.

Francisco Lopez-Pelicer covered the Spanish experience and the PlanetData project http://www.planet-data.eu/, which looks at large-scale public data management – specifically, a pilot on VGI and linked data, with a background in SDI and INSPIRE. There is big potential, but many GI producers don't do it yet. The issue is legacy GIS approaches such as WMS and WFS, which are standards endorsed in INSPIRE but do not necessarily fit into a linked data framework. In the work that he was involved in, they try to address complex GI problems with linked data. To do that, they turn a WMS into a linked data server by adding URIs and POST/PUT/DELETE resources. A semantic client sees this as a linked data server even though it can be compliant with other standards. To try it out, they use the open national map as the authoritative source and OpenStreetMap as the VGI source, and release both as linked data. They are exploring how to convert a large authoritative GI dataset into linked data and also how to link it to other sources. They are also using it as an experiment in crowdsourcing platform development – creating a tool that helps assess the quality of each dataset. The aim is to run quality experiments and measure the data quality trade-offs associated with using authoritative or crowdsourced information. Their service can behave as both a WMS and a 'Linked Map Server'. LinkedMap, which is the name of this service, provides the ability to edit the data and explore OpenStreetMap and the government data – they aim to run the experiment in the summer, and it can be found at http://linkedmap.unizar.es/. The reason for choosing WMS as the delivery standard is that a previous crawl over the web showed that WMS is the most widely available service, so it is assumed to be relevant to users, or at least the one that most users can work with.
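The core move described above – giving each map feature a stable URI and treating POST/PUT/DELETE on that URI as edits, so that a semantic client sees an ordinary linked data resource – can be sketched in a few lines. Everything here is illustrative (the namespace, the in-memory store, the status codes), not the actual LinkedMap implementation.

```python
# Hypothetical base namespace for minted feature URIs.
BASE = "http://example.org/linkedmap/"

def feature_uri(layer, feature_id):
    """Mint a stable, dereferenceable URI for one map feature."""
    return f"{BASE}{layer}/{feature_id}"

class FeatureStore:
    """In-memory stand-in for the server's resource handling:
    each feature URI supports GET/PUT/DELETE like any web resource."""

    def __init__(self):
        self.features = {}

    def handle(self, method, uri, body=None):
        """Return (status_code, body) for a request against a feature URI."""
        if method == "PUT":              # create or replace the resource
            created = uri not in self.features
            self.features[uri] = body
            return (201 if created else 200), self.features[uri]
        if method == "GET":              # dereference the feature URI
            if uri in self.features:
                return 200, self.features[uri]
            return 404, None
        if method == "DELETE":           # retract the feature
            if uri in self.features:
                return 204, self.features.pop(uri)
            return 404, None
        return 405, None                 # method not allowed
```

The point of the sketch is the indirection: the same store could sit behind a WMS front end, while clients that speak plain HTTP verbs see a read/write linked data resource per feature.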

Paul van Genuchten talked about the GeoCat experience in a range of projects, which included support to Environment Canada among other activities. INSPIRE meeting open data can be a clash of cultures, and he used 'neogeography' as the term to describe the open data culture (going back to the neogeo and paleogeo debate, which I thought was over and done – but clearly it is relevant in this context). INSPIRE recommends publishing data openly, and this is important to ensure that it gets a big potential audience, as well as the 'innovation energy' that exists among the 'neogeo'/'open data' people. The common expectations within this culture are that APIs are easy to use, interfaces are clean, and so on – but under the hood there are similarities in the way things work. The community of open data users perceives INSPIRE datasets as complex. Many open data people are focused on and interested in OpenStreetMap, look at companies such as MapBox as role models, and favour formats such as GeoJSON and TopoJSON. Data is versioned and managed in a git-like process, and the projection of choice is web mercator. There are now not only raster tiles but also vector tiles. Data providers can use these characteristics of the audience to help people use their data, and there are also intermediaries that deliver the data and convert it to more 'digestible' forms. He noted CitySDK by Waag.org, which grabs data from INSPIRE and then delivers it to users in ways that suit open data practices. He demonstrated the case of Environment Canada, where they created a set of files that are suitable for both human and machine use.
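The web mercator projection he mentions is a good example of why the 'neogeo' stack feels simple: the spherical forward projection is two lines of trigonometry, which is part of why it became the default for web tiles despite its cartographic drawbacks. A minimal sketch of the standard formulas:

```python
import math

R = 6378137.0  # WGS84 semi-major axis, used as the sphere radius in metres

def to_web_mercator(lon_deg, lat_deg):
    """Project WGS84 degrees to spherical Web Mercator (EPSG:3857) metres."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y
```

At latitude ±85.05113° the y value reaches ±πR, matching the x extent at ±180° – which is what makes the world a square and tiling so convenient.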

Ed Parsons finished the set of talks of the day (talk link goo.gl/9uOy5N) with a talk about a multi-channel approach to maximising the benefits of INSPIRE. He highlighted that it's not about linked data, although linked data is part of the solution to making data accessible. Accessibility always wins online – and people make compromises (e.g. sound quality in CDs versus Spotify). Google Earth can be seen as a new channel that makes things accessible; while the back-end is not new technology, the ease of access made a big difference. Denmark's use of Minecraft to release GI is an example of another channel. Notice the change over the past 10 years in video delivery, for example: in the early days, video delivery was complex and required many steps, expensive software and infrastructure, and this is somewhat comparable to current practice within geographic information. Making things accessible through channels like YouTube, and the whole ecosystem around it, changed the way video is used, uploaded and consumed, and of course changes in devices (e.g. recording on the phone) made it even easier. Focusing on maps themselves, people might want different things that are maps, and not only the latest searchable map that Google provides – e.g. the administrative map of medieval Denmark, maps of floods, or something else specific that is not part of general web mapping. People search for something, and you want to give them maps for some queries and images for others (as in searching for Yosemite trails vs. Yosemite). There are plenty of maps that people find useful, and for that Google is now promoting Google Maps Gallery – with tools to upload, manage and display maps. It is also important to consider that mapping information needs to be accessible to people who are using mobile devices.
The web infrastructure of Google (or ArcGIS Online) provides the scalability to deal with many users and the ability to deliver to different platforms such as mobile. The gallery allows people to brand their maps. Google wants to identify authoritative data that comes from official bodies, and then to have additional information that is displayed differently. But separating facts and authoritative information from commentary is difficult, and that is where semantics plays an important role. He also noted that Google Maps Engine is just maps – a visual representation without an aim to provide GIS analysis tools.
