New paper: The epistemology(s) of volunteered geographic information: a critique

Considering how long Renée Sieber (McGill University) and I have known each other, and that we work in similar areas (participatory GIS, participatory geoweb, open data, socio-technical aspects of GIS, environmental information), I’m very pleased that a collaborative paper that we developed together is finally published.

The paper ‘The epistemology(s) of volunteered geographic information: a critique‘ took some time to evolve. We started jotting down ideas in late 2011, and slowly developed the paper until it was ready, after several rounds of peer review, for publication in early 2014, but various delays led to its publication only now. What is pleasing is that the long development time did not reduce the paper’s relevance – we hope! (we kept updating it as we went along). Because the paper looks at philosophical aspects of GIScience, we needed periods of reflection and re-reading to make sure that the whole paper comes together, and I’m pleased with the way ideas are presented and discussed in it. Now that it’s out, we will need to wait and see how it will be received.

The abstract of the paper is:

Numerous exegeses have been written about the epistemologies of volunteered geographic information (VGI). We contend that VGI is itself a socially constructed epistemology crafted in the discipline of geography, which when re-examined, does not sit comfortably with either GIScience or critical GIS scholarship. Using insights from Albert Borgmann’s philosophy of technology we offer a critique that, rather than appreciating the contours of this new form of data, truth appears to derive from traditional analytic views of information found within GIScience. This is assisted by structures that enable VGI to be treated as independent of the process that led to its creation. Allusions to individual emancipation further hamper VGI and problematise participatory practices in mapping/geospatial technologies (e.g. public participation geographic information systems). The paper concludes with implications of this epistemological turn and prescriptions for designing systems and advancing the field to ensure nuanced views of participation within the core conceptualisation of VGI.

The paper is open access (so anyone can download it) and is available on the Geo website.

Building Centre – from Mapping to Making

The London-based Building Centre organised an evening event – From Mapping to Making – which looked at how the “radical evolution in the making and meaning of maps is influencing creative output. New approaches to data capture and integration – from drones to crowd-sourcing – suggest maps are changing their impact on our working life, particularly in design.” The event included five speakers (including me, on behalf of Mapping for Change) and a short discussion.

Lewis Blackwell of the Building Centre opened the evening by noting that in a dedicated exhibition on visualisation and the city, the Building Centre is looking at new visualisation techniques. He realised that a lot of the visualisations are connected to mapping – it’s circular: mapping can ask and answer questions about the design process of the built environment, and changes in the built environment create new data. The set of talks in the evening explored the role of mapping.

Rollo Home, Geospatial Product Development Manager, Ordnance Survey (OS), started by describing the OS as the ‘oldest data company in the world‘. The OS thinks of itself as a data company – the traditional mapping products that are so familiar represent only 5% of its turnover. The history of the OS goes back to 1746 and William Roy’s work on accurately mapping Britain. The first maps were produced in Kent, for the purpose of positioning ordnance. Today’s maps, when visualised, look somewhat the same as maps from 1800, but they are in machine-readable formats, which means that the underlying information is very different. Demands for mapping have changed over the years: originally for ordnance, then for land information and taxation, and later to help the development of the railways. During WWI and WWII the OS led many technological innovations – from the national grid in the 1930s to photogrammetry. In 1973 the first digital maps were produced, and the process was completed in the 1980s. This was, in terms of data structures, still structured as a map. Only in 2000 did MasterMap appear, in a more machine-readable format that is updated 10,000 times a day, based on an Oracle database (the biggest spatial database in the world) – but it’s not a map. Real-world information is modelled to allow for structure and meaning. The ability to answer questions from the database is critical to decision-making. Many parts of the information can be made explicit – from the area of rear gardens to the height of a building. They see developments in the areas of oblique image capture, 3D data, details under the roof, and facades, and they do a lot of research to set their future directions – e.g. the challenges of capturing data in point clouds. They see data coming from different sources, including social media, satellites, UAVs, and official sources. Most Smart Cities/Transport application areas need geospatial information, and the OS is moving from mapping to data, and to enabling better decisions.

Rita Lambert, Development Planning Unit, UCL, covered the ReMap Lima project – running since 2012, and looking at marginalised neighbourhoods in the city. The project focused on the questions of what we are mapping and what we are making through representations. Maps contain the potential of what might become – we are making maps and models that are about ideas and possibilities for more just cities. The project is a collaboration between the DPU and CASA at UCL, with 3 NGOs in Lima, and 40 participants from the city. They wanted to explore the political agency of mapping, open up spaces to negotiate outcomes, and expand the possibilities of spatial analysis in marginalised areas through a participatory action-learning approach. The use of technology is in the context of very specific theoretical aims, and the use of UAVs is deliberate, to explore their progressive potential. They mapped the historic centre, which is overmapped and marginalised through over-representation (e.g. using maps to show that it needs regeneration), while the periphery is undermapped – a large part of the city (50% of the area) that is marginalised through omission. Maps can act through undermapping or overmapping. The issues are very different – evictions, lack of services, and loss of cultural heritage (people and buildings) at the centre, while in the informal settlements there are risks, land trafficking, destruction of ecological infrastructure, and a lack of coordination of spatial planning between places. The process that they followed included mapping from the sky (with a drone) and mapping from the ground (through participatory mapping using aerial images). The drones provided imagery in an area that changes rapidly – and the outputs were used in participatory mapping, with the people on the ground deciding what to map and where to map. The results allow the identification of evictions through changes to buildings that can be observed from above.
The mapping process itself was also a means to strengthen community organisations. The use of 3D visualisation at the centre and at the periphery helped in understanding the risks that are emerging and the changes to the area. Data collection uses both maps and tools such as EpiCollect+ and community mapping, as well as printed 3D models that can be used in discussions and conversations. The work carries on as the local residents continue it. The conclusion: careful consideration of the use of technology in context, and mapping from the sky and from the ground go hand in hand. Creating these new representations is significant, as is reflecting on what it is that we are producing.

Simon Mabey, Digital Services Lead for City Modelling, Arup. Simon discussed city modelling at Arup – the move from visualisation to more sophisticated models. He has led on modelling cities in 3D since 1988, when visualisation of future designs was done by stitching together pieces of paper and photos. The rebuilding of Manchester in the mid-1990s led to the development of 3D urban modelling, with animations, and they created an interactive CD-ROM. They continued to develop the data about Manchester and then shared it with others. The models were used in different ways – from gaming software to online – trying to find ways to allow people to use them in real-world contexts. Many models are used in interactive displays – e.g. for attracting inward investment. They went on to model many cities across the UK, with different levels of detail and coverage. They also started to identify features underground – utilities and such. Models are kept up to date through collaboration, with clients providing back information about things that they are designing and integrating BIM data. In Sheffield, they also enhance the model through the planning of new projects and activities. Models are used to communicate information to other stakeholders – e.g. traffic model outputs, and likewise pedestrian movement. Different information is used to colour-code the model (e.g. energy), as well as acoustic or flood modelling. More recently, they have moved to city analytics, understanding the structure within models – for example, understanding solar energy potential in relation to the use and consumption of a building. They find themselves needing information about what utility data exists, so it can be mapped and integrated into their analysis. They also get mobile phone data to predict the trips that people make.

I was the next speaker, on behalf of Mapping for Change. I provided the background of Mapping for Change and the approach that we are using for mapping. In the context of the other talks, which focused on technology, I emphasised that just as we try to reach out to people in the places that they use daily and fit the participatory process into their life rhythms, we need to do the same in the online environment. That means that conversations need to go where people are – so linking to Facebook, Twitter or WhatsApp. We should also recognise that people use different ways to access information – some will use just their phone, others laptops, and for others we need to think of a laptop/desktop environment. In a way, this complicates participatory mapping much more than earlier participatory web mapping systems, when participants were more used to the idea of using multiple websites for different purposes. I also mentioned the need to listen to the people that we work with, and to decide whether information should be shown online or not – taking into account what they would like to do with the data. I mentioned the work that involves citizen science (e.g. air quality monitoring), but more generally the ability to collect facts and evidence to deal with a specific issue. Finally, I also showed some examples of our new community mapping system, which is based on GeoKey.

The final talk was from Neil Clark, Founder, EYELEVEL. He is from an architectural visualisation company that works in the North East in the built environment area. They use architectural modelling together with Ordnance Survey data to position the designs, so they can be rendered accurately. Many of the processes are very expensive and complex. They have developed a tool called EYEVIEW for accurate augmented reality – working on an iPad to allow viewing models in real time. This can cut the costs of producing these models. They use a tripod to make it easier to control. The tool is the outcome of 4 years of development and allows navigating the architectural model to overlay it on the camera image. They are aiming at Accurate Visual Representation, and they follow the detailed framework that is used in London for this purpose.

The discussion that followed explored the political nature of information, and who is represented and how. A question to the OS was how open it will be with the detailed data; Rollo explained that access to the data is a complicated issue, and it needs to be funded. I found myself defending the justification for charging for highly detailed models, by suggesting imagining a situation where the universal provision of high-quality data at the national level wasn’t there, and you had to deal with each city’s data model separately.

The last discussion point was about truth in mapping and in the positions that were raised – is it about the way that people understand their own truth, or is there an absolute truth that is captured in models and maps, or represented in 3D visualisations? Interestingly, three of the talks assumed that there is a way to capture specific aspects of reality (structures, roads, pollution) and model them in numbers, while Rita and I took a more interpretive and culturally led view of representation.

Data and the City workshop (day 2)

The second day of the Data and City Workshop (here are the notes from day 1) started with the session Data Models and the City.

Pouria Amirian started with Service Oriented Design and Polyglot Binding for Efficient Sharing and Analysing of Data in Cities. The starting point is that the management of the city needs data, and therefore technologies to handle data are necessary. In the traditional pipeline, we start from sources, then use tools to move the data to a data warehouse, and then do the analytics. The problems with the traditional approach are the size of the data – the management of the data warehouse is very difficult; the need to deal with real-time data that must be answered very fast; and finally new data types – from sensors, social media, and cloud-born data that originate outside the organisation. Therefore, it is imperative to stop moving data around and instead analyse them where they are. Big Data technologies aim to resolve these issues – e.g. from the development of the Google distributed file system, which led to Hadoop, to similar technologies. Big Data relates to the technologies that are used to manage and analyse it. The stack for managing big data now includes over 40 projects to support different aspects of governance, data management, analysis, etc. Data science covers many areas – statistics, machine learning, visualisation and so on – and no single expert can know all of them (such experts are about as common as unicorns). There is interaction between data science researchers and domain experts, and that is necessary for ensuring reasonable analysis. In the city context, these technologies can be used for different purposes – for example, deciding on the allocation of bikes in the city using real-time information that includes social media (Barcelona). We can think of data scientists as active actors, but there are also opportunities for citizen data scientists using tools and technologies to perform the analysis. Citizen data scientists need data and tools – such as a visual analysis language (AzureML) that allows them to create models graphically and set a process in motion.
Access to data is required – it should be easy to find and access the data, so interoperability is important. Service-oriented architecture (which uses web services) is an enabling technology for this, and the current Open Geospatial Consortium (OGC) standards require some further development and changes to make them relevant to this environment. Different services can be provided to different users with different needs [comment: but that increases maintenance and complexity]. No single stack provides for all the needs.
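As a concrete illustration of the service-oriented approach (my addition, not from the talk), an OGC web service such as WFS exposes features through a standard request, so different clients can retrieve the same city data without direct access to the underlying database. The server address and layer name here are hypothetical:

```
https://example.org/geoserver/wfs?
  service=WFS&
  version=2.0.0&
  request=GetFeature&
  typeNames=city:buildings&
  outputFormat=application/json
```

The same layer could equally be requested as GML or another format by any standards-compliant client, which is the interoperability point being made above.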

Next, Mike Batty talked about Data about Cities: Redefining Big, Recasting Small (his paper is available here) – exploring how Big Data was always there: locations can be seen as bundles of interactions – flows in systems. However, visualisation of flows is very difficult, which makes it challenging to understand the results and check them. The core issue is that among N locations there are N^2 interactions, and this quadratic growth as N grows is a continuing challenge in understanding and managing cities. In 1964, Brian Berry suggested a system based on location, attributes and time – but the temporal dimension was suppressed for a long time. With Big Data, the temporal dimension is becoming very important. An example of how understanding data is difficult is demonstrated by travel flows – the more regions are included, the bigger the interaction matrix, and it then becomes difficult to show and make sense of all these interactions. Even trying to create scatter plots is complex and does not help to reveal much.
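A minimal sketch (my illustration, not from the talk) of the scaling problem Batty describes: the full origin-destination matrix for N locations has N^2 cells, so each doubling of the number of zones quadruples the number of interactions to visualise and check.

```python
# The full origin-destination (flow) matrix for N locations has N x N cells,
# including within-zone flows, so it grows quadratically with N.

def interaction_count(n_locations: int) -> int:
    """Number of cells in the full N x N interaction matrix."""
    return n_locations ** 2

# Doubling the number of zones quadruples the interactions to examine.
for n in (10, 100, 1000):
    print(f"{n:>5} locations -> {interaction_count(n):>9,} interactions")
```

At 1,000 zones there are already a million potential flows, which is why scatter plots and flow maps stop being readable long before the data stops growing.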

The final talk was from Jo Walsh, titled Putting Out Data Fires: life with the OpenStreetMap Data Working Group (DWG). Jo noted that she was talking from the position of a volunteer in OSM, and recalled that 10 years ago she gave a talk about technological determinism, painting a not completely utopian picture of cities, in which OpenStreetMap (OSM) was considered part of the picture. Now, in order to review the current state of OSM activities relevant to her talk, she asked on the OSM mailing list for examples. She also highlighted that OSM is big, but it’s not Big Data – it can still fit in one PostgreSQL installation. There is no anonymity in the system – you can find out quite a lot about people from their activity, and that is built into the system. There are all sorts of projects that demonstrate how OSM data is relevant to cities – such as OSM Buildings, which creates 3D buildings from the database, or the use of OSM with 3D modelling data such as DTMs. OSM provides support for editing in the browser or with an offline editor (JOSM). Importantly, it’s not only a map: OSM is also a database (like the new OSi database) – as can be shown by running searches on the database from a web interface. There are unexpected projects, such as custom clothing from maps, or Dressmap. More serious surprises are projects like the Humanitarian OSM Team and the Missing Maps project – there are issues with the quality of the data, but also with the fact that mapping is imposed from the outside on an area that is not mapped, with some elements of colonial thinking in it (see Gwilym Eddes’ critique). The InaSAFE project is an example of disaster modelling with OSM. In Poland, they extended the model to mark details of road areas and other features. All these demonstrate that OSM is getting close to the next level of using geographic information, and there are current experimentations with it. Projects such as UTC of Mappa Marcia are linking OSM to transport simulations.
Another activity is the use of historical maps.
One of the roles that Jo plays in OSM is as part of the Data Working Group, which she joined following a discussion about diversity in OSM within the community. The DWG needs some help, and its role is that of geodata thought police / janitorial judicial service / social work arm of the volunteer fire force. The DWG cleans up messy imports and deals with vandalism, but also with dispute resolution. They are similar to a volunteer fire service: when something happens, you can see the sysadmins springing into action to deal with an emerging issue. For example, someone from Uzbekistan reported that they found corruption in some new information, so you need to find the changeset and ask people to annotate more – to say what they are changing and why. OSM is self-policing and self-regulating – but different people have different ideas about what they are doing, and different groups have different views of what they want to do. There are also clashes between armchair mappers and surveying mappers – for example, a discussion between someone who is doing things remotely and a local person who says that they know the road and asks for the classification to be changed. The DWG doesn’t have a legal basis, and some issues come up because of global cases – for example, translated names that do not reflect local practices. There are tensions between commercial actors that work on OSM and normal volunteer mappers. OSM doesn’t give privileges to some users over others – so the DWG is recognised by the community and gathers authority through consensus.

The discussion that followed this session explored examples of OSM – there are conflicted areas such as Crimea and other contested territories. Pouria explained that in current distributed computing models there are data nodes; the data are kept static, and the code, rather than the data, is transferred. There is a growing bottleneck in network latency due to the amount of data. There is a hierarchy of packaging systems that you need to use in order to work with a distributed web system, so tightening up code is an issue.
Rob – there are limits to Big Data, such as hardware and software, as well as the analytics of the information; there are also limits to how far you can foster community when the size is very large and the organisation is managed by volunteers. Mike – the quality of big data presents rather different problems from traditional data, so while things are automated, making sense of the data is difficult – e.g. a tap-in without a tap-out in the Oyster data. The bigger the dataset, the bigger the issues with it might be. The level of knowledge that we get is heterogeneous in time, and the focus is transferred to the routine; but evidence is important to policy making and to making cases. Martijn – how do we move the technical systems to allow a move to focal community practice? Mike – transport modelling is based on the funders promoting the use of digital technology, and it can be done for a specific place; the question is who the users are. There is no clear view of who they are, and there is wide variety, with different users playing different roles – ‘policy analysts’ are the first users of models: they are domain experts who advise policy people, with less thinking about informed citizens. How do people react to big infrastructure projects? The articulation of the policy is different from what comes out of the models, and there are projects with open and closed mandates. Jo – OSM has a tradition of mapping parties that bring people together, but this needs a critical mass that is already there – so how do you bootstrap the process, for example how do you support a single mapper in Houston, Texas? There are cases of companies using the data while local people used historical information, which created conflict in the way that people use the data. There are cases where the tension gets very high, but it does need negotiation. Rob – there are issues around the concepts of data citizens and digital citizenship.
Jo – in terms of community governance, the OSM Foundation is very hands-off, and there isn’t a detailed process for dealing with corporate employees who map as part of their job. Evelyn – the conventions are matters of dispute and negotiation between participants, and they are being challenged all the time. One of the challenges of dealing with citizenship is to challenge the boundaries and protocols that go beyond the state, and to retain the term while separating it from the subject.

The last session of the workshop focused on Data Issues: surveillance and crime.

David Wood talked about Smart City, Surveillance City: human flourishing in a data-driven urban world. He considered smart cities as an archetype of the surveillance society; since they are part of the surveillance society, one way to deal with them is to consider resistance and abolition, to allow human flourishing. His interest is in rights – beyond privacy. What is it that we really want for human beings in this data-driven environment? We want all to flourish, and that means starting from the most marginalised, at the bottom of the social order. The idea of flourishing comes from Spinoza and also from Luciano Floridi – his anti-entropic information principle. Starting with smart cities – business and government are dependent on large quantities of data, and increase surveillance. Social science ignores the fact that these technologies provide the ground for social life. The smart city concept includes multiple visions: for example, a European vision that puts government first – how to make good government in cities, with technology as part of a wider whole. The US approach asks how we can use information management for complex urban systems; this relies on other technologies – pervasive computing, IoT, and things that are woven into the fabric of life. The third vision is the smart security vision – technology used to control urban terrain, with military techniques applied in cities (also used in war zones), for example biometric systems for refugees in Afghanistan, which are also for control and the provision of services. The history goes back to cybernetics and to policing initiatives from the colonial era. The visions overlap – security is not overt in them (apart from for military actors). Smart cities are inevitably surveillance cities – a collection of data for the purposeful control of the population.
Specific concerns of researchers are the targeting of people who fit a certain profile, and the aggregation of private data for profit at the expense of those who are involved. The critique of surveillance concerns sorting, the unfair treatment of people, etc. Beyond that – as discussed in the special issue on surveillance and empowerment – there are positive potentials, and many of these systems have a role in the common good. We need to think about the city within neoliberal capitalism, which separates people in space along specific lines and areas, from borders to buildings – trying to make the city a tamed zone; but the dangerous parts of city life are also a source of opportunities and creativity. The smart city fits well into this aspect – stopping the city from being disorderly. There is a paper from 1995 critiquing pervasive computing as surveillance: the more it reduces the distance between us and things, the more the world becomes a surveillance device and stops us from acting on it politically. In many of the visions of pervasive computing, the human is actually marginalised, and this is still the case. There are opportunities for social empowerment – say, to allow the elderly to return to areas that they had stopped exploring, or to overcome disability. Participation, however, is flawed – who can participate, in what, where and how? Additional issues are that participation by highly technical people is limited to a very small group, and participation can also become instrumental – ‘sensors on legs’. The smart city could enable us to discover the beach under the pavement (a concept from the Situationists) – though some beaches are being hardened. The problem is corporate ‘walled garden’ systems, and we need to remember that we might need to bring them down.

Next, Francisco Klauser talked about Michel Foucault and the smart city: power dynamics inherent in contemporary governing through code. He is interested in the power dynamics of governing through data, taking from Foucault the project of explaining how power is put into action, and thinking about different modes of power: referentiality – how does security relate to governing? normativity – what is the norm and where does it come from? spatiality – how are discipline and security spread across space? Discipline imposes a model of behaviour on others (the panopticon); security works in another way – it frees things up within limits. The two modes work together. Power starts from the study of a given reality. Data is about the management of flows. The specific relevance to data in cities was shown by looking at refrigerated warehouses that are used within the framework of the smart grid to balance energy consumption – storing and releasing the energy that is preserved in them. The whole warehouse has been objectified and quantified – down to specific products and the opening and closing of doors. He sees the core of the control in connections, processes and flows – think of liquid surveillance, beyond the human.

Finally, Teresa Scassa explored Crime Data and Analytics: Accounting for Crime in the City. Crime data is used in planning, the allocation of resources, and public policy making – a broad range of uses. It is part of oppositional social justice narratives, and it is an artefact of the interaction of citizen and state, as understood and recorded by the agents of the state operating within particular institutional cultures. She looked at crime statistics that are provided to the public as open data – derived from police files under certain guidelines – and also at emergency call data, made from calls to the police, which is used to provide crime maps. The data used in visualisations about the city is not the same data that is used for official crime statistics. There are limits to the data – institutional factors: it measures the performance of the police, not crime. It reflects how the police are doing their job – and there are plenty of acts of ‘massaging’ the data by those who are being measured: the statistics are manipulated to produce the results that are requested. The police are the sensors, and there is under-reporting of crime depending on the judgement of the police officer – e.g. sexual assault – and also the privatisation of policing, where private actors don’t report. Crime maps are offered by private-sector companies that sell analytics and then provide a public-facing option – the narrative is controlled: what will be shared and how. Crime maps are presented as being for ‘public awareness or civic engagement’, but not for transparency or accountability, and they focus on property offences rather than white-collar ones. There are ‘alternalytics’ – using other sources, such as victimisation surveys, legislation, data from hospitals and sexual assault crisis centres, and crowdsourcing. An example of bottom-up reporting is HarassMap, which started in Egypt as a way to report cases of harassment. Legal questions include how the relationship between private- and public-sector data affects ownership, access and control; how the structure of the state affects data comparability and interoperability; and how law prescribes and limits what data points can be collected or reported.

The session closed with a discussion that explored examples of solutionism, like crowdsourcing initiatives that ask the most vulnerable people in society to contribute data about assaults against them, which is highly problematic. Crime data is popular in portals such as London’s, but it is mixed with multiple concerns, such as property prices. David – the utopian concept of platform independence, and the assumption that platforms are without values, is inherently wrong.

The workshop closed with a discussion of the main ideas that emerged from it and lessons. How are all these things playing out. Some questions that started emerging are questions on how crowdsourcing can be bottom up (OSM) and sometime top-down, with issues about data cultures in Citizen Science, for example. There are questions on to what degree the political aspects of citizenship and subjectivity are playing out in citizen science. Re-engineering information in new ways, and rural/urban divide are issues that bodies such as Ordnance Survey need to face, there are conflicts within data that is an interesting piece, and to ensure that the data is useful. The sensors on legs is a concept that can be relevant to bodies such as Ordnance Survey. The concept of stack – it also relevant to where we position our research and what different researchers do: starting from the technical aspects to how people engage, and the workshop gave a slicing through these layers. An issue that is left outside is the business aspect – who will use it, how it is paid. We need the public libraries with the information, but also the skills to do things with these data. The data economy is important and some data will only produced by the state, but there are issues with the data practices within the data agencies within the state – and it is not ready to get out. If data is garbage, you can’t do much with it – there is no economy that can be based on it. An open questions is when data produce software? when does it fail? Can we produce data with and without connection to software? There is also the physical presence and the environmental impacts. Citizen engagement about infrastructure is lacking and how we tease out how things open to people to get involved. There was also need to be nuanced about the city the same way that we focused on data. 
Try to think about the ways the city is framed: as a site of activities, subjectivity and practices; the city as a source of data to be mined; the city as a political jurisdiction; the city as an aspiration – the city of tomorrow; the city as a concentration of flows; the city as a socio-cultural system; the city as a scale for analysis or a laboratory. The title 'Data and the City' – is it data for a city? Back to environmental issues – data is not ephemeral and does have tangible impacts (e.g. energy use in blockchains, inefficient algorithms, electronic waste (WEEE) that is left in the city). There are also issues of access to and control over huge volumes of data. Such issues are covered in papers such as 'Device Democracy'. Wider issues remain in making the link between technology and wider systems of thought and considerations.

Esri Survey123 tool – rapid prototyping of geographical citizen science tools

There are several applications that allow rapid form creation – such as Open Data Kit (ODK) or EpiCollect. Now there is another offering from Esri, in the form of the Survey123 app, which is explained in the video below.

Survey123 is integrated into ArcGIS Online, so you need an ArcGIS account to use it (you can experiment briefly with a trial account, but for a longer project you'll have to pay). The forms are configured in XForms, as in ODK, and can be designed in Excel fairly quickly; the desktop connection package makes it easy to link to the Survey123 site, as well as to test forms. I tried creating a form for local data collection, including recording a location and taking an image with the phone. It was fairly easy to create forms with textual, numerical, image and location information, and the software also supports attaching images to items in the form, so they can be illustrated visually. The desktop connector application also lets you render the forms, so they can be tested before they are uploaded to ArcGIS Online. It is then possible to distribute the form to mobile devices and use them to collect the information.
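To give a flavour of the Excel-based form design mentioned above, a minimal 'survey' sheet can be sketched as a few rows of type/name/label columns – here written out as CSV from Python purely for illustration (the field names are hypothetical, and a real form would be saved as an Excel workbook):

```python
import csv
import io

# Hypothetical 'survey' sheet for a simple local data collection form:
# a text question, a number, a photo, and a location (geopoint).
rows = [
    ("type",     "name",     "label"),
    ("text",     "observer", "Your name"),
    ("integer",  "count",    "Number of items observed"),
    ("image",    "photo",    "Take a photo of the site"),
    ("geopoint", "location", "Record your location"),
]

buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())
```

The geopoint row is what gives the form its geographic aspect – on the phone it becomes the prompt to capture a location.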

The app works well offline, and it is possible to collect multiple forms and then upload them all together. While the application still shows rough edges in terms of interaction design, meaningful messages and bug fixing, it can be useful for developing prototypes and forms where the geographic aspect of the data collection is central. For example, during data collection the application supports both capturing the location from GPS and pointing at the location on a map. When you are offline you can only use the GPS, as for now the app doesn't let you cache a map of the study area.

As might be expected, the advantage of Survey123 comes once you've got the information and want to analyse it, since ArcGIS Online provides the tools for detailed GIS analysis; alternatively, you can link to it from a desktop GIS to analyse and visualise the information.
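If you prefer to take the collected records out of the platform, a CSV export is enough for a first look in a few lines of scripting. A minimal sketch, assuming a hypothetical export with made-up field names (not the actual ArcGIS Online export schema):

```python
import csv
import io

# Hypothetical CSV export of collected survey records.
export = """observer,count,lat,lon
Alice,3,35.902,14.484
Bob,5,35.899,14.489
Alice,2,36.044,14.251
"""

records = list(csv.DictReader(io.StringIO(export)))

# Total observations, and a per-observer breakdown.
total = sum(int(r["count"]) for r in records)
by_observer = {}
for r in records:
    by_observer[r["observer"]] = by_observer.get(r["observer"], 0) + int(r["count"])

print(total)        # 10
print(by_observer)  # {'Alice': 5, 'Bob': 5}
```

The same records, with their lat/lon columns, can of course be loaded into a desktop GIS for spatial analysis.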

Luckily for us, Esri is a partner of the Extreme Citizen Science group, and UCL also holds an institutional licence for ArcGIS Online, so we have access to these tools. However, other organisations can also apply for access to ArcGIS Online through the Esri conservation programme and use this tool.

Call for papers – special issue of the Cartographic Journal on Participatory GIS

Call for papers for a special issue of The Cartographic Journal on the past, present and future of
Participatory GIS and Public Participation GIS.

In the 1990s, Participatory GIS (PGIS) and Public Participation GIS (PPGIS) emerged as approaches and tools to make geospatial technologies more relevant and accessible to marginalized groups. The goal has been to integrate the qualitative and experiential knowledge of local communities and individuals, thereby empowering local people and non-profit organizations to participate in political decision-making. By enabling the participation of local people from different walks of life, P/PGIS has provided a platform where these people can share their viewpoints and create maps depicting alternative views of the same problem, from a local perspective.

Over the years, numerous applications integrating GIS with the social and spatial knowledge of local groups have been developed, and P/PGIS appears well articulated as a technique. With the growth of Information and Communication Technologies (ICT), the relationship of P/PGIS constructs (society, technology and institutions) and the use of its components (access, power relations, diverse knowledge), viewed from an epistemological standpoint, necessitates an exploration of what P/PGIS means in the 21st century.

A related field, Citizen Science (a.k.a. public participation in scientific research), is a research approach that allows the participation of the public in the discovery of new scientific knowledge through data collection, analysis, or reporting. It can be viewed as somewhat similar in its implementation to P/PGIS, in that it broadens the scope of data collection and enables information sharing among stakeholders in specific policies to solve a problem. The success of all three concepts – citizen science, PGIS and PPGIS – is influenced by the Geoweb: an integration of Information and Communication Technologies (e.g. social networking sites) and geospatial technologies (e.g. virtual globes like Google Earth, free and open source GIS like QGIS, and location-enabled devices like the iPhone) that provides a platform for non-experts to participate in the creation and sharing of geospatial information without the aid of geospatial professionals.

Following a successful session at the AAG 2015 Annual Meeting, this call is for papers that will appear in a special issue of ‘The Cartographic Journal’. We are calling for reflections on PPGIS/PGIS and citizen science that address some of the questions listed below.

  1. What social theories form the basis for the current implementation of P/PGIS? Have these theories changed? What remains persistent and intractable?
  2. What role do spatial theories, such as Tobler’s first law of geography or issues of spatial data accuracy, play in P/PGIS, Citizen Science or crowdsourcing?
  3. Since Schlossberg and Shuford, have we gotten better at understanding who the public is in PPGIS and what their role is in a successful deployment of PGIS?
  4. Which new knowledge should be included in data collection, mapping and decision-making and knowledge production? To what extent are rural, developing country, or marginalized communities really involved in the counter-mapping process? Are they represented when this action is undertaken by volunteers?
  5. What role do new ICTs and the emergence of crowdsourcing play in the inclusion of indigenous and local knowledge? Do new technologies and concepts hinder the participatory process, or do they enable the empowerment of local communities? Do we have new insights on what could be considered technological determinism?
  6. Do we need to revisit P/PGIS in light of any of these shifts? How often do P/PGIS projects need to be revisited to address the dynamic nature of society and political factors and to allow future growth?
  7. How effective have P/PGIS and Citizen Science been in addressing issues of environmental and social justice and resource allocation, especially, from a policy-making perspective?
  8. Are we any better at measuring the success of P/PGIS and/or Citizen Science? Should there be policies to monitor citizen scientists’ participation in the Geoweb? If so, for what purpose?
  9. What should be the role of privacy in P/PGIS, for example, when it influences the accuracy of the data and subsequent usability of final products? How have our notions of needed literacy (e.g., GIS) and skills shifted with the emergence of new technologies?
  10. How has the concept of the digital divide been impacted by the emergence of the Geoweb, crowdsourcing and/or neogeography?
  11. What is the range of participatory practices in Citizen Science and what are the values and theories that they encapsulate?
  12. What are the different applications of Citizen Science from policy and scientific research perspectives?
  13. To what extent does the spatial distribution of citizens influence their participation in decision-making processes and in resolving scientific problems?

Editors: Muki Haklay, University College London, UK; Renee Sieber, McGill University; Rina Ghose, University of Wisconsin – Milwaukee; Bandana Kar, University of Southern Mississippi – Hattiesburg. Please use this link to send queries about the special issue, or contact one of the editors.

Submission Deadlines
Abstract – a 250-word abstract, along with the title of the paper and the name(s) and affiliations of the authors, must be submitted by 15th August 2015 to Muki Haklay (use the links above). The editorial team will decide whether the paper is suitable for the special issue by 1st September.
Paper – the final paper, prepared following the guidelines of The Cartographic Journal, must be submitted by 30th October 2015.
Our aim is for the final issue to be published in early 2016.

COST Energic Summer School on VGI and Citizen Science in Malta

Vyron Antoniou covering VGI foundations

COST Energic organised a second summer school dedicated to Volunteered Geographic Information (VGI) and citizen science. This time, the school was run by the Institute for Climate Change & Sustainable Development of the University of Malta, with almost 40 participants from across Europe and beyond (Brazil, New Zealand) and, of course, participants from Malta. Most of the students are at an early stage of their academic careers (Masters and Ph.D. students and several postdoctoral fellows), but the school was also attended by practitioners – for example in urban planning or in cultural heritage. Their backgrounds included engineering, geography, environmental studies, sociology, architecture, biology and ecology, and computer science. The areas from which the participants came demonstrate the range of disciplines and practices that are now involved in crowdsourced data collection and use. Also interesting is the opening of governmental and non-governmental bodies to the potential of crowdsourcing, as is evident from the practitioners group.

The teachers on the programme – Maria Attard, Claire Ellul, Rob Lemmens, Vyron Antoniou, Nuno Charneca, Cristina Capineri (and myself) – are all part of the COST Energic network. Each provides a different insight into and interest in VGI in their work – from transport, to spatial data infrastructures, to participatory mapping. The aim of the training school was to provide a ‘hands-on’ experience with VGI and citizen science data sources, assuming that some of the students might be new to the topics, the technologies, or both. Understanding how to get the data and how to use it is an important issue that can be confusing to someone who is new to the field – where the data is, how to consume it, which software to use, and so on.

Collecting information in the University of Malta

After covering some of the principles of VGI, with examples from different areas of data collection, the students started to learn how to use various OpenStreetMap data collection tools. This set the scene for the second day, which was dedicated to going around the university campus and collecting data that is missing from OpenStreetMap – carrying out the data collection, uploading the GPS tracks, and sharing the information. Of particular importance was the reflection part, as the students were asked to consider how other people who are new to OpenStreetMap would find the process.

Using meteorological sensors in Gozo

The next level of data collection involved using sensors, with an introduction to the potential of DIY electronics, such as Arduino or Raspberry Pi, as a basis for sensing devices. A field trip to Gozo the next day provided the opportunity to explore these tools and gain more experience in participatory sensing. Following a lecture on participatory GIS applications in Gozo, groups of students explored a local park in the centre of Rabat (the capital of Gozo) and gained experience in participatory sensing and citizen science.

Learning together

The training school also included a public lecture by Cristina Capineri on ‘the fortune of VGI’.

The students will continue to develop their understanding of VGI and citizen science, culminating in group presentations on the last day. The most important aspect of any training school, as always, is the development of new connections and links between the people on the programme, and in the conversations you could notice how these areas of research are still full of open questions and research challenges.

COST ENERGIC meeting – Tallinn 21-22 May

Tallinn

The COST Energic network is progressing into its 3rd year. The previous post showed one output from the action – a video that describes the links between volunteered geographic information and indigenous knowledge.

The people who came to the meeting represent the variety of interests in crowdsourced geographic information – from people with a background in geography and urban planning, to many people with an interest in computing: semantic representation of information, cloud computing, data mining, and similar issues where VGI represents an ‘interesting’ dataset.

Part of the meeting focused on the next output of the network: an Open Access book titled ‘European Handbook of Crowdsourced Geographic Information’. The book will be made up of short chapters that go through peer review by people within the network. The chapters will cover topics such as theoretical and social aspects; quality criteria and methodologies; data analysis; and applied research and case studies. We are also creating a combined reference list that will be useful for researchers in the field. There will be about 25 chapters. Different authors gave a quick overview of their topics, with plenty to explore – from smart cities to concepts on the nature of information.

COST ‘actions’ (as these projects are called) operate through working groups. In COST Energic, there are three working groups, focusing on human and societal issues; spatial data quality and infrastructures; and data mining, semantics and VGI.

Working Group 1 looked at an example of big data from Alg@line – 22 years of ferry-based data from the Baltic Sea, with 17 million observations a year – data that can be used for visualisation and for exploring its properties. Another case study that the working group considered is the engagement of schoolchildren with VGI, with activities in Portugal, Western Finland, and Italy. These activities integrate citizen science and VGI, using free and open source software and data. In the coming year, the group is planning specific activities on big data and urban planning, and a crowd atlas of urban biodiversity.

Working Group 2 has been progressing in its activities linking VGI quality with citizen science, and how to produce reliable information from it. The working group collaborates with another COST action (TD1202), called ‘Mapping and the Citizen Sensor‘. They carried out work on the quality of information – especially vernacular gazetteers. In their forthcoming activities, they will contribute a set of special sessions to ISSDQ 2015 (the International Symposium on Spatial Data Quality). Future work will focus on quality tools and quality visualisation.

Prof. Cristina Capineri opening the meeting

Working Group 3 also highlighted ISSDQ 2015 and will have a good presence at the conference. The group aims to run a hackathon in which people will work on VGI, as a distributed event allowing people to work with the data over time. Another plan is to focus research around the group's data repository, which contains ways of getting at data and code – mostly how to get at the data.

There is also a growing bibliography on VGI in CiteULike. The repository is open to other researchers in the area of VGI, and WG3 aims to manage it as a curated resource.