Building Centre – from Mapping to Making

The London-based Building Centre organised an evening event – from Mapping to Making – which looked at how the “radical evolution in the making and meaning of maps is influencing creative output. New approaches to data capture and integration – from drones to crowd-sourcing – suggest maps are changing their impact on our working life, particularly in design.” The event included five speakers (including me, on behalf of Mapping for Change) and a short discussion.

Lewis Blackwell of the Building Centre opened the evening by noting that, in a dedicated exhibition on visualisation and the city, the Building Centre is looking at new visualisation techniques. He realised that a lot of the visualisations are connected to mapping – it’s circular: mapping can ask and answer questions about the design process of the built environment, and changes in the built environment create new data. The set of talks in the evening explored the role of mapping.

Rollo Home, Geospatial Product Development Manager, Ordnance Survey (OS), started by thinking about the OS as the ‘oldest data company in the world‘. The OS thinks of itself as a data company – the traditional mapping products that are very familiar represent only 5% of turnover. The history of the OS goes back to 1746 and William Roy’s work on accurately mapping Britain. The first maps were produced in Kent, for the purpose of positioning ordnance. The maps of today, when visualised, look much the same as maps from 1800, but the current maps are in machine-readable formats, which means that the underlying information is very different. Demands for mapping have changed over the years: originally for ordnance, then for land information and taxation, and later to help the development of the railways. During WWI and WWII the OS led many technological innovations – from the National Grid in the 1930s to photogrammetry. In 1973 the first digital maps were produced, and the process was completed in the 1980s. This was, in terms of data structures, still structured as a map. Only in 2000 did MasterMap appear, with a more machine-readable format that is updated 10,000 times a day, based on an Oracle database (the biggest spatial database in the world) – but it’s not a map. Real-world information is modelled to allow for structure and meaning. The ability to answer questions from the database is critical to decision-making. The information in the data can be made explicit for many features – from the area of rear gardens to the height of a building. They see developments in the areas of oblique image capture, 3D data, and details under the roof and of facades, and they do a lot of research to develop their future directions – e.g. the challenges of capturing data in point clouds. They see data coming from different sources including social media, satellites, UAVs, and official sources. Most Smart Cities, transport and similar areas need geospatial information, and the OS is moving from mapping to data, and to enabling better decisions.

Rita Lambert, Development Planning Unit, UCL, covered the ReMap Lima project – running since 2012 and looking at marginalised neighbourhoods in the city. The project focused on the questions of what we are mapping and what we are making through representations. Maps contain the potential of what might become – we are making maps and models that are about ideas and possibilities for more just cities. The project is a collaboration between the DPU and CASA at UCL, with 3 NGOs in Lima and 40 participants from the city. They wanted to explore the political agency of mapping, open up spaces to negotiate outcomes, and expand the possibilities of spatial analysis in marginalised areas through a participatory action-learning approach. The use of technology is in the context of very specific theoretical aims. The use of UAVs is deliberate, to explore their progressive potential. They mapped the historic centre, which is overmapped and marginalised through over-representation (e.g. using maps to show that it needs regeneration), while the periphery is undermapped – a large part of the city (50% of the area) marginalised through omission. Maps can thus act through undermapping or overmapping. The issues are very different – evictions, lack of services and loss of cultural heritage (people and buildings) at the centre, while in the informal settlements there are risks, land trafficking, destruction of ecological infrastructure, and lack of coordination of spatial planning between places. The process that they followed included mapping from the sky (with a drone) and mapping from the ground (through participatory mapping using aerial images). The drones provided imagery in an area that changes rapidly, and the outputs were used in participatory mapping, with the people on the ground deciding what to map and where to map. The results allow evictions to be identified through changes to buildings that can be observed from above. The mapping process itself was also a means of strengthening community organisations. The use of 3D visualisation at the centre and at the periphery helped in understanding the risks that are emerging and the changes to the area. Data collection uses both maps and tools such as EpiCollect+ and community mapping, as well as 3D-printed models that can be used in discussions and conversations. The work carries on as the local residents continue it. The conclusion: careful consideration is needed for the use of technology in context, and mapping from the sky and from the ground go hand in hand. Creating these new representations is significant, and we should ask what it is that we are producing.

Simon Mabey, Digital Services Lead for City Modelling, Arup. Simon discussed city modelling at Arup – with the move from visualisation to more sophisticated models. He has led on modelling cities in 3D since 1988, when visualisation of future designs was done by stitching together pieces of paper and photos. The rebuilding of Manchester in the mid 1990s led to the development of 3D urban modelling, with animations and an interactive CD-ROM. They continued to develop the data about Manchester and then shared it with others. The models were used in different ways – from gaming software to online – trying to find ways to allow people to use them in real-world contexts. Many models are used in interactive displays – e.g. for attracting inward investment. They went on to model many cities across the UK, with different levels of detail and coverage. They are also starting to identify features underground – utilities and the like. Models are kept up to date through collaboration, with clients providing information about things that they are designing and integrating BIM data. In Sheffield, they also enhance the model through the planning of new projects and activities. Models are used to communicate information to other stakeholders – e.g. traffic model outputs – and they do the same with pedestrian movement. Different information is used to colour-code the model (e.g. energy), or for acoustic or flood modelling. More recently, they have moved to city analytics, understanding the structure within models – for example, understanding solar energy potential together with the use and consumption of the building. They find themselves needing information about what utility data exist, which then need to be mapped and integrated into their analysis. They are also getting mobile phone data to predict the trips that people make.

I was the next speaker, on behalf of Mapping for Change. I provided the background of Mapping for Change and the approach that we are using for the mapping. In the context of the other talks, which focused on technology, I emphasised that just as we are trying to reach out to people in the places that they use daily and fit the participatory process into their life rhythms, we need to do the same in the online environment. That means that conversations need to go where people are – so linking to Facebook, Twitter or WhatsApp. We should also recognise that people use different ways to access information – some will use just their phone, others laptops, and for others we need to think of the desktop environment. In a way, this complicates participatory mapping much more than earlier participatory web mapping systems, when participants were more used to the idea of using multiple websites for different purposes. I also mentioned the need to listen to the people that we work with, and to decide whether information should be shown online or not – taking into account what they would like to do with the data. I mentioned the work that involves citizen science (e.g. air quality monitoring), but more generally the ability to collect facts and evidence to deal with a specific issue. Finally, I also showed some examples of our new community mapping system, which is based on GeoKey.

The final talk was from Neil Clark, Founder, EYELEVEL. He is from an architectural visualisation company that works in the North East and operates in the built environment area. They do architectural modelling, using Ordnance Survey data to position the designs so they can be rendered accurately. Many of the processes are very expensive and complex. They have developed a tool called EYEVIEW for accurate augmented reality – working on an iPad to allow viewing models in real time. This can cut the costs of producing these models. They use a tripod to make it easier to control. The tool is the outcome of 4 years of development and allows navigation of the architectural model so that it overlays the camera image. They are aiming at Accurate Visual Representation, and they follow the detailed framework that is used in London for this purpose.

The discussion that followed explored the political nature of information and who is represented and how. A question to the OS was how open it will be with the detailed data; Rollo explained that access to the data is a complicated issue and it needs to be funded. I found myself defending the justification for charging for highly detailed models by suggesting that we imagine a situation where the universal provision of high-quality data at the national level wasn’t there, and you had to deal with each city’s data model separately.

The last discussion point was about truth in mapping and the positions that were raised – is it about the way that people understand their truth, or is there an absolute truth that is captured in models and maps, or represented in 3D visualisations? Interestingly, three of the talks assumed that there is a way to capture specific aspects of reality (structures, roads, pollution) and model them with numbers, while Rita and I took a more interpretive and culturally led view of representation.

OpenStreetMap studies (and why VGI not equal OSM)

As far as I can tell, Nelson et al. (2006) ‘Towards development of a high quality public domain global roads database‘ and Taylor & Caquard (2006) Cybercartography: Maps and Mapping in the Information Era are the first peer-reviewed papers that mention OpenStreetMap. Since then, OpenStreetMap has received plenty of academic attention. More ‘conservative’ search engines such as ScienceDirect or Scopus find 286 and 236 peer-reviewed papers (respectively) that mention the project. The ACM digital library finds 461 papers in areas relevant to computing and electronics, while Microsoft Academic Research finds only 112. Google Scholar lists over 9,000 (!). Even with the most conservative figure from Microsoft, we can see an impact on fields ranging from social science to engineering and physics. So there is a lot to be proud of as a major contribution to knowledge beyond producing maps.

Michael Goodchild, in his 2007 paper that started the research into Volunteered Geographic Information (VGI), mentioned OpenStreetMap (OSM), and since then there has been a lot of conflation of OSM and VGI. In some recent papers you can find statements such as ‘OpenStreetMap is considered as one of the most successful and popular VGI projects‘ or ‘the most prominent VGI project OpenStreetMap‘, so, at some level, the boundary between the two is being blurred. I’m part of the problem – for example, with the title of my 2010 paper ‘How good is volunteered geographical information? A comparative study of OpenStreetMap and Ordnance Survey datasets‘. However, the more I think about it, the more uncomfortable I am with this equivalence. I feel that the recent line from Neis & Zielstra (2014) is more accurate: ‘One of the most utilized, analyzed and cited VGI-platforms, with an increasing popularity over the past few years, is OpenStreetMap (OSM)‘. I’ll explain why.

Let’s look at the whole area of OpenStreetMap studies. Over the past decade, several types of research paper have emerged.

First, there is a whole set of research projects that use OSM data because it’s easy to use and free to access (in computer vision or even string theory). These studies are not part of ‘OSM studies’ or VGI, as, for them, this is just data to be used.


Second, there are studies about OSM data: quality, evolution of objects and other aspects, from researchers such as Peter Mooney, Pascal Neis, Alex Zipf and many others.

Third, there are studies that also look at the interactions between the contributions and the data – for example, trying to infer trustworthiness.

Fourth, there are studies that look at the wider societal aspects of OpenStreetMap, with people like Martin Dodge, Chris Perkins, and Jo Gerlach contributing to interesting discussions.

Finally, there are studies of the social practices in OpenStreetMap as a project, with the work of Yu-Wei Lin, Nama Budhathoki, Manuela Schmidt and others.

[Unfortunately, due to academic practices and publication outlets, many of these papers are locked behind paywalls, but that is another issue… ]

In short, there is a significant body of knowledge regarding the nature of the project, the implications of what it produces, and ways to understand the information that emerges from it. Clearly, we now know that OSM produces good data and we are aware of the patterns of contribution. What is also clear is that many of these patterns are specific to OSM. Because of the importance of OSM to so many application areas (including illustrative maps in string theory!), these insights are very important. Some of these insights are expected to also hold in other VGI projects (hence my caution about assertions regarding VGI), but this needs to be done carefully, only when there is evidence from other projects that this is the case. In short, we should avoid conflating VGI and OSM.

Happy 10th Birthday, OpenStreetMap!

Today, OpenStreetMap celebrates 10 years of operation, as counted from the date of registration. I heard about the project when it was in its early stages, mostly because I knew Steve Coast when I was studying for my Ph.D. at UCL. As a result, I was also able to secure the first-ever research grant that focused on OpenStreetMap (and hence Volunteered Geographic Information – VGI), from the Royal Geographical Society in 2005. A lot can be said about being in the right place at the right time!

OSM Interface, 2006 (source: Nick Black)

Having followed the project during this decade, there is much to reflect on – such as thinking about open research questions, things that the academic literature has failed to notice about OSM, or the things that we do know about OSM and VGI because of the openness of the project. However, as I was preparing a talk for the INSPIRE conference, I started to think about the start dates of OSM (2004), TomTom Map Share (2007), Waze (2008) and Google Map Maker (2008). While there are conceptual and operational differences between these projects, in terms of ‘knowledge-based peer production systems’ they are fairly similar: all rely on a large number of contributors, all combine a large group of contributors who contribute little with a much smaller group of committed contributors who do the more complex work, and all are about mapping. Yet OSM started 3 years before these other crowdsourced mapping projects, and all of them have more contributors than OSM.

Since OSM is described as the ‘Wikipedia of maps‘, the analogy that I started to think of was that it’s a bit like a parallel history in which, in 2001, as Wikipedia starts, Encarta and Britannica look at the upstart and set up their own crowdsourcing operations, so within 3 years they are up and running. By 2011, Wikipedia continues as a copyright-free encyclopaedia with a sizable community, but Encarta and Britannica have more contributors and more visibility.

Knowing OSM closely, I felt that this is not a fair analogy. While there are some organisational and contribution practices that can be used to claim that ‘it’s the fault of the licence’ or ‘it’s because of the project’s culture’ and therefore justify this not-so-flattering analogy for OSM, I sensed that something else is needed to explain what is going on.

Then, during my holiday in Italy, I was enjoying the offline TripAdvisor app for Florence, which uses OSM for navigation (in contrast to Google Maps, which is used in the online app), and an answer emerged. Within the OSM community, from the start, there was some tension between the ‘map’ and ‘database’ views of the project. Is it about collecting the data so that beautiful maps can be made, or is it about building a database that can be used for many applications?

Saying that OSM is about the map means that the analogy is correct, as it is very similar to Wikipedia – you want to share knowledge, so you put it online with a system that allows you to display it quickly, with tools that support easy editing and information sharing. If, on the other hand, OSM is about a database, then OSM is about something that is used at the back-end of other applications, much like a DBMS or an operating system. Although there are tools that help you to do things easily and quickly and to check the information that you’ve entered (e.g. displaying the information as a map), the main goal is building the back-end.

Maybe a better analogy is to think of OSM as the ‘Linux of maps’, which means that it is an infrastructure project that is expected to have a lot of visibility among the professionals who need it (system managers in the case of Linux, GIS/Geoweb developers for OSM), with a strong community that supports and contributes to it. In the same way that some tech-savvy people know about Linux but most people don’t, I suspect that TripAdvisor offline users don’t notice that they use OSM – they are just happy to have a map.

The problem with the Linux analogy is that OSM is more than software – it is indeed a database of information about geography from all over the world (and therefore the Wikipedia analogy has its place). So it is somewhere in between. In a way, it provides a demonstration of the common claim in GIS circles that ‘spatial is special‘. Geographical information is infrastructure in the same way that operating systems or DBMSs are, but in this case it’s not enough to create an empty shell that can be filled in for the specific instance: there is a need for a significant amount of base information before you are able to start building your own application with additional information. This is also the philosophical difference that makes the licensing issues more complex!

In short, both the Linux and Wikipedia analogies are inadequate to capture what OSM is. It has been illuminating and fascinating to follow the project over its first decade, and may it continue successfully for more decades to come.

Second day of INSPIRE 2014 – open and linked data

Opening geodata is an interesting issue for the INSPIRE directive. INSPIRE was set up before the hype around Government 2.0 grew and the pressure to open data became apparent, so it was not designed with these aspects explicitly in mind. Therefore, the way in which the organisations that are implementing INSPIRE deal with the provision of open and linked data is bound to bring up interesting challenges.

Dealing with open and linked data was the topic that I followed on the second day of the INSPIRE 2014 conference. The notes below are my interpretation of some of the talks.

Tina Svan Colding discussed the Danish attempt to estimate the value (mostly economic) of open geographic data. The study was done in collaboration with Deloitte, and they started with a theory of change – expectations that they would see increased demand from existing customers and new ones. The next assumption is that there will be new products, new companies and lower prices, which will lead to efficiency and better decision-making across the public and private sectors, but also increased transparency for citizens. In short, they were trying to capture the monetary value, with a bit on the side. They used statistics and interviews with key people in the public and private sectors, and followed that with a wider survey – all with existing users of the data. The number of users of their data increased from 800 to over 10,000 within a year. The Danish system requires users to register to get the data, so these are actual user numbers, and they could also contact users to ask further questions. Among the new users, many are citizens (66%) and NGOs (3%). A further 6% are in the public sector, which had access in principle in the past, but the accessibility of the data made it more usable to new people in this sector. In the private sector, construction, utilities and many other companies are using the data. The environmental bodies aim to use the data in new ways to make environmental consultation more engaging for the audience (is this another deficit model assumption – that people don’t engage because it’s difficult to access data?). Issues that people experienced include accessibility for users who don’t know how to use GIS and other datasets. They also identified requests for further data releases. In the public sector, 80% identified potential for savings with the data (though that is the type of expectation they live within!).

Roope Tervo, from the Finnish Meteorological Institute, talked about the implementation of their open data portal. Their methodology was very much with users in mind and is a nice example of a user-centred data application. They hold a lot of data – from meteorological observations to air quality data (of course, it all depends on the role of the institute). They chose to use WFS for data download, with GML as the data format and coverage data in meteorological formats (e.g. GRIB). He showed that the selection of data models (all of which can be compatible with the legislation) can have very different outcomes in terms of file size and the complexity of parsing the information. It is nice to see that they considered user needs – though not formally. They created an open-source JavaScript library that makes it easy to use the data – going beyond just releasing the data to supporting how it is used. They have API keys that are based on registration, and they had to limit the number of requests per day, and the same for the view service. After a year, they have 5,000 users and 100,000 data downloads per day, and these numbers are increasing, though slowly. They are considering how to help clients with complex data models.
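As an illustration of what such a download service looks like from the client side, here is a minimal Python sketch of a WFS 2.0 GetFeature request. The endpoint URL, API-key scheme and feature type name are hypothetical placeholders rather than the institute’s actual service; only the standard OGC request parameters are assumed.

```python
# Minimal sketch of fetching observations from a WFS 2.0 download service,
# assuming a hypothetical endpoint and an API key issued on registration.
import requests
import xml.etree.ElementTree as ET

API_KEY = "your-registered-api-key"                        # hypothetical
ENDPOINT = f"https://data.example-met.org/wfs/{API_KEY}"   # hypothetical URL pattern

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "observations:weather",   # hypothetical feature type name
    "count": 100,                          # keep requests small: daily quotas apply
}

response = requests.get(ENDPOINT, params=params, timeout=30)
response.raise_for_status()

# The payload is GML (an XML dialect), so any XML parser can walk the features.
root = ET.fromstring(response.content)
print(f"Received {len(root)} feature members")
```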

Panagiotis Tziachris explored the clash between the ‘heavy duty’ and complex INSPIRE standards and the lightweight approaches that are common in open data portals (I think he meant those in the commercial sector that allow some reuse of data). This is a project of 13 Mediterranean regions in Spain, Italy, Slovenia, Montenegro, Greece, Cyprus and Malta. The HOMER project used different mechanisms, including hackathons, to share knowledge and experience between more experienced players and those that are new to the area. They found them to be a good way to share practical knowledge between partners. This is an interesting use of a purposeful hackathon among people who know each other within a project, and I think it can be useful in other cases. Interestingly, on the legal side, they had to go beyond the usual documents that are provided in an EU consortium: in order to allow partners to share information they created a memorandum of understanding, as this is needed to deal with IP and similar issues. Practices from open data were also used – such as the CKAN API, which is common for open data websites. They noticed a separation between central administration and local or regional administration – the competency of the more local organisations (municipality or region) is sometimes limited because knowledge is elsewhere (in central government), or they are at different stages of implementation, and disagreements on releasing the data can arise. Another issue is that open data is sometimes provided through regional portals, while a different organisation at the national level (environment ministry or cadastre body) is responsible for INSPIRE. The lack of capabilities at different governmental levels adds to the challenges of setting up open data systems. Sometimes open data legislation is only about the final stage of the process and not about how to get there, while INSPIRE is all about the preparation and not about the release of data – this also creates a mismatch.
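For readers unfamiliar with the CKAN API mentioned above, here is a minimal sketch of a dataset search against a CKAN-based portal. The portal URL is a hypothetical placeholder; the /api/3/action/package_search endpoint and its JSON response structure are part of the standard CKAN API.

```python
# A minimal sketch of querying a CKAN-based open data portal through its action API.
import requests

PORTAL = "https://opendata.example-region.eu"   # hypothetical partner portal

resp = requests.get(
    f"{PORTAL}/api/3/action/package_search",
    params={"q": "land use", "rows": 10},       # free-text search, first 10 datasets
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["result"]

print(f"{result['count']} datasets match")
for package in result["results"]:
    # Each package lists its distributions (files, services) as 'resources'.
    formats = {r.get("format", "?") for r in package.get("resources", [])}
    print(f"- {package['title']} ({', '.join(sorted(formats))})")
```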

Adam Iwaniak discussed how “over-engineering” makes the INSPIRE directive inoperable or irrelevant to users, based on his experience in Poland. He asked “what are the user needs?” and demonstrated the point by noting that, after half a term of teaching students about the importance of metadata, when it came to actively searching for metadata in an assignment, the students didn’t use any of the specialist portals but just Google. Based on this and similar experiences, he suggested the creation of a thesaurus that describes keywords and features in the products so that searching can work according to user needs. Of course, the implementation is more complex, and therefore he suggests an approach that works within the semantic web and uses RDF definitions, making the data searchable and indexable by search engines so it can be found. The core message was to adapt the delivery of information to the way the user is most likely to search for it – metadata is relevant when the producer makes sure that a Google search finds it.
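To make the thesaurus idea concrete, here is a minimal sketch of how product keywords could be expressed as SKOS concepts with rdflib. The vocabulary URIs and terms are hypothetical illustrations, not taken from the talk.

```python
# Describe product keywords as SKOS concepts so they can be linked, crawled and
# indexed. All names and URIs below are made-up examples.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("https://example.org/geo-thesaurus/")   # hypothetical vocabulary base

g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

land_cover = EX["land-cover"]
orthophoto = EX["orthophoto"]

g.add((land_cover, RDF.type, SKOS.Concept))
g.add((land_cover, SKOS.prefLabel, Literal("land cover", lang="en")))
g.add((land_cover, SKOS.altLabel, Literal("land use", lang="en")))   # user-facing synonym

g.add((orthophoto, RDF.type, SKOS.Concept))
g.add((orthophoto, SKOS.prefLabel, Literal("orthophoto", lang="en")))
g.add((orthophoto, SKOS.related, land_cover))

# Serialise as Turtle; the same graph could be embedded in dataset landing pages
# so that search engines index the keywords alongside the data products.
print(g.serialize(format="turtle"))
```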

Jesus Estrada Vilegas from the SmartOpenData project discussed the implementation of some ideas that can work within the INSPIRE context while providing open data. In particular, he discussed Spanish and Portuguese data sharing. Within the project, they provide access to the data by harmonising it and then making it linked data. Not all the data is open, and the focus of their pilot is agroforestry land management. They are testing delivery of the data in both INSPIRE-compliant formats and the internal organisational format to see which is more efficient and useful. INSPIRE is a good point from which to start developing linked data, but there is also a need to compare it to other ways of linking the data.

Massimo Zotti talked about linked open data from earth observations in the context of business activities, since he works for a company that provides software for data portals. He explored the business model of open data, INSPIRE and the Copernicus programme. From the data that comes from earth observation, we can derive information – for example, identifying the parts of the soil that become sealed and don’t allow water to be absorbed, or information about forest fires, floods and so on. These are the bits of useful information that are needed for decision-making. Once there is information, it is possible to identify increases in land use or other aspects that can inform policy. However, we need to notice that dealing with open data means that a lot of work goes into bringing datasets together. The standardisation of data transfer and the development of approaches that help machine-to-machine analysis are important for this aim. By fusing datasets, they become more useful and relevant to the knowledge production process. A dashboard approach to displaying the information and the processing can help end users access the linked data ‘cloud’. Standardisation of data is very important to facilitate such automatic analysis, and standard ontologies are also necessary. From my point of view, this is not a business model but a typical case of operations in the earth observation area, where a lot of energy is spent on justifying that the data can be useful and important to decision-making – without quantifying the effort that is required to go through the process, or the speed at which results can be achieved (will the answer come in time for the decision?). A member of the audience also raised the point that the assumption that machine-to-machine automatic models will produce valuable information all by themselves is questionable.

Maria Jose Vale talked about the Portuguese experience in delivering open data. The organisation that she works in deals with cadastre and land use information; she also discussed activities of the SmartOpenData project. She described the principles of open data that they considered: data must be complete, primary, timely, accessible and processable; data formats must be well known; there should be permanence; and usage costs should be addressed properly. For good governance there is a need to know the quality of the data and the reliability of delivery over time, so having automatic ways for the data to propagate to users fits within these principles. The benefits of open data that she identified are mostly technical, but also include economic value (which is mentioned many times – but you need evidence similar to the Danish case to prove it!). The issues or challenges of open data include how to deal with fuzzy data when releasing it (my view: tell people that it needs cleaning); safety, as there are both national and personal issues; financial sustainability for the producers of the data; rates of updates; and addressing user and government needs properly. In a case study that she described, they looked at land use and land cover changes to assess changes in river use in a watershed. They needed about 15 datasets for the analysis and used information from CORINE land cover from different years. For example, they have seen forest change to woodland because of fire, which influences water quality too. Data interoperability and linked data allow the integrated modelling of the evolution of the watershed.

Francisco Lopez-Pelicer covered the Spanish experience and the PlanetData project, which looks at large-scale public data management, specifically through a pilot on VGI and linked data, against a background of SDI and INSPIRE. There is big potential, but many GI producers don’t engage with linked data yet. The issue is legacy GIS approaches such as WMS and WFS, which are standards endorsed by INSPIRE but do not necessarily fit into a linked data framework. In the work that he was involved in, they try to address complex GI problems with linked data. To do that, they convert a WMS into a linked data server by adding URIs and POST/PUT/DELETE resources. A semantic client sees this as a linked data server even though it can be compliant with other standards. To try it out, they use the open national map as the authoritative source and OpenStreetMap as the VGI source, and release them as linked data. They are exploring how to convert a large authoritative GI dataset into linked data and how to link it to other sources. They are also using it as an experiment in crowdsourcing platform development – creating a tool that helps to assess the quality of each dataset. The aim is to run quality experiments and measure the data quality trade-offs associated with the use of authoritative or crowdsourced information. Their service can behave as both a WMS and a ‘Linked Map Server’. LinkedMap, which is the name of this service, provides the ability to edit the data and explore OpenStreetMap and the government data; they aim to run the experiment in the summer. The reason for choosing WMS as a delivery standard is that a previous crawl of the web showed that WMS is the most widely available service, so it is assumed to be relevant to users, or at least one that most users can work with.
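To illustrate the idea of adding URIs and POST/PUT/DELETE resources on top of a map service, here is a minimal sketch of a resource-oriented wrapper in Flask. It is not the LinkedMap implementation; the routes, identifiers and payloads are hypothetical, and a real linked data server would return RDF (e.g. Turtle or JSON-LD) rather than plain JSON.

```python
# Minimal sketch: give each map feature its own URI and support GET/PUT/DELETE,
# so a semantic client can treat the service as linked data while a WMS endpoint
# can sit alongside it. Everything here is a hypothetical illustration.
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

# Toy in-memory store standing in for authoritative/crowdsourced feature data.
features = {
    "road/1001": {"name": "Calle Mayor", "source": "authoritative"},
    "road/1002": {"name": "Gran Via", "source": "osm"},
}

@app.route("/id/<path:feature_id>", methods=["GET"])
def get_feature(feature_id):
    feature = features.get(feature_id)
    if feature is None:
        abort(404)
    return jsonify({"@id": f"/id/{feature_id}", **feature})

@app.route("/id/<path:feature_id>", methods=["PUT"])
def put_feature(feature_id):
    # Create or replace a feature resource at its URI.
    features[feature_id] = request.get_json(force=True)
    return jsonify({"@id": f"/id/{feature_id}", **features[feature_id]})

@app.route("/id/<path:feature_id>", methods=["DELETE"])
def delete_feature(feature_id):
    features.pop(feature_id, None)
    return ("", 204)

if __name__ == "__main__":
    app.run(debug=True)
```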

Paul van Genuchten talked about the GeoCat experience in a range of projects, including support to Environment Canada and other activities. INSPIRE meeting open data can be a clash of cultures, and he used neogeography as the term to describe the open data culture (going back to the neogeo/paleogeo debate, which I thought was over and done with – but clearly it is still relevant in this context). INSPIRE recommends publishing data openly, and this is important to ensure that the data reaches a big potential audience, as well as the ‘innovation energy’ that exists among the ‘neogeo’/’open data’ people. The common expectations within this culture are that APIs are easy to use, that interfaces are clean, and so on; yet under the hood there are similarities in the way things work. There is a perceived complexity among the community of open data users towards INSPIRE datasets. Many open data people are focused on and interested in OpenStreetMap, look at companies such as MapBox as role models, and use formats such as GeoJSON and TopoJSON. Data is versioned and managed in a git-like process, and the most common projection is Web Mercator. There are now not only raster tiles but also vector tiles. These characteristics of the audience can be used by data providers to help people use their data, and there are also intermediaries that deliver the data and convert it into more ‘digestible’ forms. He noted CitySDK, which grabs data from INSPIRE and delivers it to users in ways that suit open data practices. He demonstrated the case of Environment Canada, where they created a set of files that are suitable for both human and machine use.
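As an example of the lightweight formats this audience expects, here is a minimal GeoJSON FeatureCollection assembled in Python. The coordinates and properties are made up for illustration.

```python
# A single GeoJSON Feature (WGS84 longitude/latitude), built as a plain dictionary.
import json

feature = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [4.8952, 52.3702],   # [longitude, latitude] per the GeoJSON spec
    },
    "properties": {
        "name": "Sample monitoring site",    # illustrative attribute values
        "theme": "air quality",
    },
}

feature_collection = {"type": "FeatureCollection", "features": [feature]}

# This is the whole 'API': a static file that any web mapping library can consume.
print(json.dumps(feature_collection, indent=2))
```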

Ed Parsons finished the set of talks of the day with a talk about a multi-channel approach to maximising the benefits of INSPIRE. He highlighted that it’s not about linked data, although linked data is part of the solution to making data accessible. Accessibility always wins online, and people make compromises (e.g. sound quality with CDs versus Spotify). Google Earth can be seen as a new channel that makes things accessible, and while the back-end technology was not new, the ease of access made a big difference. Denmark’s use of Minecraft to release GI is an example of another channel. Notice the change over the past 10 years in video delivery, for example: in the early days, video delivery was complex and required many steps and expensive software and infrastructure, which is somewhat comparable to current practice with geographic information. Making things accessible through channels like YouTube and the whole ecosystem around it changed the way video is used, uploaded and consumed, and of course changes in devices (e.g. recording on the phone) made it even easier. Focusing on maps themselves, people might want different things that are maps and not only the latest searchable map that Google provides – e.g. an administrative map of medieval Denmark, maps of floods, or something specific that is not part of general web mapping. In some cases people are searching for something and you want to give them maps for some queries and images for others (as in searching for Yosemite trails vs. Yosemite). There are plenty of maps that people find useful, and for that Google is now promoting Google Maps Gallery – with tools to upload, manage and display maps. It is also important to consider that mapping information needs to be accessible to people who are using mobile devices. The web infrastructure of Google (or ArcGIS Online) provides the scalability to deal with many users and the ability to deliver to different platforms such as mobile. The gallery allows people to brand their maps. Google wants to identify authoritative data that comes from official bodies, and then to have additional information that is displayed differently. But separating facts and authoritative information from commentary is difficult, and that is where semantics play an important role. He also noted that Google Maps Engine is just maps – a visual representation without an aim to provide GIS analysis tools.

Environmental Information Systems – R.F. Tomlinson (1970)

The news that Roger Tomlinson, famously ‘the father of GIS‘, passed away a few days ago is sad – although I met him only a few times, it was an honour to meet one of the pioneers of my field.

A special delight during my PhD research was to discover, at the UCL library, the proceedings of the first-ever symposium on GIS. Dr Tomlinson studied for a PhD at UCL, and that is probably how the copy found its way to the library. It was fairly symbolic for me that the symposium was titled ‘environmental information systems’. See my earlier comment about the terminology: Geographic information or Environmental Information.

For a long time I have wanted to share the copy, and today is a suitable day to do so. So here it is – thanks to the help of Michalis Vitos, who converted it to PDF.



GIS chapter in ‘Introducing Human Geographies’

There is something pleasurable in the physical presence of a book. Receiving my copy of Introducing Human Geographies was special, as I contributed a chapter about Geographic Information Systems to the ‘cartographies’ section.

It might be a response to Ron Johnston’s critique of Human Geography textbooks, or a decision by the editors to extend the content of the book, but the book now contains three chapters that deal with maps and GIS. The contributions are ‘Power of maps’ by Jeremy Crampton, a chapter about ‘Geographical information systems’ by me, and ‘Counter geographies’ by Wen Lin. To some extent we coordinated the writing, as this is a textbook for undergraduates in geography and we wanted to deliver a coherent message.

Overall, you’ll notice a lot of references to participatory and collaborative mapping, with OpenStreetMap among the projects mentioned several times.

In my chapter I covered both the quantitative/spatial science face of GIS and the critical/participatory one. As the introduction to the section describes:

“Chapter 14 focuses on the place of Geographical Information Systems (GIS) within contemporary mapping. A GIS involves the representation of geographies in digital computers. … GIS is now a widespread and varied form of mapping, both within the academy and beyond. In the chapter, he speaks to that variety by considering the use of GIS both within practices such as location planning, where it is underpinned by the intellectual paradigm of spatial science and quantitative data, and within emergent fields of ‘critical’ and ‘qualitative GIS’, where GIS could be focused on representing the experiences of marginalized groups of people, for example. Generally, Muki argues against the equation of GIS with only one sort of Human Geography, showing how it can be used as a technology within various kinds of research. More specifically, his account shows how current work is pursuing those options through careful consideration of both the wider issues of power and representation present in mapping and the detailed, technical and scientific challenges within GIS development.”

To preview the chapter on Google Books, use this link. I hope that it will be a useful introduction to GIS for geography students.


‘Keeping the spirit alive’ – preservation of participatory GIS values in the Geoweb

During the symposium “The Future of PGIS: Learning from Practice?”, which was held at ITC-University of Twente on 26 June 2013, I gave a talk titled ‘Keeping the spirit alive’ – preservation of participatory GIS values in the Geoweb, which explored what the important values in participatory GIS are and how they translate to the Geoweb, Volunteered Geographic Information and current interests in crowdsourcing. You can watch the talk below.

The rest of the presentations from the day, and details of the event, are available here.