I’ve been using 37Signals’ Basecamp for over 5 years. I’m involved in many projects with people from multiple departments and organisations. In the first large project that I ran in 2007 – Mapping Change for Sustainable Communities – Basecamp was recommended to us by Nick Black (just before he co-founded CloudMade), so we started using it. Since then, it has been used for 33 projects and activities, ranging from coordinating the writing of an academic paper to running a large multidisciplinary group. In some projects it was used a lot; in others it didn’t work as well. As with any other information system, its use depends on the needs and habits of different users and not only on the tool itself.

It is generally an excellent tool for organising messages, information and documents about projects and activities, and it works well as a repository of project-related information – but project management software is not what this post is about.

I’m sure that in the scheme of things, we are fairly small users of Basecamp. Therefore, I was somewhat surprised to receive a card from 37Signals.
I’m a fairly passive user of Basecamp as far as 37Signals are concerned – I’m pleased with what it does, but I have not contacted them with requests or anything like that. So getting this hand-written card was a very nice touch from a company that could very easily have written the code to send me an email with the same information – but that wouldn’t be the same in terms of emotional impact.

As Sherry Turkle notes in her recent book, human contact is valuable and appreciated. This is an important and often undervalued aspect of communication and interaction – the analogue channels are there and can be very effective. This blog post – and the praise for 37Signals for making this small effort – is an example of why it is worth doing.

The London Citizen Cyberscience Summit in early September was a stimulating event, which brought together a group of people with an interest in this area. A report from the event, with a very good description of the presentations, including a reflection piece, is available on the ‘Strange Attractor’ blog.

During the summit, I discussed aspects of ‘Extreme’ Citizen Science, where we move from the usual model of science to participatory research. The presentation was partly based on a paper that I wrote and presented during the workshop on the value of Volunteered Geographical Information in advancing science, which was run as part of the GIScience 2010 conference towards the middle of September. Details about the workshop, including a set of interesting position papers, are available on the workshop’s website.

The presentation below covers the topics that I discussed in both workshops. Here, I provide a brief synopsis of the presentation, as it is somewhat different from the paper.

In the talk, I started by highlighting that, by using different terminologies (VGI within the GIScience community, crowdsourcing, participatory mapping …), we can notice different facets of the practice of crowd data collection.

The first way in which we can understand this information is in the context of Web 2.0 applications. These applications can be non-spatial (such as Wikipedia or Twitter), implicitly spatial (such as Flickr – you need to be in a location before you can capture a photograph), or explicitly spatial, in applications that are about collecting geographical information – for example, OpenStreetMap. When looking at VGI from the perspective of Web 2.0, it is possible to identify the specific reasons that it emerged and how other similar applications influence its structure and practices.

The second way to view this information is as part of the geographical information produced by companies that need mapping information (such as Google or TomTom). In this case, you notice that it is about reducing the cost of labour, and about the need for active or passive involvement of the person who carries out the mapping.

The third, and arguably new, way to view VGI is as part of Citizen Science. These activities have been going on for a long time in ornithology and in meteorology. However, there are new forms of Citizen Science that rely on ICT – such as movement-activated cameras (slide 11, on the left) that are left near animal trails and operated by volunteers, or a network of accelerometers that forms a global earthquake monitoring network. Not all Citizen Science is spatial, and there are very effective non-spatial examples, especially in the area of Citizen Cyberscience. So in this framing of VGI we can pay special attention to the collection of scientific information. Importantly, as in the case of spatial applications, some volunteers become experts, such as Hanny van Arkel, who discovered a new type of astronomical object through Galaxy Zoo.

Slides 16-17 show the distribution of crowdsourced images, and emphasise the concentration of information near population centres and tourist attractions. Slides 19-25 show the analysis of the data collected by OpenStreetMap volunteers and highlight a bias towards highly populated and affluent areas.
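To give a sense of the kind of analysis behind slides 19-25, here is a minimal, hypothetical sketch of how such a bias test could look – the file name, column names and the use of a deprivation index are my own illustrative assumptions, not the actual data or code behind the slides.

```python
# Hypothetical sketch: test whether OpenStreetMap coverage is biased towards
# affluent areas. The file, columns and index used are illustrative assumptions.
import pandas as pd
from scipy.stats import spearmanr

# One row per area (e.g. ward): OSM completeness (%) and a deprivation score,
# where a higher score means a more deprived (less affluent) area.
areas = pd.read_csv("osm_coverage_by_ward.csv")

# A negative rank correlation would suggest that more deprived areas tend to
# have poorer OSM coverage - i.e. a bias towards affluent areas.
rho, p_value = spearmanr(areas["coverage_pct"], areas["deprivation_score"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```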

Citizen Science is not just about data collection. There are also cultural problems regarding the trustworthiness of the data, but slides 28-30 show that the data is self-improving as more volunteers engage in the process (in this case, mapping in OpenStreetMap). On that basis, I question the assumptions about the trustworthiness of volunteers and argue that we need to change the way we think about such projects. There are emerging examples of Citizen Science where the engagement of participants is at a higher level. For example, the noise mapping activities that a community near London City Airport carried out (slides 34-39) show that people can engage in science, and that they are well placed to take opportunities – such as the ash cloud in April 2010 – to collect ‘background’ noise. This would not be possible without the help of communities.
Finally, slides 40 and 41 demonstrate that it is possible to engage non-literate users in environmental data collection.

So, in summary, a limitless Citizen Science is possible – we need to create the tools for it and understand how to run such projects, as well as how to study them.

While sorting out our departmental GIS library, I came across a small booklet titled Computers and the Renaissance of Cartography from 1976. It was written by Dr Tom Margerison, the first editor of New Scientist, and describes the activities of the Experimental Cartography Unit (ECU), which pioneered the use of computers for geographical and cartographical applications. Prof. David Rhind told me, though, that the description should be taken with a pinch of salt and that there are alternative accounts.

Interestingly, the ECU operated within the Royal College of Art to encourage new designs and innovations in map making. It was established in 1967 and operated until the late 1980s.
The booklet describes the main processes of assembling maps at the ECU in the middle of the 1970s, and it is especially interesting to see some of the amazing map outputs from that time, which, unlike the typical crude output of SYMAP, are beautiful and clear.
I have asked Dan Lewis, who was involved in digitising the CATMOG catalogue of booklets about quantitative methods in geography, to turn this booklet into PDF format so we can share it. Dan has put some of the maps on his blog.

If you want to download the booklet – it is now available here.

Today is a good day to publish this booklet, following the announcement that Prof. Peter Woodsford, who was among the founders of Laser-Scan (now 1Spatial), received an MBE for his services to the geographic information industry in the Queen’s Birthday Honours list – and it was Laser-Scan’s equipment that enabled the creation of these maps.

The discussion about the future of the GIS ‘profession’ has flared up in recent days – see the comments from Sean Gorman, Steven Feldman (well, citing me) and Don Meltz among others. My personal perspective is about the educational aspect of this debate.

I’ve been teaching GIS since 1995, and have been involved in the MSc in GIS at UCL since 1998 – teaching on it since 2001. Around 1994 I contemplated the excellent MSc in GIS programme in Edinburgh, though I opted to continue with my own mix of geography and computer science, which turned out to be great in the end – so I can say that I have been following the trends in GIS education for quite a while.

Based on this experience, I would argue that the motivation for studying an MSc in GIS over the past 20 years was to get the ‘ARC/INFO driving licence’. I use ARC/INFO as a metaphor – you can replace it with any other package, but ARC/INFO was the de facto package for teaching GIS (as its successor, ArcGIS, is today), so it is a suitable shorthand. What I mean is that for a long time GIS packages were hard to use and required a significant amount of training to operate successfully. Even if a fairly simple map was needed, the level of technical knowledge and the number of steps required were quite significant. So employers, who mostly wanted someone who could make them maps, recruited people who had gained skills in operating the complex packages that allow the production of maps.

The ‘ARC/INFO driving licence’ era included an interesting dissonance – universities were telling themselves that they were teaching the principles of GIScience, but the students were mostly interested in learning how to operate a GIS proficiently enough to get a job. I’ve seen and talked with enough students to recognise that many of them, in their daily jobs, rarely used the spatial statistical analysis that we were teaching; they mostly worked at ‘taming the beast’ that GIS was.

As expected, at UCL there was always a group that was especially interested in the principles of GIScience and that continued their studies beyond the MSc. But they were never the majority of the cohort.

The model worked well for everyone – universities taught GIS through a combination of principles and training in specific packages, and the students found jobs at the end and joined GIS departments in different organisations.

The disruption that changed this arrangement started in the late 1990s, with Oracle Spatial starting to show that GIS could be integrated into mainstream products. The process accelerated around 2005 with the emergence of the GeoWeb, Free and Open Source GIS (FOSS GIS) and the whole range of applications that come with them. Basically, you don’t need a licence any more. More and more employers (even GIS consultancies) are not recruiting from GIS education programmes – they are taking computing professionals and teaching them the GIS skills they need. Going through an MSc in GIS just to become proficient with a tool is no longer necessary.
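To illustrate what I mean by integration into mainstream products, here is a minimal sketch – the connection details, table and column names are hypothetical – of the kind of spatial query that a computing professional with ordinary Python and SQL skills can now run against a PostGIS database, without ever opening a traditional GIS package.

```python
# A hypothetical sketch of GIS functionality inside mainstream tooling:
# a spatial query against a PostGIS database with ordinary Python and SQL.
# Connection details, table and column names are illustrative assumptions.
import psycopg2

conn = psycopg2.connect("dbname=city user=analyst")
with conn, conn.cursor() as cur:
    # Find schools within 500 m of a given point (British National Grid, EPSG:27700).
    cur.execute(
        """
        SELECT name
        FROM schools
        WHERE ST_DWithin(
            geom,
            ST_SetSRID(ST_MakePoint(%s, %s), 27700),
            500
        )
        """,
        (530000, 180000),
    )
    for (name,) in cur.fetchall():
        print(name)
conn.close()
```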

So in an era in which you don’t need a licence to join the party, what is the MSc in GIS for?

The answer is that it can be the time when you focus on principles and on improving specific skills. That was my own route into education. I started working in GIS software development in 1988 with not much more than a high school education. After hearing people around me talking about registers, bugs, polygons and databases, I became convinced that I needed to understand these principles properly, so I went for a degree that provided me with that knowledge. In the same way, I would expect MSc programmes to cater for the needs of people who have gained some practical experience in operating geospatial technologies and want to learn the principles or become specialists in specific aspects of these systems.

We already see people doing the MSc while working with GIS – studying an MSc by distance learning or in the evening is currently very popular, and I expect that this will continue. However, the definition of what is covered by GIS must be extended – it should include everything from the Bing Maps API to PostGIS to ArcGIS.

I can also see the need for specialised courses – maybe to focus on the technical development of geospatial technologies or maybe on spatial statistical analysis for those who want to become geographical information analysts. I would also expect much more integration of GIS with other fields of study where it is taught as a tool – just look at the many MSc programmes that currently include GIS. I’m already finding myself teaching students of urban design, development planning or asset management.
All in all, I’m not going to feel sorry that the ‘ARC/INFO driving licence’ era is coming to its end.

UPDATE: a more detailed version of this post appeared in Cartographica, and can be accessed here or email me to receive a copy.

I checked on Twitter to see how last Friday’s follow-up meeting to Terra Future 2009 went. It was a very pleasant surprise to see that the idea that I put forward in February – that the Ordnance Survey should consider hosting OpenStreetMap and donate some data to it – was voted the best idea to come out of Terra Future 2009. With this sort of peer review of the idea, and with the added benefit of two months of rethinking, I still think that it is quite a good idea.

The most important aspect of this idea is to understand that OpenStreetMap and the Ordnance Survey can both thrive in the GeoWeb era. Despite the imaginary competition, each has a clear value to certain parts of the marketplace. There are very clear benefits that the OpenStreetMap community can gain from working closely with the Ordnance Survey – such as learning about aspects of mapping in which the Ordnance Survey is highly knowledgeable – and vice versa, such as how to innovate in the delivery of geographical information. A collaborative model might work after all…

I wonder how this idea will evolve now?

If we take the lag of geotechnologies behind mainstream computing as a common feature of this type of technology, there are quite interesting conclusions that can be drawn for developing new applications and products. For example, it can help in predicting when a certain technology will be ready for wide application in the geographical field.

Here is an example: very recently, Jakob Nielsen reported that he was pleasantly surprised by the quality of reading from the Amazon Kindle 2, and that this is leading him to withdraw his conclusion that the efficiency of reading from a computer screen is low.

I’ve written about the problem of computer monitor resolution and the use of small screens for urban navigation – such as the use of maps for tourism, where you would like a map that gives you a wider context of your surroundings than the ‘tunnel vision’ provided by today’s mobile phones.

So here is my guess: in about 10 years, Kindle 10 – or whatever its equivalent is at that point – will be a suitable platform for delivering clever maps that can be as effective as paper maps. That means that if you are in the business of creating maps that will be used on these devices, you should start exploring how best to deliver them in about 5 years.

I can also guess that it will be more energy-hungry, more wasteful and far too expensive compared to the paper tourist maps of today, but the prediction is about the technology – not about what I think of its use…

While working on a text about HCI and GIS, I started to notice a general pattern: a delay of ten years or so between the date a new functionality starts to become ‘mainstream’ in general computer use and the date it becomes common in GIS.

Here are some examples: the early use of computers in the business environment was in the mid to late 1950s, but we had to wait until the late 1960s for the first full-scale GIS (and even that was fairly primitive). Personal computers and microcomputers appeared in the late 1970s with machines such as the Apple II, which started to be used by many small offices for word processing and accounting, but the first PC GIS application, MapInfo, appeared only in the second half of the 1980s. Human-Computer Interaction emerged as a field of research in the early 1980s, but it was only recognised by GIS researchers in the early 1990s. Graphical User Interfaces were first implemented in mainstream computing in the very early 1980s, and didn’t arrive in GIS until the 1990s. Finally, notice how e-commerce, e-mail and other Web applications were very successful in the early 1990s, but only in the mid-2000s did the GeoWeb emerge, with the success of Google Maps.

Several other examples of this gap exist – for example, the use of SQL databases. Even if you search for the earliest research paper or documentation of a major GIS functionality that has a parallel in the mainstream, the lag appears. Some very early research appears around five years after mainstream use (see the first HCI and GIS paper as an example), but it takes at least another five years to see it in real products that are used outside research labs.

This observation explains, to me, two puzzles. First, why is it that, for the two decades I’ve been working with GIS, it keeps being referred to as an ‘emerging technology’? The answer is that it is always catching up, so to a journalist who is familiar with other areas of computing it feels like something that is emerging. Second, why do companies that get into geotechnologies early either fail (there are examples aplenty in the location-based services area of the 1990s) or need about 10 years of survival to become successful? The reason here is that they are too optimistic about the technical challenges they face.

I think that the lag is due to the complexities of dealing with geographical information, and the need for hardware and software to reach the stage where geographical applications are possible. Another reason is the relative lack of investment in the development of geotechnologies, which were for a long time considered niche applications.

What is your explanation for the gap?
