I checked on Twitter to see how the follow-up meeting to Terra Future 2009 went last Friday. It was a very pleasant surprise to see that the idea I put forward in February, that the Ordnance Survey should consider hosting OpenStreetMap and donating some data to it, was voted the best idea to come out of Terra Future 2009. With this sort of peer review of the idea, and with the added benefit of two months for rethinking, I still think it is quite a good one.

The most important aspect of this idea is to understand that OpenStreetMap and the Ordnance Survey can both thrive in the GeoWeb era. Despite the imagined competition between them, each has clear value to certain parts of the marketplace. There are very clear benefits that the OpenStreetMap community can gain from working closely with the Ordnance Survey, such as the aspects of mapping about which the Ordnance Survey is highly knowledgeable, and vice versa, such as how to innovate in the delivery of geographical information. A collaborative model might work after all…

I wonder how this idea will evolve now?

If we take the lag of geotechnologies behind mainstream computing as a common feature of this type of technology, some quite interesting conclusions can be drawn for the development of new applications and products. For example, it can help predict when a given technology will be ready for wide application in the geographical field.

Here is an example: very recently, Jakob Nielsen reported that he was pleasantly surprised by the quality of reading on the Amazon Kindle 2, and that this is leading him to withdraw his conclusion that reading from a computer screen is inherently inefficient.

I have written before about the problem of computer monitor resolution and the use of small screens for urban navigation, such as maps for tourism, where you would like a map that gives you a wider context of your surroundings than the ‘tunnel vision’ provided by today’s mobile phones.

So here is my guess: in about 10 years, the Kindle 10, or whatever its equivalent is at that point, will be a suitable platform for delivering clever maps that can be as effective as paper maps. That means that if you are in the business of creating maps for these devices, you should start exploring how best to deliver them in about 5 years.

I can also guess that it will be more energy-hungry, wasteful and far too expensive compared to the paper tourist maps of today, but the prediction is about the technology, not about what I think of its use…

While working on a text about HCI and GIS, I started to notice a general pattern: a delay of ten years or so between the date a new piece of functionality becomes ‘mainstream’ in general computing and the date it becomes common in GIS.

Here are some examples:

The early use of computers in the business environment came in the mid to late 1950s, but we had to wait until the late 1960s for the first full-scale GIS (and even that was fairly primitive).

Personal computers and microcomputers appeared in the late 1970s with machines such as the Apple II, which many small offices started using for word processing and accounting, but the first PC GIS application, MapInfo, appeared only in the second half of the 1980s.

Human-Computer Interaction emerged as a field of research in the early 1980s, but it was only recognised by GIS researchers in the early 1990s.

Graphical User Interfaces were first implemented in mainstream computing in the very early 1980s, but did not arrive in GIS until the 1990s.

Finally, notice how e-commerce, e-mail and other Web applications were very successful in the early 1990s, but the GeoWeb only emerged in the mid 2000s, with the success of Google Maps.
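To make the pattern a little more concrete, here is a minimal Python sketch that tabulates the examples above. The exact years are my own rough placements within the decades mentioned, so treat the output as illustrative rather than as measured data.

```python
# A rough tabulation of the adoption dates discussed above: the year a technology
# became mainstream vs. the year it became common in GIS. The exact years are
# approximations of the decades mentioned in the post, not measured data.
examples = {
    "Business computing -> first full-scale GIS": (1957, 1968),
    "Personal computers -> first PC GIS (MapInfo)": (1978, 1986),
    "HCI research -> HCI recognised in GIS": (1982, 1992),
    "GUIs in mainstream computing -> GUIs in GIS": (1981, 1991),
    "Web applications -> GeoWeb (Google Maps)": (1993, 2005),
}

# Print the lag for each example, then the average across all of them.
for name, (mainstream, gis) in examples.items():
    print(f"{name}: {gis - mainstream} years")

average = sum(gis - mainstream for mainstream, gis in examples.values()) / len(examples)
print(f"Average lag: {average:.1f} years")
```

Rough as the individual figures are, the average lag comes out at roughly ten years, which is exactly the pattern that prompted this post.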

Several other examples of this gap exist: the use of SQL databases, for instance. Even if you look for the earliest research paper or documentation of a major GIS capability that has a mainstream parallel, the lag appears. Some very early research shows up around five years after mainstream use (the first HCI and GIS paper is an example), but it takes at least another five years before it appears in real products used outside research labs.

This observation explains two puzzles for me. First, why is it that, for the two decades I have been working with GIS, it keeps being referred to as an ‘emerging technology’? The answer is that it is always catching up, so to a journalist familiar with other areas of computing it feels like something that is still emerging. Second, why do companies that get into geotechnologies early either fail (there are plenty of examples in the Location-Based Services area in the 1990s) or need about ten years of survival before becoming successful? The reason here is that they are too optimistic about the technical challenges they face.

I think the lag is due to the complexities of dealing with geographical information, and the need for hardware and software to reach the stage where geographical applications become possible. Another reason is the relative lack of investment in the development of geotechnologies, which were for a long time considered niche applications.

What is your explanation for the gap?
