7 February, 2015
The Open University, with support from Nominet Trust and UTC Sheffield, has launched the nQuire-it.org website, which seems to have great potential for running citizen science activities. The nQuire platform allows participants to create science inquiry ‘missions’. It is accompanied by an Android app called Sense-it, which exposes all the sensors integrated in a smartphone and lets you see what they are doing and the values they are reporting.
The process of setting up a project on the nQuire-it site is fairly quick, and you can figure it out in a few clicks. Joining the project that you’ve created is also fairly simple on the phone, and the integration with Google, Facebook and Twitter accounts means that linking profiles is quick. Once a few friends start using it, the Sense-it app lets you collect data and share it with the other participants in the project on the nQuire website, where participants can comment on the data, ask questions about how it was produced, and vote it up or down. All this makes nQuire a very suitable place for experimenting with smartphone sensors and prototyping citizen science activities. It also provides an option for recording geographic location, and it is good to see that this is disabled by default, so the project designer needs to actively switch it on.
16 January, 2015
Thanks to invitations from UNIGIS and Edinburgh Earth Observatory / AGI Scotland, I had an opportunity to reflect on how Geographic Information Science (GIScience) can contribute to citizen science, and what citizen science can contribute to GIScience.
Despite the fact that it’s been 8 years since the term Volunteered Geographic Information (VGI) was coined, I didn’t assume that the entire audience was aware of how it came about or of the range of VGI sources. I also didn’t assume knowledge of citizen science, which is a far less familiar term for a GIScience audience. Therefore, before going into a discussion about the relationship between the two areas, I opened with a short introduction to both, starting with VGI and then moving to citizen science. After introducing the two areas, I suggested the relationships between them – there are types of citizen science that overlap with VGI, such as biological recording and environmental observations, as well as community (or civic) science, while other types, such as volunteer thinking, include many projects that are non-geographical (think EyeWire or Galaxy Zoo).
However, I didn’t just list a catalogue of VGI and citizen science activities. Personally, I find trends a useful way to make sense of what is happening. I’ve learned that from the writing of Thomas Friedman, who used trends in several of his books to help the reader understand where the changes he covers came from. Trends are, of course, speculative, as it is very difficult to demonstrate causality or to be certain about the contribution of each trend to the end result. With these caveats in mind, there are several technological and societal trends that I used in the talk to explain where VGI (and the VGI element of citizen science) came from.
Of all these trends, I keep coming back to one technical and one societal change that I see as critical. The removal of the selective availability of GPS in May 2000 is my top technical change, as its cascading effects led to the deluge of good-enough location data that is behind VGI and citizen science. On the societal side, it is the Flynn effect, as a signifier of the educational shift of the past 50 years, that explains how the ability to participate in scientific projects has increased.
In terms of the reciprocal contributions between the fields, I suggest the following:
GIScience can support citizen science by contributing the data quality assurance methods that are emerging in VGI research. There are also plenty of spatial analysis methods that take heterogeneity into account and are therefore useful for citizen science data. The areas of geovisualisation and human-computer interaction studies in GIS can assist in developing more effective and useful applications for citizen scientists and for the people who use their data. There is also plenty to do in considering semantics, ontologies, interoperability and standards. Finally, since critical GIScientists have long been looking into the societal aspects of geographical technologies – privacy, trust, inclusiveness and empowerment – they have plenty to contribute to citizen science activities in terms of how to run them in more participatory ways.
On the other hand, citizen science can contribute to GIScience, and especially to VGI research, in several ways. First, citizen science demonstrates the longevity of VGI data sources, with some projects going back hundreds of years. It provides challenging datasets in terms of their complexity, ontology, heterogeneity and size. It raises questions about scale and how to deal with large, medium and local activities while merging them into a coherent dataset. It also provides opportunities for GIScientists to contribute to critical societal issues such as climate change adaptation or biodiversity loss. It offers some of the most interesting usability challenges, such as tools for non-literate users, and, finally, plenty of opportunities for interdisciplinary collaborations.
The slides from the talk are available below.
29 January, 2014
Once upon a time, Streetmap.co.uk was one of the most popular web mapping sites in the UK, competing successfully with its biggest rival at the time, Multimap. It was ranked second in The Daily Telegraph’s list of leading mapping sites in October 2000 and described as: ‘Must be one of the most useful services on the web – and it’s completely free. Zoom in on any UK area by entering a place name, postcode, Ordnance Survey grid reference or telephone code.’ It’s still running and, because of its legacy, it is around the 1,250th most popular website in the UK (though 4 years ago it was among the top 350).
So far, nothing is especially noteworthy – a website that was popular a decade ago has been replaced by a newer one, Google Maps, which provides better search results, more information, and is the de facto standard for web mapping. Moreover, already in 2006 Artemis Skarlatidou demonstrated that, of the UK web mapping crop, Streetmap scored lowest on usability, with only MapQuest – which largely ignored the UK – being worse.
However, recently, while running a practical session introducing user-centred design principles to our MSc in GIS students, I noticed an interesting implication of the changes in the web mapping environment – Streetmap has stopped being usable simply because it didn’t bother to update its interaction design. By doing nothing while the environment around it changed, it became unusable, with users failing to perform even the most basic tasks.
The students explored the mapping offerings from Google, Bing, Here and Streetmap. It was fairly obvious that across this cohort (early to mid 20s), Google Maps was the default against which other systems were compared. It was not surprising to find impressions that Streetmap is ‘very old fashioned‘ or ‘archaic‘. More interesting was to notice people getting frustrated that the ‘natural’ interaction of zooming in and out using the mouse wheel just didn’t work, or failing to find the zoom buttons at all. At some point in the past 10 years, people internalised the interaction mode of zooming with the mouse and stopped using the application’s zoom buttons, which explains the design decision in the new Google Maps interface to eliminate the dominant zoom slider from the left side of the map. Of course, the Streetmap interface is also not responsive to touch-screen interactions, which are likewise learned across applications.
I experienced a similar, and somewhat amusing, incident during the registration process at SXSW Eco, when I handed over my obviously old laptop at the registration desk to provide some details, and the woman behind the desk tried to ‘pinch’ the screen in an attempt to zoom in. Considering that she was likely to be interacting with tablets most of the day (it was, after all, SXSW), this was not surprising. Interactions are learned and internalised, and we expect to experience them across devices and systems.
So what’s to learn? While this is another example of ‘Jakob’s Law of Internet User Experience‘, which states that ‘users spend most of their time on other sites’, it is very relevant to the many websites that use web mapping APIs to present information – from our own communitymaps.org.uk to the Environment Agency’s What’s in Your Backyard. In all these cases, it is critical to pay attention to the basic map exploration interactions (pan, zoom, search) and make sure that they match common practices across the web. Otherwise, you might end up like Streetmap.
Looking across the range of crowdsourced geographic information activities, some regular patterns are emerging, and it might be useful to start noticing them as a way to think about what is and is not possible in this area. Since I don’t like the concept of ‘laws’ – as in Tobler’s first law of geography, which states that ‘everything is related to everything else, but near things are more related than distant things’ – I would call them assertions. There is also something apt about using the word ‘assertion’ in the context of crowdsourced geographic information, as it echoes Mike Goodchild’s differentiation between asserted and authoritative information. So not laws, just assertions, or even observations.
The first one is a rephrasing of a famous quote:
‘you can be supported by a huge crowd for a very short time, or by few for a long time, but you can’t have a huge crowd all of the time (unless data collection is passive)’
So the Christmas Bird Count can have tens of thousands of participants for a short time, while the number of people who operate weather observation stations will be much smaller. The same is true for OpenStreetMap – for crisis mapping, which is a short-term task, you can get many contributors, but for the regular updating of an area under usual conditions, there will be only a few.
The exception to the assertion is passive data collection, where information is gathered automatically by logging data from a sensor – for example, recording GPS tracks to improve navigation information.
26 June, 2013
CHI 2013 and the GeoHCI workshop highlighted to me the importance of understanding media for maps. During CHI, the ‘Paper Tab’ demonstration used E Ink displays to demonstrate multi-display interaction. I found the interactions non-intuitive, mapping poorly to what you would expect to do with paper, and therefore a source of confusion – especially when such displays are eventually mixed with real papers on a desk. Still, it is an interesting exploration.
E Ink displays are very interesting in terms of their potential use for mapping. The image below shows one of the early prototypes of maps designed specifically for the Kindle – or, more accurately, for the E Ink technology that is at the heart of the Kindle. From the point of view of the usability of geographical information technologies, E Ink is especially interesting, for several reasons.
First, the resolution of the Kindle display is especially high (close to 170 pixels per inch) when the size of the screen is considered. The Apple Retina display provides even better resolution, and in colour, which makes maps on the iPad also interesting, as they are starting to get closer to the resolution we are familiar with from paper maps (usually between 600 and 1200 dots per inch). Resolution matters especially when displaying maps because users need to see the context of the location they are exploring. Think of the physiology of scanning a map: capturing more information in one screen helps in understanding the relationships between different features. Notice that when the resolution is high but the screen area is limited (for example, on a smartphone), the constraints on the area that can be displayed are quite severe, and that reduces the usability of the map – scrolling requires you to keep in your memory where you came from.
Secondly, E Ink displays can be easily read even in direct sunlight, because they are reflective and do not use a backlight. This makes them very useful outdoors, where other displays perform poorly.
Thirdly, they use less energy and can display a map for long periods while it is used as a reference, whereas with most active displays (e.g. a smartphone) continuous use causes rapid battery drain.
On the downside, E Ink refresh rates are slow, so these displays are more suitable for static maps than for dynamic, interactive ones.
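The resolution gap between E Ink and paper can be made concrete with a small back-of-the-envelope calculation. The 170 PPI and 600 DPI figures come from the discussion above; the 3.6-inch display width is an illustrative assumption for a 6-inch-diagonal e-reader panel, not a measured value.

```python
def dots_across(width_inches: float, dots_per_inch: float) -> int:
    """Number of addressable dots across a display or print of a given width."""
    return round(width_inches * dots_per_inch)

WIDTH_IN = 3.6  # assumed width (inches) of a 6-inch-diagonal E Ink panel

eink = dots_across(WIDTH_IN, 170)   # E Ink at ~170 PPI
paper = dots_across(WIDTH_IN, 600)  # paper map at the 600 DPI lower bound

print(eink, paper)             # 612 2160
print(round(paper / eink, 1))  # paper holds ~3.5x the linear detail
```

So even at the conservative end of the print range, a paper map packs roughly three and a half times more detail across the same width – which is why high-PPI screens are the first ones to feel map-like.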
During the summers of 2011 and 2012, several MSc students at UCL explored the potential of E Ink for mapping in detail. Nat Evatt (whose map is shown above) worked on the cartographic representation and showed that it is possible to create highly detailed and readable maps even within the limitation of the 16 levels of grey that are available. The surprising aspect he found is that while some maps are available in the Amazon Kindle store (the most likely place for e-book maps), it looks like they were simply converted to shades of grey without careful attention to the device, which reduces their usability.
The work of Bing Cui and Xiaoyan Yu (a collaboration between MSc students at UCLIC and in GIScience) included a survey in the field (luckily on a fairly sunny day near the Tower of London), exploring which scales work best in terms of navigation and readability. The work shows that maps at a scale of 1:4000 are effective – and considering that, with E Ink, the best user experience comes when the number of refreshes is minimised, this could be a useful guideline for e-book map designers.
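The 1:4000 finding translates directly into ground coverage: at that scale, each millimetre of screen represents 4 metres on the ground. A minimal sketch of the arithmetic follows; the 90 mm screen width is an illustrative assumption for a 6-inch E Ink panel, not a figure from the study.

```python
def ground_coverage_m(screen_width_mm: float, scale_denominator: int) -> float:
    """Real-world distance (in metres) shown across the screen at a given map scale."""
    # screen mm x scale gives ground mm; divide by 1000 for metres
    return screen_width_mm * scale_denominator / 1000.0

# At 1:4000 on an assumed ~90 mm wide E Ink screen:
print(ground_coverage_m(90, 4000))  # 360.0 metres of ground across the screen
```

A few hundred metres per screen is roughly the range a pedestrian can relate to without panning, which may be part of why that scale worked well for navigation with a slow-refresh display.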
As I’ve noted in the previous post, I have just attended the CHI (Computer-Human Interaction) conference for the first time. It’s a fairly big conference, with over 3,000 participants and multiple tracks that have evolved over the 30 years that CHI has been going, including the familiar paper presentations, panels, posters and courses, but also the less familiar ‘interactivity areas’, various student competitions, alt.CHI and Special Interest Group meetings. It’s all fairly daunting, even with all my existing experience of academic conferences. During the GeoHCI workshop I discovered the MyCHI application, which helps in identifying interesting papers and sessions (including social recommendations) and in setting up a conference schedule from them. It is a useful and effective app that I used throughout the conference (I wish something similar were available at other large conferences, such as the AAG annual meeting).
With MyCHI in hand, while the fog started to lift and I could see a way through the programme, the trepidation about the relevance of CHI to my interests remained, and even somewhat increased, after a quick search for the words ‘geog’, ‘marginal’ and ‘disadvantage’ returned nothing. The conference video preview (below) also made me somewhat uncomfortable. I take a generally cautious approach to the understanding and development of digital technologies, and have a strong dislike of breathless excitement about new innovations that do not necessarily make the world a better place.
Luckily, after a few more attempts I found papers about ‘environment’, ‘development’ and ‘sustainability’. Moreover, I discovered the Special Interest Groups (SIGs) dedicated to HCI for Development (HCI4D) and HCI for Sustainability, and the programme started to build up. The sessions of these two SIGs were an excellent occasion to meet other people who are active in similar topics, and even to learn about the fascinating concept of ‘Collapse Informatics‘, which is clearly inspired by Jared Diamond’s book and explores “the study, design, and development of sociotechnical systems in the abundant present for use in a future of scarcity“.
Beyond the discussions, meeting people with shared interests, and seeing that there is scope within CHI for technology analysis and development that matches my approach, several papers and sessions were especially memorable. The studies by Elaine Massung and colleagues, about community activism in encouraging shops to close their doors (and therefore waste less heating energy), and by Kate Starbird, on the use of social media in passing information between first responders during the Haiti earthquake, explored how volunteered ‘crowd’ information can be used in crises and environmental activism.
Other valuable papers in the area of HCI for development and sustainability include the excellent longitudinal study by Susan Wyche and Laura Murphy on the way mobile charging technology is used in Kenya, a study by Adrian Clear and colleagues about the energy use and cooking practices of university students in Lancaster, a longitudinal study of responses to indoor air pollution monitoring by Sunyoung Kim and colleagues, and an interesting study by Derek Lomas and colleagues of the 8-bit, $10 computers that are common in many countries across the world.
The ‘CHI at the Barricades – an activist agenda?‘ panel was one of the high points of the conference, showcasing ways in which researchers in HCI can take a more active role in their research and lead social or environmental change, and considering how interaction design can enable or promote such changes to achieve positive outcomes. The discussions that followed the short interventions from the panel covered issues from accessibility to ethics to ways of acting and leading change. Interestingly, while some presenters were comfortable with their activist role, the term ‘action research’ was not mentioned. It was also illuminating to hear Ben Shneiderman emphasising his view that HCI is about representing and empowering the people who use the technologies being developed. His call for ‘activist HCI’ provides a way to interpret ‘universal usability‘ as an ethical and moral imperative.
So despite the early concerns, CHI was a conference worth attending, and the specific jargon of CHI now seems more understandable. I wish the conference website had a big sign: ‘New to CHI? Start here…’