Citizen Cyberlab Summit (day 2)

The second day of the Citizen Cyberlab Summit followed the same pattern as the first: two half-day sessions, each opening with short presentations from guest speakers from outside the project consortium, followed by two demonstrations of a specific platform, tool, pilot or learning activity, and ending with discussions in groups, which were then shared back.

The first session started with the History of Citizen Sciences – Bruno Strasser (University of Geneva) – looking at both practical citizen science and the way it is integrated into the history of science. The Bioscope is a place in Geneva that allows different public-facing activities in the medical and life sciences: biodiversity, genetic research etc. They are developing new ways of doing microscopy – a microscope that shares its imagery with the whole room, so it is seen on everyone's devices, turning the microscope from a solitary experience into a shared one. They are involved in biodiversity research aimed at DNA bar-coding of different insects and animals. People collect data, extract DNA and sequence it, and then share it in a national database. Another device that they are using is a simple add-on that turns a smartphone into a powerful macro camera, so children can share images on Instagram with a Bioscope hashtag. They also run a 'Sushi night' where they tell people which fish they actually ate, if it was fish at all…
This links to a European Research Council (ERC) project – the rise of the citizen sciences – on the history of the movement. Is there such a thing as 'citizen sciences'? From a history of science perspective, in the early 20th century the amateur scientist was passing away and professionals were replacing them. He uses a definition of citizen science as amateurs producing scientific knowledge – he is not interested in doing science without the production of knowledge. He noted that there are many names used in citizen science research. In particular, the project focuses on the experimental sciences – because of the laboratory revolution of the 1930s, which dominated the 20th century. Laboratory science created the divide between the sciences and the public (Frankenstein as a pivotal image is relevant here). Science popularisation tried to bridge the gap to the public, but the rise in experimental sciences was coupled with a decline in public participation. His classification ranges from DIYbio to volunteer computing – identifying observers, analysers etc. and how they become authors of scientific papers. Citizen science is taken up by the shift in science policy towards science with and for society. He is interested in the promises that are attached to it: scientific, educational (learning more about science) and political (more democratic). It is interesting because it is an answer to 'big data', to the contract between science and society, expertise, participation and democratisation. The difference is demonstrated by the French response following Chernobyl in 1986, when a leading scientist in France claimed that the particles would stop at the French border, compared with Deepwater Horizon in 2010, where participatory mapping through Public Lab activities 'told a different story'. The project has four core research questions. How do the citizen sciences transform the relationship between science and society? Who are the participants in the 'citizen sciences'? – we have some demographic data, but no big picture – a collective biography of the people who are involved. What are the 'moral economies' that sustain the citizen sciences – the give and take that people get out of a project and what they want, motivations and rewards? Finally, how do the citizen sciences impact the production of knowledge – what is possible and what is not? He plans to use approaches from the digital humanities, building up a database about the area of citizen science and looking at Europe, the US and Asia, while considering how to run it as a participatory project. Issues of moral economies are demonstrated in the use of BOINC in commercial projects.

Lifelong learning & DIY AFM – En-Te Hwu (Edwin) (Academia Sinica, Taiwan). There are different ways of doing microscopy at different scales – for the past 100 years we have had the notion that seeing is believing, but what about the things that we can't see with light microscopy – e.g. features under 1 micron? This is possible with a scanning electron microscope, which costs 500K to 2M USD and can only image conductive samples, which requires manipulating the sample. The Atomic Force Microscope (AFM) is more affordable, at 50K to 500K USD, but still out of reach for many. It can be used to examine nanofeatures – e.g. carbon nanotubes – and the more advanced systems are starting to offer higher temporal and spatial resolution. The LEGO2NANO project started in 2013 – using a DVD optical head to monitor the probe, and other affordable parts, to make the AFM accessible. They published an Instructables prototype, which was mentioned by the press, and they called it the DIY AFM. They created an augmented reality tool to guide people in putting the device together, and it can be assembled by early high-school students – moving from the clean room to the classroom. The tool is being used to look at leaves, CDs and more – an area of 8×8 microns and above. The AFM data can be used with 3D printing – they ran a summer school in 2015 and now have a link to the LEGO Foundation. They are going through a process of reinventing the DIY AFM because of patenting and intellectual property rights (IPR) – there is a need to rethink how to do it. They started to rethink the scanner, the controller and other parts, and they share the development process (using a build-process documentation platform from the MIT Media Lab). There is a specific application of the AFM for measuring air pollution at the PM2.5 level using a DVD – removing the protection layer, exposing the disc for a period of time, and then bringing it back and measuring the results. They linked the measurements to Crowdcrafting for analysis. The concept behind the AFM is demonstrated using LEGO parts – scanning the studs of a LEGO brick – so students can understand the process.

The morning session included two demonstrations. First, Creativity in Citizen Cyberscience – Charlene Jennett (UCLIC, UCL). Charlene is interested in the psychological aspects of HCI. Creativity is a challenging topic in psychology, with different ideas of what creativity is – one view is that it is about the 'eureka' moment, as demonstrated in the Foldit breakthrough; an alternative is to notice everyday creativity – doing things differently, or in ways not thought of originally. In Cyberlab, we are looking at different projects that use different technologies in different contexts. In the first year, the team ran interviews with volunteers from BOINC, EyeWire, Transcribe Bentham, Bat Detective, Zooniverse and Mapping for Change – a wide range of citizen science projects. They found many examples – volunteers drawing pictures of the ships they were transcribing in Old Weather, or identifying the Green Peas in Galaxy Zoo, which were a new type of galaxy. Volunteers have also created chatbots about their work – e.g. in EyeWire, to answer questions – as well as visualisations of information, dictionaries and further resources. The findings showed that motivation leads to creativity aimed at helping the community or the project. They created a model linking motivation, learning through participation and volunteer identity, which lead to creativity. Tips for projects include: feedback on progress at the individual and project level; regular communication through forums and social media; community events – e.g. competitions in BOINC; and role management – if you can see that someone is doing well, encourage them to take on more responsibility. They then looked at the different pilots of Cyberlab – GeoTag-X, Virtual Atom Smasher, synthetic biology through iGEM and Extreme Citizen Science – and interviewed 100 volunteers. Preliminary results: in GeoTag-X, the design of the app is seen as the creative part, while for the analysts some of the harder tasks – e.g. georeferencing images and sharing techniques – lead to creative solutions. In the iGEM case they have seen people develop games and videos. In the ExCiteS cases, there is DIY work, writing of blog posts and participants being expressive about their own work. There are examples of people creating T-shirts, or creating maps that are appropriate for their needs. They are asking questions about other projects and how to design for creativity. It is interesting to compare the results of the project to the definition of creativity in the original call for the project. The Cyberlab project is opening up questions about creativity more than answering them.

Preliminary Results from the creativity and learning survey – Laure Kloetzer (University of Geneva). One of the aims of Citizen Cyberlab was to look at different aspects of creativity, and the project provided a lot of information from a questionnaire about learning and creativity in citizen science. The general design of the questionnaire was to capture learning outcomes. It is worth remembering that, out of the whole population, only a small group participates in citizen science – and within each project there is a tiny group of people who do most of the work (down to 16 in Transcribe Bentham) – and how people turn from the majority, who do very little work, into highly active participants is as yet unknown. In Citizen Cyberlab we carried out interviews with participants in citizen science projects, which led to a typology of learning outcomes – much wider than those usually expected or discussed in the literature – but this did not show what people actually learn. The hypothesis is that people who engage with the community can learn more than those who don't, and the final questionnaire of the project tries to quantify learning outcomes (the Informal Learning in Citizen Science – ILICS – survey). The questionnaire was tested in a partial pilot and sent to people in volunteer computing, volunteer thinking and other types of project. They had about 700 responses, and the analysis has only just started. Results: the age of participants is diverse, from 20 to 70, but this needs further analysis by project. Gender – two-thirds male, one-third female; 20% of people have only high-school-level education, while 40% have a master's degree or more – a large share of people have a university degree. They got people from 64 countries – the US, UK, Germany and France are the main ones (the survey was translated into French). Science is important to most, a passion for half, and integrated into their profession for 25% of participants. Time per week – a third of people spend less than 1 hour, and 70% spend 1–5 hours – so the questionnaire captured mostly active people. The learning results explore feelings, what people learn, how they learn, and confidence (based on the typology from previous stages of the project). The results show that most people say they learn something to a lot: on-topic knowledge (about the domain itself – 88%), scientific skills (80%), technological skills (61%), technical skills (58%), with political, collaboration and communication skills in about 50% of cases. On the 'how' question, people learn most from project documentation (75%) but also from external resources (70%). Regarding social engagement, about 11% take part in the community, and for 61% of those it is the first time in their lives that they have taken such a role. There are different roles – translation, moderating forums, and other community activities that were not anticipated in the questionnaire. 25% said that they met people online to share scientific interests – an opportunity to share and meet new people. On learning dimensions and types of learners – some people feel that they learn quite a lot about various things, while others focus on specific types of learning. Principal Component Analysis shows that learner types correlate with different forms of engagement – more time spent correlates with specific types of learner. There are different dimensions of learning that do not necessarily correlate with one another.
The cluster analysis shows about 10 groups: people who learn a lot on-topic and about science, with increased self-confidence; a second group who learn on-topic but without much confidence; group 3, like group 2 but with a weaker perception of learning; group 4, who don't seem to learn much but prefer looking at resources; group 5, who learn somewhat, especially about computers; group 6, who learn through other means; group 7, who learn by writing, communicating and collaborating, and some science; group 8, who learn only about tools but have a general feeling of learning; group 9, who learn on-topic but not transferable skills; and group 10, who learn a lot about collaboration and communication. More work is needed on this, but these are the emerging results, and the raw data will be shared in December.
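To make this kind of analysis pipeline concrete, here is a minimal sketch of running a PCA followed by a cluster analysis of Likert-style survey items in Python with scikit-learn. The file name, item names and number of clusters are assumptions for illustration, not the project's actual data or code.

```python
# Illustrative sketch only: PCA + k-means clustering of Likert-style survey items,
# similar in spirit to the ILICS analysis described above. Column names, scales
# and the number of clusters are assumptions, not the project's actual data.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Each row is a respondent; each column a 1-5 self-reported learning item (hypothetical file).
responses = pd.read_csv("ilics_responses.csv")
items = ["on_topic", "scientific_skills", "tech_skills",
         "collaboration", "communication", "confidence"]

X = StandardScaler().fit_transform(responses[items])

# Principal components summarise correlated learning dimensions.
pca = PCA(n_components=3)
components = pca.fit_transform(X)
print("Explained variance:", pca.explained_variance_ratio_)

# Cluster respondents into learner types (the text reports around 10 groups).
kmeans = KMeans(n_clusters=10, n_init=10, random_state=42)
responses["learner_type"] = kmeans.fit_predict(components)
print(responses["learner_type"].value_counts())
```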

Following the presentations, the group discussion first explored examples of creativity from a range of projects. In Crowdcrafting, when people are not active for a month they get an email telling them that their account will be deleted – one participant created activities that link to the project, e.g. tweeting transcriptions from WWI exactly 100 years after the events happened. In the Cornell Lab of Ornithology, volunteers suggest new protocols and tasks for the project – new ways of modifying things. The games of ScienceAtHome are targeted specifically at exploring when problem solving becomes creative – using the tools and explaining to the researchers how they solve issues. In World Community Grid, one volunteer created graphics from the API, which other volunteers use and now expect to see as part of the project. There is a challenge for project coordinators in what to do with such volunteers – should they become part of the core project?
Next, there are questions about roles – giving end users enough possibilities is one option, while another is to construct modular choices, allowing people to combine them in different ways. In ScienceAtHome they have decided to put people into specific modes, so changing activities is a conscious choice. There is a wide variety of participants – some want to be fairly passive with low involvement, while others might want to do much more. Also, creativity can express itself in different forms, which do not always seem linked to the project. The learning from Citizen Cyberlab is that there isn't a simple way of linking to creativity and capturing it in computer software; you need organisational structures and, most importantly, the awareness to look out for it and foster it to help it develop. Having complementarity – e.g. bringing games people and science people together to interact – is important for creativity. Another point to consider is to what degree people progress across citizen science projects and types of activities – for example, without the hackspace it was not possible to make things happen. So it is volunteers plus infrastructure and support that allow creativity to happen. There are also risks in creating something that was not known before – ignorance – in music there isn't much risk, but in medical or synthetic biology applications there can be, and we need to ask whether people hold back their creativity when they perceive risks.

The final session of the summit was dedicated to Evaluation and Sustainability, starting with the DEVISE project – Tina Phillips (Cornell Lab of Ornithology). Tina is involved in the public engagement work of the Cornell Lab of Ornithology, starting from the 2009 Public Participation in Scientific Research (PPSR) report. The findings from the CAISE project were a scarcity of evaluations, an indication that higher engagement suggested deeper learning, a need for more sensitive measures, and a lack of overall findings that relate to many projects. The DEVISE project (Developing, Validating, and Implementing Situated Evaluation Instruments) focused on evaluation in citizen science overall – identifying goals and outcomes, building professional opportunities for people in the field of informal learning, and creating a community of practice around this area. Evaluation is about improving the overall effectiveness of programmes and projects. Evaluation is different from research, as it tries to understand the strengths and weaknesses of a specific case and is less about universal rules – it is the localised learning that matters. In DEVISE, they particularly focused on individual learning outcomes. The project used a literature review and interviews with participants, project leaders and practitioners to understand their experience, and looked at a set of different theories of learning. This led to a framework for evaluating PPSR learning outcomes, which includes aspects such as interest in science and the environment, self-efficacy, motivation, knowledge of the nature of science, skills of science inquiry, and behaviour and stewardship. They also developed scales – short surveys that allow specific constructs to be examined, e.g. a survey about interest in science and nature, or one about self-efficacy for science. There is a user guide for project evaluators with guidance on planning, implementing and sharing evaluations. There is a logic model for evaluation that includes inputs, activities, outputs, and short-term and long-term impacts; it is important to note that, of these, the short- and long-term outcomes are usually not evaluated. Tina's research looked at engagement in citizen science and at how participants construct a science identity. Together with Heidi Ballard, she looked at contributory, collaborative and co-created projects – including NestWatch, CoCoRaHS and Global Community Monitor. They had 83 interviews with low, medium and high contributors, plus information from project leaders. The data analysis uses qualitative analysis methods and tools (e.g. NVivo). The interviews asked about engagement, what keeps participants involved, and memorable aspects of their research involvement. There are all sorts of extra activities that people bring into the interviews – in GCM, people say 'it completely changes the way that they respond to us and actually how much time they even give us because previously without that data, without something tangible' – powerful experiences through science. The coded interviews show that data collection, communicating with others and learning protocols are very common learning outcomes. About two-thirds of interviewees are also involved in exploring the data, but a smaller group analyses and interprets it. The majority of people came with a high interest in science, apart from those who focus on local environmental issues of water or air quality.
Lower engagers tend to feel less connected to the project – and some crave more social outlets. Participants have a strong understanding of citizen science and their role in it. Data transparency is both a barrier and a facilitator – participants want to know what is done with their data. QA/QC is important both personally and organisationally. Participants are engaged in a wide range of activities beyond the project itself. Group projects may have more impact than individual projects.
Following the presentation, the discussion explored the issue of data – people are concerned about how the data is used and what is done with it, even if they won't analyse it themselves. In eBird, you can get your own raw data; looking at the people who used the data raises the issue of the extent to which those who download it understand how to use it in an appropriate way.

The final guest presentation was Agroecology as citizen science – Peter Hanappe (Sony Computer Science Lab, Paris). Peter is interested in sustainability; in previous projects he was involved in working on accessibility issues for people who use wheelchairs, the development of NoiseTube, porting the ClimatePrediction BOINC framework to the PlayStation, and reducing energy consumption in volunteer computing. In his current work he looks at sustainability in food systems. Agroecology is the science of sustainable agriculture, reducing reliance on external inputs and trying to design productive ecosystems that produce food. Core issues include soil health and biodiversity, with different ways of implementing systems that will keep them productive. The standard methods of agriculture don't apply; local conditions need to be understood, and the practice of agroecology is very knowledge-intensive. Best practices are not always studied scientifically – many farms in the world are small (below 2 hectares; there are 475 million farms across the world), and more than 100 million households around the world grow food. This provides an opportunity for citizen science – each season can be seen as an experiment, engaging more people and asking them to share information so that the knowledge slowly develops to provide all the needed details. Part of his aim is to develop new, free tools and instruments to facilitate the study of agroecology. This can be a basic set with information about temperature and humidity, or something more complex. The idea is to have a local community and a remote community that share information on a wiki to learn how to improve. Together with a group of enthusiasts that he recruited in Paris, they ran CitizenSeeds, where they tried different seeds in a systematic way – for example, with a fixed calendar for planting and capturing information. People took images and shared information online, including how much sunlight the plants get and how humid the soil is, and the information can be viewed in calendar form. They had 80 participants this year. This is an opportunity for citizen science – challenges include community building and figuring out how much of it is documentation of what worked, compared to experimentation: what is the right way to carry out simple, relevant, reproducible experiments? Also, if there is a focus on soil health, multi-year experiments are needed.

I opened the last two demonstrations of the session with a description of the Extreme Citizen Science pilots – starting, similarly to the first presentation of the day, by noticing the three major periods in science (with regard to public participation). First, in the early period of science you needed to be wealthy to participate – although there are examples like Mary Anning, who, for gender, religion and class reasons, was not accepted as an equal within the emerging scientific establishment; it is justified to describe her as a citizen scientist, albeit one working in a full-time capacity. However, she is the exception that points to the rule. More generally, not only was science understood by few, but the general population also had very limited literacy, so it was difficult to engage them in joint projects. During the period of professional science, there are a whole host of examples of volunteer data collection – from phenology to meteorology and more. As science became more professional, the role of volunteers diminished, and scientists looked to automatic sensors as a more reliable means of collecting information. At the same time, until the late 20th century, most of the population had limited education – mostly up to high school – so the tasks that they were asked to perform were limited to data collection. In the last ten years there are many more people with higher education, especially in industrialised societies, and that is part of the opening up of citizen science that we see now: they can participate much more deeply in projects.
Yet, with all these advances, citizen science is still mostly about data collection and basic analysis, and it is also targeted at the more highly educated parts of the population. Therefore, Extreme Citizen Science is about the extremities of citizen science practice – engaging people in the whole scientific process, allowing them to shape data collection protocols, to collect and analyse the data, and to use it in ways that suit their goals. It is also important to engage people at all levels of literacy, and to extend the practice geographically across the world.
The Extreme Citizen Science (ExCiteS) group is developing methodologies aimed at facilitating this vision. Tools like GeoKey, which is part of the Cyberlab project, facilitate community control over the data and over decisions about what information is shared and with whom. Community Maps, which is based on GeoKey, is a way to allow community data collection and visualisation; there is also a link to EpiCollect, so mobile data collection is possible, with GeoKey managing the information.
These tools can be used for community air quality monitoring, using affordable and accessible methods (diffusion tubes and borrowed black carbon monitors), and they also offer the potential of creating a system that will be suitable for people with low levels of literacy. Another pilot project carried out in Cyberlab included playshops and the exploration of scientific concepts through engagement and play. This also included techniques from Public Lab, such as kite and balloon mapping, with the potential of linking the outputs to Community Maps through GeoKey.

Finally, CCL Tracker was presented by Jose Luis Fernandez-Marquez (CERN) – the motivation for creating CCL Tracker is the need to understand more about participants in citizen cyberscience projects and what they learn. Usual web analytics provide information about who is visiting the site and how they arrive, but tools like Google Analytics do not measure in detail what people do on the site. We want to understand the 20% of users who do 80% of the work in citizen cyberscience projects, and that requires much more information. Using an example of Google Analytics data from a volunteer computing project, we can see about 16K sessions and 8,000 users from 108 countries, at about 400 sessions per day. We can see that most are male, which route they took to arrive at the website, and so on. CCL Tracker helps to understand the actions performed on the site and to measure participants' contributions. There is a need to make the analytics data public and to create advanced data aggregations – clustering the data so it does not disclose unwanted details about participants. The CCL Tracker library works together with Google Tag Manager and Google Analytics, and Google Super Proxy is used to share the information.
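As an illustration of the kind of privacy-preserving aggregation described above, here is a minimal sketch in Python; the field names, file names and suppression threshold are assumptions for illustration, and this is not the CCL Tracker code itself.

```python
# Illustrative sketch: aggregating event-level analytics before publishing them,
# so that small groups of participants are not identifiable. Field names and the
# suppression threshold are assumptions, not taken from the CCL Tracker library.
import pandas as pd

# Hypothetical export: one row per tracked action, with user_id, country, action.
events = pd.read_csv("tracker_events.csv")

summary = (events.groupby(["country", "action"])
                 .agg(users=("user_id", "nunique"),
                      actions=("user_id", "size"))
                 .reset_index())

# Suppress rows that describe fewer than 5 users before making the data public.
MIN_USERS = 5
public_summary = summary[summary["users"] >= MIN_USERS]
public_summary.to_csv("public_aggregates.csv", index=False)
```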

New paper: Footprints in the sky – using student track logs in Google Earth to enhance learning

In 2011-2012, together with Richard Treves, I was awarded a Google Faculty Research Award, and we were lucky to work with Paolo Battino for about a year, exploring how to use Google Earth tours for educational aims. The details of the project and some reports from it are available on Richard's blog; he led many aspects of the work. Now, over two years since the end of the project, we have a publication in the Journal of Geography in Higher Education. The paper, titled ‘Footprints in the sky: using student track logs from a “bird’s eye view” virtual field trip to enhance learning’, is now out and describes the methodology that we developed for tracking students’ actions.

The abstract of the paper is:

Research into virtual field trips (VFTs) started in the 1990s but, only recently, the maturing technology of devices and networks has made them viable options for educational settings. By considering an experiment, the learning benefits of logging the movement of students within a VFT are shown. The data are visualized by two techniques: “animated path maps” are dynamic animations of students’ movement in a VFT; “paint spray maps” show where students concentrated their visual attention and are static. A technique for producing these visualizations is described and the educational use of tracking data in VFTs is critically discussed.
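For readers curious about what a "paint spray map" might involve, here is a minimal, illustrative sketch that turns logged viewpoint positions into a density surface; the log format and processing steps are assumptions and not the pipeline used in the paper.

```python
# Illustrative sketch: building a simple "paint spray"-style density map from
# logged viewpoint positions (longitude/latitude pairs). The log format is an
# assumption; the paper's own pipeline used Google Earth track logs.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical log: one "lon,lat" row per sampled camera position.
points = np.loadtxt("student_track_log.csv", delimiter=",")
lon, lat = points[:, 0], points[:, 1]

# A 2D histogram approximates where visual attention was concentrated.
heatmap, xedges, yedges = np.histogram2d(lon, lat, bins=100)

plt.imshow(heatmap.T, origin="lower",
           extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]],
           cmap="hot")
plt.colorbar(label="Samples per cell")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Concentration of student viewpoints (illustrative)")
plt.show()
```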

The paper is available here, and special thanks to Ed Parsons who advised us during the project.

Kindle Maps and E Ink properties

CHI 2013 and the GeoHCI workshop highlighted to me the importance of understanding the media used for maps. During CHI, the ‘Paper Tab’ demonstration used E Ink displays to demonstrate interaction across multiple displays. I found the interactions non-intuitive and not mapping very well to what you would expect to do with paper, and so a source of confusion – especially when the displays will eventually be mixed with real papers on a desk. Anyhow, it is an interesting exploration.

E Ink displays are very interesting in terms of their potential use for mapping. The image below shows one of the early prototypes of maps designed specifically for the Kindle or, more accurately, for the E Ink technology that is at the heart of the Kindle. From the point of view of the usability of geographical information technologies, E Ink is especially interesting, for several reasons.

Kindle map

First, the resolution of the Kindle display is especially high (close to 170 pixels per inch) when the size of the screen is considered. The Apple Retina display provides even better resolution, and in colour, which makes maps on the iPad also interesting, as they are starting to get closer to the resolution that we are familiar with from paper maps (usually between 600 and 1200 dots per inch). Resolution matters especially when displaying maps because users need to see the context of the location that they are exploring. Think of the physiology of scanning a map, and the fact that capturing more information on one screen can help in understanding the relationships between different features. Notice that when the resolution is high but the screen area is limited (for example, the screen of a smartphone), the limitations on the area that can be displayed are quite severe, and that reduces the usability of the map – scrolling requires you to keep in memory where you came from.

Secondly, E Ink displays can easily be read even in direct sunlight, because they are reflective and do not use a backlight. This makes them very useful for outdoor use, where other displays don't perform very well.

Thirdly, they use less energy and can display a map for a long time while it is used as a reference, whereas with most active displays (e.g. a smartphone) continuous use will cause rapid battery drain.

On the downside, E Ink refresh rates are slow, so these displays are more suitable for static display than for dynamic and interactive use.

During the summers of 2011 and 2012, several MSc students at UCL explored the potential of E Ink for mapping in detail. Nat Evatt (whose map is shown above) worked on the cartographic representation and showed that it is possible to create highly detailed and readable maps even within the limitation of the 16 levels of grey that are available. The surprising aspect he found is that, while some maps are available in the Amazon Kindle store (the most likely place for e-book maps), it looks like they were simply converted to shades of grey without careful attention to the device, which reduces their usability.

The work of Bing Cui and Xiaoyan Yu (a collaboration between MSc students at UCLIC and in GIScience) included a survey in the field (luckily on a fairly sunny day near the Tower of London) in which they explored which scales work best in terms of navigation and readability. The work shows that maps at a scale of 1:4000 are effective – and considering that with E Ink the best user experience comes when the number of refreshes is minimised, that could be a useful guideline for e-book map designers.
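A back-of-the-envelope calculation shows why 1:4000 is a comfortable scale on such a device; the display figures below (6-inch screen, 600×800 pixels, roughly 167 PPI) are assumptions based on early Kindle hardware rather than measurements from the student projects.

```python
# Back-of-the-envelope: ground coverage of a 1:4000 map on an early Kindle-class
# display. Display figures are assumptions (6-inch, 600x800 px, ~167 PPI).
PPI = 167
width_px, height_px = 600, 800
scale = 4000  # 1:4000

INCH_TO_M = 0.0254
width_m_on_screen = (width_px / PPI) * INCH_TO_M    # ~0.091 m of physical screen
height_m_on_screen = (height_px / PPI) * INCH_TO_M  # ~0.122 m of physical screen

ground_width_m = width_m_on_screen * scale    # ~365 m of ground
ground_height_m = height_m_on_screen * scale  # ~487 m of ground
print(f"One screen at 1:4000 covers roughly {ground_width_m:.0f} m x {ground_height_m:.0f} m")
```

So a single screen at 1:4000 shows a neighbourhood-sized area (roughly 365 m by 490 m under these assumptions), which helps explain why it balances readability against the number of page refreshes needed while navigating.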

CHI 2013: sustainability, development and activism

As I’ve noted in the previous post, I have just attended the CHI (Computer-Human Interaction) conference for the first time. It’s a fairly big conference, with over 3,000 participants and multiple tracks that have evolved over the 30 years that CHI has been going, including the familiar paper presentations, panels, posters and courses, but also the less familiar ‘interactivity areas’, various student competitions, alt.CHI and Special Interest Group meetings. It’s all fairly daunting, even with all my existing experience of academic conferences. During the GeoHCI workshop I discovered the MyCHI application, which helps in identifying interesting papers and sessions (including social recommendations) and in setting up a conference schedule from these papers. It is a useful and effective app that I used throughout the conference (and I wish something similar were available at other large conferences, such as the AAG annual meeting).

With MyCHI in hand, the fog started to lift and I could see a way through the programme, but the trepidation about the relevance of CHI to my interests remained and even somewhat increased after a quick search for the words ‘geog’, ‘marginal’ and ‘disadvantage’ returned nothing. The conference video preview (below) also made me somewhat uncomfortable. I have a generally cautious approach to the understanding and development of digital technologies, and a strong dislike of the breathless excitement about new innovations that are not necessarily making the world a better place.

Luckily, after a few more attempts I found papers about ‘environment’, ‘development’ and ‘sustainability’. Moreover, I discovered the special interest groups (SIGs) dedicated to HCI for Development (HCI4D) and HCI for Sustainability, and the programme started to build up. The sessions of these two SIGs were an excellent occasion to meet other people who are active in similar topics, and even to learn about the fascinating concept of ‘Collapse Informatics‘, which is clearly inspired by Jared Diamond’s book and explores “the study, design, and development of sociotechnical systems in the abundant present for use in a future of scarcity“.

Beyond the discussions, meeting people with shared interests and seeing that there is scope within CHI for technology analysis and development that matches my approach, several papers and sessions were especially memorable. The studies by Elaine Massung and colleagues about community activism in encouraging shops to close their doors (and therefore waste less heating energy), and by Kate Starbird on the use of social media in passing information between first responders during the Haiti earthquake, explored how volunteered ‘crowd’ information can be used in crisis response and environmental activism.
Exploring a map next to Père Lachaise
Other valuable papers in the area of HCI for development and sustainability include the excellent longitudinal study by Susan Wyche and Laura Murphy on the way mobile charging technology is used in Kenya, a study by Adrian Clear and colleagues about the energy use and cooking practices of university students in Lancaster, a longitudinal study of responses to indoor air pollution monitoring by Sunyoung Kim and colleagues, and an interesting study by Derek Lomas and colleagues of the 8-bit, $10 computers that are common in many countries across the world.

The ‘CHI at the Barricades – an activist agenda?‘ panel was one of the high points of the conference, with a showcase of the ways in which researchers in HCI can take a more active role in their research and lead social or environmental change, and a consideration of how interactions that enable or promote such changes can be used to achieve positive outcomes. The discussions that followed the short interventions from the panel covered issues from accessibility to ethics to ways of acting and leading change. Interestingly, while some presenters were comfortable with their activist role, the term ‘action-research’ was not mentioned. It was also illuminating to hear Ben Shneiderman emphasising his view that HCI is about representing and empowering the people who use the technologies that are being developed. His call for ‘activist HCI’ provides a way to interpret ‘universal usability‘ as an ethical and moral imperative.

It was good to see the work of the Citizen Sort team reach the finals of the student game competition, and to hear about their development of citizen science games.

So despite the early concerns, CHI was a conference worth attending, and its specific jargon now seems more understandable. I wish the conference website had a big sign saying ‘New to CHI? Start here…’

Google Research Award – Identifying Learning Benefits of Google Earth Tours in Education


It is always nice to announce good news. Back in February, together with Richard Treves at the University of Southampton, I submitted an application to Google’s Faculty Research Award program for a grant to investigate Google Earth Tours in education. We were successful in getting a grant worth US$86,883. The project builds on my expertise in usability studies of geospatial technologies, including the use of eye tracking and other usability engineering techniques for GIS, and on Richard’s expertise in Google Earth tours and education and his longstanding interest in usability issues.

In this joint UCL/Southampton project, UCL will be the lead partner and we will appoint a junior researcher for a year to develop and run experiments that will help us understand the effectiveness of Google Earth Tours in geographical learning; we aim to come up with guidelines for their use. If you are interested, let me know.

Our main contact at Google for the project is Ed Parsons. We were also helped by Tina Ornduff and Sean Askay who acted as referees for the proposal.
The core question that we want to address is: “How can Google Earth Tours be used to create an effective learning experience?”

So what do we plan to do? Previous research on Google Earth Tours (GETs) has shown them to be an effective visualization technique for teaching geographical concepts, yet their use in this way is essentially passive. Active learning is a successful educational approach in which student activity is combined with instruction to enhance learning. In the proposal we suggest that there is great educational value in combining the advantages of the rich visualization of GETs with student activities. Evaluating the effectiveness of this combination is the purpose of the project, and we plan to do this by creating educational materials that consist of GETs and activities, and testing them against other versions of the materials using student tests, eye tracking and questionnaires as data-gathering techniques.

We believe that by improving the techniques by which spatial data is visualized we are improving spatial information access overall.
A nice aspect of getting the project funded is that it works well with a project led by Claire Ellul and Kate Jones and funded by JISC. The G3 project, or “Bridging the Gaps between the GeoWeb and GIS”, touches on similar aspects, and we will surely share knowledge with them.
For more background on Richard Treves, see his blog (where the same post is published!)

Some important questions about the usability of geospatial technologies

At the beginning of May, I gave a lecture at the UCL Interaction Centre (UCLIC) seminar titled ‘Interacting with Geospatial Technologies – Overview and Research Challenges’. The talk was somewhat similar to the one that I gave at the BCS Geospatial SIG. However, I was also trying to answer a question that I was asked during a UCLIC seminar in 2003, when, together with Carolina Tobón, I presented early work on the usability of GIS for e-government applications. During that talk, the discussion was, as it always is in UCLIC, intensive. One core question that remained with me from the discussion was: ‘What makes geospatial technology special, or is it just another case of a complex and demanding information system that you should expect difficulties with and spend time to master?’

Over the years, I have been trying to improve the answer beyond ‘it’s special because it’s about maps‘ or ‘geospatial information comes in large volumes and requires special handling‘ or similar partial answers. In the book Interacting with Geospatial Technologies, different chapters deal with these aspects in detail. During the talk, I tried to cover some of them. In particular, I highlighted the lag of geospatial technologies behind other computing technologies (an indication of complexity), the problems of devices such as SatNavs that require design intervention in the physical world to deal with a design fault (see image), and the range of problems in GIS interfaces that were discovered in the snapshot study carried out by Antigoni Zafiri.

There was an excellent discussion after the presentation ended. Some of the very interesting questions that I think need addressing are the following:

  • In the talk, I highlighted that examples of spatial representations exist in non-literate societies, and that, therefore, the situation with computers, where textual information is much more accessible than geographical information, is something that we should consider as odd. The question that was raised was about the accessibility of these representations – how long does it take people from the societies that use them to learn them? Is the knowledge about them considered privileged or held by a small group?
  • For almost every aspect of geospatial technology use, there is some parallel elsewhere in the ICT landscape, but it is the combination of issues – such as the need for a base map as a background to add visualisation on top of it, or the fact that end users of geospatial analysis need the GIS operators as intermediaries (and the intermediaries are having problems with operating their tools – desktop GIS, spatial databases etc. – effectively) – that creates the unique combination that researchers who are looking at HCI issues of GIS are dealing with. If so, what can be learned from existing parallels, such as the organisations where intermediaries are used in decision making (e.g. statisticians)?
  • The issue of task analysis and considerations of what the user is trying to achieve were discussed. For example, Google Maps makes the task of ‘finding directions from A to B’ fairly easy by using a button on the interface that allows the user to put in the information. To what extent do GIS and web mapping applications help users to deal with more complex, temporally longer and less well-defined tasks? This is a topic that was discussed early on in the HCI (Human-Computer Interaction) and GIS literature in the 1990s, and we need to continue and explore.

In my talk I used a slide about a rude group on Facebook that relates to a specific GIS package. I checked it recently and was somewhat surprised to see that it is still active. I thought that it would go away with more recent versions of the software, which should have improved its usability. Clearly there is space for more work to deal with users’ frustration. Making users happy is, after all, the goal of usability engineering…

G3 – Bridging the Gap between the GeoWeb and GIS

The G3 Project is a new project led by Claire Ellul and Kate Jones and funded by the JISC geospatial working group. The project’s aim is to create an interactive online mapping tutorial system for students in disciplines that are not familiar with GIS, such as urban design, anthropology and environmental management.

The project provides a template for the introduction of geographical concepts to new groups of learners. By choosing a discipline-specific scenario, key geographic concepts and functions will be presented to novices in a useful and usable manner, so the learning process is improved. Users will be introduced to freely available geographic data relevant to their particular discipline and will know where to look for more. The G3 Project will create a framework to support learners and grow their confidence without them having to face the difficult interfaces and complexity of desktop mapping systems, which are likely to create obstacles for students and leave them feeling that ‘this type of analysis is not for me’.

Check the project’s blog for regular updates and developments.