25 February, 2014
Now that the Citizen Cyberscience Summit is over, it is time to reflect more widely on the event and what it says about the state of citizen science. My previous posts, covering the three days of the summit (first day, second day, third day), were written every day during the summit – this is something I learned from Andrea Wiggins and the way she blogged about the 2012 summit (here are her descriptions of the first, second and third days). However, unlike Andrea, my notes focused on my immediate thoughts from each day and less on a synopsis of what I’d been listening to. The current post reflects on the event as a whole, in terms of my personal expectations and hopes for the summit. It also covers the rationale behind the summit’s design, so it can be evaluated against the practice. As a result, it’s a long piece!
The structure of the summit follows the model that we first tried in 2012 and that proved to be very successful. When trying to explain the summit’s organisation, I use the description ‘starts fairly formal, and ends with organised chaos’, which tries to mix traditional academic conferences with open and creative events such as hackathons, but to do so in an inclusive way, so that people from different communities of practice can feel that there is something for them in the summit. In practice, this translated to the three days of the summit in the following way.
The first day, which used the formal setting of the Royal Geographical Society, provided the academic gravitas needed to send the message that citizen science is noteworthy. About half of the talks on this day were from speakers we had invited to ‘set the scene‘. We didn’t provide a detailed brief to set the speakers ‘on message’, instead inviting them to discuss their work and how it links to a general theme. The rest of the talks were selected carefully from the open submissions to show the breadth of citizen science.
We deliberately chose an open submission format which falls somewhere between community-led conferences (such as OpenStreetMap’s State of the Map) and academic conferences, to make both groups comfortable. We were aware that for the volunteers who participate in citizen science we would need a different, more proactive way of encouraging them to join – in previous summits they were the least represented group. So to encourage them to come, we created two special ticket categories (for the whole summit and for the citizen science café) and actively contacted different projects to encourage their volunteers to attend.
In the past, the first day was deliberately ‘single track’ to create a common vocabulary for all participants. This time, because of the perceived increase in the policy implications of citizen science (e.g. the creation of the Citizen Science Association and the European Citizen Science Association, or the activities of the Eye on Earth initiative), we decided to split part of the day into two sessions: one that focuses on the technology and another on policy and engagement. The aim was to attract people who might be less interested in the technology or the specific scientific domain and more in its implications, as well as to recognise that the citizen science community is growing to include people with different interests.
The second day signalled the importance of the citizens of citizen science through two elements in the programme: the citizen science panel (which happened to be all women) and the citizen science café as the closing reception. Scheduling the summit so that this day fell on a Friday was also important, as it allowed people to come to the event after work and meet
with other participants who are enthusiastic about citizen science. More generally, the day was submission-led and included workshops, opportunities for discussion and shorter presentations. Only one talk was organised by invitation: the opening talk, which brought everyone into one place so we could welcome new people and link back to the previous day. Also important was the provision of a central space with chairs and tables, used as the coffee and lunch area, to allow people to start or follow up discussions that had begun the day before. The day also included sponsored sessions (sponsors are important and need to be treated well!).
Finally, the third day was dedicated to the hackday. This was done so that people with technical skills or an interest in citizen science could come on a weekend day and help with the challenges (the tasks that were explored in the hackday). The posters for the challenges were on display from Friday to start the conversations about them. Saturday also included more short talks on a range of topics (mostly because we wanted to accept all the submissions), but we also made sure to leave space for an unconference session – a set of very short talks (5 minutes) by people who came to attend the event and decided that they also wanted to talk about their work. The final keynote was scheduled to keep people interested and to bring them together for the hackday presentations. This was based on a lesson from the Over the Air event.
The ideas for this plan came from all the people who planned the summit, through discussions facilitated via an open Skype channel in the last month before the summit, regular Google Hangouts in the three months before the summit and, of course, email, Google Docs and all the other collaborative tools that are now available.
So did the summit live up to these expectations?
Mostly ‘yes’. First of all, we did much better than in the previous summit in terms of representation and participation of the people who are actually involved in citizen science, and not only the scientists, coordinators and others who run citizen science projects. Catherine Jones’s post about the summit is exactly what we set out to achieve, so I was delighted to read it. At the same time, I think that we can do better, and in future events we need to consider bursaries or grants for volunteers to attend. Just dropping the event price to zero is good, but not enough.
Another strength of the summit is in bringing together the community of practice of those who are involved in citizen science or are in the early stages of developing a citizen science programme. The seahorse programme at UBC is an example of a project that benefited from the interactions last time, and I noticed similar knowledge and best-practice sharing this time. This will hopefully improve the projects that are run by the people who came to the summit. I’m pleased that we managed to bring people from across the domains in which citizen science is evolving, and that despite the growth in the number of participants, there was enough space for meaningful exchanges. The Citizen Science Café served this aspect of the summit as much as it brought citizens and scientists together.
It is interesting to notice how many people already knew each other from citizen science events; there is a need to avoid creating a clique that is less welcoming to newcomers – something for the new associations to think about!
While the policy session was excellent, I noticed that we failed to get significant attention from academics and practitioners who work on science policy and public engagement in science, or from people in policy-making areas. The number of participants from these areas was relatively small, and included people who are already ‘converted’ (e.g. Katherine Mathieson or Erinma Ochu), but my feeling was that there wasn’t attendance on the basis of ‘I need to know what this thing is because it’s important‘.
The same can be said about the commercial sector – we had some attendance from people who are involved in start-ups, and Esri showed their generosity by supporting the summit (disclaimer: they are also strong supporters of ExCiteS), but we weren’t in a situation of fending off sponsorship offers.
I find the last two points very interesting, as they signal to me the amount of ‘leg work’ that the new citizen science associations, the academics involved in this field and the practitioners still need to do to get the attention that the field deserves.
Another fascinating aspect that came out from the summit is a clear demonstration of the many facets of every single citizen science project – technology, education, science communication, specific scientific domain knowledge, usability and Human-Computer Interaction, community development, legal and philosophical aspects – all those were mentioned in different sessions. This calls for ongoing conversations and collaborations across the wider area of citizen science to ensure that we indeed share knowledge effectively.
The final reflection is on the size of the summit. The first summit had fewer than 100 participants, the second about 200, and this time over 300 participants visited the summit. Not everyone was there for the whole event – but it was clear that those who stayed for the whole event benefited the most. This can be expected at this size, and it feels like the maximum size at which the summit can still be effective – I know of several people that I follow but didn’t have a chance to have a proper conversation with (though admittedly, I was busy organising). Hopefully, the online resources from the summit can provide a way to go beyond those who physically attended the event.
23 February, 2014
After a day of ‘listening‘, and a day of ‘talking‘, the final day of the citizen cyberscience summit brought ‘doing‘ to the summit. Although the art installation on the second day of the summit would clearly fall into the ‘doing’ category, participation in the installation was mostly in the ‘contributory’ form: after summit participants handed over the citizen (cyber)science objects, the decisions on how to use them in the installation were left to the artist, Leni Diner Dothan.
The day started with setting up desks for each of the hackday challenges. The challenges ranged from Synthetic Biology to Citizen Science & Big Data. While those interested in helping the challenge proposers develop their ideas set to work, a set of shorter talks and discussions continued – including impromptu 5-minute talks in an unconference session. Despite the compactness of the session, it was clear that people were responding to themes that appeared in the two previous days of the summit. For example, Jeff Parsons addressed the common ‘how good is the data from citizen science?‘ question, which made an appearance in several talks, and pointed to his Nature paper arguing that ‘easier citizen science is better‘. Francois Grey started the conversation that he is developing with Creative Commons and the Open Knowledge Foundation about the relationship between Open Science and Citizen Science, asking if there should be an ‘Open Citizen Science’.
Geographical citizen science was at the heart of several talks that explored the links between mapping technologies, DIY sensors and citizen science. The summit benefited from the participation of several early-career researchers who were funded to visit UCL as part of the COST ENERGIC scientific network. The exchange of knowledge that is enabled not only through networks but also through the communities of practice in DIY electronics or VGI was clearly visible. One talk discussed using Public Laboratory technologies in schools in Germany, and another covered using those technologies in Jerusalem. Another example of such links was the collaboration between Chinese and UK-based students to build a new DIY microscope.
Personally, the re-appearance of my ‘levels of participation in citizen science‘ classification is both satisfying (someone found it useful!) and fascinating, as each use of it illustrated a different interpretation and understanding of it. The levels are fuzzy and open to interpretation, so these discussions help the process of understanding what should be included in each category, and how the different levels map onto a specific project or activity.
The final talk by Jeff Howe – who coined the term crowdsourcing – discussed the way new ideas emerge from allowing a large group of people to participate in solving problems, as this can open up a wider set of skills and expertise. He noted that in many cases the success of large collaborations comes from a ‘gift’ – creating a system or a service that provides something that people want, or that can help them to do what interests them. Or, as he phrased it, ‘ask not what your community can do for you, but what you can do for your community‘.
An example of some of the issues that Jeff covered was provided during the presentations from the hackday. As in the previous summit, we carefully measured the applause from the audience with a noise meter, to ascertain the activity that the participants in the summit liked the most. This time, it was the development of a bio-sensor that can be integrated into textiles. This challenge was led by Paula Nerlich, who is studying at the Edinburgh College of Art, showing that citizen science ideas can come from outside the traditional scientific disciplines (image by Cindy Regalado).
To get a better sense of the atmosphere, you can find plenty of interviews on the ‘Citizens of Science’ podcast board which explores the needs of the citizen science community.
Since we first began to organise the summit almost a year ago, I have had a lingering concern that the summit would not fulfill the expectations and the success of the previous one. Once the summit ended, I was more relaxed about this – I noticed many new connections being made, and new ideas discovered by participants. Now it is time to sit back and watch what will come out of these!
21 February, 2014
The second day of the summit (see my reflections on the first day) started with an unplanned move to the Darwin Lecture Theatre of UCL. This was appropriate, as the theatre is sited in a place where Charles Darwin used to live, and he is mentioned many times as a citizen scientist. Moreover, the unplanned move set the tone for a day which paid more attention to DIY science.
We started with a vision for the future of citizen science by Rick Bonney of the Cornell Lab of Ornithology, in which he highlighted how important it is to keep growing the field and to bring together different approaches to citizen science to save the world. This was followed by a panel that explored the experiences and wishes of citizen scientists themselves – from participants in Zooniverse to DIY electronics and environmental justice applications of citizen science (image from Daniel Lombrana Glez). The panel demonstrated the level of interest and commitment of people who are engaged in citizen science, and that it is taken seriously by the participants. It also gave a glimpse of the empowerment aspect of citizen science.
In my opening, I pressed the message that while the first day of the summit involved a lot of listening, the second day was about talking with one another and sharing ideas, in order to move to doing on the third day. In fact, this was not needed, and throughout the day many conversations were happening in workshops, in the main meeting area of the conference and during the coffee and tea breaks.
Another aspect that gave a different atmosphere to the day was the work of Leni Diner-Dothan. Leni is studying at the UCL Slade School and accepted a request to create an art installation during the summit. After collecting both operational and defunct items of citizen science and developing the concept, the work commenced during the day.
With the help of the technicians from my own department, she developed the ‘citizen cyberscience nightmare wall‘, which has pieces of citizen cyberscience embedded in concrete, like a reliquary. It is a thought-provoking and fascinating piece of art, and I hope to write more about it soon.
The citizen science café that closed the day opened up thematic conversations, and I encountered discussions between related projects for which the summit provided an opportunity.
Now, it’s time to move to the doing – let’s see what ideas will come tomorrow…
19 February, 2014
The Citizen Cyberscience Summit that will be running in London this week sparked the interest of the producers of the BBC World Service ‘Click’ programme, and it was my first experience of visiting BBC Broadcasting House – about a 15-minute walk from UCL.
Here is the clip from the programme that covers the discussion about the summit and Extreme Citizen Science.
More information is provided in the Citizens of Science podcast – in which the other organisers and I discuss and preview the summit. This is an opportunity to recommend the other podcasts in the series.
Following the two previous assertions, namely that:
‘you can be supported by a huge crowd for a very short time, or by few for a long time, but you can’t have a huge crowd all of the time (unless data collection is passive)’ (original post here)
‘All information sources are heterogeneous, but some are more honest about it than others’ (original post here)
The third assertion is about patterns of participation. It is one that I’ve mentioned before, and in some ways it is a corollary of the two assertions above.
‘When looking at crowdsourced information, always keep participation inequality in mind’
Because crowdsourced information, whether Volunteered Geographic Information or Citizen Science, is created through a socio-technical process, it is all too easy to forget the social side – especially when you are looking at the information without the metadata of who collected it and when. So when working with OpenStreetMap data, or viewing the distribution of bird species in eBird (below), even though the data source is expected to be heterogeneous, each observation is treated as similar to the others and assumed to be produced in a similar way.
Yet, data is not only heterogeneous in terms of consistency and coverage, it is also highly heterogeneous in terms of contribution. One of the most persistent findings from studies of various systems – for example Wikipedia, OpenStreetMap and even volunteer computing – is that there is a very distinctive heterogeneity in contribution. The phenomenon was termed ‘Participation Inequality‘ by Jakob Nielsen in 2006, and it is summarised succinctly in the diagram below (from the Visual Liberation blog): a very small number of contributors add most of the content, while most of the people who are involved in using the information will not contribute at all. Even when examining only those who actually contribute, in some projects over 70% contribute only once, with a tiny minority contributing most of the information.
Therefore, when looking at sources of information that were created through such a process, it is critical to remember the nature of contribution. This has far-reaching implications for quality, as quality depends on the expertise of the heavy contributors, on their spatial and temporal engagement, and even on their social interactions and practices (e.g. abrasive behaviour towards other participants).
Because of these factors, it is critical to remember the impact and implications of participation inequality on the analysis of the information. For some analyses it will have less impact, and for others a major one; in either case, it needs to be taken into account.
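The contribution pattern described above can be sketched with a toy simulation. This is an illustration only: the heavy-tailed (Pareto) distribution and its shape parameter are assumptions chosen for the sketch, not data from any real project.

```python
import random

# Toy model of participation inequality: each contributor's number of
# contributions is drawn from a heavy-tailed (Pareto) distribution,
# so a few heavy contributors dominate while most people contribute once.
# The shape parameter (1.2) is an illustrative assumption.
random.seed(1)
n_contributors = 10_000
contributions = sorted(
    (int(random.paretovariate(1.2)) for _ in range(n_contributors)),
    reverse=True,
)

total = sum(contributions)
# Share of all content produced by the top 1% of contributors
top_share = sum(contributions[: n_contributors // 100]) / total
# Fraction of contributors who contributed exactly once
one_timers = sum(1 for c in contributions if c == 1) / n_contributors

print(f"Top 1% of contributors produced {top_share:.0%} of the content")
print(f"Contributors with a single contribution: {one_timers:.0%}")
```

The exact percentages depend on the assumed distribution, but the qualitative shape – a small head producing most of the content and a long tail of one-off contributors – is the robust part, and it is what the assertion asks the analyst to keep in mind.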
Following the last post, which focused on an assertion about crowdsourced geographic information and citizen science, I continue with another observation. As was noted in the previous post, these can be treated as ‘laws’, as they seem to emerge as common patterns from multiple projects in different areas of activity – from citizen science to crowdsourced geographic information. The first assertion was about the relationship between the number of volunteers who can participate in an activity and the amount of time and effort that they are expected to contribute.
This time, I look at one aspect of data quality, which is about consistency and coverage. Here the following assertion applies:
‘All information sources are heterogeneous, but some are more honest about it than others’
What I mean by that is the on-going argument about authoritative versus crowdsourced information sources (Flanagin and Metzger 2008 frequently come up in this context), which was also at the root of the Wikipedia vs. Britannica debate, the mistrust of citizen science observations and the constant questioning of whether they can do ‘real research’.
There are many aspects to these concerns, so the assertion deals with comprehensiveness and consistency, which are used as reasons to dismiss crowdsourced information when comparing it to authoritative data. However, at a closer look we can see that all these information sources are fundamentally heterogeneous. Despite all the effort to define precise standards for data collection in authoritative data, heterogeneity creeps in because of budget and time limitations, decisions about what is worth collecting and how, and the clash between reality and the specifications. Here are two examples:
Take one of the Ordnance Survey Open Data sources – the maps present themselves as consistent and covering the whole country in an orderly way. However, dig into the details of the mapping and you discover that the Ordnance Survey uses different standards for mapping urban, rural and remote areas. Yet the derived products that are generalised and manipulated in various ways, such as Meridian or Vector Map District, do not provide a clear indication of which parts originated from which scale – so the heterogeneity of the source disappears in the final product.
The census is also heterogeneous, and it is a good case of specifications vs. reality. Not everyone fills in the forms, and even with the best efforts of enumerators it is impossible to collect all the data; therefore, statistical analysis and manipulation of the results are required to produce a well-reasoned assessment of the population. This is expected, even though it is not always understood.
Therefore, even the best information sources that we accept as authoritative are heterogeneous; as I’ve stated, they are just not completely honest about it. The ONS doesn’t release the full original set of data before all the manipulations, nor does it completely disclose all the assumptions that went into reaching the final values. The Ordnance Survey doesn’t tag every line with metadata about the date of collection and the scale.
Somewhat counter-intuitively, exactly because crowdsourced information is expected to be inconsistent, we approach it as such and ask questions about its fitness for use. So in that way, it is more honest about its inherent heterogeneity.
Importantly, the assertion should not be taken as dismissive of authoritative sources, nor as ignoring that the heterogeneity within crowdsourced information sources is likely to be much higher than in authoritative ones. Of course all the investment in making things consistent and the effort to achieve universal coverage is worth it, and it would be foolish and counterproductive to suggest that such sources of information can be replaced, as has been suggested for the census, or that it’s not worth investing in the Ordnance Survey to update the authoritative datasets.
Moreover, when commercial interests meet crowdsourced geographic information or citizen science, the ‘honesty’ disappears. For example, even though we know that Google Map Maker is now used in many parts of the world (see the figure), even in cases when access to vector data is provided by Google, you cannot find out who contributed, when and where. It is also presented as an authoritative source of information.
Despite the risk of misinterpretation, the assertion can be useful as a reminder that the differences between authoritative and crowdsourced information are not as big as it may seem.
Looking across the range of crowdsourced geographic information activities, some regular patterns are emerging, and it might be useful to start noticing them as a way to think about what is and isn’t possible to do in this area. Since I don’t like the concept of ‘laws’ – as in Tobler’s first law of geography, which is stated as ‘Everything is related to everything else, but near things are more related than distant things’ – I would call them assertions. There is also something nice about using the word ‘assertion’ in the context of crowdsourced geographic information, as it echoes Mike Goodchild’s differentiation between asserted and authoritative information. So not laws, just assertions, or even observations.
The first one is a rephrasing of a famous quote:
‘you can be supported by a huge crowd for a very short time, or by few for a long time, but you can’t have a huge crowd all of the time (unless data collection is passive)’
So the Christmas Bird Count can have tens of thousands of participants for a short time, while the number of people who operate weather observation stations will be much smaller. The same is true for OpenStreetMap – for crisis mapping, which is a short-term task, you can get many contributors, but for the regular updating of an area under usual conditions, there will be only a few.
The exception to the assertion is the case of passive data collection, where information is collected automatically through logging from a sensor – for example, the recording of GPS tracks to improve navigation information.
27 August, 2013
An interview by Prof Anthony Costello of UCL Institute of Global Health, discussing the growth in citizen science today.
8 July, 2013
The term ‘Citizen Science’ is clearly gaining more recognition and use. It now gets mentioned in radio and television broadcasts and social media channels, as well as at conferences and workshops. Some of the clearer signs of the growing attention include discussion of citizen science in policy-oriented conferences, such as UNESCO’s World Summit on the Information Society (WSIS+10) review meeting discussion papers (see page ), the Eye on Earth users conference (see the talks here) and the launch of the European Citizen Science Association at the recent EU Green Week conference.
Another aspect of the expanding world of citizen science is the emerging questions, from those who are involved in such projects or study them, about the efficacy of the term. As is very common with general terms, some reflections on the accuracy of the term are coming to the fore – so Rick Bonney and colleagues suggest using ‘Public Participation in Scientific Research‘ (significantly, Bonney was the first to use ‘Citizen Science’, in 1995); Francois Grey coined ‘Citizen Cyberscience’ to describe projects that are dependent on the Internet; recently Chris Lintott discussed some doubts about the term in the context of Zooniverse; and Katherine Mathieson asks if Citizen Science is just a passing fad. In our own group, there are also questions about the correct terminology, with Cindy Regalado’s suggestion to focus on ‘Publicly Initiated Scientific Research (PIScR)‘, and discussion of the meaning of ‘Extreme Citizen Science‘.
One way to explore what is going on is to consider the evolution of the ‘hype’ around citizen science through Gartner’s ‘Hype Cycle‘, which can be seen as a way to consider how technologies are adopted in a world of rapid communication and inflated expectations. Leaving aside Gartner’s own hype, the story that the model is trying to tell is that once a new approach (technology) emerges – because it has become possible, or because someone has reconfigured existing elements and claims that it’s a new thing (e.g. Web 2.0) – it goes through rapid growth in terms of attention and publicity. This goes on until it reaches the ‘peak of inflated expectations’, where the expectations from the technology are unrealistic (e.g. that it will revolutionise the way we use our fridges). This must be followed by a slump, as more and more failures come to light and the promises are not fulfilled. At this stage, the disillusionment is so deep that even the useful aspects of the technology are forgotten. However, if it passes this stage, then after the realisation of what is possible, the technology is integrated into everyday life and practices and is used productively.
So does the hype cycle apply to citizen science?
If we look at the Gartner cycle from last September, crowdsourcing is near the ‘peak of inflated expectations’, and some descriptions of citizen science as scientific crowdsourcing clearly match the same mindset.
There is growing evidence of academic researchers entering citizen science out of opportunism, without paying attention to the commitment and work that are required to carry out such projects. With some, it seems that they decided they could join in because someone around them knows how to make an app for smartphones, or a website that will work like Galaxy Zoo (failing to notice all the social aspects that Arfon Smith highlights in his talks). When you look around at the emerging projects, you can start guessing which will succeed or fail by looking at the expertise and approach of the people behind them.
Another cause of concern is the expectations that I noticed at the more policy-oriented events about the ability of citizen science to solve all sorts of issues – from raising awareness to behaviour change with limited professional involvement – or that it will reduce the resources needed for activities such as environmental monitoring, without an understanding that significant, sustained investment is required: a community coordinator, technical support and other aspects are needed here just as much. This concern is heightened by statements that promote citizen science as a mechanism to reduce the costs of research, creating a source of free labour, etc.
On the other hand, it can be argued that the hype cycle doesn’t apply to citizen science because of its history. Citizen science has existed for many years, as Caren Cooper describes in her blog posts. Therefore, conceptualising it as a new technology is wrong, as there are already mechanisms, practices and institutions to support it.
In addition, and unlike the technologies on the Gartner chart, the academic projects within which citizen science happens benefit from access to what is sometimes termed patient capital, without expectations of quick returns on investment. Even with the increasing expectations of research funding bodies for explanations of how the research will lead to an impact on wider society, they have no expectation that the impact will be immediate (5-10 years is usually fine), and funding comes in chunks that cover 3-5 years, which provides the breathing space to overcome the ‘trough of disillusionment’ that is likely to happen within the technology sector regarding crowdsourcing.
And yet, I would guess that citizen science will suffer some examples of disillusionment from badly designed and executed projects. To get these projects right you need a combination of domain knowledge in the specific scientific discipline, science communication to tell the story in an accessible way, technical ability to build mobile and web infrastructure, an understanding of user interaction and user experience to build engaging interfaces, community management ability to nurture and develop your communities – and we can add further skills to the list (e.g. if you want gamification elements, you need experts in games, and not to do it amateurishly). In short, it needs to be taken seriously, with careful consideration and design. This is not a call for gatekeepers, more a realisation that the successful projects and groups are stating similar things.
Which brings us back to the issue of the definition of citizen science and its terminology. I have been following terminology arguments in my own discipline for over 20 years. I have seen people arguing about a data storage format for GIS and whether it should be raster or vector (answer: it doesn’t matter). Or arguing whether GIS is a tool or a science. Or being unhappy with Geographic Information Science and resolutely calling it geoinformation, geoinformatics, etc. Even in the minute sub-discipline that deals with participation and computerised maps there are arguments about Public Participation GIS (PPGIS) versus Participatory GIS (PGIS). Most recently, we have been debating the right term for mass contribution of geographic information: volunteered geographic information (VGI), crowdsourced geographic information or user-generated geographic information.
It’s not that terminology and precision in definition are not useful – on the contrary. However, I’ve noticed that in most cases the more inclusive and, importantly, vague, broad-church definition won the day. Broad terminologies, especially when they are evocative (such as citizen science), are especially powerful. They convey a good message and are therefore useful. As long as we don’t try to force a canonical definition, and allow people to decide what they include in the term and to express clearly why what they are doing falls within citizen science, it should be fine. Some broad principles are useful and will help all those who are committed to working in this area to sail through the hype cycle safely.
17 May, 2013
The UCL Urban Laboratory is a cross-disciplinary initiative that links various research interests in urban issues, from infrastructure to the way cities are expressed in art, film and photography. The Urban Laboratory has just published its first Urban Pamphleteer, which aims to ‘confront key contemporary urban questions from diverse perspectives. Written in a direct and accessible tone, the intention of these pamphlets is to draw on the history of radical pamphleteering to stimulate debate and instigate change.’
My contribution to the first pamphleteer, which focuses on ‘Future & Smart Cities’, deals with the balance between technology companies, engineers and scientists on the one hand, and the values, needs and wishes of the wider society on the other. In particular, I suggest the potential of citizen science in opening up some of the black boxes of smart cities to wider societal control. Here are the opening and closing paragraphs of my text, titled Beyond quantification: we need a meaningful smart city:
‘When approaching the issue of Smart Cities, there is a need to discuss the underlying assumptions at the basis of Smart Cities and challenge the prevailing thought that only efficiency and productivity are the most important values. We need to ensure that human and environmental values are taken into account in the design and implementation of systems that will influence the way cities operate…
…Although these Citizen Science approaches can potentially develop new avenues for discussing alternatives to the efficiency and productivity logic of Smart Cities, we cannot absolve those with most resources and knowledge from responsibility. There is an urgent need to ensure that the development and use of the Smart Cities technologies that are created is open to democratic and societal control, and that they are not being developed only because the technologists and scientists think that they are possible.’
The pamphleteer is not too long – 32 pages – and includes many thought-provoking pieces from researchers in Geography, Environmental Engineering, Architecture, Computer Science and Art. It can be downloaded here.