Geographies of Social Enterprise – Call for papers

As part of a research project with UnLtd, the foundation for social entrepreneurs, I’m co-organising a session at the RGS-IBG Annual International Conference, 27-29 August 2008, on the geographical aspects of social enterprise research. The detailed call is:

Social enterprise and social entrepreneurship have grown in quantity and strength in the last decade in the UK. Positioned within the ‘Third Sector’, social enterprises are characterised by their business-like approach to social action and have grown in the UK under New Labour. The relevance of social enterprise to Geography has previously been bypassed by particular discourses that debate the political-economic and socio-economic nature of non-state, non-commercial organisations – namely volunteer or non-profit organisations. This work helps to define and map the landscape of the Third Sector but is yet to give adequate attention to organisations and individuals who use their entrepreneurial ideas to deliver social change while aiming to be financially sustainable.

There is a need for more social and cultural geographers to examine the nature and emergence of social enterprise/entrepreneurship in the UK. Whilst some work has explored the interrelationships between people, place and volunteering (Milligan, 2007), work on social enterprise/entrepreneurship in this field is scarce. Social entrepreneurs identify social need at the local, national and global scales; generate interest from a variety of social, cultural, economic and political spheres; and create tangible/intangible social impacts on individuals, communities, and cultures through their encounters with people, environment and place.

For social and cultural geography, social entrepreneurs not only present the opportunity to revive long-standing debates over agency, community, citizenship, space and place but also to make contributions to recent work on mobility, diasporic geographies, geographies of enchantment and especially to rethink the links between modes of economic activity and the creation of social goods.

This session aims to move current debates in geography, e.g. within geographies of volunteerism, forward by looking at individuals as drivers of social change from a new perspective. This is also pertinent given that social entrepreneurship/enterprise is fast becoming a major force of change in UK society. This session stems from a collaborative research project between UCL and a leading supporter of social entrepreneurs (UnLtd), and we want to create a forum for debate about the emergence of, and the contribution to be made by, geographies of social enterprise.

We invite proposals from geographers to present papers on:

  • Geographical patterns of social entrepreneurial activities
  • The role of Social Enterprise, Voluntarism and Charities in shaping places
  • The concepts of space within the third sector, and how its geometry changes as a result of social enterprise
  • The merits and demerits of mapping social impact
  • The relevance of non-spatial mapping to better understand social entrepreneurial activity.

If you are interested, please send expressions of interest to both m.haklay@ucl.ac.uk and LauraFry@unltd.org.uk

Deadline for title and abstracts (c. 200 words): 10 February 2008

This session is one of two planned sessions about Social Enterprise. The second is a closed session organised by Dr Sarah-Anne Munoz, which will focus on Social Enterprise, Social Theory and Geographies of Empowerment.

Indices of Deprivation 2007

Early in December, the new version of the Indices of Deprivation (also known as the Index of Multiple Deprivation or IMD) was released. The first IMD was published in 2000, with a new version in 2004 which has now been updated. Created by Oxford University’s Social Disadvantage Research Centre, the indices classify each Lower-Layer Super Output Area (LSOA) in England according to the level of deprivation in multiple domains. An LSOA is an areal unit that contains on average 1500 people – a neighbourhood unit more or less.

As this is a data set widely used in many of my research projects, it was useful to analyse it and see how it has changed in comparison to the previous version. There are some surprises and, if the indices really reflect changes in neighbourhoods, the implication is that it is difficult to escape deprivation at the bottom of the ladder.

The IMD is very useful and has significant political implications. Hundreds of academic articles are based on applications of the IMD, and far more significant is the role it plays in allocating resources to local authorities through various governmental programmes such as Sure Start, which assists children in their early years, or Decent Homes, which improves the quality of the social housing stock. Of special importance are the 20% and 10% deprivation cut-off points, as they are used widely in policy decisions. We use the IMD in the research with UnLtd to evaluate the locations of projects and awardees, and in the Environmental Inequalities project with London 21 to show communities where they are positioned on the national scale.

After seven years of use and acceptance at all levels of government in the UK (there are separate indices for Wales, Scotland and Northern Ireland), the creation of the new indices must have been a challenging task – a lot is at stake if a specific area moves up or down. The IMD is a league table of sorts, placing each of the LSOAs (and there are around 32,500 of them) in a position relative to the others. For each LSOA that is newly declared deprived, another one must move up the scale and out of the bottom 20%, which usually means fewer resources for that community. Therefore, it is interesting to analyse the changes in the 2007 edition in comparison to the 2004 one.

The Department for Communities and Local Government states that:

“The Index scores from 2004 cannot be compared with those from 2007. Though the two Indices are very similar, it is not valid to compare the scores between the two time points. An area’s score is affected by the scores of every other area; so it is impossible to tell whether a change in score is a real change in the level of deprivation in an area or whether it is due to the scores of other areas going up or down.” (see this document)

While this is true for each individual area, it is still valid to check the overall pattern of movement across the whole data set. To do that, each LSOA was coded with the percentile point in the IMD 2007 to which it belongs (each percentile point contains about 325 LSOAs) and compared to its percentile position in 2004. The gap represents the relative change in the position of the LSOA – a positive change means that it is now less deprived, while a negative change means that the place is now more deprived compared to 2004.

Within the span of three years, and given the differences in the calculation method, it is expected that specific LSOAs will shift their place – especially when the investment that was put into them is taken into account. For the sake of the discussion, let’s assume that a change of 5 percentile points is not too big – although it can be significant if your LSOA belonged to the 17th percentile in 2004 and now belongs to the 22nd. Thus, it is worth exploring where the LSOAs that moved more than 5 percentile points are. In IMD 2007, over 25% of LSOAs have shifted more than 5 percentile points, and some have moved over 20.
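For readers who want to replicate the coding step and the 5-point threshold, here is a minimal sketch in Python with pandas. The file and column names are hypothetical – the original analysis was actually done in Excel and Manifold GIS:

```python
import pandas as pd

# Hypothetical input: one row per LSOA with its overall IMD score
# for each year (columns: lsoa_code, score_2004, score_2007).
df = pd.read_csv("imd_2004_2007.csv")

# Rank each LSOA into percentiles, with 1 = most deprived. A higher IMD
# score means more deprivation, so rank the scores in descending order.
for year in ("2004", "2007"):
    ranks = df[f"score_{year}"].rank(ascending=False)
    df[f"pct_{year}"] = pd.qcut(ranks, 100, labels=False) + 1

# Positive gap = less deprived in 2007; negative = more deprived.
df["gap"] = df["pct_2007"] - df["pct_2004"]

share_moved = (df["gap"].abs() > 5).mean()
print(f"LSOAs that moved more than 5 percentile points: {share_moved:.1%}")
```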

The distribution of the LSOAs that moved is shown in the chart below. Notice that, although this might look like a normal distribution, the number of changes at the lowest percentiles is not equivalent to that at the top of the range. This might be because the indices are specifically designed to locate deprived areas, and therefore located them accurately in 2004, and the situation had not shifted by 2007. The problem is that this means that, between 2001-2 (on which IMD 2004 is based) and 2004-5 (on which IMD 2007 is based), not many places shifted out of deprivation, while the rest of the places happily shifted about. Is it possible that the IMD team was especially careful not to bump communities that were already included in the bottom 20%?

IMD 2007 Significant Change by Percentile
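Reproducing a chart like this from the gap values is straightforward – again a sketch only, reusing the hypothetical df from the snippet above:

```python
import matplotlib.pyplot as plt

# Histogram of percentile-point changes between IMD 2004 and IMD 2007,
# reusing the hypothetical `df` from the earlier sketch.
df["gap"].plot(kind="hist", bins=range(-40, 41, 2), edgecolor="black")
plt.xlabel("Change in percentile position (positive = less deprived in 2007)")
plt.ylabel("Number of LSOAs")
plt.title("IMD 2007 vs IMD 2004: distribution of change")
plt.show()
```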

Another way to look at the data is, of course, through mapping. The following map shows the LSOAs that experienced a significant change of over 5 percentile points. You can download an A2-size PDF in which it is possible to zoom in to a specific area to see the changes.

IMD 2007 Significant Change - Map
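A map along these lines could be reproduced today with geopandas – a sketch under assumed file and column names (the original map was made in Manifold GIS):

```python
import geopandas as gpd
import matplotlib.pyplot as plt

# Join the percentile gaps onto LSOA boundaries and map only the LSOAs
# that moved more than 5 percentile points (file and join key assumed).
lsoas = gpd.read_file("lsoa_boundaries.shp").merge(df, on="lsoa_code")
significant = lsoas[lsoas["gap"].abs() > 5]

# Diverging colours separate areas that became less deprived from those
# that became more deprived.
significant.plot(column="gap", cmap="RdBu", legend=True)
plt.title("LSOAs that moved more than 5 percentile points, 2004-2007")
plt.show()
```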

While most of the changes are not in the most deprived areas, it is fascinating to see the geographical pattern of change. For example, by zooming in to London, it is easy to see that Barnet, Brent and Harrow are some of the local authorities with the biggest change downward, while Camden and Westminster have seen significant change upward. As many of the changes are in the middle range, will they have policy implications?

A final point about this analysis is that it was fairly easy to run: the analysis was done in 4-5 hours, using an ageing laptop (a four-year-old IBM X31), Excel 2007 and Manifold GIS 8.0. While the cartography can be improved, the ability of modern GIS to do this type of work so quickly helps in focusing on the task, rather than spending the time waiting for the GIS to process the data…

How can we ensure that GI is a good career choice?

This entry is based on my article that was published in GIS Professional, December 2007 issue. Reproduced with permission from GIS Pro.

AGI’07 was the setting for a provocative debate on whether “GI is a bad career choice”. Speaking for the motion were GiSPro publisher Stephen Booth and AGI past chair Simon Doyle. Vigorously opposing were Gesche Schmid of Atkins and The Geoinformation Group’s Dr Seppe Cassettari.

The debate, which worryingly for the AGI was only marginally lost, sparked lively interest from the audience. The text below argues that the more important question is “How can we ensure that GI is a good career choice?”

This article is aimed mainly at professionals working in GIS at all levels.

It seems that there has never been a better time than the present to be a GI professional – demand for skills is high, pay is good and prospects are rosy. It was a delight for many of us to read the famous article in Nature in 2004, announcing increased demand for people with knowledge of, and the ability to operate, geospatial technologies. According to the US Department of Labor, GI is ‘one of the three most important emerging and evolving fields, along with nanotechnology and biotechnology. Job opportunities are growing and diversifying…’ What could be better?

Veterans of GIS can attest to the dramatic increase in awareness and knowledge of geospatial technologies. If you have been working in GIS for more than five years, then you belong to the generation which, when trying to explain what your job involved, would launch into a convoluted explanation, only to end with “oh, well, it’s complex”. The advance of satnav, geobrowsers such as Google Earth and the ubiquity of web mapping sites such as Multimap and Google Maps, together with the ‘mash-ups’ that build new applications on top of them, have made it much easier to explain what we do: the importance of maps, and how the use of geographical information can help in daily routines and in business. So, you conclude, GI is an excellent career choice with a fantastic future.

Is the picture quite so rosy? The rise of ‘neogeography’ is highlighting some risks, and there are certainly cautionary tales to be learned from our professional allies: land surveyors, photogrammetrists and cartographers. A significant reason for concern is that in the era of ‘neogeography’ many core geographical concepts are seen as unproblematic and not worth bothering about, as Dave Unwin noted in his 2006 AGI Educational Lecture (or a similar paper in Geoforum). There are some issues that matter greatly to GI professionals but which the vast majority of users don’t seem to care about. One example is the lack of metadata in applications like Google Earth, e.g. the ability to tell when the data was collected and the currency of each piece of information. Some users of these systems even think that they can log on and check if their car is still parked in front of their home!

Or consider the place of cartography. It seems that a large proportion of the new web mapping applications ignore many important cartographic principles – look at some of the current sites and you can spot missing legends, poor selection of colours for thematic mapping and other failings of properly composed maps. Yet, for many users and for too many applications, this problematic world – in which geography is useful but cartographic and geographical information science principles do not matter – is a very satisfying one, and many are happy to live in it and use geography in this way. In such a world, what will prevent your future employer from asking: why do I need a person who costs me so much, when we can hack something together easily with a web mapping API (Application Programming Interface)?
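To make that question concrete, consider how little is needed to put a point map on the web nowadays. Here is a sketch using the folium library – my choice for illustration, not something the original article mentions:

```python
import folium  # pip install folium

# A point map 'hacked together' in three lines: no legend, no thought given
# to colour, scale or currency of the data - exactly the kind of output
# that worries cartographers.
m = folium.Map(location=[51.5074, -0.1278], zoom_start=13)  # central London
folium.Marker([51.5194, -0.1270], popup="British Museum").add_to(m)
m.save("map.html")  # a complete, shareable web map
```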

Here is where we need to look at the lessons learnt by land surveyors, photogrammetrists and cartographers. Not so long ago, maybe 15 or 20 years ago – well within the span of a professional career – being a surveyor or a photogrammetrist seemed like an excellent career choice. For photogrammetrists, the increased use of digital aerial photography meant that there would be a need for their skills; for surveyors, the requirements of ever more complex civil engineering projects and the need to understand how to use GPS meant that they, too, should have a secure future. But look at what has happened to those professions today. The entry salary for a photogrammetrist is very low (that is, if you manage to find a job) and the same is true for land surveyors. This is reflected in the demand for academic courses.

Importantly, it is not the case that automation has eaten away all these jobs, though to some extent it has. Nor can it all be blamed on outsourcing and off-shoring. What happened is that more and more employers think that, because of automation, GPS, total stations and satellite imagery, they do not need the highly paid skills of the professional photogrammetrist or surveyor – they can just hire a technician and the machine will do all the calculations… Of course, many practising surveyors can tell tales of companies that discovered midway through a project that they actually needed the skills – but by then they didn’t have them. This attitude has led to widespread errors – but the overall trend doesn’t change.

Arguably, what happened with these professions is that they failed to convey the importance of the skills and knowledge that they bring to the marketplace. This risk applies to GIS professionals too. The following is taken from ‘Prospects’ – the UK graduate jobs site – which describes the job prospects of cartographers: “With relatively low salary levels and small numbers of job vacancies, this role is often seen as more of a vocation for people with a strong interest in maps and geographical information.” Will the GI profession follow the same pattern?

For too many employers, the justification for employing GIS professionals is that the software used to create maps is very complex, so having specialists who produce maps is warranted. But if it is possible to create maps with a simple API instead of buying expensive and difficult-to-maintain Internet mapping server software, or if it is enough to analyse the data by creating a point map in Google Earth – then why keep the expensive professional?

Therefore, we need to change the question from the passive ‘Is GI a good career choice?’ to the active ‘How can we ensure that GI is a good career choice?’ There is a clear need to move away from the conception that GIS is all about making maps. It is actually about analysing geographical information, and to do this properly you need a GI analyst – a professional who understands the underlying data structures, the ways in which the data can be manipulated, and how to visualise the output of the analysis in a meaningful and effective way.

As for yourself, dear reader, if you are a GIS professional, it is worth considering how you structure your career so it does indeed become fulfilling, enjoyable and long. There aren’t many jobs in the IT sector that offer a variety of tasks like GI does. Develop your skills through ongoing training – and while you are at it, why not become a Chartered Geographer? Read books that help in understanding how rapidly the world around you is changing – Thomas Friedman’s The World Is Flat (2007) is one of my favourites. Most importantly, start your own local campaign to explain – to your employer, if you are working for an organisation that doesn’t specialise in GIS, or to your clients, if you are working in a GI-centric organisation – how special spatial is, and in what ways your knowledge and skills contribute to the operation of the organisation. By collaborating and promoting the wonders of GI, we can ensure that GI is indeed an excellent career choice.

The formatted version of this article is available here (including Gesche Schmid’s comment, which reaches a similar conclusion).

The British Museum Test for public mapping websites

Back in 2005, when I worked with Artemis Skarlatidou on an evaluation of public mapping websites, we came up with a simple test to check how well these search sites perform: Can a tourist find a famous landmark easily?

The reasoning behind raising this question was that tourists are an obvious group of users of public mapping sites such as Multimap, MapQuest, Yahoo! Maps, Microsoft’s Virtual Earth or Google Maps. Market research information presented by Vincent Tao from Microsoft in a seminar a year ago confirmed this assumption.

During the usability evaluation, we gave the participants the instruction ‘Locate the following place on the map: British Museum, Great Russell Street, London, WC1B 3DG’. Not surprisingly, those participants who started with the postcode found the information quickly, but about a third typed ‘British Museum, London’. While our participants were London residents and were used to postcodes as a means of stating an address precisely, a more realistic expectation is that tourists would not use postcodes when searching for a landmark.

In the summer of 2005, when we ran the test, the new generation of public mapping websites (such as Google Maps and Microsoft Virtual Earth) performed especially badly. The most amusing result came from Google Maps, which pointed to Crewe as the location of the British Museum (!).
Google - British Museum in Crewe

The simplest usability test for a public mapping site that came out of this experiment is the ‘British Museum Test’: find the top 10 tourist attractions in a city or country and check if the search engine can find them. Here is how it works for London:

The official Visit London site suggests the following top attractions: Tate Modern, British Museum, National Gallery, Natural History Museum, the British Airways London Eye, Science Museum, the Victoria & Albert Museum (V&A Museum), the Tower of London, St Paul’s Cathedral and the National Portrait Gallery.

Now, we can run the test by typing the name of each attraction into the search box of public mapping sites. As examples, I’ve used Yahoo! Maps, Google Maps, Microsoft’s Virtual Earth and Multimap. On all these sites I imitated a potential tourist – I accessed the international site (e.g. maps.google.com), panned the map to the UK and then typed the query. The results are:

| Attraction (search term used) | Yahoo! | Google | Microsoft | Multimap |
| --- | --- | --- | --- | --- |
| Tate Modern | Found and zoomed | Found and zoomed | Found and zoomed | Found and zoomed |
| British Museum | Found and zoomed | Found as part of a list | Found and zoomed | Found and zoomed |
| National Gallery | Found and zoomed | Found as part of a list | Found and zoomed | Found as part of a list (twice!) |
| Natural History Museum | Failed | Found as part of a list | Found and zoomed | Found and zoomed |
| British Airways London Eye (commonly abbreviated to London Eye) | Failed on the full name, found and zoomed on the common abbreviation | Found as part of a list, failed on the common abbreviation | Failed on the full name, found and zoomed on the common abbreviation | Failed on the full name, found and zoomed on the common abbreviation |
| Science Museum | Found and zoomed | Found as part of a list | Found and zoomed | Found and zoomed |
| The Victoria & Albert Museum (commonly abbreviated to V&A Museum) | Found and zoomed on both | Found and zoomed, but failed on the common abbreviation | Found and zoomed, but failed on the common abbreviation | Found and zoomed, but the common abbreviation zoomed on Slough (!) |
| The Tower of London | Found and zoomed | Found and zoomed | Found and zoomed (failed if ‘the’ included in the search) | Found and zoomed |
| St Paul’s Cathedral | Found and zoomed | Found and zoomed | Found as part of a list | Failed |
| National Portrait Gallery | Failed (zoomed to the one in Washington DC) | Found and zoomed | Found and zoomed | Found and zoomed |

Notice that none of these search engines managed to pass the test on all the top ten attractions, which are visited by millions every year. There is a good reason for this – geographical search is not a trivial matter and the semantics of place names can be quite tricky (for example, if you look at a map of Ireland and the UK, there are two National Galleries).

On the plus side, I can note that search engines are improving. At the end of 2005 and for most of 2006 the failure rate was much higher. I used the image above in several presentations and have run the ‘British Museum Test’ several times since then, with improved results in every run.

The natural caveat is that I don’t have access to the server logs of the search engines and, therefore, can’t say that the test really reflects patterns of use. It would be very interesting to have a Hot Trends equivalent for Google Maps, or for the other search engines. Even without access to the search logs, though, the test reveals certain aspects of the way information is searched for and presented, and is useful in understanding how good the search engines are at running geographical queries.

By a simple variation of the test you can see how tolerant an engine is of spelling errors, and which one you should use when guests visit your city and you’d like to help them find their way around. It is also an indication of the general ability of the search engine to find places. You can run your own test on your city fairly quickly – it will be interesting to compare the results!
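If you want to automate the chore, something along these lines would do. The sketch below uses Python with geopy’s Nominatim geocoder – an assumption on my part, standing in for whichever mapping site you want to test:

```python
import time
from geopy.geocoders import Nominatim  # pip install geopy

# The 'British Museum Test', automated: geocode each attraction name as a
# tourist would type it and report whether the service returns a hit.
attractions = [
    "Tate Modern", "British Museum", "National Gallery",
    "Natural History Museum", "London Eye", "Science Museum",
    "V&A Museum", "Tower of London", "St Paul's Cathedral",
    "National Portrait Gallery",
]

geocoder = Nominatim(user_agent="british-museum-test")

for name in attractions:
    result = geocoder.geocode(f"{name}, London, UK")
    if result is None:
        print(f"FAILED: {name}")
    else:
        print(f"Found:  {name} -> {result.address}")
    time.sleep(1)  # be polite to the free geocoding service
```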

For me, Microsoft Virtual Earth is, today, the best one for tourists, though it should improve the handling of spelling errors…

Web 2.0 notion of democratisation and Participatory GIS

An interesting issue that emerges from The Cult of the Amateur concerns Participatory GIS, or PPGIS. As Chris Dunn notes in her recent paper in Progress in Human Geography, the Participatory GIS literature makes many references to the ‘democratisation’ of GIS (together with Renee Sieber’s 2006 review, these two papers are an excellent introduction to PPGIS).

According to the OED, democratisation is ‘the action of rendering, or process of becoming, democratic’, and democracy is defined as ‘Government by the people; that form of government in which the sovereign power resides in the people as a whole, and is exercised either directly by them (as in the small republics of antiquity) or by officers elected by them. In modern use often more vaguely denoting a social state in which all have equal rights, without hereditary or arbitrary differences of rank or privilege.’ [emphasis added].
This final sense is the one mostly invoked when advocates of Web 2.0 use the term, and it seems that in this notion of democratisation the erasure of hereditary or arbitrary differences is extended to expertise and to hierarchies in the media and in knowledge production. In some areas, Web 2.0 actively erodes the differentiation between experts and amateurs, using mechanisms such as anonymous contributions that hide from the reader any information about who is contributing, what their authority is and why we should listen to them.
As Keen notes, doing away with social structures and equating amateurs with experts is actually not a good thing in the long run.
This brings us back to Participatory GIS. The PGIS literature discusses the need to ‘level the field’ and deal with power structures and inequalities in involvement in decision making – and this is exactly what we are trying to achieve in the Mapping Change for Sustainable Communities project. We also know very well from the literature that individuals and groups invest time and effort in understanding even complex issues and, as a result, can become quite expert. For example, the work of Maarten Wolsink on NIMBYs shows that this very local focus is not so parochial after all.
I completely agree with the way Dunn puts it (pp. 627-8):

‘Rather than the ‘democratization of GIS’ through th[e] route [of popularization], it would seem that technologizing of deliberative democracy through Participatory GIS currently offers a more effective path towards individual and community empowerment – an analytical as opposed to largely visual process; an interventionist approach which actively rather than passively seeks citizen involvement; and a community-based as opposed to individualist ethos.’

Yet, what I’m taking from Keen is that we also need to rethink the role of the expert within Participatory GIS – at the end of the day, we are not suggesting we do away with planning departments or environmental experts.
I don’t recall seeing much about how to define the role of experts and how to integrate hierarchies of knowledge in Participatory GIS processes – potentially an interesting research topic?

Democratisation in Web 2.0 and the participation inequality

Continuing to reflect on Keen’s The Cult of the Amateur, I can’t fail to notice how Web 2.0 influences our daily lives – from the way we implement projects, to the role of experts and non-experts in the generation of knowledge. Some of the promises of Web 2.0 are problematic – especially the claim for ‘democratisation’.

Although Keen doesn’t discuss this point, Jakob Nielsen’s analysis of ‘Participation Inequality on the Web’ is pertinent here. As Nielsen notes, on Wikipedia 0.003% of users contribute two thirds of the content, a further 0.2% contribute something, and 99.8% just use the information. Blogs are supposed to follow a 95-5-0.1 rule (95% just read, 5% post infrequently, 0.1% post regularly). In blogs, this posting inequality is compounded by readership inequalities on the Web (power laws influence this domain too – the top blogs are read by far more people).

This aspect of access and influence means that the use of the word ‘democratisation’ is, to quite an extent, a misnomer. If anything, this is a weird laissez-faire democracy in which a few plutocrats rule – not the type of democracy I’d like to live in.

The Cult of the Amateur – worth reading

I have just finished reading Andrew Keen’s The Cult of the Amateur, which, together with Paulina Borsook’s Cyberselfish, provides quite a good antidote to the overexcitement of The Long Tail, Wikinomics and a whole range of publications about Web 2.0 that marvel at the ‘democratising’ capacity of technology. Even if Keen’s and Borsook’s books are seen as dystopian (and in my opinion they are not), I think that, through their popularity, these critical analyses of current online culture are very valuable in encouraging reflection on how technology influences society.

The need for critical reflection on technology and society stems from the fact that most of society seems to accept the ‘common-sense’ perspective that technology is a neutral, ‘value-free’ human activity (values here meaning guiding principles in life) – that it can be used for good ends or bad ones, but does not encapsulate any values in itself.

In contrast, I personally prefer Andrew Feenberg’s analysis in Questioning Technology and Transforming Technology, where he suggests that a more complete attitude towards technology must accept that technology encapsulates certain values, and that these values should be taken into account when we evaluate the impact of new technologies on our lives.

In Feenberg’s terms, we should not separate means from ends, and should understand how certain cultural values influence technological projects and end up integrated into them. For example, Wikipedia’s decision to ‘level the playing field’, so that experts have no more authority in editing content than other contributors, should be seen as an important value judgement, suggesting that expertise is not important or significant, or that experts cannot be trusted. Such a point of view does have an impact on a tool that is widely used, and therefore influences society.