Opening geodata is an interesting issue for the INSPIRE directive. INSPIRE was set up before the Government 2.0 hype grew and the pressure to open up data became apparent, so it was not explicitly designed with these aspects in mind. Therefore, the way in which the organisations that are implementing INSPIRE deal with the provision of open and linked data is bound to bring up interesting challenges.

Dealing with open and linked data was the topic that I followed on the second day of the INSPIRE 2014 conference. The notes below are my interpretation of some of the talks.

Tina Svan Colding discussed the Danish attempt to estimate the value (mostly economic) of open geographic data. The study was done in collaboration with Deloitte, and they started with a theory of change – expectations that they would see increased demand from existing customers and from new ones. The next assumption was that there would be new products, new companies and lower prices, and that this would lead to efficiency and better decision making across the public and private sectors, but also increase transparency to citizens. In short, trying to capture the monetary value with a bit on the side. They used statistics and interviews with key people in the public and private sectors, and followed these with a wider survey – all with existing users of the data. The number of users of their data increased from 800 to over 10,000 within a year. The Danish system requires users to register to get the data, so these are bulk numbers, but it also meant they could contact users to ask further questions. Among the new users, many are citizens (66%) and NGOs (3%). A further 6% are in the public sector, which in principle had access in the past, but the improved accessibility of the data made it usable to new people in this sector. In the private sector, construction, utilities and many other companies are using the data. The environmental bodies are aiming to use the data in new ways to make environmental consultation more engaging to their audience (is this another Deficit Model assumption – that people don't engage because it's difficult to access data?). Issues that people experienced include accessibility for users who don't know that they need to use GIS and other datasets. They also identified requests for further data releases. In the public sector, 80% identified potential for savings with the data (though that is the type of expectation that they live within!).

Roope Tervo, from the Finnish Meteorological Institute, talked about the implementation of their open data portal. Their methodology was very much with users in mind, and it is a nice example of a user-centred data application. They hold a lot of data – from meteorological observations to air quality data (of course, it all depends on the role of the institute). They chose to use WFS as the download service, with GML as the data format and coverage data in meteorological formats (e.g. GRIB). He showed that the selection of data models (all of which can be compatible with the legislation) can have very different outcomes in file size and in the complexity of parsing the information. Nice to see that they considered user needs – though not formally. They created an open source JavaScript library that makes it easy to use the data – going beyond just releasing the data to supporting how it is used. They issue API keys based on registration, and they had to limit the number of requests per day, and the same for the view service. After a year, they have 5,000 users and 100,000 data downloads per day, and the numbers are increasing – though slowly. They are considering how to help clients deal with the complex data models.
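
To make concrete what consuming such a service involves, here is a minimal sketch in Python of the request-and-parse cycle that a WFS download service with GML output implies. The endpoint URL, stored-query name and API key handling are all invented for illustration – the real FMI service (and its helper JavaScript library) differs in the details.

```python
import requests
import xml.etree.ElementTree as ET

# Hypothetical endpoint, stored query and API key - for illustration only.
WFS_URL = "https://open-data.example.fi/wfs"
API_KEY = "your-registered-api-key"

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "storedquery_id": "observations::simple",  # assumed stored-query name
    "apikey": API_KEY,                          # assumed key parameter
}

response = requests.get(WFS_URL, params=params, timeout=30)
response.raise_for_status()

# The payload is GML (an XML dialect), so a generic XML parser can walk it,
# although real GML application schemas need namespace-aware handling.
root = ET.fromstring(response.content)
print(f"Received {len(list(root))} top-level GML elements")
```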

Panagiotis Tziachris explored the clash between the 'heavy duty' and complex INSPIRE standards and the lightweight approaches that are common in open data portals (I think he meant those in the commercial sector that allow some reuse of data). This is a project of 13 Mediterranean regions in Spain, Italy, Slovenia, Montenegro, Greece, Cyprus and Malta. The HOMER project (website http://homerproject.eu/) used different mechanisms, including hackathons, to share knowledge and experience between more experienced players and those that are new to the area. They found them to be a good way to share practical knowledge between partners. This is an interesting use of a purposeful hackathon within a known group of people in a project, and I think that it can be useful in other cases. Interestingly, on the legal side, they had to go beyond the usual documents that are provided in an EU consortium: in order to allow partners to share information they created a memorandum of understanding, as this was needed to deal with IP and similar issues. Open data practices were also used – such as the CKAN API, which is a common one for open data websites (see the sketch below). They noticed a separation between central administration and local or regional administration – the competency of the more local organisations (municipality or region) is sometimes limited because the knowledge is elsewhere (in central government), or they are in different stages of implementation, and disagreements on releasing the data can arise. Another issue is that open data is sometimes provided through regional portals while a different organisation at the national level (the environment ministry or the cadastre body) is responsible for INSPIRE. The lack of capabilities at different levels of government adds to the challenges of setting up open data systems. Sometimes open data legislation is only about the final stage of the process and not about how to get there, while INSPIRE is all about the preparation and not about the release of the data – this also creates a mismatch.
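
To illustrate the 'lightweight' side of that clash, the sketch below (Python, against a hypothetical CKAN portal address) shows how little code is needed to search a CKAN catalogue through its JSON action API – which is part of why open data developers perceive INSPIRE discovery services as heavy by comparison.

```python
import requests

# Hypothetical portal address - substitute the CKAN portal being queried.
CKAN_BASE = "https://opendata.example.org"

# CKAN exposes catalogue search as a plain HTTP call returning JSON.
resp = requests.get(
    f"{CKAN_BASE}/api/3/action/package_search",
    params={"q": "land cover", "rows": 5},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["result"]

for dataset in result["results"]:
    print(dataset["title"])
    for res in dataset.get("resources", []):
        print("   ", res.get("format"), res.get("url"))
```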

Adam Iwaniak discussed how 'over-engineering' makes the INSPIRE directive inoperable or irrelevant to users, on the basis of his experience in Poland. He asked 'what are the user needs?' and demonstrated the question by pointing out that after half a term of teaching students about the importance of metadata, when it came to actively searching for metadata in an assignment, the students didn't use any of the specialist portals but just Google. Based on this and similar experiences, he suggested the creation of a thesaurus that describes keywords and features in the products, so that it allows searching according to user needs. Of course, the implementation is more complex, and he therefore suggests an approach that works within the semantic web and uses RDF definitions, making the data searchable and indexable by search engines so it can be found. The core message was to adapt the delivery of information to the way the user is most likely to search for it – metadata is relevant when the producer makes sure that a search in Google finds it.
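
As a rough sketch of what such RDF descriptions might look like, the Python/rdflib snippet below attaches thesaurus-style keywords to a dataset record. The URI, title and keyword list are invented; a real implementation would follow an agreed vocabulary such as DCAT and publish the result where crawlers can find it.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)

# Hypothetical dataset URI - in practice this would resolve to a landing
# page that search engines can crawl and index.
dataset = URIRef("http://example.org/datasets/land-parcels")

g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Land parcels", lang="en")))

# Keywords drawn from a shared thesaurus make the record findable by the
# terms users actually type, not only by formal INSPIRE theme names.
for kw in ["cadastre", "parcels", "property boundaries"]:
    g.add((dataset, DCAT.keyword, Literal(kw, lang="en")))

print(g.serialize(format="turtle"))
```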

Jesus Estrada Vilegas from the SmartOpenData project (http://www.smartopendata.eu/) discussed the implementation of some ideas that can work within the INSPIRE context while providing open data. In particular, he discussed a Spanish and Portuguese data sharing pilot. Within the project, they are providing access to the data by harmonising it and then making it available as linked data. Not all the data is open, and the focus of their pilot is on agroforestry land management. They are testing delivery of the data both in INSPIRE-compliant formats and in the internal organisational format, to see which is more efficient and useful. INSPIRE is a good point from which to start developing linked data, but there is also a need to compare it to other ways of linking the data.

Massimo Zotti talked about linked open data from earth observations in the context of business activities, since he works in a company that provides software for data portals. He explored the business model of open data, INSPIRE and the Copernicus programme. Data that comes from earth observation can be turned into information – for example, identifying the parts of the soil that get sealed and don't allow water to be absorbed, or information about forest fires, floods and so on. These are the bits of useful information that are needed for decision making. Once the information exists, it is possible to identify increases in land use or other aspects that can inform policy. However, we need to notice that dealing with open data means that a lot of work is put into bringing datasets together. The standardisation of data transfer and the development of approaches that help with machine-to-machine analysis are important for this aim. By fusing datasets, they become more useful and relevant to the knowledge production process. A dashboard approach to displaying the information and the processing can help end users access the linked data 'cloud'. Standardisation of data is very important to facilitate such automatic analysis, and having standard ontologies is also necessary. From my point of view, this is not so much a business model as a typical account of operations in the earth observation area, where a lot of energy is spent on justifying that the data can be useful and important to decision making – but without quantifying the effort that is required to go through the process, or the speed at which results can be achieved (will the answer come in time for the decision?). A member of the audience also raised the point that the assumption that machine-to-machine automatic models will produce valuable information all by themselves is questionable.

Maria Jose Vale talked about the Portuguese experience in delivering open data. The organisation that she works in deals with cadastre and land use information. She also discussed activities of the SmartOpenData project. She described the principles of open data that they considered: data must be complete, primary, timely, accessible and processable; data formats must be well known; there should be permanence; and usage costs should be addressed properly. For good governance you need to know the quality of the data and the reliability of delivery over time, so having automatic ways for the data to propagate to users is within these principles. The benefits of open data that she identified are mostly technical, but also include economic value (which is mentioned many times – but you need evidence similar to the Danish case to prove it!). The issues or challenges of open data include how to deal with fuzzy data when releasing it (my view: tell people that it needs cleaning); safety, as there are both national and personal issues; financial sustainability for the producers of the data; rates of updates; and addressing user and government needs properly. In a case study that she described, they looked at land use and land cover changes to assess changes in river use in a river watershed. They needed about 15 datasets for the analysis, and used information from CORINE land cover from different years. For example, they have seen forest change to woodland because of fire, which also influences water quality. Data interoperability and linking data allow the integrated modelling of the evolution of the watershed.

Francisco Lopez-Pelicer covered the Spanish experience and the PlanetData project (http://www.planet-data.eu/), which looks at large-scale public data management – specifically, a pilot on VGI and linked data, with a background in SDI and INSPIRE. There is big potential, but many GI producers don't engage with linked data yet. The issue is legacy GIS approaches such as WMS and WFS, which are standards endorsed in INSPIRE but which do not necessarily fit into a linked data framework. In the work that he was involved in, they try to address a complex GI problem with linked data. To do that, they convert a WMS into a linked data server by adding URIs and POST/PUT/DELETE resources. The semantic client sees this as a linked data server even though it can be compliant with other standards. To try it, they use the open national map as the authoritative source and OpenStreetMap as the VGI source, and release them as linked data. They are exploring how to convert a large authoritative GI dataset into linked data and also link it to other sources. They are also using it as an experiment in crowdsourcing platform development – creating a tool that helps to assess the quality of each dataset. The aim is to run quality experiments and measure the data quality trade-offs associated with the use of authoritative or crowdsourced information. Their service can behave as both a WMS and a 'Linked Map Server'. LinkedMap, which is the name of this service, provides the ability to edit the data and explore OpenStreetMap and the government data – they aim to run the experiment in the summer, and it can be found at http://linkedmap.unizar.es/. The reason for choosing WMS as a delivery standard is that a previous crawl over the web showed that WMS is the most widely available service, so it is assumed to be relevant to users, or at least one that most users can work with.
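
The idea of adding URIs and POST/PUT/DELETE resources on top of a map service can be sketched with a toy example. The Python/Flask code below is entirely hypothetical – it is not the LinkedMap implementation – and only shows the basic REST pattern described in the talk: each feature gets its own dereferenceable identifier that accepts read and write verbs.

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# In-memory stand-in for the features a WMS/WFS backend would normally serve.
features = {
    "road-101": {"name": "Example Road", "source": "authoritative"},
}

# Each feature gets its own URI supporting read and write verbs, so a
# semantic client can treat the service as a (very small) linked data server.
@app.route("/feature/<fid>", methods=["GET", "PUT", "DELETE"])
def feature(fid):
    if request.method == "GET":
        if fid not in features:
            abort(404)
        return jsonify(features[fid])
    if request.method == "PUT":
        features[fid] = request.get_json()
        return jsonify(features[fid])
    features.pop(fid, None)  # DELETE
    return "", 204

if __name__ == "__main__":
    app.run()
```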

Paul van Genuchten talked about the GeoCat experience in a range of projects, including support to Environment Canada and other activities. INSPIRE meeting open data can be a clash of cultures, and he highlighted neogeography as the term he uses to describe the open data culture (going back to the neogeo and paleogeo debate, which I thought was over and done – but clearly it is relevant in this context). INSPIRE recommends publishing data openly, and this is important to ensure that it gets a big potential audience, as well as the 'innovation energy' that exists among the 'neogeo'/'open data' people. The common expectations within this culture are that APIs are easy to use, with clean interfaces and so on, but under the hood there are similarities in the way things work. There is a perceived complexity, among the community of open data users, of INSPIRE datasets. Many of the open data people are focused on and interested in OpenStreetMap, look at companies such as MapBox as role models, and use formats such as GeoJSON and TopoJSON. Data is versioned and managed in a git-like process. Web Mercator is the most common projection, and there are now not only raster tiles but also vector tiles. These characteristics of the audience can be used by data providers to help people use their data, but there are also intermediaries that take the data and convert it to more 'digestible' forms. He noted CitySDK by Waag.org, which grabs data from INSPIRE and then delivers it to users in ways that suit open data practices. He demonstrated the case of Environment Canada, where they created a set of files that are suitable for both human and machine use.
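
Since Web Mercator comes up as the lingua franca of this community, it is worth showing the standard spherical conversion from WGS84 longitude/latitude to Web Mercator metres; the short Python function below implements it (the example coordinates are merely illustrative).

```python
import math

EARTH_RADIUS = 6378137.0  # metres, WGS84 semi-major axis used by Web Mercator


def to_web_mercator(lon_deg, lat_deg):
    """Convert WGS84 longitude/latitude (degrees) to Web Mercator metres."""
    x = math.radians(lon_deg) * EARTH_RADIUS
    y = math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2)) * EARTH_RADIUS
    return x, y


# Example: roughly central London
print(to_web_mercator(-0.1276, 51.5072))
```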

Ed Parsons finished the set of talks of the day (talk link goo.gl/9uOy5N) with a talk about a multi-channel approach to maximising the benefits of INSPIRE. He highlighted that it's not about linked data, although linked data is part of the solution to making data accessible. Accessibility always wins online – and people make compromises (e.g. sound quality in CDs versus Spotify). Google Earth can be seen as a new channel that makes things accessible, and while the back-end technology was not new, the ease of access made a big difference. Denmark's use of Minecraft to release GI is an example of another channel. Notice the change over the past 10 years in video delivery, for example: in the early days, video delivery was complex, requiring many steps and expensive software and infrastructure, and this is somewhat comparable to current practice within geographic information. Making things accessible through channels like YouTube, and the whole ecosystem around it, changed the way video is used, uploaded and consumed, and of course changes in devices (e.g. recording on the phone) made it even easier. Focusing on maps themselves, people might want different things that are maps, and not only the latest searchable map that Google provides – e.g. the administrative map of medieval Denmark, maps of floods, or something else that is specific and not part of general web mapping. When people are searching, you want to give them maps for some queries and images for others (as in searching for 'Yosemite trails' vs. 'Yosemite'). There are plenty of maps that people find useful, and for that Google is now promoting Google Maps Gallery – with tools to upload, manage and display maps. It is also important to consider that mapping information needs to be accessible to people who are using mobile devices. The web infrastructure of Google (or ArcGIS Online) provides the scalability to deal with many users and the ability to deliver to different platforms such as mobile. The gallery allows people to brand their maps. Google wants to identify authoritative data that comes from official bodies, and then to have additional information that is displayed differently. But separating facts and authoritative information from commentary is difficult, and that is where semantics play an important role. He also noted that Google Maps Engine is just maps – a visual representation without an aim to provide GIS analysis tools.

Once upon a time, Streetmap.co.uk was one of the most popular web mapping sites in the UK, competing successfully with its biggest rival at the time, Multimap. It was ranked second in The Daily Telegraph's list of leading mapping sites in October 2000 and described as: 'Must be one of the most useful services on the web – and it's completely free. Zoom in on any UK area by entering a place name, postcode, Ordnance Survey grid reference or telephone code.' It's still running, and because of its legacy it is around the 1,250th most popular website in the UK (though 4 years ago it was among the top 350).

Streetmap 2014

So far, nothing is especially noteworthy – a website popular a decade ago was replaced by a newer one, Google Maps, which provides better search results, more information, and is the de facto standard for web mapping. Moreover, already in 2006 Artemis Skarlatidou demonstrated that, of the UK web mapping crop, Streetmap scored lowest on usability, with only MapQuest – which largely ignored the UK – being worse.

However, recently, while running a practical session introducing User-Centred Design principles to our MSc in GIS students, I noticed an interesting implication of the changes in the web mapping environment – Streetmap has stopped being usable simply because it didn't bother to update its interaction. By doing nothing while the environment around it changed, it became unusable, with users failing to perform even the most basic of tasks.

The students explored the mapping offerings from Google, Bing, Here and Streetmap. It was fairly obvious that across this cohort (early to mid 20s), Google Maps was the default against which other systems were compared. It was not surprising to find impressions that Streetmap is 'very old fashioned' or 'archaic'. More interesting was noticing people getting frustrated that the 'natural' interaction of zooming in and out using the mouse wheel just didn't work, or failing to find the zoom in and out buttons. At some point in the past 10 years, people internalised the interaction mode of using the mouse and stopped using the zoom buttons in the application, which explains the design decision in the new Google Maps interface to eliminate the once-dominant zoom slider from the left side of the map. Of course, the Streetmap interface is also not responsive to touch screen interactions, which are likewise learned across applications.

I experienced a similar, and somewhat amusing, incident during the registration process at SXSW Eco, when I handed over my obviously old laptop at the registration desk to provide some details, and the woman behind the desk tried to 'pinch' the screen in an attempt to zoom in. Considering that she was likely interacting with tablets for most of the day (it was, after all, SXSW), this was not surprising. Interactions are learned and internalised, and we expect to experience them across devices and systems.

So what's to learn? While this is another example of 'Jakob's Law of Internet User Experience', which states that 'users spend most of their time on other sites', it is very relevant to many websites that use web mapping APIs to present information – from our own communitymaps.org.uk to the Environment Agency's What's in Your Backyard. In all these cases, it is critical to pay attention to the basic map exploration interactions (pan, zoom, search) and make sure that they match common practices across the web. Otherwise, you might end up like Streetmap.

Looking across the range of crowdsourced geographic information activities, some regular patterns are emerging, and it might be useful to start noticing them as a way to think about what is and isn't possible to do in this area. Since I don't like the concept of 'laws' – as in Tobler's first law of geography, which is stated as 'Everything is related to everything else, but near things are more related than distant things' – I would call them assertions. There is also something nice about using the word 'assertion' in the context of crowdsourced geographic information, as it echoes Mike Goodchild's differentiation between asserted and authoritative information. So not laws, just assertions or even observations.

The first one is a rephrasing of a famous quote:

'You can be supported by a huge crowd for a very short time, or by a few for a long time, but you can't have a huge crowd all of the time (unless data collection is passive).'

So the Christmas Bird Count can have tens of thousands of participants for a short time, while the number of people who operate weather observation stations will be much smaller. The same is true for OpenStreetMap – for crisis mapping, which is a short-term task, you can get many contributors, but for the regular updating of an area under usual conditions, there will be only a few.

The exception to the assertion is the case of passive data collection, where information is collected automatically through the logging of information from a sensor – for example, the recording of GPS tracks to improve navigation information.

OSM Haiyan

CHI 2013 and the GeoHCI workshop highlighted to me the importance of understanding media for maps. During CHI, the 'Paper Tab' demonstration used E Ink displays to demonstrate multiple-display interaction. I found the interactions non-intuitive and not mapping very well to what you would expect to do with paper, so a source of confusion – especially when the displays will eventually be mixed with papers on a desk. Anyhow, it is an interesting exploration.

E Ink displays are very interesting in terms of their potential use for mapping. The image below shows one of the early prototypes of maps that are designed specifically for the Kindle or, more accurately, for the E Ink technology that is at the heart of the Kindle. From the point of view of the usability of geographical information technologies, E Ink is especially interesting, for several reasons.

Kindle map

First, the resolution of the Kindle display is especially high (close to 170 pixels per inch) when the size of the screen is considered. The Apple Retina display provides even better resolution, and in colour, which makes maps on the iPad also interesting, as they are starting to get closer to the resolution that we are familiar with from paper maps (usually between 600 and 1200 dots per inch). Resolution matters especially when displaying maps because users need to see the context of the location that they are exploring. Think of the physiology of scanning a map: capturing more information on one screen can help in understanding the relationships between different features. Notice that when the resolution is high but the screen area is limited (for example on a smartphone), the limitations on the area that can be displayed are quite severe, which reduces the usability of the map – scrolling requires you to keep in your memory where you came from.
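
To put the resolution comparison in rough numbers, a back-of-the-envelope calculation (the visible screen dimensions assumed here are approximate, not measured) shows how much more detail the same physical area of a paper map can carry compared with a 170 PPI E Ink screen:

```python
# Rough comparison of dot counts over the same physical area:
# a 6-inch e-reader screen (about 3.5 x 4.7 inches, assumed) at 170 PPI
# versus the same area of a paper map printed at 600 DPI.
screen_w_in, screen_h_in = 3.5, 4.7

eink_ppi = 170
paper_dpi = 600

eink_pixels = (screen_w_in * eink_ppi) * (screen_h_in * eink_ppi)
paper_dots = (screen_w_in * paper_dpi) * (screen_h_in * paper_dpi)

print(f"E Ink pixels over the area: {eink_pixels:,.0f}")
print(f"Paper dots over the same area: {paper_dots:,.0f}")
print(f"Ratio: {paper_dots / eink_pixels:.1f}x")
```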

Secondly, E Ink displays can be easily read even in direct sunlight because they are reflective and do not use a backlight. This makes them very useful for outdoor use, while other displays don't handle it very well.

Thirdly, they use less energy and can be used for long-term display of a map while it is used as a reference, whereas with most active displays (e.g. smartphones) continuous use will cause rapid battery drain.

On the downside, E Ink refresh rates are slow, so they are more suitable for static display than for dynamic and interactive display.

During the summers of 2011 and 2012, several MSc students at UCL explored the potential of E Ink for mapping in detail. Nat Evatt (whose map is shown above) worked on the cartographic representation and showed that it is possible to create highly detailed and readable maps even within the limitation of the 16 levels of grey that are available. The surprising aspect that he found is that while some maps are available in the Amazon Kindle store (the most likely place for e-book maps), it looks like the maps were just converted to shades of grey without careful attention to the device, which reduces their usability.
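
The 'just converted to shades of grey' observation is easy to illustrate: a naive conversion to an E Ink-style 16-level greyscale is a couple of lines of image processing (sketched below in Python with Pillow, using an invented filename), which is presumably close to what happened – whereas designing a map for those 16 levels requires deliberate cartographic choices.

```python
from PIL import Image, ImageOps

# Naive conversion of a colour map to the 16 grey levels of early E Ink
# displays - roughly what a quick conversion without design attention does.
img = Image.open("colour_map.png").convert("L")   # to greyscale
img16 = ImageOps.posterize(img, 4)                # keep 4 bits = 16 levels
img16.save("naive_eink_map.png")
```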

The work of Bing Cui and Xiaoyan Yu (a collaboration between MSc students at UCLIC and GIScience) included a survey in the field (luckily on a fairly sunny day near the Tower of London), exploring which scales work best in terms of navigation and readability. The work shows that maps at a scale of 1:4000 are effective – and considering that with E Ink the best user experience comes when the number of refreshes is minimised, that could be a useful guideline for e-book map designers.

As I noted in the previous post, I have just attended the CHI (Computer-Human Interaction) conference for the first time. It's a fairly big conference, with over 3,000 participants and multiple tracks that have evolved over the 30 years that CHI has been going, including the familiar paper presentations, panels, posters and courses, but also the less familiar 'interactivity areas', various student competitions, alt.CHI and Special Interest Group meetings. It's all fairly daunting, even with all my existing experience of academic conferences. During the GeoHCI workshop I discovered the MyCHI application, which helps in identifying interesting papers and sessions (including social recommendations) and in setting up a conference schedule from these papers. It is a useful and effective app that I used throughout the conference (and I wish that something similar could be made available for other large conferences, such as the AAG annual meeting).

With MyCHI in hand, while the fog started to lift and I could see a way through the programme, the trepidation about the relevance of CHI to my interests remained and even somewhat increased after a quick search for the words 'geog', 'marginal' and 'disadvantage' returned nothing. The conference video preview (below) also made me somewhat uncomfortable. I have a generally cautious approach to the understanding and development of digital technologies, and a strong dislike of the breathless excitement about new innovations that are not necessarily making the world a better place.

Luckily, after a few more attempts I found papers about 'environment', 'development' and 'sustainability'. Moreover, I discovered the special interest groups (SIGs) that are dedicated to HCI for Development (HCI4D) and HCI for Sustainability, and the programme started to build up. The sessions of these two SIGs were an excellent occasion to meet other people who are active in similar topics, and even to learn about the fascinating concept of 'Collapse Informatics', which is clearly inspired by Jared Diamond's book and explores 'the study, design, and development of sociotechnical systems in the abundant present for use in a future of scarcity'.

Beyond the discussions, meeting people with shared interests and seeing that there is scope within CHI for technology analysis and development that matches my approach, several papers and sessions were especially memorable. The studies by Elaine Massung and colleagues, about community activism in encouraging shops to close their doors (and therefore waste less heating energy), and by Kate Starbird, on the use of social media in passing information between first responders during the Haiti earthquake, explored how volunteered, 'crowd' information can be used in crises and environmental activism.
Exploring a map next to Père Lachaise
Other valuable papers in the area of HCI for development and sustainability include the excellent longitudinal study by Susan Wyche and Laura Murphy on the way mobile charging technology is used in Kenya, a study by Adrian Clear and colleagues about the energy use and cooking practices of university students in Lancaster, a longitudinal study of responses to indoor air pollution monitoring by Sunyoung Kim and colleagues, and an interesting study by Derek Lomas and colleagues of the 8-bit, $10 computers that are common in many countries across the world.

The 'CHI at the Barricades – an activist agenda?' panel was one of the high points of the conference, with a showcase of the ways in which researchers in HCI can take a more active role in their research and lead social or environmental change, and a consideration of how the role of interactions in enabling or promoting such changes can be used to achieve positive outcomes. The discussions that followed the short interventions from the panel covered issues from accessibility to ethics to ways of acting and leading change. Interestingly, while some presenters were comfortable with their activist role, the term 'action research' was not mentioned. It was also illuminating to hear Ben Shneiderman emphasising his view that HCI is about representing and empowering the people who use the technologies that are being developed. His call for 'activist HCI' provides a way to interpret 'universal usability' as an ethical and moral imperative.

It was good to see the work of the Citizen Sort team among the finalists of the student game competition, and to hear about their development of citizen science games.

So despite the early concerns, CHI was a conference worth attending, and the specific jargon of CHI now seems more understandable. I wish the conference website had a big sign saying 'New to CHI? Start here…'

The Consumers' Association Which? magazine is probably not the first place to turn when you look for usability studies – especially not if you're interested in computer technology. For that, there are sources such as PC Magazine on the consumer side, and professional magazines such as Interactions from the Association for Computing Machinery (ACM) Special Interest Group on Computer-Human Interaction (SIGCHI).

And yet…

Over the past few years, Which? has been reviewing, testing and recommending satnavs (also known as Personal Navigation Devices – PNDs). Which? is an interesting case because it reaches over 600,000 households and because of the level of trust that it enjoys. If you look at their methodology for testing satnavs, you'll find that it does resemble usability testing – click on the image to see the video from Which? about their methodology. The methodology is more about everyday use, and the opinion of the assessors seems to play an important role.

Link to Which Satnav video

Professionals in geographical information science or human-computer interaction might dismiss the study as unrepresentative, or as not fitting their ways of evaluating technologies, but we need to remember that Which? provides an insight into the experience of people who are outside our usual professional and social context – people who go to a high street shop or download an app and start using it straight away. Therefore, it's worth understanding how they review the different systems and what the experience is like when you try to think like a consumer with limited technical knowledge and understanding of maps.

There are also aspects that puncture the 'filter bubble' of geoweb people – Google Maps is now probably the most used map on the web, but the satnav application using Google Maps was described as 'bad, useful for getting around on foot, but traffic information and audio instructions are limited and there's no speed limit or speed camera data'. Waze, the crowdsourced application, received especially low marks, and the magazine noted that it 'lets users share traffic and road info, but we found its routes and maps are inaccurate and audio is poor' (both citations from Which? Nov 2012, p. 38). It is also worth reading their description of OpenStreetMap when discussing map updates, as well as the opinions on willingness to pay for map updates.

There are many ways to receive information about the usability and the nature of interaction with geographical technologies, and some of them, while not traditional, can provide useful insights.

I've been using 37Signals' Basecamp for over 5 years now. I'm involved in many projects with people from multiple departments and organisations. In the first large project that I ran in 2007 – Mapping Change for Sustainable Communities – Basecamp was recommended to us by Nick Black (just before he co-founded CloudMade), so we started using it. Since then, it has been used for 33 projects and activities, ranging from coordinating the writing of an academic paper to running a large multidisciplinary group. In some projects it was used a lot; in others it didn't work as well. As with any other information system, its use depends on the needs and habits of different users and not only on the tool itself.

It is generally an excellent tool for organising messages, information and documents about projects and activities, and it acts well as a repository of project-related information – but project management software is not what this post is about.

I'm sure that, in the scheme of things, we are fairly small users of Basecamp. Therefore, I was somewhat surprised to receive a card from 37Signals.
I'm a fairly passive user of Basecamp as far as 37Signals are concerned – I'm pleased with what it does, but I have not contacted them with requests or anything like that. So getting this hand-written card was a very nice touch from a company that could very easily have written the code to send me an email with the same information – but that wouldn't be the same in terms of emotional impact.

As Sherry Turkle notes in her recent book, human contact is valuable and appreciated. This is an important and often undervalued aspect of communication and interaction – the analogue channels are there and can be very effective. This blog post, praising 37Signals for making this small effort, is an example of why it is worth doing.
