Evaluating Switch Campaigns
Once the delivery of your campaign has been completed, it is important to look back and evaluate. This evaluation serves four distinct purposes:
Measuring effectiveness: Assess how effective your campaign was. This is interesting for yourself but also for local stakeholders, your local support network, politicians and the general public.
Understanding mechanisms: What were the reasons for the observed effects? Did people change their behaviours fully or only to a certain degree? What would make the effects even stronger?
Evaluating processes: Reflect self-critically on which elements of the process went well and which could have been better, so that they can be improved next time.
Analysing and publishing the results: The information gathered for 1), 2) and 3) needs to be analysed, processed or “condensed” into formats that can be shared with stakeholders and the general public.
These purposes require different kinds of information (“data”) and different ways to obtain it. As you can see in the general overview flow diagram here, there are three components with the tick-box symbol shown on the right. They represent three steps with a so-called “quantitative” evaluation method, that is, where you count and measure things with numerical data.
Some information, however, cannot be captured with numbers; especially information that answers questions starting with why and how. Gathering this kind of “data” requires words, discussions, conversations, interviews. This is called a “qualitative” approach, represented in the diagram with the speech bubble icon.
You do not need a degree in statistics or in any other special discipline to perform a rigorous evaluation. But it is crucial to plan this important element of a SWITCH campaign very carefully and from the beginning. This section helps you prepare and conduct such an evaluation and to analyse its results.
1. Measuring effectiveness
The effectiveness of your campaign is best measured with a comparison between the situation before and after the campaign. The most important indicator to measure is the number of car trips and car kilometres replaced by walking and cycling trips. From this you can calculate the savings in GHG emissions and primary energy consumption (see the Analyse and publish the results section for tips on how to do this) and the additional level of physical activity people gained through the modal shift. These aspects are covered by questions Q1-Q5 and Q14 of the suggested baseline questionnaire and questions Q1-Q5 and Q11 of the post survey. Therefore, you must design this aspect of the evaluation already during the preparation phase to capture the “before” situation by defining suitable measurement indicators (see Design section here).
It is absolutely crucial that the before-survey and the after-survey correspond to each other and measure comparable information. In an ideal case, you even have data for a so-called “control group”, which has very similar characteristics to your participants but did not actually participate in the campaign. This allows you to “control for” important factors that have nothing to do with your campaign but nevertheless influenced travel behaviour in your city (e.g. a long-lasting spell of unusually good weather). When a control-group technique is not feasible, it is very important to document such other factors carefully and to address them in the qualitative conversations and interviews.
The “after” situation should be measured shortly after the end of your campaign with essentially the same questionnaire and method that was used for the baseline survey (ideally two to four weeks after the target person went through all phases of the SWITCH campaign and had the opportunity to test new travel behaviours). The main purpose of this 1st After-Engagement survey is to compare its data with the baseline results in order to identify the short-term effects of the campaign on people’s travel behaviour (and possibly on people’s attitudes and mindset).
What matters in the long term, however, is whether the new behaviours still “stick” months and years after the end of the campaign. For this reason, you should also conduct a second evaluation survey about four to nine months after the end of the campaign. This will show whether new behaviours have consolidated into new mobility routines, and it is a good time to assess whether participants have really formed new and healthier habits.
In technical terms, SWITCH therefore differentiates between a 1st and a 2nd “After-Engagement survey.” To keep the data comparable, the baseline survey and the 2nd after-engagement survey should be carried out in comparable seasons, e.g. in spring and autumn. And the questions in the 2nd after-engagement survey should be identical to the ones in the 1st after-engagement survey.
If thoroughly conducted, this method can lead to numbers that can carry particular weight as proof or evidence. Especially in discussions with politicians, stakeholders, the media and the general public, it will be helpful to have such quantitative “facts”. They can also tell you whether the campaign was “worth it” – especially whether it was worth the money invested in it (cost-benefit analysis). Such numerical data can and should also be used to calculate the related reduction of greenhouse gas emissions and primary energy consumption resulting from participants’ travel behaviour.
It is important that you get very clear, beforehand (!), about what exactly you want to achieve. We emphasise this because an all too common mistake in such activities is to collect far too much data – more than can be handled afterwards. Besides, overly long questionnaires take more time than necessary to fill in, which is a problem for both the interviewer and the interviewee.
Conversely, sometimes evaluators realise during the analysis of the results that it would have been good to include another question – but then it is too late. It is therefore important that you get clear, first of all, what exactly it is that you want to know. In other words, you need to sharpen your evaluation tools.
For this, think about what can truly tell you something meaningful about the effects. For example, if you want to know how many people changed their behaviour, you have to be very clear about what you mean by “people”, by “change” and by “behaviour”:
People: Do you mean all participants or are you also interested in gender and age differences? You will need to design your questionnaire accordingly.
Change: Has someone who used to cycle once a week and now cycles twice a week changed their behaviour? Be clear whether you also want to measure the regularity of the change, and how often and for what distances people use which mode of transport. There are no “correct” answers to these questions. The point is merely that you and your team need to define these things in order to formulate precise questions in your questionnaire that return the information you need.
Behaviour: Are you only interested whether people switched from car to bicycle or also from car to walking; maybe even from walking to cycling, from public transport to active modes or any other direction? You might also want to capture the number of car kilometres that were avoided.
In essence, a good tool to measure effects is all about the precision of its components. Think also about precise indicators if you want to measure the cost-benefit ratio of your campaign. For this you need to keep an overview of all expenses and investments made for the campaign (material, staff time, printing costs, … see design section) and then compare this with the financial savings in terms of avoided costs for the treatment of NCDs, productive hours saved from avoided congestion, etc.
In other words: what you need to define is which parameter exactly represents the topic you are interested in (also called operationalisation). If you do this well, your results will be “valid”.
Please have a look at the existing evaluation questionnaire here. It will hopefully provide inspiration for your own city- and context-specific evaluation activities. The SWITCH project has also defined Common Performance Indicators (on the basis of the EU programme Intelligent Energy Europe (IEE)) that can help to develop your interview guideline.
2. Understanding mechanisms
In addition to some information about what happened, you will surely want to know the reasons why certain effects were achieved (or not); why people changed (or did not change) their behaviours and routines. To illustrate this point, the effectiveness measurement is analogous to someone who compares the input and output of a machine.
What is also interesting is to open the engine bonnet, to look inside and to understand the mechanisms that explain the effects. The type of information you need for this purpose is not captured by numbers but by words; and it can be gathered through conversations. This is a so-called “qualitative” approach.
The conversations to obtain these insights can either take the form of interviews with individuals or discussions with groups of people. For interviews, make sure you prepare a set of questions beforehand to ensure that the conversation is well structured. However, you should also allow people to elaborate on certain points because they might have interesting information which you did not anticipate in your questions. Proceed similarly if you conduct a so-called “focus group” discussion; this is a meeting where several participants (ideally 5-10) exchange their views live in your presence. In any case, you should get people’s written consent to participate, you should promise them anonymity and you should take notes; it would be even better to make audio-recordings of such conversations (interviews and focus group meetings).
You can also employ some other qualitative “data gathering” techniques. One that can be particularly effective in a SWITCH campaign is a written diary where participants record their behaviour, thoughts and experience during the campaign. You can also travel with a few selected participants to understand why they do things the way they do. Some researchers even watch video footage with travellers who recorded their trip beforehand with a mobile camera. Again, you are free to develop your own techniques that fit your specific context and target group.
By the way, qualitative information is also crucial for drawing lessons about infrastructural problems in your city. If you get repeated feedback about, say, long waiting times for pedestrians at certain traffic lights, or damaged surfaces on bicycle paths, you can use this feedback constructively and send it to your colleagues in other city departments. It is remarkable how much more attentive people who recently switched transport modes are to such things compared to long-term users of the same mode, who have simply grown accustomed to such situations, even if they are really annoying.
To develop good and precise questions for such conversations, think about what kind of information can provide insights into the mechanisms behind the effects of your campaign. You might have a guess about some intended mechanisms (the PTP advice being one of them, of course), so make sure you include them in your semi-structured interview guide. You might also have a guess about potential other mechanisms that were not part of your campaign. For example, if there was an unusually long period of pleasant weather during the campaign phase, you might want to find out to what degree this influenced the effects. Or did it matter that a new bridge for cyclists and pedestrians across the river opened during the campaign? Maybe you also have assumptions about other factors and mechanisms that might have mattered (e.g. participants’ cultural background). They could help you understand the situation in your city better and devise more effective measures in the future, so by all means include them in the list of questions you want to ask during an interview or a focus group meeting. Most certainly, there will also be mechanisms that you could not possibly have anticipated. For this reason, make sure you also give respondents an opportunity to tell you, unprompted, things that matter in their own subjective view. This can help you discover seemingly “irrational” factors that are nevertheless hugely relevant and will not simply go away, no matter how much someone “preaches” about desired behaviour.
You should follow a systematic approach to extract the key lessons from the information you obtained through qualitative methods. If you have the capacity to transcribe audio recordings from interviews and focus groups, you should definitely do this – or have it done for you. Ideally, this written material should then be analysed with qualitative data analysis software. For this you will need a so-called “code plan”, which is basically a list of topics (represented by key words) that you expect to feature in the conversations. Read through every transcript and highlight these key words at every occurrence. If you do not have special software for this purpose, you could simply open several text documents (one per topic) and copy related statements from the transcript into the corresponding document. At the end of this process you will have all the “nuggets” of your data sorted into several topic-specific documents, which can be extremely helpful to deepen your understanding of (non-)cyclists and (non-)walkers in your city. These insights will also be useful for colleagues in other departments, like those responsible for urban planning, traffic safety, green space, demographic change or air quality.
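As an illustration of this manual coding approach, a small script can sort transcript statements into topics according to a simple code plan. All keywords, topics and statements below are invented examples, not part of the SWITCH materials:

```python
# Sort statements from an interview transcript into topic buckets,
# following a simple "code plan" (topics represented by key words).
# All topics, keywords and statements are illustrative placeholders.
code_plan = {
    "infrastructure": ["traffic light", "bicycle path", "bridge"],
    "weather": ["rain", "sunny", "weather"],
    "habits": ["routine", "habit", "used to"],
}

def code_transcript(lines, plan):
    """Return a dict mapping each topic to the statements that mention it."""
    coded = {topic: [] for topic in plan}
    for line in lines:
        lowered = line.lower()
        for topic, keywords in plan.items():
            if any(kw in lowered for kw in keywords):
                coded[topic].append(line.strip())
    return coded

transcript = [
    "The traffic light near the school takes far too long for pedestrians.",
    "I only cycled when the weather was sunny.",
    "Driving to work had simply become a habit.",
]
for topic, statements in code_transcript(transcript, code_plan).items():
    print(topic, "->", statements)
```

The same keyword list can double as the highlighting scheme if you code transcripts on paper instead.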
3. Evaluating processes
Through the so-called “process evaluation” you document, measure and assess the dynamics of your campaign, the barriers and drivers encountered, the decisions taken and the efforts in terms of money, staff, material and other infrastructure. In other words, the process evaluation should answer questions like: How did it go? What went well / wrong and why? What did it cost? Who did or should have done what? Information on the process can be derived by talking to stakeholders and persons responsible for the implementation of the campaign. Documenting the costs and resources used in the different phases of the campaign is one important part of the process evaluation. It is the basis for computing cost-benefit ratios as one important evaluation indicator.
You should also use this opportunity to look back self-critically and to document the experience so that you and others, including SWITCH campaigners in other cities, can learn from it. After all, there might be another round of a behaviour change campaign – especially if the SWITCH experience was positive. It is therefore useful to have some robust evidence about the main barriers you encountered, the key support factors, the amount and types of resources required and other aspects of managing a campaign.
Some of the parameters that allow you to assess the quality and effectiveness of the whole campaign process are probably the same regardless of where and when you conduct it. Among them are the questions of how many person-hours had to be invested and – correspondingly – how much money had to be spent. You might also want to reflect upon how many flyers were handed out and which allied organisations you managed to win for your campaign. Did pro-cycling clubs support you more or less than bicycle stores? What might the effect have been of the local election midway through the campaign? These and many other issues are obviously very specific to your city and its particular situation, political constellation, topography and climate, historical context and so forth. Prepare related questions for when you interact with the members of your team and external stakeholders in your process evaluation efforts, be it through one-on-one interviews or focus group discussions.
4. Analyse and publish the results – Overall evaluation
Collecting data is not an end in itself; it is a means towards an end. Therefore, you will have to do something with the data, analyse it, synthesise it and publish it in a suitable form. This might be easier said than done because if you conduct your evaluation well, you will end up with quite a lot of data, both quantitative (numbers) and qualitative (audio-recordings, notes). It is good to have developed a data management scheme beforehand and also ideally a routine to check the quality and reliability of the data.
The main point of the quantitative analysis is the comparison between the baseline survey and the 1st and 2nd after-engagement surveys, because this allows you to assess the effects of your campaign in the most rigorous way. With the help of spreadsheet software it will be easy to convert the essence of this data into visual form, using bar charts, pie charts or other suitable diagram types that convey the message clearly.
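As a minimal sketch of this before/after comparison, assuming you have aggregated trip counts per mode from both surveys (all figures below are invented for illustration, not SWITCH results):

```python
# Compare mode shares between the baseline and the 1st after-engagement
# survey. The trip counts below are invented example data.
baseline = {"car": 120, "bicycle": 40, "walking": 30, "public transport": 60}
after = {"car": 95, "bicycle": 60, "walking": 40, "public transport": 55}

def mode_shares(counts):
    """Convert absolute trip counts into percentage shares per mode."""
    total = sum(counts.values())
    return {mode: round(100 * n / total, 1) for mode, n in counts.items()}

before_shares = mode_shares(baseline)
after_shares = mode_shares(after)
for mode in baseline:
    diff = after_shares[mode] - before_shares[mode]
    print(f"{mode}: {before_shares[mode]}% -> {after_shares[mode]}% ({diff:+.1f} pp)")
```

The resulting percentage-point differences are exactly what a bar chart of “before vs. after” mode shares would display.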
When assessing certain impacts, you will have to draw some logical conclusions from the survey data. For example: imagine someone tells you he or she lives 2 kilometres away from a child’s school and used to bring the child to school by car but has now shifted to the bicycle. Think carefully about what this really means in terms of car kilometres saved, because one school trip could mean that the parent drove the distance twice in the past: once to school and once back home in the morning, and the same double trip after school. In this example, the total car kilometres saved per day is actually 8.
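The arithmetic of this example can be written out as a short sketch:

```python
# Car kilometres saved per school day for one parent who switched a
# 2 km school run from car to bicycle (figures from the example above).
distance_to_school_km = 2
# Morning: home -> school -> home; afternoon: home -> school -> home.
legs_per_day = 4
km_saved_per_day = distance_to_school_km * legs_per_day
print(km_saved_per_day)  # 8
```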
Some rule-of-thumb figures that can help you calculate related benefits are:
Fuel saved: €0.30/km (especially a win for employers if it is a company car with a free fuel card)
Parking space saved: an average office parking space in Brussels costs €1,500 per year
Time saved: cyclists are on average sick one working day less per year than other employees (multiply this by the average cost per work day in your region)
Productivity & stress: cycling employees are 44% happier & 20% more productive (Fietsersbond et al., n.d.)
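Taking the rule-of-thumb figures above at face value, a rough annual estimate for a single employee who switches a car commute to cycling might look like this. The commute distance, number of working days and cost of a sick day are illustrative assumptions, not SWITCH figures:

```python
# Rough annual benefit for one employee switching a car commute to cycling,
# using the rule-of-thumb figures above. Commute distance, working days
# and the cost of a sick day are illustrative assumptions.
commute_km_one_way = 5
working_days_per_year = 220
fuel_cost_per_km = 0.30        # EUR/km (rule of thumb above)
parking_cost_per_year = 1500   # EUR/year (Brussels office average above)
cost_per_sick_day = 200        # EUR, assumed average cost of one work day

km_per_year = commute_km_one_way * 2 * working_days_per_year
fuel_saved = km_per_year * fuel_cost_per_km
sick_day_saved = 1 * cost_per_sick_day  # one fewer sick day per year

total_benefit = fuel_saved + parking_cost_per_year + sick_day_saved
print(f"Approx. annual benefit: EUR {total_benefit:.2f}")
```

Set the benefit side against your documented campaign costs to obtain a first, rough cost-benefit ratio.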
The value-for-money assessment of the SWITCH campaign in Hounslow was based on the NICE ROI Tool for physical activity (http://tinyurl.com/zf2fmfe). See also Mallender et al. (2013). Preliminary results (assuming an inactivity level of 49%) indicate a return on investment for every £1 spent over 2 and 5 years, respectively: Productivity £19.46 / £45.68. Transport £4.18 / £9.91. Healthcare £17.23 / £17.39.
For further information, especially about health-related benefits, see Davis (2014) and Kahlmeier et al. (2013) or a very concise summary here from Travelwest.
In addition to the impacts on people’s travel behaviour “per se” it is also interesting to get a sense of what this means in terms of energy saving and a reduction of greenhouse gas emissions. To calculate this, you will need primarily two types of additional information:
The amount of energy that is typically used per kilometre. The average new car must not consume more than 5.6 litres of petrol per 100 km, which is equivalent to 1.79 megajoules (MJ) per kilometre. Since there are many older cars on our streets, you can calculate with 2.06 MJ per km.
The amount of CO2 that is typically emitted per kilometre. Under current EU legislation (2015), new cars must emit no more than 130 grams of CO2 per kilometre. Older cars tend to have less efficient engines, so for your calculation you can assume an average value of 150 g CO2/km.
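With these two average values, a small helper can convert avoided car kilometres into energy and CO2 savings. The 8 km per day and 190 school days in the usage example are illustrative assumptions:

```python
# Energy and CO2 saved from avoided car kilometres, using the average
# values given above (2.06 MJ/km and 150 g CO2/km for the current fleet).
ENERGY_MJ_PER_KM = 2.06
CO2_G_PER_KM = 150

def savings(car_km_avoided):
    """Return (energy in MJ, CO2 in kg) saved for the avoided car km."""
    energy_mj = car_km_avoided * ENERGY_MJ_PER_KM
    co2_kg = car_km_avoided * CO2_G_PER_KM / 1000
    return energy_mj, co2_kg

# Example: the school-run parent above, 8 car km avoided per day
# over an assumed 190 school days per year.
energy, co2 = savings(8 * 190)
print(f"{energy:.0f} MJ and {co2:.0f} kg CO2 saved per year")
```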
If you also want to make statements about the time saved (from avoiding congestion) and if you want to extrapolate the effects of the SWITCH campaign (i.e. if you want to say what effects it would have if every citizen did the same kind of switch) you will need further information about average transport figures in your city. You can usually obtain this data from other travel surveys conducted in your area.
Typical data required for this kind of calculation includes:
Modal split [% / mode]
Mean trip distance per day [km / day]
Mean trip distance per travel mode [km / day / mode]
Mean trip duration [min / day]
Mean trip duration per travel mode [min / day / mode]
Mean speed per travel mode [km / h / mode]
Mean number of trips per day [number / day].
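A city-wide extrapolation based on such figures could be sketched as follows. The population, modal split, trip figures and assumed shift are all invented example values; replace them with data from your local travel surveys:

```python
# Extrapolate the campaign's car-km reduction to the whole city, assuming
# every citizen made a similar switch. All input figures are invented
# example values, to be replaced with local travel survey data.
population = 200_000
car_modal_split = 0.45          # share of all trips made by car
trips_per_person_per_day = 3.2  # mean number of trips per day
mean_car_trip_km = 6.0          # mean trip distance by car [km]
shift_share = 0.05              # share of car trips shifted to active modes

car_trips_per_day = population * trips_per_person_per_day * car_modal_split
car_km_avoided_per_day = car_trips_per_day * mean_car_trip_km * shift_share
print(f"{car_km_avoided_per_day:,.0f} car km avoided per day city-wide")
```

Multiplying the result by the energy and CO2 factors from the previous section then yields city-wide savings estimates.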