
Chapter 20 Evaluation in health promotion

Key points

Defining evaluation
Evaluation research methodologies
Why evaluate?
What to evaluate?
Process, impact and outcome evaluation
How to evaluate?
Evaluating whole systems
Cost-effectiveness
Using evaluation to build an evidence base for health promotion

Overview

Evaluation is an integral aspect of all planned activities, enabling an assessment of the value or worth of an intervention. Evaluation also performs several other roles. For practitioners, evaluation helps develop their skills and competences. For funders, evaluation demonstrates where resources can be most usefully channelled. For lay people, evaluation provides an opportunity to have their voice heard. There are additional reasons why evaluating health promotion is a key aspect of practice. As a relatively new discipline there is great pressure on health promotion to prove its worth through evaluation of its activities. In addition, the drive in the National Health Service (NHS) to ensure that all practice is evidence-based affects health promotion as well as more clinical activities. In a situation where resources will always be limited, demonstrating the cost-effectiveness of interventions is important. There are thus many factors leading to a demand for evaluation of health promotion practice.


Evaluating health promotion is not a straightforward task. Health promotion interventions often involve different kinds of activities, a long timescale, and several partners who may each have their own objectives. Health promotion is still seen as belonging within the health services, where the dominant evaluation model is quantitative research centred on experimental trials, with randomized controlled trials (RCTs) as the preferred evaluation tool. Health promotion has had to argue its case for a more holistic evaluation strategy encompassing qualitative methodologies and taking into account contextual features.

The focus of this chapter is on evaluating health promotion interventions. Evaluation of research studies is also part of the health promoter’s role and remit, and readers are referred to Chapters 2 and 3 in our companion volume (Naidoo & Wills 2005) for a detailed discussion of this topic. This chapter considers what is meant by evaluation, the range of research methodologies used in evaluation studies, its rationale, how it is done and the role of evaluation in building the evidence base for health promotion.

Defining evaluation

Evaluation is a complex concept with many definitions that vary according to purpose, disciplinary boundaries and values. A comprehensive definition of evaluation is: ‘the systematic examination and assessment of features of a programme or other intervention in order to produce knowledge that different stakeholders can use for a variety of purposes’ (Rootman et al 2001, p. 26). This definition is useful because it flags up the importance of the purpose of evaluation and the fact that there can be many different reasons to evaluate. Evaluation can provide information on the extent to which an intervention met its aims and goals, the manner in which the intervention was carried out, and the cost-effectiveness of the intervention. It is important to be clear at the outset about the purpose of evaluation as this will determine what information is gathered and how information is obtained. The value-driven purpose of evaluation distinguishes it from research (Springett 2001). Evaluation uses resources which might otherwise be used for programme planning and implementation, so a clear purpose is also necessary in order to legitimate and protect this use of resources.

Box 20.1

You have a limited budget (from lottery money) and tight timescale to deliver a community health promotion intervention designed to improve nutrition. Stakeholders include the funders, local schools, social housing and sheltered accommodation providers, community associations, primary health care staff, social care staff and the community. Your proposed plan of action includes an evaluation and you have suggested earmarking 5–10% of your budget and time for this purpose. A coalition of some of the stakeholders has approached you requesting that you omit the evaluation and concentrate all your resources on the intervention. How would you respond? What arguments might you use to defend the proposed evaluation?

From a practitioner’s perspective, evaluation is needed to assess results, determine whether objectives have been met and find out if the methods used were appropriate and efficient. These findings can then be fed back into the planning process in order to improve future practice. Evaluations of interventions are used to build an evidence base of what works, enabling other practitioners to focus their inputs where they will have most effect. From a lay perspective, evaluation helps to clarify expectations and assess the extent to which these have been met. Evaluation may also help determine what strategies had most impact, and why. Without evaluation, it is very difficult to make a reasoned case for more resources or expansion of an intervention. Even when a programme is rolling out an established and effective intervention, specific local features may have an unanticipated impact that will only become apparent in an evaluation. There are sound reasons for evaluating all interventions, although more innovative projects will require more substantial and costly evaluation.

Box 20.2

Principles to guide the evaluation of health promotion interventions

The World Health Organization (WHO) has identified four principles that should be used to guide the evaluation of health promotion interventions:

1. Participation – all stakeholders should be involved in evaluation
2. Multidisciplinary – evaluation should combine quantitative and qualitative methodologies drawn from different disciplines
3. Capacity – evaluation should help to build the capacity of all stakeholders to address health promotion concerns
4. Appropriate – evaluation should be appropriate, taking into account the complexity of health promotion interventions and their long timescale (Rootman et al 2001).

Box 20.3

A well-man clinic is introduced in a primary health care practice. The aim is to monitor the health of middle-aged men and to provide information and advice enabling them to adopt healthier lifestyles, so that in the longer term health risks such as high blood pressure or smoking are reduced. Over a period of time, the practice nurse invites all men aged 40–65 into the practice for a half-hour session where she checks vital statistics (weight, blood pressure), asks about lifestyle (e.g. diet, smoking, alcohol and drug use, sexual activity, exercise) and gives individually tailored information and advice about adopting a healthier lifestyle. This intervention takes up a significant proportion of her time and workload.

How would you evaluate this programme, using the four principles outlined above?


Evaluation covers many different activities undertaken with varying degrees of rigour or reflectiveness. At its simplest level, evaluation describes what any competent practitioner does as a matter of course, that is, the process of appraising and assessing work activities. This includes the process of informal feedback or more systematic review of health promotion interventions. In the example above, noting how the sessions have been received by the men, or soliciting their comments, or those of peers and colleagues, is part of the evaluation process. Evaluation is often used to refer to a more formal or systematic activity, where assessment is linked to original intentions and is fed back into the planning process. For the well-man clinic example, this might involve monitoring vital statistics and doing a before-and-after study of lifestyle behaviours. Health promotion evaluation should integrate core health promotion principles such as equity and participation into the evaluation process. In the well-man clinic example this might be achieved through asking the men what they wanted from participating in the programme and whether they achieved their goals. Comparing the socioeconomic status of participants and non-participants would help determine if the programme was reinforcing or challenging social and health inequalities.

Evaluation research methodologies

Box 20.4

A hospital nurse has set up a project to help cardiac patients to stop smoking. The intervention involved the identification of a key worker who was allocated time to interview patients to assess their smoking behaviour and draw up individual plans. After discharge, patients were followed up by a weekly telephone call for 6 weeks.

How could this project be evaluated so that any success in terms of smoking cessation in the target group could be shown to be due to the project?

What would be the strengths and limitations of the methods you identify?

1. An RCT would involve randomly allocating each smoking patient, on arrival in the ward, to either the experimental group (who receive the interview) or the control group (who do not receive the interview but get a care plan and a general leaflet).
2. Case-study evaluation would interview patients about their involvement in the project and examine their knowledge, attitudes and reported behaviour.

Evaluation is often more formally conducted as research using a variety of different methods. The classic scientific method of proof, the experiment, relies on controlling all factors apart from the one being studied and can best be achieved under laboratory conditions. However, this is clearly impossible and unethical to achieve where people’s health is concerned. The RCT is the next most rigorous scientific method of proof. The RCT involves randomly allocating people to an intervention or control group. Random allocation means that the two groups should be matched in terms of factors such as age, gender and social class which are all known to affect health. Any changes detected in the intervention group are then compared to those found amongst the control group. Those changes which occur in the intervention but not the control group can then be attributed to the health promotion programme.

In the well-man clinic example in Box 20.3 an RCT study would involve randomly allocating all men in the target group to either the intervention group (invited for screening) or the control group (not invited). The two groups would then be compared after the intervention had taken place. If the intervention group showed statistically significant improvements in health status or health-related behaviour over and above those recorded for the control group, the intervention would be deemed to be effective.
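The random allocation at the heart of the RCT design described above can be sketched in outline. This is a minimal illustration only: the function name, participant identifiers and group sizes are invented for the example and are not drawn from the chapter.

```python
import random

def allocate(participants, seed=42):
    """Randomly allocate participants to intervention and control groups.

    Randomisation, rather than hand-picking, is what makes the two groups
    comparable on factors such as age, gender and social class.
    """
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (intervention, control)

# Hypothetical identifiers for 100 eligible men aged 40-65
men = [f"man_{i}" for i in range(100)]
intervention, control = allocate(men)
print(len(intervention), len(control))  # 50 50
```

Any change seen in the intervention group but not the control group could then be attributed to the programme, as the text describes.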

The degree of scientific rigour necessary to conduct an RCT is hard to achieve in real-life situations. Most health promotion programmes have spin-off effects and indeed are designed to do so. It is impossible to isolate different groups of people or to ensure that programmes do not ‘leak’ beyond their set boundaries. However, the RCT design does mean that changes detected in the intervention group may be ascribed to the health promotion programme with a greater degree of confidence.

Evaluation research may also use qualitative methods to focus on understanding the processes involved in change. This kind of evaluation provides details on what is happening in interventions and which features have been effective. This is achieved through the use of qualitative methodologies and methods, and the case study is one example of this approach. The health promotion intervention is the ‘case’ that is intensively studied using a variety of methods. This enables the evaluator to get a detailed picture of how the intervention has affected the people involved. Case studies are typically small-scale and findings are expressed in descriptive rather than numerical terms. Each case study is unique and findings cannot be generalized to other situations. Its strength as a method is that there is a high degree of confidence that identified effects are real and result from the programme.

In the well-man clinic example in Box 20.3, a qualitative case-study approach might involve in-depth interviews with a sample of the men who took up the screening opportunity, as well as with the practice nurse. The interviews would aim to explore what motivated the men to accept the invitation to attend the clinic, how they found the experience and how (if at all) it has affected them.

Both the RCT and the case study are valid methods which can be used to isolate the effects of health promotion interventions. There are also many other methods that lie between these two extremes, e.g. surveys which aim to identify significant trends. In practice methods often overlap or are combined. The RCT fits into a scientific, quantitative medical model of proof, has higher status and is generally regarded as more respectable and credible than the case study.


Why evaluate?

Evaluation uses resources that could otherwise be used to provide services. Given that services are always in demand, there needs to be a strong rationale for devoting resources to evaluation rather than service provision. New or pilot interventions warrant a rigorous evaluation because, without evidence of their effectiveness or efficiency, it is difficult to argue that they should become established work practices. Other criteria that can be used to determine if evaluation is worth the effort relate to how well it can be done. If it will be impossible to obtain cooperation from the different groups involved in the activity, it is probably not worthwhile trying to evaluate. If evaluation has not been considered at the outset but is tacked on as an afterthought, the chances are that it will be so partial and biased as to be not worth the effort.

Evaluation is only worthwhile if it will make a difference. This means that the results of the evaluation need to be interpreted and fed back to the relevant audiences in an accessible form. All too often, evaluations are buried in inappropriate formats. Work reports may go no further than the manager, or academic studies full of jargon may be published in little-known journals.

Box 20.5

Reasons for evaluation

To assess how resources were deployed (effort)
To assess whether what has been achieved was an economically sound use of resources (efficiency)
To measure impact and outcomes and whether the intervention was worthwhile (effectiveness)
To judge the adequacy and relevance of the delivery of the intervention (execution)
To assess the overall benefits of the intervention (efficacy)
To inform future plans
To justify decisions to others (O’Connor-Fleming & Parker 2001).

Results of evaluation studies will be relevant to many different groups and it may be necessary to reproduce findings in different ways in order to reach all these groups.

Box 20.6

A specialist community public health nurse has evaluated her health promotion activities. These include opportunistic one-to-one counselling and education, setting up a carers’ support group, producing information leaflets on coping with dementia, and health surveys of people aged 75 and over.

How could she make her findings known to her clients, her manager, her nursing colleagues and other health and welfare workers?

What to evaluate?

Health promotion objectives may be about individual changes, service use or changes in the environment. Box 20.7 shows the range of possible objectives associated with smoking reduction interventions, each of which would need evaluation.

Box 20.7

Health promotion objectives for smoking reduction

Increased knowledge, e.g. harmful effects of passive smoking
Changes in attitudes, e.g. less willingness to breathe in others’ smoke
Changes in behaviour, e.g. stopping smoking
Acquiring new skills, e.g. learning relaxation methods to reduce stress
Introduction of healthy policies, e.g. funding to enable GPs to prescribe nicotine replacement aids for people on low income
Modifying the environment, e.g. banning tobacco advertising and promotion, workplace no-smoking policies
Reduction in risk factors, e.g. reduction in number of smokers and amount of tobacco smoked per person
Increased use of services, e.g. take-up rates for smoking cessation clinics, number of calls made to quit-smoking telephone helplines
Reduced morbidity, e.g. reduced rates of respiratory illness and coronary heart disease (CHD)
Reduced mortality, e.g. reduced mortality from lung cancer.

Although all these factors relate to health, they are quite separate, and there is no necessary connection between, say, increased knowledge and behaviour change. It is therefore inappropriate to evaluate a given objective (e.g. increased physical activity) by measuring other aspects of an intervention (e.g. number of leaflets taken at a health fair or number of people reporting that they would like to exercise more). It is important to choose appropriate indicators for the stated objectives. This issue is discussed further in Chapter 19, where the log-frame model and the use of logic models to select appropriate indicators are considered.

Process, impact and outcome evaluation

Evaluation is always incomplete. It is not possible to assess every element of an intervention. Instead, decisions are taken about which evaluation criteria to prioritize and sometimes also about which objectives are to be assessed. A distinction is often made between process, impact and outcome evaluation. Process evaluation (also called formative or illuminative evaluation) is concerned with assessing the process of programme implementation. Effects can be immediate (impacts), intermediate or long-term (outcomes). Impact and outcome evaluation are both concerned with assessing the effects of interventions.

Box 20.8

Classify the objectives in Box 20.7 according to whether they refer to immediate, intermediate or long-term outcomes.

The following criteria have been proposed to guide evaluation in public health (Phillips et al 1994, cited in Douglas et al 2007):

Effectiveness – the extent to which aims and objectives are met
Acceptability – to the people concerned and society at large
Appropriateness – relevance to need
Equity – equal provision for equal needs
Efficiency – cost–benefit ratio.

Box 20.9

Which of the criteria above apply to process evaluation? Which apply to impact and outcome evaluation?

Process evaluation

Process evaluation may be from the perspective of participants and/or practitioners and/or other stakeholders such as funders. Stakeholders’ perceptions and reactions to health promotion interventions and facilitating or inhibiting factors may be sought. More objective data, such as whether targets were met and timescales and budgets adhered to, can also be included. The aims of process evaluation are practical – can the intervention be repeated, can it be refined, and can it be reproduced in similar or different settings with similar or different target groups (Parry-Langdon et al 2003)?

There are four main questions in process evaluation:

1. Is the programme reaching the target group (programme reach)?
2. Are participants satisfied with the programme (programme acceptability)?
3. Are all the activities of the programme being implemented (programme integrity)?
4. Are all the materials and components of the programme of good quality (programme quality) (Hawe et al 1994; Nutbeam 1998)?

Process evaluation employs a wide range of qualitative or ‘soft’ methods. Examples of such methods are interviews, diaries, observations and content analysis of documents. These methods tell us a great deal about that particular programme and the factors responsible for its success or failure, but they are unable to predict what would happen if the programme were to be replicated in other areas. Because process evaluation does not use ‘hard’ scientific methods, its findings tend to be more easily dismissed as unrepresentative. However, process evaluation is crucial to health promotion. We need to understand how health promotion interventions are interpreted and responded to by different groups of people and whether the intervention itself is health-promoting, and for this we need process evaluation.

Impact and outcome evaluation

Evaluation of health promotion programmes is usually concerned to identify their effects. The effects of an intervention may be evaluated according to its:

Impact – the immediate effects or outputs such as increased knowledge or shifts in attitude
Outcome – the longer-term effects such as changes in lifestyle.

The timing of an evaluation will affect what data can be collected and how confident we can be that the effects are due to the intervention. This is illustrated in the following example.

Box 20.10

When to evaluate: impact and outcomes of a CHD prevention programme

A CHD prevention programme may have the following five effects:

1. Improves people’s knowledge of the risk factors for CHD
2. Increases people’s motivation and intention to take up CHD risk factor screening opportunities
3. Persuades more people to attend screening clinics
4. Increases media coverage of CHD
5. Reduces premature mortality rate from CHD.

An immediate postprogramme evaluation may identify the first and second effects, or the impact of the intervention. The third and fourth effects may only be apparent at a later evaluation, e.g. after 6 months, and are called outcomes. Twelve months after the programme, the increased attendance at screening clinics may no longer be discernible and attendance figures may have reverted to preprogramme levels. A reduction in the mortality rate may not be discernible for 5 years or more, by which time it will be difficult to attribute it to the health promotion programme. The assessment of the overall success or failure of a programme is therefore influenced by the timing of the evaluation.


Impact evaluation tends to be the most popular choice, as it is the easier of the two to carry out. Impact evaluation can be built into a programme as the end stage. For example, a health promotion programme for secondary schools may include a review of the programme as the last session. Students may be invited to identify how they have changed since the programme began and how they think the programme will affect their future behaviour. Outcome evaluation is more difficult, because it involves an assessment of longer-term effects. Using the same example, outcome evaluation may be used to determine whether the programme did affect students’ behaviour 1 year later. One way of ascertaining this would be to compare participants’ health-related behaviour (e.g. smoking, alcohol and exercise) before and after the programme, but there are bound to be changes in students’ behaviour over a year irrespective of any health promotion programme. It would therefore be better to compare the students with another group of similar students who did not receive the programme, to see whether the same changes occur in both groups. The second, or control, group of students is necessary to avoid attributing all behaviour change to the health promotion programme and therefore overestimating its influence.
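The control-group comparison described above amounts to comparing the change in each group and subtracting one from the other. A minimal sketch follows; the function name and all the figures are hypothetical, invented purely to illustrate the arithmetic.

```python
def net_effect(interv_before, interv_after, control_before, control_after):
    """Change in the intervention group minus change in the control group.

    Subtracting the control group's change strips out drift that would
    have happened anyway, such as students' behaviour changing over a year
    irrespective of any programme.
    """
    return (interv_after - interv_before) - (control_after - control_before)

# Hypothetical smoking rates (%) before and one year after the programme
effect = net_effect(30.0, 22.0, 30.0, 28.0)
print(effect)  # -6.0: a six-point fall beyond the control group's own drift
```

Without the control group, the full eight-point fall in the intervention group would wrongly be credited to the programme, which is exactly the overestimation the text warns against.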

Outcome evaluation is therefore more complex and costly than impact evaluation. Going back a year later to the same students and getting new information from them will take up time and resources, as will obtaining a matched group of students to use as the control group. However, despite these problems, outcome evaluation is often the preferred evaluation method because it measures sustained changes over time. Results using data on impact or outcome are often expressed numerically, and this again increases credibility. Quantitative or ‘hard’ data are seen as more concrete or factual than the ‘soft’ data used in process evaluation.

How to evaluate: the process of evaluation

In order to evaluate, decisions need to be taken about what information is needed and how it will be gathered. This needs to be done at the outset of an intervention, in order to ensure that relevant data are gathered at the appropriate time. Rootman et al (2001) propose an eight-stage framework for the evaluation of health promotion interventions:

1. Describe the programme, clarify aims and objectives.
2. Identify issues and questions of concern to all stakeholders.
3. Design the information-gathering process.
4. Collect the data.
5. Analyse the data.
6. Make recommendations.
7. Disseminate findings.
8. Take action.

Many commentators have argued that the evaluation process should adhere to health-promoting principles (Thorogood & Coombes 2004; Morgan 2006). Evaluation should involve the participation of all stakeholders and be an empowering experience. The evaluation of community health promotion interventions is particularly challenging as these are complex, context-specific programmes focusing on socioeconomic and environmental determinants of well-being.

Evaluation therefore involves several key aspects that need to be considered: what to measure; who evaluates; how to gather and analyse data; and how to put the findings into practice. Each of these key aspects will now be considered in turn.

What to measure?

Deciding what to measure to assess the effects of health promotion is not easy. In theoretical terms, the many meanings and definitions of the concept of health result in a lack of consensus about how best to evaluate it. For those who subscribe to the medical model of health, data concerning morbidity, disability and mortality are appropriate measures to use for evaluation purposes. For those who adopt a more social model of health, a much broader range of measures (including, for example, measures of socioeconomic status or the quality of the environment) will be appropriate. For people who prioritize the educational model, measures of knowledge and attitude change will be paramount.


The golden rule must be to measure the objectives set during the planning process. (For more details on programme objectives, see Chapter 19.) Although this sounds straightforward, in practice it can be difficult, and a surprising number of evaluation studies violate this principle. Different stakeholders might have different objectives and the evaluation needs to take this into account. The objectives set may concern areas where there is a lack of consensus over appropriate measurement. For example, process objectives such as increased multiagency collaboration or increased community involvement are difficult to measure. To collect relevant data would require a special effort because they are not measured routinely. Change in people’s attitudes or beliefs is particularly problematic to measure.

The success of a health promotion intervention is not solely about achieving behavioural changes or reductions in disease rates. For example, a needle exchange scheme should not be judged solely by a reduction in the rate of human immunodeficiency virus (HIV) infection among drug users. Other markers of success, such as the take-up rate, are also important. In many cases, expecting a clear change in morbidity from a behaviour change would be unrealistic. Although there is a link between needle-sharing and HIV infection, there are other risk factors, and expecting a preventive outcome from this initiative might be unwise.

A programme may have several different objectives, some of which are easier to measure than others. It then becomes tempting to measure the easiest objectives and extrapolate from these findings. But if the measurements are of different classes of events (e.g. combining behavioural, environmental and attitudinal objectives), it is not legitimate to do this.

Box 20.11

A programme has been launched with the objective of reducing child accidents. Key stakeholders include community and hospital-based health practitioners, community groups (parents’ and neighbourhood groups) and local authority staff, including environmental health officers and health and safety officers. The following indicators have been suggested as suitable means of evaluating the programme. For each indicator discuss:

Is it appropriate?
Is it feasible?
Who do you think suggested it?

1. Take-up of campaign literature
2. Campaign awareness
3. Sales of child safety equipment
4. Making changes to the home environment to improve safety, e.g. installing stair gates
5. Making changes to the local environment, e.g. traffic-calming measures
6. Establishment of local child accident prevention working groups
7. Reduction in the number of accidents to children
8. Reduction in the number of severe accidents to children that require hospitalization.

Who evaluates?

Success means different things to different groups of people, or stakeholders, who each have their own agendas and interests. Different stakeholders have unequal power to impose their evaluation agendas on others. Different groups of people engaged in health promotion interventions will each have invested something but may well be looking for different results. For example, funders of a project may be looking for efficiency or results which can be interpreted as cost-effective. Practitioners may be looking for evidence that their way of working is acceptable to clients and achieves the objectives set. Managers may be looking for evidence of increased productivity, measured by performance indicators. Clients may be looking for opportunities to take control over some health-related aspects of their lives.

It is therefore important to be clear at the outset about whose perspectives are being addressed in any evaluation. A starting point is simply to acknowledge that different vested interests are involved and try to identify them. The ideal is then to go on to represent the views of the different stakeholders by collecting data from each group. This process is called pluralistic evaluation (Smith & Cantley 1985). Using the process of methodological triangulation, which employs a wide range of data sources, an overall picture may be built up. Pluralistic evaluation which takes into account different stakeholders’ views is more complete, although the findings may be complex and lack clarity. Pluralistic evaluation is a means of building capacity and empowering clients and service users as well as practitioners.


In practice, pluralistic evaluation may appear too complex and costly, and evaluation is often carried out instead by external researchers or by practitioners themselves. Externally conducted evaluations tend to be larger in scale and more ambitious in their remit. There are advantages and drawbacks to each option, as Box 20.12 demonstrates.

Box 20.12

A dental health project has been launched, and needs to be evaluated. There are two choices: either an in-house evaluation conducted by the people involved in running the project, or an external evaluation conducted by outside researchers. These are some of the pros and cons of each option. Can you identify any others?

Insider evaluation
Pros: knows background to project; cheaper; acceptable to everyone
Cons: too involved in project; no research expertise; biased to prove success

Outside evaluation
Pros: unbiased attitude; research expertise; fresh perspective
Cons: expensive; may appear threatening; unfamiliar with project

How to evaluate: gathering and analysing data

The process of evaluation involves making decisions about what methods to use to gather and analyse relevant data. Each of these stages is discussed separately below.

Gathering data

Practical difficulties arise when trying to obtain data and trying to combine different forms of data to provide an overall picture. Some relevant data are already available and accessible, for example morbidity and mortality data. Other data already exist and may be obtained, for example policy documents or health surveillance data. However, some data will need to be specially collected and, particularly in areas such as attitude change or empowerment, there are no easy or accepted means of doing this.

A wide range of data, both qualitative and quantitative, may be used in evaluation studies. Guiding principles to use when selecting methods of data collection are to use appropriate and feasible tools. Appropriate means gathering data that will help meet the objectives of the evaluation. Feasible means gathering data within budgetary and time constraints. Process evaluation often concentrates on qualitative data whereas outcome evaluation is more likely to use quantitative data. However both forms of data may be applicable in various ways at different times.

The medical model of research dominant within health care settings prioritizes the RCT as the most rigorous form of quantitative methodology. However, RCTs may be inappropriate for evaluating health promotion interventions, where the context is an important and acknowledged element. RCTs may also be misleading and unnecessarily expensive (Morgan 2006). The call for evaluation to be a health-promoting process in itself also militates against the use of specialist quantitative methodologies such as RCTs. For participation in an evaluation to be empowering, stakeholders need to be able to understand, contribute to and oversee the process.

Evaluation seeks to assess process and effect, and it is therefore vital to have baseline data to use for comparison purposes. Unless baseline data are collected, it will be impossible to state that impacts or outcomes are due to the intervention. Planning needs to take account of this and allocate sufficient resources to allow for the collection of pre- and postintervention data.

Page 297

Analysing the data

There are various ways of analysing data, depending on whether the data are quantitative or qualitative, what kind of intervention or study was carried out, and what resources and expertise are available. There are many excellent textbooks that discuss data analysis in depth (e.g. Bowling 2002) and the reader is referred to these for a more detailed discussion of methods. However data analysis is not just a question of methodological awareness and expertise. Values also have an impact on data analysis processes. The assumption is that, faced with a certain set of findings, everyone would agree on their significance or meaning, but this is not necessarily the case. There may also be dispute about which findings are relevant or significant. Data analysis should be an inclusive and capacity-building exercise for all participants, enabling everyone to have a say about what data are significant and why.

Evaluating complex interventions

Many health promotion interventions are deliberately complex, involving multiple stakeholders and many different programme components. Interventions may also be context-specific, i.e. take account of, and try to use, specific features of the context. Community programmes such as Health Action Zones and Sure Start are examples of complex interventions. The goals may include not just direct effects, but triggering changes that will impact on the context and magnify the effects.

Scientific methods of evaluation, such as experimental trials, are therefore inappropriate for evaluating such interventions. Instead of screening out all factors apart from the intervention, evaluation of complex interventions seeks to unpack and examine the trigger effects of the intervention. Pawson & Tilley (1997) describe this process as looking inside the black box to explore what is happening at the inputs–outcomes interface. Outcome and process evaluation need to take place together. Pawson & Tilley’s (1997) realist approach to evaluation provides a means of doing this, and may be summarized as:


context + mechanism = outcome


Realist evaluation seeks to understand how causal mechanisms work within specific contexts – what works for whom and under what circumstances. Once the whole picture is understood, the results may be appropriately transferred to other situations.

The Theory of Change approach has developed as a response to the challenge of evaluating complex community initiatives (Fulbright-Anderson et al 1998). This approach seeks to make explicit stakeholders’ assumptions about cause and effect, and how mini-steps build and combine to create long-term outcomes. There are five stages in the Theory of Change approach to evaluation:

1. Identify long-term goals and assumptions behind them.
2. Map backwards to reveal the necessary preconditions for achieving these goals.
3. Identify the initiative’s interventions that will lead to the desired changes.
4. Develop outcome measurement indicators in order to assess the initiative.
5. Write a narrative to explain the logic of the initiative.

A good theory of change is plausible, do-able and testable (Connell & Kubisch 1998).
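The five stages above can be sketched as a simple data structure. This is an illustrative sketch only, not part of the Theory of Change literature: the initiative, preconditions and indicators below are hypothetical examples loosely based on Box 20.13.

```python
# A minimal sketch of recording a Theory of Change as data, following the
# five stages listed above. All initiative details are hypothetical.

from dataclasses import dataclass, field


@dataclass
class TheoryOfChange:
    long_term_goal: str                    # stage 1: long-term goal
    assumptions: list[str]                 # stage 1: assumptions behind it
    preconditions: list[str] = field(default_factory=list)  # stage 2: backwards mapping
    interventions: list[str] = field(default_factory=list)  # stage 3: interventions
    indicators: list[str] = field(default_factory=list)     # stage 4: outcome indicators

    def narrative(self) -> str:
        """Stage 5: a narrative explaining the logic of the initiative."""
        return (
            f"To achieve '{self.long_term_goal}', the preconditions "
            f"{self.preconditions} must be met; the interventions "
            f"{self.interventions} are intended to create them, and "
            f"progress is assessed via {self.indicators}."
        )


toc = TheoryOfChange(
    long_term_goal="Reduced nuisance behaviour and crime",
    assumptions=["Young people lack alternative activities"],
    preconditions=["Accessible youth activities exist"],
    interventions=["Open a youth drop-in centre"],
    indicators=["Recorded incidents of nuisance behaviour"],
)
print(toc.narrative())
```

Writing the theory down in this explicit form makes each stakeholder assumption visible and testable, which is the point of the approach.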

BOX 20.13

You have been asked to evaluate a community initiative aimed at reducing nuisance behaviour and crime (youths hanging around in public areas, noisy and inconsiderate behaviour, muggings and street robberies). How could you use the Theory of Change model to plan your evaluation?

Green & South (2006, p. 84) identify six elements of good practice in the evaluation of community health projects:

1. Building evaluation into the project
2. Maximizing stakeholder involvement
3. Measuring changes in individual and community health
4. Using appropriate evaluation methods
5. Examining processes
6. Learning in practice.

BOX 20.14

How would you evaluate the community project described below, using Green & South’s six criteria for good practice?

Community health project

A community health project is launched in a deprived inner-city area. The project has two dedicated community health workers and its aims are to increase participation, reduce health inequalities, work collaboratively with existing agencies and achieve sustainable results. The project workers link up with existing groups, e.g. church groups and parents’ groups, and set up four new groups focusing on carers, older people, unemployed people and young people. Activities include the establishment of a community garden, setting up a buddying system for vulnerable people in the community and a weekly drop-in support group for carers. The project workers also establish links with local statutory services, e.g. schools, general practices and social services. Two years on, you are invited to contribute to the project’s evaluation.

Page 298

What to do with the evaluation: putting the findings into practice

Dissemination of findings is important in order to publicise good practice and also to flag up interventions that were not as successful as had been anticipated. Knowing what doesn’t work is as valuable as knowing what does work, but there is a great emphasis on producing and publicising positive results. As Hawe et al (1994) state:

Sometimes to avoid ‘failure’, health promoters may avoid evaluation … At any one time many of the current initiatives may turn out to be those that fail to produce intended results. There is no shame in this … Stigma should not be attached to programmes that fail, only to those programmes that fail to learn from these experiences or to those programmes that fail to evaluate.

Putting findings into practice can take many forms. The results of evaluation should ideally feed into an ongoing cycle of action and reflection, allowing more knowledgeable and reasoned interventions to take place. Evaluation may also enable stakeholders to progress activities and gain more support to do so. Evaluation helps to establish the cost-effectiveness of health promotion and contributes towards its evidence base.

Cost-effectiveness

Part of the reason for evaluation is to determine whether desired results were achieved in the most economical way and whether allocating resources to health promotion can be justified. There are many different ways of calculating the economic pluses and minuses of health promotion. Cost–benefit analysis is a way of calculating whether, and to what extent, something is worth doing. Cost–benefit analysis relies on pricing both the inputs and the benefits of a health promotion programme. An attempt is then made to calculate the cost of each benefit. This is known as a cost–benefit ratio. Putting a price on health outcomes or benefits is a very difficult exercise. One approach to this problem is to compare the cost–benefit ratio for a health promotion intervention with the cost–benefit ratio for some other health intervention. It is often assumed that prevention is cheaper than cure and that health promotion saves money, but this is not necessarily the case.
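The cost–benefit ratio described above can be illustrated with a short calculation. All figures here are hypothetical, invented purely to show the mechanics; pricing real health benefits is, as noted, far harder than this sketch suggests.

```python
# Illustrative cost-benefit ratio calculation. All figures are hypothetical.

def cost_benefit_ratio(programme_cost: float, priced_benefits: float) -> float:
    """Return the ratio of priced benefits to programme cost.

    A ratio above 1.0 suggests the priced benefits exceed the cost;
    the hard part in practice is putting a price on health benefits at all.
    """
    if programme_cost <= 0:
        raise ValueError("programme cost must be positive")
    return priced_benefits / programme_cost


# Hypothetical smoking-cessation programme:
cost = 50_000      # programme delivery cost (GBP)
benefits = 80_000  # priced value of treatment costs avoided (GBP)

ratio = cost_benefit_ratio(cost, benefits)
print(f"Cost-benefit ratio: {ratio:.2f}")  # 1.60
```

As the text notes, such a ratio for a health promotion intervention is often most meaningful when compared with the equivalent ratio for some other health intervention.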

BOX 20.15

An effective smoking prevention campaign is associated with the following costs. Money is saved by:

Not having to treat people with smoking-related diseases on the NHS
Not having to pay sickness benefit and disability pensions to people with smoking-related diseases
Increased production in industry because fewer employees are off sick.

Money is lost by:

Retirement pensions paid to people who live longer
Unemployment benefits to people in tobacco production and retail industry made unemployed due to fall in demand
Loss of government revenue from tobacco taxation.

Overall, do you think this campaign is cost-effective?

Page 299

Once a decision has been made to implement an intervention, economic analysis can help to determine the most efficient way of resourcing it. Efficiency refers to the maximum benefit that can be derived from the least cost. Cost-effectiveness is a comparison in monetary terms of different methods used to achieve the same outcomes. ‘Cost-effectiveness analysis addresses technical efficiency in the sense that it can tell us the best way to do something but not whether or not that something is worth doing’ (author’s italics; Cohen 2008, p. 337). Opportunity costs refer to what is sacrificed or forgone when resources are allocated to something, e.g. a health promotion project.
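Cost-effectiveness comparisons of the kind described above reduce to cost per unit of outcome. The sketch below compares two hypothetical smoking-cessation methods; the costs and quitter counts are invented for illustration.

```python
# Illustrative cost-effectiveness comparison: cost per additional quitter
# for two hypothetical smoking-cessation methods. Figures are invented.

def cost_per_outcome(total_cost: float, outcomes_achieved: int) -> float:
    """Cost-effectiveness expressed as cost per unit of outcome achieved."""
    return total_cost / outcomes_achieved


methods = {
    "clinic": cost_per_outcome(6_000, 5),            # £1200 per quitter
    "leaflet campaign": cost_per_outcome(1_500, 3),  # £500 per quitter
}

# The cheaper method per quitter is the more technically efficient one,
# though, as Cohen (2008) notes, this says nothing about whether either
# method is worth doing at all.
best = min(methods, key=methods.get)
print(f"Most cost-effective per quitter: {best} at £{methods[best]:.0f}")
```

The comparison only holds because both methods pursue the same outcome; comparing cost per quitter with, say, cost per vaccination would require a cost–benefit framing instead.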

Economic appraisal is an important element in evaluation because there are always competing claims for limited resources. Using economics to make health-related decisions might seem a distasteful idea and people may shy away from attempts to put a value on people’s health, well-being or life. But the reality is that people, societies and governments are constantly making choices and decisions that are influenced by economic considerations. It is therefore important to make the decision-making process transparent and include economic principles and concepts in evaluation studies.

Using evaluation to build an evidence base for health promotion

Evaluation helps build a basis of research to demonstrate which health promotion interventions succeed in meeting objectives. Evaluation therefore identifies effective health promotion practice which others can adopt. Evidence-based practice is firmly established in medicine and nursing, where RCTs of alternative treatment protocols are used to establish which form of treatment is most effective for most people. In health promotion, creating evidence-based practice is more problematic.

BOX 20.16

Why might it be difficult to establish evidence-based health promotion practice?

There are several reasons why proving an evidence base exists for health promotion is problematic. These include knowing when to evaluate, knowing what constitutes success, being able to attribute results to interventions and the inappropriateness of using RCTs.

Page 300

Knowing when to evaluate is a challenge, and the timing of an evaluation can affect its results. If an evaluation is seeking to determine the outcomes of an intervention, a longer timescale is desirable. However this has problems. Health promotion is a long-term process and contexts and settings are constantly changing, so it can be difficult to be sure that any changes detected are due to the health promotion input, and not to any other factor. Health-related knowledge, attitudes and behaviour are constantly changing, regardless of health promotion programmes. Societies and environments are also changing in response to many different factors. One response to this problem might be to evaluate sooner and use a shorter timescale. However to do so might mean that longer-term sustained outcomes are missed. The best solution is to evaluate over different time periods, but this requires more resources.

Knowing the threshold for success is another challenge, and is illustrated in Box 20.17. A balance needs to be struck between setting the threshold too high, leading to interventions being unjustly deemed ineffective, and setting the threshold too low, leading to a judgement that health promotion is not worth the effort. Striking the correct balance involves knowing what changes are likely to take place in the absence of the intervention, and then setting a realistic goal of what additional change is feasible and represents an efficient use of resources.

BOX 20.17

What constitutes evidence of success?

A smoking cessation programme is launched which includes clinics for those wishing to give up smoking. A clinic run by a health promoter attracts 20 clients who attend all six sessions. At a 6-month evaluation, 25% of the participants have stopped smoking.

Is this a success?

The health promoter may be pleased with these results. People often attend clinics as a last resort, and 6 months is a reasonable time period in which to assess long-term behaviour change. However, the health promoter’s manager may point out that 20% is the average success rate for people trying to quit, regardless of the method used. Clinics are time-consuming and 20 people is not a large group. The result, 25% quitters, is five people, four of whom might have quit using other, less intensive or expensive, methods. So only one additional ex-smoker might be attributable to the smoking cessation clinic.
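The arithmetic behind Box 20.17 can be made explicit: compare the observed quitters with the number expected from the background quit rate alone.

```python
# Working through Box 20.17's arithmetic: how many quitters can be
# attributed to the clinic beyond the background quit rate?

participants = 20
observed_quit_rate = 0.25    # 25% quit at the 6-month evaluation
background_quit_rate = 0.20  # average quit rate regardless of method

observed_quitters = round(participants * observed_quit_rate)  # 5 people
expected_anyway = round(participants * background_quit_rate)  # 4 people
additional = observed_quitters - expected_anyway              # 1 person

print(f"Quitters attributable to the clinic: {additional}")
```

With groups this small, one person either way changes the conclusion entirely, which is part of why judging success thresholds is so difficult.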

The third problem is being able to attribute any changes that occur to the health promotion input. The many different individual, organizational, social, economic and environmental factors affecting health are in a constant state of flux and it is very difficult to pin any changes down to specific causes. A health promotion intervention may trigger a variety of changes, some immediate, some intermediate, and some longer-term. It is challenging to record and capture changes happening at different times.

The most robust evidence of cause and effect derives from quantitative methodologies such as RCTs. However, there are several reasons why the RCT is often inappropriate for health promotion interventions, and this is the fourth problem. Many interventions are complex and multicomponent, and are designed to trigger effects and spread into other contexts and groups, so attempts to isolate the intervention are at odds with health promotion principles and practice.

There are also ethical problems with adopting the RCT model of evidence. When and how does the practitioner or agency decide there is sufficient proof to roll out a programme? Is it ethical to deny what is very likely to be an effective intervention in order to obtain more scientific evidence of its efficacy? These methodological and ethical problems mean the RCT is often inappropriate for health promotion interventions.

Although the RCT is often inappropriate, this does not mean that there is no evidence on which to base health promotion work. Meta-analyses or systematic reviews of research studies pool together findings from different studies in effectiveness reviews. Effectiveness reviews are a means of building up a knowledge base which can tell us what are reasonable expectations of success in health promotion. Success in health promotion is complicated because the aim is not just to change knowledge or behaviour, but to change the social determinants of health, and this requires qualitative as well as quantitative evaluation.

Page 301

There is now an independent national body, the National Institute for Health and Clinical Excellence (NICE), devoted to providing evidence-based guidance on the promotion of good health and the prevention and treatment of ill health. NICE publications include guides on providing environments to encourage physical activity (NICE 2008a) and smoking cessation (NICE 2008b). In conclusion, the evidence base for health promotion is developing and includes a variety of approaches and methodologies. Syntheses of evaluations are being produced, enabling practitioners to start to practise evidence-based health promotion.

Conclusion

Evaluation contributes to the accountability and development of evidence-based health promotion practice, and so is an important aspect of the health promoter’s work. This involves evaluating health promotion activity with which you are involved. There are often pressures to adopt unrealistic measures of success, such as reduced mortality rates or demonstrable cost benefits. Most health promoters are engaged in more modest activities which seek to achieve changes in knowledge, behaviour, attitudes, service take-up or the policy process. These are more appropriate outcomes to use for evaluation purposes.

Evaluation is a practical activity which feeds into the theoretical debate about the nature and purpose of health promotion. This debate cannot be confined to professionals, or those who hold managerial or financial power. It must include the public, those who are the targets of health promotion activity. This is why pluralistic evaluation, which enables participants to have a voice in determining effectiveness, is so important.

Evaluation is not a simple activity, and it consumes resources which might otherwise be spent on doing health promotion. The decision about whether, when and how to evaluate is therefore important. The question of evaluation should be considered at the outset of any planned health promotion intervention. If it is to be done, it should be done in the best possible way. If this is not feasible, it is better to acknowledge that a full evaluation is not possible and not to attempt one. Ongoing monitoring may be the best one can do. This is acceptable, but there is a distinction between routine monitoring of activities through the use of performance indicators and a more thoroughgoing evaluation. It is important not to confuse the two, and to be clear about which you are doing.

BOX 20.18

Guidelines for good practice in evaluation

Which of the following suggested guidelines for good practice in evaluation do you think should be included in a checklist of criteria to be met if undertaking evaluation?

Are there any other guidelines you would wish to add?

Evaluate early on before vested interests have had time to solidify.
Evaluate only if it will make a difference.
Evaluate only when it is appropriate.
Evaluate only when you can include the perceptions of different groups, e.g. only when you can do a pluralistic evaluation.
Publicise the results of evaluation widely in relevant formats.
Evaluate only when there is a chance of scientific accuracy.
If you cannot meet these criteria, do not evaluate.

Questions for further discussion

What factors would influence your decision about whether to evaluate a particular health promotion activity?
What factors would you wish to consider when evaluating a health promotion intervention?
Page 302

Summary

This chapter has looked at how evaluation is defined, the different kinds of research methodologies used in evaluation research, and why health promotion needs to be evaluated. Different kinds of evaluation have been identified, including process, impact, outcome and whole-systems evaluation. The process of evaluation, including principles and stages, has been outlined. The importance of demonstrating the cost-effectiveness of health promotion and the role of evaluation in building an evidence base for health promotion have been discussed.

Further reading

Douglas J, Sidell M, Lloyd C, et al. Evaluating public health interventions. In: Earle S, Lloyd CE, Sidell M, et al, editors. Theory and research in promoting public health. London: Sage Open University; 2007:327-354. A succinct chapter that includes a detailed discussion of how the evaluation criteria can be applied to public health and health promotion interventions

Green J, South J. Evaluation. Maidenhead: Open University Press; 2006. A very readable account of the theoretical underpinnings of evaluation and the practicalities of doing evaluation. The real-life challenges and complexities of evaluation are discussed in depth

Rootman I, Goodstadt M, Hyndman B, et al, editors. Evaluation in health promotion: principles and perspectives. Denmark: WHO. 2001. A very thorough and comprehensive account of the theoretical and methodological issues relating to the evaluation of health promotion interventions

National Institute for Health and Clinical Excellence (NICE) produces guidance documents and effectiveness reviews on a variety of topics, including public health and health promotion issues such as obesity and nutrition, exercise and smoking cessation. Their website is www.nice.org.uk.

References

Bowling A. Research methods in health: investigating health and health services, 2nd edn. Maidenhead: Open University; 2002.

Cohen D. Health economics. In: Naidoo J, Wills J, editors. Health studies: an introduction, 2nd edn. Basingstoke: Palgrave Macmillan; 2008. Chapter 10

Connell J P, Kubisch A C. Applying a theory of change approach to the evaluation of comprehensive community initiatives: progress, prospects and problems. In: Fulbright-Anderson K, Kubisch A C, Connell J P, editors. New approaches to evaluating community initiatives, vol. 2: theory, measurement and analysis. Washington DC: Aspen Institute; 1998.

Douglas J, Sidell M, Lloyd C, et al. Evaluating public health interventions. In: Earle S, Lloyd C E, Sidell M, et al, editors. Theory and research in promoting public health. London: Sage, Open University, 2007. Chapter 11

Fulbright-Anderson K, Kubisch A C, Connell J P, editors. New approaches to evaluating community initiatives, vol. 2: theory, measurement and analysis. Washington DC: Aspen Institute. 1998.

Green J, South J. Evaluation. Maidenhead: Open University Press; 2006.

Hawe P, Degeling D, Hall J. Evaluating health promotion: a health worker’s guide. Sydney: Maclennan and Petty; 1994.

Morgan A. Evaluation of health promotion. In: Davies M, Macdowall W, editors. Health promotion theory. Maidenhead: Open University Press; 2006:169-187.

Naidoo J, Wills J. Public health and health promotion: developing practice, 2nd edn. London: Baillière Tindall; 2005.

National Institute for Health and Clinical Excellence. Promoting and creating built or natural environments that encourage and support physical activity. London: NICE; 2008a.

National Institute for Health and Clinical Excellence. Smoking cessation services in primary care, pharmacies, local authorities and workplaces, particularly for manual working groups, pregnant women and hard to reach communities. London: NICE; 2008b.

Nutbeam D. Evaluating health promotion – progress, problems and solutions. Health Promotion International. 1998;13:27-44.

Page 303

O’Connor-Fleming M L, Parker E. Health promotion principles and practice in the Australian context, 2nd edn. Sydney, Australia: Allen and Unwin; 2001.

Parry-Langdon N, Bloor M, Audrey S, et al. Process evaluation of health promotion interventions. Policy and Politics. 2003;31:207-216.

Pawson R, Tilley N. Realistic evaluation. London: Sage; 1997.

Rootman I, Goodstadt M, Hyndman B, et al, editors. Evaluation in health promotion: principles and perspectives. Denmark: WHO, 2001.

Smith G, Cantley C. Assessing health care: a study in organisational evaluation. Milton Keynes: Open University Press; 1985.

Springett J. Appropriate approaches to the evaluation of health promotion. Critical Public Health. 2001;11:139-152.

Thorogood M, Coombes Y. Evaluating health promotion: practice and methods, 2nd edn. Oxford: Oxford University Press; 2004.

Page 304