Evaluation:
Developed for the Research Councils UK by People Science & Policy Ltd, Hamilton House, Mabledon Place, London WC1H 9BB. Tel: 020 7554 8620, e-mail: [email protected], www.peoplescienceandpolicy.com
The Research Councils UK and The Office of Science and Technology
Practical Guidelines
Contents

Acknowledgements
Foreword
Introduction
  Who is this guide for?
  Using this guide
Building an Evaluation Strategy
  Objectives of evaluation
  Monitoring
  Setting aims
  Setting objectives
  Evaluating your project
  How much evaluation?
  Resources
  Reporting
  In-house or independent?
  Confidentiality
  If you run into trouble
Gathering Data – Tools and Techniques
  Quantitative research
  Qualitative research
  Observational research
  Visitors’ book
  Record keeping
  Media impact
Data Handling – Tools and Techniques
  Quantitative data
  Qualitative data
  Observational data
Reporting
Annexes
  1 Evaluation Questions and Questionnaires
  2 Summative Evaluation Schema
  3 Glossary
  4 Further reading
ACKNOWLEDGEMENTS

People Science & Policy Ltd is grateful to Alison McLeod for her contribution, especially in identifying existing sources that have helped us to structure this guide and for the further reading annex. We would also like to thank all those ‘users’ who took time to comment on drafts, particularly Roy Lowry, Erinma Ochu, Elspeth Bartlet and Janet Haylett, who attended a workshop to provide comments, and all the Research Council staff who supported us throughout the drafting process.
FOREWORD

We need this guide! Anyone wanting to talk with the public about issues around science should read this before starting. Certainly anyone seeking funding for engaging with the public should heed the advice here before writing any proposals. You’ll be more successful in getting money and having more impact if you do.

The whole arena of engaging with different publics is full of good ideas and committed people wanting to make a difference. But as a community we haven’t been good at capturing the impact of what we do, nor at sharing good practice. It’s not enough to do activities because we think they’re worthwhile; we must be clear about the impacts we are trying to have and then go about trying to measure and assess them, and the processes we’re using. Then we must reflect and consider how we might change and improve what we do, and share what we’ve learned.

As well as recommending this guide, it’s also worth looking at the previous set of practical guidelines*, which offer ideas on encouraging two-way communication with the public.

The guide here provides excellent support to help us not only evaluate the impact of what we do, but also to think through our approaches, motives and strategies. If we all use the guide properly and share what we learn, we’ll improve the quality of activities, increase the impact of what we do and earn the right to be taken seriously as a learning, reflective community who really can make a difference.

*Dialogue with the public: Practical guidelines, RCUK 2002.

Professor Kathy Sykes
Collier Chair for the Public Engagement in Science and Engineering
Institute for Advanced Studies
University of Bristol
1 INTRODUCTION

Who is this guide for?
This guide is designed for those who lead projects intended to engage general audiences in science, social science, engineering and technology and the social, ethical and political issues that new research in these areas raises. It is intended to help project managers evaluate individual projects, regardless of their experience of evaluation. If you run, or oversee, lots of projects and report on them as a group, then you should also look at the companion publication designed for programme managers.
Evaluation techniques are based on social and market research methods, so even though some material will be familiar to some readers, the questions posed and the thought processes suggested ought to be relevant to all project managers.
The evaluation approaches covered in this guide are suitable for a range of projects, including:
- talks
- shows
- teachers’ packs
- hands-on events
- websites/CD-ROMs
- exhibitions
- open days
and all the other ingenious ways that you are using to engage people with science.
Evaluation strategies should be integral to the project design process, and it might be important to talk to potential funders about evaluation when preparing grant applications.
Using this guide
The guide has five main sections and a series of annexes.
Section 2 looks at building an evaluation strategy and also provides an overview of evaluation – what it’s about, what it can do, what it can’t do and how you should report your findings. It should also help you in making an application for funding. You may be tempted to skip this section and get straight into the sections on tools and techniques. PLEASE DON’T.
Section 3 focuses on data collection. This section describes the different techniques, all of which stem from market and social research. It explains what different techniques can do and how you can make them work together.
Section 4 gives guidance on how to analyse the data you’ve collected.
Section 5 looks at how to draft your report.
Annex 1 provides some detailed advice on constructing questionnaires and phrasing questions.
Annex 2 provides an overview of which techniques to use depending on the information you want.
Annex 3 is a glossary of terms.
Annex 4 suggests some further reading.
There is no magic formula for evaluation. In order to construct an evaluation strategy, you need to think about your objectives, the data you can collect and the reports you have to make. Done well, constructing an evaluation strategy is every bit as satisfying as designing the project of which it is a part.
2 BUILDING AN EVALUATION STRATEGY

This section should help you to plan your evaluation and support your grant application.

Sub-section: Contents
- Objectives of evaluation: Making sure you know what you can achieve through your evaluation and that you meet everyone’s requirements
- Monitoring: Clarifying what is evaluation and what is monitoring
- Setting aims / Setting objectives: Being clear about the difference between aims and objectives
- Evaluating your project: Deciding where evaluation can help your project
- How much evaluation: Being clear about what you can evaluate
- Reporting: Making sure you’re meeting everyone’s needs
- In-house or independent: Can you manage the evaluation you think your project needs and your funders want?
- Confidentiality: Making sure you comply with research ethics and the Data Protection Act
- If you run into trouble: What to do if your project starts to go wrong

Evaluation is said to be ‘very important’, described as ‘very difficult’ and often not done very well. People tend to be frightened of evaluation because they see it as a test and a threat. In essence, though, evaluation seeks to ‘prove’ and ‘improve’. If you see it as an opportunity to prove whether you achieved what you set out to achieve, how well you did it and what impact your project has had, and an opportunity to improve what you do, you’ll get far more out of it.

Objectives of evaluation
The objective of an evaluation should be to:

- establish whether the objectives of a project or programme have been met;
- identify whether anything has changed as a result of the project or programme (often termed summative evaluation);
- identify how the project could have been more effective;
- identify whether there have been unintended outcomes and what these were; and
- support the development of a project as part of the research and development process (sometimes known as formative evaluation).

Evaluation is a process that takes place before, during and after a project. It includes looking at the quality of the content, the delivery process and the impact of the project or programme on the audience(s). It requires baselines to be set, quality criteria and thresholds to be determined, and an understanding of where to find, and how to collect, relevant data and analyse it in a meaningful way. Knowing what, if anything, has changed as a result of a project is not enough. It is also important to know why something has changed and how a project can be improved.

Who is your evaluation for?
As well as being useful to you, your evaluation may be required by funders or other stakeholders, so you’ll need to check whether you’re meeting all your obligations.

- Has the funder set specific reporting requirements?
- Does your institution require certain data, or do your partners/stakeholders want to know particular things?

Make sure these are covered by your strategy.
Monitoring
There is often confusion between monitoring and evaluation data. This may well arise because they can both be gathered in similar ways. In essence, monitoring is about counting things and ensuring your project is on track. Evaluation is about the impact of your project and ensuring it is well designed to make the maximum impact.
Things like audience numbers, numbers of events, numbers of CD-ROMs distributed, etc. are monitoring data, not evaluation data. However, you may need to do research to establish some of these figures, which means you might think of it as evaluation data. The same basic tools for gathering and analysing data can be used for evaluation and monitoring information, often achieving both at the same time.
Setting aims
The first thing you need to do is to clarify your aim(s). What do you want to achieve? This is big-picture stuff. There are several reasons why people and organisations want to develop activities to communicate science to a wider audience. They may aim to:
1 raise awareness of science and science-based issues;
2 promote an individual organisation;
3 be publicly accountable;
4 raise funding;
5 recruit the next generation of scientists and engineers;
6 explain science to non-specialists;
7 build appreciation of science and new technologies; and
8 obtain public views on science and science-based issues.
Most project managers will recognise one or more of the aims listed above as representing their driving force(s).
Setting objectives
Next you need to set objectives – these are the things you need to do to achieve your aim. Each of the motivating factors listed above will yield different types of objectives. For example, an organisation that wishes to promote itself will set a degree of brand recognition as an objective, but this may be irrelevant to an organisation seeking to recruit the next generation of scientists and engineers.
Your objectives are the things against which you will be evaluating yourself and against which other people (including funders) will be assessing you. Setting objectives is an art. At the application stage, funders (whether internal or external) will want to see that you’re going to give them a lot for their money, but if you oversell at the beginning, can you deliver at the end? Good objective setting helps you think through not just the evaluation strategy, but the whole process of running the project. A nice by-product of this is that it will help you to construct a clear application/proposal.
Make your objectives SMART, that is:
- Specific;
- Measurable;
- Achievable;
- Relevant; and
- Time-bound.
Each objective should be all of these, and they inter-relate. For example, an objective might not be sufficiently specific to be measurable, so it can never be clear to what extent it was achieved. Similarly, an objective set with a timescale that is beyond the scope of the programme to measure is not relevant. Considering each element of SMART provides a process for establishing whether or not an objective is appropriate.
In setting objectives the two main pitfalls are: setting objectives that you believe are important, but against which you can’t measure your success; and setting objectives because you know they are measurable, but which are actually of little relevance. Remember, things like audience numbers and numbers of events are monitoring data, not evaluation data. So, although you will be setting targets for these things that follow the SMART criteria, you need to consider the impact of your project for your evaluation strategy.
Bear in mind that your funder may have higher-level, overarching objectives, and you need to ensure that your project is supporting the funder’s objectives.
Evaluating your project

There are three roles for your evaluation:

- to support the development of your project (formative evaluation);
- to ensure you do it better next time (evaluation of your processes); and
- to assess the final impact of your project (summative evaluation).

Formative evaluation
Some people call formative evaluation market research or research and development. You should be using formative evaluation during the development of the project to test ideas, concepts and prototypes on representatives of the audience. This will help you to assess what sort of ‘product and delivery channel’ is going to be most effective at reaching and engaging your target audiences. You may need to test things out with a sample audience several times during your project’s development. What you might regard as nothing more than good practice in project management or product development is in fact a central part of an evaluation strategy.

The emphasis at this stage is likely to be on discussion-based tools (qualitative research) that will give you an in-depth understanding of your audiences. If you are designing an exhibit, a CD or a website, then watching people interacting with a prototype can also provide useful input for finalising the structure. Qualitative input at this stage can be crucial to understanding how to change your project to improve its appeal. To make improvements, it’s no good knowing people didn’t like it unless you know what they didn’t like and understand why they didn’t like it.

However, don’t over-egg the pudding. You need to think about when formative evaluation is really needed, as it will add time and cost to your project. If you’re using a tried and tested formula, such as a debate format, you may not need to include formative evaluation in your project plan. A good guide is: if you have any doubt about how something will be received by your audience, test it out first.

Evaluation of processes
You should be evaluating the process of managing and delivering your project. This will help you to see what lessons can be learned for next time. This information is also useful to peers and colleagues, and should be shared with the wider community of science communicators, so that they can learn from your experiences.

For many projects, this type of evaluation can be handled entirely by the project team. If you think of the analogy of keeping lab books as a record of what’s going on within a research project, it becomes easy to see how this aspect of evaluation can become easy to manage. If there are a number of people involved, make sure that you have scheduled into the work programme opportunities to talk through progress and any difficulties or issues that are arising. Don’t forget to include this time when you’re costing a project; it will be time well spent, and when presented as part of your overall evaluation strategy it will show people making funding decisions that you’re on top of the project and know how to make it a success.

Summative evaluation
Summative evaluation is the type of evaluation that people are most familiar with. This looks at the outcomes and measures whether or not you have had any impact on the audience. You’ll be interested in questions such as:
- How much did the audience enjoy what you provided?
- Did it change people’s understanding/knowledge or attitudes?
- Has it influenced their actions/behaviour?
- How big an impact did it have on those who engaged?
So the emphasis is likely to be on numerical data, but depth of understanding can be important at this summative stage. Qualitative data can be crucial in explaining what lies behind your quantitative data. There is no reason why questionnaires cannot ask questions that will help you to understand ‘why’ people give certain responses. This is especially true if you have qualitative research, perhaps from your formative evaluation, on which to base your questions.
You need to think very carefully about the impacts you’re aiming to achieve because, to be able to evaluate impacts, they must be measurable. You need to think about the realistic level of impact that you can make and the practicalities of identifying that impact.
Remember, summative evaluation is mainly about impacts. This means that the counting of outputs such as the number of CDs distributed, the number of people at an event, the number of hits on a website, etc. is not part of your evaluation. This is monitoring data, not evaluation data, as discussed under Monitoring above.
Benchmarking and baselines
It sounds obvious, but if you’re trying to change something, then you need to know the state of affairs before people interacted with your project, so that you can see if there’s been a change afterwards. If you want to measure change, you’ll need quantitative data.
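The before-and-after comparison can be sketched in a few lines. In this hypothetical example, the same five-point agreement question (1 = strongly disagree, 5 = strongly agree) is asked before and after an event; all the scores are invented for illustration:

```python
# Sketch: comparing responses to the same five-point question asked
# before (baseline) and after a project. All scores are invented.

def mean(scores):
    return sum(scores) / len(scores)

pre = [2, 3, 3, 2, 4, 3, 2, 3]    # baseline responses
post = [3, 4, 4, 3, 5, 4, 3, 4]   # responses after the event

shift = mean(post) - mean(pre)
print(f"Mean before: {mean(pre):.2f}")   # 2.75
print(f"Mean after:  {mean(post):.2f}")  # 3.75
print(f"Shift: {shift:+.2f} points on a 5-point scale")  # +1.00
```

A shift on its own does not prove your project caused it; the later discussion of control groups and other influences still applies.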
How much evaluation?
Impact can be instantaneous or it can be longer-term. As a project manager, you’ll need to think about what sort of impact your project could have and whether or not it is practical to measure it. The Kirkpatrick model, first developed in 1975, is helpful in setting out four levels of potential impact:

The Kirkpatrick model
- Reaction – the initial response to participation (in your case this would include immediate feedback on the project, including things like facilities, such as the venue).
- Learning – changes in people’s understanding, or raising their awareness of an issue (you might here be looking for what people had gained from engaging with your project).
- Behaviour – whether people subsequently modify what they do.
- Results – to track the long-term impacts of the project on measurable outcomes, e.g. exam results.

For projects of up to about £150,000 it may not be practical, or cost effective, to evaluate on all four levels. Factors that you will need to consider are:

- the depth of interaction you’ll have with your audiences;
- how long you’ll stay in contact with them; and
- whether your actions are likely to play a large or small part in shaping attitudes and behaviour.

Reaction – the initial response
At the reaction level you may want to set objectives regarding things like perceived levels of enjoyment and usefulness. These are important if the project is to have an extended life, as this is what will bring people back and encourage them to recommend your project to others.

It is hard to imagine that any project manager will not want to gather evaluation data at this level. Indeed, for many project managers, this will be the only practical and appropriate level at which to evaluate. You can assess reaction in three main ways: by getting people to write down their responses – usually using a structured questionnaire; by talking to them one-to-one or in (focus) groups; and finally by observing them. One, two or all three might be appropriate for you, and Annex 2 contains a schema to help you think about which technique(s) to use.

There is a danger that evaluation of initial reactions can become a simplistic assessment of whether people enjoyed the project. Enjoyment is important and you will want to know this, but you can learn much more than this from initial reactions.
If you want to know whether people enjoyed the project/found it useful/learnt something, you can also find out what they particularly did or didn’t enjoy, what was most and least useful, and in particular what they would change and why. Now, as well as knowing what people thought of your project, you’ll also have a good deal of data on how you can more effectively meet your audience’s needs later in this project or in subsequent activities. You can also get feedback on the environment, for example on the comfort of the venue and the quality of refreshments. All of these issues can be investigated using one or more of the three basic techniques.
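One hedged way to go beyond a simple enjoyment score is to code free-text answers (for example, to "what did you most enjoy?") into categories and tally them. The categories and responses below are invented for illustration:

```python
# Sketch: tallying coded answers to an open question such as
# "what did you most enjoy?". Categories and responses are invented.
from collections import Counter

coded_responses = [
    "hands-on exhibits", "talking to scientists", "hands-on exhibits",
    "the venue", "hands-on exhibits", "talking to scientists",
]

tally = Counter(coded_responses)
for category, count in tally.most_common():
    print(f"{category}: {count}")
# hands-on exhibits: 3
# talking to scientists: 2
# the venue: 1
```

Counts like these feed directly into the data-handling techniques described in section 4.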
The easiest time to get initial reactions is when people are taking part in your project, whether this is attending an event or using a product such as a website or CD. However, you may also find that it is worthwhile getting a slightly more considered response a short period of time after the actual interaction, when people have had a chance to reflect. This will be more complex to arrange, but it might give you additional information.
Increasingly, funders are trying to capture data about what works, or doesn’t, and why, so that they can spread good practice. They want project managers to learn from the experiences of others as well as through their own. It may be true that we learn by our mistakes, but there is no need for all of us to learn by making the same mistakes. Funders will appreciate evaluation strategies that will provide feedback on lessons learned, good practice, and successful and unsuccessful approaches. People don’t like to report negative outcomes from their projects, but if you understand why something went wrong, this will help not only you but your funder and your peers in the future. Presenting this as ‘lessons learned’ will enable better practices to evolve.
Learning – changes in understanding/raising awareness (what people have gained from your project)
Information exchange plays a part in science and society dialogue-based events, and many science communication activities are designed to be overtly educational. You might be trying to raise people’s awareness of an issue or an area of science rather than just impart ‘facts’, but you should be learning what the public thinks as well. Knowing what different participants have gained from the project, whether about an issue, a subject or from one another, will be an important part of the evaluation.
If you want to know how much people have learnt, then you need to know what they knew before they interacted with your project (the baseline) and what they knew after they’d finished. If you’re considering looking for this amount of detail, you may also want to know how long the effect has lasted, so you may also want to find out how much they still know, perhaps six months after your project has finished. This is starting to become a significant amount of work. It may be necessary, and if it is you’ll need to look carefully at how much this will cost and how you build data collection into the project timetable.
For learning amongst young people, it might be possible to compare predicted exam grades with actual results, or to see whether there is any improvement over a previous level of testing. However, this data is very sensitive at the individual level and schools are not likely to provide it, so you will have to work with teachers to collect it.
If, however, you’re more interested in what, if anything, people think that they have learnt, rather than trying to measure objectively how much they have learnt, then this is the sort of issue that can be investigated as part of gathering initial, or considered, reactions. You can ask people to tell you what they think they have gained or whether they have a more complete view or understanding of the issue.
Behaviour – do people change what they do?
At the third level of evaluation, we are starting to talk about quite profound factors. You need to think quite seriously about whether or not your project is likely to have an effect on the way people behave, and even if it does, is it practical to track and measure this? Unfortunately, there is no simple formula that will give you an answer to whether or not you should be trying to track your project’s impact on behaviour.
At the extremes it’s easy to tell. A hands-on science funfair with a budget of a few thousand pounds that attracts some hundreds of visitors is looking to inspire and enthuse people, and maybe encourage some young people to think about educational and career options that they might not otherwise consider. So changing behaviour is a real and legitimate aim, but it is quickly apparent that the resources that would have to be devoted to measuring whether or not such changes had occurred would be entirely disproportionate to the scale of investment in the project. For this sort of project, the first two levels of evaluation are likely to be adequate.
At the other end of the scale, a multi-million pound investment in the development of products to support science teaching may have changing the behaviour of teachers as its primary focus. Such an initiative or programme needs to undertake evaluation that shows the impact on behaviour, because of the size of the investment and the presumed seriousness of the problem.
In the middle ground, you need to think about just how important it is to know about the impact you’re having on behaviour. If it is really important, then what are the resource implications, and can you, your partners and your funders provide these resources and access to the appropriate expertise?
If you want to assess changes in behaviour, you’ll need to know what the baselines were and you’ll certainly need some sort of ongoing contact to monitor change. You might rely on self-reporting, but you might want independent verification. Either way, you will have to identify resources and expertise capable of delivering this sort of evidence.
Results – long-term impact
While we all hope that our activities will have a major impact on people’s lives, in reality they will be just one factor among many that influence people’s attitudes and behaviour. At this level of evaluation there can be profound resource and methodological issues to address. The apparently straightforward way to assess your long-term impact is to track the people with whom you have engaged over an extended period. However, if you only track the people you’ve engaged with, there is no ‘control group’ to allow you to ascribe changes to your project rather than to other influences. The resource implications of tracking people that you’ve engaged with are considerable. Add a control group and it quickly becomes clear that this approach is only practical for large-scale projects that have budgets to match.
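The value of a control group can be shown with a small worked example, comparing the change in an engaged group with the change in a comparison group over the same period; all figures are invented:

```python
# Sketch: why a control group matters when ascribing long-term change.
# The scores might be attitude ratings taken a year apart; all invented.

def mean(xs):
    return sum(xs) / len(xs)

engaged_before = [3.0, 2.5, 3.5, 3.0]  # people who engaged, at baseline
engaged_after  = [4.0, 3.5, 4.5, 4.0]  # the same people, a year later
control_before = [3.0, 3.5, 2.5, 3.0]  # a comparison group, at baseline
control_after  = [3.5, 4.0, 3.0, 3.5]  # the comparison group, a year later

engaged_change = mean(engaged_after) - mean(engaged_before)  # +1.0
control_change = mean(control_after) - mean(control_before)  # +0.5
# Only the difference between the two changes can plausibly be
# ascribed to the project rather than to other influences.
net_effect = engaged_change - control_change
print(f"Net effect attributable to the project: {net_effect:+.2f}")  # +0.50
```

Without the control rows, the full shift in the engaged group would wrongly be credited to the project.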
To track school students’ performance, it is relatively easy to get national data on performance in different subjects, and some data on individual schools, from the Department for Education and Skills website. Other national data sets are commissioned by other Departments – for example, the Labour Force Survey, managed by the Department of Trade and Industry, provides detailed data on the number of people in different occupations and can provide trend data.
However, you need to be reaching an awful lot of people, and having a big impact on them, for this to show up in national statistics. Moreover, disentangling your impact from the many other things that affect people’s behaviour is a difficult and highly skilled task, unlikely to be appropriate for most, if not all, individual science and society projects.
What is the right level for you?
This is a judgement that only you can make. However, a broad recommendation is that all project managers should include ‘reaction’ in their strategies; some should also include ‘learning’, but very few ‘behaviour’ or ‘results’. Don’t be afraid to say that it is not possible or practical to evaluate at some levels, but don’t try to use this argument to avoid undertaking work that is easily achievable and that will, if done properly, provide you and your stakeholders with much useful information.

Resources
It is difficult to provide indicative figures of reasonable costs for different elements of an evaluation, and they will vary depending on whether you conduct the work yourself or commission someone external. For example, a ‘standard’ focus group might cost £2,500 or more if commissioned commercially.

If you’re going to run focus groups you’ll need to allow for:

- time to prepare;
- time to run the group;
- time to analyse the conversation;
- time to write up the results;
- the cost of a venue (including refreshments);
- payment of participants (if appropriate); and
- the finding/recruitment of participants.

For a quantitative piece of work you’ll need:

- time to write a questionnaire;
- time to find people to test it on;
- time to re-draft it;
- to distribute it (which might take some time and budget);
- to collect and possibly ‘chase’ responses;
- time to analyse the data; and
- time to write up the results.

The resources available may constrain what evaluation you can do. There is sometimes a tendency to set a budgetary limit on evaluation; 10% of the project budget is often quoted. This may not always be sensible, because in small projects it means you can do almost nothing. Also, if this is the first time you’ve done something, it might be worth investing more in a good evaluation so that you can improve your project; you can then run smaller evaluations in the future. If you’re providing your time free of charge, don’t forget to impute a value to this to get the total budget.

It’s better to think about what information you need, and how it can be collected and analysed, and then to consider whether the effort and cost of this work is commensurate with the project. If it is not, are you being too ambitious, or not ambitious enough, with your evaluation? See sections 3 and 4 for more details of what’s involved.

Reporting
Gathering and analysing data is all very well, but until you start to make it work, you might as well have not bothered. Reporting, or sharing the data, is where the benefits of good evaluation start to be realised. There are four main audiences for your evaluation:

- you and your team;
- your funder;
- other stakeholders; and
- your peers.

Choose the most suitable format for your report. It may be in writing, but it may be that a presentation enables the team to reflect on their experiences.

You and your team
From the start we have emphasised the importance of evaluation helping you to improve. So, the primary audience for much of the evaluation is your team. What have you learnt? How will you apply this in the future? Are there things that you’ve learnt that read across not just to other science and society activities but to your other work or your interactions with funders and users?

Your funder
Whoever has provided the resources for your project will probably want to know what they’ve got for their money, so the first aim of your evaluation report is to
show this. However, funders are increasingly interested in sharing good practice, so if there are important pieces of learning, share them.
Your funder may well have given you a standard format for your report. You should use it, even if it is constrictive. They probably use the standard format to allow them to sum data from across different projects, so they can report to their seniors and justify their budgets. If there are other things that you want to say, then send an additional note or report. Few funders are likely to complain about getting too much feedback. You never know, you might want to approach them again, in which case you want to have left the memory of how effectively you managed the project and how thoroughly you evaluated it and reported on it.
Other stakeholders
You may have had partners who contributed resources, people or expertise to the event. Make sure that you share your evaluation results with them. It will help them to assess the value of their contributions and may influence their decisions on getting involved in future activities, either with you or with others.
Your peers
Finally, there are your peers who are getting involved in communication work. You may know some directly, but also talk to your funder about any networks they have for sharing good practice. Your evaluation could provide the answers that someone else has been looking for. It might be that you’ve cracked a problem, or that you stumbled into a trap that you can help others to avoid. It takes some courage to admit the latter, but people will be very grateful and in the future they may repay you with valuable advice.
In-house or independent?
A frequently asked question is: do I need an independent evaluation? Maybe – there are pros and cons to both options. Some people believe that if evaluation is a tool by which to improve, then there are clear benefits to self-evaluation. The feedback is direct and you will have a real depth of understanding of your audience, although you may have to recognise that there are limitations to the data gathering and analysis skills to which you have access. You may need to bring in an outsider with specialist skills if you want to answer more difficult questions about impact.
On the other hand, some people believe that if you evaluate your own project you will cheat to make it look good, maybe even without realising it. Also, you should consider whether people might give you more favourable answers because they don’t want to hurt your feelings.
If it is crucial that the evaluation is seen to be unbiased then an independent evaluation may be the only option. Indeed, funders may well stipulate that an independent evaluation is required, perhaps because of the large sums of money involved or because they are trying to compare the merits of different approaches. This is where having clear objectives will really help. If you haven’t got any, external evaluators will bring their own value judgements to bear.
The need for confidentiality may also influence your choice of internal or external evaluation.
Confidentiality
Market and social research operates from the premise that information given by respondents in research projects is confidential. Questionnaires usually reassure people that they will not receive marketing information or sales calls as a result of taking part and that no one will know what they personally have said, other than the people processing the data.
What is the right level for you?
This is a judgement that only you can make. However, a broad recommendation is that all project managers should include ‘reaction’ in their strategies; some should also include ‘learning’, but very few ‘behaviour’ or ‘results’. Don’t be afraid to say that it is not possible or practical to evaluate at some levels, but don’t try to use this argument to avoid undertaking work that is easily achievable and that will, if done properly, provide you and your stakeholders with much useful information.
People may be happy to put their name to their views, and responding to an evaluation of a science and society project is unlikely to cause problems for respondents. Nevertheless, there may be cases where people feel that you will pass on their views to others and this may be detrimental to them in some way. Offering confidentiality doesn’t usually cause problems in reporting and can be beneficial, but it may mean you can’t do the evaluation in-house.
If you run into trouble
We’ve interviewed some funders while preparing this guide. In general funders want the projects they’re supporting to be successful. So if you’re not reaching your objectives, but you know why and can set out a plan that will help you get back on track, most funders will give you a sympathetic hearing. You might want to revise the objectives now that you know more about the impact you’re having. Of course, the earlier in the project you tell them what’s happening, the more sympathetic they’re likely to be. Remember, in most cases the programme manager is there to help you.
When setting objectives, the SMART criteria will drive you to be specific, but if this is your first experience you may not know what represents an attainable and realistic target. You will have to use your judgement and make sure that your formative and process evaluations give you the data to explain why you are over- or under-achieving, and in the case of the latter what you can do about it. As evaluation databanks grow, funders may be able to provide advice that will give benchmarks. In the future your data will help others with this conundrum.
Asking people their views in front of others may restrict what they are prepared to say, depending on the topic. Some people may be more forthcoming to someone unrelated to the project - another case for an independent evaluator.
You also need to be aware of the Data Protection Act; see: www.informationcommissioner.gov.uk
GATHERING DATA – TOOLS AND TECHNIQUES
Working through the process of developing SMART targets and objectives should have started you thinking about tools for collecting the information you will need. The basic tools and techniques are based in social and market research methods. Researchers in these fields use certain terms with a common understanding of what they mean and we’ve tried to give you that language here. It will help if you want to commission an evaluation or if your funder commissions an evaluation. You’ll all be speaking the same language.
The main methods used in evaluation are:
• Quantitative
• Qualitative
• Observational
• Record keeping
• Visitors’ books
• Media impact
Quantitative research
Quantitative research is best suited to answering questions about how many people did or thought something. You can also ask them how much, to what extent and other ‘measure’ type questions. There are two underlying principles to quantitative research:
• every respondent should be asked the same questions in the same way so that the answers can be added together; and
• the information you collect should be representative of all the people that took part in, or used, your project.
The questions can be asked by an interviewer face-to-face or over the telephone, or people can complete a questionnaire themselves either on paper or through the Internet. Whatever the method, the questions are highly structured to ensure consistency. You can use this structured questionnaire format to collect factual and attitudinal data and to explore the reasons behind people’s initial answers. Often people can do the same thing but for different reasons, and being able to compare those with different rationales can be important in making decisions on how to improve a project.
Quantitative sampling
Strictly speaking, when drawing a sample to be representative of your ‘users’, everyone you reach with your project should have an equal chance of being asked to respond, although there is a cost-quality trade-off and some sampling methods are better at this than others. Really this is about avoiding bias: getting responses only from certain types of people, or from people who liked your project. Just having a lot of respondents is not good enough if they are not representative of your audience.
The types of sampling techniques you’ll be most likely to use to collect quantitative data that is representative of your users are:
Census – collecting information from everyone who engaged with the project.
Systematic sampling – taking every ‘nth’ person who passes a particular spot, accesses a website, requests a pack or whatever.
Quota sampling – if you know that 50% of your audience will be girls (perhaps a school has told you) then you set a quota of 50% of your sample to be girls. Which girls you ask is then ‘random’; that is, there is no particular reason why those you ask should be different from those you don’t ask.
You can also use very simple methods to select a sample, like those born on a certain date in the month. For example, taking everyone born on the fifth, fifteenth and twenty-fifth of every month of the year has been found to consistently yield a 10% sample.
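As a rough sketch (in Python, with hypothetical attendee records – the field names are invented for illustration), the systematic and date-of-birth selection methods described above might look like:

```python
import random

def systematic_sample(visitors, n):
    """Take every nth visitor, starting from a random offset
    so everyone has an equal chance of being selected."""
    start = random.randrange(n)
    return visitors[start::n]

def birthday_sample(people):
    """Select everyone born on the 5th, 15th or 25th of any month
    (roughly a 10% sample, as described above)."""
    return [p for p in people if p["day_of_birth"] in (5, 15, 25)]

# Hypothetical list of event attendees
attendees = [{"name": f"visitor {i}", "day_of_birth": (i % 28) + 1}
             for i in range(200)]

every_tenth = systematic_sample(attendees, 10)
print(len(every_tenth))  # 20 of the 200 attendees
```

The random starting offset matters: always starting from the first person in the queue would quietly exclude everyone else at that position.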
These methods are all very well, but you will find that not everyone you ask to take part will do so. The main issue will be that those who enjoyed your project will be more likely to respond than those who didn’t. Using interviewers means you can get a more representative sample than relying on self-completion, as there is a bit of pressure on people to take part because they won’t (generally) want to be rude. But if you can’t be sure that those who didn’t respond are no different from those who did, make sure you include the limitations of your data in your report. And you need to ask at least 100 people before you can start stating percentages, even if you ask everyone who took part.
Quantitative data collection techniques
Having constructed your sample, or decided on a census approach, you need to think about which data collection technique to use. There are four basic quantitative data collection techniques, each of which has strengths and weaknesses. The four options are:
• Face-to-face interviews
• Telephone interviews
• Self-completion on paper
• Self-completion electronically (e-mail or Internet)
For small, live events, the most likely method will be self-completion on paper, where questionnaires are distributed and attendees are encouraged to complete and return them at the end of the event or post them back later. For some events, face-to-face interviews with a sample of attendees are another option. These are tools for gathering instant feedback, but you may want a considered response, in which case posting out paper questionnaires, using telephone or e-mail follow-ups or sending interviewers out to conduct face-to-face interviews are all possible options. The table below sets out basic strengths and weaknesses of the different options.
Type of survey: Face-to-face
Strengths: High quality data. You can be more confident that interviewees have understood the question. You can repeat questions that interviewees have not understood. You can use additional material such as show cards or pictures to help people respond. You know the right people have responded. You can use longer questionnaires, so investigate more issues (up to 45 minutes). Interviewers with different language skills can help you to access communities whose first language is not English.
Weaknesses: Resource intensive, so the most expensive option.

Type of survey: Telephone
Strengths: Moderately high quality data. You can repeat questions that interviewees have not understood. Interviewers with different language skills can help you to access communities whose first language is not English. You have a good chance of making sure the right people have responded.
Weaknesses: Quite resource intensive, so expensive, but cheaper than face-to-face. You cannot be as confident that interviewees have understood the question as with face-to-face. You cannot use additional material such as show cards or pictures to help people respond, although it might be possible to post, fax or email materials in advance. You need to use fairly short questionnaires (10-15 minutes is a general maximum). Many people (30%-40%) are ex-directory and younger people may not have a fixed phone.

Type of survey: Self-completion on paper
Strengths: Moderate quality data. Relatively cheap to undertake (but remember postal costs if using this approach). You can use longer questionnaires, but you will get a better response if they are short. Translation can help you to access communities whose first language is not English.
Weaknesses: You cannot be confident that interviewees have understood the question. People can look ahead at the questions, which might bias their answers to some questions. You cannot be sure that the right people have responded.

Type of survey: Internet/email
Strengths: Moderate quality data. You can use additional material such as pictures to help people respond. Relatively cheap to undertake.
Weaknesses: You cannot be confident that interviewees have understood the question. People can look ahead at the questions, which might bias their answers to some questions. You cannot be sure that the right people have responded. Only respondents with Internet access can take part.
You probably want to be thinking about doing exit surveys as people leave events, or follow-up surveys by telephone, post or email - depending on what information you have about people. If you’ve developed a website you can embed what are called ‘pop-up’ questionnaires to collect data from every nth visitor, or collect email addresses to send a questionnaire later. Our experience is that emails sent to specific individuals yield higher response rates than pop-up questionnaires.
Constructing a questionnaire
There are many factors to be considered in designing a questionnaire. The length, structure and layout (for self-completion questionnaires) will impact on the response rate. Remember that the questions you ask will influence people’s answers to later questions, so you need to think about the order in which you ask questions. Questions can be ‘pre-coded’, where the respondent selects an answer from a list, or ‘open-ended’, where respondents can write in their own comments. Some pre-coded questions also allow an ‘other, specify’ category where those who have not found a pre-code to tick can write in their views. Remember, if you use open-ended questions, someone will have to read all the responses.
Annex 1 looks at the factors to be considered and questioning techniques, such as attitude statements and scales, that will give you appropriate data. Although Annex 1 is written for self-completion paper questionnaires, the same basic principles apply for all data collection methods.
Qualitative research
Qualitative methods enable you to address the deeper questions, such as why people did or didn’t like a project, why they felt it was good or bad and what you could change about it.
Social and market researchers use the term ‘qualitative research’ to refer to individual, one-to-one in-depth interviews and group discussions/focus groups conducted by someone who has been involved in the whole process of the evaluation and who therefore has a deep understanding of the objectives of the project. This is likely to be either you or one of your team, and because qualitative methods allow you to interact directly with ‘users’ you can test out ideas that you form during the evaluation process.
In-depth interviews are useful for talking to those with busy diaries and in situations where replies may be sensitive, so people would be reluctant to say things in front of others. They are also ideal when you want to collect details that are likely to be individual, such as work histories.
Group discussions are ideal for situations where you want people to bounce ideas off one another and bring different perspectives and experiences to bear. They can be particularly useful in formative evaluation, as this type of research can give you quick feedback on how potential audiences view your emerging ideas. You should use small groups (6-8 people) to encourage discussion. Some people won’t talk in a big group. The groups can be constructed in a number of ways. You might put similar people together to encourage them to explore the issues in depth from similar perspectives. Alternatively, you might create mixed groups to give people exposure to different viewpoints. Men tend to dominate mixed-gender discussions, so you should think carefully about separating men from women in discussion groups.
Usual practice in qualitative research is for a facilitator/moderator to use a ‘topic guide’ to manage the discussions. This sets out the issues you think you’ll want to cover in the session and any information that you think you’ll need to give people, but unlike in a quantitative survey there is not a set pattern to follow. The facilitator should respond to the participants/interviewees, following up issues that they raise, whilst also ensuring that you cover all the issues you want to. This approach allows respondents to add in things you might not have thought to ask about, by allowing them to take the lead rather than being led by a structured questionnaire.
Attention also needs to be paid to how the discussion will be recorded, particularly if using an external facilitator. Since the facilitator needs to focus on directing the group, a tape-recording of the event is usually essential. At the same time, a second observer is sometimes useful to take notes during the discussion, as well as to provide support to the facilitator in bringing up issues that have not been covered (although be aware that this may alter the dynamic of the group). The facilitator should also keep a record of their thoughts immediately after the event. Direct quotes can be useful in reporting to funders as they often give a flavour of the project, participants and how things went.
Qualitative sampling
Qualitative research is about depth of understanding, so samples tend to be small. You’ll find that you don’t need to talk to very many people before you stop getting new information. People tend to be selected to give you a cross-section of your audience rather than a representative sample. Instead of using the structured approaches described above to select people, you pick people who meet your criteria for inclusion as and when you find them. The small size of your sample and the way it has been selected means that the results are not statistically representative of the group your participants ‘represent’. Not only might you have missed some marginal views because your sample is small, you have no idea what percentage of the total population will hold any of the views you identify, because of the way in which they have been selected.
To ensure that you include the right sorts of people, you should think about the main variables, or characteristics, that you think will be important, given your project. For example, are men and women likely to respond differently? Are younger people likely to be different from older people? Are those in urban areas likely to be different from those in rural areas? If the answer to these types of questions is yes, then you need to make sure you include men and women, younger and older people, and urban and rural dwellers in your sample. And although more people in Britain live in urban areas than live in rural areas, you might include the same number of each group in your research. However, you don’t need to cover every segment you identify explicitly - so you don’t need older men in rural areas, older women in rural areas, and so on. You could just have older men in rural areas and younger women in rural areas, with younger men in urban areas and older women in urban areas. Running focus groups can be very expensive, especially if you use an outside contractor, so you may need to prioritise your groups.
Anecdote
While any feedback is better than none, you need to be careful about how you interpret ad hoc comments. Qualitative research is not anecdotal. The sample and the discussion are structured by researchers against clear objectives for the project. The sample is constructed to ensure it is balanced, because you need to make sure that the feedback you are getting is balanced and that the one teacher who made a comment is not atypical. The issues you cover should meet your agenda and you should bear in mind that any single individual may have their own agenda.
However, given the budgets that are sometimes available for science and society activities, you should think of conversations in the margins of an event as providing useful feedback, and as providing ideas to be investigated more thoroughly.
Observational research
Observation involves the planned watching, recording and analysis of observed behaviour as it occurs in a ‘natural’ setting. In most cases you will be observing people interacting with your project. It is particularly useful for understanding how people use websites and CD-ROMs or flow through an exhibition, as well as for exploring how to get people to engage more actively with talks and discussions.
Observation throughout the evaluation process:

• enables evaluators to:
– understand individuals’ engagement with specific tasks and processes;
– understand individuals’ attitudes and relationships in context;
– define key issues that may be followed up in interviews and surveys;
• helps the evaluator to form relationships with the participants, which will help with any follow-up interviewing;
• eliminates the bias of self-reporting inherent in interviews and surveys.
You can observe and make notes on how individuals interact with your project, but you can also use observational methods to compare group dynamics across events. This is done by having a structure against which to record details in which you are interested, for example: the order in which web pages are accessed, the number and type of participants at an event, level of input to discussions, types of people who actively participate, main subjects of concern and so on.
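One way to keep such a structure consistent across events is a fixed record per session. The sketch below (Python, with made-up field names and categories) tallies structured observations so that events can be compared like-for-like:

```python
from dataclasses import dataclass, field

@dataclass
class SessionObservation:
    """One structured record per event, so sessions can be compared."""
    event: str
    attendees: int = 0
    # tally of contributions per participant type, e.g. {"teacher": 4}
    contributions: dict = field(default_factory=dict)
    # main subjects of concern raised during the session
    concerns: list = field(default_factory=list)

    def log_contribution(self, participant_type):
        self.contributions[participant_type] = \
            self.contributions.get(participant_type, 0) + 1

# Hypothetical session record
obs = SessionObservation(event="Science cafe, March", attendees=18)
obs.log_contribution("teacher")
obs.log_contribution("teacher")
obs.log_contribution("pupil")
obs.concerns.append("GM food safety")

print(obs.contributions)  # {'teacher': 2, 'pupil': 1}
```

Because every session is recorded against the same fields, the records can later be dropped into a spreadsheet for the cross-event comparison described above.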
Visitors’ book
A visitors’ book is a good way of capturing the thoughts of visitors and getting feedback. However, only those who are highly motivated will give comments. So the comments, while helpful in the development of your project and similar, future projects, will not be representative of your achieved audience or your initial target audience.
Record keeping
Record keeping involves reflection and might be thought of as self-observation. At its simplest you could keep a “field diary”, which records your thoughts and feelings throughout the process as well as your reflections on the process itself. This forms a record of what happened and when, and is a useful resource when looking at how you could do things better in the future. You could also ask users of your product to keep records of their interactions and their thoughts over a period of time.
Media impact
If you want to raise awareness you might set an objective about press or media coverage. Measuring the impact of this can be very difficult. Some people measure column inches and use the sales or readership figures of the publication to estimate the numbers reached. However, not everyone reads every page of a newspaper or magazine, and the impact on readers is generally unknown.
When it comes to television and radio programmes, viewing and listening figures may be available. Nevertheless, even in television, where there is a considerable amount of programme research, data on impact, rather than enjoyment, is unlikely to be available.
DATA HANDLING – TOOLS AND TECHNIQUES
The previous section has looked at the various methods for collecting data. Here we look at how you can analyse it. For the purposes of this guide, we will assume that data collection and analysis is being handled in-house rather than through specialist sub-contractors. If you are using sub-contractors but they are not providing you with a final report, then you will need to specify the analysis that you want and the format in which you want the data. Remember that you may need to turn it into a format required by your funder.
Quantitative data
Coding
The first thing to do with your questionnaires is to code any questions where respondents have entered their own answers rather than ticking a box. Work through one question at a time; that is, look at all the responses to Question 1 together, all those to Question 2 together and so on. You should be looking for similar responses so that you can draw up a ‘code frame’ for the question. This allows you to add together similar responses from different people. Once you have your code frame you will give each code a number. Then you need to read each questionnaire and put the appropriate code or codes (people may have said more than one thing) by the side of the question. It is this number that you will enter in your spreadsheet, not the verbatim comments.
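In practice you would read and code the verbatim comments by hand, but the data structure is easy to sketch. The example below (Python, with an invented code frame and keyword list) shows how one open-ended answer can carry more than one code:

```python
# A hypothetical code frame for "What did you like most about the event?"
CODE_FRAME = {
    1: "hands-on activities",
    2: "meeting a scientist",
    3: "venue/refreshments",
}

# Keywords used to assign each code - invented for this sketch
KEYWORDS = {
    1: ["hands-on", "experiment", "activity"],
    2: ["scientist", "speaker", "researcher"],
    3: ["venue", "food", "coffee"],
}

def code_response(text):
    """Return every code whose keywords appear in the verbatim comment
    (a respondent may have said more than one thing)."""
    text = text.lower()
    return [code for code, words in KEYWORDS.items()
            if any(w in text for w in words)]

print(code_response("Loved the experiments and chatting to the scientist"))
# [1, 2]
```

It is these code numbers, not the verbatim comments, that would be entered in the spreadsheet.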
Data entry
If you are using paper questionnaires you will have to input your data. Data entry is a time-consuming and relatively specialist task. You have to make very sure you don’t make mistakes. For small amounts of data you might do it in-house; for larger amounts consider using a specialist data entry firm - the speed and quality of entry is likely to yield dividends and the cost is only going to be a few hundred pounds for the size of job you’re likely to have.
If you only have a small number of respondents, perhaps fewer than 50, you could do your analysis by hand by just counting through the questionnaires. However, if you want to do any analysis beyond total counts of how many people gave each answer, or you have more respondents, then the simplest way to analyse small datasets is to use spreadsheets. It really is worth the time to enter the data.
Analysis
Figure 1 overleaf shows an example spreadsheet with the raw data entered. Each column represents a single respondent and each row a possible answer to a question. So, for example, for Q1 the possible answers are Wed and Sat. ‘1’ indicates the answer is ‘yes’ and ‘0’ that it is ‘no’. So the person whose answers are entered in column ‘C’ said ‘yes’ to ‘Wed’ and ‘no’ to ‘Sat’.
Summing across the rows gives you the total number of responses for that answer across the sample. However, you’re quite likely to want to analyse the data against key variables, such as age or gender. Copying the raw data into another sheet (keep a master copy of the raw data just in case) will allow you to sort responses, for example by gender. As Figure 2 opposite shows, you now know what your female participants thought for every question.
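The same row-summing and filtering can be sketched in code. Below (Python, with a made-up three-respondent dataset mirroring the Figure 1 layout: 1 for ‘yes’, 0 for ‘no’) the totals are computed first across everyone, then just for female respondents:

```python
# Each respondent is a record of answers (1 = yes, 0 = no), as in Figure 1
respondents = [
    {"gender": "F", "Q1_Wed": 1, "Q1_Sat": 0},
    {"gender": "M", "Q1_Wed": 0, "Q1_Sat": 1},
    {"gender": "F", "Q1_Wed": 1, "Q1_Sat": 1},
]

# Summing "across the row": total saying yes to each answer
total_wed = sum(r["Q1_Wed"] for r in respondents)

# Filtering by a key variable, e.g. gender, before summing again
female = [r for r in respondents if r["gender"] == "F"]
female_wed = sum(r["Q1_Wed"] for r in female)

print(total_wed, female_wed)  # 2 2
```

A spreadsheet’s SUM and sort functions do exactly this; the code just makes the two steps (sum, then filter and re-sum) explicit.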
For large datasets and complex surveys it is better to use bespoke packages for data analysis, but for the purposes of this guide we’ll assume that if you are gathering this much data you already have expertise in that area or are buying it in from a survey company.
Qualitative data
Qualitative data is gathered by recording the discussions. Recording may be literal, through audio or video (in either case permission should be sought from the respondents before recording starts), or via note-taking to record key points. Bear in mind that if you choose to take notes rather than tape record, you will lose some of the richness of the data and you will never be able to recapture it. Most social and market researchers will tape record focus groups and interviews so they can concentrate on responding to, and observing, the interviewee.
Figure 1. Raw Data.

You can use flip charts in a group situation and this allows respondents to confirm that you have accurately recorded what they meant. This approach also means that some analysis is being undertaken in situ, as key points are identified and recorded by the group, supported to a greater or lesser extent by the facilitator.
Analysis of recorded conversations can be undertaken by making transcripts or by listening back to the tapes, making notes and recording quotes.
Figure 2. Female responses.

What to look for:

• main and sub-themes and issues (across different groups/individuals);
• ideas from participants that will support the development of your project;
• tracking individual views through the discussion, exploring how and why views change (if they do) and any preconceived or hyperbolic views;
• the context, and thus the interpretation, of comments;
• illustrative quotes for use in the final report; and
• the language used – this will help with the design of quantitative questionnaires.
It is unlikely that you will be using qualitative data to prove or disprove a hypothesis; rather, you will be looking at the data to see what issues emerge from it. So the approach is not ‘Was the event boring because the speaker was no good?’; rather it is ‘How enjoyable was the event?’, ‘Why was it enjoyable/not enjoyable?’
In essence, interview data can be treated in two ways. Some people take comments at face value and categorise the text into themes. It is important to remember, though, that qualitative research is about more than just what people say. People do not always express themselves clearly, may contradict themselves, and their body language will add to your understanding of what they mean. Your understanding of what they mean is important, but you need to recognise that it is your understanding.
One of the simplest ways to analyse qualitative data that allows you to incorporate the context of the discussion is ‘charting’. Listening back to the tapes or working through transcripts, you identify the main issues or themes raised in the discussions. You plot who made each (relevant) comment, which lets you identify the type of person who raised each issue and therefore for whom this was an important point. However, as you work through each discussion charting it, you take into account context and intended meaning as well as the pure text.
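A minimal charting structure (sketched in Python, with invented themes, participant types and comments) might record who raised each theme, with a note on context, so you can see for whom an issue mattered:

```python
from collections import defaultdict

# chart[theme][participant_type] -> list of (comment, context) pairs
chart = defaultdict(lambda: defaultdict(list))

def chart_comment(theme, participant_type, comment, context=""):
    """File a (relevant) comment under its theme and who said it,
    keeping a note of context and intended meaning."""
    chart[theme][participant_type].append((comment, context))

chart_comment("venue", "teacher", "Hard to bring a class here", "said wryly")
chart_comment("venue", "parent", "Parking was easy")
chart_comment("content", "teacher", "Fitted the curriculum well")

# Which types of people raised the 'venue' theme?
print(sorted(chart["venue"]))  # ['parent', 'teacher']
```

The same grid can equally be kept on paper or in a spreadsheet, with themes as rows and participant types as columns; the point is that each comment is filed with its speaker and context, not just its text.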
Observational data
Observational data provides a useful context for understanding the results that emerge from surveys and focus groups. There is an old adage in social research which states that ‘what people do is not necessarily what they say they do’, and observational data, taken with other information, can highlight these inconsistencies. For example, a facilitator’s observations of group members’ behaviour will reveal whether a particular individual who had not been contributing to the discussion did so because they felt alienated by the process or simply because they were shy.
As mentioned on page 20, observational data can take a number of forms. The hand-written notes or field diaries taken by the evaluator throughout the process form a record that can be subjected to analysis in the same way as qualitative data from interviews and focus groups. At the same time, systematised observations can range from quite simple tallies of attendance at an event and basic demographic information, to fairly complex categorisations and codings of behaviour. This data can be fairly straightforwardly analysed in a standard spreadsheet package.
REPORTING
Your report should be structured around the questions/objectives your evaluation set out to address. These should stem from the objectives you set for your project. Monitoring information is also likely to have a role in your final report. For clarity, you should have:

Section: Executive summary
Content: Some people, especially senior people in funding organisations or those on assessment panels, will only read this section. It should pull out the key points, but the structure should mirror the structure of the full report so that anyone who wants more information on a certain section can easily find it. This means you should write this section last.

Section: Introduction
Content: Sets out:
• the background to your project
• why you wanted to do the project
• what you hoped to achieve and why
• the aims and objectives of the project
• the aims and objectives of the evaluation
• the structure of the remainder of the report.

Section: The project
Content: A brief description of the project.

Section: Objective 1
Content: The objective and data relating to whether it was met, with some discussion as to why the actual outcome occurred.

Section: Objective 2, 3 etc.
Content: As above.

Section: Unexpected outcomes
Content: Describe any unexpected outcomes and whether they are good or bad.

Section: Conclusions
Content: A summing up of the key achievements of your project, its strengths and weaknesses.

Section: Lessons learnt
Content: What you would do differently next time and why. Key learning points for others. Include discussion of unexpected outcomes and how to ensure they either occur again or not, as appropriate.

Section: Annexes
Content: Include:
• full details of your methodology
• how you selected your sample
• copies of questionnaires and topic guides
• some information about how you analysed your data.
For the sections on the objectives, try to turn the objective into a question or several questions. Then assess all the information you have from both monitoring and the evaluation, and allocate each piece of information to a section of the report. If you have information that doesn’t add anything, ignore it. Don’t feel you have to report every piece of detail. It can be useful to have short ‘conclusions’ at the end of each main section that sum up the main points from the section.
The use of charts to illustrate numerical data, which can easily be derived if you have your data in a spreadsheet such as Excel, will help you and the reader identify highs and lows and trends. Think about how you would present numerical data from your research. Use bullet points under tables, charts and graphs to highlight the main points. Quotes from interviews and focus groups will serve to bring to life the spirit of the project. Again, think about how you might use images to illustrate your research findings.
Some people like to have this sort of detail in annexes to keep the main report short. You’ll find that different people have different preferences about how data is presented, so it’s sensible to check with key readers, such as your funder, to see what they prefer.
The conclusions section should pull together all the data and get to the nub of ‘what does it all mean?’ So what? What do the preceding sections tell you in a nutshell? Some people will only read this section, especially if there isn’t an executive summary.
You should use Plain English. You want your report to be accessible to as many people as possible. The main audiences will be other people like you and your team, and your funder. Remember, many funders will not be evaluation experts either.
Don’t be tempted to use jargon. There are occasions, however, when you’ll want to use very precise terms. If you do, make sure you define them somewhere so all your readers know what you mean.
Don’t be afraid to make value judgements or give your views, especially when you’re considering how you might do things differently next time. Remember, you have learnt a lot from doing your project and can speak with some authority, especially if you’ve backed it up with good evaluation data.
6 Annexes

1 Evaluation questions and questionnaires
Length
l Keep it focused, simple to complete, and as short as possible – one to two pages. This will maximise the response rate.
l It is tempting to throw in lots of questions – butthe longer the questionnaire, the less likely peopleare to fill it in, and the more likely that you willhave missing answers. It will also take you longerto analyse and process the information.
Make the respondent’s experience positive
l You should make sure that the respondent finds the experience straightforward and useful; they may even gain something from the process.
l If you are using pre-coded questions, you need to be confident that the categories chosen reflect the spectrum of actual experience. If you ask people what their favourite subjects were at school and offer ‘maths’, ‘science’, ‘design and technology’ and ‘other’ as options, but your project was based in an art gallery, you’re not likely to get a very accurate picture of people’s real favourites. You can always have an ‘other, write in’ category to capture anything you’ve missed, but remember you’ll have to read it before you can analyse it.
l Make sure that your language is appropriate to your audience. This is especially important with younger audiences, but also bear in mind that general literacy levels are not the same as those enjoyed by graduates, and that for some people English may not be their first language.
l Use colour, pictures, cartoon “smiley faces” orother lighter approaches to match the mood ofyour event.
l Make sure your respondent has the chance to saywhat is on their mind eg by using a general open-ended question at the end – which we suggestyou always include.
l If possible, pilot the questionnaire on a few people before circulating it widely; piloting will help you identify any difficulties with wording or concepts. You’ll be using formative evaluation to improve your summative evaluation. Even piloting it on your colleagues, friends or family will provide some useful feedback on how to clarify the questions and the way you ask them.
Structure
l Design the questionnaire as a funnel, moving from simple, unthreatening and non-sensitive questions to those that require more thought from respondents and maybe more personal information.
l Most questionnaires will benefit from a mix ofclosed (pre-coded) and open questions, wherepeople enter their response in their own words.
l Avoid long batteries of scales, as respondents willdrift into automatic pilot - break up questionsvisually, if the questionnaire is long.
l Sensitive and demographic data (eg facts about age, sex, education, ethnicity) are usually best placed at the end.
l Do not ask for information that you do not plan to use: it wastes everyone’s time.
Analysis
l Plan the time and resources needed for coding, data entry, analysis and reporting. This will help you to decide whether you can handle the evaluation yourself, or whether it might be best to have it managed by a third party.
l A simple spreadsheet will allow you to do quite alot of analysis of the data.
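To illustrate how little machinery that analysis needs, here is a minimal Python sketch – the question and responses are invented – that turns a column of pre-coded answers into counts and percentages, much as a spreadsheet pivot table would:

```python
from collections import Counter

# Hypothetical pre-coded answers to 'What was your favourite subject at school?'
responses = ["science", "maths", "science", "other", "science", "art"]

# Count how often each option was chosen.
counts = Counter(responses)
total = len(responses)

# Convert counts to percentages of all respondents.
percentages = {option: 100 * n / total for option, n in counts.items()}

print(counts["science"])       # 3
print(percentages["science"])  # 50.0
```

In practice you would read the responses from your data file rather than typing them in, but the tallying step is the same.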
Maximising the response rate
l Distribute questionnaires at the start of the event, and ask people to complete them before they leave.
l Make it short, simple and relevant.
l Consider providing an incentive to complete the questionnaire – something like a free gift may be appropriate.
l Use pre-paid envelopes to increase responsewhere you have had to distribute questionnairesby mail or where you think people will want topost back the questionnaire.
l Follow-up by telephone can be relatively quickand improve the response rate significantly. It canbe done by an administrator, if they are wellbriefed and have a structured proforma forquestioning and recording responses.
Designing questionnaires for self-completion quantitative surveys

Issues in questionnaire design
Most people like to put themselves and their behaviour in a good light. If you ask a question which embarrasses the respondent or makes him/her feel stupid, they may massage the truth.

So, for example, asking ‘How often do you eat chips?’ rather than ‘Do you eat chips?’ gives people permission to say that they do, and those that don’t just say never!
Reluctance to give feedback
This can be the curse of feedback questionnaires – people either don’t want to hurt your feelings, so tone down their comments; or forget themselves and launch scathing attacks that don’t really help you to improve. The key is to ensure that people understand that their feedback is important and can help you. Emphasise that critical feedback will help you get better and that it won’t hurt your feelings (even though it might), and that positive feedback also helps, because it shows you what works well.
The problem with leaving questionnaires for the audience to complete on leaving an event is that those who had a great time are most likely to fill them in. Those who hated it are more likely to fill them in than those who had an OK time. So unless you have some way of randomly selecting people and ensuring that a high proportion of those you target complete the questionnaire, you must be aware that the results will not necessarily be representative of your whole audience, and you will not be able to tell in what ways or to what extent the results are biased.
This is not to say that the data is useless, but it needs careful interpretation. One thing to take into account is the proportion of your sample that responded, assuming your sample was properly selected. The higher the response rate, the more representative the results will be. If everyone who engaged with your project responded, the data is robust, even if there were only six people. Just don’t try quoting percentages from this many.
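The response-rate arithmetic is simple, and it can help to formalise the rule about small bases. The sketch below uses invented numbers, and the 30-respondent threshold is an assumption (a common rule of thumb), not a figure from this guide:

```python
def response_rate(returned: int, distributed: int) -> float:
    """Percentage of distributed questionnaires that were returned."""
    if distributed <= 0:
        raise ValueError("distributed must be positive")
    return 100 * returned / distributed


def summarise(agreeing: int, base: int, min_base: int = 30) -> str:
    """Quote a percentage only when the base is large enough to support one;
    otherwise report raw counts. min_base=30 is an assumed rule of thumb."""
    if base >= min_base:
        return f"{100 * agreeing / base:.0f}% ({agreeing} of {base})"
    return f"{agreeing} of {base} respondents"


print(response_rate(45, 120))  # 37.5
print(summarise(4, 6))         # 4 of 6 respondents
print(summarise(30, 60))       # 50% (30 of 60)
```

Reporting ‘4 of 6 respondents’ rather than ‘67%’ keeps small-sample findings honest while still making use of the data.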
Confidentiality/data protection
You must take all reasonable steps to make sure that the respondent is not adversely affected by taking part in evaluation. You must keep their responses confidential, unless you have their permission, and you must not do anything with their responses that you have not informed them about. So unless you made it clear when you gave them the questionnaire, or on the questionnaire itself, you cannot use the results to build a database for marketing, for example. You need to take particular care with children and teenagers.
There are two useful sources of information: the DataProtection website, and the Market Research Society,which has various codes of conduct relating to dataprotection and confidentiality issues.
For the Data Protection Commissioner: www.informationcommissioner.gov.uk
For market research guidelines: www.mrs.org.uk
Avoiding bias
There are many different sources of potential bias in research. These include:
l Interviewer/questionnaire bias (leading questions)
l Methodological bias (for example, Internet surveys exclude people who do not have internet access)
l Sampling bias (for example, asking for feedback only from those who asked questions at an event may be a very poor indication of how the audience as a whole felt)
l Response bias (those who complete questionnaires may be very different from those who don’t). This is a major issue for self-completion forms.
l In qualitative research, the way that you frame discussions and who is present in the group, as well as the way you look and speak, can have a significant effect on responses. It’s impossible to entirely neutralise these things, but you can at least be aware of the effect you may have.
Using scales
The 1-5 Likert scale is the most commonly used form of rating. It is simple to understand and relatively discriminating. The scale is commonly anchored descriptively, eg 5 = agree strongly, 4 = agree, 3 = neither agree nor disagree, 2 = disagree, 1 = disagree strongly. You can also add a ‘don’t know’ category, if it seems a likely answer. Other scales that are used include scoring on a line of one to ten or a percentage score.
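As a sketch of what analysing a Likert item amounts to, the snippet below tallies some invented ratings coded 1-5 as above, keeps ‘don’t know’ answers out of the base, and reports the share who agree (a rating of 4 or 5):

```python
from collections import Counter

# Anchors for the 1-5 scale, as described in the text.
ANCHORS = {5: "agree strongly", 4: "agree", 3: "neither agree nor disagree",
           2: "disagree", 1: "disagree strongly"}

# Hypothetical responses: numbers are scale points, None is 'don't know'.
ratings = [5, 4, 4, 3, 5, 2, 4, None]

# Exclude 'don't know' answers from the base.
answered = [r for r in ratings if r is not None]

# How many respondents picked each scale point.
distribution = Counter(answered)

# Percentage who agree (4) or agree strongly (5).
agree_share = 100 * sum(1 for r in answered if r >= 4) / len(answered)

for point in sorted(ANCHORS, reverse=True):
    print(ANCHORS[point], distribution[point])
print(round(agree_share, 1))  # 71.4
```

Whether you then report the full distribution or just the ‘agree’ share depends on what your objectives need; the full distribution is usually safer.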
Another way of differentiating between people’s views is to present them with statements that the respondent chooses between. These are often ordered on an implicit scale, but you are asking the respondent to tick the one that best fits their view when in fact they may not agree with any of those presented. An example of this type of question is:
Which of the following statements best reflectsyour feelings about science today?
a) It’s continually making our lives safer and better
b) It’s changed many things for good, but I wonder how much more there is that can be achieved
c) It’s producing lots of new things but I’m not sure we need them all
d) It’s out of control and damaging our lives and environment
When it comes to the analysis all you can really do ispresent the percentage of respondents who agreewith each statement.
Rank ordering is best avoided – many respondents won’t do it properly, unless you stick to asking for first, second and third choices. Otherwise people get confused and get pushed into declaring preferences they don’t really have, or they may just give up. You can get round this by asking how important an issue is, using a Likert scale; however, it can be difficult for this kind of question to be discriminating.
With attitudinal questions, if you have a set of them,some should be positive about your project and anequal number should be negative. In general peopleare more likely to agree with statements than todisagree and you need to be aware of this in youranalysis. Be careful not to have statements such as‘Science is not good’. It’s very difficult for people toknow whether they should agree or disagree. If you’reusing negative statements keep it short – for example,‘Science is bad’.
Asking people to complete their questionnaires at the end of engaging with your project will increase your response rate and may improve the quality of the information you gather. But it will mean that you can only ask relatively few questions, because people will only be prepared to give a limited amount of time. You also need to think about the time people have for reflection. You may get a more considered response when people have had a chance to think about the project later.
Who was there?
There are a tremendous number of variables that can be explored. You need to focus on what matters in relation to your objectives. Was it people from a geographic location, of a particular age or with a certain mindset who were important to you? Did you want a mix of gender and/or ethnicity, or were you specifically targeting one group? To get basic socio-demographic information use simple tick boxes, some examples of which are shown below.
Are you
Male
Female

Are you
Employed – full-time
Employed – part-time
Not currently employed but looking for work
Retired
Other
Which of the groups listed below do you consider yourself to belong to?
White
Asian British
Indian
Pakistani
Bangladeshi
Black British
Caribbean
African
Chinese
Mixed race
Any other ethnic group
And you can add any other groups you areparticularly interested in targeting.
What was your age last birthday?
Less than 16
16 - 30
31 - 45
46 - 50
51 - 65
Over 65
What were they like?
For attitudinal information it is probably more appropriate to use Likert scales; some examples are shown below:
For the following statements do you agree strongly, agree, neither agree nor disagree, disagree or disagree strongly?
Statement (tick one: Agree strongly / Agree / Neither / Disagree / Disagree strongly / Don’t know)
The speed of development in science and technology means that it cannot be properly controlled by Government
Science is getting out of control and there is nothing we can do to stop it
It is important that young people have a grasp of science and technology
The benefits of science are greater than the harmful effects
Science and technology are making our lives healthier, easier and more comfortable

There are many more examples of attitude statements that have been used in the past in Science and the Public: A Review of Science Communication and Public Attitudes to Science in Britain, The Office of Science and Technology and The Wellcome Trust, October 2000, ISBN 1 841290 25 4. That report also gives the figures for the responses of a nationally representative sample of adults to these statements. This means that you can develop a picture of how typical your audience is of the wider public.
Did the event work?
At one level you might want to know if people simply enjoyed engaging with the project. Give them the chance to tell you, but you can get more value by following up the question as shown below.
Strongly agree Agree Disagree Strongly disagree
I enjoyed the event
Please write in which part of the event you enjoyed the most
Please write in which part of the event you enjoyed the least
What do you think I should change about the event?
This ‘write in’ approach works best for small numbersof respondents, because of the amount of data that itgenerates that you’ll need to analyse. For largernumbers you can think about likely answers anddevelop these as ‘pre-coded’ responses that peoplecan choose from, but always leave open the “other”option so that you can capture answers that youhadn’t thought of.
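A small helper can do a first pass at sorting write-in answers into pre-coded categories, leaving ‘other’ for anything unexpected. The categories and keywords here are invented for illustration, and the ‘other’ pile would still need reading by eye:

```python
# Hypothetical keyword lists for pre-coded categories of
# 'what did you enjoy most?' write-in answers.
CATEGORIES = {
    "the talk": ["talk", "lecture", "speaker"],
    "hands-on activities": ["hands-on", "experiment", "demo"],
}


def code_answer(text: str) -> str:
    """Assign a write-in answer to the first category whose keywords match,
    falling back to 'other' so unexpected answers are never lost."""
    lowered = text.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "other"


print(code_answer("The speaker was brilliant"))  # the talk
print(code_answer("Loved the free biscuits"))    # other
```

Keyword matching like this is crude – it will miss synonyms and misspellings – so treat it as a time-saver for the first sort, not a substitute for reading the answers.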
You might be looking for more sophisticatedfeedback. If the primary function of the project wasto give participants the chance to contribute theirviews and comments, it is important to see whetherthis has been achieved and what factors haveenabled, or hindered, effective participation.
Were you able to express your views freelyand openly?
Please put X in the appropriate box
Yes completely
Yes but sometimes I felt nervous
Not as much as I would have liked (if you tick this please say why in the box below)
Not at all(if you tick this please say why in the box below)
Don’t Know
Why was this?
If information provision was part of the process, was it accessible and useful? Similarly, if you were using ‘experts’, how was their contribution rated?
You might also think about what the expertsexperienced. Did they enjoy the process, what havethey learned, have their attitudes changed?
The next examples show some of the questions thatyou might consider asking. For these ‘did it work foryou?’ questions, the most valuable bit of feedback canbe the why or why not that underpins the yes or noanswer, so it is always worth leaving some space forthis.
Did you understand the explanation of thescience?
Please put X in the appropriate box
Yes, easily
Yes, but only after the discussion
Not very well(if you tick this please say what might have helped in the box below)
Not at all(if you tick this please say what might have helped in the box below)
Don’t Know
What might have helped you understand the science more easily?
Did you find the experts . . . . . . ?
Please put X in any box you agree with
Helpful
Confusing
Able to answer my questions
Self important
Did not want to listen to my opinions
Able to explain themselves clearly
Eager to listen
If you’d like to say anything else about the experts,please write it in the box below
Designing open-ended questions
l Use broad openers: who, what, where, when, and (especially) why
l Balanced open questions (what did you like, what did you dislike) help the respondent structure an answer without feeling pressured to give a particular reply
l In face-to-face interviews conducted by an interviewer, using open phrases such as ‘Tell me about...’ or ‘Tell me more...’ encourages people to talk
l Avoid questions that can be answered with asimple yes or no
l Avoid asking more than one question at the sametime
2 Summative evaluation schema
Here we set out a schema to help you think about the type of information you want, depending on the delivery method you are using, and how you might obtain information to see whether or not you’ve met your objectives.

To measure change you need to have a baseline from before the audience engaged with your project and another set of data taken after they took part in your project. You will need to ask the same questions before and after.

For each delivery method the schema suggests monitoring data (number of people; types of people; benchmark) and evaluation data (change in views/attitudes; change in behaviour; increase in interest; increase in knowledge; quality/fit for purpose; strengths and weaknesses; interaction with the project; dialogue; obtaining views on an issue):

Discussion/Meeting/Talk
Monitoring data: count people on entry; categorise people at registration, or by observation or questionnaire; ask people for baseline views on a paper questionnaire while they wait for the event to start or when they register to come.
Evaluation data: observe the event; use exit questionnaires and/or follow-up focus groups or questionnaires; listen to the conversations and record key points. Observation of dynamics will help you plan better events in the future.

Website
Monitoring data: count hits; use pop-up questionnaires on the site or registration procedures; use a registration questionnaire on the site to gather information.
Evaluation data: include questions in a questionnaire hosted on the site; record dwell time per page and page requests; record the order in which pages are accessed; an interactive email facility will allow dialogue.

Products eg poster/CD-ROM/video
Monitoring data: count the number distributed; use order/request forms and questionnaires. Distribution methods will affect the ability to collect initial data; using an ordering mechanism allows data to be gathered.
Evaluation data: follow-up questionnaires and focus groups; observation of users and questionnaires. Products are not a good medium for getting people’s views, but you can use them as a stimulus and then use group discussions and questionnaires.

Exhibition/Open day
Monitoring data: count people on entry; categorise people on entry by registration or questionnaire; ask for baseline views on a paper or e-mail questionnaire when people register to come or buy tickets.
Evaluation data: exit or follow-up questionnaires; short face-to-face interviews during the event; observation; in-depth interviews or focus groups and questionnaires; feedback from staff/colleagues; comment books and exit questionnaires. Build in opportunities for staff/colleagues to engage with visitors.

Show/Play
Monitoring data: count the audience; use ticket sales or booking mechanisms to gather information; ask for baseline views on a paper or e-mail questionnaire when people book or buy tickets.
Evaluation data: follow-up questionnaires; group discussions; observation; questionnaires. Shows are not usually designed for giving feedback, but you can use a debate after the performance.

Competition
Monitoring data: count entries; use entries to gather data on types of entrants; building an initial data-gathering exercise into the competition process will allow baseline data to be gathered.
Evaluation data: use the entry mechanism to gather feedback; interaction is implicit in taking part, so use entry numbers as a measure. You can build feedback into the entry process, but a competition is not a normal mechanism for getting people’s views.
3 glossary
Aim
The aim of your activity is what you ultimately want to achieve. The aim is supported by a number of objectives that will help you to realise the overall goal.

Audience
The audience is the people with whom you are trying to engage.

Baseline
A measure at the beginning of a project that enables determination of change, if any.

Census
A survey that collects data from everyone.

Charting
A method for analysing qualitative research data.

Data
Information collected through monitoring and research.

Evaluation
Evaluation helps you to see whether or not you have achieved your objectives and to identify ways to improve what you do.

Evaluation strategy
The plan through which you will determine whether or not you have achieved your objectives.

Exit survey
A survey of people undertaken as they leave an event, exhibition, etc. Usually conducted by an interviewer rather than a researcher.

Face-to-face interviews
Used in market and social research to mean structured quantitative surveys conducted face-to-face.

Focus group
A research method that involves a group of usually 6-8 people convened to discuss a particular topic.

Formative evaluation
Research that takes place during the development of a project to ensure it meets the audiences’ needs.

Funders
The funders are the people who provide the resources that allow you to undertake your project.

In-depth interview
An interview conducted by a researcher using a topic guide, which allows respondents to express themselves in their own way and raise issues the researcher has not considered.

Interviewer
A person who conducts interviews following a predetermined questionnaire designed by a researcher.

Milestones
Milestones are interim measures that allow you to monitor whether or not you are on track to meet your objectives.

Observation
Formalised observation of behaviour, either directly or from a recording.

Objectives
Objectives are the tangible things through which you will achieve your overall aim.

Outcomes
Outcomes are measures of the impact you have had on people.

Outputs
Outputs are the things that you produce as part of the activity, e.g. a website, a leaflet.

Pop-up questionnaire
A questionnaire that literally ‘pops up’ on entering a website to collect information about users and usage of the site.

Programme
There is no blueprint for a programme; it is likely to contain one or more of the following features:
l a funding mechanism to which other people or organisations can apply;
l a budget to commission specific pieces of work;
l resources to undertake in-house activities; and
l a reporting process, through which the programme manager bids for resources and accounts for their use.

Programme manager
Someone who has overall responsibility for delivering against a set of pre-determined objectives, and uses a variety of activities and actions to achieve these objectives.

Project
Unless a specific project or approach is being discussed, we use the term project as an all-encompassing phrase for talks, shows, teachers’ packs, hands-on events, websites and the many other ways that scientists are using to engage general audiences.

Project management
Project management in this context is simply the procedures through which you ensure that you deliver your project.

Project manager
The project manager is the person ultimately responsible for the activity.

Qualitative research
Techniques that allow people to express themselves in their own words and to raise their concerns, usually via in-depth interviews and focus groups conducted by researchers. Qualitative research helps you understand why people do or say what they do or say.

Quantitative research
Techniques that ask people the same questions in such a way as to enable the answers to be added together for a sample that is representative of the target group, thus providing numerical data on the percentage of people with particular views or behaviour.

Questionnaire
A structured set of questions calling for a precise response that allows answers from all those who complete it to be added together.

Quota sampling
Setting ‘quotas’ to ensure a sample has the same percentage of people with specific characteristics as the population of interest. Requires other data giving the information on the population.

Researcher
A person who is involved in designing and overseeing a research or evaluation project.

Sampling
A way of selecting people to take part in research that ensures they are chosen to be representative of the population of interest, although not always in a statistical sense.

SMART
All objectives should be SMART, which stands for:
l Specific;
l Measurable;
l Achievable;
l Relevant; and
l Time-bound.

Stakeholders
Those who have a legitimate interest in your activity, e.g. audiences and funders.

Sub-contractors
Sub-contractors are people or organisations employed by the project manager to deliver specifically defined products or services.

Summative evaluation
Evaluation at the end of a project that determines whether or not the objectives have been met.

Systematic sampling
Taking every ‘nth’ person who engages with a project. Produces a statistically representative sample.

Topic guide
A list of questions and issues a researcher wants to cover during an in-depth interview or focus group.

User
For the purposes of this guide, someone who engages with a project.
4 further reading
Finding information on evaluation
The term ‘evaluation’ is used widely in education, social policy, and training, and it is in these areas that one can find papers and books devoted to evaluation as a distinct tradition. Elsewhere, the tools and techniques used in evaluation are simply applications of research methodologies. There is little written that is specific to issues in science communication – it is often a case of borrowing and adapting methods that have been used successfully in other fields.
In the following sections we offer suggestions for further reading, grouped by topic. Where possible, we offer a brief review of the contents, with an indication of length and level of difficulty.
Evaluation methods
Breakwell, G. and Millward, L. (1995) Basic Evaluation Methods. BPS Books, Leicester. 145 pp.
A good general introduction to evaluation, which can be applied in a range of settings. Contains case studies and examples that are relevant to science communication, e.g. evaluation of a museum exhibition, and covers a wide range of research methods, including questionnaire construction and time series. Contains an interesting section on the politics of communicating evaluation results to audiences.
Rossi, P., Lipsey, M. and Freeman, H. (2004) Evaluation: A Systematic Approach. Sage, Thousand Oaks. 470 pp.
A US textbook on programme evaluation for education and social policy. Moderately heavyweight, but extremely thorough and readable.
General introductions to research methods
There are numerous books on data collection, project design and statistical analysis out there, and to some extent it is a case of simply going to a large city bookshop and having a look at what’s available and what kind of approach suits your own way of thinking. Here are some possibilities. Most of these, inevitably, are aimed at students.
Quick general reads
Allison, B., O’Sullivan, B., Owen, A., Rice, J., Rothwell, A. and Saunders, C. (1998) Research Skills for Students. Kogan Page. 124 pp.
A very practical overview of research methods in the social sciences, which is particularly good on the mechanics of sampling, surveys, and questionnaire design. Written at an introductory level that assumes no prior experience.
Cassell, C. and G. Symon, Eds. (2003). Qualitative methods inorganizational research: a practical guide. London, Sage.
While aimed at organizational researchers, this is a practicalguide to a wide range of qualitative research techniques.
Research and evaluation in education
Articles and textbooks written with teachers and curriculum developers in mind will be directly relevant to some Research Council audiences. Those outside education may find the thinking about evaluation relevant.
Bennet, J. (2003). Evaluation methods in research. London,Continuum.
This short guide to evaluation methods is aimed at a teacher/researcher audience, and contains an excellent chapter on planning and conducting an evaluation. The style is informal, aiming to provide a realistic, jargon-free introduction to questionnaires, interviews, and observational studies. Contains some very helpful examples of real questionnaires and other materials.
Gorard, S. (2001) Quantitative methods in educational research.The role of numbers made easy. Continuum Press, New York.200 pp
A comprehensive guide to quantitative research methodsand basic studies, written from an educational researchperspective. Includes a chapter on experimental designs.Useful across a range of experience levels, and written in astraightforward, accessible style.
Research and evaluation in training
Training (i.e. adult skills training or managerial training provided in the workplace) may not seem immediately relevant to engagement with science, but there are a number of parallels that make the literature on training evaluation quite relevant to scientists.
Easterby-Smith, M. (1994). Evaluating managementdevelopment, training and education. Aldershot, Gower.
Easterby-Smith is an academic and evaluation guru, who haswritten a number of guides to evaluation.
Research and evaluation in health and social care
Pope, C. and Mays, N. (2000) Qualitative Research in Health Care. BMJ Books. 107 pp.
A useful introduction to qualitative methods for scientists, explaining where and how qualitative methods can be used to understand behaviour. Includes chapters on interviews, focus groups, observation, Delphi methods, case studies and action research. There is also a discussion on judging quality.
Gomm, R. and Davies, C. (2000) Using evidence in health andsocial care. OUP/Sage. 175 pp
A critical examination, at an introductory level, of evidence-based practice within the sector. Good discussion of theissues involved in putting good-quality approaches intopractice in the field. Plenty of case studies and examples.
Bell, J. (1993) Doing Your Research Project. OUP, Buckingham. 176 pp.
This is (or was) a set text for introductory courses on research methods for undergraduate and postgraduate students in education and social science. A useful and practical overview of data collection methods (questionnaires, diaries and observation studies). The sections on literature review and defining research questions are likely to be less useful.
Longer general handbooks
Hussey, J. and Hussey, R. (1997). Business Research. London, MacMillan.
Chapter 6 (Collecting the original data) has a good overview of how to construct a quantitative questionnaire, along with basic aspects of sampling. Quite simple and straightforward advice. Although it assumes an academic purpose (like conducting an MBA project) it also assumes no prior knowledge.
Neuman, W. L., Ed. (2000). Social research methods: qualitative and quantitative approaches. Needham Heights.
Includes a chapter on ‘Science and Research’. Straightforward and sound introduction to social research methods, aimed at undergraduates.
Wilkinson, D and Birmingham, P. (2003) Using research instruments: a guide for researchers. Routledge Falmer, London.
This book has very good sections on designing questionnaires and discussion guides.
Graziano, A. and Raulin, M. (5th ed 2004) Research methods: a process of enquiry. Allyn and Bacon, Boston. 452 pp.
‘Explains the entire range of research methodologies in psychology’; also comes with a free CD-Rom. Very clear and well-written, and comprehensive in its coverage of social science methodology. Includes a chapter on doing programme evaluation using control groups and experimental designs. Probably more detailed than most practitioners would need, but written in a style that can take the reader from introductory level to advanced level. Also contains accessible chapters on philosophical and methodological issues.
Qualitative methods
Vaughn, S, Schumm, JS and Sinagub, J. (1996) Focus group interviews in education and psychology. Sage, Thousand Oaks. 170 pp
A thorough guide to setting up, running and analysing group discussions, which includes a chapter on special issues to consider when interviewing children and young people. Concentrates on how to set up and facilitate sessions, rather than the analysis and interpretation side.
Market research methods
Knowledge of market research methods is quite helpful for people creating projects that engage with the public, because many market research techniques involve researching attitudes amongst samples of the public. There are some simple introductory books available, mainly aimed at people in business who want to research new products. Unfortunately, there are very few good books that explain survey research: much of the knowledge remains with the agencies that set up market research projects.
Hague, P and Harris, P (1994) Sampling and statistics. Kogan Page, London. 128 pp
Written from a market research/opinion survey point of view, this short book explains the basics of sampling theory and sampling frames as applied to research with the public. It covers sampling error and confidence limits in some detail.
Other approaches
Action research
In action research, members of a project team conduct a project engaging in a cycle of reflection, learning and adaptation, which is intended to improve the project as they go. Within social policy research, action research usually refers to a process of practitioners conducting small experimental projects, with plenty of input from grass-roots users. The strength of action research is the way in which it encourages those new to research to create experiments and learn from them. The weaknesses, mostly perceived from outside the paradigm, are that action research projects may fail to learn from other people’s experience (because of the focus on learning as you go).
Costello, P. (2003) Action research. Continuum, London.
A simple and extremely practical guide to setting up and running an action research project. Assumes a teacher audience, but the generality of the approach makes it useful for any discipline.
McNiff, J. (2002) Action research: principles and practice. Routledge Falmer, London. 163 pp.
An introduction to action research, but more extensive and detailed than Costello’s book. Through the author’s examples of her own action research projects in practice, this book brings action research to life as an approach.
Case study research
Stake, R. (1995) The art of case study research. Sage, Thousand Oaks. 175 pp
A discursive treatment of case studies in educational research, which covers both the practicalities of case study research and the academic methodological issues that it raises.
Web resources
The main web resources are the websites of various evaluation societies (usually specialising in educational or social policy research). There are also some useful ‘hints and tips’ sites, mostly US-based, written by academics and general enthusiasts.
www.evaluation.org.uk
Website of the UK Evaluation Society, with a document on best practice.
www.eval.org
The American Evaluation Association has an excellent website with links to plenty of full-text documents on evaluation.
www.reviewing.co.uk/evaluation/index.htm
General articles on getting the best out of course evaluation, with tips and links.
http://gsociology.icaap.org/methods/
Set of links to US-based evaluation resources.
www.hm-treasury.gov.uk/spending_review/spend_sr04/associated_documents/spending_sr04_science.cfm
The UK Treasury ‘Green Book’ is a useful guide on how evaluation fits into a project cycle.
www.mrs.org.uk
The UK Market Research Society’s website is a good place to find information on selecting suppliers, and up-to-date guidance on ethics and confidentiality in interviewing the public.
www.dfes.gov.uk
The Department for Education and Skills has data on individual school examination results at various Key Stages, GCSE and ‘A’ level.
Research Councils UK
Polaris House, North Star Avenue
Swindon, Wiltshire SN2 1ET
United Kingdom
Tel: +44 (0)1793 444420
Fax: +44 (0)1793 444409
Website: www.rcuk.ac.uk
THE RESEARCH COUNCILS ARE:
• Biotechnology and Biological Sciences Research Council
  www.bbsrc.ac.uk
• Council for the Central Laboratory of the Research Councils
  www.cclrc.ac.uk
• Economic and Social Research Council
  www.esrc.ac.uk
• Engineering and Physical Sciences Research Council
  www.epsrc.ac.uk
• Medical Research Council
  www.mrc.ac.uk
• Natural Environment Research Council
  www.nerc.ac.uk
• Particle Physics and Astronomy Research Council
  www.pparc.ac.uk
• The Arts and Humanities Research Board
  www.ahrb.ac.uk