Principles of survey research: A few top tips.

Whether you work in academia, professional services, charities or most parts of the public sector, the chances are that you will have been asked to complete a survey at some point in recent months. At the same time, it is likely that you will have also considered using a survey to acquire information for your organisation (e.g. on employee wellbeing) or to evaluate certain aspects of your work (e.g. an educational intervention in schools). If you or your organisation are new to survey research, here are a few basic top tips to consider before you begin.

1. PLANNING

- Start by thinking about what you want to measure in the most holistic terms, and then about what you would need to know in order to pursue those different lines of inquiry. Take the time to explore the existing literature and evidence base before formulating ideas that you would like to explore inductively, or hypotheses that you would like to test and falsify. Working backwards from an envisioned endpoint (or at least a well-thought-out research journey) is much better than fielding a survey quickly and only realising later that you didn't ask the right questions to address ideas that came to you down the line.

2. SAMPLING

- If you're fielding a survey to a set target population (e.g. bankers, activists), think about how you will make that sample representative (or at least diverse). This is important if you want to be sure that your findings are robust and that the claims you make (or actions you take) on the basis of them are well founded. So, for example, let's imagine that you're sampling from the general public. You can either (a) set representative quotas in your sampling procedure (most paid survey platforms will do this for you upon request), or (b) settle for a diverse convenience sample (i.e. anyone who responds) and then weight your sample statistically at a later date by key characteristics (age, gender etc.). In either case, you need to think about which characteristics you care most about in terms of the target population (e.g. is it especially important that your sample is ethnically representative? And, if so, have you asked about ethnicity in sufficient depth, i.e. with enough response categories to allow for this?).

- It's always a good idea to collect key demographic and socio-economic data on your respondents, whatever the field of inquiry, as these variables tend to explain a lot about people's attitudes and behaviours across different domains of action. To ensure that you are able to compare your findings to those of other surveys, I recommend mirroring questions (and their response categories) from existing surveys (for example, check out the British Election Study, European Social Survey, or the World Values Survey).

- If you're doing a longitudinal study, then you also need to think about the survey you field in, say, three years' time. Above all, you need to decide how you will go about evidencing change or stasis in your survey findings. Again, there are two routes. In the first instance, you can run the survey confidentially but not anonymously. The advantage is that you can re-contact the same participants in three years' time and be absolutely sure that you are measuring within-participant change. The downside is that response rates tend to be lower if people know that their details are being retained. It also requires more ethical consideration before the project commences in terms of meeting current GDPR requirements. The second option is to run two cross-sectional surveys of the same target population (now and in three years). You might not get exactly the same participants and, if the survey is anonymous, you won't know either way. However, you could then 'match' your data on key characteristics. Put simply, it is possible to link participants now with participants in three years' time on variables like education, occupation etc. and then use them as a before/after observation of your chosen phenomena (on the assumption that they will think, behave, or experience things similarly to one another). The advantage is that you get around ethical dilemmas and tend to accrue higher response rates. The downside is that your results are less robust.
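The matching idea in the second route can be sketched as an exact match on shared characteristics. All the records below are made-up examples, and a real study would match on more variables and typically use a proper matching method (e.g. propensity scores) rather than exact keys.

```python
# Illustrative sketch: link respondents from two anonymous cross-sections
# on shared characteristics (education and occupation), then treat each
# matched pair as a before/after observation. Records are invented.
wave1 = [
    {"education": "degree", "occupation": "teacher", "trust": 6},
    {"education": "gcse", "occupation": "retail", "trust": 4},
]
wave2 = [
    {"education": "degree", "occupation": "teacher", "trust": 7},
    {"education": "gcse", "occupation": "retail", "trust": 3},
]

def key(record):
    # The characteristics we assume make respondents comparable.
    return (record["education"], record["occupation"])

lookup = {key(r): r for r in wave2}
pairs = [(r, lookup[key(r)]) for r in wave1 if key(r) in lookup]

for before, after in pairs:
    print(key(before), "change in trust:", after["trust"] - before["trust"])
```

The robustness caveat from the text applies directly: the pair is only a valid before/after observation to the extent that people who share those characteristics really do think and behave similarly.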

- If you're measuring the impact of a specific project on a target population, then you are essentially running a field experiment. Ideally you would run a survey before the sample has been 'treated' (i.e. experienced your intervention), immediately afterwards, and then further down the line. Whatever your design, you need to think about how to verify that any change in your chosen phenomena is attributable to the project. In other words, you need a control group. You should consider whether (a) there is a sample population operating in a similar context that hasn't been exposed to your project or (b) there is a subset of your chosen sample population that didn't experience your project or experienced it differently. You can then field your surveys to both and compare the results over time to add legitimacy to claims of impact.
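The logic of comparing treated and control groups over time is essentially a difference-in-differences calculation: the treated group's change minus the control group's change. A toy sketch with invented scores:

```python
# Toy difference-in-differences sketch: the estimated impact is the change
# in the treated group minus the change in the control group, which nets
# out change that would have happened anyway. All numbers are invented.
def mean(xs):
    return sum(xs) / len(xs)

treated_before, treated_after = [4, 5, 6], [6, 7, 8]  # exposed to the project
control_before, control_after = [4, 5, 6], [5, 5, 6]  # similar, unexposed group

effect = (mean(treated_after) - mean(treated_before)) - (
    mean(control_after) - mean(control_before)
)
print("estimated effect:", effect)
```

Here the treated group improves by 2 points but the control group also drifts up slightly, so the estimated impact of the project is smaller than the raw before/after change in the treated group.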

3. ITEM DESIGN

Once you've decided what you're seeking to explain, as well as what you think explains it, you need to think about how to measure each.

- If you're trying to explain behaviours and attitudes, you need to decide whether (a) there is an independent record of what you're investigating that is linked to your participants or (b) you need participants to self-report behaviours/attitudes in your survey. If the latter (which is more often the case), then you need to think about how to combat social desirability bias. For example, participants are likely to downplay behaviours such as choosing not to recycle but inflate virtuous behaviours such as helping others. You need to decide how to address this - either through assuring anonymity, using clever question wording (such as getting participants to rate portraits of other people's behaviours and comparing their ratings), or through post-hoc statistical corrections. Above all, make sure that your questions are neutrally worded. Avoid adjectival phrases or overly complex sentence structures.

- If you're measuring attitudes or opinions, then you need to decide between using single- or multi-item questions. E.g. are you asking someone 'how much do you trust x?' or are you asking them five related questions that produce an aggregate score for trust in x? Multi-item question batteries are more internally and externally reliable (and provide more nuance to your analysis), but they obviously increase the length of your questionnaire.
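The internal reliability of a multi-item battery is commonly summarised with Cronbach's alpha, which compares the variance of individual items to the variance of the aggregate score. A minimal sketch with invented responses to a hypothetical five-item trust battery (values above roughly 0.7 are conventionally taken as acceptable):

```python
# Sketch of Cronbach's alpha for a five-item battery. Responses are invented
# for illustration; rows are respondents, columns are the five related items
# (each on a 0-10 scale).
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

responses = [
    [7, 6, 7, 8, 7],
    [3, 4, 2, 3, 3],
    [5, 5, 6, 5, 4],
    [8, 9, 8, 7, 9],
]
k = len(responses[0])  # number of items in the battery
item_variances = [variance([row[i] for row in responses]) for i in range(k)]
total_scores = [sum(row) for row in responses]  # the aggregate trust score

alpha = (k / (k - 1)) * (1 - sum(item_variances) / variance(total_scores))
print("Cronbach's alpha:", round(alpha, 2))
```

In practice you would compute this on pilot data (see the note on piloting below in spirit, not as a cross-reference): items that drag alpha down are candidates for rewording or removal.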

- When measuring attitudes and behaviours, you also need to consider the response options that you give your participants. This can actually influence the way that they will respond. You might, for instance, request responses on a continuous numerical scale (e.g. 0-10); you might use an ordinal Likert scale that gives five or so ordered options (e.g. from strongly disagree to strongly agree); or you might ask participants to rank response options. There are advantages and disadvantages to each that are worth reading about and considering before choosing.

- If you're fielding your surveys in comparative settings, think about how you are translating your questions and whether the wording is understood the same in each language. If respondents are given an English survey but only speak English as a second language, you also need to consider whether the questions are easily interpretable to that subsample. 

- If you're creating a new measurement scale (what is known as a battery of items) to measure a particular phenomenon, then it is a good idea to pilot it. Then you can assess how well each of the items works as a measure of your chosen phenomenon and tweak the ones that perform badly before fielding it to a full sample of your target population. For most attitudes, behaviours and opinions in political science, there are already well-verified survey batteries available to copy from comparative surveys and studies. Have a look around before you decide to create your own.

4. QUESTIONNAIRE DESIGN

- Think about the order of your questions and how one question might prime answers to another one. For example, you wouldn't want to ask someone to self-report attitudes to climate change right before you ask them to tick a variety of related daily behaviours. Ideally, you want your participants to self-report the thing you're seeking to explain without being primed. 

- If you're asking a list of questions about the same or similar topics, or you're testing a new multi-item battery of questionnaire items to measure an attitude or opinion, then randomise the order of your questions/items between participants. This will guard against order effects and survey fatigue in your data. 
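Most survey platforms can randomise item order for you, but the idea is simple enough to sketch. This is a minimal illustration with placeholder item names: each participant gets their own reproducible shuffle of the battery, so order effects average out across the sample.

```python
# Minimal sketch of per-participant randomisation of item order within a
# battery, to guard against order effects. Item texts are placeholders.
import random

battery = ["item_1", "item_2", "item_3", "item_4", "item_5"]

def questionnaire_for(participant_seed):
    rng = random.Random(participant_seed)  # seeded, so each participant's
    order = battery[:]                     # order is reproducible; copy the
    rng.shuffle(order)                     # list so the master stays intact
    return order

print(questionnaire_for(42))
print(questionnaire_for(43))  # a different participant sees a different order
```

Seeding by participant matters if the survey can be resumed: the same respondent should see the same order on every visit, while orders still vary across respondents.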

- Present your questions thematically or in separate blocks wherever possible. Avoid crowded pages of questions that might cause cognitive overload.

5. FIELDING THE SURVEY

- You need to decide how you will administer the survey. Essentially there are three options: electronically, by mail/paper, and over the phone. There are now lots of platforms for fielding electronic surveys that are free (e.g. SurveyMonkey) or come with a fee (e.g. Qualtrics). Depending on the complexity of your survey, most of the free ones will do the job. If you're issuing paper surveys or conducting them over the phone, then you need to consider the associated costs in terms of money (e.g. for postage and pre-paid return envelopes) as well as time (e.g. how many researchers have you got per c.100 of your target population?). Whilst you cannot always be sure of response validity using electronic surveys (i.e. that the person you want to take the survey actually took it), people tend to answer more honestly when they're alone than when they're participating over the phone. At the same time, there's less pressure to participate in response to an email as compared to a human voice, so response rates can be lower. The best response rates are usually acquired through multiple waves of data collection that utilise different means (e.g. letters followed by emails, which are then followed up by phone calls). And always send reminders.
