Tips from J-PAL Africa’s Research Team
Nilmini Herath, Emmanuel Bakirdjian, Wim Louw
About J-PAL Africa
The Africa office of the Abdul Latif Jameel Poverty Action Lab (J-PAL Africa) is part of the Southern Africa Labour and Development Research Unit (SALDRU), based at the University of Cape Town. We support researchers running randomised evaluations in Africa, conduct trainings to build capacity in understanding and running randomised evaluations, and work with policymakers to help them leverage rigorous evidence to make policies more effective and scale up the most promising programmes.
The J-PAL Africa research division has conducted several randomised evaluations in South Africa using in-house data collection. Many of our studies tested interventions designed to help young unemployed job seekers in Johannesburg to find employment.
Through this work, we have surveyed over 10,000 young people between the ages of 18 and 34 who are based in urban areas. Most of our studies involve a baseline survey and follow-up data collection, often at multiple points during the study. We have trained many field teams and have used a range of survey methods, questionnaires, and data quality assurance techniques.
We have learned a tremendous amount about the practical steps that can be taken to maximise response rates and achieve excellent data quality with this study population. In recent studies, we have achieved midline response rates of over 96% (2-3 months after intervention) and endline response rates of 85%-90% (1 year or more after intervention).
Over the years, we have learned how to develop skilled field teams, adept at navigating the diverse, multilingual populations of South Africa’s urban centres. Moreover, our data quality checks have grown more sophisticated with each study, ensuring that issues are detected, recorded, and resolved.
This post outlines some initial reflections; a more comprehensive brief will be posted soon on the SALDRU website. We also link to resources that may be useful to researchers planning to conduct surveys. Our lessons learned go beyond studies on youth unemployment, and we hope that this resource will be helpful to researchers planning to conduct surveys with similar populations.
| Study | Year | Principal investigator | Sample | Location | Study link |
| --- | --- | --- | --- | --- | --- |
| The Impact of Providing Information about Job Seekers’ Skills on Employment in South Africa | 2016 | Eliana Carranza | 6,900 job seekers | Johannesburg, South Africa | https://www.povertyactionlab.org/evaluation/impact-providing-information-about-job-seekers-skills-employment-south-africa |
| The impact of a job search planning intervention on job search efficiency and employment among youth in South Africa | 2015 | Martin Abel | 1,097 job seekers | Krugersdorp, Alexandra, and Soweto | https://www.povertyactionlab.org/evaluation/job-search-planning-employment-south-africa |
| The role of reference letters and skill accreditation in the South African labour market | 2015 | Martin Abel | 441 job seekers (Experiment 1); 1,267 job seekers (Experiment 2); 498 job seekers (Experiment 3) | Johannesburg, South Africa | https://www.povertyactionlab.org/evaluation/role-reference-letters-and-skill-accreditation-south-african-labour-market |
| Transport subsidies and job matchmaking in South Africa | 2013 | Abhijit Banerjee | 1,200 job seekers | Johannesburg, South Africa | https://www.povertyactionlab.org/evaluation/transport-subsidies-and-job-matchmaking-south-africa |
| Other | various | various | Multiple pilot studies with over 1,000 job seekers combined | Johannesburg, South Africa | |
Reaching participants, staying in touch
Young job seekers in Johannesburg are mobile, change numbers often, move in and out of work, and can be difficult to reach soon after initial contact. In our experience, we have found the following approaches to be helpful for minimising attrition over time:
- Conduct a baseline survey and, where possible, make the first interaction in person — by visiting participants at their homes or by inviting them to a central venue. Establishing first contact with respondents in person builds a stronger research relationship, reinforces trust in the study, helps clarify potential misunderstandings about the intervention and expectations, and increases the likelihood of reaching respondents in future survey rounds.
- For follow-up surveys, we have seen good results using phone surveys conducted 2-3 months and approximately 1 year after the first interaction.
- We have also found short SMS surveys to be effective in the very short-term (2-3 days after the first interaction).
- By far the most important technique we have used for maximising response rates over rounds is collecting multiple contact details for each respondent:
- We collect participants’ current number(s), email address (if they have one), a family member’s phone number, and a friend’s phone number.
- We have not found email to be an effective channel for communication.
- Because these contact details are so important, we put measures in place to avoid data-entry errors, such as length constraints in our surveys (see the following section) or a double-entry approach.
- Where feasible, we have also found it useful to verify phone numbers during surveying. For example: phoning the participant’s number during the survey to check that it works, or sending a verification SMS.
- We recommend building overtime into the study so that participants can be surveyed outside of work hours, attempting to contact hard-to-reach participants in the evenings or on weekends. This not only ensures that we reach those who are harder to find during the day, but also guards against biasing our findings toward specific groups.
- A small incentive, in the form of an airtime transfer or voucher, improves completion rates and improves the participants’ experience.
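The double-entry and length checks on contact details described above can be sketched as follows. This is a minimal illustration, not our production code: the function name, the field handling, and the assumption of 10-digit local numbers starting with 0 are all illustrative.

```python
import re

# Assumed format: a 10-digit local number starting with 0, e.g. 0821234567.
PHONE_PATTERN = re.compile(r"^0\d{9}$")

def check_phone(first_entry: str, second_entry: str) -> list[str]:
    """Return a list of problems found in a double-entered phone number."""
    problems = []
    # Normalise common formatting differences before comparing.
    first = first_entry.strip().replace(" ", "")
    second = second_entry.strip().replace(" ", "")
    if first != second:
        problems.append("entries do not match")
    if not PHONE_PATTERN.match(first):
        problems.append("unexpected length or format")
    return problems
```

For example, `check_phone("082 123 4567", "0821234567")` returns an empty list, while a number with too few digits is flagged so the enumerator can re-confirm it with the respondent during the interview.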
Be clear with participants at the outset on when and how communication will take place over time, and be open about how long a survey is likely to take. Positive, unambiguous interactions are helpful for re-establishing contact.
Training field teams
With enumerator-led surveys, having a team of field staff who can deliver the questionnaire with care and consistency is crucial to ensuring good data quality.
- Our field team members receive training over multiple days. As a rule of thumb, for a 20-30 minute phone survey, we would train our enumerators in a classroom-like setting for 3-4 full days.
- The core components of our training include:
- Background on the study and the research question;
- General guidelines on surveying best practice, e.g. staying neutral, dealing with sensitive questions;
- A detailed run-through of the survey;
- Surveying practice and role-play, both on paper and with the intended (e.g. tablet-based) technology;
- Translating selected survey questions, and reviewing difficult terms;
- Fieldwork logistics and protocols;
- Several short comprehension tests of the material presented (paper and practical).
- Crucially, we discuss and work through the structure and content of the survey in great detail. This helps to anticipate a range of potential challenges, from interpretational ambiguity to respondent sensitivity, and to prepare solutions to these challenges. This works well when structured as a collaborative exercise, where trainee enumerators are encouraged to highlight potential issues, and ultimately helps refine the survey design itself.
- We recommend role-playing — pairing enumerators and alternating enumerator/participant roles — and scenario-type exercises.
- Since Johannesburg is a highly multilingual environment, we select surveyors accordingly, and build teams that are multilingual and able to navigate conversations that may involve more than one language, code-switching, and “on the fly” translation. During training and debriefings, we devote time to translating survey questions as a team, and to discussing how to clarify terms or phrasings, and how to address questions from participants, in multiple languages.
- Formal translation of survey materials may be necessary when the respondent has to read directly. In such cases, we recommend forward and backward translation to ensure consistency. We also suggest consulting with a field team from that area on language use.
Monitoring Data Quality
We have found a Computer Assisted Personal Interviewing (CAPI) approach to be highly effective and efficient — using services like SurveyCTO on tablets, phones, or desktop computers (depending on the setting).
CAPI, as opposed to paper-based surveying, allows much greater control over how a survey takes place, and has the flexibility to include pop-up notes, constraints on input type, on-the-fly calculations, flags, section timings, pre-filled and individualised questions, and, when feasible, the ability to trigger other actions, such as sending verification SMSes or emails on survey completion. It also means data is available faster — instantly, or at the end of the day, depending on connectivity constraints — and data quality checks can be performed immediately.
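As an illustration of these input constraints, here is a sketch of how the phone-number checks might look in an ODK-style XLSForm, the spreadsheet format SurveyCTO forms are written in. The field names and the 10-digit format are assumptions, and exact column spellings vary by platform:

```text
type   name     label                       constraint                constraint_message
text   phone1   Participant phone number    regex(., '^0[0-9]{9}$')   Enter a 10-digit number starting with 0
text   phone2   Re-enter the phone number   . = ${phone1}             The two entries do not match
```

The `regex()` constraint enforces the length rule at entry time, and the second field implements double entry by requiring an exact match with the first.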
Here are a few approaches that we use to ensure high data quality:
- We recommend introducing high-frequency checks (HFCs) as soon as possible. HFCs are used to check the distribution of key variables collected by each surveyor and to flag any potential issues, for example: the number of missing values, the number of outliers, the number of “skipped” questions, the average time per survey section, etc.
- HFCs can be implemented by simply tabulating key outcomes by surveyor in Stata/R. To automate these checks, researchers can generate HFC reports that are updated on a daily basis. We recommend Stata’s dynamic document commands, or R and R Markdown.
- It is also standard J-PAL practice to perform spot-checks or “backchecks”: revisiting and verifying a randomly selected percentage of completed surveys. This may involve listening to recordings of surveys, if feasible, or contacting a percentage (10-15%) of respondents again to check a subset of survey responses against the original survey.
- We recommend regular debriefings with the field team, covering topics that come up in HFCs and spot-checks, and discussing sticking points or questions about the survey.
- During training, when using a CAPI-type set-up, we encourage field-team members to try to “break the survey”, as a way of anticipating potential issues before going to the field. This may include entering unrealistic values or inconsistent answers, seeing how the survey form behaves, and then deciding whether the form needs to be adjusted. It also provides the opportunity to test how well the high-frequency checks are running.
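The per-surveyor tallies behind an HFC report, and the random backcheck draw, can be sketched with only the standard library. This is a hedged illustration: the record layout (one dict per completed survey, with a `surveyor` field and answer fields) and the function names are assumptions, not a fixed J-PAL tool.

```python
import random
from collections import defaultdict

def missing_by_surveyor(records, fields):
    """Count missing answers (None or empty string) per surveyor across key fields."""
    counts = defaultdict(int)
    for rec in records:
        for field in fields:
            if rec.get(field) in (None, ""):
                counts[rec["surveyor"]] += 1
    return dict(counts)

def backcheck_sample(records, share=0.10, seed=42):
    """Draw a random share of completed surveys for backchecking.

    A fixed seed keeps the draw reproducible for audit purposes.
    """
    rng = random.Random(seed)
    k = max(1, round(len(records) * share))
    return rng.sample(records, k)
```

Running `missing_by_surveyor` on each day’s submissions and comparing surveyors’ counts is the simplest form of the tabulation described above; the same report script can then print the backcheck list for the next day’s call-backs.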
We hope these tips and best practices are helpful in conducting your research! To learn more about J-PAL Africa’s Research work in South Africa, visit the J-PAL Africa website.
- J-PAL Research Resources, “Measurement & Data Collection”: https://www.povertyactionlab.org/research-resources/measurement-and-data-collection
- Development Impact Evaluation (DIME) in the Research Group of the World Bank, “Monitoring Data Quality”: https://dimewiki.worldbank.org/wiki/Monitoring_Data_Quality
- Innovations for Poverty Action (IPA), examples of HFCs: https://github.com/PovertyAction/high-frequency-checks
- World Bank “Development Impact” blog, “Electronic versus paper-based data collection”: http://blogs.worldbank.org/impactevaluations/electronic-versus-paper-based-data-collection-reviewing-debate
- J-PAL Research Resources, “Software & Tools”: https://www.povertyactionlab.org/research-resources/software-and-tools
- For coding advice on SurveyCTO see the World Bank’s Development Impact Evaluation (DIME) unit’s set of resources: https://dimewiki.worldbank.org/wiki/SurveyCTO_Coding_Practices
- SurveyCTO online course:
- J-PAL online course for designing and running randomised evaluations (especially the unit on Collecting and managing data):
- MITx MicroMasters Program in Data, Economics, and Development Policy: https://www.povertyactionlab.org/training/micromasters
- R for Data Science by Garrett Grolemund and Hadley Wickham: http://r4ds.had.co.nz/
- Hands-On Programming with R by Garrett Grolemund: https://rstudio-education.github.io/hopr/index.html