
The primary objective of a vaccination coverage survey is to provide a coverage estimate for selected vaccines. This page provides a list of documents related to the design, implementation, analysis, and reporting of vaccination coverage surveys. The documents have been organized into the categories below and are provided here to assist in conducting an effective vaccination coverage survey.

This page was created and is maintained by the WHO Expanded Programme on Immunization (EPI) Strategic Information Group. Much of this material was brought together at the "WHO Tools and Guidance on Immunization Data Quality and Vaccination Coverage Survey" meeting, which took place in Istanbul, Turkey, in December 2015.

 

Introductory presentations 

This collection contains a series of presentations outlining the basic steps of a vaccination coverage survey, as well as some presentations on commonly asked questions and variations on a coverage survey. The collection also can be found in the TechNet Resource Library:

http://www.technet-21.org/library/main/collection?cid=56e44c55eed99

Title | Author | Year | Type | Language
Coverage survey variations | WHO | 2015 | Presentation | English
FAQ presentation | WHO | 2015 | Presentation | English
Methods used for monitoring | WHO | 2015 | Presentation | English
Step 1, 2, & 3 presentation | WHO | 2015 | Presentation | English
Step 4, 5, & 6 presentation | WHO | 2015 | Presentation | English
Step 7, 8, & 9 presentation | WHO | 2015 | Presentation | English
Step 10, 11, & 12 presentation | WHO | 2015 | Presentation | English
Step 13 presentation | WHO | 2015 | Presentation | English
Step 14 presentation | WHO | 2015 | Presentation | English
Step 15 & 16 presentation | WHO | 2015 | Presentation | English
Step 17 presentation | WHO | 2015 | Presentation | English

 

Analytic plan

This collection includes documents to assist with the development of an analytic plan for vaccination coverage survey data. The collection also can be found in the TechNet Resource Library:

http://www.technet-21.org/library/main/collection?cid=56e00e70eb407

Title | Author | Year | Type | Language
Analytic Definitions | WHO | 2016 | Document | English
Definitions of Timeliness | WHO | 2016 | Document | English
Suggested Analyses | WHO | 2016 | Document | English

 

Budget examples

This collection includes documents to assist with the development of a budget for a vaccination coverage survey. The collection also can be found in the TechNet Resource Library:

http://www.technet-21.org/library/main/collection?cid=56e403d85a4b7

Title | Author | Year | Type | Language
Budget Example | | | Document | English
DHS Budget Example | DHS | | Document | English
DHS Budget Template | DHS | | Document | English
MICS Budget Template | MICS | | Document | English
WHO Cluster Survey Budget Template | WHO | 2015 | Document | English

 

Report outline

This collection contains an outline for developing a report for a vaccination coverage survey. The collection also can be found in the TechNet Resource Library:

http://www.technet-21.org/library/main/collection?cid=56f0f967d21f0

Title | Author | Year | Type | Language
Vaccination Coverage Survey Report outline | WHO | 2016 | Document | English

 

Request for proposal templates

This collection contains several examples of request for proposals. The collection also can be found in the TechNet Resource Library:

http://www.technet-21.org/library/main/collection?cid=56f0f8c8efe49

Title | Author | Year | Type | Language
Request For Proposal - Confidentiality | WHO | | Document | English
Request For Proposal Template - 50k+ | WHO | | Document | English
Request For Proposal Template - less than 50k | WHO | | Document | English

 

Sample terms of reference and profiles

This collection contains several sample terms of reference for positions such as survey coordinator and interviewers. The collection also can be found in the TechNet Resource Library:

http://www.technet-21.org/library/main/collection?cid=56e44737000a3

Title | Author | Year | Type | Language
Sample personnel descriptions (for surveys) - Haiti | MOH - CDC - IHE | | Document | English
ToR - Consultant for pilot test of post SIA survey | MOH St Lucia - PAHO | | Document | English
ToR - Consultant for pilot test of post intensified immunization activity | MOH St Lucia - PAHO | | Document | English
ToR - Coordinator for MMR vaccination coverage survey | MOH St Lucia - PAHO | | Document | English
ToR - Data analysis and report writing for MMR vaccination coverage survey | MOH St Lucia - PAHO | | Document | English
ToR - Data entry clerk for MMR vaccination coverage survey | MOH St Lucia - PAHO | | Document | English
ToR - Data management coordinator for MMR vaccination coverage survey | MOH St Lucia - PAHO | | Document | English
ToR - Interviewers for MMR vaccination coverage survey | MOH St Lucia - PAHO | | Document | English
ToR - PH Nurse and Nurse Practitioner for MMR vaccination coverage survey | MOH St Lucia - PAHO | | Document | English
ToR - Survey coordinator for supplemental national coverage survey | MOH St Lucia - PAHO | | Document | English

 

Survey forms and tools

This collection contains useful documents for carrying out a vaccination coverage survey, including sample surveys, sample log sheets, a project timeline, and equipment lists. The collection also can be found in the TechNet Resource Library:

http://www.technet-21.org/library/main/collection?cid=56e40a6631c5a

Title | Author | Year | Type | Language
Sample Equipment List - DHS | DHS | | Document | English
Sample MR coverage log - Haiti | MOH CDC - IHE | 2012 | Document | English
Sample Recruitment Log - Haiti | MOH CDC - IHE | 2012 | Document | English
Sample consent form - Haiti | MOH CDC - IHE | 2012 | Document | English
Sample household log sheet - Haiti | MOH CDC - IHE | 2012 | Document | English
Sample questionnaire - Bolivia | MOH-PODEMA-PAHO | 2013 | Document | English
Sample questionnaire - Haiti | MOH CDC - IHE | 2012 | Document | English
Sample timeline (1) - Haiti | MOH CDC - IHE | 2012 | Document | English
Sample timeline (2) - Haiti | MOH-CDC IHE | 2012 | Document | English

 

Vaccination card picture management

This collection contains information on the protocol for collecting and managing vaccination card pictures. The collection also can be found in the TechNet Resource Library:

http://www.technet-21.org/library/main/collection?cid=56f0f9a15faa3

Title | Author | Year | Type | Language
Standard Operating Procedures (SOPs) - Vaccination Card Picture Management | WHO | 2016 | Document | English

 

Current WHO reference manuals

This collection contains current WHO reference manuals on vaccination coverage surveys. The collection also can be found in the TechNet Resource Library:

http://www.technet-21.org/library/main/collection?cid=56e436326f970

 

Other WHO reference manuals

This collection contains various WHO reference manuals related to vaccination coverage surveys. The collection also can be found in the TechNet Resource Library:

http://www.technet-21.org/library/main/collection?cid=56e43bd1a380d

 

DHS training manuals

This collection contains documents to assist with the training of multiple members within a vaccination coverage survey team. The collection also can be found in the TechNet Resource Library:

http://www.technet-21.org/library/main/collection?cid=56e405543e4c4

 

Coverage survey publications

This collection contains several publications on vaccination coverage surveys from various regions and countries. The collection also can be found in the TechNet Resource Library:

http://www.technet-21.org/library/main/collection?cid=56f29b073d461

Title | Author | Year | Type | Language
2005-06 and 2011-12 Honduras Demographic and Health Survey Analysis of Vaccination Timeliness, Co-administration and Factors Associated with Vaccination Status - Draft Report of Findings | PAHO, CDC | 2014 | Document | English
Book: Brown 2002 (Various countries) | Brown | 2002 | Journal article | English
Publication: Barata J Epidemiol Community Health 2012 (Brazil) | Barata | 2012 | Journal article | English
Publication: Barker Stat Med 2005 (United States) | Barker | 2005 | Journal article | English
Publication: Bennett Int J Epi 1994 (Uganda) | Bennett | 1994 | Journal article | English
Publication: Caini Vaccine 2013 (Niger) | Caini | 2013 | Journal article | English
Publication: Cotter Epi Infec 2003 (Zimbabwe) | Cotter | 2003 | Journal article | English
Publication: Diaz-Ortega Salud Publica Mex 2013 (Mexico) | Diaz-Ortega | 2013 | Journal article | English
Publication: Gareaballah WHO Bulletin 1989 (Sudan) | Gareaballah | 1989 | Journal article | English
Publication: Grais Emerg Themes Epi 2007 (Niger) | Grais | 2007 | Journal article | English
Publication: Gunthmann Vaccine 2012 (Brazil) | Gunthmann | 2012 | Journal article | English
Publication: Henderson WHO Bulletin 1982 (Various countries) | Henderson | 1982 | Journal article | English
Publication: Hu Asia Pacific J Public Health 2015 (China) | Hu | 2015 | Journal article | English
Publication: Jahn Trop Med Int Health 2008 (Malawi) | Jahn | 2008 | Journal article | English
Publication: Kaiser WHO Bulletin 2015 (Africa) | Kaiser | 2015 | Journal article | English
Publication: Kim PloS One 2012 (Niger) | Kim | 2012 | Journal article | English
Publication: Langsten Soc Sci Med 1998 (Egypt) | Langsten | 1998 | Journal article | English
Publication: Luman Am J Prev Med 2001 (United States) | Luman | 2001 | Journal article | English
Publication: Luman BMC Public Health 2008 (Northern Mariana Islands) | Luman | 2008 | Journal article | English
Publication: Luman Int J Epi 2007 (Ethiopia) | Luman | 2007 | Journal article | English
Publication: Luman Vaccine 2008 (Northern Mariana Islands) | Luman | 2008 | Journal article | English
Publication: Milligan Int J Epi 2004 (Gambia) | Milligan | 2004 | Journal article | English
Publication: Minetti Emerg Themes Epi 2012 (Mali) | Minetti | 2012 | Journal article | English
Publication: Murray Lancet 2003 (Global) | Murray | 2003 | Journal article | English
Publication: Ozcirpici BMC Public Health 2014 (Turkey) | Ozcirpici | 2014 | Journal article | English
Publication: PAHO Newsletter 2014 (Honduras) | PAHO | 2014 | Journal article | English
Publication: Pezzoli Trop Med Int Health 2009 (Bolivia) | Pezzoli | 2009 | Journal article | English
Publication: Rainey Vaccine 2012 (Haiti) | Rainey | 2012 | Journal article | English
Publication: Sanchez BMC Public Health 2015 (Venezuela) | Sanchez | 2015 | Journal article | English
Publication: Sheth Natl J Com Med 2012 (India) | Sheth | 2012 | Journal article | English
Publication: Shimabukuro J Public Health Manag Pract 2007 (United States) | Shimabukuro | 2007 | Journal article | English
Publication: Suarez-Castaneda Vaccine 2014 (El Salvador) | Suarez-Castaneda | 2014 | Journal article | English
Publication: Suarez-Castaneda Vaccine 2015 (El Salvador) | Suarez-Castaneda | 2015 | Journal article | English
Publication: Tohme Tropical Med & Int Health 2014 (Haiti) | Tohme | 2014 | Journal article | English
Publication: Valadez Am J Pub Health 1992 (Costa Rica) | Valadez | 1992 | Journal article | English
Publication: Xu Hum Vaccin Immunother 2012 (China) | Xu | 2012 | Journal article | English
Yellow fever vaccination coverage following massive emergency immunization campaigns in rural Uganda, May 2011: a community cluster survey | Bagonza J, Rutebemberwa E, Mugaga M, Tumuhamye N, Makumbi I | 2013 | Journal article | English

 

Methodological publications

This collection contains several technical and methodological publications on vaccination coverage surveys. The collection also can be found in the TechNet Resource Library:

http://www.technet-21.org/library/main/collection?cid=56e41182ec3f3

Publication: Dean J survey stats & methods 2015

Publication abstract: In survey settings, a variety of methods are available for constructing confidence intervals for proportions. These methods include the standard Wald method, a class of modified methods that replace the sample size with the survey effective sample size (Wilson, Clopper-Pearson, Jeffreys, and Agresti-Coull), and transformed methods (Logit and Arcsine). We describe these seven methods, two of which have not been previously evaluated in the literature (the modified Jeffreys and Agresti-Coull intervals). For each method, we describe two formulations, one with and one without adjustment for the design degrees of freedom. We suggest a definition of adjusted effective sample size that induces equivalency between different confidence interval expressions. We also expand on an existing framework for truncation that can be used when data appear to be more efficient than a simple random sample or when data have standard error equal to zero. We compare these methods using a simulation study modeled after the 30 × 7 design for immunization surveys. Our results confirmed the importance of adjusting for the design degrees of freedom. As expected, the Wald interval performed very poorly, frequently failing to achieve the nominal coverage level. For similar reasons, we do not recommend the use of the Arcsine interval. When the intracluster correlation coefficient is high and the prevalence, p, is less than 0.10 or greater than 0.90, the Agresti-Coull and Clopper-Pearson intervals perform best. In other settings, the Clopper-Pearson interval is unnecessarily wide. In general, the Logit, Wilson, Jeffreys, and Agresti-Coull intervals perform well, although the Logit interval may be intractable when the standard error is equal to zero.

Link: http://www.technet-21.org/library/main/2420-survey-publication-dean-j-survey-stats-methods-2015
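
As a rough illustration of the design-adjusted intervals discussed above, the sketch below (in Python, with scipy) computes a Wilson-type interval using an effective sample size and a t quantile based on the design degrees of freedom; the design effect, degrees of freedom and coverage values are assumed for illustration and the paper's exact formulations may differ.

```python
# A minimal sketch (not the paper's exact formulation): Wilson-type interval for
# a coverage proportion, replacing the sample size with an effective sample size
# (n divided by the design effect) and using a t quantile with the design
# degrees of freedom. Inputs below are illustrative assumptions.
from scipy.stats import t


def wilson_ci_design_adjusted(p_hat, n, deff, df, alpha=0.05):
    """Wilson interval using the effective sample size and design degrees of freedom."""
    n_eff = n / deff                       # effective sample size
    t_q = t.ppf(1 - alpha / 2, df)         # t quantile in place of the usual z
    denom = 1 + t_q**2 / n_eff
    centre = (p_hat + t_q**2 / (2 * n_eff)) / denom
    half_width = (t_q / denom) * (p_hat * (1 - p_hat) / n_eff
                                  + t_q**2 / (4 * n_eff**2)) ** 0.5
    return max(0.0, centre - half_width), min(1.0, centre + half_width)


# Example: 30 clusters x 7 children, observed coverage 85%, design effect 2, df = 29
print(wilson_ci_design_adjusted(p_hat=0.85, n=210, deff=2.0, df=29))
```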

Publication: Barker Am J Epi 2002

Publication abstract: Eliminating health disparities in vaccination coverage among various groups is a cornerstone of public health policy. However, the statistical tests traditionally used cannot prove that a state of no difference between groups exists. Instead of asking, "Has a disparity, or difference, in immunization coverage among population groups been eliminated?", one can ask, "Has practical equivalence been achieved?" A method called equivalence testing can show that the difference between groups is smaller than a tolerably small amount. This paper demonstrates the method and introduces public health considerations that have an impact on defining tolerable levels of difference. Using data from the 2000 National Immunization Survey, the authors tested for statistically significant differences in rates of vaccination coverage between Whites and members of other racial/ethnic groups and for equivalencies among Whites and these same groups. For some minority groups and some vaccines, coverage was statistically significantly lower than was seen among Whites; however, for some of these groups and vaccines, equivalence testing revealed practical equivalence. To use equivalence testing to assess whether a disparity remains a threat to public health, researchers must understand when to use the method, how to establish assumptions about tolerably small differences, and how to interpret the test results.

Link: http://www.technet-21.org/library/main/2421-survey-barker-am-j-epi-2002
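
The general idea of equivalence testing for a coverage difference can be sketched with two one-sided tests (TOST) under a normal approximation; the equivalence margin and the example figures below are illustrative assumptions, not the paper's method or data.

```python
# A minimal sketch of two one-sided tests (TOST) for practical equivalence of two
# coverage proportions. The equivalence margin delta, the normal approximation,
# and the example numbers are assumptions for illustration only.
from math import sqrt
from scipy.stats import norm


def tost_two_proportions(p1, n1, p2, n2, delta):
    """p-value for H0: |p1 - p2| >= delta; a small value supports practical equivalence."""
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    p_lower = 1 - norm.cdf((diff + delta) / se)   # test against difference <= -delta
    p_upper = norm.cdf((diff - delta) / se)       # test against difference >= +delta
    return max(p_lower, p_upper)


# Example: 91% vs 89% coverage in two groups of 1,000 children, 5-point margin
print(tost_two_proportions(0.91, 1000, 0.89, 1000, delta=0.05))
```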

Publication: Minetti Emerg Themes Epi 2012

Publication abstract: BACKGROUND: Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. METHODS: We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. RESULTS: VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard error of VC and ICC estimates were increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. CONCLUSIONS: Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.

Link: http://www.technet-21.org/library/main/2422-survey-publication-minetti-emerg-themes-epi-2012
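
The kind of cluster-level resampling used to explore estimate stability can be sketched as a cluster bootstrap; the simulated 10 x 15 data below are an assumption for illustration, not the Mali survey data analysed in the paper.

```python
# A minimal sketch of a cluster bootstrap for a 10 x 15 survey: resample whole
# clusters with replacement and recompute coverage to gauge the stability of the
# estimate. The simulated data (beta-distributed cluster coverage) are assumed.
import numpy as np

rng = np.random.default_rng(3)
# 10 clusters x 15 children: 1 = vaccinated, 0 = not vaccinated (simulated)
cluster_probs = rng.beta(8, 2, size=10)[:, None]
clusters = rng.binomial(1, cluster_probs, size=(10, 15))

boot_estimates = []
for _ in range(2000):
    picked = rng.integers(0, len(clusters), size=len(clusters))  # resample clusters
    boot_estimates.append(clusters[picked].mean())
print(f"Point estimate {clusters.mean():.2f}, cluster-bootstrap SE {np.std(boot_estimates):.3f}")
```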

Publication: Murray Lancet 2003

Publication abstract: BACKGROUND: Monitoring and assessment of coverage rates in national health programmes is becoming increasingly important. We aimed to assess the accuracy of officially reported coverage rates of vaccination with diphtheria-tetanus-pertussis vaccine (DTP3), which is commonly used to monitor child health interventions. METHODS: We compared officially reported national data for DTP3 coverage with those from the household Demographic and Health Surveys (DHS) in 45 countries between 1990 and 2000. We adjusted survey data to reflect the number of valid vaccinations (ie, those administered in accordance with the schedule recommended by WHO) using a probit model with sample selection. The model predicted the probability of valid vaccinations for children, including those without documented vaccinations, after correcting for bias from differences between the children with and without documented information on vaccination. We then assessed the extent of survey bias and differences between officially reported data and those from DHS estimates. FINDINGS: Our results suggest that officially reported DTP3 coverage is higher than that reported from household surveys. The size of the difference increases with the rate of reported coverage of DTP3. Results of time-trend analysis show that changes in reported coverage are not correlated with changes reported from household surveys. INTERPRETATION: Although reported data might be the most widely available information for assessment of vaccination coverage, their validity for measuring changes in coverage over time is highly questionable. Household surveys can be used to validate data collected by service providers. Strategies for measurement of the coverage of all health interventions should be grounded in careful assessments of the validity of data derived from various sources.

Link: http://www.technet-21.org/library/main/2423-survey-publication-murray-lancet-2003

Publication: Burton Lancet 2009

Publication abstract: WHO and UNICEF welcome Stephen Lim and colleagues' contribution to the more accurate measurement of immunisation coverage, and fully concur with the recommendations to improve routine immunisation monitoring systems and the need to validate results of these systems periodically with surveys. We feel that such efforts are necessary not only to document progress towards international goals and to meet donor reporting requirements but, even more importantly, to improve immunisation service delivery at local and national levels.

Link: http://www.technet-21.org/library/main/2424-survey-publication-burton-methods-lancet-2009

Publication: Hancioglu PLoS Med 2013

Publication abstract: Household surveys are the primary data source of coverage indicators for children and women for most developing countries. Most of this information is generated by two global household survey programmes—the USAID-supported Demographic and Health Surveys (DHS) and the UNICEF-supported Multiple Indicator Cluster Surveys (MICS). In this review, we provide an overview of these two programmes, which cover a wide range of child and maternal health topics and provide estimates of many Millennium Development Goal indicators, as well as estimates of the indicators for the Countdown to 2015 initiative and the Commission on Information and Accountability for Women's and Children's Health. MICS and DHS collaborate closely and work through interagency processes to ensure that survey tools are harmonized and comparable as far as possible, but we highlight differences between DHS and MICS in the population covered and the reference periods used to measure coverage. These differences need to be considered when comparing estimates of reproductive, maternal, newborn, and child health indicators across countries and over time and we discuss the implications of these differences for coverage measurement. Finally, we discuss the need for survey planners and consumers of survey results to understand the strengths, limitations, and constraints of coverage measurements generated through household surveys, and address some technical issues surrounding sampling and quality control. We conclude that, although much effort has been made to improve coverage measurement in household surveys, continuing efforts are needed, including further research to improve and refine survey methods and analytical techniques.

Link: http://www.technet-21.org/library/main/2425-survey-publication-hancioglu-plos-med-2013

Publication: Henderson WHO Bulletin 1982

Publication abstract: A simplified cluster sampling method, involving the random selection of 210 children in 30 clusters of 7 each, has been used by the Expanded Programme on Immunization to estimate immunization coverage. This paper analyses the performance of the method in 60 actual surveys and 1500 computer-simulated surveys. Although the method gives a proportion of results with confidence limits exceeding the desired maximum of 10 absolute percentage points, it is concluded that it performs satisfactorily.

Link: http://www.technet-21.org/library/main/2426-survey-publication-henderson-who-bulletin-1982
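
The behaviour of the 30 x 7 design can be explored with a small simulation along the lines below; the true coverage level and the beta model of between-cluster heterogeneity are illustrative assumptions, not the populations studied in the paper.

```python
# A minimal sketch: draw 30 clusters of 7 children from a population whose
# cluster-level coverage varies, and count how often the survey estimate falls
# within 10 percentage points of the true value. The true coverage and the beta
# model of heterogeneity are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
true_cov, n_clusters, per_cluster, n_sims = 0.70, 30, 7, 1500
within_10_points = 0
for _ in range(n_sims):
    cluster_cov = rng.beta(true_cov * 20, (1 - true_cov) * 20, size=n_clusters)
    vaccinated = rng.binomial(per_cluster, cluster_cov).sum()
    estimate = vaccinated / (n_clusters * per_cluster)
    within_10_points += abs(estimate - true_cov) <= 0.10
print(f"Estimates within 10 points of the true coverage: {within_10_points / n_sims:.1%}")
```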

Publication: Luman Int J Epi 2007

Publication abstract: BACKGROUND: Measuring vaccination coverage permits evaluation and appropriate targeting of vaccination services. The cluster survey methodology developed by the World Health Organization, known as the 'Expanded Program on Immunization (EPI) methodology', has been used worldwide to assess vaccination coverage; however, the manner in which households are selected has been criticized by survey statisticians as lacking methodological rigor and introducing bias. METHODS: Thirty clusters were selected from an urban (Ambo) and a rural (Yaya-Gulelena D/Libanos) district of Ethiopia; vaccination coverage surveys were conducted using both EPI sampling and systematic random sampling (SystRS) of households. Chi-square tests were used to compare results from the two methodologies; relative feasibility of the sampling methodologies was assessed. RESULTS: Vaccination coverage from a recent measles campaign among children aged 6 months through 14 years was high: 95% in Ambo (both methodologies), 91 and 94% (SystRS and EPI sampling, respectively, P-value = 0.05) in Yaya-Gulelena D/Libanos. Coverage with routine vaccinations among children aged 12-23 months was <20% in both districts; in Ambo, EPI sampling produced consistently higher estimates of routine coverage than SystRS. Differences between the two methods were found in demographic characteristics and recent health histories. Average time required to complete a cluster was 16h for EPI sampling and 17 h for SystRS; total cost was equivalent. Interviewers reported slightly more difficulty conducting SystRS. CONCLUSIONS: Because of the methodological advantages and demonstrated feasibility, SystRS would be preferred to EPI sampling in most situations. Validating results in additional settings is recommended.

Link: http://www.technet-21.org/library/main/2428-survey-publication-luman-int-j-epi-2007

Publication: Grais Emerg Themes Epi 2007

Publication abstract: In two-stage cluster surveys, the traditional method used in second-stage sampling (in which the first household in a cluster is selected) is time-consuming and may result in biased estimates of the indicator of interest. Firstly, a random direction from the center of the cluster is selected, usually by spinning a pen. The houses along that direction are then counted out to the boundary of the cluster, and one is then selected at random to be the first household surveyed. This process favors households towards the center of the cluster, but it could easily be improved. During a recent meningitis vaccination coverage survey in Maradi, Niger, we compared this method of first household selection to two alternatives in urban zones: 1) using a superimposed grid on the map of the cluster area and randomly selecting an intersection; and 2) drawing the perimeter of the cluster area using a Global Positioning System (GPS) and randomly selecting one point within the perimeter. Although we only compared a limited number of clusters using each method, we found the sampling grid method to be the fastest and easiest for field survey teams, although it does require a map of the area. Selecting a random GPS point was also found to be a good method, once adequate training can be provided. Spinning the pen and counting households to the boundary was the most complicated and time-consuming. The two methods tested here represent simpler, quicker and potentially more robust alternatives to spinning the pen for cluster surveys in urban areas. However, in rural areas, these alternatives would favor initial household selection from lower density (or even potentially empty) areas. Bearing in mind these limitations, as well as available resources and feasibility, investigators should choose the most appropriate method for their particular survey context.

Link: http://www.technet-21.org/library/main/2427-survey-publication-grais-emerg-themes-epi-2007
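
For the GPS-based alternative described above, a starting point can be drawn uniformly within the cluster boundary by rejection sampling in its bounding box, as sketched below; the polygon coordinates and the use of the shapely package are assumptions for illustration.

```python
# A minimal sketch of the "random GPS point" alternative for first-household
# selection: draw a uniform point inside the cluster boundary; the field team
# would then start from the household nearest that point. The polygon and the
# shapely dependency are assumptions for illustration.
import random

from shapely.geometry import Point, Polygon

cluster_boundary = Polygon([(0, 0), (100, 0), (100, 80), (40, 120), (0, 80)])
rng = random.Random(7)


def random_point_in_polygon(poly):
    minx, miny, maxx, maxy = poly.bounds
    while True:  # rejection sampling: redraw until the point falls inside the boundary
        candidate = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
        if poly.contains(candidate):
            return candidate


start = random_point_in_polygon(cluster_boundary)
print(f"Start the household search at ({start.x:.1f}, {start.y:.1f})")
```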

Publication: Valadez Am J Pub Health 1992

Publication abstract: In the absence of vaccination card data, Expanded Program on Immunization (EPI) managers sometimes ask mothers for their children's vaccination histories. The magnitude of maternal recall error and its potential impact on public health policy has not been investigated. In this study of 1171 Costa Rican mothers, we compare mothers' recall with vaccination card data for their children younger than 3 years. Analyses of vaccination coverage distributions constructed with recall and vaccination-card data show that recall can be used to estimate population coverage. Although the two data sources are correlated (r = .71), the magnitude of their difference can affect the identification of the vaccination status of an individual child. Maternal recall error was greater than two doses 14% of the time. This error is negatively correlated with the number of doses recorded on the vaccination card (r = -.61) and is weakly correlated with the child's age (r = -.35). Mothers tended to remember accurately the vaccination status of children younger than 6 months, but with older children, the larger the number of doses actually received, the more the mother underestimated the number of doses. No other variables explained recall error. Therefore, reliance on maternal recall could lead to revaccinating children who are already protected, while leaving at risk those most vulnerable to vaccine-preventable diseases.

Link: http://www.technet-21.org/library/main/2432-survey-publication-valadez-am-j-pub-health-1992

Publication: Langsten Soc Sci Med 1998

Publication abstract: Estimates of immunization coverage in developing countries are typically made on a "card plus history" basis, combining information obtained from vaccination cards with information from mothers' reports, for children for whom such cards are not available. A recent survey in rural lower Egypt was able to test the accuracy of mothers' reports for a subset of children whose cards were not seen at round 1 of the survey but were seen a year later at round 3. Comparisons of the unsubstantiated reports at round 1 with information recorded from cards seen at round 3 indicate that mothers' reports are of very high quality; mothers' reports at round 1 were confirmed by card data at round 3 for between 83 and 93%, depending on vaccine, of children aged 12-23 months, and for 88 to 98% of children aged 24-35 months. Mothers of children who had not been vaccinated were more likely to give consistent responses than were mothers of vaccinated children. Thus, these "card plus history" estimates slightly understate true coverage levels. Most of the inconsistencies between round 1 and round 3 data apparently arose from interviewer or data processing error rather than from misreporting by mothers.

Link: http://www.technet-21.org/library/main/2431-survey-publication-langsten-soc-sci-med-1998

Publication: Gareaballah WHO Bulletin 1989

Publication abstract: Estimates of measles vaccination coverage in the Sudan vary on average by 23 percentage points, depending on whether or not information supplied by mothers who have lost their children's vaccination cards is included. To determine the accuracy of mother's reports, we collected data during four large coverage surveys in which illiterate mothers with vaccination cards were asked about their children's vaccination status and their answers were compared with the information given on the cards. Mothers' replies were very accurate. For example, for measles vaccination, the data supplied were both sensitive (87%) and specific (79%) compared with those on the vaccination cards. For both DPT and measles vaccination, accurate estimates of the true coverage rates could therefore be obtained by relying solely on mothers' reports. Within +/- 1 month, 78% of the women knew the age at which their children had received their first dose of polio vaccine. Ignoring mothers' reports of their children's vaccination status could therefore result in serious underestimates of the true vaccination coverage. A simple method of dealing with the problem posed by lost vaccination cards during coverage surveys is also suggested.

Link: http://www.technet-21.org/library/main/2430-survey-publication-gareaballah-who-bulletin-1989
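
The comparison described here reduces to a 2 x 2 table of mothers' reports against card records; the sketch below shows the sensitivity and specificity calculation with hypothetical counts chosen only to reproduce the abstract's reported 87% and 79% for measles.

```python
# A minimal sketch of the sensitivity/specificity comparison of mothers' reports
# against vaccination-card records. The 2 x 2 counts are hypothetical, chosen only
# to reproduce the abstract's 87% sensitivity and 79% specificity for measles.
def recall_accuracy(tp, fn, fp, tn):
    """tp: card+/recall+, fn: card+/recall-, fp: card-/recall+, tn: card-/recall-."""
    sensitivity = tp / (tp + fn)   # vaccinated per card and correctly reported
    specificity = tn / (tn + fp)   # unvaccinated per card and correctly reported
    return sensitivity, specificity


sens, spec = recall_accuracy(tp=435, fn=65, fp=42, tn=158)
print(f"Sensitivity {sens:.0%}, specificity {spec:.0%}")
```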

Publication: Turner Int J Epi 1996

Publication abstract: BACKGROUND: Although the Expanded Programme on Immunization (EPI) cluster survey methodology has been successfully used for assessing levels of immunization programme coverage in developing country settings, certain features of the methodology, as it is usually carried out, make it a less-than-optimal choice for large, national surveys and/or surveys with multiple measurement objectives. What is needed is a 'middle ground' between rigorous cluster sampling methods, which are seen as unfeasible for routine use in many developing country settings, and the EPI cluster survey approach. METHODS: This article suggests some fairly straightforward modifications to the basic EPI cluster survey design that put it on a solid probability footing and render it easily adaptable to differing and/or multiple measurement objectives, without incurring prohibitive costs or adding appreciably to the complexity of survey operations. The proposed modifications concern primarily the manner in which households are chosen at the second stage of sample selection. CONCLUSIONS: Because the modified sampling strategy maintains the scientific rigor of conventional cluster sampling methods while retaining many of the desirable features of the EPI survey methodology, the methodology is likely to be a preferred 'middle ground' survey design, relevant for many applications, particularly surveys designed to monitor multiple health indicators over time. The fieldwork burden in the modified design is only marginally higher than in EPI cluster surveys, and considerably lower than in conventional cluster surveys.

Link: http://www.technet-21.org/library/main/2429-survey-publication-turner-int-j-epi-1996

Publication: Lemeshow Int J Epi 1985

Publication abstract: A Monte Carlo simulation study was designed to evaluate the sample survey technique currently used by the Expanded Programme on Immunization (EPI) of the World Health Organization. Of particular interest was how the EPI strategy compared to a more traditional sampling strategy with respect to bias and variability of estimates. It was also of interest to investigate whether the estimates of population vaccination coverage were accurate to within 10 percentage points of the actual levels. It was found that within particular clusters, the EPI method was particularly sensitive to pocketing of vaccinated individuals, but the more traditional method gave more accurate and less variable results under a variety of conditions. However, the stated goal of the EPI, of being able to produce population estimates accurate to within 10 percentage points of the true levels in the population, was satisfied in the artificially created populations studied.

Link: http://www.technet-21.org/library/main/2435-survey-publication-lemeshow-int-j-epi-1985

Publication: Bennett Int J Epi 1994

Publication abstract: BACKGROUND: Cluster sample surveys of health and nutrition in rural areas of developing countries frequently utilize the EPI (Expanded Programme on Immunization) method of selecting households where complete enumeration and systematic or simple random sampling (SRS) is considered impractical. The first household is selected by choosing a random direction from the centre of the community, counting the houses along that route, and picking one at random. Subsequent households are chosen by visiting that house which is nearest to the preceding one. METHODS: Using a computer, and data from a survey of all children in 30 villages in Uganda, we simulated the selection of samples of size 7, 15 and 30 children from each village using SRS, the EPI method, and four different modifications of the EPI method. RESULTS: The choice of sampling scheme for households had very little effect on the precision or bias of estimates of prevalence of malnutrition, or of recent morbidity, with EPI performing as well as SRS. However, the EPI scheme was inefficient and showed bias for variables relating to child care and for socioeconomic variables. Two of the modified EPI schemes (taking every fifth house and taking separate EPI samples in each quarter of the community) performed in general much better than EPI and almost as well as SRS. CONCLUSIONS: These results suggest that the unmodified EPI household sampling scheme may be adequate for rapid appraisal of morbidity prevalence or nutritional status of communities, but that it may not be appropriate for surveys which cover a wider range of topics such as health care, or seek to examine the association of health or nutrition with explanatory factors such as education and socioeconomic status. Other factors such as cost and the ability to monitor interviewers' performance should also be taken into account.

Link: http://www.technet-21.org/library/main/2434-survey-publication-bennett-int-j-epi-1994

Publication: Eisele PLoS Med 2013

Publication abstract: Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used.

Link: http://www.technet-21.org/library/main/2433-survey-publication-eisele-plos-med-2013

Publication: Luman BMC Public Health 2008

Publication abstract: Background: Lack of methodological rigor can cause survey error, leading to biased results and suboptimal public health response. This study focused on the potential impact of 3 methodological "shortcuts" pertaining to field surveys: relying on a single source for critical data, failing to repeatedly visit households to improve response rates, and excluding remote areas. Methods: In a vaccination coverage survey of young children conducted in the Commonwealth of the Northern Mariana Islands in July 2005, 3 sources of vaccination information were used, multiple follow-up visits were made, and all inhabited areas were included in the sampling frame. Results are calculated with and without these strategies. Results: Most children had at least 2 sources of data; vaccination coverage estimated from any single source was substantially lower than from all sources combined. Eligibility was ascertained for 79% of households after the initial visit and for 94% of households after follow-up visits; vaccination coverage rates were similar with and without follow-up. Coverage among children on remote islands differed substantially from that of their counterparts on the main island, indicating a programmatic need for locality-specific information; excluding remote islands from the survey would have had little effect on overall estimates due to small populations and divergent results. Conclusion: Strategies to reduce sources of survey error should be maximized in public health surveys. The impact of the 3 strategies illustrated here will vary depending on the primary outcomes of interest and local situations. Survey limitations such as potential for error should be well-documented, and the likely direction and magnitude of bias should be considered.

Link: http://www.technet-21.org/library/main/2440-survey-publication-luman-bmc-2008

Publication: Miles Vaccine 2013

Publication abstract: Immunization programs frequently rely on household vaccination cards, parental recall, or both to calculate vaccination coverage. This information is used at both the global and national level for planning and allocating performance-based funds. However, the validity of household-derived coverage sources has not yet been widely assessed or discussed. To advance knowledge on the validity of different sources of immunization coverage, we undertook a global review of literature. We assessed concordance, sensitivity, specificity, positive and negative predictive value, and coverage percentage point difference when subtracting household vaccination source from a medical provider source. Median coverage difference per paper ranged from -61 to +1 percentage points between card versus provider sources and -58 to +45 percentage points between recall versus provider source. When card and recall sources were combined, median coverage difference ranged from -40 to +56 percentage points. Overall, concordance, sensitivity, specificity, positive and negative predictive value showed poor agreement, providing evidence that household vaccination information may not be reliable, and should be interpreted with care. While only 5 papers (11%) included in this review were from low-middle income countries, low-middle income countries often rely more heavily on household vaccination information for decision making. Recommended actions include strengthening quality of child-level data and increasing investments to improve vaccination card availability and card marking. There is also an urgent need for additional validation studies of vaccine coverage in low and middle income countries.

Link: http://www.technet-21.org/library/main/2439-survey-publication-miles-vaccine-2013

Publication: Luman Vaccine 2008

Publication abstract: Public health programs rely on household-survey estimates of vaccination coverage as a basis of programmatic and policy decisions; however, the validity of estimates derived from household-retained vaccination cards and parental recall has not been thoroughly evaluated. Using data from a vaccination coverage survey conducted in the Western Pacific's Northern Mariana Islands, we compared results from household data sources to medical record sources for the same children. We calculated the percentage of children aged 1, 2, and 6 years who received all vaccines recommended by age 12 months, 24 months, and for school entry, respectively. Coverage estimates based on vaccination cards ranged from 14% to 30% in the three age groups compared to 78-91% for the same children based on medical records. When cards were supplemented by parental recall, estimates were 51-53%. Concordance, sensitivity, specificity, positive and negative predictive values, and kappa statistics generally indicated poor agreement between household and medical record sources. Household-retained vaccination cards and parental recall were insufficient sources of information for estimating vaccination coverage in this population. This study emphasizes the importance of identifying reliable sources of vaccination history information and reinforces the need for awareness of the potential limitations of vaccination coverage estimated from surveys that rely on household-retained cards and/or parental recall.

Link: http://www.technet-21.org/library/main/2438-survey-luman-vaccine-2008
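
One of the agreement statistics mentioned, Cohen's kappa, can be computed from a 2 x 2 table of household versus medical-record status as sketched below; the counts are hypothetical, not the survey's data.

```python
# A minimal sketch of Cohen's kappa for agreement between a household source
# (card and/or recall) and medical records on up-to-date vaccination status.
# The 2 x 2 counts are hypothetical, not the survey data.
def cohens_kappa(both, household_only, records_only, neither):
    n = both + household_only + records_only + neither
    observed = (both + neither) / n
    expected = ((both + household_only) * (both + records_only)
                + (records_only + neither) * (household_only + neither)) / n**2
    return (observed - expected) / (1 - expected)


print(f"kappa = {cohens_kappa(both=120, household_only=15, records_only=260, neither=25):.2f}")
```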

Book: Brown 2002

Publication abstract: Background: This study aims to assess the quality of child immunization coverage estimates obtained in 101 national population-based surveys in mostly developing countries. Methods: The Demographic and Health Surveys (DHS) and UNICEF's Multiple Indicator Cluster Sample (MICS) surveys provide national immunization coverage estimates for children aged 12-23 months once every three to five years in many developing countries. The data are collected by interview from a nationally representative sample of households. 83 DHS and 18 MICS surveys were included. Findings: 85% of mothers reported that they had ever received a health card for their child. 81% still had the card at the time of the interview, and nearly two-thirds of these presented the card to the interviewer. Cards were therefore observed for 55% of children overall. Rural and less educated mothers were less likely to report receiving health cards. Recall of additional immunizations by mothers that presented a card ranged from 1 to 3%. Recall of immunizations by mothers who reported never receiving a card ranged from 9 to 32%. Coverage among those who did not show a card rarely exceeded coverage among those who did, and there was good correlation between DPT and OPV doses received according to health card and recall data. Conclusion: Though maternal recall data are known to be less accurate than health card data, we found no major systematic weaknesses in recall and believe that inclusion of recall data yields more accurate coverage estimates.

Link: http://www.technet-21.org/library/main/2436-survey-brown-global-book-2002

Bharti, 2016, Measuring populations to improve vaccination coverage

In low-income settings, vaccination campaigns supplement routine immunization but often fail to achieve coverage goals due to uncertainty about target population size and distribution. Accurate, updated estimates of target populations are rare but critical; short-term fluctuations can greatly impact population size and susceptibility. We use satellite imagery to quantify population fluctuations and the coverage achieved by a measles outbreak response vaccination campaign in urban Niger and compare campaign estimates to measurements from a post-campaign survey. Vaccine coverage was overestimated because the campaign underestimated resident numbers and seasonal migration further increased the target population. We combine satellite-derived measurements of fluctuations in population distribution with high-resolution measles case reports to develop a dynamic model that illustrates the potential improvement in vaccination campaign coverage if planners account for predictable population fluctuations. Satellite imagery can improve retrospective estimates of vaccination campaign impact and future campaign planning by synchronizing interventions with predictable population fluxes.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/3519

Brown, 2014, Lot Quality Assurance Sampling to Monitor Supplemental Immunization Activity Quality: An Essential Tool for Improving Performance in Polio Endemic Countries

Monitoring the quality of supplementary immunization activities (SIAs) is a key tool for polio eradication. Regular monitoring data, however, are often unreliable, showing high coverage levels in virtually all areas, including those with ongoing virus circulation. To address this challenge, lot quality assurance sampling (LQAS) was introduced in 2009 as an additional tool to monitor SIA quality. Now used in 8 countries, LQAS provides a number of programmatic benefits: identifying areas of weak coverage quality with statistical reliability, differentiating areas of varying coverage with greater precision, and allowing for trend analysis of campaign quality. LQAS also accommodates changes to survey format, interpretation thresholds, evaluations of sample size, and data collection through mobile phones to improve timeliness of reporting and allow for visualization of campaign quality. LQAS becomes increasingly important to address remaining gaps in SIA quality.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/3520
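
A basic, non-clustered LQAS decision rule can be illustrated with the binomial distribution, as sketched below; the lot sample size, decision value and coverage levels are assumptions for illustration, not the thresholds actually used in polio SIA monitoring.

```python
# A minimal sketch of a simple (non-clustered) LQAS decision rule: sample n
# children in a lot and classify the lot as "passing" if no more than d are
# unvaccinated. The sample size, decision value and coverage levels below are
# assumptions for illustration.
from scipy.stats import binom


def prob_lot_passes(true_coverage, n=60, d=3):
    """P(at most d unvaccinated) when each child is unvaccinated with probability 1 - coverage."""
    return binom.cdf(d, n, 1 - true_coverage)


for cov in (0.80, 0.90, 0.95):
    print(f"True coverage {cov:.0%}: lot passes with probability {prob_lot_passes(cov):.2f}")
```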

Cutts, 2013, Measuring Coverage in MNCH: Design, Implementation, and Interpretation Challenges Associated with Tracking Vaccination Coverage Using Household Surveys

Vaccination coverage is an important public health indicator that is measured using administrative reports and/or surveys. The measurement of vaccination coverage in low- and middle-income countries using surveys is susceptible to numerous challenges. These challenges include selection bias and information bias, which cannot be solved by increasing the sample size, and the precision of the coverage estimate, which is determined by the survey sample size and sampling method. Selection bias can result from an inaccurate sampling frame or inappropriate field procedures and, since populations likely to be missed in a vaccination coverage survey are also likely to be missed by vaccination teams, most often inflates coverage estimates. Importantly, the large multi-purpose household surveys that are often used to measure vaccination coverage have invested substantial effort to reduce selection bias. Information bias occurs when a child's vaccination status is misclassified due to mistakes on his or her vaccination record, in data transcription, in the way survey questions are presented, or in the guardian's recall of vaccination for children without a written record. There has been substantial reliance on the guardian's recall in recent surveys, and, worryingly, information bias may become more likely in the future as immunization schedules become more complex and variable. Finally, some surveys assess immunity directly using serological assays. Sero-surveys are important for assessing public health risk, but currently are unable to validate coverage estimates directly. To improve vaccination coverage estimates based on surveys, we recommend that recording tools and practices should be improved and that surveys should incorporate best practices for design, implementation, and analysis.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/511

Cutts, 2016, Monitoring vaccination coverage: Defining the role of surveys

Vaccination coverage is a widely used indicator of programme performance, measured by registries, routine administrative reports or household surveys. Because the population denominator and the reported number of vaccinations used in administrative estimates are often inaccurate, survey data are often considered to be more reliable. Many countries obtain survey data on vaccination coverage every 3-5 years from large-scale multi-purpose survey programs. Additional surveys may be needed to evaluate coverage in Supplemental Immunization Activities such as measles or polio campaigns, or after major changes have occurred in the vaccination programme or its context. When a coverage survey is undertaken, rigorous statistical principles and field protocols should be followed to avoid selection bias and information bias. This requires substantial time, expertise and resources; hence the role of vaccination coverage surveys in programme monitoring needs to be carefully defined. At times, programmatic monitoring may be more appropriate and provides data to guide program improvement. Practical field methods such as health facility-based assessments can evaluate multiple aspects of service provision, costs, coverage (among clinic attendees) and data quality. Similarly, purposeful sampling or censuses of specific populations can help local health workers evaluate their own performance and understand community attitudes, without trying to claim that the results are representative of the entire population. Administrative reports enable programme managers to do real-time monitoring, investigate potential problems and take timely remedial action; thus, improvement of administrative estimates is of high priority. Most importantly, investment in collecting data needs to be complemented by investment in acting on results to improve performance.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/3522

Cutts, 2016, Reply to comments on Monitoring vaccination coverage: Defining the role of surveys

Dear Editor, We thank Pond and Mounier-Jack for their comments on our paper, "Monitoring vaccination coverage: Defining the role of surveys" [1]. We agree that for many countries, administrative estimates of coverage are greatly inflated and misleading for programme planning purposes. The robustness of the WHO-UNICEF estimates of national immunization coverage (WUENIC) depends on the quality of the underlying data reviewed, which include administrative reports, as well as probability and non-probability sample surveys. In 2012, the Grade of Confidence (GoC) was introduced as a means of conveying uncertainty in WUENIC [2] and is low in the seven conflict-affected countries listed by Pond and Mounier-Jack. Table 1 shows that in five of these countries, vaccination cards were available for less than half the children surveyed; when card availability is low, it is particularly difficult to compare coverage trends. For example, in Nigeria, the proportion of children with DTP3 according to card was similar in surveys in 2010, 2011 and 2013, but in the EPI survey of 2010 a verbal history of vaccination was reported for 43% of children, more than double that of previous or subsequent surveys. Elsewhere, results from surveys did not always match expected trends (e.g. no apparent fall in coverage between surveys despite a 7-month stockout of DTP in one country), and some results were very unlikely (e.g. zero dropout between DTP1 and DTP3 in one Multiple Indicator Cluster Survey (MICS) (data from country reports at http://apps.who.int/immunization_monitoring/globalsummary/wucoveragecountrylist.html)). The updated WHO guidelines on vaccination coverage surveys (http://www.who.int/immunization/monitoring_surveillance/Vaccination_coverage_cluster_survey_with_annexes.pdf) discuss the challenges of using a new survey to compare with an older one, particularly an immunization coverage survey – these often lacked information on likely biases and confidence intervals were either not reported or not very meaningful from non-probability samples. The best way to compare results from different surveys is to plan a pair of surveys for such a purpose and work very hard to ensure standardised, well-documented and high quality data collection in both. Pond and Mounier-Jack suggest that two such surveys are feasible within each 5-year period. We would be reluctant to stipulate any particular interval as the usefulness of repeat surveys will depend in part on the likelihood of a change in coverage having occurred (which can be predicted from monitoring other indicators) [1] and the availability of accurate documentation of vaccination status on home-based or clinic records. Most of all, surveys should lead to action to strengthen programme performance and this is likely the weakest link in many countries, including those affected by conflict.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/3521

Cutts, 2016, Seroepidemiology: an underused tool for designing and monitoring vaccination programmes in low- and middle-income countries

Seroepidemiology, the use of data on the prevalence of bio-markers of infection or vaccination, is a potentially powerful tool to understand the epidemiology of infection before vaccination and to monitor the effectiveness of vaccination programmes. Global and national burden of disease estimates for hepatitis B and rubella are based almost exclusively on serological data. Seroepidemiology has helped in the design of measles, poliomyelitis and rubella elimination programmes, by informing estimates of the required population immunity thresholds for elimination. It contributes to monitoring of these programmes by identifying population immunity gaps and evaluating the effectiveness of vaccination campaigns. Seroepidemiological data have also helped to identify contributing factors to resurgences of diphtheria, Haemophilus Influenzae type B and pertussis. When there is no confounding by antibodies induced by natural infection (as is the case for tetanus and hepatitis B vaccines), seroprevalence data provide a composite picture of vaccination coverage and effectiveness, although they cannot reliably indicate the number of doses of vaccine received. Despite these potential uses, technological, time and cost constraints have limited the widespread application of this tool in low-income countries. The use of venous blood samples makes it difficult to obtain high participation rates in surveys, but the performance of assays based on less invasive samples such as dried blood spots or oral fluid has varied greatly. Waning antibody levels after vaccination may mean that seroprevalence underestimates immunity. This, together with variation in assay sensitivity and specificity and the common need to take account of antibody induced by natural infection, means that relatively sophisticated statistical analysis of data is required. Nonetheless, advances in assays on minimally invasive samples may enhance the feasibility of including serology in large survey programmes in low-income countries. In this paper, we review the potential uses of seroepidemiology to improve vaccination policymaking and programme monitoring and discuss what is needed to broaden the use of this tool in low- and middle-income countries.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/3523

Dietz, 2004, Assessing and monitoring vaccination coverage levels: lessons from the Americas

Hund, 2015, Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/1475
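As background to the clustered designs compared above, the sketch below shows how a decision rule is evaluated under the standard (non-clustered) binomial LQAS model mentioned in the abstract. The sample size, decision value and coverage thresholds are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a standard binomial LQAS decision rule (no clustering).
# All design parameters below are illustrative assumptions.
from scipy.stats import binom

def lqas_risks(n, d, p_upper, p_lower):
    """For the rule 'accept the lot if more than d of n sampled children are
    vaccinated', return (alpha, beta):
      alpha = P(reject | true coverage = p_upper)  -- risk of failing a good area
      beta  = P(accept | true coverage = p_lower)  -- risk of passing a bad area
    """
    alpha = binom.cdf(d, n, p_upper)        # d or fewer vaccinated when coverage is high
    beta = 1.0 - binom.cdf(d, n, p_lower)   # more than d vaccinated when coverage is low
    return alpha, beta

# Example: sample 60 children, accept if more than 50 are vaccinated,
# with an upper coverage threshold of 90% and a lower threshold of 75%.
alpha, beta = lqas_risks(n=60, d=50, p_upper=0.90, p_lower=0.75)
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```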

Jandee, 2015, Effectiveness of Using Mobile Phone Image Capture for Collecting Secondary Data: A Case Study on Immunization History Data Among Children in Remote Areas of Thailand

Entering data onto paper-based forms, then digitizing them, is a traditional data-management method that might result in poor data quality, especially when the secondary data are incomplete, illegible, or missing. Transcription errors from source documents to case report forms (CRFs) are common, and subsequently the errors pass from the CRFs to the electronic database. OBJECTIVE: This study aimed to demonstrate the usefulness and to evaluate the effectiveness of mobile phone camera applications in capturing health-related data, comparing data quality and completeness with the current routine practices of government officials. METHODS: In this study, the concept of "data entry via phone image capture" (DEPIC) was introduced and developed to capture data directly from source documents. This case study was based on immunization history data recorded in a mother and child health (MCH) logbook. The MCH logbooks (kept by parents) were updated whenever parents brought their children to health care facilities for immunization. Traditionally, health providers are supposed to record each child's immunization history twice: on the MCH logbook, which is returned to the parents, and on the individual immunization history card, which is kept at the health care unit to be subsequently entered into the electronic health care information system (HCIS). In this study, DEPIC utilized the photographic functionality of mobile phones to capture images of all immunization-history records on logbook pages and to transcribe these records directly into the database using a data-entry screen corresponding to logbook data records. DEPIC data were then compared with HCIS data-points for quality, completeness, and consistency. RESULTS: As a proof-of-concept, DEPIC captured immunization history records of 363 ethnic children living in remote areas from their MCH logbooks. Comparison of the 2 databases, DEPIC versus HCIS, revealed differences in the percentage of completeness and consistency of immunization history records. Comparing the records of each logbook in the DEPIC and HCIS databases, 17.3% (63/363) of children had complete immunization history records in the DEPIC database, but no complete records were reported in the HCIS database. Regarding the individual's actual vaccination dates, comparison of records taken from the MCH logbook and those in the HCIS found that 24.2% (88/363) of the children's records were absolutely inconsistent. In addition, statistics derived from the DEPIC records showed higher immunization coverage and much greater compliance with the immunization schedule by age group when compared to records derived from the HCIS database. CONCLUSIONS: DEPIC, or the concept of collecting data via image capture directly from primary sources, has proven to be a useful data collection method in terms of completeness and consistency. In this study, DEPIC was implemented for data collection in a single survey. The DEPIC concept, however, can be easily applied in other types of survey research, for example, collecting data on changes or trends based on image evidence over time. With its image evidence and audit trail features, DEPIC has the potential to be used even in clinical studies, since it could generate improved data integrity and more reliable statistics for use in both health care and research settings.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/1469

Lessler, 2011, Measuring the Performance of Vaccination Programs Using Cross-Sectional Surveys: A Likelihood Framework and Retrospective Analysis

The performance of routine and supplemental immunization activities is usually measured by the administrative method: dividing the number of doses distributed by the size of the target population. This method leads to coverage estimates that are sometimes impossible (e.g., vaccination of 102% of the target population), and are generally inconsistent with the proportion found to be vaccinated in Demographic and Health Surveys (DHS). We describe a method that estimates the fraction of the population accessible to vaccination activities, as well as within-campaign inefficiencies, thus providing a consistent estimate of vaccination coverage. Methods and Findings: We developed a likelihood framework for estimating the effective coverage of vaccination programs using cross-sectional surveys of vaccine coverage combined with administrative data. We applied our method to measles vaccination in three African countries: Ghana, Madagascar, and Sierra Leone, using data from each country's most recent DHS survey and administrative coverage data reported to the World Health Organization. We estimate that 93% (95% CI: 91, 94) of the population in Ghana was ever covered by any measles vaccination activity, 77% (95% CI: 78, 81) in Madagascar, and 69% (95% CI: 67, 70) in Sierra Leone. "Within-activity" inefficiencies were estimated to be low in Ghana, and higher in Sierra Leone and Madagascar. Our model successfully fits age-specific vaccination coverage levels seen in DHS data, which differ markedly from those predicted by naïve extrapolation from country-reported and World Health Organization–adjusted vaccination coverage. Conclusions: Combining administrative data with survey data substantially improves estimates of vaccination coverage. Estimates of the inefficiency of past vaccination activities and the proportion not covered by any activity allow us to more accurately predict the results of future activities and provide insight into the ways in which vaccination programs are failing to meet their goals.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/3546
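The sketch below is a highly simplified illustration in the spirit of the likelihood framework described above; it is not the authors' model. It assumes a fraction f of children is accessible to vaccination activities and that each activity reaches an accessible child with probability e, then fits (f, e) to made-up survey counts by maximum likelihood.

```python
# Highly simplified sketch (illustrative assumptions, not the authors' model):
# each child is accessible with probability f; each vaccination activity it is
# exposed to reaches an accessible child with probability e. Survey data give
# the number vaccinated among n sampled children per cohort; we maximize a
# binomial likelihood over (f, e).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

# Made-up survey data: cohorts exposed to 1, 2, and 3 vaccination activities.
activities = np.array([1, 2, 3])
n_sampled = np.array([300, 300, 300])
n_vaccinated = np.array([180, 230, 255])

def neg_log_lik(params):
    f, e = params
    p_vacc = f * (1.0 - (1.0 - e) ** activities)   # P(ever vaccinated) per cohort
    return -np.sum(binom.logpmf(n_vaccinated, n_sampled, p_vacc))

fit = minimize(neg_log_lik, x0=[0.8, 0.5], bounds=[(0.01, 0.999), (0.01, 0.999)])
f_hat, e_hat = fit.x
print(f"accessible fraction ~ {f_hat:.2f}, per-activity efficiency ~ {e_hat:.2f}")
```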

Liu, 2013, Measuring Coverage in MNCH: A Validation Study Linking Population Survey Derived Coverage to Maternal, Newborn, and Child Health Care Records in Rural China

Accurate data on coverage of key maternal, newborn, and child health (MNCH) interventions are crucial for monitoring progress toward the Millennium Development Goals 4 and 5. Coverage estimates are primarily obtained from routine population surveys through self-reporting, the validity of which is not well understood. We aimed to examine the validity of the coverage of selected MNCH interventions in Gongcheng County, China. Methods and Findings: We conducted a validation study by comparing women's self-reported coverage of MNCH interventions relating to antenatal and postnatal care, mode of delivery, and child vaccinations in a community survey with their paper- and electronic-based health care records, treating the health care records as the reference standard. Of 936 women recruited, 914 (97.6%) completed the survey. Results show that self-reported coverage of these interventions had moderate to high sensitivity (0.57 [95% confidence interval (CI): 0.50–0.63] to 0.99 [95% CI: 0.98–1.00]) and low to high specificity (0 to 0.83 [95% CI: 0.80–0.86]). Despite varying overall validity, with the area under the receiver operating characteristic curve (AUC) ranging between 0.49 [95% CI: 0.39–0.57] and 0.90 [95% CI: 0.88–0.92], bias in the coverage estimates at the population level was small to moderate, with the test to actual positive (TAP) ratio ranging between 0.8 and 1.5 for 24 of the 28 indicators examined. Our ability to accurately estimate validity was affected by several caveats associated with the reference standard. Caution should be exercised when generalizing the results to other settings. Conclusions: The overall validity of self-reported coverage was moderate across selected MNCH indicators. However, at the population level, self-reported coverage appears to have a small to moderate degree of bias. Accuracy of the coverage was particularly high for indicators with high recorded coverage, or with low recorded coverage but high specificity. The study provides insights into the accuracy of self-reports based on a population survey in low- and middle-income countries. Similar studies applying an improved reference standard are warranted in the future.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/3524
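For readers unfamiliar with the validation metrics quoted above, the sketch below computes sensitivity, specificity and the test-to-actual-positive (TAP) ratio from a two-by-two cross-classification of self-report against health care records; the counts are invented for illustration and are not from the study.

```python
# Minimal sketch (illustrative counts, not study data): validation metrics for a
# self-reported indicator checked against health care records.

def validation_metrics(tp, fp, fn, tn):
    """tp/fp/fn/tn cross-classify self-report (test) against records (actual)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # TAP ratio: coverage estimated from self-report divided by recorded coverage.
    tap_ratio = (tp + fp) / (tp + fn)
    return sensitivity, specificity, tap_ratio

# Example: 700 women whose records show receipt of an intervention, of whom 650
# report it; 214 whose records do not, of whom 40 report it anyway.
sens, spec, tap = validation_metrics(tp=650, fp=40, fn=50, tn=174)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}, TAP ratio = {tap:.2f}")
```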

Luman, 2007, Use and abuse of rapid monitoring to assess coverage during mass vaccination campaigns

This article describes the intended use of the Rapid Coverage Monitoring tool and some of the ways in which it has been misused.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/760

MacNeil, 2014, Issues and considerations in the use of serologic biomarkers for classifying vaccination history in household surveys

Accurate estimates of vaccination coverage are crucial for assessing routine immunization program performance. Community based household surveys are frequently used to assess coverage within a country. In household surveys to assess routine immunization coverage, a child's vaccination history is classified on the basis of observation of the immunization card, parental recall of receipt of vaccination, or both; each of these methods has been shown to commonly be inaccurate. The use of serologic data as a biomarker of vaccination history is a potential additional approach to improve accuracy in classifying vaccination history. However, potential challenges, including the accuracy of serologic methods in classifying vaccination history, varying vaccine types and dosing schedules, and logistical and financial implications, must be considered. We provide historic and scientific context for the potential use of serologic data to assess vaccination history and discuss in detail key areas of importance for consideration in the context of using serologic data for classifying vaccination history in household surveys. Further studies are needed to directly evaluate the performance of serologic data compared with use of immunization cards or parental recall for classification of vaccination history in household surveys, as well as to assess the impact of age at the time of sample collection on serologic titers, the predictive value of serology to identify a fully vaccinated child for multi-dose vaccines, and the cost impact and logistical issues associated with different types of biological samples for serologic testing.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/3531

Ngandu, 2016, Does adjusting for recall in trend analysis affect coverage estimates for maternal and child health indicators? An analysis of DHS and MICS survey data

The Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS) are the major data sources in low- and middle-income countries (LMICs) for evaluating health service coverage. For certain maternal and child health (MCH) indicators, the two surveys use different recall periods: 5 years for DHS and 2 years for MICS. Objective: We explored whether the different recall periods for DHS and MICS affect coverage trend analyses as well as missing data and coverage estimates. Design: We estimated coverage, using proportions with 95% confidence intervals, for four MCH indicators: intermittent preventive treatment of malaria in pregnancy, tetanus vaccination, early breastfeeding and postnatal care. Trends in coverage were compared using data from 1) standard 5-year DHS and 2-year MICS recall periods (unmatched) and 2) DHS restricted to 2-year recall to match the MICS 2-year recall periods (matched). Linear regression was used to explore the relationship between length of recall, missing data and coverage estimates. Results: Differences in coverage trends were observed between matched and unmatched data in 7 of 18 (39%) comparisons performed. The differences were in the direction of the trend over time, the slope of the coverage change or the significance levels. Consistent trends were seen in 11 of the 18 (61%) comparisons. Proportion of missing data was inversely associated with coverage estimates in both short (2 years) and longer (5 years) recall of the DHS (r = −0.3, p = 0.02 and r = −0.4, p = 0.004, respectively). The amount of missing information was increased for longer recall compared with shorter recall for all indicators (significant odds ratios ranging between 1.44 and 7.43). Conclusions: In a context where most LMICs are dependent on population-based household surveys to derive coverage estimates, users of these types of data need to ensure that variability in recall periods and the proportion of missing data across data sources are appropriately accounted for when trend analyses are conducted.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/3525

Okayasu, 2014, Cluster Lot Quality Assurance Sampling: Effect of Increasing the Number of Clusters on Classification Precision and Operational Feasibility

To assess the quality of supplementary immunization activities (SIAs), the Global Polio Eradication Initiative (GPEI) has used cluster lot quality assurance sampling (C-LQAS) methods since 2009. However, since the inception of C-LQAS, questions have been raised about the optimal balance between operational feasibility and precision of classification of lots to identify areas with low SIA quality that require corrective programmatic action. METHODS: To determine if an increased precision in classification would result in differential programmatic decision making, we conducted a pilot evaluation in 4 local government areas (LGAs) in Nigeria with an expanded LQAS sample size of 16 clusters (instead of the standard 6 clusters) of 10 subjects each. RESULTS: The results showed greater heterogeneity between clusters than the assumed standard deviation of 10%, ranging from 12% to 23%. Comparing the distribution of 4-outcome classifications obtained from all possible combinations of 6-cluster subsamples to the observed classification of the 16-cluster sample, we obtained an exact match in classification in 56% to 85% of instances. CONCLUSIONS: We concluded that the 6-cluster C-LQAS provides acceptable classification precision for programmatic action. Considering the greater resources required to implement an expanded C-LQAS, the improvement in precision was deemed insufficient to warrant the effort.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/3527
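The comparison described above, classifying every possible 6-cluster subsample of a 16-cluster sample and checking agreement with the full-sample result, can be illustrated with the sketch below. The per-cluster counts and the classification cut-offs are hypothetical and are not the GPEI decision rules.

```python
# Minimal sketch (hypothetical data and cut-offs): enumerate all 6-cluster
# subsamples of a 16-cluster C-LQAS sample (10 children each) and compare their
# classifications with the classification from the full 16-cluster sample.
from itertools import combinations

# Vaccinated children out of 10 in each of 16 surveyed clusters (made up).
cluster_counts = [9, 10, 8, 7, 10, 9, 6, 10, 9, 8, 10, 7, 9, 10, 8, 9]

def classify(vaccinated, sampled):
    """Classify estimated SIA quality into one of four bands (hypothetical cut-offs)."""
    coverage = vaccinated / sampled
    if coverage >= 0.95:
        return ">=95%"
    if coverage >= 0.90:
        return "90-95%"
    if coverage >= 0.80:
        return "80-90%"
    return "<80%"

full_class = classify(sum(cluster_counts), 10 * len(cluster_counts))

subsamples = list(combinations(cluster_counts, 6))   # all C(16, 6) = 8008 subsamples
matches = sum(1 for sub in subsamples if classify(sum(sub), 60) == full_class)

print(f"16-cluster classification: {full_class}")
print(f"6-cluster subsamples agreeing: {matches}/{len(subsamples)} "
      f"({100 * matches / len(subsamples):.0f}%)")
```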

Pond, 2016, Comments on “Monitoring vaccination coverage: Defining the role of surveys”

Response to an article - Dear Editors, Felicity Cutts and co-authors [Vaccine 34 (2016) 4103–4109] provide a good overview of the role of household surveys in monitoring immunization coverage. More should be said, however, about the optimal monitoring strategy for lower-coverage countries. The most recent WHO/UNICEF estimates of national immunization coverage (WUENIC) suggest that for 26 (57%) of 46 countries with 2015 DTP3 coverage below 85%, the administrative data over-estimate coverage by 10 to as much as 40 percentage points. Of course, as the article points out, coverage surveys provide an imperfect "gold standard" with which to assess even national (let alone sub-national) immunization coverage. However, when national coverage is low it is essential to periodically attempt to validate the administrative coverage estimate with an estimate from a high quality nationwide household survey.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/3529

Rhoda, 2010, LQAS: User Beware

Researchers around the world are using Lot Quality Assurance Sampling (LQAS) techniques to assess public health parameters and evaluate program outcomes. In this paper, we report that there are actually two methods being called LQAS in the world today, and that one of them is badly flawed. Methods: This paper reviews fundamental LQAS design principles, and compares and contrasts the two LQAS methods. We raise four concerns with the simply-written, freely-downloadable training materials associated with the second method. Results: The first method is founded on sound statistical principles and is carefully designed to protect the vulnerable populations that it studies. The language used in the training materials for the second method is simple, but not at all clear, so the second method sounds very much like the first. On close inspection, however, the second method is found to promote study designs that are biased in favor of finding programmatic or intervention success, and therefore biased against the interests of the population being studied. Conclusion: We outline several recommendations, and issue a call for a new high standard of clarity and face validity for those who design, conduct, and report LQAS studies.

Link: www.technet-21.org/en/library/main/explore/programme-management/3407

Travassos, 2016, Immunization Coverage Surveys and Linked Biomarker Serosurveys in Three Regions of Ethiopia
Weber, 2009, Consultancy services for conducting an evaluation of immunization coverage monitoring methodology and process

Good quality immunization data are crucial for accurate monitoring of progress towards immunization-related targets. The accuracy of immunization data has raised serious concerns: immunisation coverage figures from various sources referring to the same or similar geographical area or target group are often inconsistent. The survey aims to describe the perceptions and experience of selected immunisation stakeholders in relation to the use and quality of immunisation coverage data and ways to improve them. A web-based questionnaire was developed, piloted and sent to around 250 institutions involved in immunisation programmes, including funding and research agencies, health policy decision makers, technical experts, and managers of immunisation programmes in 80 countries. This report presents data from 55 responses, mainly from EPI managers and WHO/UNICEF offices at country level. Further information expected from global funding agencies, research institutions and technical organisations will be included in the final survey report. Findings have to be interpreted with caution because responses may not necessarily reflect true opinions or facts. Administrative data are the most commonly used data source.

Link: www.technet-21.org/en/library/main/explore/immunization-information-systems-coverage-monitoring/3533

Title Author Year Type Language
Assessing and monitoring vaccination coverage levels: lessons from the Americas. Dietz, Vance; Venczel, Linda; Izurieta, Héctor; Stroh, George; Zell, Elizabeth R; Monterroso, Edgar & Tambini, Gina 2004 Journal article English
Assessing equivalence: an alternative to the use of difference tests for measuring disparities in vaccination coverage. Barker 2002 Journal article English
Bennett Int J Epi 1994 Bennett 1994 Journal article English
Brown 2002 Brown 2002 Journal article English
Burton Lancet 2009 Burton 2009 Journal article English
Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello 2015 Journal article English
Cluster Lot Quality Assurance Sampling: Effect of Increasing the Number of Clusters on Classification Precision and Operational Feasibility Hiromasa Okayasu, Alexandra E. Brown, Michael M. Nzioki, Alex N. Gasasira, Marina Takane, Pascal Mkanda, Steven G. F. Wassilak and Roland W. Sutter 2014 Journal article English
Comments on "Monitoring vaccination coverage: Defining the role of surveys" Pond R, Mounier-Jack S 2016 Journal article English
Consultancy services for conducting an evaluation of immunisation coverage monitoring methodology and process Wolfgang Weber, EHG; Xavier Bosch-Capblanch, SCIH/STI 2009 Document English
Dean J survey stats & methods 2015 Dean 2015 Journal article English
Does adjusting for recall in trend analysis affect coverage estimates for maternal and child health indicators? An analysis of DHS and MICS survey data Ngandu NK, Manda S, Besada D, Rohde S, Oliphant NP, Doherty T 2016 Journal article English
Effectiveness of Using Mobile Phone Image Capture for Collecting Secondary Data: A Case Study on Immunization History Data Among Children in Remote Areas of Thailand Jandee, Kasemsak; Kaewkungwal, Jaranit; Khamsiriwatchara, Amnat; Lawpoolsri, Saranath; Wongwit, Waranya; Wansatid, Peerawat 2015 Journal article English
Eisele PLoS Med 2013 Eisele 2013 Journal article English
Gareaballah WHO Bulletin 1989 Gareaballah 1989 Journal article English
Grais Emerg Themes Epi 2007 Grais 2007 Journal article English
Hancioglu PLoS Med 2013 Hancioglu 2013 Journal article English
Henderson WHO Bulletin 1982 Henderson 1982 Journal article English
Immunization Coverage Surveys and Linked Biomarker Serosurveys in Three Regions in Ethiopia Travassos MA, Beyene B, Adam Z, Campbell JD, Mulholland N, Diarra SS, Kassa T, Oot L, Sequeira J, Reymann M, Blackwelder WC, Wu Y, Ruslanova I, Goswami J, Sow SO, Pasetti MF, Steinglass R, Kebede A, Levine MM 2016 Journal article English
Issues and considerations in the use of serologic biomarkers for classifying vaccination history in household surveys Adam MacNeil, Chung-won Lee, Vance Dietz 2014 Journal article English
LQAS: User Beware Rhoda, Dale A; Fernandez, Soledad A; Fitch, David J; Lemeshow, Stanley 2010 Journal article English
Langsten Soc Sci Med 1998 Langsten 1998 Journal article English
Lemeshow Int J Epi 1985 Lemeshow 1985 Journal article English
Lot Quality Assurance Sampling to Monitor Supplemental Immunization Activity Quality: An Essential Tool for Improving Performance in Polio Endemic Countries Brown AE, Okayasu H, Nzioki MM, Wadood MZ, Chabot-Couture G, Quddus A, Walker G, Sutter RW 2014 Journal article English
Luman BMC Public Health 2008 Luman 2008 Journal article English
Luman Int J Epi 2007 Luman 2007 Journal article English
Luman Vaccine 2008 Luman 2008 Journal article English
Measuring Coverage in MNCH: A Validation Study Linking Population Survey Derived Coverage to Maternal, Newborn, and Child Health Care Records in Rural China Li Liu, Mengying Li, Li Yang, Lirong Ju, Biqin Tan, Neff Walker, Jennifer Bryce, Harry Campbell, Robert E. Black, Yan Guo 2013 Journal article English
Measuring coverage in MNCH: design, implementation, and interpretation challenges associated with tracking vaccination coverage using household surveys. Cutts, Felicity T; Izurieta, Hector S & Rhoda, Dale A 2013 Journal article English
Measuring populations to improve vaccination coverage Nita Bharti, Ali Djibo, Andrew J. Tatem, Bryan T. Grenfell & Matthew J. Ferrari 2016 Journal article English
Measuring the performance of vaccination programs using cross-sectional surveys: a likelihood framework and retrospective analysis. Lessler J, Metcalf CJE, Grais RF, Luquero FJ, Cummings DAT, Grenfell BT 2011 Journal article English
Miles Vaccine 2013 Miles 2013 Journal article English
Minetti Emerg Themes Epi 2012 Minetti 2012 Journal article English
Monitoring vaccination coverage: Defining the role of surveys Cutts FT, Claquin P, Danovaro-Holliday MC, Rhoda DA 2016 Journal article English
Murray Lancet 2003 Murray 2003 Journal article English
Reply to comments on Monitoring vaccination coverage: Defining the role of surveys. Cutts FT, Claquin P, Danovaro-Holliday MC, Rhoda DA 2016 Journal article English
Seroepidemiology: an underused tool for designing and monitoring vaccination programmes in low- and middle-income countries Cutts FT, Hanson M 2016 Journal article English
Turner Int J Epi 1996 Turner 1996 Journal article English
Use and abuse of rapid monitoring to assess coverage during mass vaccination campaigns. Luman, Elizabeth T; Cairns, K Lisa; Perry, Robert; Dietz, Vance & Gittelman, David 2007 Document English
Valadez Am J Pub Health 1992 Valadez 1992 Journal article English