Both measles and rubella infections generate detectable IgM within the first few days after onset of clinical signs, and the proportion of cases that can be confirmed through IgM detection by EIA usually peaks at 4-5 days after onset (see chapter 4). An IgM EIA is the recommended assay to confirm measles and rubella cases in the GMRLN because it can be performed with a high degree of sensitivity and specificity on a single blood sample collected from a suspected case at the first contact with the health facility. A trouble-shooting guide is provided in Annex 12.1.

A number of commercial IgM EIAs for measles and rubella have been independently assessed (Annex 4.1); both capture and indirect format assays were reported to have similar performance. However, EIAs are biological assays, and their performance can be affected both by the quality of the assay at the time the test is carried out and by how well the technician performs the test. Locally produced kits must be evaluated for accuracy. Recommendations for conducting an evaluation of EIAs for IgM detection by network laboratories are provided in Annex 12.2.

Several approaches should be taken to monitor the precision and accuracy of results obtained from EIAs for detection of measles or rubella IgM antibodies. Each of the following methods is discussed in detail below:
1. Quality control programme
2. Confirmatory testing
3. Internal quality assurance
4. Supervisory review and evaluation

(1) Quality control programme

A quality control (QC) programme is a systematic process that monitors the validity of an assay by incorporating a method to measure accuracy and precision. While quality assurance (QA) includes activities or measures taken to prevent errors, QC activities are designed to detect errors. QC steps are completed prior to release of the test result. Ideally, three QC samples or controls should be selected to represent high, normal and low values. The aim is to include controls that are sufficient to differentiate between normal variation and technical errors.

Guidance for troubleshooting and steps for corrective action must be included in the QC programme. The performance of any assay can be influenced by a multitude of factors including the variability of some lots of microplates, and the performance of the components or reagents in the assay. External factors, such as the operator, incubation time and temperature, and the accuracy of delivery devices must be taken into account. For this reason, it is critical to maintain equipment maintenance records, pipette calibration certifications and logbooks for daily temperature measurements of incubators/refrigerators/freezers to avoid variation due to mechanical problems. In addition, these records provide evidence of adherence to QA procedures that are assessed during WHO evaluations as well as internal audits.

QC materials such as kit controls and in-house controls (IHC) are run to quantify the normal variability of an assay by establishing a normal range. Quality control material should be available in sufficient quantities to minimize the number of times that control ranges must be established. The QC mean should be established by running the QC sample 20 times to quantify normal variation and establish ranges for each QC sample. For more information, refer to the WHO QMS Handbook, section 7-3, Establishing the control range for control materials [1].
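The control-range calculation described above can be sketched as follows. This is a minimal illustration, not a validated QC tool: the OD readings are hypothetical, and the mean ± 2SD limits shown are one common convention; laboratories should follow the WHO QMS Handbook procedure when fixing their own ranges.

```python
import statistics

def control_range(values, sd_multiplier=2):
    """Derive a control range (mean +/- k*SD) from repeated runs of a QC sample.

    `values` are OD readings from independent runs of an in-house control;
    at least 20 runs are recommended before a range is fixed.
    """
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation
    return mean - sd_multiplier * sd, mean + sd_multiplier * sd

# Hypothetical OD readings for a positive in-house control across 20 runs
runs = [0.81, 0.79, 0.84, 0.77, 0.80, 0.83, 0.78, 0.82, 0.80, 0.79,
        0.85, 0.76, 0.81, 0.80, 0.82, 0.78, 0.83, 0.79, 0.81, 0.80]

low, high = control_range(runs, sd_multiplier=2)
print(f"Acceptable range (mean +/- 2SD): {low:.3f} to {high:.3f}")
```

Once the range is established, each day's control value is checked against these limits before sample results are released.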

For IgM testing, statistical analysis can be used for the daily monitoring of values obtained from IHC samples. Use of a Levey–Jennings chart is recommended to visualize the performance range of an assay and to monitor trends or variation in assay performance. If controls are out of range, troubleshooting and corrective action should be undertaken, the problem resolved and the test repeated before assay results are reported. Westgard rules can be applied to determine whether the results from the IHC sample are valid and sample results can be reported, or whether they need to be rerun. Westgard rules help to avoid rejecting runs that may be acceptable for a particular assay and can be used to detect both random and systematic errors. For additional guidance on the use of Levey–Jennings charts and Westgard rules, see Annex 12.3.
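Two of the most widely used Westgard rules can be sketched as below. The daily IHC values, the established mean and the SD are all hypothetical, and only the 1-3s and 2-2s rules are shown; laboratories should apply the full rule set adopted in their own QC programme (see Annex 12.3).

```python
def westgard_flags(observations, mean, sd):
    """Flag control values that violate two common Westgard rules.

    1_3s: a single value beyond mean +/- 3SD (suggests random error; reject run)
    2_2s: two consecutive values beyond mean +/- 2SD on the same side
          (suggests systematic error; reject run)
    """
    flags = []
    for i, x in enumerate(observations):
        z = (x - mean) / sd
        if abs(z) > 3:
            flags.append((i, "1_3s"))
        if i > 0:
            z_prev = (observations[i - 1] - mean) / sd
            if (z > 2 and z_prev > 2) or (z < -2 and z_prev < -2):
                flags.append((i, "2_2s"))
    return flags

# Hypothetical daily IHC values against an established mean of 0.80, SD 0.02
daily = [0.81, 0.79, 0.85, 0.85, 0.80, 0.90]
print(westgard_flags(daily, mean=0.80, sd=0.02))
```

Here the two consecutive values of 0.85 trigger the 2-2s rule and the value of 0.90 triggers the 1-3s rule, so both runs would be held for troubleshooting before results are reported.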

The QC protocols include guidelines for addressing any detected errors (troubleshooting) and specify the corrective action and any remedial action that may be required. Remedial action, or remediation, addresses any consequences that may result from an error. For example, if an erroneous result has been reported, it is essential to immediately notify all persons concerned about this error and to provide the correct result. Corrective actions address the cause of the error.

If a test was done incorrectly, resulting in an incorrect result, corrective actions identify why the test was not performed properly and the steps required to prevent a recurrence. For example, if the IHC and kit control of an assay have been giving results that are >3 standard deviations from the mean values, the corrective action would be to identify the cause of the problem, rectify it and repeat the assay.

(2) Confirmatory testing

Confirmatory testing (retesting and validation of a subset of a national laboratory's samples by a designated reference laboratory) provides an appropriate external measure of a laboratory's performance over a much longer period of time. Specimens sent for validation should be representative of all results determined by the national laboratory (positive, negative and equivocal), be chronologically and geographically representative of the country, and be selected from multiple outbreaks, if applicable.

The proportion of specimens sent to the RRL for confirmatory testing depends on the quality of the laboratory and may range from 10-100%, with the lower proportion for a fully accredited laboratory and 100% for a laboratory that has failed accreditation. In general, for fully accredited laboratories, a minimum of 15 samples per shipment should be sent annually, or more frequently if requested by the RLC. The proposed list of selected samples (number and distribution) should be shared with the RLC and Reference Laboratory Director for endorsement before being sent to the RRL.
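One way to draw a validation subset that reflects the laboratory's result mix can be sketched as follows. This is only an illustrative approach, not a prescribed GMRLN procedure: the function name, the proportional-allocation logic and the example data are all assumptions, and the final selection would still need RLC endorsement.

```python
import random

def select_for_confirmation(results, n=15, seed=0):
    """Pick a validation subset roughly proportional to the result mix.

    `results` maps sample IDs to their IgM result ("positive", "negative"
    or "equivocal"). The 15-sample minimum follows the text; stratified
    proportional allocation is one possible selection strategy.
    """
    random.seed(seed)  # reproducible selection for documentation purposes
    by_result = {}
    for sample_id, result in results.items():
        by_result.setdefault(result, []).append(sample_id)
    selection = []
    for result, ids in by_result.items():
        # Allocate slots proportionally, keeping at least one per category
        k = max(1, round(n * len(ids) / len(results)))
        selection.extend(random.sample(ids, min(k, len(ids))))
    return selection[:n]
```

Geographic and chronological representativeness would require further stratification (e.g. by province and month), which is omitted here for brevity.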

As with any shipment of samples, the sending laboratory should communicate with the receiving laboratory to determine the optimum time to send and by which means. All samples shipped should be a minimum of 200 μl, with at least 150 μl of the original samples kept back in the national laboratory in case of the need for retesting.

The reference laboratory should ideally use the same commercial EIA as the national laboratory and provide turnaround of results within 14 days of receipt to ensure that any issues detected with the confirmatory testing are resolved as quickly as possible. The optical density values may show considerable variation between the RRL and the NL but concordance of final results should be high, with the exception of results in the equivocal range or for positive and negative samples with test values close to the cut-offs. For the Siemens assay, a formula to estimate concordance and discordance between two tests of the same sample is provided in Annex 12.4.

Any discordant result should be retested by the RRL; if the discordance remains, the RRL should consult with the RLC and National Laboratory Director to identify a course of action to identify possible causes. The first step for the national laboratory with any discordant sample is to retest it and report the result to the RRL and RLC. Although 90% concordance is the passing score for confirmatory testing, laboratories should carefully review every discordant result and try to resolve any issues detected. Careful documentation of the discordant result and the process to resolve it should be kept; this will be reviewed at the time of the on-site review.

(3) Internal quality assurance

Reproducibility of results can be evaluated by re-labelling one or more aliquots (replicates) of a laboratory sample, which are then tested as separate samples. This procedure monitors the performance of an assay as well as the person performing the assay. The person responsible for QA or the laboratory director should establish a schedule and identify a sufficient number of samples to be tested as replicates. By testing replicate samples in the same assay and/or in different assays, intra- and inter-assay variation can be determined respectively and coefficients of variation (CVs) calculated. It is generally accepted that intra-assay CVs should be less than 10% and inter-assay CVs less than 15%.
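The CV calculation can be sketched as follows; the replicate OD readings are hypothetical, and the <10% and <15% thresholds are the generally accepted limits quoted above.

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = 100 * SD / mean: a unitless measure of assay reproducibility."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate OD readings of one re-labelled sample
within_run = [1.02, 0.98, 1.05, 1.00]   # same assay run (intra-assay)
between_run = [1.02, 0.91, 1.10, 0.91]  # separate assay runs (inter-assay)

intra_cv = coefficient_of_variation(within_run)
inter_cv = coefficient_of_variation(between_run)
print(f"intra-assay CV: {intra_cv:.1f}% (target < 10%)")
print(f"inter-assay CV: {inter_cv:.1f}% (target < 15%)")
```

A CV above the relevant threshold would prompt troubleshooting of the assay, the operator's technique or both before results from that run are reported.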

(4) Supervisory review and evaluation

In performing an assay, one of the most common sources of error, and usually among the easiest to address, is the operator. Poor performance may not be due to the operator's skill level or failure to follow the SOP correctly, but rather may be due to other factors. Potential factors that can affect operator performance include:
• Time pressure to complete a heavy workload
• Distractions of phone calls or colleagues interrupting workflow
• Incorrectly labelled samples or selection of incorrect samples

It is critical that a supervisor or other highly experienced staff member review the performance of every assay and confirm that all the validity criteria are met and that no transcription errors have occurred before signing the worksheet. Sample results should not be reported without supervisory sign-off that the assay has been performed according to accepted criteria. Errors may also be due to poorly or incorrectly written procedures or SOPs, or to failed or out-of-calibration equipment.