This page was developed by M. Peyre, A. Delabouglise and C. Calba, CIRAD-AGIRs.
Acceptability and Engagement
- Qualitative assessment methods
| Method type | References | Strengths | Limits |
|---|---|---|---|
| Opinion survey | Nsubuga et al., 2002; Riera-Montes and Velicko, 2011 | Rapid and not too resource-consuming | Limited flexibility; limited understanding of the factors affecting acceptability |
| Participatory approach | Sawford et al., 2012; Bronner et al., 2014 | Helps identify the factors influencing reporting attitudes and the perception of surveillance | Time-consuming; purely qualitative |
- Semi-quantitative assessment methods
| Method type | References | Strengths | Limits |
|---|---|---|---|
| Structured questionnaire survey (OASIS Fr, OASIS En) | Hendrikx et al., 2011 | Helps identify targeted corrective actions | Limited flexibility; based on pre-defined requirement criteria which may not apply to all cases |
| Participatory approach | Elbers et al., 2010; Paterson et al., 2012 | Helps identify the factors influencing reporting attitudes and the perception of surveillance | Time-consuming |
| Participatory approach (AccePT) | Calba et al., 2015 | Well-documented, step-by-step method; semi-quantification of the level of acceptability per actor and per aspect of the system; provides context-dependent recommendations and information related to the context | Time-consuming; specific training required; highly dependent on stakeholders' willingness to participate |
- Quantitative assessment methods
| Method type | References | Strengths | Limits |
|---|---|---|---|
| Conjoint analysis (see the sketch below) | Delabouglise et al., 2015; Pham et al., 2016 (submitted) | Quantitative estimation of the factors (preferences and anticipations) affecting acceptability, either positively or negatively | Time-consuming; specific training required; highly dependent on stakeholders' willingness to participate; failure to collect relevant data may occur |
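To illustrate the principle only, here is a minimal conjoint-analysis sketch in Python. The data, the attributes ("compensation", "reporting") and their levels are hypothetical and not taken from the cited studies; part-worth utilities are estimated by ordinary least squares on dummy-coded attributes.

```python
# Minimal conjoint-analysis sketch (hypothetical data, not from the cited
# studies): respondents rate surveillance-scheme profiles defined by two
# attributes, and part-worth utilities are estimated by dummy-coded OLS.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical rating data: each row is one respondent x profile rating.
data = pd.DataFrame({
    "rating":       [7, 4, 6, 2, 8, 3, 5, 1],          # stated acceptability, 1-10
    "compensation": ["full", "none", "full", "none"] * 2,
    "reporting":    ["hotline", "hotline", "visit", "visit"] * 2,
})

# OLS with categorical (dummy) coding; coefficients are part-worth utilities
# relative to the reference level of each attribute.
model = smf.ols("rating ~ C(compensation) + C(reporting)", data=data).fit()
print(model.params)   # e.g. a positive coefficient for full compensation
                      # means it increases acceptability of the scheme
```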
Availability and sustainability
- Qualitative methods
| Method type | References | Strengths | Limits |
|---|---|---|---|
| Opinion survey | Clothier HJ, et al., 2005 | | Based on individual perception |
| Structured questionnaire survey | Hendrikx et al., 2011 | Helps identify targeted corrective actions | Limited flexibility; based on pre-defined requirement criteria which may not apply to all cases |
Bias
- Quantitative assessment methods
| Method type | References | Strengths | Limits |
|---|---|---|---|
| Multilist CRC (see the sketch below) | Hook EB, 1995 (human health); Del Rio Vilas VJ, Pfeiffer DU, 2010 (animal health); Vergne T, 2015 (animal health) | Quantitative estimation of the bias; may also allow identification of the variables significantly associated with the under-reporting rate | Needs data produced by multiple surveillance components; the components must not be mutually exclusive |
| Unilist CRC (see the sketch below) | Del Rio Vilas VJ, Böhning D, 2008; Hook EB, 1995 (human health); Vergne T, 2015 (animal health) | Quantitative estimation of the bias; may also allow identification of the variables significantly associated with the under-reporting rate | Needs data allowing successive detections, by the surveillance system, of the epidemiological units presenting the characteristics of interest |
| Data-driven mathematical model | Baguelin, 2013 | Allows other transmission parameters to be inferred at the same time | Demanding in terms of computing power and programming skills |
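As a pointer to how these estimators work in practice, below is a minimal capture-recapture (CRC) sketch in Python. The counts are illustrative and not taken from the cited studies; the multilist part uses the classic two-source Chapman estimator, and the unilist part uses Chao's lower-bound estimator based on repeated detections of the same units.

```python
# Minimal capture-recapture (CRC) sketch with illustrative numbers (not from
# the cited studies). Multilist: two-source Chapman estimator. Unilist: Chao's
# lower-bound estimator from repeated detections of the same units.

# --- Multilist CRC: two surveillance components listing the same cases ---
n1, n2, m = 120, 80, 30          # cases in list 1, list 2, and in both
N_chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1   # estimated true case count
observed = n1 + n2 - m                           # distinct cases actually seen
print(f"Estimated cases: {N_chapman:.0f}, "
      f"under-reporting: {1 - observed / N_chapman:.1%}")

# --- Unilist CRC: one component detecting the same units repeatedly ---
f1, f2, n = 40, 15, 70           # units detected once, twice, total detected
N_chao = n + f1**2 / (2 * f2)    # Chao (1987) lower bound on the true count
print(f"Estimated units: {N_chao:.0f}, "
      f"under-reporting: {1 - n / N_chao:.1%}")
```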
Flexibility
- Qualitative assessment methods
| Method type | References | Strengths | Limits |
|---|---|---|---|
| Opinion survey | Jefferson H, et al., 2008 | Helps identify potential factors influencing flexibility | Based on individual perception; purely qualitative |
| Semi-structured interviews; inspections; descriptive analysis | Paterson et al., 2012; Riera-Montes and Velicko, 2011 | Helps identify potential factors influencing flexibility | Based on individual perception; purely qualitative |
- Semi-quantitative assessment methods
Multiple hazard
- Qualitative assessment methods
| Method type | References | Strengths | Limits |
|---|---|---|---|
| Opinion survey | Bingle et al., 2005 | | Based on individual perception; purely qualitative |
Precision
- Quantitative assessment methods
| Method type | References | Strengths | Limits |
|---|---|---|---|
| Multilist CRC | | Identification of the variables significantly associated with the under-reporting rate | Needs data produced by multiple surveillance components; the components must not be mutually exclusive |
Representativeness
- Quantitative assessment methods
| Method type | References | Strengths | Limits |
|---|---|---|---|
| Multilist CRC | Hook EB, 1995 (human health); Vergne T, 2015 (animal health) | Identification of the variables significantly associated with the under-reporting rate | Needs data produced by multiple surveillance components; the components must not be mutually exclusive |
| Unilist CRC | Del Rio Vilas VJ, Böhning D, 2008; Hook EB, 1995 (human health); Vergne T, 2015 (animal health) | Identification of the variables significantly associated with the under-reporting rate | Needs data allowing successive detections, by the surveillance system, of the epidemiological units presenting the characteristics of interest |
| Spatial evaluation (see the sketch below) | Lynn T, et al., 2007 | Identification of poorly represented geographical areas | Needs accurate data on the spatial distribution of the target population |
| Use of outputs from other surveillance components | Macarthur C, Pless IB, 1999 | Regression analysis reduces the effects of confounding variables | One other surveillance component is used as a standard reference; the two components must not be mutually exclusive |
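The spatial-evaluation idea can be sketched with a few lines of Python. The region names and counts below are illustrative, not from Lynn et al., 2007: reports received per region are compared against what would be expected if reporting were proportional to the target population.

```python
# Minimal spatial-representativeness sketch (illustrative numbers, not from
# Lynn et al., 2007): compare the regional distribution of surveillance
# reports against the regional distribution of the target population.
from scipy.stats import chisquare

population = {"North": 50_000, "Centre": 30_000, "South": 20_000}  # herds at risk
reports    = {"North": 310,    "Centre": 240,    "South": 50}      # reports received

total_reports = sum(reports.values())
total_pop = sum(population.values())
# Expected reports per region if reporting were proportional to population.
expected = [total_reports * population[r] / total_pop for r in population]
observed = [reports[r] for r in population]

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2={stat:.1f}, p={p:.3g}")   # small p: some regions are under-represented
for r, o, e in zip(population, observed, expected):
    print(f"{r}: observed {o}, expected {e:.0f}")
```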
- Semi-quantitative assessment methods
Risk-based criteria definition
- Qualitative assessment methods
| Method type | References | Strengths | Limits |
|---|---|---|---|
| EVARISK | RISKSUR research project | Provides information on the strength of the risk-based component, based on the quality of the risk-criteria definition | Does not provide specific recommendations on how to improve the risk definition as such; this information has to be retrieved from the evaluation grid |
Sensitivity
- Quantitative assessment methods
| Method type | References | Strengths | Limits |
|---|---|---|---|
| Multilist CRC | Hook EB, 1995 (human health); Vergne T, 2015 (animal health) | Quantitative estimation of the sensitivity; may also allow identification of the variables significantly associated with the under-reporting rate | Needs data produced by multiple surveillance components; the components must not be mutually exclusive |
| Unilist CRC | Del Rio Vilas VJ, Böhning D, 2008; Hook EB, 1995 (human health); Vergne T, 2015 (animal health) | Quantitative estimation of the sensitivity; may also allow identification of the variables significantly associated with the under-reporting rate | Needs data allowing successive detections, by the surveillance system, of the epidemiological units presenting the characteristics of interest |
| Stochastic modelling | Audigé L and Beckett S, 1999; Cameron AR and Baldock FC, 1998 (for integration of the Se and Sp of diagnostic tests) | Stochastic approach: accounts for probabilistic distributions | Assumes representativeness of the sample; not applicable to risk-based surveillance |
| Stochastic scenario tree modelling (see the sketch below) | Martin PAJ et al., 2007; Martin PAJ, 2008 | Stochastic approach: accounts for probabilistic distributions. Enables all available evidence about disease status to be used explicitly, transparently and quantitatively. Applicable to all components of surveillance, including risk-based surveillance designs | Relies on expert opinion; further work is required to develop acceptable approaches to eliciting expert opinion for the inputs of this type of model |
| Stochastic scenario tree modelling using matrix algebra and Bayesian belief networks | Hood GM, et al., 2009 | Like scenario tree modelling, the stochastic approach accounts for probabilistic distributions and enables all available evidence about disease status to be used explicitly, transparently and quantitatively. Applicable to risk-based surveillance. Formulation as a matrix permits automation of the analysis | Matrix formulation can make implementation tedious. Relies on expert opinion; further work is required to develop acceptable approaches to eliciting expert opinion for the inputs of this type of model |
| Ratio of the number of cases captured by active surveillance to the total number of cases captured | Lynn T, et al., 2007 | Simple method | Assumes perfect specificity. Assumes the denominator is the total number of cases, which is most likely unrealistic: there are always missed cases, so the sensitivity ratio is nearly always overestimated |
| Epidemiological approach | Siegrist et al., 2004; Verma et al., 2014; Watkins RE et al., 2006 | Relies solely on actual data; no simulation is conducted that might inadvertently introduce bias into the assessment. Allows the complexities associated with determining the occurrence of events to be considered for each potential outbreak | Uncertainty remains about the exact start, detection and end dates of outbreaks and about outbreak sizes. Epidemiological investigations can be resource-intensive, and detailed descriptions of the investigations performed and of the decision-making processes used are required to fully understand the basis of the outbreak definition applied. Variability in opinion among experts must be appropriately managed |
| Assessment of syndromic surveillance outputs using another surveillance component as a "gold standard" (derived approach) | Zhang, 2014; Watkins RE et al., 2006 | Relies solely on actual data; no simulation is conducted that might inadvertently introduce bias into the assessment | Assumes perfect sensitivity and specificity of the surveillance component used as the "gold standard"; the two components must not be mutually exclusive |
| Simulation approach | Mandl et al., 2004; Izadi M, et al., 2009; Jafarpour et al., 2015; Watkins RE et al., 2006 | Makes it possible to determine the occurrence and timing of outbreaks within the data; applicable when real surveillance data are lacking; enables quantitative, replicable evaluation of performance indicators | Simulation parameters influence the evaluation outcomes and may not reflect the system or process being modelled; the simulated outbreaks may not reflect the pattern of true outbreaks under real conditions. The usefulness of synthetic data for evaluation is therefore tied to the assumptions used to construct them, which limits the ability to generalise evaluation findings to the authentic context |
| Bayesian network model | Izadi M, et al., 2009; Jafarpour et al., 2015 | Same advantages as the other simulation-based methods. The Bayesian network makes it possible to assess the effect of a change in one algorithm parameter or one performance attribute on the level of all performance attributes | Same limitations and assumptions as the other simulation-based methods; Bayesian networks are demanding in programming skills |
| Data-driven mathematical model | Baguelin, 2013 | Allows other transmission parameters to be inferred at the same time | Demanding in terms of computing power and programming skills |
| In situ observation | Paterson et al., 2012 | Observation in situ: no recording bias | Direct observation in the field is resource- and time-consuming. Provides only a rough estimate of the rate of under-reporting of observed cases by local stakeholders; does not account for unobserved cases |
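Below is a minimal surveillance-sensitivity sketch in the spirit of scenario tree modelling, referenced in the scenario tree row above. The risk groups, relative risks, sample sizes and the Beta distribution for test sensitivity are illustrative assumptions, not the parameterisation of Martin et al.: the component sensitivity is computed as the probability of detecting at least one infected unit at a given design prevalence.

```python
# Minimal surveillance-sensitivity sketch in the spirit of scenario tree
# modelling (illustrative numbers, not Martin et al.'s parameterisation):
# unit-level detection probabilities vary by risk group, and the component
# sensitivity is the probability of detecting at least one infected unit.
import numpy as np

rng = np.random.default_rng(42)
n_iter = 10_000          # stochastic iterations (probability distributions)
design_prevalence = 0.01 # assumed prevalence to be detected

# Two risk groups: (relative risk, population share, units tested).
groups = [(3.0, 0.2, 20), (1.0, 0.8, 50)]
# Normalise so that average risk across the population equals 1.
avg_rr = sum(rr * share for rr, share, _ in groups)

se_component = np.empty(n_iter)
for i in range(n_iter):
    test_se = rng.beta(80, 20)   # uncertain test sensitivity, mean ~0.8
    p_miss = 1.0
    for rr, _, n in groups:
        # Probability a tested unit in this group is infected and detected.
        p_detect = design_prevalence * rr / avg_rr * test_se
        p_miss *= (1 - p_detect) ** n
    se_component[i] = 1 - p_miss

print(f"Median component sensitivity: {np.median(se_component):.2f} "
      f"(95% interval {np.percentile(se_component, 2.5):.2f}-"
      f"{np.percentile(se_component, 97.5):.2f})")
```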
- Semi-quantitative assessment methods
Specificity
- Quantitative assessment methods
| Method type | References | Strengths | Limits |
|---|---|---|---|
| Use of outputs from other surveillance components (see the sketch below) | Zhang, 2014; Watkins RE et al., 2006 | Relies solely on actual data; no simulation is conducted that might inadvertently introduce bias into the assessment | Assumes perfect sensitivity and specificity of the surveillance component used as the "gold standard"; the two components must not be mutually exclusive |
| Epidemiological approach | Siegrist et al., 2004; Verma et al., 2014; Watkins RE et al., 2006 | Relies solely on actual data; no simulation is conducted that might inadvertently introduce bias into the assessment. Allows the complexities associated with determining the occurrence of events to be considered for each potential outbreak | Uncertainty remains about the exact start, detection and end dates of outbreaks and about outbreak sizes. Epidemiological investigations can be resource-intensive, and detailed descriptions of the investigations performed and of the decision-making processes used are required to fully understand the basis of the outbreak definition applied. Variability in opinion among experts must be appropriately managed |
| Simulation approach | Mandl et al., 2004; Izadi M, et al., 2009; Jafarpour et al., 2015; Watkins RE et al., 2006 | Makes it possible to determine the occurrence and timing of outbreaks within the data; applicable when real surveillance data are lacking; enables quantitative, replicable evaluation of performance indicators | Simulation parameters influence the evaluation outcomes and may not reflect the system or process being modelled; the simulated outbreaks may not reflect the pattern of true outbreaks under real conditions. The usefulness of synthetic data for evaluation is therefore tied to the assumptions used to construct them, which limits the ability to generalise evaluation findings to the authentic context |
| Bayesian network model | Izadi M, et al., 2009; Jafarpour et al., 2015 | Same advantages as the other simulation-based methods. The Bayesian network makes it possible to assess the effect of a change in one algorithm parameter or one performance attribute on the level of all performance attributes | Same limitations and assumptions as the other simulation-based methods; Bayesian networks are demanding in programming skills |
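The "gold standard" comparison referenced in the first row above reduces to confusion-matrix arithmetic. The counts in this sketch are hypothetical, not from Zhang, 2014: signals from the component under evaluation are cross-tabulated against a reference component assumed to be perfectly sensitive and specific.

```python
# Minimal sketch of the "gold standard" derived approach (illustrative
# counts, not from Zhang, 2014): outbreaks flagged by the component under
# evaluation are cross-tabulated against a reference component assumed to
# be perfectly sensitive and specific.
tp, fp, fn, tn = 18, 5, 4, 320   # hypothetical weekly signals vs. reference

sensitivity = tp / (tp + fn)     # flagged among reference-positive weeks
specificity = tn / (tn + fp)     # silent among reference-negative weeks
ppv = tp / (tp + fp)             # how often a signal is a true outbreak
print(f"Se={sensitivity:.2f}, Sp={specificity:.2f}, PPV={ppv:.2f}")
```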
- Semi-quantitative assessment methods
| Method type | References | Strengths | Limits |
|---|---|---|---|
| Structured questionnaire survey (OASIS) | Hendrikx et al., 2011 | Helps identify targeted corrective actions | Scoring, not a real measure of specificity; based on pre-defined requirement criteria which may not apply to all cases |
Surveillance system organisation
| Method type | References | Strengths | Limits |
|---|---|---|---|
| SWOT (Strengths/Weaknesses/Opportunities/Threats) | | Takes into consideration internal aspects of the system as well as external factors affecting system performance | Requires very good knowledge of the system and/or involvement of the right system actors in the analysis; no standard method |
| Structured questionnaire survey (OASIS) | Hendrikx et al., 2011 | Ready-to-use questionnaire to describe the system organisation in detail; ready-to-use evaluation grid to assess the strengths and weaknesses of the system; helps identify corrective actions to target | The questionnaire should be filled in with experts of the surveillance system under evaluation. The pre-defined evaluation criteria reduce the flexibility of the tool, and some results might not fit all systems; however, the scoring can be reviewed and amended |
| SERVAL | Drewe et al., 2015 | Provides a series of questions to assess the organisation of the system, together with an evaluation framework and workplan | Should be used by experts in the system and by people with knowledge of evaluation; the tool does not provide guidance on recommendations for corrective actions |
| System mapping | | Provides a detailed description of the surveillance system's network of actors and of the actions linking the different actors together | No standard method available; should be performed by people with very good knowledge of the system. Does not provide information on strengths and weaknesses, so it should be combined with SWOT, OASIS or SERVAL |
Timeliness
- Quantitative assessment methods
| Method type | References | Strengths | Limits |
|---|---|---|---|
| Analysis of historical surveillance data (see the sketch below) | Takahashi T et al., 2004; Del Rocio Amezcua et al., 2010; Riera-Montes and Velicko, 2011 | Simple method | Long study period needed; does not take all parameters into consideration. Only an estimate of the time between detection and notification, not a complete measure of timeliness (the outbreak start date is unknown) |
| Epidemiological approach | Siegrist et al., 2004; Watkins RE et al., 2006 | Estimation of true timeliness (from the outbreak start date to the capture date). Relies solely on actual data; no simulation is conducted that might inadvertently introduce bias into the assessment. Allows the complexities associated with determining the occurrence of events to be considered for each potential outbreak | Uncertainty remains about the exact start, detection and end dates of outbreaks and about outbreak sizes. Epidemiological investigations can be resource-intensive, and detailed descriptions of the investigations performed and of the decision-making processes used are required to fully understand the basis of the outbreak definition applied. Variability in opinion among experts must be appropriately managed |
| Use of outputs from other surveillance components | Zhang, 2014; Watkins RE et al., 2006 | Relies solely on actual data; no simulation is conducted that might inadvertently introduce bias into the assessment | Assumes that the surveillance component used as the "gold standard" immediately detects the outbreak, which is most likely unrealistic; the two components must not be mutually exclusive |
| Bayesian network model | Izadi M, et al., 2009; Jafarpour et al., 2015 | Estimation of true timeliness (from the outbreak start date to the capture date). Simulating surveillance data makes it possible to determine the occurrence and timing of outbreaks within the data; applicable when real surveillance data are lacking; enables quantitative, replicable evaluation of performance indicators. The Bayesian network makes it possible to assess the effect of a change in one algorithm parameter or one performance attribute on the level of all performance attributes | Simulation parameters influence the evaluation outcomes and may not reflect the system or process being modelled; the simulated outbreaks may not reflect the pattern of true outbreaks under real conditions. Bayesian networks are demanding in programming skills |
| Data-driven mathematical model | Walker, 2010 | Estimation of true timeliness (from the outbreak start date to the capture date); allows other transmission parameters to be inferred at the same time | Demanding in terms of computing power and programming skills |
| In situ observation | Rumisha SF, et al., 2007; Paterson et al., 2012 | Observation in situ: no recording bias | Direct observation in the field is resource- and time-consuming. Only an estimate of the time between detection and notification, not a complete measure of timeliness (the outbreak start date is unknown) |
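The historical-data analysis referenced in the first row above amounts to computing reporting delays from a case register. The column names and dates in this sketch are hypothetical:

```python
# Minimal sketch of timeliness analysis from historical surveillance data
# (hypothetical column names and dates): the reporting delay is the gap
# between detection on the ground and notification to the central level.
import pandas as pd

# Hypothetical extract of a case register.
df = pd.DataFrame({
    "detection_date":    ["2015-03-02", "2015-03-10", "2015-04-01"],
    "notification_date": ["2015-03-09", "2015-03-12", "2015-04-20"],
})
df = df.apply(pd.to_datetime)

delay = (df["notification_date"] - df["detection_date"]).dt.days
print(f"Median delay: {delay.median():.0f} days "
      f"(range {delay.min()}-{delay.max()})")
# Note: this only measures detection-to-notification; the outbreak start
# date is unknown, so it is not a complete measure of timeliness.
```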
- Semi-quantitative assessment methods