We have revised the CAYT ‘level of evidence’ grades for scoring impact studies. The new guidelines are available here.
“Evidence-based” practice – that is, practice supported by sound, well-documented evidence – is a crucial element of policy development and programme implementation in the prevention field. When selecting prevention programmes for young people, policy makers, practitioners, and health and education professionals need easy access to reliable, independently validated information. It is therefore necessary to establish clearly both “what works” and what counts as good evidence, which means drafting well-defined standards for classifying levels of evidence. Intervention programmes that demonstrate meaningful effectiveness through rigorous methodology should be highlighted within the prevention field’s community of evidence-based practice.
A well-supported evidence-based intervention programme usually comprises two components: a strong magnitude of impact and a fair, rigorous methodological approach (Nation et al., 2003). In other words, there has to be a causal relationship between the implementation of the programme and the outcomes of the intervention.
Measuring and classifying the impact of an intervention under the existing scoring system is relatively objective. Classifying the rigour and type of evidence, on the other hand, requires weighing the methodological complexity and the variables of research design, which makes it more difficult to assign a fair evidence grade to a programme evaluation.
In 2015 we updated the evidence guidelines for CAYT impact studies.