[section separator="true"]
[section-item 9]
[row]
[column 12]
[toc-this]
Principles
Where the findings, results or conclusions of an [link title="evaluation" link="%2Faware%2FPA%2FPages%2FExamination%2FEvaluation.aspx" /] are relevant to the objectives of an audit, they may only be used as [link title="audit%20evidence" link="%2Faware%2FGAP%2FPages%2FAudit-evidence.aspx" /] after appropriate testing has been carried out to assess their reliability and validity.
Such testing will need to focus on:
- the robustness of the methods used for data collection and analysis;
- the coherence between the data collected, the analysis performed and the conclusions drawn; and
- the impartiality and objectivity of participants with respect to the judgemental aspects of the evaluation process.
More about [link title="evaluation%20and%20performance%20audit" link="%2Faware%2FPA%2FPages%2FExamination%2FEvaluation-and-Performance-audit.aspx" /].
Instructions
Auditing an individual evaluation
The ECA may wish to audit an individual evaluation:
- where the evaluation is highly relevant to the Parliament and the Council and is likely to be a significant input to decisions about substantial amounts of EU funds;
- as part of audit testing designed to gather evidence about the quality of a DG's evaluation system or the management of the activity evaluated; or
- as part of testing to gather audit evidence directly about the effects of projects, programmes or other measures/instruments, where the objectives of the evaluation overlap considerably with those of the audit.
Criteria for assessing the quality of an individual evaluation
- usefulness - the evaluation should address issues relevant to the intended users and the decisions it is intended to support, and be delivered on time and in an appropriate manner;
- coherence - the methods for collecting and analysing data should be appropriate for answering the questions, the human and financial resources for conducting the evaluation should be sufficient, and the management structure should be appropriate;
- robustness - the data collection and analysis should be carried out rigorously so as to accurately reflect the underlying reality;
- impartiality - the reported conclusions should be free from bias and fairly reflect evaluation findings;
- clarity - the documents making up the evaluation should be clear about the context and purpose of the evaluation, the questions addressed, assumptions made (e.g. the limits of data collected or analytical techniques used), data collected, methods used, the results obtained, the conclusions drawn and recommendations offered;
- cost effectiveness - the evaluation should be efficient and produce information of sufficient value, so that the resources expended can be justified.
The assessment
Trade-offs can exist between the qualities of an evaluation; e.g. pressure to deliver results in time for decisions (usefulness) may lead to less accurate but less time-consuming methods being employed (robustness). The qualities can also be mutually reinforcing, either positively or negatively; e.g. a lack of coherence in the design of the evaluation is likely to lead to a lack of clarity in the final report.
Judging the quality of an evaluation means assessing whether the trade-offs made were appropriate in the circumstances and did not unduly compromise other aspects of the evaluation. Having less robust but more timely results can be justified if it contributes to the usefulness of the evaluation, so long as methodological compromises are properly taken into account in arriving at conclusions (i.e. coherence is not compromised) and the limits of the analysis are clearly stated in the report (i.e. clarity is maintained). Such an evaluation is likely to be cost effective; the alternative, an evaluation producing reliable results too late, would be worthless.
Depending on the specific audit objectives, testing could focus on one or more of these qualities. Assessing the extent to which an evaluation possesses the qualities above should be done on the basis of testing related to:
- the management of the evaluation process,
- the evaluation report, and
- underlying data and analysis.
Are you interested in learning more about how to [link title="build%20up%20a%20programme%20for%20auditing%20an%20individual%20evaluation" icon="file-word-o" link="%2Faware%2FDocuments%2FEvaluation-audit-programme.docx" /]?
Auditing the evaluation system of an individual entity
Evaluation needs can vary widely between policy areas, depending on the nature of the activities carried out. Evaluation systems should therefore be adapted to the circumstances, taking due account of the costs involved.
Key qualities of an evaluation system
Effective management of the demand for evaluation is about:
- guaranteeing support for evaluations from the "top", i.e. from high-level decision makers, those responsible for allocating resources, or managers of interventions;
- creating reasonable expectations on the part of stakeholders about what evaluation can deliver - unreasonable expectations will undermine support for and use of evaluation;
- having the right mix of incentives for carrying out evaluations in terms of "sticks, carrots and sermons", e.g. requirements to evaluate (sticks), earmarked funds (carrots) or encouragement through awareness-raising exercises such as training (sermons);
- providing sufficient links with decision making processes e.g. regarding renewal decisions, setting priorities or allocating resources.
Capacity to produce sufficient quantities of good-quality evaluation efficiently requires:
- targeted training and support, e.g. for project managers, steering group participants, and programme managers;
- ensuring complementarity with monitoring and audit systems where this would be cost effective, e.g. procedures for ensuring that data collection, monitoring and evaluation arrangements are built into proposals for new interventions and that there is no overlap with auditing;
- planning evaluations so as to ensure that, wherever feasible or useful, evaluation results are available to support decision making and to maximise the chance of their being of good quality and delivered on time and within budget, e.g. decisions need to be anticipated, resources mobilised and evaluations launched promptly;
- involving stakeholders in an appropriate way so as to provide access to data, support the work of the evaluator from a methodological viewpoint, contribute to the assessment of the evaluation's quality and encourage use of the results, without unduly compromising the cost, management and timing of the report or the credibility of the results, e.g. through the use of steering groups;
- ensuring methodological quality (i.e. robustness and coherence) in the execution of the evaluation without unduly compromising its usefulness or cost, e.g. through applying quality control procedures at the inception stage;
- choosing appropriate evaluators with sufficient knowledge or experience of both evaluation and the policy area. This goes beyond simply administering public procurement procedures, as steps often have to be taken over time to grow the supply of external expertise capable of carrying out evaluations of EU policies, e.g. by publicising the need and the resources available for commissioning studies.
For evaluations to be used, policies, arrangements and/or procedures should be in place for:
- identifying users and their needs as an input to the processes for determining the focus of the evaluation and the strategy for disseminating the results;
- ensuring questions are relevant to users and in line with the overall purpose of the evaluation;
- making judgements and recommendations that are credible and thought-provoking as part of the evaluation process, thus encouraging discussion and follow-up;
- communicating findings in ways that are appropriate to intended users to maximise their impact;
- delivering evaluations in time for them to be used or, where this is not possible, ensuring that early findings are disseminated or even that the evaluation is halted;
- monitoring the use and follow-up of findings, results, conclusions and recommendations to provide feedback on what kind of output has the most impact.
Are you interested in learning more about how to [link title="build%20up%20a%20programme%20for%20auditing%20an%20evaluation%20system" icon="file-word-o" link="%2Faware%2FDocuments%2FEvaluation-system-audit-programme.docx" /]?
[/toc-this]
[/column]
[/row]
[/section-item]
[section-item 3]
[row]
[column 12]
[toc fixed="true" selectors="h2%2Ch3" class="basic-toc" /]
[/column]
[/row]
[/section-item]
[/section]