Synthesis phase (phase 3)


This section is structured as follows:

Each step of the synthesis phase is described according to the respective roles of:

  • The evaluation manager
  • The external evaluation team






Findings


The evaluation team formalises its findings, which all derive from facts, data, interpretations and analyses. Findings may include cause-and-effect statements (e.g. "partnerships, as they were managed, generated lasting effects"). Unlike conclusions, findings do not involve value judgements.

The evaluation team proceeds with a systematic review of its findings with a view to confirming them. At this stage, its attitude is one of systematic self-criticism, e.g.:

  • If statistical analyses are used, do they pass validity tests?
  • If findings arise from a case study, do other case studies contradict them?
  • If findings arise from a survey, could they be affected by a bias in the survey?
  • If findings arise from an information source, does cross-checking show contradictions with other sources?
  • Could findings be explained by external factors independent from the project / programme under evaluation?
  • Do findings contradict lessons learnt elsewhere and if so, is there a plausible explanation for that?
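The review above is essentially a checklist applied finding by finding. As a purely illustrative sketch (the function and check names below are hypothetical, not part of the guidelines), it could be modelled as:

```python
# Hypothetical sketch: each finding is reviewed against the self-criticism
# questions above before being confirmed. The structure and names are
# illustrative only, not part of the guidelines.

VALIDITY_CHECKS = [
    "statistical analyses pass validity tests",
    "not contradicted by other case studies",
    "not affected by survey bias",
    "cross-checked against other information sources",
    "not explained by external factors",
    "consistent with lessons learnt elsewhere, or divergence explained",
]

def confirm_finding(finding, check_results):
    """Confirm the finding only if every check passed; otherwise
    report which checks still raise doubts."""
    failed = [c for c in VALIDITY_CHECKS if not check_results.get(c, False)]
    return {
        "finding": finding,
        "confirmed": not failed,
        "open_doubts": failed,
    }
```

A finding with any open doubt would be sent back for further cross-checking rather than carried forward into conclusions.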





Conclusions


The evaluation team answers the evaluation questions through a series of conclusions which derive from facts and findings. In addition, some conclusions may relate to other issues which have emerged during the evaluation process.

Conclusions involve value judgements, also called reasoned assessments (e.g. "partnerships were managed in a way that improved sustainability in comparison to the previous approach"). Conclusions are justified in a transparent manner by making the following points explicit:

  • Which aspect of the project/programme is assessed?
  • Which evaluation criterion is used?
  • How is the evaluation criterion actually applied in this precise instance?

The evaluation team strives to limit the number of conclusions so as to secure their quality. It either clarifies or deletes any value judgement which is not fully grounded in facts and entirely transparent.

The evaluation team uses the evaluation criteria in a balanced way, and pays special attention to efficiency and sustainability, two evaluation criteria which tend to be overlooked in many instances.

The evaluation team synthesises its conclusions into an overall assessment of the project/programme, and writes a summary of all conclusions, which are prioritised and linked to the underlying findings and evidence. Methodological limitations are mentioned, as well as dissenting views if there are any.

The evaluation team leader verifies that the conclusions are not systematically biased towards positive or negative views. He/she also checks that criticisms can lead to constructive recommendations.



Recommendations and lessons


The evaluation team maintains a clear distinction between conclusions which do not entail action (e.g. "partnerships were managed in a way that improved sustainability in comparison to the previous approach") and other statements which derive from conclusions and which are action-oriented, i.e.

  • Lessons learnt (e.g. "the successful way of managing partnerships could be usefully considered in other countries with similar contextual conditions")
  • Recommendations (e.g. "the successful way of managing partnerships should be reinforced in the next programming cycle").

Recommendations may be presented in the form of alternative options with pros and cons. 

As far as possible, recommendations are:

  • Tested in terms of utility, feasibility and conditions of success
  • Detailed in terms of time frame and audience
  • Clustered and prioritised.

The evaluation team acknowledges clearly where changes in the desired direction are already taking place, in order to avoid misleading readers and causing unnecessary offence.



Draft report


The evaluation team writes the first version of the report which has the same size, format and contents as the final version. Depending on the intended audience, the report is written:

  • With or without technical terminology.
  • With either a summarised or a detailed presentation of the project/programme and its context.

In general, the report includes a 2 to 5-page executive summary, a 40 to 60-page main text, plus annexes.

Structure of the report

Executive Summary

  • The executive summary is a dense, self-standing document which presents the project/programme under evaluation, the purpose of the evaluation, the main information sources and methodological options, and the key conclusions, lessons learned and recommendations.

Tables of contents, figures, acronyms 

Introduction

  • Description of the project/programme and the evaluation. The reader is provided with sufficient methodological explanations to gauge the credibility of the conclusions and to acknowledge limitations or weaknesses if there are any.

Answered questions

  • A chapter presents the evaluation questions, together with evidence, reasoning and value judgements pertaining to them. Each question is given a clear and short answer.

Overall assessment

  • A chapter synthesises all answers to evaluation questions in an overall assessment of the project/programme. The evaluation team should not just follow the evaluation questions, the logical framework, or the evaluation criteria. On the contrary, it should articulate all the findings, conclusions and lessons in a way that reflects their importance and facilitates the reading.

Conclusions, lessons and recommendations

  • Conclusions and lessons are listed, clustered and prioritised in a few pages, as are recommendations.


The evaluation team leader checks that the report meets the quality criteria. The report is submitted to the person in charge of the quality control before it is handed over to the evaluation manager.

Quality certificate

The evaluation team leader attaches a quality certificate to the draft final report, indicating the extent to which:

  • Evaluation questions are answered.
  • Reliability and validity limitations are specified.
  • Conclusions apply evaluation criteria in an explicit and transparent manner.
  • The present guidelines have been used.
  • Tools and analyses have been implemented according to standards.
  • The language, layout, illustrations, etc. conform to standards.

The evaluation team leader and the evaluation manager discuss the quality of the report. Improvements are made if requested. 

In the case of a multi-country programme, the country notes are published as part of the overall evaluation exercise in annexes to the synthesis report (editing is therefore required).



Quality control


The evaluation manager receives the first version of the final report. The document should have the same format, contents and quality as the final version.
The evaluation manager assesses the quality of the report on the basis of an eight-criteria assessment grid. The assessment is double-checked by a second person.

Quality assessment criteria

The following eight criteria are derived from international evaluation standards and are compatible with them:
1. Meeting needs

  • Does the report describe precisely what is evaluated, including the intervention logic and its evolution? Does it cover the appropriate period of time, target groups and areas? Does it respond to all ToR requests?

2. Appropriate design

  • Is the evaluation design described in enough detail? Is it adapted to the project/programme? Are there well-defined and appropriate indicators? Does the report point out the limitations, risks and potential biases associated with the evaluation method?

3. Reliable data

  • Is the data collection approach clearly explained and consistent with the overall evaluation design? Are the sources of information clearly identified in the report and cross-checked? Are the data collection tools (samples, focus groups, etc.) applied in accordance with standards? Have data collection limitations and biases been explained and discussed?

4. Sound analysis

  • Is the analysis based on the collected data and focused on the most relevant cause/effect assumptions? Is the context adequately taken into account? Have stakeholders' inputs been used in a balanced way? Are the limitations identified, discussed and presented in the report?

5. Credible findings

  • Are the findings derived from the data and analyses? Are interpretations and extrapolations justified and supported by sound arguments? Is the generalisability of findings discussed?

6. Valid conclusions

  • Are the conclusions coherent, logically linked to the findings, and free of personal or partisan considerations? Do they cover the five DAC criteria?

7. Useful recommendations

  • Are recommendations consistent with the conclusions? Are they operational, realistic and sufficiently explicit to provide guidance for taking action? Are they clustered, prioritised and devised for the different stakeholders?

8. Clear report

  • Is there a relevant and concise executive summary? Is the report well structured, adapted to its various audiences, and not more technical than necessary? Is there a list of acronyms?

The quality assessment should enhance the credibility of the evaluation without undermining its independence. It therefore focuses on the way conclusions are substantiated and explained and not on their content. The quality assessment must not be handled by those who are involved in the evaluated project/programme.
The evaluation manager and the evaluation team leader discuss the quality assessment. Improvements are requested if necessary.
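The assessment grid above can be sketched as a simple data structure. Everything below is a hypothetical illustration: the criterion names come from the list above, but the 1-4 scale and the "weakest link" overall-score rule are assumptions for the sketch, not the official scoring method:

```python
# Hypothetical sketch of the eight-criterion quality assessment grid.
# The criterion names follow the guidelines; the 1-4 scale and the
# overall-score rule are illustrative assumptions.

CRITERIA = [
    "Meeting needs",
    "Appropriate design",
    "Reliable data",
    "Sound analysis",
    "Credible findings",
    "Valid conclusions",
    "Useful recommendations",
    "Clear report",
]

def assess(scores, comments):
    """Check that every criterion has a score (1-4) and a qualitative
    comment, then derive an overall score as the lowest criterion score
    (a cautious 'weakest link' rule, assumed here for illustration)."""
    for name in CRITERIA:
        if name not in scores or name not in comments:
            raise ValueError(f"Missing score or comment for: {name}")
        if scores[name] not in (1, 2, 3, 4):
            raise ValueError(f"Score for {name} must be between 1 and 4")
    return {"overall": min(scores.values()),
            "scores": scores,
            "comments": comments}
```

Requiring a comment for every criterion mirrors the rule that the manager writes qualitative comments for all criteria before deciding the overall score.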



Discussion meeting(s)


Discussion of draft report

The evaluation team presents the report in a reference group meeting. The presentation is supported by a series of slides which cover:

  • Answered questions and methodological limitations
  • Overall assessment, conclusions and lessons learnt
  • Recommendations.

Comments are collected in order to:

  • Further check the factual basis of findings and conclusions
  • Check the transparency and impartiality of the assessment
  • Check the utility and feasibility of the recommendations

Discussion seminar


The evaluation manager submits the draft report to the reference group members for consultation. If appropriate, he/she convenes and chairs a meeting where the report is presented and discussed. Special attention is paid to the utility of conclusions and the feasibility of recommendations. At this stage, the evaluation manager may decide to convene a discussion seminar with a wide range of stakeholders. The purpose would be to discuss the content of the conclusions and the utility of the recommendations in the presence of the evaluation team. Attendance may include the delegation staff, national authorities, civil society, project management, other donors and/or experts. Participants are provided with an updated draft report.


Finalising the report


The evaluation team finalises the report by taking into account all comments received. Annexes are also finalised in one of the following forms:

  • Printed attachments to the report.
  • Annexes on CDROM.

The annexes include:

  • Terms of reference.
  • List of activities specifically assessed.
  • Logical framework and comments.
  • Detailed evaluation method, including:
      • Options taken, difficulties encountered and limitations.
      • Detail of tools and analyses.
      • List of interviews.
      • List of documents used.
  • Any other text or table which contains facts used in the evaluation.

The report is printed out according to the instructions stated in the terms of reference. 

The evaluation team leader receives a final quality assessment from the manager. If necessary, he/she writes a note setting forth the reasons why certain requests for quality improvement have not been accepted. This response will remain attached to both the quality assessment and the report.


Final report


Comments are taken into account by the evaluation team in a new version of the report. The evaluation manager also receives an electronic version of the slides presented by the evaluation team.
He/she checks that the comments received have been taken into account in an appropriate way, and that the report is ready for dissemination, including the full set of annexes.

He/she runs a final quality assessment against the eight criteria of the assessment grid, writes qualitative comments for all criteria, and decides upon the overall quality score.

The evaluation manager sends the final version of the report and quality assessment to the reference group members, and thanks them for their contribution.


Check lists

See for inspiration the Check lists for Geographic, thematic and other complex evaluation.



Former Capacity4dev Member
last update
7 December 2022
