5. Results monitoring and evaluation

Gender equality results need to be monitored and evaluated at whatever level they are developed. This chapter provides overall guidance, focusing primarily on the programme level, but the guidance is also largely applicable at other levels.

Measuring gender equality change, and especially gender-transformative change, requires working within existing frameworks and indicators, while providing flexibility and adaptation to reflect the nature and timescales of gender equality results. These are unlikely to be achieved within the timeline of a typical project. As with other complex social change, changes in gender relations can often be nonlinear and unpredictable. Changes that seem positive at first may quickly erode. A hard-won victory by community members for women’s land rights, for example, can provoke a backlash against activists or an increase in gender-based violence. Monitoring and evaluation of gender equality results needs flexibility to track progress and achievements, and to capture negative impacts, resistance, reaction, holding ground and unexpected outcomes (Batliwala and Pittman, 2010[1]).

Each Development Assistance Committee (DAC) member is at a different stage in the monitoring and evaluation approaches and infrastructure it has in place, and in the resources and capacities available for monitoring and evaluating gender equality initiatives. Some DAC members are experimenting with innovative evaluation methods, including feminist evaluation.1

The rise of the results agenda – and increased emphasis on monitoring and evaluation of development efforts – has increased the capacity of DAC members to define and track gender equality outcomes and to evaluate gender equality results related to their investments. The focus on results has helped anchor gender equality in DAC member systems and build momentum for commitments to gender equality and women’s rights (OECD, 2014[2]). Building a strong body of evidence showing the achievement of gender equality results, or the lack of them, can help build the political will to focus on investments in gender equality.

Meanwhile, the challenges associated with monitoring and evaluating gender equality results – particularly transformative change related to shifting power relations and changing norms – have become more apparent, along with differing views of what counts as “evidence” of change. As tools and guidance on gender-sensitive or gender-responsive monitoring and evaluation have multiplied, DAC members have acknowledged the need for indicators and methodologies better able to capture long-term change and transformational gender equality results. Given the long-term nature of transformative change, investment in and use of ex post or impact evaluations and meta-evaluations may increase DAC members’ capacity to evaluate gender equality results (USAID, 2021[3]). This applies equally to programmes funded by official development assistance (ODA) and to other types of investments, such as blended finance.

Monitoring is “a continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing […] intervention with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds” (OECD, 2002[4]). The DAC gender equality policy marker and its scores – while not designed or intended as a monitoring tool – can be used strategically in this context as a framework for monitoring efforts. The marker score may also need to be adjusted in light of monitoring findings (Chapter 4).
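To illustrate how the marker’s scores can frame portfolio-level monitoring, the sketch below tallies the share of commitments at each score (0 = not targeted, 1 = significant objective, 2 = principal objective) by year. This is a minimal illustration in Python using pandas; the column names and figures are hypothetical assumptions, not a prescribed reporting format.

```python
# Minimal sketch (hypothetical data): using DAC gender equality policy marker
# scores to monitor the gender orientation of a project portfolio over time.
import pandas as pd

# Illustrative portfolio: one row per project, with its marker score
# (0 = not targeted, 1 = significant objective, 2 = principal objective).
portfolio = pd.DataFrame({
    "project_id": ["P1", "P2", "P3", "P4", "P5", "P6"],
    "year": [2021, 2021, 2021, 2022, 2022, 2022],
    "marker_score": [0, 1, 2, 1, 1, 2],
    "commitment_usd_m": [4.0, 2.5, 1.0, 3.0, 2.0, 1.5],
})

# Sum commitments by year and marker score, then convert to shares:
# a simple monitoring view of whether gender-focused funding
# (scores 1 and 2) is growing relative to the whole portfolio.
by_score = (
    portfolio.pivot_table(index="year", columns="marker_score",
                          values="commitment_usd_m", aggfunc="sum")
    .fillna(0.0)
)
shares = by_score.div(by_score.sum(axis=1), axis=0)
print(shares.round(2))
```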

DAC members should consider adapting performance measurement frameworks and assessment tools to account for the timelines and complex nature of gender equality results. This might include encouraging partners to report on unanticipated results, either positive or negative, without undue judgement on programme quality.

Undertaking a thorough risk assessment during the design stage to identify potential risks and mitigation strategies is required good practice, although it does not preclude unexpected negative results (Chapter 2). Results need to be defined, monitored and evaluated using frameworks that are both flexible and learning oriented, where both positive and negative results provide insights for policy or programme improvement and future design (see Box 3.3 on the Women’s Voice and Leadership programme).

Results reporting can encourage political and financial support for policies, programmes and projects and help build a solid knowledge base. It can also introduce changes in the way institutions operate, leading to improved performance and accountability.

Development partners have for some time argued for more streamlined or simplified reporting, given capacity gaps.2 Multilaterals and larger civil society organisations (CSOs) have systems for meeting members’ reporting requirements, but small local organisations find it difficult to handle the reporting burden that comes with bilateral and multilateral funding, particularly quantitative data collection. Some DAC members are encouraging organisations to use alternative methods for integrating qualitative data in their reporting, such as embedding videos, music, case studies and vignettes to accompany data on quantitative indicators.

When engaging in “beyond aid” initiatives such as blended finance, DAC members can also usefully agree specific gender equality objectives and results indicators with investors and private sector actors (see Chapter 4).

DAC members should consider options for streamlining and simplifying reporting. One example is the approach taken by some DAC members of using a narrower set of mandatory but adaptable indicators.

A few DAC members have also experimented with using a common reporting template where they fund the same organisation, instead of requiring separate reports. Other options include less frequent reporting (e.g. moving from annual to biennial results reporting) and continuing to examine how to balance learning and accountability in institutional structures.

Evaluation is “the systematic and objective assessment of an ongoing or completed project, programme or policy, its design, implementation and results. The aim is to determine the relevance and fulfilment of objectives, […] efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors” (OECD, 2002[4]).

The strategy of gender mainstreaming (see Chapter 3) should include a focus at the institutional level to address gender equality and empowerment of women and girls through internal organisational changes, such as resource allocation, strategic planning, policies, culture, human resources, staff capacity, leadership, management, accountability and performance management. These efforts also need to be evaluated.

Ethical considerations must be front and centre in evaluating gender equality efforts, particularly in assessing and selecting the approach and methods used in an evaluation. In many contexts where evaluations are undertaken, support for gender equality is limited. Data security and safeguarding, paramount in every context, are especially crucial here.

The DAC Quality Standards for Development Evaluation commit members to abide by relevant professional and ethical guidelines and codes of conduct and undertake evaluations with integrity and honesty (OECD, 2010[7]). Specifically, evaluators should note how they plan to ensure that the evaluation process does not cause any harm, respects participants’ confidentiality and ensures informed consent of all participants in the evaluation process. Issues such as who asks whom what types of questions, and what types of risks are involved in answering questions at the household, community or national level, must be taken into account, including the potential risks of digital evaluations. Some DAC members have developed ethical guidance for research, evaluation and monitoring, not necessarily specific to gender equality, which should be applied to all evaluation and research (Thorley and Henrion, 2019[8]).

Good practice includes considering the following questions in the design phase of an evaluation:

  • Have the evaluation design and data collection tools considered approaches to ensure the full participation of different groups of women and girls?

  • Do the data collection tools, and in particular, surveys, avoid perpetuating negative gender norms and model positive gender norms in the way questions are formulated?

  • Are opportunities created for women and girls to collect data themselves through participatory data collection methods, engagement in analysis of data, strategic oversight of the evaluation process and the communication of findings?

  • Will the data collection methods allow unintended results – positive and negative – to emerge on the well-being, lived experiences and status of girls and women?

  • Does the evaluation team include local evaluator(s) with strong gender and intersectional analysis skills, and is it at a minimum gender-balanced?

  • Have protocols on safety, data security and privacy issues been followed?

Once necessary evaluation data, including gender data, have been gathered, the next step is to ensure that gender analysis is applied to that data. It is important to consider ways to engage women and girls in the analysis of data. Their participation in interpreting data may bring a unique perspective important in triangulating evaluation data. In addition to participatory, inclusive approaches to data analysis, the following may be helpful:

  • integrating contextual analysis, such as gender-related social norms and power dynamics as they affect different groups of individuals

  • comparing data with existing information on women’s and girls’ rights and other social indicators at the community, country and other levels, to confirm or refute trends and patterns already identified

  • disaggregating survey data (if used) along lines of sex, age, education, geographical location, poverty, ethnicity, indigeneity, disability, and sexual orientation and gender identity, and paying attention to trends, patterns, common responses and differences, following up where possible with further qualitative methods and analysis (a minimal illustrative sketch of this disaggregation step appears after this list)

  • analysing how far the programme has addressed structural factors that contribute to inequalities experienced by women and girls, especially those experiencing multiple forms of exclusion

  • assessing the extent to which (different groups of) women and girls were included as participants in programme planning, design, implementation, decision making, and monitoring and accountability processes.
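As an illustration of the disaggregation step listed above, the following minimal sketch (in Python with pandas) disaggregates a single outcome indicator across intersecting dimensions. The column names and survey values are hypothetical assumptions; in practice the variables would come from the evaluation’s own data collection tools, and small subgroup sizes would call for cautious interpretation and qualitative follow-up.

```python
# Minimal sketch (hypothetical survey data): disaggregating a binary
# outcome indicator by sex, age group and disability status.
import pandas as pd

survey = pd.DataFrame({
    "sex": ["F", "F", "M", "F", "M", "F", "M", "F"],
    "age_group": ["15-24", "25-49", "15-24", "25-49",
                  "25-49", "15-24", "15-24", "25-49"],
    "has_disability": [False, True, False, False, True, False, False, True],
    "reports_improved_income": [1, 0, 1, 1, 1, 0, 1, 0],
})

# Group by intersecting dimensions and compare groups: the mean of a
# binary outcome is the share reporting improved income; group size (n)
# is kept for context, since small groups need careful interpretation.
disaggregated = (
    survey.groupby(["sex", "age_group", "has_disability"])
    ["reports_improved_income"]
    .agg(share="mean", n="size")
    .reset_index()
)
print(disaggregated)
```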

DAC members should design or commission evaluations that use mixed-method approaches to answer evaluation questions and include participatory data collection and data analysis techniques that allow women’s voices and perspectives to be heard.

Feminist evaluation is grounded in feminist theory and principles and can help make the link to the feminist foreign policies that some DAC members have adopted (Chapter 1). An initial impetus was the recognition of the negative consequences of a lack of attention to gender and gender inequities in conceptualising and designing evaluations and in collecting and analysing data (Frey, 2018[12]). Beyond this, there are no prescribed methods or tools for feminist evaluation, nor indeed any agreed-upon definition of it.3 Evaluators may explicitly use the term “feminist” to describe their approach or use a different term, while still applying approaches based on feminist principles.

Gender approaches and feminist approaches to evaluation differ in several ways, including the kinds of questions posed, the design of evaluation processes, and how, and by whom, data and evaluation reports are used. Feminist evaluation acknowledges from the outset the need for transformative change in gender and power relations – i.e. it is values-driven – and explores and challenges the root causes of gender inequalities. It emphasises the design of processes that are not only inclusive of diverse women and girls but engage them in ways that are empowering, for example by using participatory methods of data collection and analysis that directly include project participants, who can give voice to and make meaning of their own experiences. Crucially, feminist evaluation emphasises the position of the evaluators and encourages them to reflect on the assumptions and biases they bring to the evaluation. In other words, feminist evaluation holds that evaluations are not value-free.

Finally, feminist evaluation prioritises the use of knowledge generated in the evaluation process by those directly implicated in the evaluation. Evaluation findings should be accessible and barrier-free for all stakeholders. The most effective way to ensure this is to ask them what products will be most useful (social media, infographics, videos, briefings).

Data generated in the monitoring of gender equality results, evaluations or performance assessment processes provide important information for DAC members on progress towards gender equality. The communication and dissemination of monitoring data and evaluation findings can strengthen multiple levels of accountability on gender equality. Such data can be useful both externally, to promote gender equality objectives, and internally, to help the institution understand progress towards results and course-correct where progress is not happening as anticipated.

A focus on results monitoring and reporting of DAC members’ internal institutional gender equality efforts (e.g. Gender Action Plans) is as important as tracking results of programme or policy efforts. Evaluating institutional gender equality initiatives helps in understanding the relevance, coherence, efficiency and effectiveness of institutional gender mainstreaming (including gender policies, gender parity strategies, gender markers, financial tracking systems and gender analysis in programme and policy design), and in building an evidence base on the correlations between institutional gender equality initiatives and development results. A strong evidence base showing the relationship between internal gender equality changes and programme outcomes can also build political will for investments in gender equality initiatives.

It is good practice to integrate learning-oriented approaches in the monitoring and evaluation of gender equality.

The development of a learning agenda is increasingly being used by DAC members and development partners in their gender equality work.4 Typically, a learning agenda includes: a set of questions addressing critical knowledge gaps on gender equality identified during implementation start-up; a set of associated activities to answer them; and knowledge products aimed at disseminating findings, designed with use by multiple stakeholders in mind. A theory of change approach lends itself well to the use of a learning agenda. Learning questions can be framed to test and explore assumptions and hypotheses throughout implementation and to generate new evidence for advocacy and future programme and policy development. A learning agenda can be set at different levels, and ideally should be developed during the design phase of a strategy, project or activity. It can provide a framework for performance management planning, using regular feedback loops related to key learning questions, and can also assist in evaluation design, to prioritise evaluation questions.
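Purely as an illustration, a learning agenda’s core elements – questions, associated activities and knowledge products – can be captured in a simple structured form so they can be tracked during implementation and mapped to theory-of-change assumptions. The sketch below uses Python dataclasses; all field names and example entries are assumptions for illustration, not a prescribed template.

```python
# Minimal sketch (illustrative field names): representing a learning
# agenda's core elements -- questions, activities to answer them, and
# knowledge products for dissemination -- so they can be tracked.
from dataclasses import dataclass, field

@dataclass
class LearningQuestion:
    question: str                  # critical knowledge gap to address
    activities: list[str]          # activities planned to answer it
    knowledge_products: list[str]  # outputs for disseminating findings
    linked_assumption: str = ""    # theory-of-change assumption it tests

@dataclass
class LearningAgenda:
    level: str                     # e.g. strategy, programme or activity
    questions: list[LearningQuestion] = field(default_factory=list)

# Hypothetical example entry for a programme-level agenda.
agenda = LearningAgenda(
    level="programme",
    questions=[
        LearningQuestion(
            question=("Do women's savings groups increase women's voice "
                      "in household decision making?"),
            activities=["annual qualitative panel", "mid-term survey module"],
            knowledge_products=["learning brief", "evaluation annex"],
            linked_assumption="Economic participation shifts household power",
        )
    ],
)
print(f"{agenda.level} agenda with {len(agenda.questions)} learning question(s)")
```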

References

[1] Batliwala, S. and A. Pittman (2010), Capturing Change in Women’s Realities: A Critical Overview of Current Monitoring and Evaluation Frameworks and Approaches, Association for Women’s Rights in Development (AWID), https://www.awid.org/sites/default/files/atoms/files/capturing_change_in_womens_realities.pdf.

[11] Better Evaluation (2014), Photo Voice, https://www.betterevaluation.org/en/evaluation-options/photovoice (accessed on 27 April 2022).

[12] Frey, B. (2018), Feminist Evaluation, The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation, https://doi.org/10.4135/9781506326139.n262.

[9] Newton, J., A. van Eerdewijk and F. Wong (2019), What do participatory approaches have to offer to the measurement of empowerment of women and girls?, KIT Royal Tropical Institute, https://www.kit.nl/wp-content/uploads/2019/03/KIT-Working-Paper_final.pdf.

[10] Oakden, J. (2013), “Evaluation rubrics: how to ensure transparent and clear assessment that respects diverse lines of evidence”, Better Evaluation, https://www.betterevaluation.org/sites/default/files/Evaluation%20rubrics.pdf (accessed on 27 April 2022).

[5] OECD (2021), Applying Evaluation Criteria Thoughtfully, OECD Publishing, Paris, https://doi.org/10.1787/543e84ed-en.

[2] OECD (2014), From ambition to results: Delivering on gender equality in donor institutions, https://www.oecd.org/dac/gender-development/fromambitiontoresultsdeliveringongenderequalityindonorinstitutions.htm.

[7] OECD (2010), Quality Standards for Development Evaluation, OECD Publishing, https://www.oecd.org/development/evaluation/qualitystandards.pdf.

[4] OECD (2002), Glossary of Key Terms in Evaluation and Results Based Management, https://www.oecd.org/dac/evaluation/2754804.pdf.

[6] OECD (n.d.), Evaluation Criteria, https://www.oecd.org/dac/evaluation/daccriteriaforevaluatingdevelopmentassistance.htm (accessed on 22 April 2022).

[8] Thorley, L. and E. Henrion (2019), DFID ethical guidance for research, evaluation and monitoring activities, DfID, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/838106/DFID-Ethics-Guidance-Oct2019.pdf.

[3] USAID (2021), Discussion Note: Ex-Post Evaluations, https://usaidlearninglab.org/sites/default/files/resource/files/dn-ex-post_evaluation_final2021.pdf.

Further reading

For insight and guidance on the value of participatory approaches, see the KIT Royal Tropical Institute working paper “What do participatory approaches have to offer the measurement of empowerment of women and girls”: https://www.kit.nl/wp-content/uploads/2019/03/KIT-Working-Paper_final.pdf.

For more information on monitoring whether projects and programmes are having their intended effect, and to make changes if they are not, see the Research and practice note “Changing Gender Norms: Monitoring and Evaluating Programmes and Projects”: https://odi.org/en/publications/changing-gender-norms-monitoring-and-evaluating-programmes-and-projects/.

For examples of development, monitoring and evaluation of gender equality results at the country and sector level and the programme and project level, see the Asian Development Bank and Australian Aid’s “Tool Kit on Gender Equality Results and Indicators”: https://www.oecd.org/derec/adb/tool-kit-gender-equality-results-indicators.pdf.

For guidelines to help those who work on results-based monitoring (RBM), see the “Guidelines on designing a gender-sensitive results-based monitoring (RBM) system” from the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ): https://www.oecd.org/dac/gender-development/GIZ-guidelines-gender-sensitive-monitoring.pdf.

For examples of good practice where results from gender-integrated and targeted gender equality interventions are presented in compelling results reports that mix quantitative data with case studies, vignettes and data analysis, see UNICEF’s “Gender Equality: Global Annual Results Report 2020”: https://www.unicef.org/media/102281/file/Global-annual-results-report-2020-gender-equality.pdf or UNICEF’s “Health Results 2020: Maternal, Newborn and Adolescent Health” report: https://www.unicef.org/media/102666/file/Health-Results-2020-Maternal-Newborn-Adolescent-Health.pdf.

For examples of gender equality evaluations, including evaluations of Gender Action Plans from DAC members and other development partners, see the UN Women evaluation portal: https://genderevaluation.unwomen.org/en/region/global?region=8c6edcca895649ef82dfce0b698ebf60&orgtype=c580545e97254263adfcaf86c894e45b.

For guidance on how to integrate an equity-focused and gender-responsive approach to national evaluation systems, see “Evaluating the Sustainable Development Goals: With a ’No One Left Behind’ lens through equity-focused and gender-responsive evaluations”: https://www2.unwomen.org/-/media/field%20office%20americas/imagenes/publicaciones/2017/06/eval-sdgs-web.pdf?la=en&vs=4007.

For guidance on how to integrate a gender lens in UNICEF evaluations, or evaluations more generally, see the “UNICEF Guidance on Gender Integration in Evaluation”: https://www.unicef.org/evaluation/documents/unicef-guidance-gender-integration-evaluation.

For a resource for development practitioners and evaluators who are seeking explanations and recommendations on how to include a focus on gender impact in commissioning or conducting evaluations, see the Methods Lab resource, “Addressing Gender in Impact Evaluation: What should be considered?”: https://internationalwim.org/wp-content/uploads/2020/10/Addressing-Gender-in-Impact-Evaluation-.pdf.

The United Nations Evaluation Group (UNEG) provides an evaluative framework for evaluations on institutional gender mainstreaming that could be adapted by DAC members in the practical guide “Guidance on Evaluating Institutional Gender Mainstreaming”: http://www.uneval.org/document/detail/2133.

In “The ‘Most Significant Change’ Technique – A Guide to Its Use”, Better Evaluation offers a practical tool for anyone seeking to use Most Significant Change (MSC): https://www.betterevaluation.org/resources/guides/most_significant_change.

For an accessible introduction to Most Significant Change, see: https://www.betterevaluation.org/en/plan/approach/most_significant_change.

For an example of survey design on women’s empowerment, see the Abdul Latif Jameel Poverty Action Lab’s “A Practical Guide to Measuring Women’s and Girls’ Empowerment in Impact Evaluations”: https://www.povertyactionlab.org/sites/default/files/research-resources/practical-guide-to-measuring-women-and-girls-empowerment-appendix1.pdf.

For information on Outcome Mapping (OM) and how it can be used to unpack an initiative’s theory of change and serve as a framework to collect data on immediate, basic changes, see Better Evaluation’s resource: https://www.betterevaluation.org/en/plan/approach/outcome_mapping.

For ethical guidance on data collection, see the World Health Organization’s “Putting Women First: Ethical and Safety Recommendations for Research on Domestic Violence Against Women” resource: https://www.who.int/gender/violence/womenfirtseng.pdf.

See also the subsequent report: https://www.who.int/reproductivehealth/publications/violence/intervention-research-vaw/en/.

For an accessible introduction to the basic concepts that underpin feminist evaluation, see Better Evaluation’s resource “Feminist evaluation”: https://www.betterevaluation.org/en/themes/feminist_evaluation.

For an overview and description of feminist evaluation and gender approaches, and of their differences, see the research paper, originally published in the Journal of Multidisciplinary Evaluation, “Feminist Evaluation and Gender Approaches: There’s a Difference?”: https://www.betterevaluation.org/en/resources/discussion_paper/feminist_eval_gender_approaches.

For an exploration of how quantitative impact evaluations and other technical choices and ethical considerations are changed by bringing a feminist intent to research into monitoring and evaluation processes, see Oxfam GB’s discussion paper, “Centring Gender and Power in Evaluation and Research: Sharing experiences from Oxfam GB’s quantitative impact evaluations”: https://policy-practice.oxfam.org/resources/centring-gender-and-power-in-evaluation-and-research-sharing-experiences-from-o-621204/.

Feminist evaluation can be used alongside, or combined with, other monitoring, evaluation and learning systems for programmes, to help make sense of how social change occurs. For more information, see “Merging Developmental and Feminist Evaluation to Monitor and Evaluate Transformative Social Change”: https://journals.sagepub.com/doi/full/10.1177/1098214015578731.

For examples of concrete steps that can be taken on data security and safeguarding evaluation participants, see “ActionAid’s feminist research guidelines”: https://actionaid.org/publications/2020/feminist-research-guidelines.

Notes

1. Thirteen DAC members included gender equality in monitoring and evaluation frameworks for programming, and five members used additional annual quality checks. A few DAC members noted that they produce evaluations of gender equality as a cross-cutting issue, while others produced evaluation reports of their Gender Action Plans or other gender-specific programmes.

2. Thirteen DAC members identified the inclusion of results from gender equality programmes and initiatives within regularly scheduled reports to be an important component of their systems for monitoring and evaluation. Of these members, some used report writing at varying stages of the intervention as a system for the monitoring and evaluation of gender equality programmes (quarterly, annually, mid-term, or at the end of the programme).

3. The DAC Network on Development Evaluation is developing a Glossary of evaluation terms.

4. Nine DAC members incorporated a learning agenda devoted to improving their work on gender equality and the empowerment of women and girls within their development co-operation systems and processes (including monitoring and evaluation). The way these learning agendas are put into effect is extremely varied. Five members incorporated learning agendas in their programming, with dedicated work streams for knowledge management, or broader institutional learning systems. Examples of these agendas ranged from a gender unit being responsible for institutional learning and knowledge management, to help desks that give out rapid advice from external experts. Four members used systematic learning activities such as comprehensive and multi-year reports on the evolution and progress of approaches used to advance gender equality in development co-operation, including key lessons learned and recommendations for moving forward. Two members included their processes for evaluation and programme improvement as a component of their learning agenda, with learning questions integrated into evaluation questions when appropriate. One member noted that its learning agenda is carried out by a designated implementation team. Twelve DAC members indicated that they do not have learning agendas.
