EXECUTIVE OFFICE OF THE PRESIDENT
OFFICE OF MANAGEMENT AND BUDGET
WASHINGTON, D.C. 20503

THE DIRECTOR

January 11, 2018

M-18-04
MEMORANDUM FOR HEADS OF FEDERAL DEPARTMENTS AND AGENCIES

FROM: Mick Mulvaney, Director
SUBJECT: Monitoring and Evaluation Guidelines for Federal Departments and Agencies that
Administer United States Foreign Assistance
As required by sections 3(b) and 3(d) of the Foreign Aid Transparency and Accountability Act of 2016 (the "Act"), this memorandum provides the first cross-agency monitoring and evaluation guidelines for Federal departments and agencies that administer foreign assistance.
The goal of these guidelines is to set forth key monitoring and evaluation principles to guide each agency and to provide specific direction on content for agencies to include in their own policies on monitoring and evaluation of foreign assistance. OMB has ensured that the guidelines are robust, comprehensive, and coordinated with Federal departments and agencies. The guidelines reflect current thinking on monitoring and evaluation of U.S. foreign assistance and may be updated in the future.
In addition to the assistance covered by the Act, OMB has determined that for the purposes of implementation and to improve foreign assistance effectiveness, all foreign assistance programs already covered by the Guidance on Collection of U.S. Foreign Assistance Data (OMB Bulletin 12-01) are included in these guidelines as well. OMB will require agencies to report annually through the Budget submission process on implementation of monitoring and evaluation policies and practices related to these guidelines.
Attachment: Monitoring and Evaluation Guidelines for Federal Departments and Agencies that
Administer United States Foreign Assistance.
Monitoring and Evaluation Guidelines for Federal Departments and Agencies
that Administer United States Foreign Assistance
These guidelines are required to be set forth under Sec. 3(b) of the Foreign Aid Transparency and Accountability Act of 2016 (FATAA) and cover the objectives related to monitoring and evaluation as defined in Sec. 3(c)(2)(A)-(M).
Scope and Purpose
Section 3 of the FATAA requires the establishment of guidelines on monitoring, evaluation, and
reporting on the performance of United States foreign assistance and its contribution to the policies,
strategies, projects, program goals, and priorities undertaken by the Federal Government. The Act also
supports and promotes innovative programs to improve effectiveness and seeks to coordinate the
monitoring and evaluation processes of Federal agencies that administer covered U.S. foreign
assistance. Finally, the Act calls for the President to set forth guidelines according to best practices of
monitoring and evaluation.
These guidelines provide direction to Federal departments and agencies that administer United States
foreign assistance on monitoring the use of resources, evaluating the outcomes and impacts of United
States foreign assistance projects and programs, and applying the findings and conclusions of such
evaluations to proposed project and program design. In addition to the assistance covered by the Act,
the Administration has determined that for purposes of implementation of the Act and to improve
foreign assistance effectiveness across the government, all foreign assistance programs already covered
by the Guidance on Collection of U.S. Foreign Assistance Data in the form of OMB Bulletin 12-01 are included in these guidelines as well. Agencies are encouraged to expand these guidelines to all current
or future accounts and programs that cover foreign assistance.
The goal of these guidelines is to set forth key principles to guide each agency and to specify
requirements, where appropriate, that agencies must cover in their own policies on monitoring and
evaluation of foreign assistance.
Note on program or project planning: The foundation for useful monitoring and evaluation is a well-
documented program or project plan against which progress and results can be assessed. While these
guidelines do not address planning requirements, it is recommended that agencies have guidance on
sound planning that addresses the following:
a) Align programs or projects with higher-level strategies or objectives;
b) Consider contextual or programmatic factors that could affect program or project design,
implementation, or intended results;
c) Ensure programs or projects have clear goals and objectives; and
d) Use logic models to document expected program or project logic and theory of change. Such
models should clearly define the expected inputs, activities, outputs, intermediate outcomes,
and end outcomes.
Monitoring and evaluation activities can then assess the extent to which programs are progressing as
designed, and if changes to the program logic or program itself are necessary.
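To make the logic-model guidance above concrete, the following is a minimal sketch of how such a model might be documented as a structured record. The field names and the example program are hypothetical illustrations, not a prescribed format.

```python
# A minimal sketch of a logic model record (hypothetical fields and values).
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Documents the expected causal chain for a program or project."""
    inputs: list[str]                 # resources invested (funds, staff, commodities)
    activities: list[str]             # what the program does with those inputs
    outputs: list[str]                # direct products of the activities
    intermediate_outcomes: list[str]  # near-term changes the outputs should produce
    end_outcomes: list[str]           # long-term results the program aims to achieve
    assumptions: list[str] = field(default_factory=list)  # conditions each step needs

example = LogicModel(
    inputs=["$2M grant", "3 technical advisors"],
    activities=["Train agricultural extension workers"],
    outputs=["200 workers trained and certified"],
    intermediate_outcomes=["Improved practices adopted by smallholder farmers"],
    end_outcomes=["Increased crop yields and household incomes"],
    assumptions=["Trained workers remain in their posts through the program period"],
)
```

Monitoring can then track indicators at each stage of such a record, and evaluation can test the links between stages.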
Definitions
Agencies should be guided by these definitions. Except for “evaluation,” which is defined by FATAA, the
other definitions can be adjusted in agency policies as needed to be relevant to an individual agency’s
operating environment.
Evaluation is the systematic collection and analysis of information about the characteristics and
outcomes of the program, including projects conducted under such program, as a basis for making
judgments and evaluations regarding the program; improving program effectiveness; and informing
decisions about current and future programming.
Impact Evaluations measure the change in an outcome that is attributable to a defined intervention,
are based on models of cause and effect, and require a credible counterfactual to control for factors
other than the intervention that might account for the observed change.
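The counterfactual requirement can be illustrated with a deliberately simplified calculation: compare outcomes for units that received an intervention against a comparison group standing in for what would have happened without it. The numbers below are invented for illustration; a credible impact evaluation rests on a rigorous design (for example, randomization or difference-in-differences) and proper statistical inference, not a bare difference in means.

```python
# Simplified sketch of counterfactual logic (illustrative data only).
from statistics import mean

# Post-intervention outcome scores (hypothetical income index).
treated    = [54, 61, 58, 63, 57, 60]  # units that received the intervention
comparison = [51, 52, 49, 53, 50, 52]  # credible counterfactual group

impact_estimate = mean(treated) - mean(comparison)
print(f"Estimated impact: {impact_estimate:.1f} index points")
# The comparison group controls for factors other than the intervention;
# the design that created it is what makes the attribution credible.
```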
Monitoring is the ongoing and systematic tracking of data and information relevant to policies,
strategies, programs, projects and/or activities and used to determine whether desired results are
occurring as expected during program, project, or activity implementation. Monitoring often relies on
indicators, quantifiable measures of a characteristic or condition of people, institutions, systems or
processes, that may change over time.
A Pilot Program or Pilot Intervention is any new, untested approach that is implemented to learn about
its potential feasibility and efficacy/effectiveness because it is anticipated to be replicated or expanded
in scale or scope through U.S. Government foreign assistance or other funding sources.
Program refers to a set of projects or activities that support a higher level objective or goal. At some
agencies, an activity carries out an intervention or set of interventions through a contract, grant, or
agreement, and a project is a set of complementary activities, over an established timeline and budget,
intended to achieve a discrete result.
Principles
Monitoring and evaluation of U.S. foreign assistance and agency policies or guidance covering
monitoring and evaluation should align with the principles below and include expectations for meeting
them. One or more of these principles may at times compete with others, and any resulting trade-off
decisions must be judiciously balanced.
Designed and Timed for Use: Monitoring and evaluation information is used to generate
evidence that informs decisions, including those related to project and program design and
prioritization.
Use the Best Methods Available: Use monitoring and evaluation methods that are the most
rigorous, feasible, and appropriate to address the questions, and that generate the highest
quality and most credible evidence, subject to the Practical and Efficient principle.
Practical and Efficient: Determine what and how to monitor and evaluate based on information
needs, balanced with costs (taking into consideration time, budget, and other constraints), and
ensure only the most relevant indicators or questions are tracked or evaluated.
Planned Early: Plan for monitoring and evaluation early, while developing policies, strategies,
projects, program goals, and priorities, recognizing that flexibility and adaptation may be
necessary.
Sufficiently Resourced: Provide adequate resources for monitoring and evaluation, including financial and human resources.
Conducted Ethically: Monitoring and evaluation should be conducted in an ethical manner to
appropriately balance the desired creation of evidence with the protection of human subjects,
including safeguarding the dignity, rights, safety, and privacy of participants.
Shared Transparently: Be transparent and share information widely and in a timely manner, and
report candidly about the use of resources and the outcomes and impacts of projects and
programs.
Evaluation and monitoring also have principles specific to each discipline.
Monitoring involves collecting data and information that indicate what is happening and help determine
if implementation is on track or if any timely corrections or adjustments may be needed to improve
efficiency or effectiveness. Monitoring data can also inform decisions on when an evaluation is needed
to understand how or why certain results are being observed, as well as provide useful input into
planning or conducting an evaluation.
Monitoring should be:
Objective with unambiguous and unidimensional indicators;
Based on data and information that meet specific data quality standards (a simple automated check is sketched after this list). These can include:
o Validity: the data clearly and adequately represent the intended result
o Integrity: safeguards are in place to minimize risk of data manipulation or error
o Precision: data have a sufficient level of detail for decision making
o Reliability: data collection processes and analyses are consistent over time
o Timeliness: data are current and available at a useful frequency for decision making;
Logically linked to program efforts and measure changes plausibly caused by the program; and
Useful to inform course corrections during implementation.
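As a simple illustration of how two of the data quality standards above (timeliness and integrity) might be checked automatically, consider the sketch below. The function names, thresholds, and dates are hypothetical, not part of these guidelines.

```python
# Hypothetical automated checks against two data quality standards.
from datetime import date

def check_timeliness(report_dates: list[date], max_gap_days: int = 120) -> bool:
    """Timeliness: data arrive at a useful frequency for decision making
    (here, roughly quarterly, with some slack)."""
    ordered = sorted(report_dates)
    return all((later - earlier).days <= max_gap_days
               for earlier, later in zip(ordered, ordered[1:]))

def check_integrity(values: list[float], lower: float, upper: float) -> list[float]:
    """Integrity: flag values outside a plausible range, a simple guard
    against data-entry error or manipulation."""
    return [v for v in values if not (lower <= v <= upper)]

reports = [date(2017, 1, 15), date(2017, 4, 12), date(2017, 7, 20)]
print(check_timeliness(reports))               # True: quarterly cadence held
print(check_integrity([120, 87, -4], 0, 500))  # [-4]: flagged for review
```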
Evaluation often takes monitoring data as a starting point, and supports deeper understanding into why
and how results are or are not being achieved.
Evaluation should be:
Impartial and independent from policy making;
Unbiased in measurement and reporting;
Useful and relevant to important questions and decisions;
Participatory, to the extent possible, involving relevant stakeholders, including beneficiaries;
Shared widely, publicly, and transparently, with results communicated in a useful and actionable
manner;
Credible, based on the inclusion of the aforementioned principles; and
Collaborative among donors and recipients.
An external evaluation is one conducted by third-party experts external to the agency who have no
undisclosed conflict of interest or stake in the agency or bureau commissioning the evaluation. A self-
evaluation, or internal evaluation, is one conducted within an institution, government, agency, or among
collaborating institutions implementing programs or projects, and can be a useful, complementary
approach to assess progress toward goals and reasons for success or failure to meet goals. In all evaluations, whether external, internal, or a blended approach, all other principles apply; evaluator qualifications should be appropriate to the evaluation; and any conflicts of interest should be disclosed.
The evaluators should be selected to avoid, and be protected from, any undue pressure or influence
that would affect the independence of the evaluation or objectivity of the evaluator.
Guideline Requirements
1. All federal departments and agencies that administer covered United States foreign assistance must establish specific policies and procedures for monitoring and evaluation of covered foreign assistance no later than one year after these guidelines are published.
2. Other than the term “evaluation,” which is specifically defined in FATAA, agency policies should
define key terms within the agency context as necessary, such as “program,” “project,” and
“activity,” and be clear about how monitoring and evaluation requirements apply to each level.
3. Policies must include mechanisms and requirements for applying the findings and conclusions of
monitoring and evaluation information to proposed projects and programs and, where appropriate,
to ongoing projects and programs.
4. Policies must address funding transfers between or among U.S. government agencies and ensure
accountability for monitoring and evaluation, including in cases where one agency leads or
coordinates an overall program, but multiple agencies implement activities under that program.
Monitoring and evaluation roles and responsibilities should be considered and documented when
funds are transferred, and policies may also address funding transfers to third party institutions or
funds. Policies should define roles and responsibilities for monitoring and evaluation when other
agencies will be implementing activities that support a multi-agency project for a lead agency;
ensure monitoring and evaluation responsibilities are clearly defined in interagency agreements on
covered foreign assistance; and ensure that lead agencies share necessary assessments, past
evaluations, and other information with supporting agencies to assist with the supporting agency’s
monitoring and evaluation.
5. These policies must include requirements to ensure agencies’ monitoring and evaluation of U.S. foreign assistance support the objectives listed in FATAA Sec. 3(c)(2) and, associated with these, also address the additional guidelines provided below:
Establish annual monitoring and evaluation objectives and timetables [Sec.3(c)(2)(A)]: Agencies may do
this centrally, or when decentralized, agency policies should require that operating units annually
document their monitoring and evaluation objectives and timetables, as well as other key aspects of
managing monitoring and evaluation and using the resulting information for learning.
Agencies should plan to use monitoring data and evaluation findings for making decisions about
policies, strategies, program priorities, and delivery of services, as well as for planning and budget
formulation processes. Evaluation findings may be used by agency staff to course-correct a project or
program. When evaluators themselves provide course-correction recommendations, the responsible
agency should explicitly consider how to efficiently balance the potentially competing values of the
course corrections and the evaluator’s independence.
Develop specific project monitoring and evaluation plans [Sec.3(c)(2)(B)]: Monitoring and evaluation plans should be developed as part of program, project, and activity design, and should include
measurable goals. Policies should require establishing and documenting a baseline data collection
methodology and a plan for regular monitoring of all programs and projects. Monitoring plans should
document all of the indicators, including baselines and milestones or targets for each indicator. They
should also include data collection methodology and frequency for each indicator, which should be a
time interval that is feasible and necessary to effectively manage and monitor progress and results,
conduct internal learning, and meet external reporting or communication requirements.
Policies should require that monitoring plans be updated or adjusted as necessary to reflect new
or better information that becomes available as learning occurs (e.g., additional indicators if new data
sources become available).
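As one illustration of the plan contents described above, a single indicator entry might be documented as follows. The field names and values are hypothetical, not a required schema.

```python
# One indicator entry in a monitoring plan (illustrative only).
indicator = {
    "name": "Number of extension workers trained",
    "baseline": {"value": 0, "as_of": "2017-10-01"},
    "milestones": [                    # targets over the period of performance
        {"due": "2018-03-31", "target": 80},
        {"due": "2018-09-30", "target": 200},
    ],
    "data_source": "Implementing partner training and certification records",
    "collection_method": "Roll-up of partner quarterly reports",
    "frequency": "quarterly",          # feasible and sufficient for management
}
```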
Policies should require that the responsible organizational units establish evaluation plans and
provide guidance about what the plans should include and when and how to submit them. Evaluation
costs should be planned and accounted for as part of the overall program budget.
Apply rigorous monitoring and evaluation methodologies to such programs [Sec.3(c)(2)(C)]: Guidance
should require that evaluations be “evidence based,” meaning they should be based on verifiable data
and information that have been gathered using the monitoring and evaluation principles established in
these guidelines. Evaluation design and data collection methodologies should be appropriate to answer
the key questions posed by the evaluation, including both qualitative and quantitative data. The timing
of evaluation data collection should be driven by the relevant program logic. Considerations for selecting
a methodological approach include the information needs of management, timeline, availability of data,
and resources. Evaluations should include an assessment and disclosure of assumptions and limitations.
Guidance on monitoring methodologies should include the use of logic models and definition of
the program inputs, activities, outputs, intermediate outcomes, and end or long-term outcomes. Logic
models set the foundation against which progress can be monitored and evaluated. Logic model
documentation should include the assumptions upon which the model is based, i.e., the conditions that
need to exist in order for one step in the logic model to succeed, and lead to the next step.
Documentation may also include a theory of change, if applicable, which explains why it is believed that
the stated program activities will lead to the desired outcomes. Logic models should be appropriate for
the type of program, context, existing evidence for the theory of change, and implementation
modalities.
Disseminate guidelines for the development and implementation of monitoring and evaluation
programs to all personnel [Sec.3(c)(2)(D)]: Guidelines should be disseminated to all personnel, including
those in the field, and should include:
a) Roles and responsibilities for monitoring and evaluation, and for ensuring monitoring and
evaluation are informed by and/or inform program design;
b) Requirements for when and how to monitor and evaluate programs, including timing and
frequency;
c) Statement of the expected use of monitoring and evaluation, including processes for the use of
findings for policy and program improvement;
d) Public and internal dissemination of evaluation reports and results; and
e) How the agency will ensure the collection, dissemination, and preservation of knowledge and
lessons learned.
Establish methodologies for the collection of data, including baseline data [Sec.3(c)(2)(E)]: Policies should cover the standards for data collection, including:
a) Establish expectations for developing performance indicators to monitor progress and results
for all programs;
b) Establish expectations for fully defining appropriate use of the indicator, such as its scope,
acceptable data sources, or other terms of use;
c) Establish expectations for identifying or collecting baseline data, as appropriate and feasible, at
the start of a program to provide a basis for planning or assessing subsequent progress;
d) Establish expectations for collecting subsequent results and at what frequency;
e) Ensure targets are set for each performance indicator to indicate the expected change over the
course of each period of performance; and
f) Outline expected procedures for reporting on and using monitoring data, which could include
reviewing and analyzing progress and results, adaptive management, internal learning, meeting
external reporting or communication requirements, and any relevant reporting or sharing of
data to agency stakeholders.
Evaluate, at least once in their lifetime, all programs whose dollar value equals or exceeds the median
program size for the relevant office or bureau or an equivalent calculation [Sec.3(c)(2)(F)]: A key
consideration in selecting a program for evaluation should be the information needs of the agency or
office managing the program to inform future decisions. At a minimum, agencies that directly manage
foreign assistance program funds should direct their responsible organizational units to evaluate, at
least once in their lifetimes, all programs whose dollar value equals or exceeds the median program size
for the relevant bureau or office, or an equivalent calculation, such that the majority of program
resources are evaluated. This determination should reflect the Practical and Efficient principle, taking
into account the scope of their portfolio, size of their budget, anticipated needs of management, and
appropriate programmatic level at which to evaluate. Evaluating a subset or component of a program
may be acceptable provided the evaluation is sufficient to address key uncertainties and critical
questions related to the program’s intended outcomes.
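A small worked example may help clarify the median threshold; the portfolio figures below are invented for illustration.

```python
# Applying the Sec. 3(c)(2)(F) selection rule to a hypothetical portfolio.
from statistics import median

programs = {"Program A": 2.0, "Program B": 5.5,   # sizes in $ millions
            "Program C": 9.0, "Program D": 14.5}

threshold = median(programs.values())  # 7.25 for this portfolio
selected = {name: size for name, size in programs.items() if size >= threshold}

share = sum(selected.values()) / sum(programs.values())
print(selected)                                   # Programs C and D
print(f"{share:.0%} of program dollars covered")  # 76%: a majority, as intended
```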
Conduct impact evaluations on all pilot programs before replicating, or conduct performance
evaluations and provide a justification for not conducting an impact evaluation [Sec.3(c)(2)(G)]: Agency
policies should include the expectation that pilot programs or interventions (defined above) should be
evaluated for impact before being replicated or expanded. Pilot interventions should be identified
during project or activity design, and the impact evaluation should be integrated into the design of the
project or activity. An impact evaluation (defined above) requires a specialized design, and must be
carried out by evaluators with the expertise and knowledge to properly implement such a design and
analyze the resulting data. Its timing must also be coordinated with the implementation of the
intervention and so must be planned accordingly. If an impact evaluation is deemed to be impracticable
or inappropriate for a particular pilot program or intervention, a performance evaluation must be
conducted with a justification of the methodological choice.
Develop a clearinghouse capacity for the collection, dissemination, and preservation of knowledge and
lessons learned [Sec.3(c)(2)(H)]: Agencies should make information on program plans, monitoring data,
and evaluation findings available to the public, other foreign assistance agencies, implementing
partners, the donor community and aid recipient governments. Agencies may develop a new website or
house this information on an existing one in a way that is easily accessible to the public. Evaluation
reports must be included on each agency’s clearinghouse website, except those exempted under clearly specified criteria in agency policies under the guidelines to “Publicly report each evaluation.” Other documents published may include:
a) Strategies that guide foreign assistance;
b) Planning information on how programs are developed;
c) Monitoring information and reports;
d) Tools and resources used to manage programs;
e) Summaries of lessons learned;
f) Budget information; and
g) Links to related data required by OMB Bulletin 12-01, Guidance on Collection of U.S. Foreign
Assistance Data, to be reported to FA.gov or other relevant websites.
Internally distribute evaluation reports [Sec.3(c)(2)(I)]: Evaluation reports, program summaries, and
other relevant documents should be made available internally for learning and analysis. At minimum,
the clearinghouse described above should be easily accessible by internal staff, and agencies are
encouraged to use additional strategies for distributing evaluation reports and related information.
These strategies may include a range of options, such as using newsletters or listservs, distributing
abstracts or summaries of recently completed evaluations, videos, blogs, podcasts, and other events,
according to the resources and context of the agency.
Publicly report each evaluation [Sec.3(c)(2)(J)]: Evaluation reports should be clear, concise, and
empirically grounded. They should include an executive summary, a succinct description of the program,
evaluation purpose and questions, evaluation design and data collection methods and their limitations,
key findings, and conclusions or recommendations.
For transparency and accountability, final evaluation reports should be made available to the
public within 90 days of completion of the evaluation as defined by the agency. Agencies may have
additional requirements for completion, such as required internal and stakeholder reviews, and must
establish guidelines that clearly delineate these requirements and processes. To the extent possible,
findings should be made available to communities involved in the program implementation or related
evaluation efforts in an appropriate format. If the evaluations are classified, sensitive, law enforcement
sensitive, or commercially sensitive, agencies should have policies in place spelling out an exemption for
public disclosure. Summaries of results from classified or sensitive evaluations, including a description of
the methodology, key findings and recommendations, may be made available instead.
Undertake collaborative partnerships, as appropriate [Sec.3(c)(2)(K)]: Agencies should undertake
collaborative partnerships or otherwise coordinate with other agencies, operating units, academic
institutions, implementing partners, or international or national institutions and organizations to
conduct monitoring and evaluation of programs, projects, or interventions when such partnerships can
be expected to provide needed expertise or significantly improve the evaluation and analysis. These
partnerships or collaborative arrangements may provide needed expertise to significantly improve
monitoring, evaluation, and analysis, and may or may not involve the transfer of funds. In such cases
where the transfer of funds is involved, agencies should:
a) Determine roles and responsibilities for monitoring and evaluation as part of the agreement
accompanying the provision of funds, and
b) Ensure the responsible organization carries out evaluations of programs consistent with the
agency’s policy and disseminates a final evaluation report.
Ensure verifiable, reliable, and timely data are available to monitoring and evaluation personnel
[Sec.3(c)(2)(L)]: Monitoring and evaluation should employ methods appropriate to context and
population to ensure that verifiable, reliable, and timely quantitative and qualitative information is
collected, included, and considered, with appropriate provisions for the protection of human subjects in
the collection and use of this information.
Agency policies should encourage engagement of beneficiaries, partner country governmental
or non-governmental stakeholders, and implementing partners in monitoring and evaluation processes
where feasible. Agency policies should encourage alignment of monitoring and evaluation efforts with
those of partner countries and other donors wherever feasible in order to promote aid effectiveness.
Agency policies should ensure that agreements with third party partners (including, for example,
evaluators, implementing partners, host country partners, and other stakeholders) include a
requirement that activity, project, and/or program data be made available to agency personnel as well as
relevant country stakeholders, while adhering to the principle of ethical conduct of monitoring and
evaluation.
Evaluations should include an assessment and disclosure of assumptions and limitations.
Ensure that standards of professional evaluation organizations for monitoring and evaluation efforts are
employed [Sec.3(c)(2)(M)]: Agency policies should incorporate relevant standards developed by
professional organizations for monitoring and evaluation to ensure appropriate independence of
evaluations, guide the selection of monitoring and evaluation methodologies, permit the exercise of
professional judgment, and provide for quality control in the monitoring and evaluation process.
Professional standards are intended to improve the quality of evaluation processes and
products and to facilitate collaboration. For example, the American Evaluation Association publishes
standards and guidelines on evaluation (see American Evaluation Association’s An Evaluation Roadmap
for More Effective Government). The Organization for Economic Cooperation and Development (OECD)
also has published standards that outline the key quality dimensions for each phase of a typical
evaluation process (see OECD’s Quality Standards for Development Evaluation). Other national and
international organizations also publish evaluation standards. Critical among these standards are the
need for informed peer reviews, transparency, and ensuring that findings are supported by all the
relevant data.
Reporting to OMB
Agencies should report annually through the OMB budget submission process in a manner to be defined
in annual guidance on implementation of monitoring and evaluation related to these guidelines.
Reporting will outline agency policies and guidance developed, as well as best practices and lessons
learned through implementation of such monitoring and evaluation policies and guidance. Agencies
should also report annually on implementation of Sections 4(a)-(c) of the Act (all United States
government departments’ and agencies’ accounts and programs defined by the OMB Bulletin 12-01 to
fund or execute foreign assistance), Section 4(d) of the Act (USAID and Department of State only), and
compliance with reporting to FA.gov as required by guidance contained in OMB Bulletin 12-01.
Annex: Legislative Reference Chart
Sec. 3(c)(2)(A): Establish annual monitoring and evaluation objectives and timetables to plan and manage the process of monitoring, evaluating, analyzing progress, and applying learning toward achieving results;

Guidelines:
a. Agencies may do this centrally, or when decentralized, agency policies should require that operating units annually document their monitoring and evaluation objectives and timetables, as well as other key aspects of managing monitoring and evaluation and using the resulting information for learning.
b. Agencies should plan to use monitoring data and evaluation findings for making decisions about policies, strategies, program priorities, and delivery of services, as well as for planning and budget formulation processes. Evaluation findings may be used by agency staff to course-correct a project or program. When evaluators themselves provide course-correction recommendations, the responsible agency should explicitly consider how to efficiently balance the potentially competing values of the course corrections and the evaluator’s independence.
Sec. 3(c)(2)(B): Develop specific project monitoring and evaluation plans, including measurable goals and performance metrics, and to identify the resources necessary to conduct such evaluations, which should be covered by program costs;

Guidelines: Monitoring and evaluation plans should be developed as part of program, project, and activity design, and should include measurable goals. Policies should require establishing and documenting a baseline data collection methodology and a plan for regular monitoring of all programs and projects. Monitoring plans should document all of the indicators, including baselines and milestones or targets for each indicator. They should also include data collection methodology and frequency for each indicator, which should be a time interval that is feasible and necessary to effectively manage and monitor progress and results, conduct internal learning, and meet external reporting or communication requirements.

Policies should require that monitoring plans be updated or adjusted as necessary to reflect new or better information that becomes available as learning occurs (e.g., additional indicators if new data sources become available).

Policies should require that the responsible organizational units establish evaluation plans and provide guidance about what the plans should include and when and how to submit them. Evaluation costs should be planned and accounted for as part of the overall program budget.
Sec. 3(c)(2)(C): Apply rigorous monitoring and evaluation methodologies to such programs, including through the use of impact evaluations, ex-post evaluations, or other methods, as appropriate, that clearly define program logic, inputs, outputs, intermediate outcomes, and end outcomes;

Guidelines: Guidance should require that evaluations be “evidence based,” meaning they should be based on verifiable data and information that have been gathered using the monitoring and evaluation principles established in these guidelines. Evaluation design and data collection methodologies should be appropriate to answer the key questions posed by the evaluation, including both qualitative and quantitative data. The timing of evaluation data collection should be driven by the relevant program logic. Considerations for selecting a methodological approach include the information needs of management, timeline, availability of data, and resources. Evaluations should include an assessment and disclosure of assumptions and limitations.

Guidance on monitoring methodologies should include the use of logic models and definition of the program inputs, activities, outputs, intermediate outcomes, and end or long-term outcomes. Logic models set the foundation against which progress can be monitored and evaluated. Logic model documentation should include the assumptions upon which the model is based, i.e., the conditions that need to exist in order for one step in the logic model to succeed, and lead to the next step. Documentation may also include a theory of change, if applicable, which explains why it is believed that the stated program activities will lead to the desired outcomes. Logic models should be appropriate for the type of program, context, existing evidence for the theory of change, and implementation modalities.
Sec. 3(c)(2)(D): Disseminate guidelines for the development and implementation of monitoring and evaluation programs to all personnel, especially in the field, who are responsible for the design, implementation, and management of covered United States foreign assistance programs;

Guidelines: Guidelines should be disseminated to all personnel, including those in the field, and should include:
a. Roles and responsibilities for monitoring and evaluation, and for ensuring monitoring and evaluation are informed by and/or inform program design;
b. Requirements for when and how to monitor and evaluate programs, including timing and frequency;
c. Statement of the expected use of monitoring and evaluation, including processes for the use of findings for policy and program improvement;
d. Public and internal dissemination of evaluation reports and results; and
e. How the agency will ensure the collection, dissemination, and preservation of knowledge and lessons learned.
Sec. 3(c)(2)(E): Establish methodologies for the collection of data, including baseline data to serve as a reference point against which progress can be measured;

Guidelines: Policies should cover the standards for data collection, including:
a. Establish expectations for developing performance indicators to monitor progress and results for all programs;
b. Establish expectations for fully defining appropriate use of the indicator, such as its scope, acceptable data sources, or other terms of use;
c. Establish expectations for identifying or collecting baseline data, as appropriate and feasible, at the start of a program to provide a basis for planning or assessing subsequent progress;
d. Establish expectations for collecting subsequent results and at what frequency;
e. Ensure targets are set for each performance indicator to indicate the expected change over the course of each period of performance; and
f. Outline expected procedures for reporting on and using monitoring data, which could include reviewing and analyzing progress and results, adaptive management, internal learning, meeting external reporting or communication requirements, and any relevant reporting or sharing of data to agency stakeholders.
Sec. 3(c)(2)(F): Evaluate, at least once in their lifetime, all programs whose dollar value equals or exceeds the median program size for the relevant office or bureau or an equivalent calculation to ensure the majority of program resources are evaluated;

Guidelines: A key consideration in selecting a program for evaluation should be the information needs of the agency or office managing the program to inform future decisions. At a minimum, agencies that directly manage foreign assistance program funds should direct their responsible organizational units to evaluate, at least once in their lifetimes, all programs whose dollar value equals or exceeds the median program size for the relevant bureau or office, or an equivalent calculation, such that the majority of program resources are evaluated. This determination should reflect the Practical and Efficient principle, taking into account the scope of their portfolio, size of their budget, anticipated needs of management, and appropriate programmatic level at which to evaluate. Evaluating a subset or component of a program may be acceptable provided the evaluation is sufficient to address key uncertainties and critical questions related to the program’s intended outcomes.
Sec. 3(c)(2)(G): Conduct impact evaluations on all pilot programs before replicating, or conduct performance evaluations and provide a justification for not conducting an impact evaluation when such an evaluation is deemed inappropriate or impracticable;

Guidelines: Agency policies should include the expectation that pilot programs or interventions (defined above) should be evaluated for impact before being replicated or expanded. Pilot interventions should be identified during project or activity design, and the impact evaluation should be integrated into the design of the project or activity. An impact evaluation (defined above) requires a specialized design, and must be carried out by evaluators with the expertise and knowledge to properly implement such a design and analyze the resulting data. Its timing must also be coordinated with the implementation of the intervention and so must be planned accordingly. If an impact evaluation is deemed to be impracticable or inappropriate for a particular pilot program or intervention, a performance evaluation must be conducted with a justification of the methodological choice.
Sec. 3(c)(2)(H): Develop a clearinghouse capacity for the collection, dissemination, and preservation of knowledge and lessons learned to guide future programs for United States foreign assistance personnel, implementing partners, the donor community, and aid recipient governments;

Guidelines: Agencies should make information on program plans, monitoring data, and evaluation findings available to the public, other foreign assistance agencies, implementing partners, the donor community and aid recipient governments. Agencies may develop a new website or house this information on an existing one in a way that is easily accessible to the public. Evaluation reports must be included on each agency’s clearinghouse website, except those exempted under clearly specified criteria in agency policies under the guidelines to “Publicly report each evaluation.” Other documents published may include:
a. Strategies that guide foreign assistance;
b. Planning information on how programs are developed;
c. Monitoring information and reports;
d. Tools and resources used to manage programs;
e. Summaries of lessons learned;
f. Budget information; and
g. Links to related data required by OMB Bulletin 12-01, Guidance on Collection of U.S. Foreign Assistance Data, to be reported to FA.gov or other relevant websites.
Sec. 3(c)(2)(I): Internally distribute evaluation reports;

Guidelines: Evaluation reports, program summaries, and other relevant documents should be made available internally for learning and analysis. At minimum, the clearinghouse described above should be easily accessible by internal staff, and agencies are encouraged to use additional strategies for distributing evaluation reports and related information. These strategies may include a range of options, such as using newsletters or listservs, distributing abstracts or summaries of recently completed evaluations, videos, blogs, podcasts, and other events, according to the resources and context of the agency.
Sec. 3(c)(2)(J): Publicly report each evaluation, including an executive summary, a description of the evaluation methodology, key findings, appropriate context, including quantitative and qualitative data when available, and recommendations made in the evaluation within 90 days after the completion of the evaluation;

Guidelines:
a. Evaluation reports should be clear, concise, and empirically grounded. They should include an executive summary, a succinct description of the program, evaluation purpose and questions, evaluation design and data collection methods and their limitations, key findings, and conclusions or recommendations.
b. For transparency and accountability, final evaluation reports should be made available to the public within 90 days of completion of the evaluation as defined by the agency. Agencies may have additional requirements for completion, such as required internal and stakeholder reviews, and must establish guidelines that clearly delineate these requirements and processes. To the extent possible, findings should be made available to communities involved in the program implementation or related evaluation efforts in an appropriate format. If the evaluations are classified, sensitive, law enforcement sensitive, or commercially sensitive, agencies should have policies in place spelling out an exemption for public disclosure. Summaries of results from classified or sensitive evaluations, including a description of the methodology, key findings and recommendations, may be made available instead.
Sec. 3(c)(2)(K): Undertake collaborative partnerships and coordinate efforts with the academic community, implementing partners, and national and international institutions, as appropriate, that have expertise in program monitoring, evaluation, and analysis when such partnerships provide needed expertise or significantly improve the evaluation and analysis;

Guidelines: Agencies should undertake collaborative partnerships or otherwise coordinate with other agencies, operating units, academic institutions, implementing partners, or international or national institutions and organizations to conduct monitoring and evaluation of programs, projects, or interventions when such partnerships can be expected to provide needed expertise or significantly improve the evaluation and analysis. These partnerships or collaborative arrangements may provide needed expertise to significantly improve monitoring, evaluation, and analysis, and may or may not involve the transfer of funds. In such cases where the transfer of funds is involved, agencies should:
a. Determine roles and responsibilities for monitoring and evaluation as part of the agreement accompanying the provision of funds, and
b. Ensure the responsible organization carries out evaluations of programs consistent with the agency’s policy and disseminates a final evaluation report.
Sec. 3(c)(2)(L): Ensure that verifiable, reliable, and timely data, including from local beneficiaries and stakeholders, are available to monitoring and evaluation personnel to permit the objective evaluation of the effectiveness of covered United States foreign assistance programs, including an assessment of assumptions and limitations in such evaluations; and

Guidelines: Monitoring and evaluation should employ methods appropriate to context and population to ensure that verifiable, reliable, and timely quantitative and qualitative information is collected, included, and considered, with appropriate provisions for the protection of human subjects in the collection and use of this information.

Agency policies should encourage engagement of beneficiaries, partner country governmental or non-governmental stakeholders, and implementing partners in monitoring and evaluation processes where feasible. Agency policies should encourage alignment of monitoring and evaluation efforts with those of partner countries and other donors wherever feasible in order to promote aid effectiveness.

Agency policies should ensure that agreements with third party partners (including, for example, evaluators, implementing partners, host country partners, and other stakeholders) include a requirement that activity, project, and/or program data be made available to agency personnel as well as relevant country stakeholders, while adhering to the principle of ethical conduct of monitoring and evaluation.

Evaluations should include an assessment and disclosure of assumptions and limitations.
Sec. 3(c)(2)(M): Ensure that standards of professional evaluation organizations for monitoring and evaluation efforts are employed, including ensuring the integrity and independence of evaluations, permitting and encouraging the exercise of professional judgment, and providing for quality control and assurance in the monitoring and evaluation process.

Guidelines: Agency policies should incorporate relevant standards developed by professional organizations for monitoring and evaluation to ensure appropriate independence of evaluations, guide the selection of monitoring and evaluation methodologies, permit the exercise of professional judgment, and provide for quality control in the monitoring and evaluation process.

Professional standards are intended to improve the quality of evaluation processes and products and to facilitate collaboration. For example, the American Evaluation Association publishes standards and guidelines on evaluation (see American Evaluation Association’s An Evaluation Roadmap for More Effective Government). The Organization for Economic Cooperation and Development (OECD) also has published standards that outline the key quality dimensions for each phase of a typical evaluation process (see OECD’s Quality Standards for Development Evaluation). Other national and international organizations also publish evaluation standards. Critical among these standards are the need for informed peer reviews, transparency, and ensuring that findings are supported by all the relevant data.