Policy development relies on solid evidence and transparent reasoning. This guide offers practical, non-technical advice to help policy officials calculate figures for Impact Assessments (IAs) and Options Assessments (OAs) with clarity, consistency, and accountability. It draws on established public-sector appraisal practices while remaining accessible for day-to-day policy work.
1) Frame the problem and establish the baseline
– Start with a clear statement of the policy problem, objective, and the policy instrument under consideration.
– Define the baseline (counterfactual) that would prevail without the proposed policy. This is essential for measuring incremental impacts.
– Identify the horizon of analysis early. Align the time frame with policy relevance and data availability.
2) Decide the evaluation approach
– Determine whether the proposal requires an IA, an OA, or both. IAs typically examine a policy’s direct and indirect impacts, while OAs focus on comparing a set of feasible options.
– Decide which impacts to monetise and which to present as non-monetised indicators alongside the monetised figures. Some effects (e.g., fairness, civic trust, biodiversity) may be difficult to price accurately but remain important for decision-makers.
3) Identify costs and benefits
– Costs to consider:
  – Direct public sector costs (implementation, administration, monitoring, enforcement).
  – Costs to businesses and the voluntary sector (regulatory burden, compliance costs, training).
  – Costs to individuals (time, travel, changes in behaviour).
  – Transition costs and potential offsetting savings (efficiency gains, reduced future programme outlays).
– Benefits to consider:
  – Productivity gains (labour market outcomes, output per hour, time savings).
  – Health, safety and wellbeing improvements.
  – Environmental outcomes (emissions reductions, resource efficiency).
  – Public sector efficiency (streamlined service delivery, avoided costs).
  – Revenue effects and broader macroeconomic implications where appropriate.
4) Data sources, quality and transparency
– Gather data from credible, quality-assured sources (administrative data, surveys, published studies, pilot results).
– Document data sources, assumptions, limitations, and any data cleaning steps.
– When data is imperfect or incomplete, use ranges, literature-backed defaults, or expert judgement with explicit caveats. Always flag areas where data quality drives uncertainty.
5) Modelling approaches and parameter choices
– Choose transparent modelling approaches appropriate to the policy context. Simple spreadsheet models are often sufficient for IAs and OAs; more complex models may be warranted for larger-scale interventions.
– Clearly describe model structure, inputs, and outputs. Include a glossary of terms if the model is used across teams.
– Use modular templates so that updates (new data, new options) can be incorporated without rebuilding the model.
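For illustration only, a minimal sketch of such a modular structure, assuming Python; the Option class, field names, and figures are placeholder assumptions, not a prescribed format:

```python
# Minimal sketch of a modular appraisal model: inputs are kept separate from
# calculation logic so that new data or options can be swapped in without
# rebuilding the model. All names and figures are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class Option:
    name: str
    annual_costs: list[float]      # year-by-year costs, year 0 first
    annual_benefits: list[float]   # year-by-year benefits, year 0 first


def net_annual_flows(option: Option) -> list[float]:
    """Return benefits minus costs for each year of the appraisal period."""
    return [b - c for b, c in zip(option.annual_benefits, option.annual_costs)]


# Updating an option means editing its inputs only; the calculation logic is untouched.
option_a = Option("Option A", annual_costs=[100, 20, 20], annual_benefits=[0, 60, 60])
print(net_annual_flows(option_a))   # [-100, 40, 40]
```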
6) Time horizon and discounting
– Select an appropriate time horizon that captures the lasting effects of the policy and any delayed benefits or costs.
– Use the discount rate specified in the Green Book or relevant government guidance. Document the chosen rate, and explain any alternative rates tested for robustness, particularly over long horizons (a minimal discounting sketch follows this section).
– Present both discounted and, where helpful, undiscounted figures for long-term outcomes.
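A minimal discounting sketch, assuming a 3.5% rate purely for illustration (this mirrors the Green Book's standard social time preference rate, but the rate, horizon, and any declining long-term schedule should be confirmed against current guidance; all figures are placeholders):

```python
def present_value(net_flows, discount_rate=0.035):
    """Discount year-by-year net flows (year 0 first) to a present value."""
    return sum(flow / (1 + discount_rate) ** year for year, flow in enumerate(net_flows))


net_flows = [-100.0, 40.0, 40.0, 40.0]      # placeholder figures, arbitrary units
print(round(present_value(net_flows), 1))    # discounted total
print(round(sum(net_flows), 1))              # undiscounted total, for comparison
```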
7) Uncertainty, sensitivity and scenario analysis
– Acknowledge uncertainty explicitly. Distinguish between parameter uncertainty (inputs) and structural uncertainty (model design).
– Conduct sensitivity analyses to test how results change with key assumptions (e.g., discount rate, uptake, compliance, price changes).
– Include scenario analysis to illustrate outcomes under plausible futures (optimistic, pessimistic, and baseline scenarios).
– Where feasible, consider probabilistic methods (Monte Carlo simulations) to convey the probability distribution of outcomes, or provide ranges and confidence intervals for critical figures.
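Where probabilistic methods are used, a sketch along these lines can convey the spread of outcomes; the distributions, parameters, and cost figures below are invented placeholders, not recommended values:

```python
# Minimal Monte Carlo sketch: key inputs (uptake, unit benefit) are treated as
# distributions rather than point estimates, and the spread of net present
# values is summarised as a range. All distributions and figures are placeholders.
import random

random.seed(1)

def npv(net_flows, rate=0.035):
    return sum(f / (1 + rate) ** t for t, f in enumerate(net_flows))

results = []
for _ in range(10_000):
    uptake = random.triangular(0.4, 0.9, 0.7)    # low, high, most likely
    unit_benefit = random.gauss(50.0, 10.0)       # mean, standard deviation
    annual_benefit = uptake * unit_benefit
    results.append(npv([-100.0] + [annual_benefit] * 5))

results.sort()
low, central, high = results[500], results[5_000], results[9_500]
print(f"NPV range (5th to 95th percentile): {low:.0f} to {high:.0f}, central {central:.0f}")
```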
8) Distributional and non-monetised impacts
– Assess how impacts fall across different groups (by income, region, age, disability, business size, etc.). Distributional analysis supports fairer decision-making and can be essential for public acceptability.
– When prices cannot capture welfare changes, use well-justified non-monetised indicators (qualitative notes, matched comparisons, or equity weights where policy allows).
– Document any distributional weights or criteria used and explain their rationale.
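Where equity weights are permitted, one commonly cited formulation (reflected in Green Book supplementary guidance) weights a group's monetised impact by the ratio of median income to the group's income, raised to an elasticity of marginal utility often taken as 1.3. The sketch below is illustrative only; the formula, elasticity, and income figures should all be confirmed against current guidance before use:

```python
# Illustrative distributional weighting sketch. The weight formula and the
# elasticity value (1.3) follow one commonly cited Green Book formulation;
# income figures and impacts are placeholders.
ELASTICITY = 1.3
MEDIAN_INCOME = 30_000

groups = {
    # group: (equivalised income, monetised impact)
    "lowest income quintile": (15_000, 10.0),
    "highest income quintile": (60_000, 10.0),
}

for name, (income, impact) in groups.items():
    weight = (MEDIAN_INCOME / income) ** ELASTICITY
    print(f"{name}: weight {weight:.2f}, weighted impact {impact * weight:.1f}")
```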
9) Avoid double counting and interdependencies
– When combining impacts from multiple sources or policies, be careful not to double-count benefits or costs.
– Map dependencies between policy areas to ensure coherent aggregation. Where interdependent effects exist, document the direction and strength of those linkages.
10) Documentation and governance
– Create a clear, audit-friendly trail: problem statement, baseline, options, data sources, assumptions, methods, calculations, and limitations.
– Include an annex with full data tables, model equations, and sensitivity analyses to support scrutiny.
– Ensure version control and stakeholder review points. Seek feedback from colleagues in evidence, finance, and policy teams to promote cross-cutting legitimacy.
11) Presentation of results
– Produce a concise executive summary with the headline figures (monetised and non-monetised), key uncertainties, and the preferred option.
– Use clear visuals: simple charts and tables that show the comparison across options, confidence ranges, and distributional effects.
– Provide practical implications for decision-makers: what changes with each option, what risks to watch, and what monitoring will be required post-implementation.
12) Templates, tools, and practical tips
– Develop or adopt standard IA/OA templates that include:
  – Baseline and counterfactual description
  – A fixed set of cost and benefit categories
  – A transparent discounting approach
  – A structured sensitivity and scenario section
  – A distributional analysis module
  – An annex for data sources and modelling details
– Reuse previous IAs/OAs where appropriate to maintain consistency and reduce rework, updating only the inputs that change.
– Start the calculation early in the policy cycle; iteratively refine figures as more data becomes available.
– Engage with statisticians, economists, and governance teams early to validate methods and assumptions.
– Maintain an internal quality assurance process: peer reviews, sign-off steps, and public-facing disclosures where required.
13) Common pitfalls to avoid
– Overstating precision: avoid implying exact certainty where there is significant uncertainty.
– Double counting: ensure impacts are counted once and only in the most relevant category.
– Ignoring distributional effects: neglecting equity can undermine legitimacy and compliance.
– Inadequate transparency: failing to document assumptions or data sources reduces credibility and contestability.
– Underestimating implementation challenges: real-world uptake and enforcement often differ from plans.
14) A practical example (illustrative only, not an endorsement)
– Problem: A local authority proposes a charging scheme for single-use plastics to reduce litter.
– Baseline: Current waste trends without the charge.
– Options: (1) No charge, (2) Small charge, (3) Higher charge with exemptions for vulnerable groups.
– Costs: administrative costs of collecting the charge; enforcement costs; behavioural change costs for businesses.
– Benefits: reduced litter cleaning costs; environmental benefits; health and tourism impacts; potential revenue recycling.
– Analysis: estimate incremental costs and benefits for each option, apply a discount rate, run sensitivity analyses on uptake and price, assess distributional impacts (which groups are most affected by the charge or exemptions).
– Decision support: present a succinct summary of which option yields best value, given uncertainty, equity considerations, and feasibility.
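To tie these steps together, a purely hypothetical sketch of the option comparison; every figure, horizon, and rate below is invented for illustration and carries no empirical weight:

```python
# Illustrative end-to-end comparison of the three options using entirely
# hypothetical net flows (benefits minus costs), discounted and compared.
def npv(net_flows, rate=0.035):
    return sum(f / (1 + rate) ** t for t, f in enumerate(net_flows))

options = {
    # placeholder net flows over a five-year horizon, year 0 first
    "1) No charge": [0.0, 0.0, 0.0, 0.0, 0.0],
    "2) Small charge": [-50.0, 30.0, 30.0, 30.0, 30.0],
    "3) Higher charge with exemptions": [-80.0, 45.0, 45.0, 45.0, 45.0],
}

for name, flows in options.items():
    print(f"{name}: NPV {npv(flows):.0f}")
# Sensitivity and distributional analysis (sections 7 and 8) would then be
# applied to the shortlisted options before presenting a recommendation.
```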
Closing thoughts
Robust calculation of IA and OA figures is a collaborative, iterative process that balances discipline with practicality. By clearly defining the problem, transparently documenting data and methods, and presenting results in an accessible way, policy officials can enhance the credibility of their recommendations and support well-informed, accountable decisions. The goal is not to produce a perfect forecast, but to provide a rigorous, evidence-based basis for choosing among credible options and for monitoring policy outcomes once implemented.