Healthcare decision-makers face mounting pressure to deliver high-quality care at sustainable costs. A rigorous evaluation of cost effectiveness ensures that limited resources achieve maximum patient benefit. This article examines foundational methodologies—cost-effectiveness analysis, QALYs, ICERs, HTA frameworks and value-based care models—while presenting real-world case studies and practical guidance for healthcare managers. You will learn how to calculate Quality-Adjusted Life Years, interpret incremental cost-effectiveness ratios, implement Health Technology Assessment guidelines, integrate value-based principles, and address ethical considerations. By the end, you will have a clear roadmap for conducting robust economic evaluations that inform policy, optimise spending efficiency and uphold societal values.
Cost-effectiveness analysis (CEA) compares treatment costs against patient outcomes to guide resource allocation. By linking monetary investments with clinical benefits, CEA promotes efficient spending and improved care quality. Health economists collect cost data—direct medical expenses, indirect societal costs—and measure outcomes in natural units such as life years gained or symptom-free days. This structured comparison reveals which interventions deliver the greatest health gains per pound spent, empowering policymakers to prioritise high-value treatments and discourage low-impact expenditures.
Cost-effectiveness analysis evaluates medical treatments by quantifying costs and health gains in a unified framework. Analysts gather data on intervention costs—hospital stays, medications, follow-up care—and measure outcomes such as survival or disease remission. They then calculate a ratio of incremental costs to incremental outcomes, guiding decisions about which therapy offers superior value. This approach ensures that options are compared transparently and supports evidence-based funding choices.
CEA rests on three essential components: costs, outcomes and perspective.
The Incremental Cost-Effectiveness Ratio (ICER) expresses the additional cost per additional health outcome unit when comparing two interventions. Calculated as ΔCost divided by ΔEffectiveness, ICER identifies the marginal value of a new therapy over standard care. A lower ICER indicates greater value. Decision thresholds—commonly £20,000–£30,000 per QALY in the UK—help determine whether an intervention is deemed cost-effective under established guidelines.
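The calculation itself is straightforward. The sketch below, written in Python with entirely hypothetical cost and QALY figures, shows how ΔCost and ΔEffectiveness combine into an ICER that can be read against the £20,000–£30,000 per QALY range mentioned above.

```python
# Minimal sketch: incremental cost-effectiveness ratio (ICER) for a
# hypothetical new therapy versus standard care. All figures are
# illustrative, not taken from any real appraisal.

def icer(cost_new: float, cost_old: float,
         effect_new: float, effect_old: float) -> float:
    """Return incremental cost per incremental unit of effect (e.g. per QALY)."""
    delta_cost = cost_new - cost_old
    delta_effect = effect_new - effect_old
    if delta_effect == 0:
        raise ValueError("No incremental effect; the ICER is undefined.")
    return delta_cost / delta_effect

# Hypothetical inputs: the new therapy costs £12,000 more and adds 0.5 QALYs.
ratio = icer(cost_new=30_000, cost_old=18_000, effect_new=2.1, effect_old=1.6)
print(f"ICER: £{ratio:,.0f} per QALY")  # £24,000 per QALY, within the £20k-£30k range
```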
The interpretation of ICERs can be complex, and alternative statistical approaches are being explored to enhance their robustness.
Median-Based Incremental Cost-Effectiveness Ratio (ICER) in Healthcare
Cost-effectiveness analysis (CEA) is a form of economic evaluation that scrutinises the costs and health outcomes of alternative strategies and has been widely adopted within the health sciences. The incremental cost-effectiveness ratio (ICER), which quantifies the additional cost per unit of outcome gained by one strategy compared with another, has become a prevalent methodology in CEA. Notwithstanding its popularity, limited consideration has been given to summary measures beyond the mean for aggregating costs and effectiveness within the context of CEA. Although certain clear advantages of alternative measures of central tendency, such as the median for cost data that are frequently highly skewed, are well recognised, the median has hitherto rarely been incorporated into the ICER. In this paper, we introduce the median-based ICER, accompanied by inferential procedures, and propose that mean- and median-based ICERs should be considered concurrently as complementary instruments in CEA to facilitate informed decision-making, acknowledging the respective merits and drawbacks of each. If mean- and median-based CEAs yield concordant findings, we may possess reasonable confidence in the cost-effectiveness of an intervention; however, if they produce divergent results, our confidence may require adjustment, pending further substantiation.
Median-based incremental cost-effectiveness ratio (ICER), H Zhao, 2012
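As a rough illustration of the idea in the quoted paper (not the authors' inferential procedure), the sketch below computes a mean-based and a median-based ICER from simulated, right-skewed patient-level cost data and normally distributed QALYs; all inputs are invented for demonstration.

```python
# Illustrative sketch only: compare a mean-based and a median-based ICER
# on skewed cost data, using patient-level samples that are entirely simulated.
import numpy as np

rng = np.random.default_rng(42)

# Simulated per-patient costs (right-skewed) and QALYs for two arms.
cost_new = rng.lognormal(mean=10.3, sigma=0.6, size=300)   # new therapy
cost_old = rng.lognormal(mean=10.0, sigma=0.6, size=300)   # standard care
qaly_new = rng.normal(loc=2.1, scale=0.4, size=300)
qaly_old = rng.normal(loc=1.7, scale=0.4, size=300)

def icer(summary) -> float:
    """ICER using a chosen summary statistic (np.mean or np.median)."""
    d_cost = summary(cost_new) - summary(cost_old)
    d_qaly = summary(qaly_new) - summary(qaly_old)
    return d_cost / d_qaly

print(f"Mean-based ICER:   £{icer(np.mean):,.0f} per QALY")
print(f"Median-based ICER: £{icer(np.median):,.0f} per QALY")
# Concordant results strengthen confidence in the conclusion; divergence
# signals that skewness is driving the mean-based figure and warrants caution.
```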
Below is a comparative analysis of three economic evaluation methods:
| Analysis Type | Unit of Measurement | Value Metric |
| --- | --- | --- |
| Cost-Effectiveness | Natural health units (e.g., life years) | Cost per life year gained |
| Cost-Utility | Quality-Adjusted Life Years | Cost per QALY |
| Cost-Benefit | Monetary terms | Net monetary benefit |
Each model applies a distinct outcome metric—CEA uses clinical units, CUA incorporates quality adjustments and CBA translates outcomes into currency—thus informing different policy decisions.
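To make the distinctions concrete, the hedged sketch below works through the three metrics on one set of made-up incremental figures; expressing the cost-benefit result as a net monetary benefit at an assumed £25,000-per-QALY valuation is a simplification used purely for illustration.

```python
# Illustrative comparison of the three evaluation metrics in the table above,
# using invented figures for a single intervention versus standard care.

delta_cost = 12_000.0        # incremental cost (£)
delta_life_years = 0.8       # incremental life years gained
delta_qalys = 0.5            # incremental QALYs gained
wtp_per_qaly = 25_000.0      # assumed monetary valuation of one QALY (£)

# Cost-effectiveness analysis: cost per natural unit (life year gained)
cost_per_ly = delta_cost / delta_life_years

# Cost-utility analysis: cost per QALY gained
cost_per_qaly = delta_cost / delta_qalys

# Cost-benefit analysis (simplified): monetise the health gain and report
# the net monetary benefit.
net_monetary_benefit = delta_qalys * wtp_per_qaly - delta_cost

print(f"CEA: £{cost_per_ly:,.0f} per life year gained")           # £15,000
print(f"CUA: £{cost_per_qaly:,.0f} per QALY gained")              # £24,000
print(f"CBA: net monetary benefit £{net_monetary_benefit:,.0f}")  # £500
```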
Quality-Adjusted Life Years (QALYs) combine life expectancy with quality-of-life weights to standardise health outcomes. By assigning a utility value (0 to 1) to health states, QALYs capture both quantity and quality of life gained from interventions. This mechanism supports cross-disease comparisons and resource prioritisation. Use of QALYs enhances transparency in funding decisions and aligns spending with interventions that maximise total health benefit.
One QALY represents the equivalent of one year lived in perfect health. It integrates survival duration and health-related quality of life by multiplying time in a health state by its utility weight. For example, six months at 0.8 utility equals 0.4 QALYs. This metric enables policymakers to compare disparate treatments—cancer therapy versus chronic disease management—on a common effectiveness scale.
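A minimal sketch of that calculation, assuming a handful of illustrative health states and utility weights, might look like this:

```python
# Minimal sketch: QALYs as time spent in each health state multiplied by
# its utility weight. The states and weights below are illustrative only.

health_states = [
    # (years in state, utility weight between 0 and 1)
    (0.5, 0.8),   # six months in a moderately impaired state -> 0.4 QALYs
    (1.0, 0.6),   # one year in a more impaired state         -> 0.6 QALYs
    (2.0, 0.9),   # two years in near-full health             -> 1.8 QALYs
]

total_qalys = sum(years * utility for years, utility in health_states)
print(f"Total QALYs: {total_qalys:.1f}")  # 2.8
```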
This process produces transparent outcome measures that inform cost-effectiveness ratios and funding decisions.
While QALYs standardise outcomes, they raise ethical concerns. Utility weights may undervalue treatments for rare diseases, disabled populations or end-of-life care. Assigning lower quality scores to certain groups can perpetuate inequities. Moreover, small trial sizes and subjective quality measures introduce uncertainty. Recognising these limitations prompts policymakers to adjust or supplement QALYs for fairness and inclusivity.
The application of QALYs and ICERs within healthcare systems, particularly in the UK, is subject to ongoing scrutiny regarding fairness and equity.
"NICE and Fair?" Health Technology Assessment Policy and Fairness
Within any healthcare system operating with finite resources, decisions must be made regarding resource allocation. This inevitably creates both ‘winners’ and ‘losers’, as some groups have their needs prioritised while others are prevented from accessing potentially beneficial technologies. Healthcare priority-setting is thus an essential, but often contentious, aspect of health system management.
NICE and fair? Health technology assessment policy under the UK's National Institute for Health and Care Excellence, 1999–2018, V Charlton, 2020
Health systems use QALY thresholds to determine funding eligibility. Interventions below a cost-per-QALY threshold qualify for reimbursement, whereas those above may face restriction. This process steers resources toward therapies delivering the most health per investment unit. QALY-based allocation fosters transparent, consistent decisions that align spending with population health priorities.
Health Technology Assessment (HTA) is a multidisciplinary process that evaluates medical technologies’ clinical and economic value. By integrating systematic reviews, cost-effectiveness models and stakeholder input, HTA informs healthcare policy, reimbursement and clinical guidelines. This structured framework ensures that innovations—drugs, devices, diagnostics—are rigorously appraised for efficacy, safety and value before adoption, safeguarding system sustainability and patient outcomes.
The HTA process guides policy by synthesising clinical evidence, economic data and ethical considerations into comprehensive appraisals. Expert panels review safety, effectiveness and cost-effectiveness reports, then recommend reimbursement status. Policymakers use these recommendations to set coverage and pricing rules. This evidence-driven pathway ensures that funding decisions reflect both health benefits and budgetary constraints.
Leading HTA bodies follow published guidelines—such as NICE’s technology appraisal process—detailing methodological standards for economic evaluation. These frameworks specify reference case requirements: perspective, time horizon, discount rates and outcome measures (QALYs). By standardising methodology, HTA guidelines promote comparability across assessments and support transparent decision-making.
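As a small worked example of one reference-case element, the sketch below discounts hypothetical yearly costs and QALYs to present value; the 3.5% annual rate is the figure commonly cited in UK reference-case guidance, and every other number is invented.

```python
# Sketch of reference-case discounting: future costs and QALYs are brought
# to present value at an annual rate (3.5% is used here as an illustrative
# rate consistent with common UK reference-case guidance).

def discounted_total(values_per_year, rate=0.035):
    """Present value of a stream of yearly values; year 0 is undiscounted."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values_per_year))

yearly_costs = [5_000, 3_000, 3_000, 3_000, 3_000]   # £ per year over the horizon
yearly_qalys = [0.85, 0.80, 0.78, 0.75, 0.70]        # QALYs accrued per year

print(f"Discounted costs: £{discounted_total(yearly_costs):,.0f}")
print(f"Discounted QALYs: {discounted_total(yearly_qalys):.2f}")
```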
The National Institute for Health and Care Excellence (NICE) is a prominent example of an organisation that employs structured HTA processes, though variations exist across its different assessment programmes.
NICE Health Technology Assessment: Methods, Processes, and Allocative Efficiency
Decisions made by the National Institute for Health and Care Excellence (NICE) influence the allocation of resources within the National Health Service’s fixed budgets. However, guidance for different types of health interventions is managed through distinct programmes within NICE, each employing different methods and processes.
The objective of this research was to identify variations in the processes and methods across NICE’s health technology assessment programmes and to explore their potential impact on allocative efficiency within the National Health Service.
A review of NICE methods and processes across health technology assessment programmes: why the differences and what is the impact?, A Cole, 2017
Emerging interventions—AI diagnostics, gene therapies—face HTA scrutiny through adaptive pathways. Early evidence submissions and real-world data collection enable provisional appraisals. Economic models incorporate surrogate endpoints and Bayesian methods to handle uncertainty. This iterative evaluation balances rapid patient access with rigorous value assessment, guiding funding while evidence evolves.
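One simple way to express that uncertainty is a probabilistic sensitivity analysis. The sketch below, using assumed distributions for incremental cost and incremental QALYs, estimates the probability that a hypothetical technology is cost-effective at a chosen threshold; it illustrates the general approach, not any agency's method.

```python
# Illustrative probabilistic sensitivity analysis for an emerging technology
# with sparse evidence: parameters are drawn from assumed distributions and
# the share of simulations falling under the threshold is reported.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
wtp = 30_000  # assumed willingness-to-pay threshold (£/QALY)

# Assumed uncertainty in incremental cost and incremental QALYs.
delta_cost = rng.normal(loc=15_000, scale=4_000, size=n)
delta_qaly = rng.gamma(shape=4.0, scale=0.15, size=n)   # mean 0.6, always positive

net_benefit = delta_qaly * wtp - delta_cost
prob_cost_effective = (net_benefit > 0).mean()
print(f"Probability cost-effective at £{wtp:,}/QALY: {prob_cost_effective:.0%}")
```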
Value-based care links reimbursement to patient outcomes rather than service volume. By rewarding providers for health improvements, value-based models align financial incentives with cost-effectiveness principles. This approach encourages adoption of interventions demonstrating high QALY gains per expenditure and discourages low-value practices. Ultimately, value-based care drives system efficiency, enhances patient satisfaction and reduces wasteful spending.
These principles create a cycle of accountability and innovation that bolsters cost-effectiveness across the care continuum.
Outcome measurement employs clinical indicators (survival rates, readmissions), patient-reported outcomes (quality-of-life scores) and process metrics (timely follow-up). Aggregating these data into composite value scores—often incorporating QALYs—enables comparisons across providers and treatments. Robust outcome measurement ensures that payment aligns with genuine health improvements.
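There is no single standard composite score, but a hypothetical weighted aggregation, with invented metric names, weights and values, might be sketched as follows:

```python
# Hypothetical composite value score: clinical, patient-reported and process
# metrics are normalised to 0-1 and combined with assumed weights. The metric
# names and weights are illustrative, not a standard scoring system.

provider_metrics = {
    "survival_rate": 0.92,        # clinical indicator (already 0-1)
    "readmission_rate": 0.08,     # lower is better, so inverted below
    "pro_quality_score": 0.74,    # patient-reported outcome, 0-1
    "timely_follow_up": 0.88,     # process metric, 0-1
}

weights = {
    "survival_rate": 0.40,
    "readmission_rate": 0.20,
    "pro_quality_score": 0.25,
    "timely_follow_up": 0.15,
}

scores = dict(provider_metrics)
scores["readmission_rate"] = 1 - scores["readmission_rate"]  # fewer readmissions is better

composite = sum(scores[k] * weights[k] for k in weights)
print(f"Composite value score: {composite:.2f}")  # 0.87
```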
By incentivising interventions with proven effectiveness, value-based care reduces unnecessary procedures and hospitalisations. Providers adopt best practices, preventive measures and care coordination solutions that yield superior health gains. This shift from fee-for-service to value-oriented payment structures curbs excess utilisation, directs resources to high-impact treatments and ultimately enhances system sustainability.
Recent case studies illustrate how economic evaluations shape practice and policy. In 2024, a novel oncology drug demonstrated a cost-per-QALY of £25,000, meeting NICE thresholds and securing reimbursement. A public health influenza vaccination programme achieved a cost of £10 per QALY gained, guiding funding and expanding coverage. These examples underscore the tangible impact of cost-effectiveness assessments on treatment access and budget allocation.
Decision-makers use evaluation results to define reimbursement criteria, negotiate pricing and develop clinical guidelines. A favourable ICER can accelerate drug approval and widen patient access, whereas high ratios trigger price negotiations or restricted use. Clinicians reference economic evidence when selecting therapies, ensuring that patient care aligns with both clinical efficacy and cost considerations.
Real-world evaluations face data limitations—small sample sizes, short follow-up periods and incomplete cost tracking. Variability in treatment settings and patient adherence introduces heterogeneity that complicates modelling. Addressing these challenges requires robust sensitivity analyses, use of registry data and transparent reporting to enhance confidence in conclusions.
Healthcare managers can implement basic CEA using a structured, step-wise methodology and accessible tools. By gathering cost data, selecting appropriate outcome measures and applying spreadsheet-based models, managers transform raw data into actionable ratios. Interpreting results relative to established thresholds enables informed decisions about service offerings, contract negotiations and care pathways.
Interpreting ICER requires comparing the ratio to willingness-to-pay thresholds. Values below the threshold indicate cost-effectiveness, while higher ratios suggest limited value or need for price negotiation. Sensitivity analyses reveal robustness of conclusions under varying assumptions. Clear interpretation guides funding recommendations, formulary inclusion and strategic planning.
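A minimal one-way sensitivity check, assuming hypothetical figures and a £30,000-per-QALY willingness-to-pay threshold, could look like this: vary the incremental QALY gain and see where the ICER crosses the threshold.

```python
# Sketch of a one-way sensitivity analysis: vary one assumption at a time
# (here, the incremental QALY gain) and check whether the ICER stays below
# the willingness-to-pay threshold. All numbers are illustrative.

delta_cost = 12_000.0
base_delta_qaly = 0.5
wtp_threshold = 30_000.0

for delta_qaly in (0.3, 0.4, base_delta_qaly, 0.6, 0.7):
    ratio = delta_cost / delta_qaly
    verdict = "cost-effective" if ratio <= wtp_threshold else "above threshold"
    print(f"dQALY={delta_qaly:.1f}  ICER=£{ratio:,.0f}/QALY  -> {verdict}")
```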
Economic evaluations must balance efficiency with equity and social values. Ethical issues arise when cost-effectiveness metrics disadvantage vulnerable populations, rare diseases or end-of-life care. Incorporating societal preferences—such as greater weight for life-saving interventions—enhances fairness. Transparent deliberation of thresholds and distributional impacts ensures that cost-effectiveness does not compromise equitable access.
Ethical concerns include undervaluing treatments for patients with disabilities or chronic conditions whose quality-of-life weights are low. Standard ICER thresholds may restrict access to expensive therapies that nonetheless carry societal importance. Addressing these challenges involves adjusting weights, applying equity-weighted analyses and engaging stakeholders in threshold decisions to uphold social justice.
Policy agencies incorporate public preferences through deliberative panels, willingness-to-pay surveys and multi-criteria decision analysis. By capturing societal priorities—such as priority for children or end-of-life care—evaluations reflect community values. This integration complements quantitative metrics and guides policymakers toward balanced, culturally sensitive resource allocation.
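A hypothetical multi-criteria decision analysis sketch is shown below: interventions are scored against illustrative criteria and ranked by a weighted sum. The criteria, weights and scores are all assumptions chosen for demonstration, not a recommended framework.

```python
# Hypothetical multi-criteria decision analysis (MCDA) sketch: interventions
# are scored 0-1 against criteria capturing societal preferences, then ranked
# by a weighted sum. Criteria, weights and scores are all invented.

criteria_weights = {
    "health_gain": 0.40,
    "disease_severity": 0.25,
    "equity_impact": 0.20,      # e.g. benefit to underserved groups
    "budget_impact": 0.15,      # higher score = more affordable
}

interventions = {
    "Gene therapy (rare disease)": {"health_gain": 0.9, "disease_severity": 0.95,
                                    "equity_impact": 0.8, "budget_impact": 0.2},
    "Preventive screening":        {"health_gain": 0.6, "disease_severity": 0.4,
                                    "equity_impact": 0.7, "budget_impact": 0.9},
}

for name, scores in interventions.items():
    total = sum(scores[c] * w for c, w in criteria_weights.items())
    print(f"{name}: weighted score {total:.2f}")
```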
Discussions continue over setting rigid versus flexible thresholds. A fixed threshold provides clarity but may exclude high-cost innovations for rare diseases. Flexible, context-specific thresholds allow case-by-case deliberation but risk inconsistency. Ongoing debate explores hybrid approaches that combine quantitative benchmarks with ethical override mechanisms to balance value and access.
Evaluating the cost effectiveness of medical treatments requires a comprehensive blend of rigorous methodology, transparent thresholds and ethical awareness. By mastering CEA, QALYs, ICER interpretation, HTA processes and value-based frameworks, healthcare leaders can drive data-informed policy, optimise patient outcomes and steward resources responsibly. Real-world case studies illustrate how evaluations translate into funding decisions and clinical guidelines. As new technologies emerge, continued refinement of methods and ethical frameworks will ensure that economic evaluations remain relevant, equitable and aligned with evolving societal priorities.