Eligibility

Eligibility criteria for inclusion in the Case Mix Programme (CMP) Annual Quality Report are as follows:

  • Adult, general critical care units (either ICU or combined ICU/HDU) located in NHS hospitals in England, Wales and Northern Ireland and participating in the CMP
  • Units that have submitted at least six months of data for the period 1 April to 31 March of the reporting year
  • Units with formal, signed consent from the Clinical Director of the critical care unit and the Chief Executive of the Trust

CMP Dataset (Version 3.1)

An overview of the Case Mix Programme (CMP) Dataset (Version 3.1) is provided below:

CMP data processing

An overview of the Case Mix Programme (CMP) process for the collection, entry, submission, validation, analysis and reporting of CMP data is shown in the diagram below:

[Diagram: CMP data collection, entry, submission, validation, analysis and reporting process]

Presentation of results

Potential Quality Indicators are presented in a funnel plot format.

The grey lines represent control limits at two standard deviations (2SD, dashed line) and three standard deviations (3SD, solid line). Selected units are highlighted in orange.

As the number of admissions increases, the precision with which a result can be calculated increases.

 

If variation between results is random then, on average, 95% of points should lie within the inner control limits (2 standard deviations) and 99.8% should lie within the outer control limits (3 standard deviations).
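As a rough sketch of how such control limits behave, the following assumes a proportion-type indicator with an overall (target) rate and uses the usual normal approximation to the binomial; the function name and the 20% rate are illustrative, not figures from the report:

```python
import math

def funnel_limits(p0, n, z):
    """Approximate control limits for a proportion indicator at a unit
    with n admissions: target rate p0 plus/minus z standard errors
    (normal approximation to the binomial)."""
    se = math.sqrt(p0 * (1 - p0) / n)
    return max(0.0, p0 - z * se), min(1.0, p0 + z * se)

p0 = 0.20  # hypothetical overall rate, not a CMP figure
for n in (50, 200, 800):
    inner = funnel_limits(p0, n, z=2)  # inner limits (~95% coverage)
    outer = funnel_limits(p0, n, z=3)  # outer limits (~99.8% coverage)
    print(f"n={n}: 2SD {inner}, 3SD {outer}")
```

The limits narrow as the number of admissions grows, which is why the control limits trace out a funnel shape.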
 

Example – Hospital mortality

If the model is perfectly predicting hospital mortality, we expect the points to form a funnel shape centred on a mortality ratio of one (1.0).
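For a mortality-ratio indicator, the same idea can be sketched with a Poisson-style approximation centred on 1.0; the observed and expected counts below are invented for illustration:

```python
import math

def smr_limits(expected_deaths, z):
    """Approximate funnel limits for a standardised mortality ratio
    (observed deaths / expected deaths), centred on 1.0, using the
    approximation SE(SMR) ~= sqrt(1 / expected deaths)."""
    se = math.sqrt(1.0 / expected_deaths)
    return max(0.0, 1.0 - z * se), 1.0 + z * se

observed, expected = 30, 20.0  # hypothetical unit, not CMP data
smr = observed / expected      # mortality ratio of 1.5
print(f"SMR={smr}, 2SD limits {smr_limits(expected, 2)}, 3SD limits {smr_limits(expected, 3)}")
```

In this invented case the unit sits outside the 2SD limits but inside the 3SD limits, so it would stand out on the funnel plot without crossing the outer control limit.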

 

 

ICNARC risk prediction model

ICNARC uses the ICNARC (2011) risk prediction model to calculate the risk of acute hospital mortality.

The ICNARC (2011) model was developed using data from over 200,000 admissions in the CMP Database (Harrison et al, 2007). Regular recalibration ensures that each critical care unit is being compared with current CMP data; the ICNARC (2011) model is the latest recalibration.


For more information on the ICNARC (2011) model including inclusion and exclusion criteria, please follow the link below:

ICNARC (2011) risk prediction model (external link) 

Managing outliers

An outlier is a result that is statistically significantly further from the expected comparator value than would usually occur by chance alone.

ICNARC have developed guidance to ensure that potential outliers are identified through the processes of national clinical audit. The guidance document sets out the actions that ICNARC takes when data indicate that results for a site significantly deviate from the expected value.

ICNARC's guidance on the detection and management of outliers is based on Department of Health recommendations on the 'Detection and management of outliers' and supersedes ICNARC's previous policy.

For more information on the detection and management of outliers, please follow the link below:

Detection and management of outliers - guidance (external link)

Statistical appendix

Model fit

The ICNARC model was most recently recalibrated in April 2011 using data from 233,441 admissions to 199 NHS, adult general critical care units from January 2008 to December 2010.

For the purpose of the Case Mix Programme (CMP) Annual Quality Report 2011/12, the fit of the model was assessed, based on 125,924 admissions to 203 NHS, adult general critical care units using the following methods:

  • Discrimination was assessed using the c index [1], equivalent to the area under the receiver operating characteristic (ROC) curve [2];
  • Calibration was assessed graphically by dividing the dataset into 20 equal-sized groups based on quantiles of predicted risk. The Hosmer-Lemeshow test for perfect calibration was not used because, in a sample of this size, statistically significant departures from perfect calibration would be expected even for a well-fitting model [3];
  • Overall goodness of fit was assessed with Brier's score [4], representing the mean squared error between outcomes and predictions, and with Shapiro's R [5], representing the geometric mean probability assigned to the event that occurred.
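The graphical calibration check described above can be sketched as follows; the data are simulated (from a deliberately well-calibrated model), not CMP admissions:

```python
import numpy as np

rng = np.random.default_rng(0)
pred = rng.beta(2, 8, size=20_000)  # simulated predicted risks of death
died = rng.random(20_000) < pred    # outcomes drawn from those risks

# Divide the dataset into 20 equal-sized groups by quantile of predicted risk
order = np.argsort(pred)
groups = np.array_split(order, 20)

for g in (groups[0], groups[9], groups[19]):  # lowest, middle, highest risk
    print(f"mean predicted {pred[g].mean():.3f}  observed {died[g].mean():.3f}")
```

Plotting mean predicted against observed mortality for the 20 groups gives the calibration plot; points lying near the diagonal indicate good calibration.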

The c index (area under the ROC curve) was 0.875 (95% confidence interval 0.873 to 0.877). Calibration was qualitatively very good (see below). Brier's score was 0.110 and Shapiro's R was 0.705. These compare favourably with values of the c index, Brier's score and Shapiro's R of 0.870, 0.132 and 0.665 from the original development data [6] and 0.868, 0.126 and 0.673 from previous external validation [7]. The ROC curve and calibration plot are shown below:
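The three summary measures can be computed directly from outcomes and predicted risks. The following is a small sketch on simulated data, not the CMP calculation itself:

```python
import numpy as np

def c_index(y, p):
    """Probability that a randomly chosen death has a higher predicted
    risk than a randomly chosen survivor (ties count half); equals the
    area under the ROC curve."""
    pos, neg = p[y == 1], p[y == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

def brier_score(y, p):
    """Mean squared error between outcomes (0/1) and predicted risks."""
    return float(np.mean((p - y) ** 2))

def shapiro_r(y, p):
    """Geometric mean probability assigned to the event that occurred."""
    return float(np.exp(np.mean(np.log(np.where(y == 1, p, 1 - p)))))

rng = np.random.default_rng(1)
p = rng.beta(2, 8, size=2_000)           # simulated predicted risks
y = (rng.random(2_000) < p).astype(int)  # simulated hospital outcomes
print(c_index(y, p), brier_score(y, p), shapiro_r(y, p))
```

A lower Brier's score and a higher c index and Shapiro's R all indicate better fit.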


Calculation of funnel plots 

Funnel plots were calculated using the methods of Spiegelhalter (2005) [8]. Over-dispersion was adjusted for by estimating a multiplicative over-dispersion factor for each potential quality indicator based on 10% Winsorisation [8,9]. The estimated over-dispersion factors were: Hospital mortality 1.6; Unit-acquired MRSA 1.2; Unplanned readmissions within 48 hours 1.3; Out-of-hours discharges to the ward (not delayed) 2.2; Non-clinical transfers (out) 2.0.
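A sketch of the Winsorised over-dispersion estimate, assuming unit-level z-scores are already in hand; the simulated z-scores below are illustrative, not CMP results:

```python
import numpy as np

def overdispersion_factor(z, winsor_frac=0.10):
    """Estimate a multiplicative over-dispersion factor from unit-level
    z-scores: Winsorise at the 10th and 90th percentiles, then take the
    mean of the squared Winsorised z-scores (Spiegelhalter, 2005)."""
    lo, hi = np.quantile(z, [winsor_frac, 1.0 - winsor_frac])
    zw = np.clip(z, lo, hi)
    return float(np.mean(zw ** 2))

rng = np.random.default_rng(2)
z = rng.normal(0.0, 1.3, size=200)  # SD > 1: more spread than chance alone
phi = overdispersion_factor(z)
print(round(phi, 2))
```

Control limits are then widened by multiplying each unit's standard error by the square root of the estimated factor, so that genuinely extreme units still stand out while ordinary between-unit variation does not.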


References

1. Harrell FE, Califf RM, Pryor DB, et al. Evaluating the yield of medical tests. JAMA 1982; 247:2543-6.

2. Hanley JA, McNeil BJ. The meaning and use of the area under the receiver operating characteristics (ROC) curve. Radiology 1982; 143:29-36.

3. Kramer AA, Zimmerman JE. Assessing the calibration of mortality benchmarks in critical care: the Hosmer-Lemeshow test revisited. Crit Care Med 2007; 35:2052-6.

4. Brier GW. Verification of forecasts expressed in terms of probability. Monthly Weather Review 1950; 75:1-3.

5. Shapiro AR. The evaluation of clinical predictions. N Engl J Med 1977; 296:1509-14.

6. Harrison DA, Parry GJ, Carpenter JR, Short A, Rowan K. A new risk prediction model for critical care: the Intensive Care National Audit & Research Centre (ICNARC) model. Crit Care Med 2007; 35:1091-8.

7. Harrison DA, Rowan KM. Outcome prediction in critical care: the ICNARC model. Curr Opin Crit Care 2008; 14:506-12.

8. Spiegelhalter D. Funnel plots for comparing institutional performance. Stat Med 2005; 24:1185-202.

9. Spiegelhalter D. Handling over-dispersion of performance indicators. Qual Saf Health Care 2005; 14:347-51.