The prediction rule
This large retrospective cohort study has identified four easily measurable clinical variables: serum albumin level ≤ 24.5 g/L, CRP level > 228 mg/L, and the combination of WCC > 12 × 10³/µL with respiratory rate > 17 breaths/min (this combination proved more discriminatory than either value alone). If measured within approximately 48 hours of CDI diagnosis, these variables are capable of predicting the risk of mortality in patients with CDI. This prediction rule has been validated both through an internal split-sample procedure and on an independent cohort, and the variables are robust with respect to the clinical threshold levels identified in other studies. These four variables can accurately assess the risk of mortality in patients with CDI and are not themselves defined by other parameters.
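The rule above can be expressed as a minimal illustrative sketch. The thresholds are taken from the text; the one-point-per-criterion scoring is an assumption consistent with the 0–3 score range reported later for the validation cohort, and the function name and interface are hypothetical.

```python
def cdi_mortality_score(albumin_g_l, crp_mg_l, wcc_10e3_ul, resp_rate):
    """Return a 0-3 risk score from measurements taken around 48 h of CDI diagnosis.

    Illustrative only: assumes one point per criterion met.
    """
    score = 0
    if albumin_g_l <= 24.5:          # serum albumin criterion
        score += 1
    if crp_mg_l > 228:               # C-reactive protein criterion
        score += 1
    # WCC and respiratory rate are scored as a single combined criterion,
    # as together they proved more discriminatory than either value alone
    if wcc_10e3_ul > 12 and resp_rate > 17:
        score += 1
    return score

print(cdi_mortality_score(22.0, 250, 14.5, 20))  # all three criteria met -> 3
print(cdi_mortality_score(30.0, 100, 14.5, 16))  # none met (RR below cut-off) -> 0
```

Note that a raised WCC with a normal respiratory rate (or vice versa) scores no point under this reading, reflecting the combined nature of the third criterion.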
This simple prediction rule is more likely to be of practical use in the clinical setting than previously developed, more complicated prediction rules, which have yet to become part of clinical practice. For example, the study by Bhangu et al. (2010) relied on six variables that could be readily measured, but one of them, severity of disease, is further defined by three additional variables: sepsis, peritonitis and ≥ 10 episodes of diarrhoea in 24 h. The diagnosis of sepsis is in turn defined by the presence of diarrhoea with at least two other parameters, which could include tachycardia (≥ 90 bpm), pyrexia (temperature ≥ 38°C), tachypnoea (≥ 20 breaths per minute) or new-onset hypotension. The addition of all these clinical parameters requires a more complicated and prolonged analysis than can be undertaken within the time constraints of a busy ward round.
The method used in this study allowed a classifier model to be trained on a portion of the data to generate a rule, which was then validated on a separate test data set, providing an unbiased model with greater accuracy. This study is unique in that it has used a decision tree model to evaluate threshold values for significant variables identified by multinomial logistic regression. A recent publication by Adams and Leveson (2012) states that rules generated in this manner are generally easily understood and translatable into everyday clinical practice, but can lose accuracy if too little information is used to generate the rule. However, we do not feel this was the case in this study, given the comprehensive set of variables analysed.
The four variables, serum albumin (g/L) (P = 0.001), CRP (mg/L) (P = 0.020), WCC (× 10³/µL) (P = 0.025) and respiratory rate (breaths/min) (P = 0.003), were identified by univariate and multivariate analysis as significant predictors of all-cause mortality in patients with CDI (Additional file 1). These are all clinical measurements that could be readily obtained at the time of CDI diagnosis and are likely to be taken routinely from hospitalised patients with symptoms suggestive of CDI, which makes this prediction rule very accessible in a clinical setting. Meta-analyses by Bloomfield et al. (2012) and Chakra et al. (2012) also concluded that serum albumin and WCC levels are important mortality risk factors in patients with CDI, whilst the presence of fever, haemoglobin/haematocrit level, diarrhoea severity, presence of renal disease, diabetes, cancer, or nasogastric tube use did not appear to be associated with mortality. This is consistent with the findings of this study, which also examined these variables in relation to mortality and found them not to be significant (data not shown). A recent publication has indicated that serum albumin, WCC and CRP are important prognostic variables for short-term mortality in patients with CDI. Other studies have found that a fall in serum albumin level was consistent with the onset of CDI, as well as being prognostic of mortality from CDI [11, 17, 19], and increased WCCs have also been implicated in other studies [11, 19, 20] as prognostic of mortality in patients with CDI.
Whilst there is no certainty that an increase or decrease in clinical variables such as respiratory rate, WCC, CRP and serum albumin is due to CDI alone, as these patients are usually older and have multiple co-morbidities, it is generally seen that these markers have usually returned to baseline levels before a later rise occurring around the time of C. difficile diagnosis, which could be up to 1–2 weeks after cessation of antibiotic treatment for a previous condition. Thus, an acute rise or decline in these markers around the time of infection diagnosis may generally be attributed to C. difficile infection; a combination of all these variables would therefore prove useful in predicting mortality in patients with CDI, warranting their inclusion in a prediction rule, as supported by others [11, 17–20].
Other studies have suggested elevated urea as a marker of mortality risk; however, urea levels were not evaluated in this study, as emphasis was placed on the percentage rise in creatinine from a baseline reading, as specified by the Department of Health report. Creatinine rise was not found to be a significant predictor of mortality in this study, which might be attributed to the particular emphasis placed on maintaining hydration in the patients on the cohort ward, whereas in other clinical settings patients are not often initially managed by a specialist at CDI diagnosis. The role of urea will be re-evaluated in a prospective study to ensure that it does not add statistical strength to the prediction rule.
The (simplified) prediction rule derived in this study was significant in classifying patients with increased mortality in the derivation cohort (AUC = 0.704; P < 0.001; 95% CI: 0.619–0.790) and was made applicable to patient cohorts from non-specialist environments through its application to a validation cohort from the Bhangu et al. study. It was shown to be consistent in classifying patients with increased mortality risk in the validation cohort (AUC = 0.653; P = 0.001; 95% CI: 0.565–0.741), in line with the prediction rule developed in the original study by Bhangu et al. (Table 4). This clearly shows that the prediction rule was robust when tested on a new data set, even though a key variable was missing, as it remained statistically significant despite the reduced AUC values. It is pertinent to note that scores of zero in both this study and that of Bhangu et al. still appear to be associated with a higher mortality than that reflected in the CURB-65 study by Lim et al. (2003), in which scores of zero represented 0–0.5% mortality. The actual mortality in the Lim et al. (2003) cohort was around 9.5%, whereas the mortality rates for the C. difficile cohorts used in this study were 24% (derivation cohort) and 38% (validation cohort). Clearly, therefore, the base level of mortality for the prediction rule derived in this study will influence how discriminatory the test can be at lower levels of mortality. The Lim et al. (2003) study quotes the following mortality percentages for the CURB-65 score (in the validation cohort): 0 = 0%, 1 = 0%, 2 = 8.3%, 3 = 21.4%, 4 = 26.3%, 5 = 33.3%. Analysis of this result showed that a score of 2 or less carries a lower risk of mortality than the mean (9.34%) and a score above 2 an increased risk. In comparison, the prediction rule scores for this study (in the validation cohort) are as follows: 0 = 20.9%, 1 = 37.1%, 2 = 54.3%, 3 = 66.7%.
The same analysis holds true for this approach, albeit at higher overall mortality rates, with scores of 0 and 1 resulting in a lower than average (38%) risk of mortality and scores of 2 and 3 demonstrating increased risk. Thus, whilst the CURB-65 score is undoubtedly more discriminatory at the lower end of mortality than the approach proposed here, the characteristics of the study data, which have a much higher mean risk of mortality, mean that the proposed rule performs better at the higher end. This may be more helpful to someone first attending a patient presenting with CDI. Finally, it should be noted that although the zero score has an attendant mortality rate significantly higher than 0%, it is also significantly lower than the actual mean mortality in both derivation and validation cohorts (9.5% vs. 24% in the derivation cohort and 20.8% vs. 38% in the validation cohort). Nonetheless, this prediction rule would benefit from further prospective validation in the future.
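The comparison above can be tabulated as a brief illustrative sketch, using only the validation-cohort figures quoted in the text; the variable names are hypothetical and the "above/below cohort mean" banding simply restates the analysis made above.

```python
# Observed mortality by prediction-rule score in the validation cohort,
# as quoted in the text (percentages).
validation_mortality = {0: 20.9, 1: 37.1, 2: 54.3, 3: 66.7}
mean_mortality = 38.0  # validation cohort mean mortality quoted in the text

for score, pct in validation_mortality.items():
    band = "above" if pct > mean_mortality else "below"
    print(f"score {score}: {pct}% mortality ({band} cohort mean)")
```

Running this reproduces the banding described above: scores of 0 and 1 fall below the 38% cohort mean, while scores of 2 and 3 fall above it.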