Applied Mathematics
Vol.09 No.08(2018), Article ID:86931,21 pages
10.4236/am.2018.98065

Methodology for Constructing a Short-Term Event Risk Score in Heart Failure Patients

Kévin Duarte1,2*, Jean-Marie Monnez1,2,3, Eliane Albuisson4,5,6

1CNRS, INRIA, Institut Elie Cartan de Lorraine, Université de Lorraine, Nancy, France

2CHRU Nancy, INSERM, Université de Lorraine, CIC, Plurithématique, Nancy, France

3IUT Nancy-Charlemagne, Université de Lorraine, Nancy, France

4Institut Elie Cartan de Lorraine, Université de Lorraine, CNRS, Nancy, France

5CHRU Nancy, BIOBASE, Pôle S2R, Université de Lorraine, Nancy, France

6Faculté de Médecine, InSciDenS, Université de Lorraine, Nancy, France

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: May 2, 2018; Accepted: August 26, 2018; Published: August 29, 2018

ABSTRACT

We present a methodology for constructing a short-term event risk score from an ensemble predictor, using bootstrap samples, two different classification rules (logistic regression and linear discriminant analysis for mixed data, continuous or categorical) and random selections of variables in the construction of the predictors. We establish a property of linear discriminant analysis for mixed data and define an event risk measure by an odds-ratio. This methodology is applied to heart failure patients on whom biological, clinical and medical history variables were measured, and the results obtained from our data are detailed.

Keywords:

Ensemble Predictor, Linear Discriminant Analysis, Logistic Regression, Mixed Data, Scoring, Supervised Classification

1. Introduction

In this study, we focus on the problem of constructing a short-term event risk score in heart failure patients based on observations of biological, clinical and medical history variables.

Firstly, we present how we defined the learning sample using the available data and the list of explanatory variables used.

Secondly, we state a property of linear discriminant analysis (LDA) for mixed data, continuous or categorical.

Thirdly, we describe the methodology used, based on constructing an ensemble predictor using two different classification rules, logistic regression and LDA, bootstrap samples, and random selections of variables in the construction of the predictors. After presenting the method used to build a risk score and to reduce its variation scale to the range 0 to 100, we define a measure of the importance of variables or groups of correlated variables in the score, and a measure of the event risk by an odds-ratio.

Finally, we describe the results obtained by applying our methodology to our data, using the AUC Out-Of-Bag (OOB) as a measure of predictor accuracy.

2. Data

The database at our disposal was EPHESUS, a clinical trial that included 6632 patients with heart failure (HF) after acute myocardial infarction (MI) complicated by left ventricular systolic dysfunction (left ventricular ejection fraction < 40%) [1]. All patients were randomly assigned to treatment with eplerenone 25 mg/day or placebo.

In this trial, each patient was regularly monitored, with visits at inclusion in the study (baseline), 1 month after inclusion, 3 months after that, then every 3 months until the end of follow-up. At each visit, biological and clinical parameters and medical history data were recorded. In addition, all adverse events (deaths, hospitalizations, diseases) that occurred during follow-up were collected.

To define the learning sample used to construct the short-term event risk score, we made the following working hypothesis: based on biological and clinical measurements and medical history of a patient at a fixed time, we sought to assess the risk that this patient experiences a short-term HF event. The statistical units considered are pairs (patient, month), without taking into account the link between several pairs concerning the same patient. It was therefore assumed that the short-term future of a patient depends only on his current measurements.

Firstly, we did a full review of the database in order to:

• identify the biological and clinical variables that were regularly measured at each visit,

• determine the medical history data that we could update from information collected during the follow-up.

We were thus able to define a set of 27 explanatory variables, whose list is presented in Figure 1. The biological parameter ePVS (estimated plasma volume) was defined in [2]. The estimated glomerular filtration rate (eGFR) was assessed using three formulas [3] [4] [5]. The different types of hospitalization were defined in the supplementary material of [1].

Then, we defined the response variable as the occurrence of a composite short-term HF event (death or hospitalization for progression of HF). In order to have enough events, we set the short term at 30 days. Patient-months with a follow-up of less than 30 days and no short-term HF event during this incomplete follow-up period were not taken into account.

Figure 1. List of variables.

There were finally 21,382 patient-months from 5937 different patients, of which 317 with a short-term HF event and 21,065 with no short-term event.

3. Property of Linear Discriminant Analysis of Mixed Data

Denote $A'$ the transpose of a matrix $A$.

In the case of mixed data, categorical and continuous, a classical way to perform a discriminant analysis is to:

1) perform a preliminary factorial analysis suited to the nature of the data, such as multiple correspondence factorial analysis (MCFA) [6] for categorical data, multiple factorial analysis (MFA) [7] for groups of variables, or mixed data factorial analysis (MDFA) [8];

2) after defining a convenient distance, perform a discriminant analysis on the set of values of the principal components, or factors.

See for example the DISQUAL (DIScrimination on QUALitative variables) method of Saporta [9], which performs MCFA, then LDA or QDA.

Denote as usual $T$ the total inertia matrix of a dataset partitioned into classes, and $W$ and $B$ respectively its intraclass and interclass inertia matrices.

We show hereafter that, when performing LDA with the metric $T^{-1}$ or $W^{-1}$, it is not necessary to perform a preliminary factorial analysis: LDA can be performed directly on the raw mixed data.

The metric $W^{-1}$ will be used in the following, but it can be replaced by $T^{-1}$.

Let $I = \{1, 2, \ldots, n\}$ be a set of $n$ individuals, partitioned into $q$ disjoint classes $I_1, \ldots, I_q$. Denote $n_k = \mathrm{card}(I_k)$, $p_k^i$ the weight of the $i$th individual of class $I_k$ ($i = 1, \ldots, n_k$; $k = 1, \ldots, q$), and $P_k = \sum_{i=1}^{n_k} p_k^i$ the weight of $I_k$, with $\sum_{k=1}^{q} P_k = 1$. On these individuals, $p$ quantitative variables or indicators of modalities of categorical variables, denoted $x^1, \ldots, x^p$, are observed. Suppose that there exists no affine relation between these variables; in particular, for each categorical variable one indicator is removed.

For $j = 1, \ldots, p$, denote $x_{ki}^j$ the value of $x^j$ for the $i$th individual of class $I_k$. Denote $x_{ki}$ the vector $(x_{ki}^1 \; \cdots \; x_{ki}^p)'$ and $g_k$ the barycenter of the elements $x_{ki}$ for $i \in I_k$:

$$g_k = \frac{1}{P_k} \sum_{i \in I_k} p_k^i \, x_{ki}.$$

The intraclass inertia $(p,p)$ matrix $W$ is supposed invertible:

$$W = \sum_{k=1}^{q} \sum_{i=1}^{n_k} p_k^i \, (x_{ki} - g_k)(x_{ki} - g_k)'.$$

A distance commonly used in LDA, $d_{W^{-1}}(a, b)$ between two points $a$ and $b$ in $\mathbb{R}^p$, is such that:

$$d_{W^{-1}}^2(a, b) = (a - b)' \, W^{-1} (a - b).$$

Suppose we want to classify an individual knowing the vector $a$ of values of $x^1, \ldots, x^p$. The principle of LDA is to classify it in the class $I_k$ for which $d_{W^{-1}}^2(a, g_k)$ is minimal.

Consider now new variables $y^1, \ldots, y^m$, affine combinations of $x^1, \ldots, x^p$, with $m \ge p$, such that:

$$y_{ki} = A x_{ki} + \beta,$$

with $y_{ki} = (y_{ki}^1 \; \cdots \; y_{ki}^m)'$, $A$ an $(m,p)$ matrix of rank $p$ and $\beta$ a vector in $\mathbb{R}^m$.

Denote $h_k$ the barycenter of the vectors $y_{ki}$ in $\mathbb{R}^m$ for $i \in I_k$:

$$h_k = \frac{1}{P_k} \sum_{i \in I_k} p_k^i \, y_{ki} = \frac{1}{P_k} \sum_{i \in I_k} p_k^i \, (A x_{ki} + \beta) = A g_k + \beta,$$

$$y_{ki} - h_k = A (x_{ki} - g_k).$$

Let $Z$ be the intraclass inertia $(m,m)$ matrix of $\{y_{ki},\; i = 1, \ldots, n_k;\; k = 1, \ldots, q\}$:

$$Z = \sum_{k=1}^{q} \sum_{i \in I_k} p_k^i \, (y_{ki} - h_k)(y_{ki} - h_k)' = A W A'.$$

The rank of $Z$ is equal to the rank of $A$, that is $p \le m$. For $m > p$, the $(m,m)$ matrix $Z$ is not invertible. In this case, use the pseudoinverse (or Moore-Penrose inverse) of $Z$, denoted $Z^+$, which is equal to the inverse of $Z$ when $m = p$, to define the pseudodistance denoted $d_{Z^+}$ in $\mathbb{R}^m$. The denomination pseudodistance is used because $Z^+$ is not positive definite. Recall the definition of a pseudoinverse and two theorems [10].

Definition. Let $A$ be a $(k,l)$ matrix of rank $r$. The pseudoinverse of $A$ is the unique $(l,k)$ matrix $A^+$ such that:

1) $A A^+ A = A$,

2) $A^+ A A^+ = A^+$,

3) $(A A^+)' = A A^+$,

4) $(A^+ A)' = A^+ A$.

Theorem 1: Maximal rank decomposition

Let $A$ be a $(k,l)$ matrix of rank $r$. Then there exist two matrices of full rank $r$, $F$ of dimension $(k,r)$ and $G$ of dimension $(r,l)$ ($\mathrm{rg}(F) = \mathrm{rg}(G) = r$), such that $A = FG$.

Theorem 2: Expression of $A^+$

Let $A = FG$ be a full-rank decomposition of $A$. Then $A^+ = G'(F'AG')^{-1}F'$.

We now prove:

Proposition 1: $d_{Z^+}^2(Aa + \beta, Ab + \beta) = d_{W^{-1}}^2(a, b)$.

Proof. $Z = (AW)A'$. $AW$ and $A'$ are of full rank $p$. Applying Theorem 2 yields:

$$Z^+ = A\big((AW)'AWA'A\big)^{-1}(AW)' = A(A'A)^{-1}(WA'AW)^{-1}WA' = A(A'A)^{-1}W^{-1}(A'A)^{-1}A',$$

hence

$$A' Z^+ A = W^{-1}.$$

Note that, when $m = p$, $A$ is invertible and $Z^+ = (AWA')^{-1} = Z^{-1}$.

$$d_{Z^+}^2(Aa + \beta, Ab + \beta) = \big(A(a - b)\big)' \, Z^+ \big(A(a - b)\big) = (a - b)' \, W^{-1} (a - b).$$

Thus:

Proposition 2: Let $A$ be an $(m,p)$ matrix, $m > p$, of rank $p$, and for $k = 1, \ldots, q$, $i = 1, \ldots, n_k$, $y_{ki} = A x_{ki} + \beta$. The results of the LDA of the dataset $\{x_{ki},\; k = 1, \ldots, q,\; i = 1, \ldots, n_k\}$ with the metric $W^{-1}$ on $\mathbb{R}^p$ are the same as those of the LDA of the dataset $\{y_{ki},\; k = 1, \ldots, q,\; i = 1, \ldots, n_k\}$ with the pseudometric $Z^+ = (AWA')^+$.
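As a quick numerical illustration of Proposition 1 and of the identity $A'Z^+A = W^{-1}$ used in its proof, the following minimal sketch (assuming Python with NumPy) generates a random positive definite matrix $W$ and a random full-column-rank map $A$, then checks both equalities:

```python
import numpy as np

rng = np.random.default_rng(0)
p, m = 4, 7  # m > p

# A symmetric positive definite "intraclass inertia" matrix W.
M = rng.normal(size=(p, p))
W = M @ M.T + p * np.eye(p)

# An affine map y = A x + beta with A of full column rank p.
A = rng.normal(size=(m, p))
beta = rng.normal(size=m)

Z = A @ W @ A.T               # intraclass inertia of the transformed data
Z_plus = np.linalg.pinv(Z)    # Moore-Penrose pseudoinverse

a, b = rng.normal(size=p), rng.normal(size=p)
d_x = (a - b) @ np.linalg.inv(W) @ (a - b)      # d^2_{W^{-1}}(a, b)
d_y = (A @ (a - b)) @ Z_plus @ (A @ (a - b))    # d^2_{Z^+}(Aa+beta, Ab+beta)

assert np.isclose(d_x, d_y)                               # Proposition 1
assert np.allclose(A.T @ Z_plus @ A, np.linalg.inv(W))    # A' Z^+ A = W^{-1}
```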

Applications

Denote $x_i^j$ the value of the variable $x^j$ for individual $i$ belonging to $I$, $i = 1, \ldots, n$, $j = 1, \ldots, p$, and $x_i = (x_i^1 \; \cdots \; x_i^p)'$ the vector of values of $(x^1, \ldots, x^p)$ for individual $i$. Denote $p^i$ the weight of individual $i$, such that $\sum_{i=1}^{n} p^i = 1$. To perform a factorial analysis of the dataset $\{x_i, i = 1, \ldots, n\}$, the difference between two individuals $i$ and $i'$ is measured by a distance $d(i, i')$ defined on $\mathbb{R}^p$, associated to a metric $M$, such that:

$$d^2(i, i') = (x_i - x_{i'})' \, M (x_i - x_{i'}).$$

Denote $X$ the $(n,p)$ matrix whose element $(i,j)$ is $x_i^j$, and $D$ the diagonal $(n,n)$ matrix whose element $(i,i)$ is $p^i$.

Perform a factorial analysis of $(X, M, D)$, for instance PCA for continuous variables, MCFA for categorical variables or MDFA for mixed data. Suppose $X$ of rank $p$. Denote $u_j = (u_j^1 \; \cdots \; u_j^p)'$ a unit vector of the $j$th principal axis, $c^j = XMu_j = (c_1^j \; \cdots \; c_n^j)'$ the $j$th principal component, $U$ the $(p,p)$ matrix $(u_1 \; \cdots \; u_p)$ and $C$ the $(n,p)$ matrix $(c^1 \; \cdots \; c^p) = XMU$. As $u_1, \ldots, u_p$ are $M$-orthonormal, $U'MU = I$ and:

$$C = XMU, \quad X = CU', \quad x_i = U c_i \; (i = 1, \ldots, n),$$

$$c_i = U'M x_i \; (i = 1, \ldots, n).$$

Using the metric defined by the inverse of the intraclass inertia matrix, LDA from $C$ is equivalent to LDA from $X$.

Suppose now that the variable $x^{p+1} = 1 - x^p$ is introduced; when $x^p$ is the indicator of a modality of a binary variable, $x^{p+1}$ is the indicator of the other modality. Then:

$$\begin{pmatrix} x_i^1 \\ \vdots \\ x_i^p \\ x_i^{p+1} \end{pmatrix} = \begin{pmatrix} u_1^1 & \cdots & u_p^1 \\ \vdots & & \vdots \\ u_1^p & \cdots & u_p^p \\ -u_1^p & \cdots & -u_p^p \end{pmatrix} \begin{pmatrix} c_i^1 \\ \vdots \\ c_i^p \end{pmatrix} + \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}.$$

Denote $X_1$ the $(n, p+1)$ matrix whose element $(i,j)$ is $x_i^j$. LDA from $C$ with the metric of the inverse of the intraclass inertia matrix is equivalent to LDA from $X_1$ with the metric of the pseudoinverse of the intraclass inertia matrix.

For instance:

1) If $x^1, \ldots, x^p$ are continuous variables, LDA from $X$ is equivalent to LDA from $C$ obtained by PCA, such as normed PCA, or by gCCA [11] and MFA, which can be interpreted as PCA with specific metrics.

2) If $x^1, \ldots, x^p$ are indicators of modalities of categorical variables, and if MCFA is performed to obtain $C$, LDA from $C$ with the metric of the inverse of the intraclass inertia matrix is equivalent to LDA from $X$ with the metric of the pseudoinverse of the intraclass inertia matrix.

3) Likewise, if $x^1, \ldots, x^p$ are continuous variables or indicators of modalities of categorical variables, and if MDFA [8] is performed to obtain $C$, LDA from $C$ with the metric of the inverse of the intraclass inertia matrix is equivalent to LDA from $X$ with the metric of the pseudoinverse of the intraclass inertia matrix. In this case, other metrics can also be used, such as that of Friedman [12] or that of Gower [13].

4. Methodology for Constructing a Score

4.1. Ensemble Methods

Consider the problem of predicting an outcome variable $y$, continuous (in the case of regression) or categorical (in the case of classification), from observable explanatory variables $x^1, \ldots, x^p$, continuous or categorical.

The principle of an ensemble method [14] [15] is to build a collection of N predictors and then aggregate the N predictions obtained using:

• in regression: the average of the predictions $\hat{y}_i$;

• in classification: the majority vote rule or the average of the estimated a posteriori class probabilities.

The ensemble predictor is expected to be better than each of the individual predictors. For this purpose [14] :

• each single predictor must be relatively good,

• single predictors must be sufficiently different from each other.

To build a set of predictors, we can:

• use different classifiers,

• and/or use different samples (e.g. by bootstrapping, boosting, randomizing outputs) [15] [16] [17] ,

• and/or use different methods of variable selection (e.g. forward, stepwise, shrinkage, random) [18] [19] [20] [21],

• and/or, in general, introduce randomness into the construction of predictors (e.g. in random forests [22], randomly select a fixed number of variables at each node of a classification or regression tree).

In the Random Generalized Linear Model (RGLM) [23], at each iteration:

• a bootstrap sample is drawn,

• a fixed number of variables is randomly selected,

• the selected variables are ranked according to their individual association with the outcome variable $y$ and only the top-ranking variables are retained,

• a forward selection of variables is made using the AIC [24] or BIC [25] criterion.

Tufféry [26] wrote that logistic models built from bootstrap samples are too similar for their aggregation to really differ from the base model built on the entire sample. This is in agreement with an assertion by Genuer and Poggi [14]. However, Tufféry suggests the use of a method called “random forest of logistic models”, which introduces an additional randomness: at each iteration,

• a bootstrap sample is drawn,

• variables are randomly selected,

• a forward selection of variables is performed using the AIC [24] or BIC [25] criterion.

Note that this method is in fact a particular case of the RGLM method.

We now present the method used in this study to check the stability of the predictor obtained on the entire learning sample.

4.2. Method of Construction of an Ensemble Predictor

The steps of the method for constructing an ensemble predictor are presented in the form of a tree (Figure 2).

In the first step, $n_1$ classifiers are chosen.

In the second step, $n_2$ bootstrap samples are drawn; they are the same for each classifier.

In the third step, for each classifier and each bootstrap sample, $n_3$ modalities of random selection of variables are chosen, a modality being defined either by a number of randomly drawn variables, or by a number of predefined groups of correlated variables which are randomly drawn and inside each of which one variable is randomly drawn.

In the fourth step, for each classifier, each bootstrap sample and each modality of random selection of variables, one method of variable selection is chosen, stepwise or shrinkage (LASSO, ridge or elastic net).

This yields a set of $n_1 \times n_2 \times n_3$ predictors, which are aggregated to obtain an ensemble predictor; a schematic sketch is given after Figure 2.

Figure 2. General methodology for the construction of a score.
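The sketch below outlines the three nested loops of Figure 2, assuming scikit-learn's LogisticRegression and LinearDiscriminantAnalysis as stand-ins for the two classification rules (scikit-learn's LDA does not implement exactly the metric $W^{-1}$ on mixed data of Section 3), and hypothetical callables `draw_vars` implementing the modalities of random variable selection:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def build_ensemble(X, y, classifiers, modalities, n_boot=1000, seed=0):
    """Build the n1 x n2 x n3 collection of predictors of Figure 2."""
    rng = np.random.default_rng(seed)
    n = len(y)
    # Step 2: the same n2 bootstrap samples are used for every classifier.
    boot = [rng.integers(0, n, size=n) for _ in range(n_boot)]
    ensemble = []
    for make_clf in classifiers:            # step 1: n1 classification rules
        for idx in boot:                    # step 2: n2 bootstrap samples
            for draw_vars in modalities:    # step 3: n3 selection modalities
                cols = draw_vars(rng)       # e.g. m random columns out of 32
                clf = make_clf().fit(X[np.ix_(idx, cols)], y[idx])
                # Keep in-bag indices to compute OOB predictions later.
                ensemble.append((clf, cols, np.unique(idx)))
    return ensemble

# Hypothetical parameters in the spirit of Section 4.3:
classifiers = [lambda: LogisticRegression(max_iter=1000),
               lambda: LinearDiscriminantAnalysis()]
modalities = [lambda rng: rng.choice(32, size=10, replace=False)]
```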

4.3. Choices Made

To assess the accuracy of the ensemble predictor, the percentage of correctly classified observations is commonly used. But this criterion is not always suitable, especially in the present case of unbalanced classes. We decided to use the area under the ROC curve (AUC). As the AUC in resubstitution is usually too optimistic, we used the AUC “out-of-bag” (OOB) [27]: for each patient, consider the set of predictors built on the bootstrap samples that do not contain this patient, i.e. for which this patient is “out-of-bag”, then aggregate the corresponding predictions to obtain an OOB prediction, as sketched below.
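A minimal sketch of this OOB aggregation, under the same assumptions as the previous block (each element of the ensemble stores its fitted classifier, its column subset and its in-bag indices):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_oob(ensemble, X, y):
    """AUC OOB: for each patient, aggregate only the predictors whose
    bootstrap sample does not contain this patient."""
    n = len(y)
    total = np.zeros(n)
    count = np.zeros(n)
    for clf, cols, in_bag in ensemble:
        oob = np.ones(n, dtype=bool)
        oob[in_bag] = False                  # exclude in-bag patients
        rows = np.flatnonzero(oob)
        if rows.size:
            total[rows] += clf.decision_function(X[np.ix_(rows, cols)])
            count[rows] += 1
    kept = count > 0                         # patients with >= 1 OOB prediction
    return roc_auc_score(y[kept], total[kept] / count[kept])
```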

Two classifiers were used: logistic regression and linear discriminant analysis (LDA) with the metric $W^{-1}$. Other classifiers, such as random forest-random input (RF-RI) [22] or quadratic discriminant analysis (QDA), were tested but not retained because of their poorer results. The k-nearest neighbors method (k-NN) was not tested, because it is not suited to this study given the very unbalanced classes and the small size of the event class.

1000 bootstrap samples were randomly drawn.

Three modalities of random selection were retained: first, a random draw of a fixed number of variables; second and third, a random draw of a fixed number of predefined groups of correlated variables, followed by a random draw of one variable inside each drawn group. The number of variables or of groups drawn was determined by maximizing the AUC OOB.

The fourth step did not improve prediction accuracy and was not retained.

4.4. Construction of an Ensemble Score

4.4.1. Aggregation of Predictors

In the case of two classes $\Omega_1$ and $\Omega_0$, whose barycenters are respectively denoted $g_1$ and $g_0$, the Fisher linear discriminant function

$$S_1(x) = \left(x - \frac{g_1 + g_0}{2}\right)' W^{-1} (g_1 - g_0) = \alpha_1' x + \beta_1$$

can be used as score function. For logistic regression, the following score function can be used:

$$S_2(x) = \ln \frac{P(\Omega_1 \mid X = x)}{P(\Omega_0 \mid X = x)} = \alpha_2' x + \beta_2.$$

Recall that, in the case of a multinormal model with homoscedasticity (equal within-class covariance matrices), when $P(\Omega_1) = P(\Omega_0)$, the logistic model is equivalent to LDA [15]; indeed:

$$S_2(x) = \ln \frac{P(\Omega_1 \mid X = x)}{P(\Omega_0 \mid X = x)} = \ln \frac{P(\Omega_1)}{P(\Omega_0)} + S_1(x) = S_1(x).$$

So we used the following method to aggregate the obtained predictors:

1) the score functions obtained by LDA are aggregated by averaging; denote now $S_1$ the averaged score;

2) likewise, the score functions obtained by logistic regression are aggregated by averaging; denote $S_2$ the averaged score;

3) a combination of the two scores, $\lambda S_1 + (1 - \lambda) S_2$, is defined, with $0 \le \lambda \le 1$; a value of $\lambda$ that maximizes the AUC OOB is retained; denote $S_0$ the optimal score obtained by this method (a sketch of this grid search is given below).

If $s$ is an optimal cut-off, the ensemble classifier is defined by: if $S_0(x) > s$, $x$ is classified in $\Omega_1$; otherwise, $x$ is classified in $\Omega_0$.
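Step 3 above can be sketched as a simple grid search, assuming the per-patient OOB scores $S_1$ and $S_2$ have already been computed (and are on comparable scales):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def combine_scores(s1, s2, y, n_grid=101):
    """Grid search for lambda in [0, 1] maximizing the AUC OOB of
    lambda * S1 + (1 - lambda) * S2."""
    best = max((roc_auc_score(y, lam * s1 + (1 - lam) * s2), lam)
               for lam in np.linspace(0.0, 1.0, n_grid))
    return best[1], best[0]   # (optimal lambda, corresponding AUC OOB)
```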

4.4.2. Definition of a Score from 0 to 100

The variation scale of the score function $S_0(x)$ was reduced to the range 0 to 100 using the following method. Denote:

$$S_0(x) = \alpha_0' x + \beta_0 = \sum_{j=1}^{p} \alpha_0^j x^j + \beta_0.$$

Denote, for $j = 1, \ldots, p$:

$$P^j = |\alpha_0^j| \left(\max_{1 \le i \le n} x_i^j - \min_{1 \le i \le n} x_i^j\right)$$

and

$$P = \sum_{j=1}^{p} P^j = \sum_{j=1}^{p} |\alpha_0^j| \left(\max_{1 \le i \le n} x_i^j - \min_{1 \le i \le n} x_i^j\right).$$

Let $m^j$ be the minimal value of the variable $x^j$ if $\alpha_0^j > 0$, or its maximal value if $\alpha_0^j < 0$.

Denote $S(x)$ the “normalized” score function, with values from 0 to 100, defined by:

$$S(x) = \frac{100}{P} \sum_{j=1}^{p} \alpha_0^j (x^j - m^j) = \frac{100 \sum_{j=1}^{p} \alpha_0^j (x^j - m^j)}{\sum_{k=1}^{p} |\alpha_0^k| \left(\max_{1 \le i \le n} x_i^k - \min_{1 \le i \le n} x_i^k\right)} = \alpha' x + \beta,$$

with

$$\beta = -\frac{100}{P} \sum_{j=1}^{p} \alpha_0^j m^j, \qquad \alpha^j = \frac{100 \, \alpha_0^j}{P}, \quad j = 1, \ldots, p.$$
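A minimal sketch of this rescaling, assuming the coefficient vector $\alpha_0$ and the data matrix $X$ as NumPy arrays (the intercept $\beta_0$ is not needed, since the rescaled score depends only on $\alpha_0$ and the sample ranges):

```python
import numpy as np

def normalize_score(alpha0, X):
    """Rescale S0(x) = alpha0' x + beta0 to a score ranging from 0 to 100
    over the sample X, following Section 4.4.2."""
    span = np.ptp(X, axis=0)                 # max_i x_i^j - min_i x_i^j
    P = np.sum(np.abs(alpha0) * span)
    # m^j: minimum of x^j if alpha0^j > 0, maximum otherwise.
    m = np.where(alpha0 > 0, X.min(axis=0), X.max(axis=0))
    alpha = 100.0 * alpha0 / P
    beta = -100.0 * np.sum(alpha0 * m) / P
    return alpha, beta                       # S(x) = alpha' x + beta
```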

4.4.3. Measure of Variables Importance

Explanatory variables are not expressed in the same unit. To assess their importance in the score, we used “standardized” coefficients, obtained by multiplying the coefficient of each variable in the score by the standard deviation of that variable. These coefficients are those associated with the standardized variables and are directly comparable. For all variables, the absolute values of the standardized coefficients, from the greatest to the lowest, were plotted on a graph. The same type of plot was used for groups of correlated variables, whose importance is assessed by the sum of the absolute values of the standardized coefficients of their variables.
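A sketch of these importance measures, assuming the coefficient vector and the data matrix as NumPy arrays, and a hypothetical mapping from group names to column indices:

```python
import numpy as np

def standardized_coefficients(alpha, X):
    """Importance of each variable: |coefficient| x standard deviation."""
    return np.abs(alpha) * X.std(axis=0)

def group_importance(alpha, X, groups):
    """Importance of a group: sum of the |standardized coefficients| of its
    variables; `groups` maps a group name to a list of column indices."""
    std_coef = standardized_coefficients(alpha, X)
    return {name: std_coef[cols].sum() for name, cols in groups.items()}
```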

4.4.4. Risk Measure by an Odds-Ratio

Define a risk measure associated to a score value $s$ by an odds-ratio $OR_1(s)$:

$$OR_1(s) = \frac{P(Y=1 \mid S > s)}{P(Y=0 \mid S > s)} \cdot \frac{P(Y=0)}{P(Y=1)} = \frac{P(S > s \mid Y=1)}{P(S > s \mid Y=0)} = \frac{Se(s)}{1 - Sp(s)}.$$

An estimation of $OR_1(s)$, also denoted $OR_1(s)$, is $\frac{n_1}{n_0} \times \frac{N_0}{N_1}$, with $n_k = \#\big(\{S > s\} \cap \{Y = k\}\big)$ and $N_k = \#\{Y = k\}$, $k = 0, 1$.

Note that:

$OR_1(s)$ decreases when $Se(s)$ decreases while $Sp(s)$ remains constant; in practice, this decrease will be much smaller when there are many observations;

$OR_1(s)$ is not defined when $Sp(s)$ is equal to 1.

For these reasons, the following definition can also be used:

$$OR_2(s) = \max_{t \le s \,:\, OR_1(t) < \infty} OR_1(t).$$

Note that $OR_1$ is the slope $y/x$ of the line joining the origin to the point $(x, y) = (1 - Sp, Se)$ of the ROC curve. In the case of an “ideal” ROC curve, supposed continuous and above the diagonal line, and assuming that there is no vertical segment in the curve, this slope increases from the point $(1,1)$, corresponding to the minimal value of the score, to the point $(0,0)$, corresponding to its maximal value. The case of a vertical segment ($Se$ decreases while $Sp$ is constant), which occurs when the score of a patient with event lies between those of two patients without event, is particularly visible with a small number of patients; it also justifies the definition of $OR_2$, whose curve fits that of $OR_1$.

For very high score values, when $n_0$ or $n_1$ is too small, the estimation of $OR_1$ is no longer reliable. A reliability interval of the score could be defined, depending on the values of $n_0$ and $n_1$. A sketch of the estimation of $OR_1$ and $OR_2$ is given below.
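A sketch of the estimation of $OR_1$ and $OR_2$ on a grid of thresholds (sorted in increasing order), following the definitions above:

```python
import numpy as np

def odds_ratios(scores, y, thresholds):
    """Estimate OR1(s) = (n1 / n0) * (N0 / N1) on an increasing grid of
    thresholds, and OR2(s), the running maximum of the finite values of OR1."""
    N1, N0 = np.sum(y == 1), np.sum(y == 0)
    or1 = []
    for s in thresholds:
        above = scores > s
        n1 = np.sum(above & (y == 1))
        n0 = np.sum(above & (y == 0))
        or1.append((n1 / n0) * (N0 / N1) if n0 > 0 else np.inf)  # Sp(s) = 1
    or1 = np.array(or1)
    or2 = np.maximum.accumulate(np.where(np.isfinite(or1), or1, -np.inf))
    return or1, or2
```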

5. Results

5.1. Pre-Processing of Variables

5.1.1. Winsorization

To avoid problems related to the presence of outliers or extreme data, all continuous variables were winsorized, using the 1st and the 99th percentile of each variable as limit values [28]. We chose this solution because of the large imbalance between the classes (317 patient-months with event against 21,065 without, a ratio of about 1 to 66): eliminating the extreme data would have decreased the number of patient-months with event. A minimal sketch is given below.
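A minimal sketch of this winsorization, assuming NumPy:

```python
import numpy as np

def winsorize(x, lower_pct=1.0, upper_pct=99.0):
    """Clip a continuous variable at its 1st and 99th percentiles [28]."""
    lo, hi = np.percentile(x, [lower_pct, upper_pct])
    return np.clip(x, lo, hi)
```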

5.1.2. Transformation of Variables

Among the qualitative variables, two are ordinal: the NYHA class, with 4 modalities, and the number of myocardial infarctions (no. MI), with 5 modalities. In order to preserve the ordinal nature of these variables, we chose an ordinal encoding. For NYHA, we therefore created 3 binary variables: NYHA ≥ 2, NYHA ≥ 3 and NYHA ≥ 4. In the same way, for the no. MI, we considered 4 binary variables: no. MI ≥ 2, no. MI ≥ 3, no. MI ≥ 4 and no. MI ≥ 5 (see the sketch below).
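A minimal sketch of this ordinal encoding, with the level cut-offs passed as a list:

```python
import numpy as np

def ordinal_encode(x, levels):
    """Encode an ordinal variable as binary indicators x >= level; e.g.
    ordinal_encode(nyha, [2, 3, 4]) gives NYHA>=2, NYHA>=3, NYHA>=4."""
    x = np.asarray(x)
    return np.column_stack([(x >= level).astype(int) for level in levels])
```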

Continuous variables, on the other hand, were transformed in the context of logistic regression. For each continuous variable, a linearity test was performed using the method of restricted cubic splines with 3 knots [29]. A restricted cubic spline with 3 knots is composed of a linear component and a cubic component. The linearity test consists in testing, under the univariable logistic model, the nullity of the coefficient associated with the cubic component; to do this, we used the likelihood ratio test. The results of the linearity tests are given in Table 1 (p-value 1).

At the 5% level, linearity was rejected for 9 of the 16 continuous variables. For each of these 9 variables, we represented graphically the relationship between the logit (logarithm of the odds of event) and the variable. An example of graphical representation is given for potassium: we observe a quadratic relationship between the logit and potassium (Figure 3). In agreement with the relationship observed, we applied a simple transformation function, monotone or quadratic, to each of the 9 variables. The transformation function applied to each variable is given in Table 1.

For hematocrit and the three eGFR variables, the relationship is clearly monotone. So we considered simple monotone transformation functions such as $f(x) = x^a$ with $a \in \{-2, -1, -0.5, 0.5, 1, 2\}$ or $f(x) = \log(x)$, then we

Table 1. Linearity tests and transformation of continuous variables.

Figure 3. Relationship between potassium and logit of probability of event.

retained for each variable the transformation for which the likelihood under the univariable logistic model was maximal (minimal p-value).

For the other variables not satisfying linearity, namely potassium, the three blood pressure measures (systolic, diastolic and mean) and heart rate, the relationship between the logit and the variable was rather quadratic. We therefore applied a quadratic transformation function $(X - k^*)^2$, with $k^*$ an optimal value determined by maximizing the likelihood under the univariable logistic model. For comparison, we also used the criterion of maximal AUC to determine an optimal value. These results are presented in Table 2. Notice that the optimal values determined by the two methods are the same for systolic BP, diastolic BP and heart rate, and very close for potassium and mean BP.

Also note that the transformation applied to potassium makes it possible to take into account both hypokalemia and hyperkalemia, two different clinical situations, pooled here, that may increase the risk of death and/or hospitalization measured by the score.

To verify that the transformations were adequate, a linearity test on each transformed variable was performed according to the principle detailed previously. None of these tests is significant at the 5% level (see Table 1, p-value 2). A sketch of the transformation search is given below.
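The transformation search can be sketched as follows, assuming statsmodels for the univariable logistic fits (the restricted-cubic-spline linearity test itself is omitted); `grid` is a hypothetical grid of candidate values for $k^*$:

```python
import numpy as np
import statsmodels.api as sm

def loglik(x, y):
    """Log-likelihood of the univariable logistic model of y on x."""
    return sm.Logit(y, sm.add_constant(x)).fit(disp=0).llf

def best_monotone_transform(x, y):
    """Among f(x) = x^a, a in {-2, -1, -0.5, 0.5, 1, 2}, and log(x)
    (x > 0 assumed), retain the transform with maximal likelihood."""
    candidates = {f"x^{a}": x ** a for a in (-2, -1, -0.5, 0.5, 1, 2)}
    candidates["log(x)"] = np.log(x)
    return max(candidates, key=lambda name: loglik(candidates[name], y))

def best_quadratic_shift(x, y, grid):
    """Quadratic transform (x - k)^2: retain the k on `grid` with maximal
    likelihood under the univariable logistic model."""
    return max(grid, key=lambda k: loglik((x - k) ** 2, y))
```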

5.2. Ensemble Score

5.2.1. Ensemble Score by Logistic Regression

As a first step, we applied our methodology with the following parameters:

• use of a single classification rule, logistic regression ($n_1 = 1$),

• draw of 1000 bootstrap samples ($n_2 = 1000$),

• random selection of variables according to a single modality ($n_3 = 1$).

Three modalities for the random selection of variables were defined:

• 1st modality: random draw of $m$ variables among 32,

• 2nd modality: random draw of $m$ groups among 18, then of one variable from each drawn group,

• 3rd modality: random draw of $m$ groups among 24, then of one variable from each drawn group.

The groups of variables considered for each modality are presented in Table 3. For modalities 2 and 3, we formed groups of variables based on correlations between variables. For the second modality, we gathered for example in the same

Table 2. Quadratic transformations.

Table 3. Composition of groups of variables.

group hemoglobin, hematocrit and ePVS, because of their high correlations. For the third modality, the same groups were used, except for the two variables linked to hospitalization for HF, the four variables linked to the no. MI and the three variables related to the NYHA class, each of these binary variables being considered as a single group.

For each modality, an ensemble score was built for all possible values of m and the one that gave maximal AUC OOB was selected. In Table 4 are reported the

Table 4. Results obtained by logistic regression.

results obtained for each modality with the optimal $m$. The best result was obtained for the third modality, with AUC OOB equal to 0.8634.

The ensemble score by logistic regression, denoted $S_2(x)$ and obtained by averaging the three ensemble scores that we constructed, gave slightly better results, with an AUC OOB of 0.8649.

5.2.2. Ensemble Score by LDA for Mixed Data

The same methodology was used, simply replacing the classification rule (logistic regression) by LDA for mixed data and keeping the same other settings. Again, for each modality, we searched for the optimal parameter $m$. The results obtained are presented in Table 5.

As for logistic regression, the best results were obtained for the third modality, with AUC OOB equal to 0.8638.

The ensemble score by LDA, denoted $S_1(x)$, yielded better results, with AUC OOB equal to 0.8654.

5.2.3. Ensemble Score Obtained by Synthesis of Logistic Regression and LDA

The final ensemble score, denoted $S_0(x)$ and obtained by synthesis of the two ensemble scores $S_1(x)$ and $S_2(x)$ presented previously, provided the best results, with AUC equal to 0.8733 in resubstitution and 0.8667 in OOB.

This ensemble score corresponds to the one obtained by applying our methodology with the following parameters:

• two classification rules are used, logistic regression and LDA for mixed data ($n_1 = 2$),

• 1000 bootstrap samples are drawn ($n_2 = 1000$),

• $m$ variables are randomly selected according to three modalities ($n_3 = 3$).

The scale of variation of the score function $S_0(x)$ was reduced to the range 0 to 100 according to the procedure described previously. We denote this “normalized” score $S(x)$.

In Table 6, we present the “raw” and “standardized” coefficients associated with each of the variables in the score function $S_0(x)$ and in the “normalized” score function $S(x)$.

5.2.4. Importance of Variables in the Score

To have a global view of the importance of the variables in the “normalized” score, we represented on a graph the absolute value of the standardized coefficient

Table 5. Results obtained by LDA for mixed data.

associated with each variable, from the largest value to the smallest (see Figure 4). Note that the most important variables are heart rate, NYHA class ≥ 3 and history of hospitalization for HF in the previous month. On the other hand, variables such as weight, no. MI ≥ 5 or BMI do not play a large part in the presence of the others.

The same type of graph was made to represent the importance of the groups of variables of the second modality, defined by the sum of the absolute values of the “standardized” coefficients associated with the variables of the group, from the largest sum to the smallest (see Figure 4). Note that the two most influential groups are NYHA (NYHA ≥ 2, NYHA ≥ 3 and NYHA ≥ 4) and history of hospitalization for HF (hospitalization for HF in the previous month and hospitalization for HF during life). Three important groups follow: “Hematology” (ePVS, hemoglobin, hematocrit), “Heart rate” and “Renal function” (creatinine and the three eGFR formulas). The least important groups of variables are “Obesity” (weight, BMI) and gender.

5.2.5. Risk Measure by an Odds-Ratio

We represented the variation of $n_0$, $n_1$, $Se(s)$, $1 - Sp(s)$, $OR_1(s)$ and $OR_2(s)$ according to the score $s$ (Table 7). For score values $s > 49.1933$, $n_1$ is less than or equal to 30; beyond this threshold value, $OR_1$ is no longer very reliable. We therefore defined $[0; 49.1933]$ as the reliability interval of the $OR_1$ and $OR_2$ functions.

We represented the variation of the odds-ratios $OR_1$ and $OR_2$ on this reliability interval (Figure 5). Reading the graph, for a patient with a score of 40 for example, $\frac{P(Y=1 \mid S > 40)}{P(Y=0 \mid S > 40)}$ is about 15 times higher than $\frac{P(Y=1)}{P(Y=0)}$.

6. Conclusions and Perspectives

In this article, we presented a methodology for constructing a short-term event risk score, based on an ensemble predictor built using two classification rules (logistic regression and LDA for mixed data), 1000 bootstrap samples and three modalities of random selection of variables. This score was normalized on a scale from 0 to 100. We gave a measure of the importance of each variable and each group of variables in the score and defined an event risk measure by an odds-ratio.

Table 6. Ensemble score.

Table 7. Variation of $n_0$, $n_1$, $Se(s)$, $1 - Sp(s)$, $OR_1(s)$ and $OR_2(s)$ according to the values of the score $s$.

Figure 4. Importance of variables and groups of variables.

Due to the nature of the available data (derived from the EPHESUS study), we had to set the short term at 30 days in order to have enough observations with an HF event. It would be better to have data on patients at

Figure 5. Risk measure by an odds-ratio.

shorter intervals, as close as possible to an event, to possibly improve the quality of the score. When such data become available, it will be interesting to apply the same methodology to construct a new score.

Note that this methodology can be adapted to the case of a data stream. Suppose that new data on heart failure patients arrive continuously. They can be allocated to bootstrap samples using online bagging [30], as sketched below. Each predictor based on logistic regression or binary linear discriminant analysis can then be updated online using stochastic gradient algorithms; as examples of such algorithms, see [31] for binary LDA and [32] for logistic regression, which use online standardized data in order to avoid numerical explosions in the presence of extreme values. The ensemble score obtained by averaging can thus be updated online.
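A minimal sketch of one step of this online scheme, assuming base models exposing a hypothetical `update(x, y)` method that performs one stochastic gradient step:

```python
import numpy as np

def online_bagging_step(models, x, y, rng):
    """One step of Oza-Russell online bagging [30]: each new observation
    (x, y) is presented to each base model k ~ Poisson(1) times, which
    mimics its multiplicity in a bootstrap sample."""
    for model in models:
        for _ in range(rng.poisson(1.0)):
            model.update(x, y)  # hypothetical one-step online update
```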

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Duarte, K., Monnez, J.-M. and Albuisson, E. (2018) Methodology for Constructing a Short-Term Event Risk Score in Heart Failure Patients. Applied Mathematics, 9, 954-974. https://doi.org/10.4236/am.2018.98065

References

1. Pitt, B., Remme, W., Zannad, F., et al. (2003) Eplerenone, a Selective Aldosterone Blocker, in Patients with Left Ventricular Dysfunction after Myocardial Infarction. New England Journal of Medicine, 348, 1309-1321. https://doi.org/10.1056/NEJMoa030207

2. Duarte, K., Monnez, J.M., Albuisson, E., Pitt, B., Zannad, F. and Rossignol, P. (2015) Prognostic Value of Estimated Plasma Volume in Heart Failure. JACC: Heart Failure, 3, 886-893. https://doi.org/10.1016/j.jchf.2015.06.014

3. Cockcroft, D.W. and Gault, H. (1976) Prediction of Creatinine Clearance from Serum Creatinine. Nephron, 16, 31-41. https://doi.org/10.1159/000180580

4. Levey, A.S., Coresh, J., Balk, E., et al. (2003) National Kidney Foundation Practice Guidelines for Chronic Kidney Disease: Evaluation, Classification, and Stratification. Annals of Internal Medicine, 139, 137-147. https://doi.org/10.7326/0003-4819-139-2-200307150-00013

5. Levey, A.S., Stevens, L.A., Schmid, C.H., et al. (2009) A New Equation to Estimate Glomerular Filtration Rate. Annals of Internal Medicine, 150, 604-612. https://doi.org/10.7326/0003-4819-150-9-200905050-00006

6. Lebart, L., Morineau, A. and Warwick, K. (1984) Multivariate Descriptive Statistical Analysis: Correspondence Analysis and Related Techniques for Large Matrices. Wiley, New York.

7. Escofier, B. and Pagès, J. (1994) Multiple Factor Analysis. Computational Statistics and Data Analysis, 18, 121-140. https://doi.org/10.1016/0167-9473(94)90135-X

8. Pagès, J. (2004) Analyse Factorielle de Données Mixtes. Revue de Statistique Appliquée, 52, 93-111.

9. Saporta, G. (1977) Une Méthode et un Programme d'Analyse Discriminante sur Variables Qualitatives. Analyse des Données et Informatique, INRIA, 201-210.

10. Rotella, F. and Borne, P. (1995) Théorie et Pratique du Calcul Matriciel. Editions Technip.

11. Carroll, J.D. (1968) A Generalization of Canonical Correlation Analysis to Three or More Sets of Variables. Proceedings of the 76th Annual Convention of the American Psychological Association, Washington DC, 1968, 227-228.

12. Friedman, J.H. and Meulman, J.J. (2004) Clustering Objects on Subsets of Attributes (with Discussion). Journal of the Royal Statistical Society: Series B (Statistical Methodology), 66, 815-849. https://doi.org/10.1111/j.1467-9868.2004.02059.x

13. Gower, J.C. (1971) A General Coefficient of Similarity and Some of Its Properties. Biometrics, 27, 857-871. https://doi.org/10.2307/2528823

14. Genuer, R. and Poggi, J.M. (2017) Arbres CART et Forêts Aléatoires, Importance et Sélection de Variables. https://arxiv.org/pdf/1610.08203v2.pdf

15. Hastie, T., Tibshirani, R. and Friedman, J. (2009) The Elements of Statistical Learning. Springer, New York. https://doi.org/10.1007/978-0-387-84858-7

16. Efron, B. and Tibshirani, R.J. (1994) An Introduction to the Bootstrap. CRC Press, Boca Raton.

17. Breiman, L. (1996) Bagging Predictors. Machine Learning, 24, 123-140. https://doi.org/10.1007/BF00058655

18. Lee, K.I. and Koval, J.J. (1997) Determination of the Best Significance Level in Forward Stepwise Logistic Regression. Communications in Statistics - Simulation and Computation, 26, 559-575. https://doi.org/10.1080/03610919708813397

19. Wang, Q., Koval, J.J., Mills, C.A. and Lee, K.I.D. (2007) Determination of the Selection Statistics and Best Significance Level in Backward Stepwise Logistic Regression. Communications in Statistics - Simulation and Computation, 37, 62-72. https://doi.org/10.1080/03610910701723625

20. Bendel, R.B. and Afifi, A.A. (1977) Comparison of Stopping Rules in Forward “Stepwise” Regression. Journal of the American Statistical Association, 72, 46-53.

21. Tibshirani, R. (1996) Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58, 267-288. http://www.jstor.org/stable/2346178

22. Breiman, L. (2001) Random Forests. Machine Learning, 45, 5-35. https://doi.org/10.1023/A:1010933404324

23. Song, L., Langfelder, P. and Horvath, S. (2013) Random Generalized Linear Model: A Highly Accurate and Interpretable Ensemble Predictor. BMC Bioinformatics, 14, 5. https://doi.org/10.1186/1471-2105-14-5

24. Akaike, H. (1998) Information Theory and an Extension of the Maximum Likelihood Principle. In: Parzen, E., Tanabe, K. and Kitagawa, G., Eds., Selected Papers of Hirotugu Akaike, Springer Series in Statistics (Perspectives in Statistics), Springer, New York, 199-213.

25. Schwarz, G. (1978) Estimating the Dimension of a Model. The Annals of Statistics, 6, 461-464. https://doi.org/10.1214/aos/1176344136

26. Tufféry, S. (2015) Modélisation Prédictive et Apprentissage Statistique avec R. Editions Technip.

27. Breiman, L. (1996) Out-Of-Bag Estimation. https://www.stat.berkeley.edu/~breiman/OOBestimation.pdf

28. Dixon, W.J. (1960) Simplified Estimation from Censored Normal Samples. The Annals of Mathematical Statistics, 31, 385-391. https://doi.org/10.1214/aoms/1177705900

29. Royston, P. and Sauerbrei, W. (2007) Multivariable Modeling with Cubic Regression Splines: A Principled Approach. Stata Journal, 7, 45-70.

30. Oza, N.C. and Russell, S. (2001) Online Bagging and Boosting. Proceedings of the 8th International Workshop on Artificial Intelligence and Statistics, Key West, 4-7 January 2001, 105-112.

31. Duarte, K., Monnez, J.M. and Albuisson, E. (2018) Sequential Linear Regression with Online Standardized Data. PLoS ONE, 13, e0191186. https://doi.org/10.1371/journal.pone.0191186

32. Monnez, J.M. (2018) Online Logistic Regression Process of a Data Stream with Online Standardized Data.