International Journal of Intelligence Science
Vol.07 No.02(2017), Article ID:78844,29 pages
10.4236/ijis.2017.72003

A Rough Set Based Optimization Method for Elderly Evaluation

Weiping Li, Tong Mo, Xingzhang Ren, Jingbo Zhang, Zhonghai Wu

School of Software and Microelectronics, Peking University, Beijing, China

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: March 26, 2017; Accepted: April 27, 2017; Published: April 30, 2017

ABSTRACT

In order to improve the efficiency of elderly evaluation, an optimization method based on rough sets is proposed. In contrast to traditional rough set attribute reduction, redundant evaluation items are eliminated by means of the items’ correlation, which avoids the large overhead of calculating the core of rough sets with many attributes. A novel rule reduction method based on reliability and coverage is proposed to solve the problems of rarely appeared rules and conflicting rules in traditional rough sets. A sorting algorithm based on coverage is used to turn the traditional flat evaluation questionnaire model into a hierarchically ordered one. With these optimizations, the number of items that need to be evaluated is greatly reduced. The proposed approach is deployed in an elderly service company named Lime Family. Real-life results show that the method can reduce more than 40% of the items with an accuracy prediction rate of over 90%. Compared with a decision tree and a method based on expert knowledge in terms of reduction rate and accuracy rate, the method matches each baseline on one index and improves on the other by 20% on average.

Keywords:

Rough Set, Attribute Reduction, Elderly Evaluation, Decision Rule

1. Introduction

The population aging problem is becoming serious in China, and the care of the elderly has become a heavy burden on families. Thus, professional community care and family support for the elderly have become more important than ever before. Nursing care service, which refers to a professional care team providing daily care for elderly and disabled persons, is essential in the elderly service. Currently the professional nursing service is developing rapidly in China. To deliver the nursing service, we need to perform a health evaluation for an elderly person to learn his/her health status. The proper care program for the elderly is based on this health evaluation. During the implementation of the care program, periodic health evaluation is performed to evaluate and improve the program. The health evaluation of the elderly is therefore one of the important tasks in the nursing service.

There exist some elderly evaluation methods based on the Barthel Index [1] and the Ability Assessment for Older Adults [2], with data from questionnaires or face-to-face interviews. Our work is based on data from the Lime Family company, a professional company providing elderly care service in China. Lime Family customizes its own evaluation table based on the Chinese professional standard MZ/T 039-2013 and the Barthel Index. Normally the investigator fills in the tables on the basis of face-to-face or telephone interviews. A lot of evaluation items need to be investigated in order to fully understand the status of an elderly person. On one hand, the survey becomes a time-consuming task. On the other hand, in the evaluation process we found that some items are redundant: after some items are determined, many related items can be estimated and the investigator does not need to ask the elderly person. For instance, a person with Alzheimer’s disease normally cannot go outside alone and hence cannot go out for shopping, running and so on. To this end, this paper focuses on the reduction and ordering of the evaluation items to simplify the evaluation process and improve its efficiency.

The key to optimizing the evaluation model is to infer the values of items from related items in the questionnaire and to order the items so that this inference becomes more efficient. Commonly used methods for health and elderly evaluation include the factor analysis method in statistics and the Rasch analysis method [3] [4]. The main task in these methods is the deletion of redundant evaluation items. These methods produce a smaller set of items, but the items are still in a flat model, namely, the dependencies among them are not analyzed. Another kind of method is to analyze the relationships among the items, for example the Bayes method [5], decision trees [6] [7], and frequent pattern mining [8]. To improve the efficiency of the elderly evaluation, we need to reduce the items to be asked by inferring them from other items, and to order the items. The attribute reduction in rough set theory provides a way to resolve both questions [9] [10] [11] [12].

As a mathematical tool to deal with imprecise, inconsistent and incomplete information and knowledge, rough set theory has attracted wide attention since it was proposed by Professor Pawlak in 1982 [13] [14]. It has been widely used in the fields of machine learning, data mining, and decision support [15] - [20]. Since rough set theory can effectively deal with vague and uncertain data without any additional information or a priori knowledge, this paper takes the rough set method to improve the elderly evaluation.

The main contributions of this paper include: 1) A method to calculate the correlation degree of evaluation items using information theory and the conditional entropy of evaluation items, and an attribute reduction method for eliminating redundant evaluation items. 2) The concepts of reliability and coverage degree for the decision equivalence class, which improve the rule generation method in rough sets by avoiding uncertain rules and rarely appeared rules. 3) An evaluation item sorting algorithm based on the coverage degree. In this way, we change the traditional flat evaluation model into an ordered evaluation model. The elderly evaluation process can then be carried out in the sorted order, predicting some items with the decision rules, which reduces the number of evaluation items that need to be asked.

This paper is organized as follows. Section 2 reviews related work. Section 3 introduces the evaluation attribute reduction method and prediction rule optimization method. Section 4 introduces the rough set based model for improving the elderly evaluation and the core algorithms. Section 5 shows the experiment results with the real case data. Section 6 concludes the paper.

2. Related Work

The work in this paper optimizes the elderly evaluation process. On the one hand, we analyze the evaluation items to find the redundant items, which reduces the workload of the survey. On the other hand, we estimate the uninvestigated items from the known items to reduce the number of items collected from elderly people. This section reviews item reduction in healthcare questionnaires, the prediction of related items, and rough set based attribute reduction.

2.1. Item Reduction in the Healthcare Questionnaire Area

Usually people take statistical methods to reduce or delete evaluation items. Luis Prieto applies the Exploratory Factor Analysis (EFA) method of Classical Test Theory (CTT) and the Rasch Analysis (RA) method of item response theory to reduce the Nottingham Health Profile (NHP 38) [3]. The 38 items are reduced to 20 and 22 items respectively, yielding the new scales NHP 20 and NHP 22. CTT resulted in 20 items (4 dimensions) whereas RA resulted in 22 items (2 dimensions). Both instruments showed similar characteristics under CTT requirements: item-total correlation ranged 0.45 - 0.75 for NHP 20 and 0.46 - 0.68 for NHP 22, while reliability ranged 0.82 - 0.93 and 0.87 - 0.94 respectively.

Ephrem Fernandez uses a 3-step decision rule to reduce the McGill Pain Questionnaire (MPQ). By using a minimum absolute frequency of 17 and a minimum relative frequency of 1/2 as threshold values, the words of the MPQ are reduced from 78 to fewer than 20 on average. The selective reduction and reorganization of these descriptors can enhance the efficiency of this approach to pain assessment [4].

The above research on item reduction mainly aims at reducing the number of questionnaire items, and the result is still without hierarchy. In elderly evaluation, the values of items from different aspects have strong correlations. For example, the health of the elderly usually has a strong impact on the ability of self-care. Therefore, the order of items is as important as reducing the number of questionnaire items that need to be asked.

2.2. Prediction of Related Items

The values of some evaluation items can be inferred from highly correlated items, so that those items do not need to be asked and the efficiency of evaluation is improved. Related methods include Bayesian formulation, decision trees, frequent pattern mining and so on.

A Bayesian forecasting model is described in the literature [5] , estimating the prior probabilities from a sample of SPEAK Test scores of 803 prospective ITAs at UVa between 2006 and 2013, and using the TOEFL iBT scores from 318 students to update the forecast probabilities. Overall, this forecasting model demonstrates and explains a useful statistical association between the SPEAK Test scores and the TOEFL iBT scores, used widely in university admissions.

In the literature [6] , S.S. Panigrahi deals with two established technique viz. Epsilon-SVR and Decision Tree for stock market forecasting. The available numerical historical data and some technical indices of BSE-sensex have been used for empirical studies. Both epsilon-SVR and Decision Tree techniques are run over the dataset, respective efficiencies has been evaluated and explained through established statistical parameters. The work concludes that the SVM has outperformed decision tree in training front and lagged behind in validation in comparison with regression decision tree.

Le Thi Ngoc Anh proposes a model forecasting the possibility of cholera occurrence in Hanoi city based on association rule mining from a cholera dataset collected in Hanoi’s districts from 2001 to 2012 [8]. Experimental results show that the proposed method is suitable for cholera forecasting and can be used as an important input in the decision making process of preventive healthcare.

Chunmei Liu introduces non-financial indices into a financial risk forecasting system to establish a mixed financial index evaluation system including both financial and non-financial indices, and also introduces the C4.5 decision tree algorithm into the modeling process [7]. Trained on 40 listed companies from 2004 and 2005, the resulting model achieves an accuracy rate of 82.5%.

2.3. Attribute Reduction Based on Rough Set

Attribute reduction is one of the important topics in research on rough set theory and has been a concern of many researchers. Using traditional rough sets to optimize the elderly evaluation model has some problems and shortcomings: 1) finding the core in traditional attribute reduction requires a large amount of computation, so it is unsuitable for large data sets; 2) traditional attribute reduction algorithms do not measure dependencies among the attributes, so redundant attributes remain in the final result; 3) traditional decision rule generation algorithms may generate conflicting rules, which increase the indeterminacy of decisions; 4) traditional decision rule generation algorithms may generate rarely appeared rules.

Compared with traditional rough set theory, Honghai Feng uses a rule generation algorithm (RGA) to reduce the attributes instead of calculating the core [9]. Experimental results show that RGA achieves good classification performance. In order to improve the efficiency of attribute reduction, Chen Yanyun integrates a parallel strategy into attribute reduction and constructs a parallel rough set attribute reduction algorithm based on attribute frequency [10]. The algorithm is applied to corn breeding, and experiments show that it outperforms the traditional algorithms. In the literature [11], Hiroshi Sakai and others extend a rough set based rule generation algorithm to tables with non-deterministic information and implement it as a constraint satisfaction problem. This algorithm is important for rule generation in tables with uncertainties. Zhe Liu provides a new heuristic algorithm named “Short First Extraction (SFE)” based on classical rough set theory for rule generation [12]. On datasets from the UCI machine learning repository, SFE has better performance than the Johnson reducer, the genetic reducer and Holte’s 1R reducer.

In summary, using another method instead of calculating the core to reduce the attributes and generate rules is a feasible approach. In this paper, we optimize the elderly evaluation model based on rough sets, introducing information entropy and conditional entropy into attribute reduction. We also define reliability and coverage to optimize the generation of decision rules, in order to avoid the problems mentioned above.

3. Problem Formulation

This section provides the definitions for the rough sets of the elderly evaluation model. The notations and descriptions used in this section are shown in Table 1.

Table 1. Notations and description.

3.1. Definition

Definition 3.1 Let M = (A, V) be the elderly evaluation model, where A = {a1, a2, ..., am} is the collection of evaluation items and V is the collection of all possible values for each evaluation item.

Definition 3.2 Let U be the collection of elderly evaluation results.

Let vi(aj) be the evaluation value of item aj for the i-th elderly person. The evaluation result of the i-th elderly person is ui = (vi(a1), vi(a2), ..., vi(am)), and the domain U = {u1, u2, ..., un} is the collection of the evaluation results of all persons.

Definition 3.3 Let I = (U, A, V) be the evaluation information system for the elderly. Based on Definitions 3.1 and 3.2, I can be defined with rough set theory.

Definition 3.4 Let IND(A′) be the indiscernibility relation on A′, a collection of evaluation items with A′ ⊆ A. If for two elderly persons p and q and every ai ∈ A′ we have vp(ai) = vq(ai), then p and q are in the indiscernibility relation IND(A′).

Definition 3.5 Let EC(A') be the collection of equivalence classes of A'.

For a collection of evaluation items A′ ⊆ A, A′ = {a1, a2, ..., ap}, the evaluation result of elderly person i restricted to A′ is ui = (vi(a1), vi(a2), ..., vi(ap)). With the indiscernibility relation IND(A′) we obtain equivalence classes ec1(A′), ec2(A′), ..., ecq(A′): the evaluation results within each equivalence class are equal, while they differ from those of the other equivalence classes. These equivalence classes constitute the collection of equivalence classes EC(A′).

Definition 3.6 Decision Relation, Decision Attribute, and Condition Attribute.

In the elderly evaluation model, there exists a logical inference relation among items, namely, one item can be determined by other items. If the values of D, a collection of items, can be inferred from the collection C, then C has the decision relation on D, denoted C → D, with C ⊆ A, D ⊆ A, C ∩ D = ∅. C is the collection of condition attributes, and D is the collection of decision attributes.

Definition 3.7 Let the D-Table(D) be the Decision table of Decision Attribute D, and ec-d(D) be the decision equivalence class.

The decision table of decision attribute D, D-Table(D), is composed of the decision relation C → D, the condition attributes C, the decision attributes D, and EC(C), the collection of equivalence classes of C, together with the value of the decision attribute for each eci(C) in EC(C). Each row in the table is a decision equivalence class ec-d(D).

Definition 3.8 Decision rule r.

From the decision equivalence classes ec-d(D) in the decision table D-Table(D), we can create decision rules r = prec → posc, where the postcondition posc is the value of the decision attribute in the same row, and the precondition prec is a subset of the condition attribute values.

Based on these definitions, we optimize the elderly evaluation model as follows. In turn, take each item as the decision attribute and the others as the condition attributes to build a decision table. Optimize the decision table by finding and eliminating the redundant items. Optimize each decision equivalence class by eliminating those equivalence classes that may create uncertain rules or rules with low support. Create rules from each remaining equivalence class in the decision table. Calculate the values of the decision attributes from the condition attributes, so that fewer items need to be asked during the evaluation process.

3.2. Calculating the Correlation Degree of Evaluation Items

In turn, each evaluation item is selected as the decision attribute to build a decision table, leaving the others as condition attributes. Normally not all of these condition attributes have an impact on the decision attribute, so attribute reduction is performed to eliminate those with weak impact. The attribute reduction depends on the importance of the condition attributes to the decision attribute. If there is an evaluation item in the decision table such that, once the values of the other condition attributes are assigned, the decision attribute is fixed regardless of this item's value, then this evaluation item apparently has no impact on the decision attribute and should be eliminated.

It can thus be seen that the correlation degree of evaluation items in the decision table is important for attribute reduction. So we need to calculate the correlation degree of evaluation items, which is determined by the conditional probabilities among the attribute values. The conditional probability of the evaluation items' values is the probability that one item takes a certain value given that another item has been assigned. Higher conditional probabilities indicate a higher correlation degree between the two items.

The evaluation items take different values, and the probability distributions of the values differ from one item to another. For some evaluation items the distribution over the elderly persons is close to uniform, but for other items it may concentrate on one value, with only a few people taking the other values. In Section 5 we give statistics on the data of the experimental participants. From Figure 2 we can see that the probability distributions of different items' values are different and most of them are not uniform. Given this, one should take the probability distribution of the items' values into account when calculating the correlation of evaluation items. For instance, suppose the conditional probability is 100%, namely, when item A takes a certain value, the probability of item B taking a certain value is 100%. But if this value of item B is scarce, namely, only a few people take this value, then the corresponding decision equivalence classes are of little significance.

Entropy is a property of thermodynamic systems that can measure the uniformity of a distribution of objects. Combining the concept of entropy with conditional probability, we use the notion of conditional entropy from information theory to derive the calculation method for the correlation degree of evaluation items [21].

The information entropy of evaluation item x is,

H(x) = \sum_{x_i \in x} P(x_i) \log_a \frac{1}{P(x_i)} (1)

where P(xi) is the probability that x takes the value xi, and a is an arbitrary logarithm base. The conditional entropy of evaluation item x given item y is,

H(x|y) = \sum_{y_j \in y} P(y_j) \sum_{x_i \in x} P(x_i|y_j) \log_a \frac{1}{P(x_i|y_j)} (2)

where P(xi|yj) is the conditional probability that x takes xi when y is yj, and a is an arbitrary logarithm base.

With H(x) and H(x|y), the correlation degree is

SU(x, y) = 1 - \frac{H(x|y) + H(y|x)}{H(x) + H(y)} (3)

Suppose evaluation item x may take the values x0, x1, ..., and y may take the values y0, y1, ....

When x and y are independent of each other, the value of x has no impact on y, namely, P(x0|y0) = P(x0), P(x0|y1) = P(x0), ..., P(y0|x0) = P(y0), P(y1|x0) = P(y1), .... Putting these into formula (2), we have H(x|y) = H(x) and H(y|x) = H(y); putting these into formula (3), we get SU(x, y) = 1 - 1 = 0.

When x and y are partially related, conditioning changes the probabilities, e.g. P(x0|y0) ≠ P(x0), P(x0|y1) ≠ P(x0), ..., so H(x|y) < H(x) and H(y|x) < H(y). Putting these into formula (3), we get SU(x, y) < 1.

When x and y are completely correlated, namely, when x = x0 there must exist exactly one corresponding value y = y0 or y = y1, then, when x = x0, we have P(x0|y0) = 1 and P(x0|y1) = 0, or P(x0|y0) = 0 and P(x0|y1) = 1.

In the same way, when y = y0, there must exist x = x0 or x = x1, so P(y0|x0) = 1 and P(y0|x1) = 0, or P(y0|x0) = 0 and P(y0|x1) = 1.

With formula (2) we have H(x|y) = 0 and, in the same way, H(y|x) = 0. Thus, with formula (3), SU(x, y) = 1 - 0 = 1.

With formula (3), on the one hand, we can calculate the correlation degree between the condition attributes and the decision attribute and, by setting a threshold, filter out the condition attributes whose impact on the decision is low. On the other hand, we can also calculate the correlation degree between different condition attributes to further filter some condition attributes, which eliminates attributes with a similar function.
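As a concrete illustration, the following Python sketch computes the correlation degree SU from two lists of item values (one value per elderly person). The function names, the logarithm base 10 and the handling of the degenerate case where both items are constant are our own assumptions, not part of the original model; the toy check at the end uses the Table 2 data introduced later in Section 3.4.

import math
from collections import Counter

def entropy(values, base=10):
    """Information entropy H(x) of a list of item values (formula 1)."""
    n = len(values)
    return sum((c / n) * math.log(n / c, base) for c in Counter(values).values())

def conditional_entropy(x_values, y_values, base=10):
    """Conditional entropy H(x|y) of item x given item y (formula 2)."""
    groups = {}
    for xv, yv in zip(x_values, y_values):
        groups.setdefault(yv, []).append(xv)
    n = len(y_values)
    return sum((len(g) / n) * entropy(g, base) for g in groups.values())

def su(x_values, y_values):
    """Correlation degree SU(x, y) of two items (formula 3)."""
    hx, hy = entropy(x_values), entropy(y_values)
    if hx + hy == 0:
        return 0.0  # both items constant; treated here as uncorrelated (assumption)
    return 1 - (conditional_entropy(x_values, y_values)
                + conditional_entropy(y_values, x_values)) / (hx + hy)

# Toy check with the Table 2 data (X = INCOME, Y = PEE_CONTROL, Z = SOCIAL_SKILLS)
X, Y, Z = [1, 2, 1], [2, 2, 2], [1, 2, 1]
print(su(X, Y))  # 0.0 -> independent items
print(su(X, Z))  # 1.0 -> completely correlated items

Running the sketch reproduces the values SU(X, Y) = 0 and SU(X, Z) = 1 derived in the worked example of Section 3.4.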

3.3. The Measure of Reliability and Coverage Degree for the Decision Equivalent Class

When creating the decision rules there may be conflicts between rules, i.e., two rules whose condition attributes take the same values but whose decision attributes differ. Rule optimization deals with this problem and eliminates the conflicts. Moreover, according to Definition 3.7, the decision rules are created from the equivalence classes of the condition attributes. For some rules, if the number of evaluation results in the corresponding equivalence class is too small, the situation covered by the rule is rare and the rule is a sparse rule that should be eliminated.

The main reason for rule conflicts lies in an inconsistent decision table, i.e., the decision table contains equivalence classes that have the same condition attribute values but different decision attribute values. In order to make the decision table consistent, the conflicting decision equivalence classes are deleted. The key issue is to establish the evidence for finding the conflicting decision equivalence classes to be deleted. We take the reliability of the decision equivalence class as this evidence [22]. The reliability is

u(X_i, Y_j) = \frac{|X_i \cap Y_j|}{|X_i|} (4)

where Xi is one of the equivalence classes in the collection EC(C) of the condition attributes C, and |Xi| is the number of elderly persons contained in Xi; Yj is one of the equivalence classes in the collection EC(D) of the decision attribute D, and |Yj| is the number of elderly persons contained in Yj; Xi ∩ Yj is the set of elderly persons that appear in both Xi and Yj, and |Xi ∩ Yj| is its size.

If there is more than one decision equivalence class with the condition part Xi, then these decision equivalence classes have the same condition attributes but different decision attributes, namely, they conflict with each other. From formula (4), the reliabilities of all decision equivalence classes sharing the condition part Xi sum to one. To resolve this kind of conflict and delete the secondary factors that may cause conflicts, we set a threshold α = 0.5 on the reliability of the decision equivalence class. Only those decision equivalence classes with reliability larger than α are retained.

For evaluation items with an unbalanced distribution of values, the rarely taken values mean that few elderly people have these values in the evaluation, i.e., these values represent extremely rare cases. When building the decision table, the equivalence classes that include these rarely appeared values do not cause conflicts and would be retained. But when creating the decision rules, the rules created from this kind of equivalence class are necessarily rarely appeared rules, which ought to be deleted. So the key to deleting rarely appeared rules is a measure for identifying the equivalence classes that include rarely appeared evaluation values. To this end, we define the coverage degree of the decision equivalence class as this measure.

The coverage degree is,

c(X_i, Y_j) = \frac{|X_i \cap Y_j|}{|U|} (5)

where, in the decision table D-Table(D), Xi is an equivalence class in the collection EC(C) of the condition attributes C, Yj is an equivalence class in the collection EC(D) of the decision attribute D, and |U| is the total number of elderly people. Xi ∩ Yj is the set of elderly persons that appear in both Xi and Yj, and |Xi ∩ Yj| is its size.

We can set a threshold on the coverage degree to delete the rarely appeared decision rules. For example, let the threshold of the coverage degree be β; if the coverage degree of an equivalence class is less than β, then this decision equivalence class includes rarely appeared evaluation values and should be deleted. In this way, no rarely appeared rule is left.
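A minimal Python sketch of how the reliability and coverage degree of the decision equivalence classes might be computed and filtered is given below; the data layout (a list of dicts mapping item names to values), the function names and the default thresholds are assumptions made for illustration.

from collections import defaultdict

def decision_equivalence_classes(rows, condition_attrs, decision_attr):
    """Compute reliability u (formula 4) and coverage degree c (formula 5)
    for every decision equivalence class of one decision table."""
    n = len(rows)
    cond_count = defaultdict(int)   # |X_i|
    joint_count = defaultdict(int)  # |X_i ∩ Y_j|
    for row in rows:
        x = tuple(row[a] for a in condition_attrs)
        cond_count[x] += 1
        joint_count[(x, row[decision_attr])] += 1
    return [
        {"condition": x, "decision": y,
         "reliability": j / cond_count[x], "coverage": j / n}
        for (x, y), j in joint_count.items()
    ]

def filter_classes(classes, alpha=0.5, beta=0.1):
    """Keep classes that neither conflict (reliability > alpha) nor contain
    rarely appeared values (coverage > beta); thresholds are example values."""
    return [c for c in classes if c["reliability"] > alpha and c["coverage"] > beta]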

3.4. An Example

To clarify the concepts and formulas in this section, we demonstrate them with an example. Suppose one evaluation covers three items, namely INCOME, PEE_CONTROL, and SOCIAL_SKILLS, each with the value range 1, 2, 3, and 4. Three elderly people, marked p, q, and r respectively, join the evaluation. The evaluation results are shown in Table 2.

The rough set based evaluation model is as follows:

1) The elderly people evaluation model M = (A, V), where,

A = {INCOME, PEE_CONTROL, SOCIAL_SKILLS},

V = {[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]}

2) The evaluation result collection is,

U = {[1, 2, 1], [2, 2, 2], [1, 2, 1]}

3) The elderly people evaluation system I = (U, A, V), where,

U = {[1, 2, 1], [2, 2, 2], [1, 2, 1]}

A = {INCOME, PEE_CONTROL, SOCIAL_SKILLS}

V = {[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]}

4) IND(A′), the indiscernibility relation on A′, where A′ ⊆ A.

a) When A′ = {INCOME}, vp(a1) = vr(a1) = 1, so elderly people p and r are in the indiscernibility relation on the evaluation item collection {INCOME}. Since vp(a1) = 1, vq(a1) = 2, and vp(a1) ≠ vq(a1), p and q are distinguishable on the evaluation item collection {INCOME}.

b) When A′ = {PEE_CONTROL, SOCIAL_SKILLS}, vp(a1) = vr(a1) = 1 and vp(a2) = vr(a2) = 2, so p and r are in the indiscernibility relation on the evaluation item collection {PEE_CONTROL, SOCIAL_SKILLS}. Since vp(a1) = 1, vq(a1) = 2, and vp(a1) ≠ vq(a1), p and q are distinguishable on the evaluation item collection {PEE_CONTROL, SOCIAL_SKILLS}. Since vq(a1) = 2, vr(a1) = 1, and vq(a1) ≠ vr(a1), q and r are distinguishable on the evaluation item collection {PEE_CONTROL, SOCIAL_SKILLS}.

Table 2. Evaluation results.

5) EC(A′), the collection of equivalence classes of item collection A′, where A′ ⊆ A.

a) When A′ = {INCOME}, because p and r are in the indiscernibility relation and q is distinguishable from {p, r}, EC(A′) = {[p, r], [q]}.

b) When A′ = {PEE_CONTROL, SOCIAL_SKILLS}, because p and r are in the indiscernibility relation and q is distinguishable from {p, r}, EC(A′) = {[p, r], [q]}.

6) In system I, there exists one decision relation C → D, with C ⊆ A, where D is the decision attribute INCOME and C is the condition attribute collection {PEE_CONTROL, SOCIAL_SKILLS}.

7) In system I, we can build a decision table D-Table(D) shown in Table 3, where D is the decision attribute INCOME.

8) For the decision table D-Table(D), the following rules are defined:

r1: (Y, 2) ∧ (Z, 1) → (X, 1)

r2: (Y, 2) ∧ (Z, 2) → (X, 2)

r3: (Y, 2) → (X, 1)

r4: (Z, 1) → (X, 2)

r5: (Z, 2) → (X, 2)

9) Calculating the Correlation degree of evaluation items

According to formulas (1) and (2), one can calculate the information entropy and conditional entropy of the evaluation items X, Y, and Z:

H(X) = P(X=1) \log_{10} \frac{1}{P(X=1)} + P(X=2) \log_{10} \frac{1}{P(X=2)} = \frac{2}{3} \log_{10} \frac{3}{2} + \frac{1}{3} \log_{10} 3 = 0.28 (6)

H(X|Y) = P(Y=2) \left( P(X=1|Y=2) \log_{10} \frac{1}{P(X=1|Y=2)} + P(X=2|Y=2) \log_{10} \frac{1}{P(X=2|Y=2)} \right) = \frac{2}{3} \log_{10} \frac{3}{2} + \frac{1}{3} \log_{10} 3 = 0.28 (7)

Table 3. Decision table D-Table(D).

Similarly, H(Y) = 0, H(Z) = 0.28, H(Y|X) = 0, H(X|Z) = 0, and H(Z|X) = 0. With formula (3) we can get the correlation degree between item X and items Y and Z respectively,

SU(X, Y) = 1 - \frac{H(X|Y) + H(Y|X)}{H(X) + H(Y)} = 1 - \frac{0.28}{0.28} = 0 (8)

SU(X, Z) = 1 - \frac{H(X|Z) + H(Z|X)}{H(X) + H(Z)} = 1 (9)

Let σ, the threshold of SU, be 0.1. Since SU(X, Y) < σ, item Y will be deleted when X is taken as the decision attribute. Since SU(X, Z) > σ, X and Z are closely correlated, which shows the soundness of the measure. With the threshold we can filter out the items that are independent of the decision item. This builds the basis of decision table reduction and rule generation.

10) The reliability and coverage degree of the decision items

For one hundred elderly people, build the decision table D-Table(X) shown in Table 4.

Dividing the domain by the condition attribute collection C, we get the equivalence classes X = {[U1, U2], [U3], [U4]}; dividing the domain by the decision attribute D, we get the equivalence classes Y = {[U1], [U2], [U3, U4]}. From these we obtain the following rules and, according to formulas (4) and (5), their reliability u(x, y) and coverage degree c(x, y). The rules with their equivalence classes, reliability and coverage are shown in Table 5.

Let α = 0.6 and β = 0.2; then rules R2 and R4 are deleted, and we finally keep rules R1 and R3. Although R2 has a larger coverage degree, it can lead to decision conflicts. Although rule R4 has a higher reliability, it is a rarely appeared rule and may not be true. So rules R2 and R4 should be deleted.

Table 4. D-Table(X).

Table 5. Rule table R.

4. Optimization of the Evaluation Model

This section describes the optimization process for the elderly evaluation model. There are four steps in the optimization.

1) Build the decision table based on the correlation degree of evaluation items:

In turn, take one item as the decision attribute and the others as the condition attributes to build the decision table. Firstly, optimize the condition attributes in the decision table by calculating the correlation degree between each condition attribute and the decision attribute, and delete the attributes whose correlation degree is less than the threshold. Secondly, delete the equivalent condition attributes in the decision table: calculate the correlation degree of each condition attribute with all the others and delete the condition attributes whose correlation degree with a particular condition attribute is larger than their correlation degree with the decision attribute.

2) Generate the decision rules with the reliability and coverage degree.

Calculate the reliability and coverage degree for each decision equivalent class in the decision table and delete those classes whose degree is less than the threshold. In this way the classes that may generate the rarely appeared rules and the uncertain rules will be deleted. Then the decision rules will be created by each decision equivalent class.

3) Sort the evaluation items with the coverage degree. Use the rule merge algorithm to merge the rules according to the coverage degree and create the evaluation sequence of items by sorting their coverage degree in descending order.

4) Verify the model with accuracy and reduction rate. Use the merged rules and evaluation sequence to simulate the elderly person’s evaluation process. By calculating the reduction rate and accuracy one can verify the model.

4.1. The Evaluation Reduction Algorithm Based on the Correlation Degree

The evaluation reduction algorithm is based on the formulas in Section 3.2. Evaluation item reduction builds the basis for decision rule generation and for the sorting of the evaluation items.

For the elderly evaluation information system I = (U, A, V), set the threshold of SU to σ. Firstly, build the collection of decision tables List(d-table). Secondly, traverse every decision table dt in List(d-table) to find the unrelated condition attributes: for every condition attribute c in dt, calculate the correlation degree su between c and the decision attribute d; if su < σ then delete c from the condition attribute collection. Finally, traverse dt in List(d-table) again and delete the redundant condition attributes in each dt: for each condition attribute c in dt, calculate the correlation degree su′ between c and each other condition attribute c′; if the correlation degree between c′ and d is smaller than su′, i.e. c′ is more strongly correlated with c than with the decision attribute, then c′ is considered redundant and is deleted from dt. We then get the optimized collection of decision tables List(d-table).

The pseudo code is shown below.
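The published pseudo code is not reproduced here; the following Python sketch illustrates one possible reading of Algorithm 1 for a single decision table. The data layout (a dict mapping item names to value lists), the helper names and the traversal order are our assumptions.

import math
from collections import Counter, defaultdict

def _entropy(xs, base=10):
    n = len(xs)
    return sum((c / n) * math.log(n / c, base) for c in Counter(xs).values())

def _cond_entropy(xs, ys, base=10):
    groups = defaultdict(list)
    for x, y in zip(xs, ys):
        groups[y].append(x)
    n = len(ys)
    return sum((len(g) / n) * _entropy(g, base) for g in groups.values())

def _su(xs, ys):
    hx, hy = _entropy(xs), _entropy(ys)
    return 0.0 if hx + hy == 0 else 1 - (_cond_entropy(xs, ys) + _cond_entropy(ys, xs)) / (hx + hy)

def reduce_decision_table(data, decision_attr, sigma=0.1):
    """Algorithm 1 sketch: keep only condition attributes that are related to the
    decision attribute and not redundant with another condition attribute."""
    d = data[decision_attr]
    # Step 1: drop condition attributes weakly correlated with the decision attribute.
    conditions = [c for c in data if c != decision_attr and _su(data[c], d) >= sigma]
    # Step 2: drop attributes more correlated with another condition attribute than with d.
    redundant = set()
    for c in conditions:
        for c2 in conditions:
            if c2 != c and c2 not in redundant and c not in redundant:
                if _su(data[c], data[c2]) > _su(data[c2], d):
                    redundant.add(c2)
    return [c for c in conditions if c not in redundant]

def build_reduced_tables(data, sigma=0.1):
    """Build List(d-table): one reduced condition-attribute set per decision attribute."""
    return {d: reduce_decision_table(data, d, sigma) for d in data}

Which attribute survives the redundancy check can depend on the traversal order; the paper's pseudo code would fix that order, which the sketch leaves as a simple iteration.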

4.2. The Rule Generation Algorithm Based on Reliability and Coverage Degree

The rule generation algorithm is based on the reliability and coverage degree calculation in Section 3.3, and it deletes decision equivalence classes in order to generate accurate and effective decision rules.

Let the threshold of reliability be α and the threshold of coverage degree be β. Traverse each dt in the collection of decision tables List(d-table) obtained with Algorithm 1. Traverse each decision equivalence class ec-d row by row in dt and calculate the reliability and coverage degree of ec-d. If the reliability is less than α, the ec-d is involved in a conflict; if the coverage degree is less than β, the ec-d contains rarely appeared evaluation values. In either case the ec-d is deleted from dt.

Next, traverse the ec-ds in dt and generate the rule set R. For each decision equivalence class ec-d, generate a set of candidate rules in accordance with the permutations and combinations of the condition attribute set, R: {Xi → Yj}. In order to improve the matching rate and to reduce the number of condition attributes in the rules, among the candidates generated from the same decision equivalence class we select the rules with fewer condition attributes that have no conflict with rules generated from other ec-ds.

With Algorithm 2, we obtain the accurate and efficient decision rule set List(rule). The pseudo code is shown as follows.
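Again, in place of the published pseudo code, a minimal Python sketch of Algorithm 2 for one decision table is given below. The rule representation (a frozenset of (item, value) preconditions plus a single (item, value) consequent) and the shortest-non-conflicting-rule selection are assumptions that follow the description above.

from collections import defaultdict
from itertools import combinations

def generate_rules(rows, condition_attrs, decision_attr, alpha=0.55, beta=0.1):
    """Algorithm 2 sketch: filter decision equivalence classes by reliability and
    coverage, then emit the shortest non-conflicting rule for each kept class."""
    n = len(rows)
    cond_count, joint_count = defaultdict(int), defaultdict(int)
    for r in rows:
        x = tuple(r[a] for a in condition_attrs)
        cond_count[x] += 1
        joint_count[(x, r[decision_attr])] += 1

    # Decision equivalence classes passing both thresholds (formulas 4 and 5).
    kept = [(x, y) for (x, y), j in joint_count.items()
            if j / cond_count[x] > alpha and j / n > beta]

    rules = []
    for x, y in kept:
        full = dict(zip(condition_attrs, x))
        for k in range(1, len(condition_attrs) + 1):
            found = None
            for attrs in combinations(condition_attrs, k):
                pre = frozenset((a, full[a]) for a in attrs)
                conflict = any(pre <= frozenset(zip(condition_attrs, x2)) and y2 != y
                               for x2, y2 in kept)
                if not conflict:
                    found = (pre, (decision_attr, y))
                    break
            if found:
                rules.append(found)
                break
    return rules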

4.3. Evaluation Items Sort Algorithm Based on the Coverage Degree

This part proposes the sorting algorithm that produces an optimized evaluation sequence.

If the items in the antecedent of a rule have been evaluated, the items in its consequent can be predicted by the rule, so that the number of items that need to be evaluated is reduced. Therefore, in the evaluation process, the items are ordered by their frequency in the rule antecedents and by the rules' coverage degree.

The pseudo code is shown as follows.
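The published pseudo code for the sorting step is likewise not reproduced here. One plausible reading, sketched below, lets each item accumulate the coverage degrees of the rules in whose antecedents it appears and asks the highest-weighted items first; the exact merging rule of the paper may differ, so this weighting scheme and the rule representation are assumptions.

from collections import defaultdict

def sort_items(rules, all_items):
    """Algorithm 3 sketch: order evaluation items so that items appearing in many
    high-coverage rule antecedents are asked first. `rules` is a list of
    (preconditions, consequent, coverage) triples, where preconditions is an
    iterable of (item, value) pairs."""
    weight = defaultdict(float)
    for preconditions, _consequent, coverage in rules:
        for item, _value in preconditions:
            weight[item] += coverage  # antecedent frequency weighted by coverage degree
    ordered = sorted(weight, key=weight.get, reverse=True)
    # Items that never occur in an antecedent keep their original relative order.
    return ordered + [i for i in all_items if i not in weight]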

4.4. The Indexes for the Effectiveness of the Method

There are two intuitive measure indexes for this elderly evaluation model. One is the reduction rate (rr) and the other is the accuracy rate of prediction (ar).

rr = \frac{len(S')}{len(S)} (10)

ar = \frac{len(S'')}{len(S')} (11)

where S is the set of key-value pairs <i, v>, in which i indicates the i-th evaluation item and v is its value; S' is the set of key-value pairs <i, v'>, in which v' is the predicted value of the i-th evaluation item; and S'' is the subset of S' whose predictions are correct. The function len returns the size of a set.

The reduction rate in formula (10) reflects the effect of our method: a greater reduction rate means a more obvious optimization of the model and a more efficient evaluation process. The accuracy of prediction in formula (11) reflects the quality of our method: a higher accuracy means the predictions are closer to the real values. Because the real values of the evaluation items are imprecise in nature, it is very difficult to get an accuracy over 95%. A good model optimization should keep 80% accuracy and over 30% reduction rate.

Next we simulate the elderly evaluation process and calculate the reduction rate and the accuracy. For an elderly person u, we evaluate the items following the evaluation Sequence(item) obtained from Algorithm 3. For the i-th item, we get its value v and put the key-value pair <i, v> into the evaluation sequence S; in this way we simulate the real evaluation process. With Algorithm 2 one gets the decision rule set List(rule). We match every rule antecedent in List(rule) against the items in S; if a rule matches, its consequent becomes the predicted evaluation value and is added to the prediction sequence S', which predicts the reduced evaluation items. Finally, rr is calculated from S' and S, and ar is calculated by comparing the real evaluation data with the predicted data.

Algorithm 4 illustrates the calculation process of the two indexes.
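Algorithm 4 itself is not reproduced here; the following Python sketch replays one person's evaluation against rules in the format produced by the Algorithm 2 sketch and computes rr and ar. Treating the denominator of rr as the full item set, and returning ar = 1 when nothing is predicted, are our assumptions.

def simulate_evaluation(person, ordered_items, rules):
    """Algorithm 4 sketch: `person` maps item -> true value, `ordered_items` is the
    prioritized evaluation sequence, `rules` is a list of (preconditions, (item, value))
    pairs. Returns (reduction rate rr, accuracy rate ar)."""
    asked, predicted = {}, {}
    for item in ordered_items:
        if item in asked or item in predicted:
            continue
        asked[item] = person[item]          # simulate asking the question
        known = {**asked, **predicted}
        changed = True
        while changed:                      # fire rules until no new item is inferred
            changed = False
            for pre, (d_item, d_value) in rules:
                if d_item not in known and all(known.get(a) == v for a, v in pre):
                    predicted[d_item] = d_value
                    known[d_item] = d_value
                    changed = True
    rr = len(predicted) / len(person)                    # formula (10); assumes S = all items
    correct = sum(1 for i, v in predicted.items() if person[i] == v)
    ar = correct / len(predicted) if predicted else 1.0  # formula (11)
    return rr, ar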

5. Experiments

5.1. Experiment Data

Experiments are based on actual data from Lime Family Limited Company (Lime Family for short). Lime Family has deployed dozens of elderly service stations in three Chinese cities, Beijing, Yantai and Haikou, and carries out professional nursing assistant training. In order to fully grasp the situation of the elderly, the existing evaluation questionnaire in Lime Family, with a total of 56 items, was drafted based on the Barthel Index and the national industry standard of Ability Assessment for Older Adults. The average evaluation time for an elderly person is about 15 - 20 minutes. Because there are so many items, the evaluation takes long enough to cause user dissatisfaction, so an effective optimization method is urgently needed. The method proposed in this paper is verified in this real application setting.

1) Experiment data

After data preprocessing, 43 items and the evaluation data of 136 elderly people were selected. The evaluation items are listed in Table 6.

2) Distribution of evaluation items’ value

The distribution of the evaluation items' values is shown by bar graphs such as Figure 1.

Figure 1 shows the value distribution of item No.0, SEX, with different colors distinguishing the different values; the proportions sum to 100%. As Figure 1 shows, item SEX has two possible values: value 1 (male, green) accounts for about 32% and value 2 (female, red) accounts for about 67%.

Table 6. Data information.

Figure 1. Example of distribution of value.

The distribution of all the evaluation items' values is shown in Figure 2. As Figure 2 shows, the actual evaluation results are very unevenly distributed across the items. Different items have different numbers of values, from at least 2 up to 5. Besides, within one item, the differences among the numbers of elderly people taking each value are relatively large, with the maximum gap reaching 99.24%. For example, in the data of evaluation item ATTACKS for the 136 elderly people, 133 elderly people have value 0, while one person has value 2 and two persons have value 1.

5.2. Experiment Result

Using the methods proposed in Section 4, the correlation degrees of the evaluation items are calculated, the decision rules are generated, the evaluation items are prioritized, and finally the reduced and prioritized evaluation model is obtained. Applying the optimized evaluation model, we re-evaluate the elderly and compare the result with the actual data to verify the effectiveness of the method. The experiment results are shown as follows.

1) Calculating the correlation degree of evaluation items

Based on formula (3), the correlation degrees between the 43 evaluation items are calculated and the result is shown in Figure 3.

The horizontal and vertical axes of Figure 3 represent the identifiers of the evaluation items. The correlation degree ranges over the interval [0, 1] and is represented by the color temperature: value 0 indicates no correlation and value 1 indicates the strongest correlation.

By setting the threshold of SU to σ = 0.2, the correlation degrees of evaluation items whose value is lower than σ are filtered out. The result is shown in Figure 4(a). For example, after the threshold filtering, there is only 1 item that is correlated with item No.0.

The correlation degrees of items after applying Algorithm 1 for reduction are shown in Figure 4(b), which indicates that the value of the item indexed by the vertical axis is influenced by the value of the item indexed by the horizontal axis. Taking the No.38 item LIFE_SKILLS as an example, after the threshold filtering there are 24 related items, No.8 - 17, No.23 - 31, No.37 - 40, and No.42, as Figure 4(a) shows. According to Table 6, they are EATING, BATHE, CLOTHING, PEE_CONTROL and so on. After the Algorithm 1 reduction, there is only 1 related item, No.11 CLOTHING, as Figure 4(b) shows, which indicates that the value of the No.38

Figure 2. Distribution of evaluation items’ value.

Figure 3. The correlation degree of the 43 evaluation items.


Figure 4. The correlation degree of the 43 evaluation items.

evaluation item LIFE_SKILLS is mainly influenced by the value of the No.11 evaluation item CLOTHING.

2) Generating the decision rules

After evaluation item reduction, each item is respectively taken as a decision attribute, yielding a total of 43 decision tables. Table 7 shows the decision table of the No.38 item, D-Table(LIFE_SKILLS), where the decision attribute D is item LIFE_SKILLS, the condition attribute C is item CLOTHING, and size is the number of elderly people who share the same values of the condition attribute and the decision attribute. Based on formulas (4) and (5), the reliability u(x, y) and coverage degree c(x, y) of each decision equivalence class in decision table D-Table(LIFE_SKILLS) are calculated.

By setting the threshold of the reliability to α = 0.55 and of the coverage degree to β = 0.1, Algorithm 2 filters the decision equivalence classes of decision table D-Table(LIFE_SKILLS); the result is shown in Table 8.

Setting CLOTHING as a and LIFE_SKILLS as d, the generated decision rules are shown in Table 9.

After applying Algorithm 2 to the decision tables optimized by Algorithm 1, there are a total of 61 decision rules.

3) Prioritizing evaluation items

Applying Algorithm 3 to the 61 decision rules, we get the reduced evaluation items and their order. There are a total of 23 evaluation items after reduction, shown in Table 10, whose identifiers represent the order of evaluation.

4) Process of optimizing evaluation

Let the 61 decision rules generated by Algorithm 2 be the rule set List(rule), the ordered evaluation items be Sequence(item), the collection of elderly evaluation results U be List(u), and all evaluation items A be List(item). The process of re-evaluating the elderly using the optimized evaluation model is shown in Figure 5.

Table 7. D-Table (LIFE_SKILLS).

Table 8. D-Table (LIFE_SKILLS) after reduction.

Table 9. Result of decision rules.

Table 10. Result of items-order.

First, evaluate item No.1, following the order of the prioritized evaluation items. Then, according to the values of the evaluated items and the decision rules, if some items can be inferred, infer their values; otherwise evaluate the next unknown item in the prioritized order, and so on. Finally, if all 23 prioritized items have been evaluated and some items still have no value, these remaining items can be evaluated in any order.

5) Verification of the effectiveness of the method

Applying the optimized evaluation model, we re-evaluate the 136 elderly people. In the process, we record the number of actually evaluated items compared with the total number of evaluation items, record the values of the items inferred by the decision rules compared with their true values, and calculate the reduction rate rr and accuracy rate ar based on formulas (10) and (11).

Based on Algorithm 4, with the identifier of the elderly person as the horizontal axis, the line chart of reduction rates (RR, dotted line) and accuracy rates (AR, solid line) for the 136 elderly people is shown in Figure 6.

Figure 5. Process of optimizing evaluation.

Figure 6. Line chart of RR and AR.

As Figure 6 shows, applying the optimized method proposed in this paper to the elderly evaluation gives an average reduction rate of 57.23% and an average accuracy rate of 82.42%, which indicates that more than half of the evaluation items can be inferred by the decision rules without asking, with an accuracy above eighty percent. The dispersion coefficients of the reduction rate and the accuracy rate are 0.22 and 0.14 respectively, indicating that their volatility is small. Overall, the optimized method proposed in this paper is effective.

5.3. Parameter Designs Analysis

In the optimized evaluation model, three parameters, the SU threshold σ, the reliability threshold α and the coverage degree threshold β, have an important influence on the result. In order to analyze their influence on the reduction rate and the accuracy rate, we designed the following three experiment groups to find better parameter settings.

1) Experiment group 1

Fixing the reliability threshold α = 0.10 and the coverage degree threshold β = 0.10, we let the correlation degree threshold σ increase by 0.1 from 0 to 1, which gives 11 experiments. In each experiment, we built decision tables, generated decision rules, carried out the evaluation of the elderly and calculated the reduction rate and the accuracy rate, optimizing the model as in Section 4.

2) Experiment group 2

Fixing the correlation degree threshold σ = 0.10 and the coverage degree threshold β = 0.10, we let the reliability threshold α increase by 0.1 from 0 to 1, which gives 11 experiments. In each experiment, we built decision tables, generated decision rules, carried out the evaluation of the elderly and calculated the reduction rate and the accuracy rate, optimizing the model as in Section 4.

3) Experiment group 3

Fixing the reliability threshold α = 0.10 and the correlation degree threshold σ = 0.10, we let the coverage degree threshold β increase by 0.1 from 0 to 1, which gives 11 experiments. In each experiment, we built decision tables, generated decision rules, carried out the evaluation of the elderly and calculated the reduction rate and the accuracy rate, optimizing the model as in Section 4.

The specific parameters of these three experiment groups are shown in Table 11.

With the value of the varied parameter in each group as the horizontal axis and the percentage as the vertical axis, the line chart of reduction rates (RR, dotted lines) and accuracy rates (AR, solid lines) for the three experiment groups is shown in Figure 7, distinguished by green, blue and red respectively.

As Figure 7 shows, there is a strong negative correlation between the reduction rate and the accuracy rate: when the reduction rate is high, the accuracy rate is usually low, which is consistent with the actual situation. In the process of optimizing the model, we want both of them to be as high as possible, so it is necessary to find an optimal trade-off. In order to measure the joint optimality of these two indexes, we introduce the F value, the harmonic mean of the two rates (analogous to the F-measure).

F = \frac{2 \times RR \times AR}{RR + AR} (12)

Table 11. Specific parameters of experiment groups.

Figure 7. RR and AR line chart of experiment groups.

With the value of the varied parameter in each group as the horizontal axis and the F value as the vertical axis, the line chart of F values for the three experiment groups is shown in Figure 8.

Combining Figure 7 and Figure 8, we selected parameter settings that satisfy different business requirements: when a higher reduction rate is required, the parameters can be σ = 0.10, α = 0.01, β = 0.10 (RR = 61.90%, AR = 75.28%); when a higher accuracy rate is required, the parameters can be σ = 0.50, α = 0.10, β = 0.10 (RR = 17.80%, AR = 97.93%); when both a higher reduction rate and a higher accuracy rate are required, the parameters can be σ = 0.10, α = 0.80, β = 0.10 (RR = 40.90%, AR = 90.44%).
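As a worked instance of formula (12), the balanced setting above gives (the rounding is ours):

F = \frac{2 \times RR \times AR}{RR + AR} = \frac{2 \times 0.4090 \times 0.9044}{0.4090 + 0.9044} \approx 0.56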

Figure 8. F values of experiment groups.

5.4. Comparative Experiment Analysis

In order to verify the effectiveness of the method, we designed a decision tree based experiment and an expert knowledge based experiment to compare against the method proposed in this paper.

1) Exp.1 decision tree based optimization experiment

First, based on the C4.5 algorithm, build 43 decision trees, each taking one of the 43 evaluation items as the decision attribute, and evaluate the items starting from item No.0 in the order of Table 6. Then, according to the values of the evaluated items and the decision trees, if some items can be inferred, infer their values; otherwise evaluate the next unknown item in the order of Table 6, and so on. Finally, once all 43 items have been evaluated, calculate the reduction rate and the accuracy rate of each elderly person based on formulas (10) and (11). The experiment result is shown in Figure 9; a minimal sketch of such a baseline is given after the next paragraph.

As Figure 9 shows, the average reduction rate is 34.97% and the average accuracy rate is 61.21%, which indicates that the decision tree method is not suitable for practical use because of its low accuracy rate.
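As an illustration only, the following Python sketch trains one tree per evaluation item; it uses scikit-learn's CART implementation (DecisionTreeClassifier) as a stand-in for C4.5 and assumes integer-coded item values, so it is not the exact baseline used in Exp.1.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_item_trees(data, items):
    """Train one decision tree per evaluation item, using the remaining items as
    candidate features. `data` is a list of dicts mapping item name -> value."""
    trees = {}
    for target in items:
        features = [i for i in items if i != target]
        X = np.array([[row[i] for i in features] for row in data])
        y = np.array([row[target] for row in data])
        trees[target] = (features, DecisionTreeClassifier().fit(X, y))
    return trees

During the simulated evaluation, an item would be inferred by walking its tree with only the already-evaluated items and falling back to asking whenever the path reaches an unknown feature; that bookkeeping is omitted here.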

2) Exp.2 expert knowledge based optimization experiment

We set up 32 decision rules and an order of evaluation items using expert knowledge from Lime Family. Following the evaluation process in Section 5.2(4), we evaluate the 136 elderly people; the experiment result is shown in Figure 10.

As Figure 10 shows, the average reduction rate is 14.93% and the average accuracy rate is 85.95%. For many elderly people the accuracy rate is 100%, which indicates that the expert knowledge based optimization method is quite reliable. However, the reduction rate is not good enough, and the problem of redundant evaluation still exists.

3) Comparison results

The line chart of reduction rates (RR, dotted line) and accuracy rates (AR, solid line) for the three experiments, the decision tree based optimization experiment Exp.1, the expert knowledge based optimization experiment Exp.2 and the experiment with the method proposed in this paper Exp.3, is shown in Figure 11, distinguished by green, red and blue respectively.

Figure 9. RR and AR line chart of Exp.1.

Figure 10. RR and AR line chart of Exp.2.

As Figure 11 shows, the RR of Exp.3 is far higher than those of Exp.1 and Exp.2, which indicates that using the method proposed in this paper to optimize the evaluation process greatly reduces the evaluation time and largely eliminates the problem of redundant evaluation. Although the AR of Exp.3 is lower than that of Exp.2, the distribution of AR in Exp.3 is more stable. Moreover, the polarization between high and low values in Exp.2 is more serious, and there are some elderly people whose accuracy rate is 0.

Table 12 shows the reduction rate rr, dispersion coefficient of rr DC (rr), accuracy rate ar, dispersion coefficient of ar DC (ar) and F value of three experiments.


Figure 11. RR and AR of Exp.1, Exp.2 and Exp.3.

Table 12. Indexes of the three experiments.

As Table 12 shows, the rr of Exp.3 is higher than those of Exp.1 and Exp.2, while its rr stability lies between the other two; the ar stability of Exp.3 is higher than those of Exp.1 and Exp.2, while its ar lies between the other two. Overall, the F value of Exp.3 is much higher than those of Exp.1 and Exp.2.

6. Conclusions

This paper proposes a method for optimizing the elderly evaluation model with rough set theory. The method is tested at the Lime Family company. Real-life results show that the method can reduce more than 40% of the items with an accuracy prediction rate of over 90%. Compared with methods commonly used in industry, our method performs well on both the reduction rate and the accuracy. For example, compared with the decision tree, our method has the same reduction rate performance and a 20% improvement on average in accuracy. Compared with the expert knowledge based method, our method has the same accuracy performance and can reduce more than 30% of the evaluation items. Our method helps to promote the efficiency of the evaluation process.

Future work includes analyzing the impact of parameter settings on the evaluation results, investigating the different importance among items, and validating with data from more companies.

Acknowledgements

This work is supported by the National High Technology Research and Development Program (“863” Program) of China under Grant No. 2015AA016009 and the National Natural Science Foundation of China under Grant No. 61232005. The authors wish to thank Lei Yang from the Lime Family company, who provided us with the evaluation data of 200 elderly persons.

Cite this paper

Li, W.P., Mo, T., Ren, X.Z., Zhang, J.B. and Wu, Z.H. (2017) A Rough Set Based Optimization Method for Elderly Evaluation. International Journal of Intelligence Science, 7, 25-53. https://doi.org/10.4236/ijis.2017.72003

References

1. Mahoney, F.I. and Barthel, D. (1965) Functional Evaluation: The Barthel Index. Maryland State Medical Journal, 14, 56-61.

2. MZ/T 039-2013 (2014) Ability Assessment for Elder Adults. Standards Press of China, Beijing.

3. Prieto, L., Alonso, J. and Lamarca, R. (2003) Classical Test Theory versus Rasch Analysis for Quality of Life Questionnaire Reduction. Health & Quality of Life Outcomes, 1, 1035-1039.

4. Fernandez, E. and Boyle, G.J. (2001) Affective and Evaluative Descriptors of Pain in the McGill Pain Questionnaire: Reduction and Reorganization. The Journal of Pain, 2, 318-325. https://doi.org/10.1054/jpai.2001.xbcorr25530

5. Gurlitz, M. (2015) Forecasting SPEAK Test Score from TOEFL Score: A Bayesian Model for Screening International Teaching Assistants. Systems & Information Engineering Design Symposium, Charlottesville, VA, 24 April 2015, 188-193. https://doi.org/10.1109/SIEDS.2015.7116971

6. Panigrahi, S.S. and Mantri, J.K. (2015) Epsilon-SVR and Decision Tree for Stock Market Forecasting. International Conference on Green Computing & Internet of Things, Greater Noida, Delhi, 8-10 October 2015, 761-766. https://doi.org/10.1109/ICGCIoT.2015.7380565

7. Liu, C. and Jiang, Q. (2009) Mixed Financial Forecasting Index System Construct and Financial Forecasting Study on the C4.5 Decision Tree. International Conference on Management & Service Science, Wuhan, 16-18 September, 1-4. https://doi.org/10.1109/ICMSS.2009.5302147

8. Anh, L.T.N., Dau, H.X. and Phuong, N.H. (2015) Cholera Forecast Based on Association Rule Mining. IEEE 2015 International Conference on Communications, Management and Telecommunications (ComManTel), DaNang, Vietnam, 28-30 December 2015, 133-137. https://doi.org/10.1109/ComManTel.2015.7394274

9. Feng, H., Chen, Y., Ni, Q. and Huang, J. (2014) A New Rough Set Based Classification Rule Generation Algorithm (RGI). International Conference on Computational Science & Computational Intelligence, Las Vegas, 10-13 March 2014, 380-385.

10. Chen, Y., Qiujianlin, Chen jianping, Chen, L. and Pan, Y. (2012) A Parallel Rough Set Attribute Reduction Algorithm Based on Attribute Frequency. International Conference on Fuzzy Systems & Knowledge Discovery, Chongqing, 29-31 May 2012, 211-215. https://doi.org/10.1109/FSKD.2012.6233881

11. Sakai, H., Wu, M. and Yamaguchi, N. (2014) On the Definability of a Set and Rough Set-Based Rule Generation. International Conference on Advanced Applied Informatics, Kita-Kyushu, 31 August-4 September 2014, 122-125. https://doi.org/10.1109/IIAI-AAI.2014.34

12. Liu, Z. and Li, Y. (2009) A New Heuristic Algorithm of Rules Generation Based on Rough Sets. International Seminar on Business & Information Management, 1, 291-294.

13. Pawlak, Z.I. (1982) Rough Set. International Journal of Computer & Information Sciences, 11, 341-356. https://doi.org/10.1007/BF01001956

14. Pawlak, Z., Grzymala-Busse, J., Slowinski, R. and Ziarko, W. (1995) Rough Set. Communications of the ACM, 38, 800-805. https://doi.org/10.1145/219717.219791

15. Hu, Q., Zhang, L., An, S., Zhang, D. and Yu, D. (2012) On Robust Fuzzy Rough Set Models. IEEE Transactions on Fuzzy Systems, 20, 636-651. https://doi.org/10.1109/TFUZZ.2011.2181180

16. Chen, H., Li, T., Ruan, D., Lin, J. and Hu, C. (2013) A Rough-Set-Based Incremental Approach for Updating Approximations under Dynamic Maintenance Environments. IEEE Transactions on Knowledge & Data Engineering, 25, 274-284. https://doi.org/10.1109/TKDE.2011.220

17. Liang, J., Wang, F., Dang, C. and Qian, Y. (2014) A Group Incremental Approach to Feature Selection Applying Rough Set Technique. IEEE Transactions on Knowledge & Data Engineering, 26, 294-308. https://doi.org/10.1109/TKDE.2012.146

18. Huang, H.H. and Kuo, Y.H. (2011) Cross-Lingual Document Representation and Semantic Similarity Measure: A Fuzzy Set and Rough Set Based Approach. IEEE Transactions on Fuzzy Systems, 18, 1098-1111. https://doi.org/10.1109/TFUZZ.2010.2065811

19. Maji, P. and Garai, P. (2013) Fuzzy-Rough Simultaneous Attribute Selection and Feature Extraction Algorithm. IEEE Transactions on Cybernetics, 43, 1166-1177. https://doi.org/10.1109/TSMCB.2012.2225832

20. Albanese, A., Pal, S.K. and Petrosino, A. (2014) Rough Sets, Kernel Set, and Spatiotemporal Outlier Detection. IEEE Transactions on Knowledge & Data Engineering, 26, 194-207. https://doi.org/10.1109/TKDE.2012.234

21. Gersho, A. and Gray, R.M. (2003) Codecell Convexity in Optimal Entropy-Constrained Vector Quantization. IEEE Transactions on Information Theory, 49, 1821-1828. https://doi.org/10.1109/TIT.2003.813478

22. Zhu, W. and Wang, F. (2007) On Three Types of Covering-Based Rough Sets. IEEE Transactions on Knowledge & Data Engineering, 19, 1131-1144. https://doi.org/10.1109/TKDE.2007.1044