Journal of Software Engineering and Applications
Vol.10 No.07(2017), Article ID:77151,20 pages
10.4236/jsea.2017.107033
On Utilization of K-Means for Determination of q-Parameter for Tsallis-Entropy-Maximized-FCM
Makoto Yasuda
Department of Electrical and Computer Engineering, National Institute of Technology, Gifu College, Motosu, Japan
Copyright © 2017 by author and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).
http://creativecommons.org/licenses/by/4.0/
Received: April 14, 2017; Accepted: June 20, 2017; Published: June 23, 2017
ABSTRACT
In this paper, we consider a fuzzy c-means (FCM) clustering algorithm combined with the deterministic annealing method and the Tsallis entropy maximization. The Tsallis entropy is a q-parameter extension of the Shannon entropy. By maximizing the Tsallis entropy within the framework of FCM, membership functions similar to statistical mechanical distribution functions can be derived. One of the major considerations when using this method is how to determine an appropriate q value and the highest annealing temperature, T_high, for a given data set. Accordingly, in this paper, a method for determining these values simultaneously without introducing any additional parameters is presented. In our approach, the membership function is approximated by a series expansion, and the K-means clustering algorithm is utilized as a preprocessing step to estimate the radius of each data distribution. The results of experiments indicate that the proposed method is effective and that both q and T_high can be determined automatically and algebraically from a given data set.
Keywords:
Fuzzy c-Means, K-Means, Tsallis Entropy, Entropy Maximization, Entropy Regularization, Deterministic Annealing
1. Introduction
Techniques from statistical mechanics can be used for the investigation of the macroscopic properties of a physical system consisting of many elements. Recently, research activities utilizing statistical mechanical models or techniques for information processing have become increasingly popular.
Rose et al. [1] [2] proposed deterministic annealing (DA) as a deterministic variant of simulated annealing (SA) [3]. In DA, the minimization problem for an objective function is treated as the minimization of the free energy of a system. The DA approach tracks the minimum of the objective function as the system temperature is lowered, thus allowing deterministic optimization of the objective function at each temperature. Hence, DA is more efficient than SA, but it does not guarantee a globally optimal solution. From the viewpoint of statistical mechanics, the membership functions of fuzzy c-means (FCM) clustering [4] with the maximum entropy or entropy regularization methods [5] [6] can be seen as statistical mechanical distribution functions. For example, FCM maximized with the Shannon entropy yields a membership function similar to the Boltzmann distribution function [1].
Tsallis [7], inspired by multi-fractals, non-extensively extended the Boltzmann-Gibbs statistics by postulating a generalized form of the entropy (the Tsallis entropy) with a generalization parameter q. The Tsallis entropy has been proven applicable to numerous systems [8] [9]. In the field of fuzzy clustering, a membership function was derived by maximizing the Tsallis entropy within the framework of FCM [10] [11] [12]. This membership function has a form similar to the statistical mechanical distribution function and is suitable for use with annealing methods because it contains a parameter corresponding to the system temperature. Accordingly, the Tsallis-entropy-maximized FCM was successfully combined with the DA method as Tsallis-DAFCM in [13].
One of the major challenges with using Tsallis-DAFCM is the determination of an appropriate value for q and the highest (or initial) annealing temperature, T_high, for a given data set. In particular, the determination of a suitable q value is a fundamental problem for systems to which the Tsallis entropy is applied. Even in physics, only a few systems are known for which q is calculable. In the previous study [13], the q values were determined experimentally and only roughly optimized.
Accordingly, we previously presented a method that can determine both q and T_high simultaneously from a given data set without introducing additional parameters [14]. The membership function of Tsallis-DAFCM was approximated by a series expansion to simplify the function. Based on this simplified formula, both q and T_high could be estimated along with the membership function for a given data set. However, it was also found that the results of this method depend on the estimated radius of the data distribution and on the locations of the clusters.
To overcome this difficulty, in this study, we propose a method that utilizes K-means [15] as a preprocessing step for the approximation method. That is, a data set is first clustered roughly by K-means. We then estimate the radius of the data distribution of each cluster and apply the approximation method to determine q and T_high.
Experiments are performed on numerical data and the Iris Data Set [16], and the results show that the proposed method can be used to determine q and T_high automatically and algebraically from a data set. It is also confirmed that the data can be partitioned into clusters appropriately using these parameters.
2. FCM with Tsallis Entropy Maximization
Let X = {x_1, ..., x_n} be a data set in p-dimensional real space, and let C_i (i = 1, ..., c) be the distinct clusters. Let u_ik be the membership function of the k-th data point x_k to the i-th cluster, and let
J = \sum_{k=1}^{n} \sum_{i=1}^{c} (u_{ik})^{m} d_{ik}    (1)
be the objective function of FCM, where m (> 1) is the degree of fuzziness and d_{ik} = \| x_k - v_i \|^2 is the squared distance between x_k and the i-th cluster center v_i.
On the other hand, the Tsallis entropy is defined as
S_q = \frac{1}{q-1} \left( 1 - \sum_{i} (p_i)^{q} \right)    (2)
where p_i is the probability of the i-th state and q is a real-valued parameter.
Next, we apply the Tsallis entropy maximization method to FCM [12] [13] . First, Equation (2) is rewritten as
S_q = -\frac{1}{q-1} \left( \sum_{k=1}^{n} \sum_{i=1}^{c} (u_{ik})^{q} - 1 \right)    (3)
Then, the objective function in Equation (1) is rewritten as
J_q = \sum_{k=1}^{n} \sum_{i=1}^{c} (u_{ik})^{q} d_{ik}    (4)
Under the normalization constraint of
\sum_{i=1}^{c} u_{ik} = 1 \quad (\forall k)    (5)
the Tsallis entropy functional becomes
S_q - \beta J_q - \sum_{k=1}^{n} \alpha_k \left( \sum_{i=1}^{c} u_{ik} - 1 \right)    (6)
where α_k and β are the Lagrange multipliers, and β corresponds to the inverse of the system temperature T. By applying the variational method, the stationary condition for the Tsallis entropy functional yields the following membership function for Tsallis-FCM [12]:
u_{ik} = \frac{ \left[ 1 - (1-q) \beta d_{ik} \right]^{\frac{1}{1-q}} }{ Z_k }    (7)
where
Z_k = \sum_{j=1}^{c} \left[ 1 - (1-q) \beta d_{jk} \right]^{\frac{1}{1-q}}    (8)
From Equation (7), the expression for the cluster centers v_i becomes
v_i = \frac{ \sum_{k=1}^{n} (u_{ik})^{q} x_k }{ \sum_{k=1}^{n} (u_{ik})^{q} }    (9)
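For concreteness, the following Python sketch implements one update step of Tsallis-FCM, computing the memberships of Equations (7) and (8) and the centers of Equation (9). It assumes d_ik is the squared Euclidean distance and q is not equal to 1; the function and variable names are illustrative and not taken from the paper.

import numpy as np

def tsallis_fcm_step(X, centers, q, beta):
    """One Tsallis-FCM update: memberships via Eqs. (7)-(8), centers via Eq. (9)."""
    # d_ik: squared Euclidean distances between cluster centers v_i and data points x_k.
    d = ((centers[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)   # shape (c, n)
    # Numerator of Eq. (7); the base is clipped so fractional powers stay defined.
    base = np.clip(1.0 - (1.0 - q) * beta * d, 1e-12, None)
    u = base ** (1.0 / (1.0 - q))
    u /= u.sum(axis=0, keepdims=True)                              # divide by Z_k of Eq. (8)
    # Eq. (9): centers as (u_ik)^q-weighted means of the data.
    w = u ** q
    new_centers = (w @ X) / w.sum(axis=1, keepdims=True)
    return u, new_centers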
3. Approximation of Membership Function
The performance of Tsallis-DAFCM is superior to that of other entropy-based FCM methods [12]. However, it remains unclear how to determine an appropriate q value and the highest annealing temperature T_high for a given data set. To tackle this problem, we first simplify the membership function using a series expansion.
3.1. Series Expansion of u_ik
The membership function u_ik in Equation (7) can be expanded in powers of βd_ik as follows:
u_{ik} = \frac{1}{Z_k} \left[ 1 - \beta d_{ik} + \frac{q}{2} (\beta d_{ik})^2 - \frac{q(2q-1)}{6} (\beta d_{ik})^3 + \cdots \right]    (10)
When the temperature is high enough, βd_ik is small, and using the series expansion up to the third-order terms, Equation (10) becomes
(11)
where
(12)
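To illustrate why the truncation is reasonable at high temperatures, the following sketch compares the exact factor [1 - (1-q)βd]^{1/(1-q)} appearing in Equation (7) with its expansion up to the third order in βd, as in Equation (10). The parameter values are arbitrary and chosen only for illustration.

import numpy as np

def exact_factor(x, q):
    # x = beta * d_ik; exact factor from Eq. (7), assuming q != 1
    return (1.0 - (1.0 - q) * x) ** (1.0 / (1.0 - q))

def truncated_factor(x, q):
    # Expansion of Eq. (10) truncated at the third-order term
    return 1.0 - x + 0.5 * q * x**2 - (q * (2.0 * q - 1.0) / 6.0) * x**3

q = 1.5
for x in (0.01, 0.1, 0.5):
    print(x, exact_factor(x, q), truncated_factor(x, q))
# For small beta*d (high temperature) the two values nearly coincide;
# the discrepancy grows as beta*d approaches 1.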
3.2. Determination of q and T_high
Based on the results in Section 3.1, we propose a method for determining both q and T_high simultaneously.
First, to ensure the convergence of the series in Equation (10), we use the following expression for β:
(13)
where the two parameters denote the maximum number of iterations N and the number of iterations used in the calculation of β, respectively. T_high can then be calculated as 1/β.
Then, replacing the distance d_ik with a continuous variable x, Equation (11) becomes
(14)
where
(15)
From this equation, q can be determined as follows. By designating the range of the data set along each coordinate, the maximum range of the distribution, r_max, is defined as
(16)
Furthermore, by assuming that the radius of each cluster lies between 0 and r, and that the membership tends to 1/c at x = r, Equation (14) can be solved for q. Consequently, we have the following formula for q:
(17)
It should be noted that in this equation, for simplicity, Z_k is set to
(18)
because Equation (7) tends to 1/c as T goes to infinity.
4. Proposed Algorithm
By combining the method presented in the previous section with Tsallis-DAFCM, we proposed the following fuzzy c-means clustering algorithm [14]. In this algorithm, the number of clusters in the data is assumed to be known in advance.
In the first algorithm, shown in Figure 1, the parameters β and q for a given data set are determined (N is the maximum number of iterations; in Equation (17), r and β are approximated by the estimated radius and the mean of β, respectively).
The second algorithm is the conventional Tsallis-DAFCM algorithm [12] .
1) Set the temperature reduction rate and the thresholds for the convergence tests in Steps 5 and 6.
2) Generate c initial clusters at random locations, and set the current temperature T to the highest temperature T_high.
3) Calculate the membership functions u_ik using Equation (7).
4) Calculate the cluster centers using Equation (9).
5) Compare the difference between the current centers and the centers of the previous iteration obtained at the same temperature T. If the convergence condition is satisfied, go to Step 6; otherwise, return to Step 3.
6) Compare the difference between the current centers and the centers obtained at the previous temperature. If the convergence condition is satisfied, stop. Otherwise, decrease the temperature according to the reduction rate set in Step 1, and return to Step 3.
Figure 1. Processing flow of the conventional method.
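A minimal sketch of Steps 1 to 6 is given below. It folds the membership and center updates of Equations (7)-(9) into the annealing loop; the reduction rate, thresholds, and initialization are illustrative defaults rather than the values used in the paper.

import numpy as np

def tsallis_dafcm(X, c, q, t_high, t_rate=0.8, eps=1e-4, max_inner=100, seed=0):
    """Deterministic-annealing Tsallis-FCM loop (illustrative parameter defaults)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)]   # Step 2: random initial centers
    temp = t_high                                             # Step 2: start from T_high
    prev_outer = centers.copy()
    while True:
        beta = 1.0 / temp
        for _ in range(max_inner):                            # Steps 3-5 at fixed temperature
            d = ((centers[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
            base = np.clip(1.0 - (1.0 - q) * beta * d, 1e-12, None)
            u = base ** (1.0 / (1.0 - q))
            u /= u.sum(axis=0, keepdims=True)                 # Eqs. (7)-(8)
            w = u ** q
            new_centers = (w @ X) / w.sum(axis=1, keepdims=True)   # Eq. (9)
            converged = np.abs(new_centers - centers).max() < eps  # Step 5
            centers = new_centers
            if converged:
                break
        if np.abs(centers - prev_outer).max() < eps:          # Step 6: across temperatures
            return u, centers
        prev_outer = centers.copy()
        temp *= t_rate                                        # decrease the temperature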
The experimental results in [14] confirmed that the first algorithm can determine T_high desirably. However, they also revealed that the q value obtained by this algorithm strongly depends on the estimation of the radius r in Equation (17). Accordingly, as shown in Figure 2, the first algorithm is divided into two parts. The first part determines β and T_high. In the second part, the K-means algorithm is utilized to calculate r by assuming that each data point belongs to its nearest cluster.
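The K-means preprocessing step can be sketched as follows. Here the radius of each cluster is estimated from the distances of its points to the assigned center, and the mean and maximum of these per-cluster radii are returned; the paper's exact definition of the radius may differ, so this is an assumption made for illustration.

import numpy as np
from sklearn.cluster import KMeans

def estimate_radii(X, c, random_state=0):
    """Rough K-means clustering followed by per-cluster radius estimates."""
    km = KMeans(n_clusters=c, n_init=10, random_state=random_state).fit(X)
    # Distance of each data point to the center of its assigned cluster.
    dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    r_mean = np.array([dist[km.labels_ == i].mean() for i in range(c)])
    r_max = np.array([dist[km.labels_ == i].max() for i in range(c)])
    return r_mean.mean(), r_max.max()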
5. Experiments
To examine the effectiveness of the proposed algorithm, we conducted two experiments.
Figure 2. Processing flow of the proposed method.
5.1. Experiment 1
The first experiment examined whether appropriate q and T_high values can be determined for a given data set, and the relation between the number of iterations N and the parameters β and q.
In this experiment, data sets containing (a) three clusters and (b) five clusters were used, as shown in Figure 3. Each cluster follows a normal distribution and contains 2,250 data points.
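As an illustration, data of the same kind (normally distributed clusters of equal size) can be generated as follows; the cluster centers and the standard deviation are arbitrary values chosen for illustration.

import numpy as np

rng = np.random.default_rng(0)
means = [(0.0, 0.0), (600.0, 0.0), (300.0, 500.0)]   # three cluster centers (illustrative)
std = 80.0                                           # common standard deviation (illustrative)
X = np.vstack([rng.normal(m, std, size=(2250, 2)) for m in means])
labels = np.repeat(np.arange(len(means)), 2250)      # ground-truth cluster numbers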
Dependencies of the maximum, minimum, mean, and standard deviation of β, of the mean radius of the data distribution, and of q for Figure 3(a) on the number of iterations N are summarized in Table 1, Table 2, and Table 7. Figure 4 shows plots of the maximum, minimum, and mean of q. In these tables, r_max denotes the maximum range of the distribution defined in Equation (16), and the maximum and mean radii of the distribution obtained by K-means are also reported.
In Table 7, the value of q for r_max, for example, is calculated using Equation (17). Based on the results in Table 1, the value of q was calculated by fixing β to its mean value 5.351e−06.
Figure 3. Numerical data (labels denote the cluster numbers). (a) Three clusters; (b) five clusters.
Figure 4. Maximum, minimum, and mean of q for the data in Figure 3(a).
Table 1. Maximum, minimum, mean, and standard deviation of β for the data in Figure 3(a).
Table 2. Maximum, minimum, mean, and standard deviation of the maximum and mean K-means radii for the data in Figure 3(a).
The values of r_max, r_max/2, and the estimated mean radius for Figure 3(a) are 860.0, 430.0, and 286.7, respectively.
From Table 1, it can be seen that the maximum of β tends to increase and the minimum of β tends to decrease with increasing N. However, when N becomes 100 or more, the mean of β does not depend on N.
From Table 2, it can be seen that the means of the K-means radii hardly depend on N, though their standard deviations become larger when N becomes 1,000 or more. This is caused by very occasional misclassifications by K-means.
Comparing the results in Table 7, it can be found that, when r is set to one of the radii obtained by K-means, q has smaller standard deviations, and the magnitude of the change in the mean values of q is comparatively small. This shows that q can be calculated stably by performing K-means first. It can also be found that the maximum of q increases with increasing N because of the random locations of the clusters. Even though the maximum radius overestimates the mean radius of the clusters, clustering can be performed properly in this case.
Accordingly, this overestimation has little impact on clustering in this experiment.
Dependencies of the maximum, minimum, mean, and standard deviation of β, of the mean radius of the data distribution, and of q for Figure 3(b) on the number of iterations N are summarized in Table 3, Table 4, and Table 8. Figure 5 shows plots of the maximum, minimum, and mean of q.
Figure 5. Maximum, minimum, and mean of q for the data in Figure 3(b).
Table 3. Maximum, minimum, mean, and standard deviation of β for the data in Figure 3(b).
Table 4. Maximum, minimum, mean, and standard deviation of the maximum and mean K-means radii for the data in Figure 3(b).
The values of r_max, r_max/2, and the estimated mean radius for Figure 3(b) are 860.0, 430.0, and 258.0, respectively. Based on the results in Table 3, the value of q was calculated by fixing β to 3.608e−06.
Comparing these results with those in Table 1, Table 2, and Table 7, it can be found that β for Figure 3(b) has larger standard deviations than that for Figure 3(a). This is caused by an increase in the number of combinations of data points and clusters.
In Table 8, it can be seen that q for the maximum K-means radius has the largest standard deviations. This is considered to be caused by the significant standard deviations of the radii shown in Table 2, suggesting variation in the estimation of the radius of the distribution. On the other hand, q for the mean K-means radius has the smallest standard deviations.
Substituting the values of β and q in Table 3 and Table 8 directly, Figure 6 and Figure 7 compare the membership function for a single cluster,
(19)
with
(20)
for the corresponding values of q and β, with N up to 10,000. In the equations, the cluster center is set to each of the cluster coordinates in Figure 3(b). The data projections on the xz and yz planes are also plotted.
The figures show no significant difference between the membership functions given by Equations (19) and (20).
Figure 6. Comparison of the membership functions calculated by Equations (19) and (20).
Figure 7. Comparison of the membership functions calculated by Equations (19) and (20).
Compared with the clusters in Figure 3(a), those in Figure 3(b) are not aligned in a straight line. However, the results for Figure 3(b) are as accurate as those for Figure 3(a). As a result, the largest source of error is considered to be the estimation of the radius. Since the clusters in Figure 3(a) are aligned in a straight line, the radius cannot be determined optimally by locating clusters randomly, as is done in the algorithm in Figure 1.
From these results, it can be confirmed that the examined numbers of iterations are sufficient to determine both q and T_high for the data sets in Figure 3.
5.2. Experiment 2
In this experiment, the Iris Data Set [16], which comprises data from 150 iris flowers represented by four-dimensional vectors, is used. The three clusters to be detected are Versicolor, Virginica, and Setosa, and the parameters of the algorithm in Figure 2 are set for this data set.
The values of r_max, r_max/2, and the estimated mean radius are 5.90, 2.95, and 1.97, respectively.
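As an illustration, assuming r_max is taken as the largest per-dimension coordinate range of the data, it can be computed for the Iris Data Set as below; the result is consistent with the value of 5.90 quoted above.

import numpy as np
from sklearn.datasets import load_iris

X = load_iris().data                       # 150 samples, 4 features
ranges = X.max(axis=0) - X.min(axis=0)     # range of each of the four features
r_max = ranges.max()                       # largest range across the dimensions
print(r_max, r_max / 2.0)                  # 5.9 and 2.95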
5.2.1. Determination of Parameters
The maximum, minimum, mean, and standard deviation of β, the K-means radii, and q are summarized in Table 5, Table 6, and Table 9. Figure 8 shows plots of the maximum, minimum, and mean of q. Based on the results in Table 5, the value of q was calculated by fixing β to 1.076e−01.
From Table 5, it can be seen that the dependency of β on N is the same as in Table 1 and Table 3. Table 6 shows that the means of the K-means radii can be calculated regardless of the value of N.
Figure 8. Maximum, minimum, and mean of q for the Iris Data Set.
Table 5. Maximum, minimum, mean, and standard deviation of β for the Iris Data Set.
Table 6. Maximum, minimum, mean, and standard deviation of the maximum and mean K-means radii for the Iris Data Set.
Table 9 shows that the standard deviations of q for the K-means radii are smaller than those for r_max and r_max/2, showing the effectiveness of the proposed method.
These tables show that the proposed method gives results similar to those in Section 5.1, and that a small number of iterations (up to 10) is sufficient to determine β, the radii, and q. In the algorithm shown in Figure 1, it is therefore unnecessary to repeat the calculations of the means of β and the radii the same number of times N.
It is also found that the estimation of the radius is not the only factor important for improving the accuracy, because the radius obtained by K-means gives superior results compared with those of r_max/2. For this reason, a preprocessing method that can estimate the locations of clusters quickly, such as the Canopy method [17], might make the proposed method even more effective.
5.2.2. Clustering Accuracy
The maximum and mean numbers of data points misclassified by the previous method [14], the proposed method, and Tsallis-DAFCM in 1,000 trials are summarized in Table 10 and Figure 9. T_high is fixed to 1/1.076e−01 = 9.294. In Tsallis-DAFCM, q is changed from 1.2 to 2.8 as typical values.
Even though the experiment was repeated 1000 times, the results obtained with the proposed method were almost identical.
By comparing the mean numbers of misclassified data points of the proposed method with those of the previous method, it can be confirmed that the results from the two methods are not significantly different for the corresponding parameter settings.
By comparing the mean numbers of misclassified data points of the proposed method with those of Tsallis-DAFCM, it can be confirmed that the proposed method obtains slightly better results. By examining the maximum numbers of misclassified data points, we see that Tsallis-DAFCM misclassifies data more often than the proposed method does.
These results confirm that appropriate values of q and T_high for the Iris Data Set can be estimated by the proposed method. Setting the radius to the mean radius obtained by K-means is most suitable for this data set.
5.2.3. Computational Time
Figure 10 compares the mean computational times of determining β and q, and of clustering, in 1,000 trials.
Figure 9. Maximum, minimum, and mean numbers of misclassified data points for the Iris Data Set for the previous method, the proposed method, and Tsallis-DAFCM.
Figure 10. Mean computational times for the Iris Data Set: (a) determination of β; (b) determination of q; (c) clustering.
Executions were conducted on an Intel(R) Core(TM)2 Duo CPU E6550 @ 2.33 GHz.
Figure 10(a) shows that the computational time of β does not depend on q and increases proportionally to N because, as can be seen from Equations (12) and (13), the value of β is determined independently of q and is calculated N times.
Figure 10(b), on the other hand, shows that the calculation of q for r_max sometimes takes a long time, suggesting that, in this case, the radius becomes too large to give an appropriate q value.
Figure 10(c) shows that when the radius is set to one of the radii obtained by K-means, clustering can be conducted quickly and stably.
5.3. Evaluation of the Proposed Algorithm
From the experimental results in Sections 5.1 and 5.2, the effectiveness of the proposed algorithm using K-means can be evaluated as follows:
1) β and T_high can be obtained with very small variances without consuming much computational time;
2) q can be determined with a very small variance using the mean radius obtained by K-means, again without consuming much computational time;
3) The numerical data sets and the Iris Data Set can be clustered desirably using the determined q and T_high.
6. Conclusions
The Tsallis entropy is a q-parameter extension of the Shannon entropy. FCM with Tsallis entropy maximization has desirable characteristics for clustering, especially when it is combined with DA as Tsallis-DAFCM. The extent of its membership function strongly depends on the parameter q and the initial annealing temperature T_high.
In this study, we proposed a method for approximating the membership function of Tsallis-DAFCM which, by using the K-means method as a preprocessing step, determines q and T_high automatically and algebraically from a given data set.
Experiments were performed on the numerical data sets and the Iris Data Set, and they showed that the proposed method can determine q and T_high algebraically, and more accurately and stably than the previous method, without consuming much computational time. It was also confirmed that the data can be partitioned into clusters appropriately using these parameters.
In the future, as described in Section 5.1, we first intend to explore ways to improve the accuracy of the estimates for q and T_high by using other rough clustering methods. We then intend to examine the effectiveness of the method using very complicated real-world data sets [18].
Cite this paper
Yasuda, M. (2017) On Utilization of K-Means for Determination of q-Parameter for Tsallis-Entropy-Maximized- FCM. Journal of Software Engineering and Applications, 10, 605-624. https://doi.org/10.4236/jsea.2017.107033
References
- 1. Rose, K. (1998) Deterministic Annealing for Clustering, Compression, Classification, Regression, and Related Optimization Problems. Proceedings of the IEEE, 86, 2210-2239. https://doi.org/10.1109/5.726788
- 2. Rose, K., Gurewitz, E. and Fox, G.C. (1990) A Deterministic Annealing Approach to Clustering. Pattern Recognition Letters, 11, 589-594.
- 3. Kirkpatrick, S., Gelatt, C.D. and Vecchi, M.P. (1983) Optimization by Simulated Annealing. Science, 220, 671-680. https://doi.org/10.1126/science.220.4598.671
- 4. Bezdek, J.C. (1981) Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York. https://doi.org/10.1007/978-1-4757-0450-1
- 5. Honda, K. and Ichihashi, H. (2007) A Regularization Approach to Fuzzy Clustering with Nonlinear Membership Weights. Journal of Advanced Computational Intelligence and Intelligent Informatics, 11, 28-34. https://doi.org/10.20965/jaciii.2007.p0028
- 6. Kanzawa, Y. (2012) Entropy-Based Fuzzy Clustering for Non-Euclidean Relational Data and Indefinite Kernel Data. Journal of Advanced Computational Intelligence and Intelligent Informatics, 16, 784-792. https://doi.org/10.20965/jaciii.2012.p0784
- 7. Tsallis, C. (1988) Possible Generalization of Boltzmann-Gibbs Statistics. Journal of Statistical Physics, 52, 479-487. https://doi.org/10.1007/BF01016429
- 8. Abe, S. and Okamoto, Y. (2001) Nonextensive Statistical Mechanics and Its Applications. Springer, Berlin.
- 9. Gell-Mann, M. and Tsallis, C. (2004) Nonextensive Entropy—Interdisciplinary Applications. Oxford University Press, New York.
- 10. Menard, M., Courboulay, V. and Dardignac, P. (2003) Possibilistic and Probabilistic Fuzzy Clustering: Unification within the Framework of the Non-Extensive Thermostatistics. Pattern Recognition, 36, 1325-1342.
- 11. Menard, M., Dardignac, P. and Chibelushi, C.C. (2004) Non-Extensive Thermostatistics and Extreme Physical Information for Fuzzy Clustering. International Journal of Computational Cognition, 2, 1-63.
- 12. Yasuda, M. (2010) Deterministic Annealing Approach to Fuzzy C-Means Clustering Based on Entropy Maximization. Advances in Fuzzy Systems, 2011, Article ID: 960635.
- 13. Yasuda, M. and Orito, Y. (2014) Multi-Q Extension of Tsallis Entropy Based Fuzzy C-Means Clustering. Journal of Advanced Computational Intelligence and Intelligent Informatics, 18, 289-296. https://doi.org/10.20965/jaciii.2014.p0289
- 14. Yasuda, M. (2016) Approximate Determination of Q-Parameter for FCM with Tsallis Entropy Maximization. Proceedings of the Joint 8th International Conference on Soft Computing and 17th International Symposium on Advanced Intelligent Systems, 700-705. https://doi.org/10.1109/scis-isis.2016.0151
- 15. MacQueen, J. (1967) Some Methods for Classification and Analysis of Multivariate Observations. Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, 281-297.
- 16. UCI Machine Learning Repository (1998) Iris Data Set. http://archive.ics.uci.edu/ml/datasets/Iris/
- 17. McCallum, A., Nigam, K. and Ungar, L.H. (2000) Efficient Clustering of High Dimensional Data Sets with Application to Reference Matching. Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 169-178.
- 18. De, T., Burnet, D.F. and Chattopadhyay, A.K. (2016) Clustering Large Number of Extragalactic Spectra of Galaxies and Quasars through Canopies. Communications in Statistics—Theory and Methods, 45, 2638-2653. https://doi.org/10.1080/03610926.2013.848286
Appendix
Table 7. Maximum, minimum, mean, and standard deviation of q for the data in Figure 3(a).
Table 8. Maximum, minimum, mean, and standard deviation of q for the data in Figure 3(b).
Table 9. Maximum, minimum, mean, and standard deviation of q for the Iris Data Set.
Table 10. Maximum, minimum, and mean numbers of misclassified data points for the Iris Data Set for the previous method, the proposed method, and Tsallis-DAFCM.
Table 11. Computational times of β, q, and clustering for the Iris Data Set.