Applied Mathematics
Vol. 06, No. 03 (2015), Article ID: 54511, 11 pages
DOI: 10.4236/am.2015.63046
Finding the Asymptotically Optimal Baire Distance for Multi-Channel Data
Patrick Erik Bradley1, Andreas Christian Braun2
1Institute of Photogrammetry and Remote Sensing, Karlsruhe Institute of Technology, Karlsruhe, Germany
2Remote Sensing and Landscape Information Systems, University of Freiburg, Freiburg, Germany
Email: patrick.bradley@kit.edu, andreas.braun@felis.uni-freiburg.de
Copyright © 2015 by authors and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/



Received 16 February 2015; accepted 6 March 2015; published 10 March 2015
ABSTRACT
A novel permutation-dependent Baire distance is introduced for multi-channel data. The optimal permutation is given by minimising the sum of these pairwise distances. It is shown that for most practical cases the minimum is attained by a new gradient descent algorithm introduced in this article, which is of biquadratic time complexity: quadratic both in the number of channels and in the size of the data. The optimal permutation allows us to introduce a novel Baire-distance kernel Support Vector Machine (SVM). Applied to benchmark hyperspectral remote sensing data, this new SVM produces results comparable with the classical linear SVM, but with higher kernel target alignment.
Keywords:
p-Adic Numbers, Ultrametrics, Baire Distance, Support Vector Machine, Classification

1. Introduction
The Baire distance was introduced to classification by [1] in order to produce clusters by grouping data into "bins". In this way, they seek to find inherent hierarchical structure in data defined by their features. Now, if there are many different features associated with the data, then it is reasonable to sort the feature vector by some criterion which ranks the contribution of each feature to this inherent hierarchical structure. We will see that there is a natural Baire distance associated to any given permutation of features. Hence, it is natural to ask for this task to be performed in reasonable time. In general, there is no efficient way of sorting $n$ variables, but if the task is to find a permutation satisfying some optimality condition, then often a gradient descent algorithm can be applied. In that case, the run-time complexity is decreased considerably.
In this paper we introduce a permutation-dependent Baire distance for data with $n$ features, and we define a linear cost function depending on the pairwise Baire distances for all possible permutations. The Baire distance we use depends on a parameter $\varepsilon \in (0,1)$, and we argue that the precise value of this parameter is seldom to be expected of interest. On the contrary, we believe that it practically makes more sense to vary this parameter and to study the limiting case $\varepsilon \to 0$. Our theoretical result is that there is a gradient descent algorithm which can find the asymptotic minimum for $\varepsilon \to 0$ with a run-time complexity of $O(n^2 N)$, where $N$ is the number of all data pairs.
The Support Vector Machine (SVM) is a well known technique for kernel-based classification. In kernel-based classification, the similarity between input data is modelled by kernel functions. These functions are employed to produce kernel matrices, which can be seen as similarity matrices of the input data in reproducing kernel Hilbert spaces. Via optimisation of a Lagrangian minimisation problem, a subset of input points is found which is used to produce a separating hyperplane for the data of the various classes. The final decision function depends only on the position of these data in the feature space and does not require estimation of first or second order statistics of the data. The user has considerable freedom in how to produce the kernel functions, which offers the option of producing individual kernel functions for the data.
As an application of our theoretical result, we introduce the new class of Baire-distance kernels, which are functions of our parametrised Baire distance. For the asymptotically optimal permutation, the resulting Baire distance SVM yields results comparable with the classical linear SVM on the AVIRIS Indian Pines dataset, a well known hyperspectral remote sensing dataset. Furthermore, the kernel target alignment [2] represents an a priori quality assessment, and it favours our new Baire distance multi-kernel SVM constructed from Baire distance kernels at different feature resolutions. This new multi-kernel combines, in a sense, our first approach with the approach of [1] , as it combines the different resolutions defined by their method of "bin" grouping. As our preliminary practical result, we obtain greater completeness in many of our clusters than with the classical linear SVM clusters.
2. Ultrametric Distances for Multi-Channel Data
After a short review of the ultrametric parametrised Baire distance, it is shown how to find for $n$ variables their asymptotically optimal permutation for a linear cost function defined by permutation-dependent Baire distances. The method has quadratic run-time complexity in the number of variables if the data size is fixed.
2.1. Baire Distance
Let $x, y$ be words over an alphabet $A$. Then the Baire distance is
$$d(x,y) = b^{-\ell(x \wedge y)},$$
where $\ell(x \wedge y)$ is the length of the longest common initial subword $x \wedge y$, as depicted in Figure 1, and $b > 1$ is a fixed base. The length of a word is defined as the number of letters from $A$ (with multiple occurrences counted). The choice of base in the Baire distance is, at least in our opinion, pure arbitrariness. Hence, $b^{-1}$ can be replaced by any fixed $\varepsilon$ in the interval $(0,1)$.

Figure 1. Two words with common initial subword.

Definition 2.1. The expression
$$d_\varepsilon(x,y) = \varepsilon^{\ell(x \wedge y)}$$
is the $\varepsilon$-Baire distance.

Later on, we will study the limiting case $\varepsilon \to 0$.

Remark 2.2. The metrics $d_\varepsilon$ for different values of $\varepsilon \in (0,1)$ induce the same topology, since they are monotone transforms of one another.

The Baire distance is important for classification, because it is an ultrametric. In particular, the strict triangle inequality
$$d_\varepsilon(x,z) \le \max\{d_\varepsilon(x,y),\, d_\varepsilon(y,z)\}$$
holds true. This is shown to lead to efficient hierarchical classification with good classification results [1] [3] [4] .
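To make the definition concrete, the following Python sketch (our own illustration; the function names are not from any library) computes the $\varepsilon$-Baire distance of two words and verifies the strict triangle inequality on a small set of binary words:

import itertools

def common_prefix_length(x, y):
    """Length l(x ^ y) of the longest common initial subword."""
    n = 0
    for a, b in zip(x, y):
        if a != b:
            break
        n += 1
    return n

def baire_distance(x, y, eps=0.5):
    """epsilon-Baire distance d_eps(x, y) = eps ** l(x ^ y)."""
    return eps ** common_prefix_length(x, y)

# The strict triangle inequality holds for every triple of words:
words = ["000", "001", "010", "110", "111"]
for x, y, z in itertools.product(words, repeat=3):
    assert baire_distance(x, z) <= max(baire_distance(x, y), baire_distance(y, z))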
Data representation is often related to some choice of alphabet. For instance, the distinction between "Low" and "High" values of a feature leads to words over a two-letter alphabet.
Example 2.3. The simplest example of a two-letter alphabet is $A = \{0, 1\}$. For the words $x = 001$ and $y = 010$ over $A$, the longest common initial subword is $0$, so $d_\varepsilon(x,y) = \varepsilon^1 = \varepsilon$.
The role of the parameter $\varepsilon$ is that of a resolution: the smaller $\varepsilon$, the more weight is given to the first letters in which two words differ.

Observe that $d_\varepsilon(x,y) \le \varepsilon^k$ holds if and only if the words $x$ and $y$ agree in their first $k$ letters.
2.2. Optimal Baire Distance
Given data with $n$ features taking values in the alphabet $A$, any permutation $\sigma$ of the features turns each datum $x = (x_1, \dots, x_n)$ into the word
$$x_{\sigma(1)}\, x_{\sigma(2)} \cdots x_{\sigma(n)},$$
i.e. a word with letters from the alphabet $A$. The permutation-dependent Baire distance $d_\varepsilon^\sigma(x,y)$ of two data $x$ and $y$ is then the $\varepsilon$-Baire distance between the corresponding words.
In order to determine a suitable permutation for the data, consider the average Baire distance. A high average Baire distance will arise if there is a large number of singletons, and branching occurs high up in the hierarchy. On the other hand, if there are lots of common initial features, then the average Baire distance will be low; in that case, clusters tend to have a high density, and there are few singletons. From these considerations, it follows that the task is to find a permutation $\sigma$ such that the cost function
$$\Phi_\varepsilon(\sigma) = \sum_{\{x,y\}} d_\varepsilon^\sigma(x,y) \quad (1)$$
is minimal, where the sum is taken over all pairs of data. This leads to the optimal Baire distance $d_\varepsilon^{\sigma_{\mathrm{opt}}}$.
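For a small number of features, the optimal permutation can be found by exhaustive search over all $n!$ permutations. The following sketch (an illustration of Equation (1) with our own toy data) evaluates $\Phi_\varepsilon$ for every permutation and picks the minimiser:

from itertools import combinations, permutations

def phi(data, sigma, eps=0.5):
    """Cost Phi_eps(sigma): sum of pairwise eps-Baire distances
    of the words obtained by reordering the features by sigma."""
    total = 0.0
    for x, y in combinations(data, 2):
        l = 0  # length of the common initial subword under sigma
        for i in sigma:
            if x[i] != y[i]:
                break
            l += 1
        total += eps ** l
    return total

# Four data points with n = 3 binary features (hypothetical toy data).
data = [(0, 0, 1), (0, 1, 1), (1, 0, 1), (0, 0, 0)]
best = min(permutations(range(3)), key=lambda s: phi(data, s))
print(best, phi(data, best))

Exhaustive search is of course infeasible for realistic $n$; the remainder of this section develops the efficient alternative.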
Let $N$ be the number of all pairs of data. Sorting the summands in Equation (1) by the length of the common initial subword yields
$$\Phi_\varepsilon(\sigma) = \sum_{k=0}^{n} c_k(\sigma)\,\varepsilon^k, \quad (2)$$
where $c_k(\sigma)$ denotes the number of pairs of data whose words under $\sigma$ have a longest common initial subword of length exactly $k$.

Some first properties of the coefficients $c_k(\sigma)$:

1. $c_k(\sigma) \ge 0$ for all $k$;
2. $c_0(\sigma) + c_1(\sigma) + \dots + c_n(\sigma) = N$;
3. $c_k(\sigma) + c_{k+1}(\sigma) + \dots + c_n(\sigma)$ equals the number of pairs agreeing simultaneously in the features $\sigma(1), \dots, \sigma(k)$;
4. $c_k(\sigma)$ depends only on $\sigma(1), \dots, \sigma(k+1)$;
5. $c_n(\sigma)$ is independent of $\sigma$: it counts the pairs of identical words.

These properties follow from Equation (2) above, and they imply some first properties of $\Phi_\varepsilon$: for instance, $\Phi_1(\sigma) = N$ for every permutation $\sigma$, and $\Phi_\varepsilon(\sigma) \to c_0(\sigma)$ as $\varepsilon \to 0$.
An important observation is that, for $\varepsilon \to 0$, minimising $\Phi_\varepsilon(\sigma)$ amounts to minimising the coefficient vector $(c_0(\sigma), c_1(\sigma), \dots, c_n(\sigma))$ lexicographically: the permutation with the smaller leading coefficients has the smaller cost for all sufficiently small $\varepsilon$.
The following two examples list all values of the cost function in the case of a small number of features.

Example 2.4. Table for

Example 2.5. Table for
2.3. Finding Optimal Permutations
Let $\sigma$ run through the permutations of the $n$ features. The function $\Phi_\varepsilon$ from Equation (1) is to be minimised, where the minimum is taken over all these permutations.
To this end, a weighted graph $G$ is associated with the data: its vertices correspond to the features, and the weight $w_{ij}$ of an edge is computed from counts of data pairs agreeing in the features involved. The counts $N_{i_1 \cdots i_k}$ denote the numbers of pairs of data agreeing simultaneously in the features $i_1, \dots, i_k$. Observe that all edge weights are non-negative, $w_{ij} \ge 0$, because passing to a further feature can only decrease the counts:
$$N_{i_1 \cdots i_k} \le N_{i_1 \cdots i_{k-1}}.$$
An injective path $\gamma$ in $G$ visits each vertex at most once; a path through all vertices thus corresponds to a permutation $\sigma_\gamma$ of the features, and conversely every permutation $\sigma$ determines such a path $\gamma_\sigma$, where the order of the vertices along the path is the order of the features under the permutation.

Definition 2.6. A permutation $\sigma$ is called asymptotically optimal if there is an $\varepsilon_0 \in (0,1)$ such that $\Phi_\varepsilon(\sigma)$ is minimal among all permutations for every $\varepsilon < \varepsilon_0$.

Lemma 2.7. If 

where the path 
Proof. Let 


Assume that 


from which the assertion follows for 



The following is an immediate consequence:
Corollary 2.8. Let


The minimising 


Corollary 2.9. Dijkstra's shortest path algorithm on this graph finds an asymptotically optimal permutation.
The main problem with applying Corollary 2.9 is the size of the graph on which the shortest path has to be found, which grows rapidly with the number $n$ of features. This motivates the following gradient descent algorithm.
Algorithm 2.10. (Gradient descent) Input.
Step 0. Set 

Step 1. Collect in 




Step




Output. The subgraph of 


This algorithm clearly terminates after finitely many steps.
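Since the precise bookkeeping of the algorithm relies on the edge weights of $G$, which are only sketched here, the following Python fragment illustrates the underlying descent idea in a simplified form (this is our reading, not a verbatim transcription): the permutation is extended one feature at a time, and in each step the feature separating the fewest of the still-agreeing data pairs is chosen, i.e. the locally steepest descent direction for the leading coefficients $c_k$:

from itertools import combinations

def greedy_permutation(data, n):
    """Greedily build a permutation: in each step choose the feature
    that keeps the most of the still-agreeing data pairs together."""
    agreeing = set(combinations(range(len(data)), 2))
    sigma, remaining = [], set(range(n))
    while remaining:
        def survivors(f):
            return sum(1 for a, b in agreeing if data[a][f] == data[b][f])
        best = max(remaining, key=survivors)
        agreeing = {(a, b) for a, b in agreeing if data[a][best] == data[b][best]}
        remaining.remove(best)
        sigma.append(best)
    return sigma

data = [(0, 0, 1), (0, 1, 1), (1, 0, 1), (0, 0, 0)]
print(greedy_permutation(data, 3))

Each of the $n$ steps scans at most $n$ features over at most $N$ surviving pairs, which is consistent with the biquadratic complexity claimed in the abstract.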
Lemma 2.11. Let 






Proof. We may assume that there exists some 

as otherwise 



as otherwise 





is a polynomial with real coefficients such that



An immediate consequence of the lemma is that gradient descent is asymptotically the method of choice:
Theorem 2.12. There exists a constant $\varepsilon_0 \in (0,1)$ such that for all $\varepsilon < \varepsilon_0$ the gradient descent method yields a permutation minimising $\Phi_\varepsilon$.
Proof. Let 




The competitiveness of the gradient descent method is manifest in the following Remarks:
Remark 2.13. Algorithm 2.10 is of run-time complexity at most
Proof. In the first step, there are 



Notice that the efficiency holds only for the case that the weights $w_{ij}$ are already known; their computation is addressed in what follows.
Lemma 2.14. Let


We will write 


the computation of which seems at first sight exponential in the dimension of






which counts all pairs 


as this allows us to define a nice way of computing the weights:
Lemma 2.15. Let 



Proof. This is an immediate consequence of the identity
which follows from Lemma 2.14.
Assume now that we are given for each pair 






and its corresponding cardinality
together with the conventions
Then the identity

is immediate. Its usefulness is that the right-hand side can be computed more quickly than the left-hand side:
Lemma 2.16 The cost of 

Proof. Take each 


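Whatever the precise form of the weights, the counts themselves can be obtained incrementally: the pairs agreeing in $i_1, \dots, i_k$ are found by one filtering pass over the pairs already known to agree in $i_1, \dots, i_{k-1}$. A minimal sketch of this counting scheme (our own illustration):

from itertools import combinations

def agreement_count(data, features):
    """N_{i_1 ... i_k}: number of data pairs agreeing simultaneously in
    the listed features, computed by shrinking the pair set stepwise."""
    pairs = set(combinations(range(len(data)), 2))
    for f in features:
        pairs = {(a, b) for a, b in pairs if data[a][f] == data[b][f]}
    return len(pairs)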
Algorithm 2.17 Input.



Step 1. Find minimal edge 

minimal. Set


Step




Output. Path 

Theorem 2.18. Algorithm 2.17 has run-time complexity at most $O(n^2 N)$, where $N$ is the number of all data pairs.
Proof. The complexity in Step 











Notice that the constant 



3. Combining Ultrametrics with SVM
Within this section, the potential of integrating ultrametrics into a state-of-the-art classifier, the Support Vector Machine (SVM) as introduced by [8] , is presented. SVM has been intensively applied to classification tasks in remote sensing, and several methodological comparisons have been established in previous work of the authors [9] [10] . At first, our methodology is outlined. Secondly, a classification result for a standard benchmark from hyperspectral remote sensing is shown.
3.1. Methodology
Kernel matrices are the representation of similarity between the input data used for SVM classification. To integrate ultrametrics into SVM classification, the crucial step is therefore to create a new kernel function [11] [12] . Instead of representing the Euclidean distance between input data, this new kernel function represents the Baire distance between them. To obtain an optimal kernel based on the Baire distance, at first an optimal permutation $\sigma_{\mathrm{opt}}$ of the features is computed; the new kernel function is then defined as a function of the permuted Baire distance $d_\varepsilon^{\sigma_{\mathrm{opt}}}$.
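The resulting Gram matrix can be handed to any standard SVM implementation as a precomputed kernel. The sketch below assumes the simple choice $k(x,y) = 1 - d_\varepsilon^{\sigma_{\mathrm{opt}}}(x,y)$; this particular transform and the toy data are our assumptions for illustration, not necessarily the formula used in the experiments:

import numpy as np
from sklearn.svm import SVC

def baire_kernel_matrix(words, eps=0.5):
    """Gram matrix K[i, j] = 1 - d_eps(w_i, w_j) for words that are
    assumed to be already reordered by the optimal permutation."""
    m = len(words)
    K = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            l = 0
            for a, b in zip(words[i], words[j]):
                if a != b:
                    break
                l += 1
            K[i, j] = 1.0 - eps ** l
    return K

words = ["0010", "0011", "1100", "1101"]   # hypothetical toy words
labels = [0, 0, 1, 1]
K = baire_kernel_matrix(words)
clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.predict(K))

Similarity matrices of this type are known to be positive definite (they are strictly ultrametric matrices), so the precomputed matrix is a valid Gram matrix.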
This new kernel function could be used for classification directly. However, one feature of kernel-based classification is that multiple kernel functions can be combined to increase classification performance [13] . The Baire distance is dependent on the resolution (bit depth) of the data: two very similar features will maintain a long common initial subword even at high bit depth, whereas less similar features agree only at coarser resolutions. Therefore, Baire distance kernels are computed at several bit depths and combined into a single multiple kernel. This multiple kernel also belongs to the new class of Baire distance kernels and has the advantage of incorporating the similarity at different bit depths. It is compared against the standard linear kernel frequently used for SVM:
$$k_{\mathrm{lin}}(x,y) = \langle x, y \rangle, \quad (18)$$
where the bracket $\langle \cdot , \cdot \rangle$ denotes the usual scalar product.
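A sketch of the multiple-kernel construction, under the assumption that the combination is an unweighted average of Baire kernels computed at several bit depths (the quantisation scheme and the equal weighting are our assumptions; baire_kernel_matrix is the function from the previous sketch):

import numpy as np

def quantise(X, bits):
    """Reduce each feature (scaled to [0, 1]) to the given bit depth and
    concatenate the per-feature bit strings into one word per datum."""
    levels = 2 ** bits
    Q = np.floor(X * levels).clip(0, levels - 1).astype(int)
    return ["".join(f"{int(v):0{bits}b}" for v in row) for row in Q]

def multi_baire_kernel(X, depths=(2, 4, 8), eps=0.5):
    """Average of Baire kernel matrices over several bit depths."""
    K = np.zeros((len(X), len(X)))
    for bits in depths:
        # baire_kernel_matrix as defined in the previous sketch
        K += baire_kernel_matrix(quantise(X, bits), eps)
    return K / len(depths)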
3.2. Application
Within this section, a comparison on a standard benchmark dataset from hyperspectral remote sensing is presented, cf. also [14] . The AVIRIS Indian Pines dataset consists of a $145 \times 145$ pixel image with 220 spectral channels, covering an agricultural area in Indiana, USA (Figure 2).
Although our implementation of Algorithm 2.17 is capable of processing all 220 features, only the first six principal components are considered. The reason is that there are two sources of coincidences: the first is coincidence due to spectral similarity of land cover classes (signal), the second is coincidence due to noise. For this work, only the coincidence of signal is relevant. Since the algorithm cannot distinguish between the two sources, only the first six principal components are considered relevant. They explain 99.66% of the sum of eigenvalues and are therefore believed to contribute considerably to coincidences due to signal and only marginally to coincidences due to noise.
At first, the dataset is classified with a linear kernel SVM as given in Equation (18). A visual result can be seen in Figure 3(a). The overall accuracy yielded is 53.5% and the


Figure 2. Hyperspectral image.

Figure 3. (a) Linear SVM; (b) Multi-Baire-kernel SVM.
The overall accuracy is the percentage of correctly classified pixels from the reference data. The $\kappa$ index additionally corrects this figure for agreement occurring by chance.
As can be seen, both results largely resemble each other. However, the result produced with the linear kernel tends to confuse the brown crop classes in the north with green pasture classes. On the other hand, the linear kernel SVM better recognises the street in the western part of the image.
The kernel target alignment between these kernels and the ideal kernel $K^* = t\, t^\top$ was computed. The ideal kernel is defined via the label vector $t$ whose entries are the class labels $\pm 1$ of the training data. The alignment of two Gram matrices $K_1, K_2$ is
$$A(K_1, K_2) = \frac{\langle K_1, K_2 \rangle_F}{\sqrt{\langle K_1, K_1 \rangle_F \, \langle K_2, K_2 \rangle_F}},$$
where $\langle \cdot , \cdot \rangle_F$ denotes the usual scalar product between Gram matrices, i.e. the Frobenius inner product.
The kernel target alignment takes values in the interval $[0,1]$, and a higher alignment with the ideal kernel indicates a kernel better suited to the classification task [2] .
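Computing the alignment takes only a few lines; in the sketch below, the label vector and the Gram matrix are hypothetical toy values:

import numpy as np

def alignment(K1, K2):
    """Kernel target alignment: ratio of Frobenius inner products."""
    inner = lambda A, B: float(np.sum(A * B))  # Frobenius inner product
    return inner(K1, K2) / np.sqrt(inner(K1, K1) * inner(K2, K2))

t = np.array([1, 1, -1, -1])   # hypothetical label vector
K_ideal = np.outer(t, t)       # ideal kernel t t^T
K = np.array([[1.0, 0.9, 0.1, 0.2],
              [0.9, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.8],
              [0.2, 0.1, 0.8, 1.0]])
print(alignment(K, K_ideal))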




The producers' accuracy shows what percentage of a particular ground (reference) class was correctly classified. The users' accuracy is a measure of the reliability of an output map generated from a classification scheme: it tells what percentage of a mapped class truly corresponds to that class in the reference. Both are local (i.e. class-dependent) measures of performance.
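In terms of a confusion matrix, both measures are ratios of the diagonal to the marginal sums. The sketch below assumes the common convention that rows hold the classified labels and columns the reference labels (the matrix entries are made up for illustration):

import numpy as np

C = np.array([[50,  5,  0],     # rows: classified class
              [10, 40,  5],     # columns: reference class
              [ 0,  5, 35]])

users_acc = np.diag(C) / C.sum(axis=1)      # correctness of each mapped class
producers_acc = np.diag(C) / C.sum(axis=0)  # completeness of each reference class
overall_acc = np.trace(C) / C.sum()
print(users_acc, producers_acc, overall_acc)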
Table 1. Hyperspectral image (channels R:25, G:12, B:1).
Table 2. Hyperspectral image (continued).
As had to be expected, each classification approach outperformed the other for some classes. The approach based on the multiple Baire kernel yields higher producers' accuracies for many of the classes.
Since the producers' accuracy outlines what fraction of the pixels from the reference is found in the classification (completeness), while the users' accuracy outlines what fraction of the pixels in one class is correct, it can be concluded that the proposed approach produces more complete results for many classes than the standard linear kernel approach. Of course, due to the low overall accuracy values yielded, the approach should be extended by applying e.g. Gaussian functions over the similarity matrices.
4. Conclusion
Finding optimal Baire distances defined by permutations of $n$ features is possible in the asymptotic sense: for sufficiently small values of the parameter $\varepsilon$, a gradient descent algorithm finds the minimising permutation with run-time quadratic both in the number of channels and in the size of the data. The resulting optimal Baire distance defines a new class of Baire distance kernels for SVM classification. On the AVIRIS Indian Pines benchmark, the Baire distance multi-kernel SVM produces results comparable with the classical linear SVM, with higher kernel target alignment and greater completeness for many classes.
Acknowledgements
This work has grown out of a talk given at the International Conference on Classification (ICC 2011) and the discussions afterwards. The first author is funded by the Deutsche Forschungsgemeinschaft (DFG), and the second author by the Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR). Thanks to Fionn Murtagh, Roland Glantz, and Norbert Paul for valuable conversations, as well as to Fionn Murtagh and David Wishart for organising the International Conference on Classification (ICC 2011) in Saint Andrews, Scotland. The article processing charge was funded by the German Research Foundation (DFG) and the Albert Ludwigs University Freiburg in the funding programme Open Access Publishing.
Cite this paper
Bradley, P.E. and Braun, A.C. (2015) Finding the Asymptotically Optimal Baire Distance for Multi-Channel Data. Applied Mathematics, 6, 484-495. doi: 10.4236/am.2015.63046
References
- 1. Contreras, P. and Murtagh, F. (2010) Fast Hierarchical Clustering from the Baire Distance. In: Locarek-Junge, H. and Weihs, C., Eds., Classification as a Tool for Research, Springer Berlin Heidelberg, Germany, 235-243.
http://dx.doi.org/10.1007/978-3-642-10745-0_25 - 2. Cristianini, N., Shawe-Taylor, J., Elisseeff, A. and Kandola, J. (2006) On Kernel-Target Alignment. In: Holmes, D.E., Ed., Innovations in Machine Learning, Springer Berlin Heidelberg, Germany, 205-256.
- 3. Murtagh, F. (2004) On Ultrametricity, Data Coding, and Computation. Journal of Classification, 21, 167-184.
http://dx.doi.org/10.1007/s00357-004-0015-y - 4. Benois-Pineau, J., Khrennikov, A.Y. and Kotovich, N.V. (2001) Segmentation of Images in p-Adic and Euclidean Metrics. Doklady Mathematics, 64, 450-455.
- 5. Bradley, P.E. (2010) Mumford Dendrograms. The Computer Journal, 53, 393-404.
http://dx.doi.org/10.1093/comjnl/bxm088 - 6. Bradley, P.E. (2008) Degenerating Families of Dendrograms. Journal of Classification, 25, 27-42.
http://dx.doi.org/10.1007/s00357-008-9009-5 - 7. Bradley, P.E. (2009) On p-Adic Classification. p-Adic Numbers, Ultrametric Analysis, and Applications, 1, 271-285.
- 8. Cortes, C. and Vapnik, V. (1995) Support-Vector Networks. Machine Learning, 20, 273-297.
http://dx.doi.org/10.1007/BF00994018 - 9. Braun, A.C. (2010) Evaluation of One-Class SVM for Pixel-Based and Segment-Based Classification in Remote Sensing. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 38, 160-165.
- 10. Braun, A.C., Weidner, U., Jutzi, B. and Hinz, S. (2011) Integrating External Knowledge into SVM Classification—Fusing Hyperspectral and Laserscanning Data by Kernel Composition. In: Heipke, C., Jacobsen, K., Rottensteiner, F., Müller, S. and Sörgel, U., Eds., High-Resolution Earth Imaging for Geospatial Information, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 38, Part 4/W19 (on CD).
- 11. Amari, S. and Wu, S. (1999) Improving Support Vector Machine Classifiers by Modifying Kernel Functions. Neural Networks, 12, 783-789.
http://dx.doi.org/10.1016/S0893-6080(99)00032-5 - 12. Braun, A.C., Weidner, U., Jutzi, B. and Hinz, S. (2012) Kernel Composition with the One-against-One Cascade for Integrating External Knowledge into SVM Classification. Photogrammetrie-Fernerkundung-Geoinformation, 2012, 371-384.
http://dx.doi.org/10.1127/1432-8364/2012/0124 - 13. Sonnenburg, S., Rätsch, G., Schäfer, C. and Schölkopf, B. (2006) Large Scale Multiple Kernel Learning. The Journal of Machine Learning Research, 7, 1531-1565.
- 14. Braun, A.C., Weidner, U. and Hinz, S. (2010) Support Vector Machines for Vegetation Classification? A Revision. Photogrammetrie-Fernerkundung-Geoinformation, 2010, 273-281.
http://dx.doi.org/10.1127/1432-8364/2010/0055