In this paper, we present a new vector order, a solution to the open problem of the generalization of mathematical morphology to multicomponent images and multidimensional data. This approach uses the paradigm of P-order. Its principle is, first, to partition the multicomponent image in the attribute space into a chosen number of classes by a classification method; the attribute vectors are then ordered within each class (intra-class order); finally, the classes themselves are ordered by their barycenters (inter-class order). Thus, any two attribute vectors (or colors) belonging to the vector image can be compared. Equipped with this order relation, the attribute vectors of a multivariate image define a complete lattice, the ingredient necessary for the definition of the various morphological operators. In effect, this method groups strongly similar vectors together in order to approach an order with the same principle as the one defined on the set of real numbers. The more the number of classes increases, the more the colors of a given class are similar, and the closer the absolute adaptive referent comes to being optimal. Conversely, as the number of classes decreases towards two, our approach tends towards the hybrid order we developed previously. The proposed order has been implemented in different morphological operators and applied to different multicomponent images. The fundamental robustness of our approach, and its robustness to noise, have been tested. The results on the gradient, Laplacian and median filter operators show the performance of our new order.

Mathematical morphology (MM) is a non-linear and irreversible image processing approach based on a fundamental structure: the complete lattice

The paper is organized as follows: in Section 2, we present the proposed approach; a theoretical and experimental validation of our approach is then presented in Section 3. We conclude with some perspectives for future work, which will focus on morphological classification and vector segmentation.

The construction of morphological operators needs a complete lattice structure; in other words, the definition of new orders that extend the scalar algorithms to the vector case. The main issue with this kind of extension is the definition of a suitable vector ordering, because there is no natural ordering for vectors; this is what makes the application of mathematical morphology to color images complex. The aim of this paper is to present a new vector order, a solution to the open problem of the generalization of mathematical morphology to multicomponent images and multidimensional data [

We present in this section the principle and algorithms for our new order.

Firstly, the compact multidimensional histogram of the multicomponent image is calculated. Then the attribute space of the image is partitioned into K classes. Next, we order the attribute vectors of each class (intra-class order) using the hybrid vector order we proposed recently in [

Each pixel of the image is randomly assigned to a cluster, and the algorithm iterates as follows: the centers of gravity of the different groups are recalculated, and each pixel is reassigned to the group with the nearest center of gravity. Convergence is achieved when the centers no longer move.

Let n be the number of color components of the image,

Each point is assigned to a class with the function defined as follows:

Compact groups are then obtained by minimizing the following expression J:

where

The K-means algorithm works by minimizing the Equation (3) on J iteratively.
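The assignment and update steps described above can be sketched in Python. This is an illustrative re-implementation under stated assumptions (squared Euclidean distance, random initial centroids), not the code used in our experiments; the function and variable names are ours:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Naive K-means: assign each point to the nearest centroid,
    then recompute the centres of gravity, until they are fixed."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: nearest centre of gravity per point.
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: recompute the centre of gravity of each group.
        new_centroids = []
        for idx, cl in enumerate(clusters):
            if cl:
                new_centroids.append(tuple(sum(x) / len(cl) for x in zip(*cl)))
            else:
                new_centroids.append(centroids[idx])  # keep an empty cluster's centre
        if new_centroids == centroids:  # convergence: the centres are fixed
            break
        centroids = new_centroids
    return centroids, clusters
```

Minimizing the intra-class inertia J in this way yields the compact groups used to partition the histogram.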

Let I be the matrix associated with a multicomponent image with N planes (N > 1). The compact histogram of I is divided into K classes I_{k}, where I_{k} is the k^{th} partition and G_{k} the barycenter of the class I_{k}.

Let

Consider

the minimum absolute referent vector, where each component corresponds to the minimum, per component, over all elements of that class, and

The minimum absolute referent vector of the centers of gravity; h is the bit-mixing function defined in [
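As we understand it, the bit-mixing function h interleaves the bits of the components, from most significant to least significant, to produce a single scalar key; comparing keys then yields a total order. A minimal Python sketch of this idea (an illustration under that assumption, not the reference implementation of the cited article):

```python
def bit_mix(v, bits=8):
    """Interleave the bits of the components of v (most significant first)
    into a single scalar key; comparing keys gives a total order."""
    key = 0
    for b in range(bits - 1, -1, -1):   # from MSB down to LSB
        for c in v:                     # one bit of each component in turn
            key = (key << 1) | ((c >> b) & 1)
    return key
```

For example, any color whose first component has its most significant bit set precedes (in the descending sense) any color whose first component does not.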

Let’s

To classify the vectors belonging to different classes we will introduce

Let

Now, let’s consider two vectors

We can illustrate our algorithm by the flow chart of the
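To summarize the comparison rule before the algorithm itself: two colors are first compared by the rank of their class (inter-class order), and colors of the same class are compared by their rank inside the class (intra-class order). An illustrative Python sketch with hypothetical toy data (the colors, class labels and ranks below are ours, chosen only for the example):

```python
# Hypothetical toy data: three colours, two classes already ordered by
# barycentre (class 0 before class 1), and ranks inside each class.
class_of      = {(10, 10): 0, (12, 9): 0, (200, 50): 1}
rank_in_class = {(10, 10): 0, (12, 9): 1, (200, 50): 0}

def order_key(color):
    # Inter-class order first, intra-class order as tie-break.
    return (class_of[color], rank_in_class[color])

ordered = sorted(class_of, key=order_key)
```

Sorting by this key yields the total order used by all the morphological operators below.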

Principal Algorithm

Let v be a 3 × 3 neighborhood around the pixel p of image I with coordinates (i, j). We denote by L and M respectively the number of rows and columns of image I.

Let Hc be the compact histogram of an image, K the number of classes, and Classe a struct which contains the colors of each class and their indexes relative to the compact histogram, ordered in increasing order.

[G, Classe] =K-MeansBis (Hc, K)

[HistoOrderlyFinal, IndexOrderlyFinal, ClasseFinal] =OrderByClassification (Hc, Classe, K)

For i=1 to L do

For j=1 to M do

Determination of the neighborhood W=(Vp) around the pixel p(i,j)

//Determination of V_{min} and V_{max}: //

[Pos]=FindPositions (HistoOrderlyFinal, W); // returns the position of colors in W relatively to HistoOrderlyFinal//

V_{min}= HistoOrderlyFinal(Min(Pos));

V_{max}= HistoOrderlyFinal(Max(Pos));

//Determination of the Gradient and the Laplacian of the pixel p(i,j): //

Grad(i,j)=Gradient (p(i,j), V_{min}, V_{max} );

Lapl(i,j)=Laplacian (p(i,j), V_{min}, V_{max} );

MedFilter_Grad(i,j)=MedianFilter(p(i,j));

MedFilter_Lapl(i,j)=MedianFilter(p(i,j));

End for

End For

Function [V_{max}, V_{min}] = HybridOrderMaxMin(w)

// W: Histogram or matrix of colors //

// Let I be the matrix associated with a multicomponent image of P planes. We denote by Max and Min the conventional comparison operators returning the maximum and the minimum respectively. //

// Let

// The dilated V_{max} and the eroded V_{min} of the histogram W are defined by the following expressions:

// In the case where V_{max} and V_{min} are not unique, the candidate vectors are ordered according to the bit-mixing vector order; see the article [

End
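Once the histogram has been totally ordered, the determination of V_{min} and V_{max} in the principal algorithm reduces to rank lookups, as FindPositions does above. A Python sketch (illustrative; the dictionary `rank`, a name of ours, plays the role of the positions in HistoOrderlyFinal):

```python
def erode_dilate(window, rank):
    """The eroded value V_min and dilated value V_max of a neighbourhood are
    the colours of smallest and largest rank in the ordered histogram."""
    v_min = min(window, key=lambda c: rank[c])
    v_max = max(window, key=lambda c: rank[c])
    return v_min, v_max
```

The same two values feed the gradient, Laplacian and median-filter computations of the principal algorithm.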

Function [IndexNewOrderly, HistoNewOrderly] = HistoOrderly (HistoCompact, Index)

// HistoCompact: the compact histogram of an image, see the article [

// Index: a vector containing the position or order of the colors in HistoCompact. //

N=length(HistoCompact); // returns the total number of colors in the image

IndexNewOrderly_{min}=[ ];

IndexNewOrderly_{max}=[ ];

HistoNewOrderly_{min}=[ ];

HistoNewOrderly_{max}=[ ];

HistoNewOrderly=[ ];

j←0

do

j←j+1

[V_{max}, V_{min}] = HybridOrderMaxMin(HistoCompact)

Pos_{Min}= FindPosition (V_{min}) // return the position of V_{min} in HistoCompact

Pos_{Max}= FindPosition (V_{max})

IndexNewOrderly_{min}=[ IndexNewOrderly_{min}, Pos_{Min} ];

IndexNewOrderly_{max}=[ IndexNewOrderly_{max}, Pos_{Max} ];

IndexNewOrderly=[ IndexNewOrderly, IndexNewOrderly_{min}]; //returns the positions of the minimums//

HistoNewOrderly=[ HistoNewOrderly, V_{min}]; // returns the colors of HistoCompact ordered in increasing order//

HistoCompact(Pos_{Min})= [ ];

Index(Pos_{Min}) =[ ]

While ( j < N )

End
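The function above is, in effect, a selection sort: at each pass the minimal color under the hybrid order is extracted, its position relative to the original histogram is recorded, and the color is removed from the working histogram. A compact Python sketch (illustrative; the parameter `key`, a name of ours, stands in for the hybrid order):

```python
def histo_orderly(histo, key):
    """Selection sort mirroring HistoOrderly: repeatedly extract the minimal
    colour, record its position in the original histogram, then remove it."""
    work, index = list(histo), list(range(len(histo)))
    positions, ordered = [], []
    while work:
        j = min(range(len(work)), key=lambda i: key(work[i]))
        positions.append(index.pop(j))  # position relative to the original histogram
        ordered.append(work.pop(j))     # next colour in increasing order
    return positions, ordered
```

This is quadratic in the number of colors, which is why the compact histogram (rather than the full image) is ordered.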

Function [G, Classe] =K-MeansBis (HistoCompact, K)

// HistoCompact: The Compact Histogram

// K: The number of classes

// G: The matrix of the centers of gravity of the classes ordered by the function HistoOrderly//

// Classe.Index: struct of tree type. It returns the list of the positions of the colors, relative to HistoCompact, by class //

// Classe.Histo: struct of tree type. It returns the list of colors (histogram), relative to HistoCompact, by class //

[G1, Index1]=kmeans (HistoCompact, K);

// kmeans: Matlab function which returns the centroids of the classes in G1 and, in Index1, the number of the class to which each color of HistoCompact belongs //

Index2=[1:K]; // Vector containing the class numbers in increasing order from 1 to K //

[Index2NewOrderly, G]= HistoOrderly(G1, Index2);

For k=1:K

Classe(k).Index=FindPositions(Index1, Index2NewOrderly(k)); // returns the positions of the colors of class number k//

Classe(k).Histo=HistoCompact(Classe(k).Index); // returns the colors of class number k//

end

End

Function [HistoOrderlyFinal, IndexOrderlyFinal, ClasseFinal] =OrderByClassification(Hc, Classe, K)

K:// The number of classes //

Hc:// The Compact Histogram//

HistoOrderlyFinal:// Compact Histogram ordered by classification order//

IndexOrderlyFinal:// Positions of the colors of Compact Histogram ordered by classification order//

ClasseFinal: // struct of tree type. It returns the colors and the list of the positions of the colors ordered by classification order, relative to HistoCompact, by class //

HistoOrderlyFinal=[ ];

IndexOrderlyFinal=[ ];

[G, Classe] =K-MeansBis (Hc, K)

For k=1:K

[IndexNewOrderly, HistoNewOrderly]= HistoOrderly(Classe(k).Histo, Classe(k).Index )

ClasseFinal(k).Histo=HistoNewOrderly;

ClasseFinal(k).Index= IndexNewOrderly;

HistoOrderlyFinal= [HistoOrderlyFinal, HistoNewOrderly];

IndexOrderlyFinal=[ IndexOrderlyFinal, IndexNewOrderly];

end

End

In this section, the parameters below are studied in order to highlight the relevance and robustness of our approach. They are as follows:

・ The influence of the number of classes upon the proposed orders;

・ Statistical link between our order and other orders;

・ The performance of the proposed order through the gradient and Laplacian operators, as compared to other orders;

・ The robustness to noise of the median filter under the proposed order, compared to other orders.

We study here the influence of the number of classes on the proposed order. To do this, we first review some mathematical quantities [

Let A be an alphabet and F the set of sequences of length n with values in A. The Hamming distance between two elements a and b of F is the number of positions at which a and b differ.

In other words, if d(,) denotes the Hamming distance:

where the notation
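A minimal Python sketch of this multi-symbol Hamming distance, expressed as a percentage of the tuple length as in the tables below:

```python
def hamming_pct(a, b):
    """Multi-symbol Hamming distance: number of positions where the two
    sequences differ, as a percentage of their common length."""
    assert len(a) == len(b)
    diff = sum(x != y for x, y in zip(a, b))
    return 100.0 * diff / len(a)
```

For tuples of length 6 differing in 4 positions, this gives 66.67%, the value that recurs in the Savoise tables.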

Let (X, Y) be a pair of random variables with joint probability density P(X, Y), and let P(x) and P(y) denote the marginal distributions. The mutual information is then, in the discrete case:
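The discrete mutual information can be estimated from paired samples in Python as follows (an illustrative sketch of the standard formula I(X;Y) = Σ p(x,y) log2[p(x,y) / (p(x)p(y))]):

```python
from math import log2
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) = sum over (x, y) of p(x,y) * log2(p(x,y) / (p(x) * p(y))),
    with probabilities estimated from the paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

When X and Y are identical, this reduces to the entropy H(X); when they are independent, it is zero.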

The correlation coefficient between two random variables X and Y, each having a (finite) variance, is denoted Cor (X, Y) or sometimes

where
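A Python sketch of this coefficient, Cov(X, Y) / (σ_X σ_Y), estimated from samples (illustrative):

```python
def pearson(xs, ys):
    """Cor(X, Y) = Cov(X, Y) / (sigma_X * sigma_Y), from paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5
```

It equals +1 or -1 exactly when one variable is an affine function of the other.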

The proposed order requires a partitioning of the multidimensional histogram of the image and therefore the influence of the corresponding K parameter should be studied.

In fact, the variation of K has been studied on many natural and synthetic color images. To illustrate our approach, an analysis was made on the synthetic image Savoise of

We also present some results on real images. For practical reasons we made K vary between 2 and 6, as shown in the tables below.

For the proposed new order, i.e. the order by classification,

Synthetic image: Savoise | K = 2 | K = 3 | K = 4 | K = 5 | K = 6 |
---|---|---|---|---|---|

Order by classification (index relative to the Lexico order) | 3 | 2 | 2 | 3 | 3 |

| 4 | 1 | 1 | 2 | 4 |

| 2 | 3 | 3 | 1 | 2 |

| 1 | 4 | 4 | 4 | 1 |

| 5 | 5 | 5 | 5 | 5 |

| 6 | 6 | 6 | 6 | 6 |

synthetic image Savoise.

We see from this table that the order of the tuples may change when the number of classes K varies, and stabilizes at a certain rank, here K = 5.

The study of the influence of K was also performed on real images. Direct assessment is not possible because of the high number (Nt) of tuples in the multidimensional histogram. For example, the House and Mandrill images have Nt values of 33,925 and 61,662 respectively. However, when K = 1 the order by classification coincides with the hybrid reduced order. The same phenomenon occurs when K tends to Nt.

Therefore, we introduce in this section similarity measurements, namely the generalized (multi-symbol) Hamming distance (DH), the inter-class correlation coefficient (δ) and the mutual information, to evaluate the influence of the parameter K when passing from a number of classes K_{1} = i to a number of classes K_{2} = j. The Hamming distance between two tuples of the same length measures the number of positions at which the two tuples differ. Here DH is expressed as a percentage of the length of the tuples, corresponding in practice to an error rate.

We denote by DH_{ij}, δ_{ij} and IM_{ij} respectively the Hamming distance, the correlation coefficient and the mutual information between classes i and j.

We have calculated these similarity measurements on different images. An illustration is given for the Savoise, House and Lena images in Tables 2-4.

Over all the images processed, and particularly Tables 2-5, the results show that:

・ The order changes with the variation in the number of K classes. This is due to the colorimetric content of the image;

・ The inter-class Hamming similarity varies randomly from one image to another. This is inevitably due to the content of the images;

・ When K tends to Nt, the similarity between inter-class orders is higher. This has no practical interest for our application.

Indeed, the order of the tuples becomes stable when the number of classes increases.

Regarding the inter-class correlation coefficient δ_{ij}, the results are not relevant and do not allow an objective comparison of the orders, because of the influence of the number of tuples.

The similarity tests with the mutual information IM_{ij} were unsuccessful, because IM is constant regardless of the combination of numbers of classes K.

In addition, for i = i_{0}, varying j increases the inter-class Hamming distance between orders as the gap between i_{0} and j increases. In other words, the similarity of the inter-class orders is greater when the inter-class distance is small, as shown in Tables 3-5.

As for the linear correlation coefficient between inter-class orders, it decreases when the gap between i and j increases. This is not the case for the images

i | j | DH_{ij} (%) | δ_{ij} | IM_{ij} |
---|---|---|---|---|

2 | 3 | 66.67 | 0.2319 | 1.5248 |

2 | 4 | 66.67 | 0.2319 | 1.5248 |

2 | 5 | 50 | 0.9275 | 1.5248 |

2 | 6 | 0 | 1 | 1.5248 |

3 | 4 | 0 | 1 | 1.5248 |

3 | 5 | 50 | 0.5028 | 1.5248 |

3 | 6 | 66.67 | 0.2319 | 1.5248 |

4 | 5 | 50 | 0.5028 | 1.5248 |

4 | 6 | 66.67 | 0.2319 | 1.5248 |

5 | 6 | 50 | 0.9275 | 1.5248 |

i | j | DH_{ij} (%) | IM_{ij} |
---|---|---|---|

2 | 3 | 75.7808 | 12.2584 |

2 | 4 | 76.1805 | 12.2584 |

2 | 5 | 79.7665 | 12.2584 |

2 | 6 | 90.4722 | 12.2584 |

3 | 4 | 51.2883 | 12.2584 |

3 | 5 | 79.5667 | 12.2584 |

3 | 6 | 90.8928 | 12.2584 |

4 | 5 | 74.5294 | 12.2584 |

4 | 6 | 88.7370 | 12.2584 |

5 | 6 | 88.6634 | 12.2584 |

i | j | DH_{ij} (%) | IM_{ij} |
---|---|---|---|

2 | 3 | 98.7583 | 13.9559 |

2 | 4 | 99.7005 | 13.9559 |

2 | 5 | 99.7192 | 13.9559 |

2 | 6 | 99.7816 | 13.9559 |

3 | 4 | 82.9600 | 13.9559 |

3 | 5 | 88.3106 | 13.9559 |

3 | 6 | 88.1949 | 13.9559 |

4 | 5 | 86.0673 | 13.9559 |

4 | 6 | 88.1575 | 13.9559 |

5 | 6 | 87.8330 | 13.9559 |

i | j | DH_{ij} (%) | IM_{ij} |
---|---|---|---|

2 | 3 | 72.8768 | 13.7068 |

2 | 4 | 79.2956 | 13.7068 |

2 | 5 | 84.4980 | 13.7068 |

2 | 6 | 84.5754 | 13.7068 |

3 | 4 | 79.3166 | 13.7068 |

3 | 5 | 84.5402 | 13.7068 |

3 | 6 | 84.4558 | 13.7068 |

4 | 5 | 84.5894 | 13.7068 |

4 | 6 | 84.5683 | 13.7068 |

5 | 6 | 83.8161 | 13.7068 |

Savoise and Lena. One might therefore think that these results depend on the colorimetric content of the image.

Also, when the Hamming distance is zero, the correlation coefficient equals 1, and the distributions of the inter-class orders are therefore identical (see

We study in this section the statistical relationship between the order by classification (denoted OD_C) proposed in this article and other existing orders in the literature that we deemed relevant [

As before, the compact multidimensional histogram was calculated for each image; the rank of a tuple in the histogram is its index. We then calculated the ordered multidimensional histogram for each of the different orders.

Using the multi-symbol Hamming distance, we calculated the similarity rate of the index ranks between orders, two by two, for a multitude of images; some results are recorded in the tables below.

Note that we took several different values of K for the order by classification.

For each given order and image, the inter-order mutual information is identical, and of the same value as that of the inter-class classification orders.

No law allows us to predict the inter-order Hamming distance when the number K of classes varies. This could be explained by the colorimetric content of the images.

When K = 2, the Hamming distance between the hybrid order and the classification order is the smallest for each image, as shown in

In image processing, the gradient and the Laplacian are operators that highlight the high-frequency information in an image; indeed, they perform a contour-detection function. In discrete functional mathematical morphology,

p | q | DH_{pq} (%): K = 2 | K = 3 | K = 4 | K = 5 | K = 6 | δ_{pq}: K = 2 | K = 3 | K = 4 | K = 5 | K = 6 | IM_{pq} |
---|---|---|---|---|---|---|---|---|---|---|---|---|

OD_L | OD_E | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | 0.6923 | 0.6923 | 0.6923 | 0.6923 | 0.6923 | 1.5248 |

OD_L | OD_R | 66.67 | 66.67 | 66.67 | 66.67 | 66.67 | 0.2486 | 0.2486 | 0.2486 | 0.2486 | 0.2486 | 1.5248 |

OD_L | OD_H | 66.67 | 66.67 | 66.67 | 66.67 | 66.67 | 0.2615 | 0.2615 | 0.2615 | 0.2615 | 0.2615 | 1.5248 |

OD_L | OD_C | 66.67 | 33.33 | 33.33 | 33.33 | 66.67 | 0.9149 | 0.9149 | 0.9149 | 0.9149 | 0.9149 | 1.5248 |

OD_E | OD_R | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | 0.6179 | 0.6179 | 0.6179 | 0.6179 | 0.6179 | 1.5248 |

OD_E | OD_H | 50 | 50 | 50 | 50 | 50 | 0.4722 | 0.4722 | 0.4722 | 0.4722 | 0.4722 | 1.5248 |

OD_E | OD_C | 50 | 50 | 50 | 66.67 | 50 | 0.4722 | 0.3982 | 0.3983 | 0.4801 | 0.4722 | 1.5248 |

OD_R | OD_H | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | 0.9572 | 0.9572 | 0.9572 | 0.9572 | 0.9572 | 1.5248 |

OD_R | OD_C | 33.33 | 66.67 | 66.67 | 33.33 | 33.33 | 0.9572 | 0.1036 | 0.1036 | 0.8602 | 0.9572 | 1.5248 |

OD_H | OD_C | 0 | 66.67 | 66.67 | 50 | 0 | 1 | 0.2319 | 0.2319 | 0.9275 | 1 | 1.5248 |

p | q | DH_{pq} (%): K = 2 | K = 3 | K = 4 | K = 5 | K = 6 | δ_{pq}: K = 2 | K = 3 | K = 4 | K = 5 | K = 6 | IM_{pq} |
---|---|---|---|---|---|---|---|---|---|---|---|---|

OD_L | OD_E | 99.6950 | 99.6950 | 99.6950 | 99.6950 | 99.6950 | 0.7314 | 0.7314 | 0.7314 | 0.7341 | 0.7314 | 12.2584 |

OD_L | OD_R | 99.8212 | 99.8212 | 99.8212 | 99.8212 | 99.8212 | 0.6936 | 0.6936 | 0.6936 | 0.6936 | 0.2486 | 12.2584 |

OD_L | OD_H | 99.8317 | 99.8317 | 99.8317 | 99.8317 | 99.8317 | 0.7037 | 0.7037 | 0.7037 | 0.7037 | 0.7037 | 12.2584 |

OD_L | OD_C | 99.8528 | 99.8528 | 99.8738 | 99.8528 | 99.8423 | 0.6916 | 0.6886 | 0.6979 | 0.7240 | 0.7354 | 12.2584 |

OD_E | OD_R | 99.6004 | 99.6004 | 99.6004 | 99.6004 | 99.6004 | 0.9828 | 0.9828 | 0.9828 | 0.9828 | 0.9828 | 12.2584 |

OD_E | OD_H | 99.6319 | 99.6319 | 99.6319 | 99.6319 | 99.6319 | 0.9856 | 0.9856 | 0.9856 | 0.9856 | 0.9856 | 12.2584 |

OD_E | OD_C | 99.5583 | 99.5688 | 99.5688 | 99.5077 | 99.5077 | 0.9857 | 0.9753 | 0.9755 | 0.9734 | 0.969 | 12.2584 |

OD_R | OD_H | 98.1176 | 98.1176 | 98.1176 | 98.1176 | 98.1176 | 0.9572 | 0.9572 | 0.9572 | 0.9572 | 0.9572 | 12.2584 |

OD_R | OD_C | 98.7170 | 99.2323 | 99.1061 | 99.1061 | 99.1061 | 0.9947 | 0.9910 | 0.9913 | 0.9824 | 0.9824 | 12.2584 |

OD_H | OD_C | 38.3426 | 76.2541 | 76.3803 | 80.0189 | 90.8297 | 0.9671 | 0.9636 | 0.9840 | 0.9551 | 0.9551 | 12.2584 |

p | q | DH_{pq} (%): K = 2 | K = 3 | K = 4 | K = 5 | K = 6 | δ_{pq}: K = 2 | K = 3 | K = 4 | K = 5 | K = 6 | IM_{pq} |
---|---|---|---|---|---|---|---|---|---|---|---|---|

OD_L | OD_E | 99.9376 | 99.9376 | 99.9376 | 99.9376 | 99.9376 | 0.7910 | 0.7910 | 0.7910 | 0.7910 | 0.7910 | 13.9559 |

OD_L | OD_R | 99.9314 | 99.9314 | 99.9314 | 99.9314 | 99.9314 | 0.7603 | 0.7603 | 0.7603 | 0.7603 | 0.7603 | 13.9559 |

OD_L | OD_H | 99.9688 | 99.9688 | 99.9688 | 99.9688 | 99.9688 | 0.7476 | 0.7476 | 0.7476 | 0.7476 | 0.7476 | 13.9559 |

OD_L | OD_C | 99.9750 | 99.9563 | 99.9563 | 99.9563 | 99.9526 | 0.6880 | 0.6879 | 0.7691 | 0.7675 | 0.7796 | 13.9559 |

OD_E | OD_R | 99.8939 | 99.8939 | 99.8939 | 99.8939 | 99.8939 | 0.8850 | 0.8850 | 0.8850 | 0.8850 | 0.8850 | 13.9559 |

OD_E | OD_H | 99.9251 | 99.9251 | 99.9251 | 99.9251 | 99.9251 | 0.8736 | 0.8736 | 0.8736 | 0.8736 | 0.8736 | 13.9559 |

OD_E | OD_C | 99.9064 | 99.8503 | 99.8565 | 99.8939 | 99.8877 | 0.8753 | 0.8873 | 0.8347 | 0.8234 | 0.8283 | 13.9559 |

OD_R | OD_H | 99.4572 | 99.4572 | 99.4572 | 99.4572 | 99.4572 | 0.9940 | 0.9940 | 0.9940 | 0.9940 | 0.9940 | 13.9559 |

OD_R | OD_C | 99.4821 | 99.1140 | 99.2887 | 99.4322 | 99.4135 | 0.9789 | 0.9728 | 0.9735 | 0.9650 | 0.9638 | 13.9559 |

OD_H | OD_C | 66.5128 | 99.6880 | 99.7879 | 99.7754 | 99.8253 | 0.9756 | 0.9681 | 0.9687 | 0.9610 | 0.9585 | 13.9559 |

they rely on two basic operators, dilation and erosion, which correspond to the maximum and the minimum over a neighborhood of a pixel of the image, commonly called a vectorial structuring element. We used the symmetric Laplacian in this work. The following formulas describe these operators:

Let I be a multicomponent image with n (n ≥ 1) components and B a structuring element. It is noted

Hence the expressions of the gradient and the Laplacian, noted respectively Grad and Lapl:
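In the scalar (grayscale) case these expressions reduce to Grad = dilation minus erosion and, for the symmetric Laplacian, Lapl = dilation plus erosion minus twice the original. A per-pixel Python sketch under that reading (illustrative; in the vector case v_min and v_max are selected via the proposed order and the differences are taken per component):

```python
def grad_lapl(pixel, v_min, v_max):
    """Morphological gradient and symmetric Laplacian at one pixel, given the
    eroded value v_min and dilated value v_max of its neighbourhood."""
    grad = v_max - v_min                      # dilation minus erosion
    lapl = (v_max - pixel) - (pixel - v_min)  # = v_max + v_min - 2 * pixel
    return grad, lapl
```

A large gradient flags a contour; the sign of the Laplacian indicates on which side of the contour the pixel lies.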

The objective in this part is to evaluate quantitatively our gradient and Laplacian operators under the proposed classification order and the hybrid order. Due to the lack of direct methods for evaluating the results of these operators, we used binary segmentation (thresholding) to obtain contour segmentation images. For this we used the Otsu thresholding method [
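Otsu's method selects the threshold that maximizes the between-class variance of the resulting binary partition of the gray levels. A self-contained Python sketch for 8-bit values (an illustration of the method, not the implementation used in our experiments):

```python
def otsu_threshold(gray):
    """Otsu's method: choose the threshold t maximizing the between-class
    variance w0 * w1 * (m0 - m1)^2 of the two resulting populations."""
    hist = [0] * 256
    for g in gray:
        hist[g] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(256):
        w0 += hist[t]               # weight of the low class (levels <= t)
        if w0 == 0:
            continue
        w1 = total - w0             # weight of the high class
        if w1 == 0:
            break                   # no valid split beyond this point
        sum0 += t * hist[t]
        m0 = sum0 / w0              # mean of the low class
        m1 = (sum_all - sum0) / w1  # mean of the high class
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Applying this threshold to the gradient (or Laplacian) magnitude yields the binary contour maps evaluated below.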

In fact, the BSDS300 [

From a quantitative point of view, different contour-segmentation evaluation measures exist to assess the relevance of our contour-detection operators. In our context we used the Pratt distance as well as the Vinet measure.

The results are shown in

Contour operators | Gradient (323016) | Laplacian (323016) | Gradient (113044) | Laplacian (113044) |
---|---|---|---|---|

Hybrid order | 0.9146 | 0.3657 | 0.8497 | 0.5111 |

Classification order (K = 2) | 0.9145 | 0.3662 | 0.8178 | 0.5152 |

Classification order (K = 3) | 0.9148 | 0.3672 | 0.8760 | 0.5161 |

Classification order (K = 4) | 0.9145 | 0.3680 | 0.8110 | 0.5153 |

Classification order (K = 5) | 0.9141 | 0.3659 | 0.8269 | 0.5133 |

Classification order (K = 6) | 0.9144 | 0.3659 | 0.8754 | 0.5160 |

Contour operators | Gradient (323016.jpg) | Laplacian (323016.jpg) | Gradient (113044.jpg) | Laplacian (113044.jpg) |
---|---|---|---|---|

Hybrid order | 0.3790 | 0.2095 | 0.2855 | 0.2586 |

Classification order (K = 2) | 0.3781 | 0.2099 | 0.2938 | 0.2603 |

Classification order (K = 3) | 0.3790 | 0.2108 | 0.2990 | 0.2581 |

Classification order (K = 4) | 0.3790 | 0.2099 | 0.3016 | 0.2595 |

Classification order (K = 5) | 0.3790 | 0.2103 | 0.2894 | 0.2590 |

Classification order (K = 6) | 0.3790 | 0.2095 | 0.2981 | 0.2599 |

From the results obtained, we can make the following analyses:

- The Vinet measure generally gives constant values for the different values of K in the case of the proposed classification order; it therefore does not discriminate the operators' performance.

- However, Pratt's measure does discriminate the performance of the different operators.

- The set of results from the different images of the database shows that the proposed classification approach is generally more efficient than the hybrid order for the gradient operator.

- On the contrary, the hybrid order is more efficient than the order by classification for the Laplacian operator.

Note that the lower the measurement, the better the contour detection operator.

To test the robustness of our proposal to noise, we added impulse noise to different images of the BSDS300 benchmark database and studied the behavior of the proposed approach with the median filter. We chose two images for illustration (113044.jpg, 323016.jpg), and noise of different powers, described by the parameter p, was added to the images.
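Under a total vector order, the median filter simply keeps the neighborhood color of median rank. A Python sketch (illustrative; the dictionary `rank`, a name of ours, denotes positions in the ordered histogram):

```python
def vector_median(window, rank):
    """Vector median filter: sort the neighbourhood colours by their rank
    under the total vector order and keep the middle one."""
    ranked = sorted(window, key=lambda c: rank[c])
    return ranked[len(ranked) // 2]
```

Because the output is always one of the input colors, no false colors are introduced, which is the usual motivation for vector (rather than marginal) median filtering.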

For each noisy image we applied the median filter using the proposed order and the hybrid order, and then we calculated the error rates generated by each order. The results are shown in

Image 113044.jpg, noise power p | p = 0.001 | p = 0.007 | p = 0.01 | p = 0.03 | p = 0.05 |
---|---|---|---|---|---|

Hybrid order | 0.8488 | 0.8366 | 0.8353 | 0.8488 | 0.8409 |

Classification order (K = 2) | 0.9996 | 0.9991 | 0.9983 | 0.9996 | 0.9987 |

Classification order (K = 3) | 0.9996 | 0.9974 | 0.9983 | 0.9996 | 0.9996 |

Classification order (K = 4) | 0.9987 | 0.9996 | 0.9991 | 0.9987 | 0.9991 |

Classification order (K = 5) | 0.9983 | 0.9991 | 0.774 | 0.9983 | 0.9991 |

Classification order (K = 6) | 0.9991 | 1.0000 | 0.9996 | 0.9991 | 0.9996 |

Image 323016.jpg, noise power p | p = 0.001 | p = 0.007 | p = 0.01 | p = 0.03 | p = 0.05 |
---|---|---|---|---|---|

Hybrid order | 0.8162 | 0.8170 | 0.8053 | 0.8275 | 0.8288 |

Classification order (K = 2) | 0.9883 | 0.9922 | 0.9930 | 0.9909 | 0.9887 |

Classification order (K = 3) | 0.9848 | 0.9904 | 0.9839 | 0.9891 | 0.9913 |

Classification order (K = 4) | 0.9900 | 0.9865 | 0.997 | 0.9861 | 0.9961 |

Classification order (K = 5) | 0.9822 | 0.9857 | 0.9865 | 0.9857 | 0.9870 |

Classification order (K = 6) | 0.9900 | 0.9917 | 0.9909 | 0.9957 | 0.99 |

We note that the results obtained have the same order of magnitude across the set of images.

However, the error rate of the median filter is lower under the hybrid order than under the proposed order. We can conclude that our proposal is less robust to noise than the hybrid order. This could be explained by the fact that the classification algorithm used, namely K-means, is sensitive to noise.

In this paper, we have presented a new vector order, a solution to the open problem of the generalization of mathematical morphology to multicomponent images and multidimensional data. Our proposal is a P-order. Indeed, it first partitions the multicomponent image in the attribute space into different numbers of classes using the K-means classification method. Then the attribute vectors are ordered within each class (intra-class order). Finally, the classes themselves are ordered by their barycenters (inter-class order). Equipped with this order, the attribute space of the image is a complete lattice.

We can conclude that, on all the images tested, the proposed order by classification gives better results than the hybrid order for the gradient operator, specifically for edge detection. However, our order is less resistant to noise than the hybrid order.

In our future work we intend to improve the noise resistance of our order by using a classification method more robust than the K-means algorithm. We also know that K varies theoretically from 1 to Nt; the choice or implementation of an unsupervised evaluation criterion should automatically allow us to obtain the value K = Kopt producing, for each morphological operator (gradient, Laplacian, median filter, etc.), the optimal solution whatever the vector order may be.

Furthermore, we shall also focus our research on the development of new methods of vector morphological segmentation based on scalar or grayscale approaches.

Kouassi, A.F., Ouattara, S., Okaingni, J.-C., Koné, A., Vangah, W.J., Loum, G. and Clement, A. (2017) A New Vectorial Order Approach Based on the Classification of Tuples Attribute and Relative Absolute Adaptive Referent: Applications to Multicomponent Images. Journal of Software Engineering and Applications, 10, 546-563. https://doi.org/10.4236/jsea.2017.106030