Engineering
Vol. 4, No. 10 (2012), Article ID: 23357, 4 pages. DOI: 10.4236/eng.2012.410076

Method for Segmenting Tomato Plants in Uncontrolled Environments

Deny Lizbeth Hernández-Rabadán, Julian Guerrero, Fernando Ramos-Quintana

Monterrey Institute of Technology and Higher Education, Cuernavaca Campus, Morelos, México

Email: a01125746@itesm.mx

Received July 12, 2012; revised August 10, 2012; accepted August 25, 2012

Keywords: Image Segmentation; Color Images; Self-Organizing Maps; Bayesian Classifier

ABSTRACT

Segmenting vegetation in color images is a complex task, especially when the background and lighting conditions of the environment are uncontrolled. This paper proposes a vegetation segmentation algorithm that combines a supervised and an unsupervised learning method to segment healthy and diseased plant images from the background. During the training stage, a Self-Organizing Map (SOM) neural network is applied to create different color groups from a set of images containing vegetation, acquired from a tomato greenhouse. The color groups are labeled as vegetation and non-vegetation and then used to create two color histogram models corresponding to vegetation and non-vegetation. In the online mode, input images are segmented by a Bayesian classifier using the two histogram models. This algorithm has provided a qualitatively better segmentation rate of images containing plants’ foliage in uncontrolled environments than the segmentation rate obtained by a color index technique, resulting in the elimination of the background and the preservation of important color information. This segmentation method will be applied in disease diagnosis of tomato plants in greenhouses as future work.

1. Introduction

Segmenting vegetation in color images is a complex task, especially when the background and lighting conditions of the environment are uncontrolled. A variety of approaches have been applied to solve this problem in such complex environments, most of which have used image processing to segment only green vegetation areas in crop rows. Some of these approaches are known as threshold techniques, for example fixed thresholding [1] and Otsu-based methods [2,3]. Different indices have been widely employed to separate vegetation from background in color images, such as the Normalized Difference Index (NDI) [4], the Color Index of Vegetation Extraction (CIVE) [5] and Excess Green minus Excess Red (ExG-ExR) [6,7].

Other efficient but complicated methods have been developed for vegetation segmentation in color images. For example, Zheng [8] developed a segmentation algorithm by introducing a mean-shift procedure and a Back Propagation Neural Network (BPNN), with segmentation results better than those obtained by the index-based methods.

The methods mentioned above provide good results when green or healthy vegetation is segmented, but they are not efficient when the foliage of plants presents differently colored areas caused, for example, by chemical burn, diseases, lack of nutrients, or other factors, since these color areas are commonly eliminated in the segmentation process. In agricultural applications, such as the diagnosis of diseased leaves, the segmentation process is more complex because the affected color areas must be preserved when the vegetation is segmented, so that the plant can be diagnosed at a later recognition stage; this is even more complicated in an uncontrolled environment. Therefore, in this type of application, image processing methods based on green color segmentation are not feasible.

Some researchers have developed algorithms addressing this problem. For example, in [9] a Self-Organizing Map together with a BPNN is deployed to separate diseased and green color pixels from background color pixels in images of grape leaves. Their results showed an efficient segmentation rate in images with complex backgrounds. However, ambiguous color pixels are eventually extracted, causing noise.

The Self-Organizing Map (SOM) or Kohonen map [10] is an unsupervised, non-parametric Artificial Neural Network (ANN) method that has mainly been used for visualizing and interpreting large data sets in high-dimensional space by mapping them to a low-dimensional space based on a competitive learning scheme. The standard SOM algorithm has been used in many image segmentation applications [11-13]. However, the number of clusters in the algorithm (the SOM structure) must be specified a priori in order to obtain successful results. Owing to the intrinsic complexity of images with uncontrolled backgrounds, it is not always feasible to define the number of classes (clusters) needed to obtain a good segmentation using SOM. Because of this, different approaches have been used to find an appropriate SOM structure, for example, by embedding heuristics that dynamically change the structure during training [14,15]. The integration of SOM with other methods has also been explored. In [16], K-means and SOM were integrated, wherein the role of K-means was to segment the image at a coarse scale, after which SOM re-segmented the image at a fine scale. Genetic Algorithms (GA) and SOM have also been integrated to find the optimal number of cluster centers [17,18], which resulted in high accuracy on satellite images. Although the computational form of the SOM is very simple, there are still many aspects to be explored.

In this paper, a new method is proposed to improve the vegetation segmentation rate in images with uncontrolled background. Given the extensive use of SOM in complex images, an adaptive one-dimensional SOM is proposed to create different color groups from the training images, with heuristics embedded into the SOM to define the number of clusters. The color groups generated by the SOM training are the input of a supervised learning method, a Bayesian classifier, which completes the segmentation process. The Bayesian classifier is proposed because the number of features to be used is small, so the training process is faster than that of a neural network. In this work, 62 color images of healthy and unhealthy tomato plant foliage, acquired from a greenhouse, are used to train the model and 40 to test it.

Preliminary results of the method are presented to show its efficacy in segmenting the green vegetation color and the affected color areas in the leaves caused by any of the factors mentioned above.

The promising results obtained so far seem to be the basis for a disease diagnosis application in plants as future work.

The remainder of this paper is organized as follows. Section 2 introduces the SOM neural network and the Bayesian classifier. In Section 3, the proposed method is presented. The discussion and analysis of experimental results are treated in Section 4, and finally conclusions are given in Section 5.

2. SOM Neural Networks (SOM NN) and Bayesian Classifier

2.1. SOM NN

The SOM is a self-organizing neuronal system created in the 1980s by Teuvo Kohonen [10]. The SOM is an unsupervised method originally designed for data clustering, information visualization, data mining and data abstraction. Although the SOM was not designed for pattern recognition, self-organization in general is a fundamental pattern recognition process [19]. The typical SOM structure consists of two layers: an input layer and an output layer, without a hidden layer.

The first layer is composed of m neurons, one for each input variable; these act as buffers and distribute the input information to the neurons in the second layer. The processing is performed in the second layer, where the feature map is formed. This map normally has a rectangular architecture of nx × ny neurons operating in parallel, connected to each input neuron by weights. Sometimes one-dimensional or three-dimensional output layer architectures are used.
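To make the competitive learning scheme concrete, the following minimal sketch (our own illustration in Python/NumPy, not the authors' code) performs one training step of a one-dimensional SOM: the winner is the neuron whose weight vector is closest to the input, and the winner and its map neighbors are pulled toward the input.

```python
import numpy as np

def som_training_step(weights, x, lr=0.1, sigma=1.0):
    """One step of standard SOM training on a 1-D map.

    weights: (m, n) array, one n-dimensional weight vector per neuron.
    x:       (n,) input vector (e.g. the color features of one pixel).
    """
    # Competition: the winner is the neuron closest to the input.
    dists = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(dists))
    # Cooperation: a Gaussian neighborhood over the 1-D map indices
    # determines how strongly each neuron follows the winner.
    idx = np.arange(len(weights))
    h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))
    # Adaptation: pull the weights toward the input.
    weights += lr * h[:, None] * (x - weights)
    return winner
```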

In the standard SOM learning algorithm the output layer structure must be defined in advance, which is a problem because the success of the algorithm depends on the specified number of output neurons. One solution is to dynamically change the SOM structure during training. The research presented in this paper applies, with some changes, the approach proposed in [14], which integrates heuristics into the SOM algorithm to automatically find an appropriate number of clusters (or neurons) through growing, pruning and merging operations.

2.2. Bayesian Classifier

The Bayesian classifier is a statistical method that has been applied to image segmentation in controlled and uncontrolled environments with excellent results [20-22]. Bayesian reasoning is based on the assumption that optimal decisions can be made by relating probability distributions with observed data. Since the Bayesian classifier requires initial knowledge of the probabilities, it is considered a supervised method [23]. When these probabilities are not known a priori, they are often estimated by a training process in which examples, background knowledge, or previously available data are incorporated into an algorithm and a human determines the different classes. Classification with the Bayesian classifier is done by combining Bayes' theorem and the Bayes decision rule. In this work we are interested in classifying the pixels of an image into two classes (vegetation and non-vegetation).

In Section 3, the application of the Bayesian classifier to our problem is described.

3. The Proposed Method

In this work, the objective is to separate the regions of vegetation in an image of plant foliage. Since the environment is uncontrolled, factors such as lighting and shadows, as well as the different colors present in the vegetation due to disease, make segmentation a difficult process. We propose a method that combines an unsupervised and a supervised method for segmenting such complex images.

The proposed method is graphically defined in Figure 1. Based on its properties, each pixel is classified into one of two regions or classes (vegetation and non-vegetation). The pixel properties considered are different color channels, as it has been demonstrated that color is a strong feature for image segmentation [8,9,24]. In addition, color is computationally inexpensive and can provide more information than a grey-scale image or an edge-segmented image [14].

Before the pixel classification, a training phase composed of two processes is carried out as shown in Figure 1: the first is to train the SOM and the second is to train the vegetation and non-vegetation models. The steps of the proposed method are described as follows:

1) In order to determine the number of classes in the images and generate the color groups of the set of training images, the training process of the unsupervised method (the SOM) is performed. This set of training images was acquired from a tomato greenhouse with uncontrolled background and lighting at the Centro de Desarrollo Tecnológico “Tezoyuca” (CDT).

Figure 1. The proposed method for vegetation segmentation.

2) Changes in light intensity in the images affect the segmentation considerably. As shown in Figure 1, to reduce illumination effects in the images, the color model of the input image is first converted from RGB into the HSV and Lab color models as a preprocessing step. These color models were chosen because they are perceptually linear, that is, a linear relation exists between the color attributes and color perception [8]. In this work, the components H and b, from HSV and Lab respectively, were chosen as the pixel features.
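A sketch of this preprocessing step, assuming OpenCV for the color conversions (the paper does not name a library, and OpenCV loads images in BGR order), might look as follows:

```python
import cv2
import numpy as np

def extract_hb_features(image_bgr):
    """Convert an image to HSV and Lab and return the [H, b] feature
    vector of every pixel, as in step 2 of the proposed method."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    h = hsv[:, :, 0].astype(np.float32)  # hue channel (0-179 in OpenCV)
    b = lab[:, :, 2].astype(np.float32)  # b channel of Lab
    return np.stack([h.ravel(), b.ravel()], axis=1)  # (num_pixels, 2)
```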

3) In complex images, the different color groups that exist should be defined by taking into account their similarity relationships (common features, data correlations, etc.), with each group represented by an output neuron in the SOM.

In our case, the number of color groups should be greater than 3, in order to be able to separate the colors belonging to the background, the vegetation and other regions.

The SOM training algorithm applied in this work is the one proposed in [14]. The competition between neurons is described as:

$v_i = D(\mathbf{x}, \mathbf{w}_i), \quad i = 1, 2, \ldots, m$ (1)

where vi is the output value of the ith neuron with weight vector wi, x is an input vector, D is a distance measurement between the input vector and the weight vector, and m is a finite number of neurons.

A change was made to the heuristic applied to find the winner neuron of the competition, which is described as:

$c = \begin{cases} \arg\min_{i} v_i, & \text{if } \min_{i} v_i \leq T \\ \text{NULL}, & \text{otherwise} \end{cases}$ (2)

where T is a distance threshold.

When the result is c = NULL, the algorithm creates a new neuron, assigning the input vector as the weights of the new neuron. In [14], c = NULL when the mean value and the median value of the outputs of all neurons are approximately equal, and then a new neuron is created.

A weight map is generated with the SOM training, representing the different color groups that exist in the image.
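A simplified sketch of this competition-with-growth behavior, reading Equations (1) and (2) with a fixed distance threshold T (our assumption; the exact NULL test of the modified heuristic may differ), is:

```python
import numpy as np

def adaptive_som_step(weights, x, threshold, lr=0.1):
    """Competition with growth: if no neuron is close enough to the
    input (c = NULL), a new neuron is created from the input itself."""
    if len(weights) == 0:
        return np.array([x]), 0               # first input seeds the map
    v = np.linalg.norm(weights - x, axis=1)   # Eq. (1): outputs v_i
    c = int(np.argmin(v))
    if v[c] > threshold:                      # Eq. (2): c = NULL case
        weights = np.vstack([weights, x])     # grow: new neuron = input
        return weights, len(weights) - 1
    weights[c] += lr * (x - weights[c])       # otherwise update winner
    return weights, c
```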

4) The SOM, during its execution phase, uses the weight map to separate the pixels of the images into the different color groups. We use the Euclidean distance to measure the similarity between the input vector and each neuron in the map. The input vector is the pixel combination [H,b], which is labeled with the most similar color group. The similarity measure used is described as:

$D(X, W) = \sqrt{\sum_{j=1}^{n} (X_j - W_j)^2}$ (3)

where W is the weight vector, X is an input vector and n is the number of features. Each image is separated into its different color groups and new images are created, one for each color group. The set of images used in this process is the same as that used for training the SOM.
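As an illustration of this execution phase (our sketch), each pixel's [H,b] vector is labeled with the index of the nearest neuron of the trained weight map, using the Euclidean distance of Equation (3):

```python
import numpy as np

def assign_color_groups(features, weight_map):
    """Label each pixel feature vector with its most similar color
    group (nearest neuron in the Euclidean sense, Eq. (3)).

    features:   (num_pixels, n) array of [H, b] vectors.
    weight_map: (num_neurons, n) trained SOM weights.
    """
    # Pairwise distances between every pixel and every neuron.
    d = np.linalg.norm(features[:, None, :] - weight_map[None, :, :],
                       axis=2)
    return np.argmin(d, axis=1)  # color-group index per pixel
```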

5) The new images are labeled as vegetation and non-vegetation. K-means clustering assists in this process by classifying the neuron weights of the map into 3 groups or classes. The number of classes was set to 3 to account for the vegetation, background and other color regions in the images.

We consider that the component H gives more information about vegetation color. The [H,b] combination that corresponds to the vegetation class is the one with the maximum H value. Using the Euclidean distance, we label as vegetation the images of the color groups with the minimum distance to the class whose [H,b] combination is considered vegetation; otherwise, the image is labeled as non-vegetation.
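This labeling step might be realized as sketched below, assuming scikit-learn's KMeans (the paper does not specify an implementation): the neuron weights are clustered into three classes, the class whose centroid has the maximum H value is taken as vegetation, and each color group inherits the label of its nearest centroid.

```python
import numpy as np
from sklearn.cluster import KMeans

def label_color_groups(weight_map):
    """Cluster the SOM neuron weights [H, b] into 3 classes and label
    each neuron (color group) as vegetation (True) or not (False)."""
    km = KMeans(n_clusters=3, n_init=10).fit(weight_map)
    veg_class = int(np.argmax(km.cluster_centers_[:, 0]))  # max H centroid
    # labels_ already assigns each neuron to its nearest centroid.
    return km.labels_ == veg_class
```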

6) As shown in Figure 1, after the SOM training the next process in the proposed method is the creation of the vegetation and non-vegetation models. The RGB color model is used in this step and is normalized for each image in each color group set. Two color histogram models are created with all of the elements of each set of images, one for the group of vegetation images and the other for the non-vegetation group.

The histogram represents the relative frequency of each combination [rgb] in the image. For classification purposes, the histogram counts are converted into discrete probability distributions as follows:

$P([rgb]) = \dfrac{c[rgb]}{T_c}$ (4)

where c[rgb] represents the count in the histogram bin associated with the [rgb] color combination and Tc is the total count obtained by summing the counts in all the bins.
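A minimal sketch of this model-building step (our illustration; the 32 bins per channel follow the histogram dimension reported in Section 4):

```python
import numpy as np

def rgb_histogram(pixels, bins=32):
    """Build a normalized 3-D RGB histogram, Eq. (4): each bin count
    c[rgb] is divided by the total count Tc to give P([rgb]).

    pixels: (num_pixels, 3) array of RGB values in [0, 255].
    """
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist / hist.sum()  # discrete probability distribution
```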

Given the vegetation and non-vegetation histograms, we can compute the probability that a given color value (an [rgb] combination) belongs to the vegetation (v) or non-vegetation (~v) class using Bayes' theorem, described as:

$P(v \mid [rgb]) = \dfrac{P([rgb] \mid v)\, P(v)}{P([rgb])}$ (5)

$P(\sim\! v \mid [rgb]) = \dfrac{P([rgb] \mid \sim\! v)\, P(\sim\! v)}{P([rgb])}$ (6)

where the conditional probabilities P([rgb]|v) and P([rgb]|~v) and the a priori probabilities P(v) and P(~v) are computed directly from the vegetation and non-vegetation histograms, respectively.

A pixel is classified as vegetation if:

$P(v \mid [rgb]) \geq \Theta$ (7)

where 0 ≤ Θ ≤ 1 is a threshold. A faster way is to apply the rule

$\dfrac{P([rgb] \mid v)}{P([rgb] \mid \sim\! v)} \geq \Theta$ (8)

The conditional probabilities are computed as:

$P([rgb] \mid v) = \dfrac{v[rgb]}{T_v}$ (9)

$P([rgb] \mid \sim\! v) = \dfrac{nv[rgb]}{T_{nv}}$ (10)

where v[rgb] is the pixel count contained in bin [rgb] of the vegetation histogram, nv[rgb] is the equivalent count from the non-vegetation histogram, and Tv and Tnv are the total counts contained in the vegetation and non-vegetation histograms, respectively.
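Putting Equations (8)-(10) together, the per-pixel decision can be sketched as follows (our illustration; the ratio test is written as a product so that empty non-vegetation bins do not cause a division by zero):

```python
import numpy as np

def classify_vegetation(pixels, hist_v, hist_nv, theta=1.0, bins=32):
    """Classify pixels as vegetation by the likelihood-ratio rule of
    Eq. (8): P([rgb]|v) / P([rgb]|~v) >= theta.

    hist_v, hist_nv: normalized histograms as in Eqs. (9) and (10).
    pixels:          (num_pixels, 3) array of RGB values in [0, 255].
    """
    # Map each RGB value to its histogram bin index.
    idx = (pixels.astype(np.int64) * bins) // 256
    p_v = hist_v[idx[:, 0], idx[:, 1], idx[:, 2]]
    p_nv = hist_nv[idx[:, 0], idx[:, 1], idx[:, 2]]
    return p_v >= theta * p_nv  # boolean vegetation mask
```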

7) Due to the complex background, some small isolated regions of the background can be detected by the vegetation segmentation algorithm, and some pixels belonging to lesion colors can be eliminated. These regions have few pixels and are similar to salt-and-pepper noise. The morphological operations dilation and erosion were applied to remove each type of noise, respectively.
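This cleanup might look as follows, assuming OpenCV's morphology functions (the 3 × 3 kernel is illustrative):

```python
import cv2
import numpy as np

def remove_noise(mask):
    """Clean a binary vegetation mask (uint8, 0 or 255). Erosion
    removes small isolated foreground specks ('salt'); dilation then
    restores the vegetation regions and fills small holes ('pepper')."""
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)
    mask = cv2.dilate(mask, kernel, iterations=1)
    return mask
```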

4. Results and Discussion

The method has been tested with tomato images acquired from a real greenhouse, considered an uncontrolled environment. The images captured the foliage of tomato plants, and some of them contained more than a single leaf. We tried to capture the foliage in its natural state, without attempting to avoid shadows or overlapping leaves.

Because we cannot quantitatively compare other segmentation methods with our method, the evaluation of the results has been qualitative. However, the CIVE method is used as a basis for visual comparison with the segmentation obtained by our method.

Seven color groups were used during the SOM training process, which correspond to 7 neurons. A new image is created for each group. An example is shown in Figure 2, along with the original image.

The new images were labeled as vegetation (186 images) and non-vegetation (248 images). This set of images was used to create the vegetation and non-vegetation histograms. The dimension considered for the histograms was 32 × 32 × 32.

Figure 3 shows examples of images segmented with our method (middle column) and the CIVE method (right column) after morphological operations. The images have backgrounds and leaf color affectations of varying complexity.

The following observations have been drawn:

• Independently of the background complexity, the vegetation regions are well segmented with our method, even though some images contained green mildew on the ground, areas that could be taken for vegetation by the algorithm.

• The segmentation rate was visually better than that of the CIVE method, even in images with a less complex background, where CIVE could be expected to perform better.

• Most of the color affectations of the leaves were detected, in spite of their similarity in color to the background.

• Some highlights on the leaves, produced by the illumination conditions, were eliminated by the algorithm because they could cause problems in the disease diagnosis process to be carried out in the future.

Figure 2. (a) Original image; (b)-(h) Color groups generated with the SOM.

Figure 3. Examples of segmented images using the proposed method and CIVE.

• The green fruit was falsely extracted by both methods, owing to its similarity in color to the foliage. We consider that, using only color information, it is not possible to completely separate the green fruit from the true vegetation regions.

In general, the results showed that the proposed method provides desirable performance for vegetation segmentation in complex images. Even though the color of the soil is similar to the color of the lesions in the leaves, some areas where the CIVE algorithm failed were well detected by our method. Therefore, these promising results point towards the application of this method to the diagnosis of tomato plant diseases in greenhouses.

5. Conclusion

Segmentation of vegetation in images is essential for disease diagnosis applications. However, uncontrolled backgrounds and changing lighting conditions make the segmentation process complex. A variety of approaches has been applied to solve this problem in such complex environments, most of which yield good results when green or healthy vegetation is segmented; however, they are not efficient when the foliage of plants presents differently colored areas, sometimes caused by disease, since these color areas are commonly eliminated in the segmentation process.

In order to improve the segmentation efficiency under the conditions mentioned above, this paper presents a vegetation segmentation method that combines a supervised and an unsupervised learning method to segment healthy and diseased images of tomato plants from a complex background.

The preliminary results with the proposed method show that the algorithm can adequately segment vegetation regions, including regions that are not mostly green because of some factor (diseases, lack of nutrients, chemical burn, etc.), as well as non-vegetation regions.

6. Acknowledgements

We would like to thank Edson Efren Rios Cuevas from Centro de Desarrollo Tecnológico Tezoyuca for his support in the acquisition of the tomato images.

REFERENCES

  1. J. Hemming and T. Rath, “Computer-Vision-Based Weed Identification under Field Conditions Using Controlled Lighting,” Journal of Agricultural and Engineering Research, Vol. 78, No. 3, 2001, pp. 233-243. doi:10.1006/jaer.2000.0639
  2. R. H. Ji, Z. T. Fu and L. J. Qi, “Real-Time Plant Image Segmentation Algorithm under Natural Outdoor Light Conditions,” New Zealand Journal of Agricultural Research, Vol. 50, No. 5, 2007, pp. 847-854. doi:10.1080/00288230709510359
  3. W. Hongxia and L. Mingxi, “A Method of Tomato Image Segmentation Based on Mutual Information and Threshold Iteration,” IFIP International Federation for Information Processing, Vol. 294, Computers and Computing Technologies in Agriculture II, Vol. 2, Springer, Boston, 2009, pp. 1097-1104.
  4. A. J. Perez, F. López, J. V. Benlloch and S. Christensen, “Colour and Shape Analysis Techniques for Weed Detection in Cereal Fields,” Computers and Electronics in Agriculture, Vol. 25, No. 3, 2000, pp. 197-212. doi:10.1016/S0168-1699(99)00068-X
  5. T. Kataoka, T. Kaneko, H. Okamoto and S. Hata, “Crop Growth Estimation System Using Machine Vision,” Proceeding of the 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Vol. 2, 20-24 July 2003, pp. 1079-1083.
  6. J. C. Neto, G. E. Meyer and D. D. Jones, “Individual Leaf Extractions From Young Canopy Images Using Gustafson-Kessel Clustering and a Genetic Algorithm,” Computers and Electronics in Agriculture, Vol. 51, No. 1-2, 2006, pp. 66-85. doi:10.1016/j.compag.2005.11.002
  7. G. E. Meyer and J. C. Neto, “Verification of Color Vegetation Indices for Automated Crop Imaging Applications,” Computers and Electronics in Agriculture, Vol. 63, No. 2, 2008, pp. 282-293. doi:10.1016/j.compag.2008.03.009
  8. L. Zheng, J. Zhang and Q. Wang, “Mean-Shift-Based Color Segmentation of Images Containing Green Vegetation,” Computers and Electronics in Agriculture, Vol. 65, No. 1, 2009, pp. 93-98. doi:10.1016/j.compag.2008.08.002
  9. A. Meunkaewjinda, P. Kumsawat, K. Attakitmongcol and A. Srikaew, “Grape Leaf Disease Detection from Color Imagery Using Hybrid Intelligent System,” 5th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, Vol. 1, Krabi, 14-17 May 2008, pp. 513-516. doi:10.1109/ECTICON.2008.4600483
  10. T. Kohonen, “Self-Organized Formation of Topologically Correct Feature Maps,” Biological Cybernetics, Vol. 43, No. 1, 1982, pp. 59-69. doi:10.1007/BF00337288
  11. N. Li and Y. F. Li, “Feature Encoding for Unsupervised Segmentation of Color Images,” IEEE Transactions on Systems Man and Cybernetics, Part B: Cybernetics, Vol. 33, No. 3, 2003, pp. 438-447. doi:10.1109/TSMCB.2003.811120
  12. S. Dongcheng, X. Yashu and Z. Long, “Moving Object Detection Based on Scene Understanding,” International Conference on Information Engineering and Computer Science, Wuhan, 19-20 December 2009, pp. 1-4.
  13. K. Yang, H. Zhu and Y.-J. Pan, “Human Face Detection Based on SOFM Neural Network,” IEEE International Conference on Information Acquisition, Weihai, 20-23 August 2006, pp. 1253-1257. doi:10.1109/ICIA.2006.305929
  14. Y. Wu, Q. Liu and T. S. Huang, “An Adaptive Self-Organizing Color Segmentation Algorithm with Application to Robust Real-Time Human Hand Localization,” Proceedings of the 4th Asian Conference on Computer Vision, Taipei, January 2000, pp. 1106-1111.
  15. B. Doungchatom, P. Kumsawat, K. Attakitmongcol and A. Srikeaw, “Modified Self-Organizing Map for Optical Flow Clustering System,” Proceedings of the 7th WSEAS International Conference on Signal, Speech and Image Processing, Beijing, 15-17 September 2007, pp. 61-69.
  16. Z. Zhou, S. Wei, X. Zhang and X. Zhao, “Remote Sensing Image Segmentation Based on Self-Organizing Map at Multiple-Scale,” Proceedings of SPIE Geoinformatics: Remotely Sensed Data and Information, Vol. 6752, 25 May 2007, Nanjing, pp. 122-126.
  17. M. Awad, K. Chehdi and A. Nasri, “Multicomponent Image Segmentation Using Genetic Algorithm and Artificial Neural Network,” IEEE Geosciences and Remote Sensing Letters, Vol. 4, No. 4, 2007, pp. 571-575. doi:10.1109/LGRS.2007.903064
  18. M. Awad, K. Chehdi and A. Nasri, “Multi-Component Image Segmentation Using a Hybrid Dynamic Genetic Algorithm and Fuzzy C-Means,” IET Image Processing, Vol. 3, No. 2, 2009, pp. 52-62.
  19. H. Yin, “The Self-Organizing Maps: Background, Theories, Extensions and Applications,” Studies in Computational Intelligence, Vol. 115, 2008, pp. 715-762. doi:10.1007/978-3-540-78293-3_17
  20. L. F. Tian and D. C. Slaughter, “Environmentally Adaptive Segmentation Algorithm for Outdoor Image Segmentation,” Computers and Electronics in Agriculture, Vol. 21, No. 3, 1998, pp. 153-168. doi:10.1016/S0168-1699(98)00037-4
  21. U. Watchareeruetai, Y. Takeuchi, T. Matsumoto, H. Kudo and N. Ohnishi, “Computer Vision Based Methods for Detecting Weeds in Lawns,” IEEE Conference on Cybernetics and Intelligent Systems, Bangkok, 7-9 June 2006, pp. 1-6. doi:10.1109/ICCIS.2006.252275
  22. A. Tellaeche, X. P. Burgos-Artizzu, G. Pajares and A. Ribeiro, “A Vision-Based Method for Weeds Identification Through the Bayesian Decision Theory,” The Journal of the Pattern Recognition Society, Vol. 41, No. 2, 2008, pp. 521-530. doi:10.1016/j.patcog.2007.07.007
  23. T. Mitchell, “Machine Learning,” McGraw Hill Inc., New York, 1997.
  24. Q. Huynh-Thu, M. Meguro and M. Kaneko, “Skin-Color Extraction in Images with Complex Background and Varying Illumination,” Proceedings of the 6th IEEE Workshop on Applications of Computer Vision, 2002, pp. 280-285.