Journal of Computer and Communications, 2014, 2, 137-141
Published Online March 2014 in SciRes. http://www.scirp.org/journal/jcc
http://dx.doi.org/10.4236/jcc.2014.24018
How to cite this paper: Iscan, H. and Gunduz, M. (2014) Parameter Analysis on Fruit Fly Optimization Algorithm. Journal of
Computer and Communications, 2, 137-141. http://dx.doi.org/10.4236/jcc.2014.24018
Parameter Analysis on Fruit Fly
Optimization Algorithm
Hazim Iscan, Mesut Gunduz
Computer Engineering Department, Selçuk University, Konya, Turkey
Email: iscan@selcuk.edu.tr, mgunduz@selcuk.edu.tr
Received November 2013
Abstract
The fruit fly algorithm is a novel intelligent optimization algorithm based on the foraging behavior of real fruit flies. In the fruit fly algorithm, fixed parameters are obtained through manual testing in order to find the optimum solution of an optimization problem. This study aims to find the optimum solution by analyzing the constant parameter that governs the search direction of the algorithm, instead of defining it manually at the initialization stage. The study presents an automated approach that finds the related parameter with a grid search algorithm. The experimental results show that this approach can serve as an alternative way of finding this parameter, or other ones, in order to achieve an optimum model.
Keywords
Fruit Fly; Optimization
1. Introduction
Intelligent optimization algorithms have attracted the attention of many scholars in recent years. With their simple steps and efficient search methods, these algorithms have become the most widely used tools for optimization problems that demand performance. Particle swarm optimization (PSO) [1,2], ant colony optimization (ACO) [3], the artificial bee colony algorithm (ABC) [4], the simulated annealing (SA) algorithm [5], bacterial colony chemotaxis (BCC) [6] and the fruit fly optimization algorithm (FOA) [7,8] are some of them. The fruit fly algorithm recently joined this group of intelligent optimization algorithms. It was introduced by Wen Tsao Pan in 2011 and originates from the foraging behavior of fruit flies. The algorithm is easy to understand because of its simple structure, and its updating strategy, which is used to find the best solution, is simpler than those of other algorithms. However, the manual definition of this update strategy is a disadvantage. A method has been developed to eliminate this disadvantage and improve the performance of the algorithm. In this method, the manually defined parameters of the algorithm are determined with the help of a search algorithm. With this new method, the functions previously tested with the fruit fly algorithm were used and better results were obtained.
2. Fruit Fly Optimization Algorithm
The fruit fly optimization algorithm is one of the latest evolutionary computation techniques, introduced by Wen
Tsao Pan in 2011. The fruit fly optimization algorithm (FOA) is a new intelligent method based on the food-finding behavior of the fruit fly. The fruit fly itself is superior to other species in sensing and perception, especially in osphresis and vision. The osphresis organs of fruit flies can detect all kinds of scents floating in the air; a fruit fly can even smell a food source from 40 km away [7,8]. After it gets close to the food location, it also uses its sensitive vision to find the food and the flocking location of its companions, and flies in that direction. These behaviors of the fruit flies are illustrated in Figure 1.
The food-finding behavior of the fruit fly can be divided into several necessary steps, as shown in Figure 1, and the steps of the algorithm can be given as follows [7]:
1) Randomly initialize the fruit fly swarm location.
Init X_axis
Init Y_axis
2) Give each individual fruit fly a random direction and distance for searching food by osphresis.
Xi = X_axis + RandomValue
Yi = Y_axis + RandomValue
3) Since the food's position is unknown, the distance (Dist) to the origin is estimated first, and the smell concentration judgment value (S), which is the inverse of the distance, is then calculated.
Disti = √(Xi² + Yi²)
Si = 1/Disti
4) Substitute the smell concentration judgment value (S) into the smell concentration judge function (also called the fitness function) to find the smell concentration (Smelli) of the individual location of the fruit fly.
Smelli = Function(Si)
5) Identify the position of the best smell concentration (maximum value).
[bestSmell, bestIndex] = max(Smell)
6) Keep the best smell concentration value and the corresponding x, y coordinates; the fruit fly swarm uses vision to fly towards that location.
Smellbest = bestSmell
X_axis = X(bestIndex)
Y_axis = Y(bestIndex)
7) Enter iterative optimization and repeat steps 2-5; then judge whether the smell concentration is superior to the previous iteration's best smell concentration and, if so, implement step 6 [7].
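The seven steps above can be sketched in Python as follows. This is a minimal sketch; the population size, iteration count and the ±1 range of the random step are illustrative assumptions, not values fixed by the algorithm itself:

```python
import math
import random

def foa(fitness, sizepop=20, maxgen=100):
    """Basic Fruit Fly Optimization Algorithm (minimal sketch).

    fitness: the smell concentration judge function, Function(S).
    Returns the best smell value found and the final swarm location.
    """
    # Step 1: random initial swarm location.
    x_axis = random.uniform(-1, 1)
    y_axis = random.uniform(-1, 1)
    smell_best = None

    for gen in range(maxgen):
        smell, xs, ys = [], [], []
        for i in range(sizepop):
            # Step 2: random direction and distance for the osphresis search.
            xi = x_axis + random.uniform(-1, 1)
            yi = y_axis + random.uniform(-1, 1)
            # Step 3: distance to origin and smell concentration judgment value.
            dist = math.sqrt(xi ** 2 + yi ** 2)
            s = 1.0 / dist
            # Step 4: smell concentration from the judge (fitness) function.
            smell.append(fitness(s))
            xs.append(xi)
            ys.append(yi)
        # Step 5: position of the best (maximum) smell concentration.
        best_index = max(range(sizepop), key=lambda i: smell[i])
        best_smell = smell[best_index]
        # Steps 6-7: keep the best value; the swarm flies toward that location
        # only when the new best improves on the previous one.
        if smell_best is None or best_smell > smell_best:
            smell_best = best_smell
            x_axis, y_axis = xs[best_index], ys[best_index]
    return smell_best, (x_axis, y_axis)
```

For example, the maximization benchmark Y = 2 − X² from [7] can be passed as `foa(lambda s: 2 - s**2)`; the best smell then approaches the optimum value of 2.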
Figure 1. Illustration of the group iterative food searching of fruit fly [9].
[Figure 2 flowchart: initialize the swarm location (X_axis, Y_axis), the maximum iteration number (maxgen) and the population size (sizepop); for each individual i < sizepop compute Xi = X_axis + RandomValue, Yi = Y_axis + RandomValue, Disti = √(Xi² + Yi²), Si = 1/Disti and Smelli = Function(Si); take [bestSmell, bestIndex] = max(Smell) and set X_axis, Y_axis and Smellbest from the best individual; for gen < maxgen repeat the search, updating X_axis, Y_axis and Smellbest whenever bestSmell > Smellbest and recording Xbest(gen), Ybest(gen); an outer search loop over p < the maximum search iteration number stores yy(p) = Smellbest, Xbestp(p) = X_axis and Ybestp(p) = Y_axis, and finally scans yy to keep its minimum.]
Figure 2. The fruit fly algorithm (proposed).
3. The Proposed Approach
In the fruit fly optimization algorithm, the parameters are pre-defined in the initialization stage. These pre-defined parameters must be determined manually in order to achieve the optimum solution for the problem at hand.
The grid search algorithm is one of the most widely used parameter estimation algorithms [10]. The parameters are investigated within given ranges and with given incremental steps. The algorithm is a useful and easy way of finding the optimum value of a parameter within a given range.
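The idea can be sketched in a few lines of Python; the range, the step and the quadratic objective below are illustrative, not values taken from this study:

```python
def grid_search(evaluate, lower, upper, step):
    """Plain grid search (sketch): scan a range with a fixed increment
    and return the parameter value with the best (lowest) evaluation."""
    best_param, best_score = None, float("inf")
    value = lower
    while value <= upper:
        score = evaluate(value)
        if score < best_score:
            best_param, best_score = value, score
        value += step
    return best_param, best_score
```

For instance, `grid_search(lambda v: (v - 0.3)**2, 0.0, 1.0, 0.1)` scans eleven candidate values and returns the grid point closest to 0.3.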
In this study, a constant parameter concerning the direction of the algorithm is investigated using the grid search algorithm. To show how the proposed method works, a flowchart of the algorithm is drawn in Figure 2.
The performance of the proposed approach is evaluated by optimizing the two numeric functions from [7] given in Equation (1) and Equation (2). Equation (1) is a minimization problem and Equation (2) is a maximization problem.
Y = −5 + X²    (1)

Y = 2 − X²    (2)
The minimum value of the objective function in Equation (1) is −5, and the decision variable (X) should be 0.
The maximum value of the objective function in Equation (2) is 2, and the decision variable (X) should be 0.
To obtain the experimental results for the basic FOA and the proposed approach, each method was run 10 times; the obtained results are given in Table 1 and Figure 3.
4. Results and Discussion
Depending on the characteristics of the optimization problem, the basic FOA needs an optimized constant coefficient for calculating the step size. In this study, we propose using grid search to obtain the optimum value of this coefficient. The experimental results show that the proposed FOA is better than the basic FOA in terms of solution quality. The proposed method automatically adjusts the coefficient for each numeric benchmark function using grid search, which adapts the behavior of the method to the problem at hand. Therefore, the more successful results are obtained by the proposed method.
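A rough sketch of how the two pieces fit together — a grid search over the step coefficient of the FOA — might look as follows. The coefficient range (0.5 to 5.0 in steps of 0.5), the population and iteration settings, and the wiring to the benchmark Y = 2 − X² are all illustrative assumptions, not the authors' exact configuration:

```python
import math
import random

def run_foa(fitness, coeff, sizepop=20, maxgen=100, seed=0):
    """Basic FOA with the random search step scaled by `coeff`,
    the constant that is tuned here. Returns the best smell found."""
    rng = random.Random(seed)
    x_axis, y_axis = rng.uniform(-1, 1), rng.uniform(-1, 1)
    smell_best = None
    for _ in range(maxgen):
        candidates = []
        for _ in range(sizepop):
            # The random step is scaled by the coefficient under test.
            xi = x_axis + coeff * rng.uniform(-1, 1)
            yi = y_axis + coeff * rng.uniform(-1, 1)
            s = 1.0 / math.sqrt(xi ** 2 + yi ** 2)
            candidates.append((fitness(s), xi, yi))
        best = max(candidates)  # position with the best smell concentration
        if smell_best is None or best[0] > smell_best:
            smell_best, x_axis, y_axis = best
    return smell_best

def tune_step_coefficient(fitness, lower=0.5, upper=5.0, step=0.5):
    """Grid search over the step coefficient: run FOA once per
    candidate value and keep the coefficient with the best result."""
    best_coeff, best_value = None, None
    coeff = lower
    while coeff <= upper + 1e-9:
        value = run_foa(fitness, coeff)
        if best_value is None or value > best_value:
            best_coeff, best_value = coeff, value
        coeff += step
    return best_coeff, best_value
```

For the maximization benchmark Y = 2 − X², `tune_step_coefficient(lambda s: 2 - s**2)` returns the coefficient whose FOA run gets closest to the optimum value of 2, replacing the manual choice of the constant.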
5. Conclusion and Future Works
In this study, an adaptable version of the basic FOA algorithm is proposed and the performance of the proposed
Table 1. Experimental results.

Function        FFO       Proposed Approach
Y = −5 + X²    −4.9999   −5.00
Y = 2 − X²      1.9999    2.00
Figure 3. Solution curves.
method is examined on the numerical benchmark functions. According to the experimental results, the proposed method is better than the basic FOA algorithm. Future work includes applying the proposed method to different optimization problems such as energy estimation.
Acknowledgements
This study has been supported by Coordinatorship of Scientific Research Projects of Selcuk University.
References
[1] Kennedy, J. and Eberhart, R.C. (1995) Particle Swarm Optimization. Proceedings of IEEE International Conference on
Neural Networks, Perth, 1942-1948.
[2] Shi, Y. H. and Eberhart, R.C. (1998) Parameter Selection in Particle Swarm Optimization. Proceedings of the 7th In-
ternational Conference on Evolutionary Programming VII, 591-600.
[3] Dorigo, M. and Stützle, T. (2004) Ant Colony Optimization. MIT Press, Cambridge. http://dx.doi.org/10.1007/b99492
[4] Karaboga, D. (2005) An Idea Based on Honey Bee Swarm for Numerical Optimization. Technical Report-TR06, Er-
ciyes University, Engineering Faculty, Department of Computer Engineering.
[5] Kirkpatrick, S., Gelatt, Jr., C.D. and Vecchi, M.P. (1983) Optimization by Simulated Annealing. Science, 220, 671-
680. http://dx.doi.org/10.1126/science.220.4598.671
[6] Li, W. W., Wang, H., Zou, Z.J. and Qian, J.X. (2005) Function Optimization Method Based on Bacterial Colony Che-
motaxis. Journal of Circuits and Systems, 10, 58-63.
[7] Pan, W.T. (2011) A New Fruit Fly Optimization Algorithm: Taking the Financial Distress Model as an Example.
Knowledge-Based Systems, 26, 69-74. http://dx.doi.org/10.1016/j.knosys.2011.07.001
[8] Pan, W.T. (2011) A New Evolutionary Computation Approach: Fruit Fly Optimization Algorithm. Conference of Dig-
ital Technology and Innovation Management, Taipei. http://www.oitecshop.byethost16.com/FOA.html
[9] Lin, S.-M. (2013) Analysis of Service Satisfaction in Web Auction Logistics Service Using a Combination of Fruit Fly
Optimization Algorithm and General Regression Neural Network. Neural Computing and Applications, 22, 783-791.
http://dx.doi.org/10.1007/s00521-011-0769-1
[10] Cormen, T.H., Leiserson, C.E., Rivest, R.L. and Stein, C. (2001) Introduction to Algorithms. 2nd Edition, MIT Press.