**Applied Mathematics**

Vol.08 No.03(2017), Article ID:75023,15 pages

10.4236/am.2017.83032

Generating Epsilon-Efficient Solutions in Multiobjective Optimization by Genetic Algorithm

El-Desouky Rahmo^{1,2}, Marcin Studniarski^{3}

^{1}Department of Mathematics, Faculty of Science, Taif University, Khurma, KSA

^{2}Mathematics Department, Faculty of Science, Mansoura University, Mansoura, Egypt

^{3}Faculty of Mathematics and Computer Science, University of Łódź, Łódź, Poland

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 12, 2017; Accepted: March 27, 2017; Published: March 30, 2017

ABSTRACT

We develop a new evolutionary method of generating epsilon-efficient solutions of a continuous multiobjective programming problem. This is achieved by discretizing the problem and then using a genetic algorithm with some derived probabilistic stopping criteria to obtain all minimal solutions for the discretized problem. We prove that these minimal solutions are epsilon-efficient solutions to the original problem. We also present some computational examples illustrating the efficiency of our method.

**Keywords:**

Vector Optimization, Approximate Solutions, Genetic Algorithm, Stopping Criteria

1. Introduction

The goal of multiobjective optimization, also called vector optimization, is to find a certain set of optimal (efficient) elements of a nonempty subset of a partially ordered linear space. However, finding an exact description of this set often turns out to be practically impossible or computationally too expensive. Therefore, many researchers have focused their efforts on approximation procedures and approximate solutions (see e.g. [1] [2] and references therein).

More than three decades ago, the notion of $\epsilon $ -efficiency was introduced by Loridan [3] for multiobjective optimization problems (MOPs). This concept has since been used, e.g., in [2] [4] [5] . To deal with a continuous multiobjective optimization problem, one has to consider a finite discretization of the set of feasible points (see Section 3 below). Discretization of the search space is one of the effective techniques for obtaining approximate solutions of MOPs (see e.g. [6] [7] ). The aim of the present paper is to develop a method of generating $\epsilon $ -efficient solutions (as defined in [4] ) of a continuous MOP. This is achieved by discretizing the problem and then using a genetic algorithm according to the scheme described in [8] . In this way, some probabilistic stopping criteria are obtained for this procedure. They are given in the form of an upper bound on the number of iterations needed to find all minimal elements of a finite partially ordered set with a prescribed probability. Supporting theoretical results are established and some computational examples are provided.

2. Stopping Criteria for Genetic Algorithms

In this section we review the results of [8] on probabilistic stopping criteria, which will be applied in Section 4 to a continuous multiobjective optimization problem.

2.1. Random Heuristic Search

The RHS (Random Heuristic Search) algorithm, described in [9] , is defined by a fixed initial population $\stackrel{^}{p}$ and a transition rule $\tau $ which, for a given population $p$ , determines a new population $\tau \left(p\right)$ . Iterating $\tau $ , we obtain a sequence of populations:

$\stackrel{^}{p},\text{}\tau \left(\stackrel{^}{p}\right),\text{}{\tau}^{2}\left(\stackrel{^}{p}\right),\cdots $ (1)

Each population consists of a finite number of individuals which are elements of a given finite set $\Omega $ called the search space. Populations are multisets, which means that the same individual may appear more than once in a given population.

To simplify the notation, it is convenient to identify $\Omega $ with a subset of integers: $\Omega =\left\{0,1,\cdots ,n-1\right\}$ . The number $n$ is called the size of the search space. Then a population can be represented as an incidence vector (see [10] , p. 141):

$v={\left({v}_{0},{v}_{1},\cdots ,{v}_{n-1}\right)}^{\text{T}},$ (2)

where ${v}_{i}$ is the number of copies of individual $i\in \Omega $ in the population ( ${v}_{i}=0$ if the $i$ -th individual does not appear in the population). The size of population $v$ is the number

$r={\displaystyle \underset{i=0}{\overset{n-1}{\sum}}}{v}_{i}.$ (3)

We assume that all the populations appearing in sequence (1) have the same size $r$ . Dividing each component of incidence vector (2) by $r$ , we obtain the population vector

$p={\left({p}_{0},{p}_{1},\cdots ,{p}_{n-1}\right)}^{\text{T}}\mathrm{,}$ (4)

where ${p}_{i}={v}_{i}/r$ is the proportion of individual $i\in \Omega $ in the population. In this way, we obtain a more general representation of the population which is independent of population size. It follows that each vector $p$ of this type belongs to the set

$\Lambda :=\left\{x\in {\mathbb{R}}^{n}:{x}_{i}\ge 0\left(\forall i\right),\text{}{\displaystyle \underset{i=0}{\overset{n-1}{\sum}}}{x}_{i}=1\right\},$ (5)

which is a simplex in ${\mathbb{R}}^{n}$ . However, not all points of this simplex correspond to finite populations. For a fixed $r\in \mathbb{N}$ , the following subset of $\Lambda $ consists of all populations of size $r$ (see [9] , p. 7):

${\Lambda}_{r}:=\frac{1}{r}\left\{x\in {\mathbb{R}}^{n}:{x}_{i}\in \mathbb{N}\cup \left\{0\right\}\left(\forall i\right),{\displaystyle \underset{i=0}{\overset{n-1}{\sum}}}{x}_{i}=r\right\}.$ (6)
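As a concrete illustration, the representations (2) and (4) can be computed directly from a population given as a list of individuals. The following is a minimal Python sketch (the function names are ours, not from the paper):

```python
from collections import Counter

def incidence_vector(population, n):
    """Incidence vector (2): v_i = number of copies of individual i."""
    counts = Counter(population)
    return [counts.get(i, 0) for i in range(n)]

def population_vector(population, n):
    """Population vector (4): proportions p_i = v_i / r, an element of Lambda_r."""
    v = incidence_vector(population, n)
    r = sum(v)  # population size (3)
    return [v_i / r for v_i in v]
```

For example, the population {0, 2, 2, 3} over $\Omega =\left\{0,\cdots ,4\right\}$ has incidence vector $\left(1,0,2,1,0\right)^{\text{T}}$ and population vector $\left(0.25,0,0.5,0.25,0\right)^{\text{T}}$.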

We now define the mapping

$\mathcal{G}:\Lambda \to \Lambda ,$

called heuristic ( [9] , p. 9) or generational operator ( [10] , p. 144), in the following way: for a vector $p\in \Lambda $ representing the current population, $\mathcal{G}\left(p\right)$ is the probability distribution that is sampled independently $r$ times (with replacement) to produce the next population after $p$ . For each of these $r$ choices, the probability of selecting an individual $i\in \Omega $ is equal to $\mathcal{G}{\left(p\right)}_{i}$ , the $i$ -th component of $\mathcal{G}\left(p\right)$ .

A transition rule $\tau $ is called admissible if it is a composition of a heuristic $\mathcal{G}$ with drawing a sample in the way described above. Symbolically,

$\tau \left(p\right)=\text{sample}\left(\mathcal{G}\left(p\right)\right),\text{}\forall p\in \Lambda .$ (7)

Of course, a transition rule defined this way is nondeterministic, i.e., by applying it repeatedly to the same vector $p$ , we can obtain different results. It should also be noted that, although $\mathcal{G}\left(p\right)$ may not belong to ${\Lambda}_{r}$ , the result of drawing an $r$ -element sample is always a population of size $r$ ; therefore, it follows from (7) that $\tau \left(p\right)\in {\Lambda}_{r}$ .
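The sampling step in (7) can be sketched as follows; this is an illustrative fragment (names are ours), where `g` stands for the distribution $\mathcal{G}\left(p\right)$ produced by any concrete heuristic:

```python
import random

def sample_population(g, r, rng=random):
    """Transition rule (7): draw r individuals independently, with replacement,
    from the distribution g = G(p), and return the new population vector."""
    n = len(g)
    draws = rng.choices(range(n), weights=g, k=r)  # r independent choices
    counts = [0] * n
    for i in draws:
        counts[i] += 1
    return [c / r for c in counts]  # element of Lambda_r
```

Although $\mathcal{G}\left(p\right)$ itself need not lie in ${\Lambda}_{r}$ , the returned vector always does, as observed above.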

2.2. The Case of a Genetic Algorithm

In this subsection we consider a genetic algorithm as a particular case of the RHS. We assume that a single iteration of the genetic algorithm produces the next population from the current population as follows:

1) Choose two parents from the current population by using a selection method which can be described by some heuristic (see [9] , 4.2).

2) Crossover the two parents to obtain a child.

3) Mutate the child.

4) Put the mutated child into the next population.

5) If the next population contains less than $r$ members, return to step 1.

The only difference between the iteration described above and the iteration of the Simple Genetic Algorithm defined in ( [9] , p. 44) is that in our version mutation is done after crossover.

To derive our stopping criteria, we will use some properties of mutation, which is generally understood as changing one element of the search space to another with a certain probability. The way selection and crossover are implemented is not important for our model, so we omit discussing them (we refer the reader to ( [10] , Chapter 5)). The only requirement is that the composition of the three operations (selection, crossover, mutation) can be described in terms of some heuristic.

We assume that mutation consists in replacing a given individual from $\Omega $ by another individual, with a prescribed probability. Let us denote by ${u}_{i\mathrm{,}j}$ the probability that individual $i$ mutates to $j$ . In this way, we obtain an $n\times n$ matrix $U={\left[{u}_{i,j}\right]}_{i,j\in \Omega}$ . We denote by $\mathrm{Pr}\left(q|p\right)=\mathrm{Pr}\left(\tau \left(p\right)=q\right)$ the probability of obtaining a population $q$ in the current iteration of the RHS algorithm provided the previous population is $p$ , and by $\mathrm{Pr}\left(\left[j\right]|p\right)=\mathcal{G}{\left(p\right)}_{j}$ the probability of selecting an individual $j\in \Omega $ by a single sampling of the probability distribution $\mathcal{G}\left(p\right)$ . In particular, the probability of generating individual $j$ from population $p$ by successive application of selection, crossover and mutation is equal to (see [8] , formula (7))

$\mathcal{G}{\left(p\right)}_{j}=\mathrm{Pr}{\left(\left[j\right]|p\right)}_{\text{scm}}={\displaystyle \underset{i=0}{\overset{n-1}{\sum}}}{u}_{i,j}\mathrm{Pr}{\left(\left[i\right]|p\right)}_{\text{sc}},$ (8)

where the subscript sc means that we are dealing with the composition of selection and crossover, and the subscript scm indicates the composition of selection, crossover and mutation. To get a whole new population, one should draw an r-element sample from the probability distribution (8). The probability of generating a population $q$ in this way is then equal to (see [8] , formula (8))

$\mathrm{Pr}{\left(q|p\right)}_{\text{scm}}=r!{\displaystyle \underset{j=0}{\overset{n-1}{\prod}}}\frac{{\left(\mathrm{Pr}{\left(\left[j\right]|p\right)}_{\text{scm}}\right)}^{r{q}_{j}}}{\left(r{q}_{j}\right)!}\mathrm{.}$ (9)
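Formulas (8) and (9) can be evaluated directly. A small sketch (function names are ours; populations are given by their proportion vectors, and the sum in (8) runs over the pre-mutation individual $i$ ):

```python
import math

def scm_distribution(U, p_sc):
    """Formula (8): G(p)_j = sum_i u_{i,j} * Pr([i]|p)_sc,
    where U[i][j] is the probability that individual i mutates to j."""
    n = len(p_sc)
    return [sum(U[i][j] * p_sc[i] for i in range(n)) for j in range(n)]

def population_probability(q, r, g):
    """Formula (9): multinomial probability of obtaining population q
    (given by proportions q_j = v_j / r) when sampling g exactly r times."""
    prob = math.factorial(r)
    for j, g_j in enumerate(g):
        c = round(r * q[j])  # integer count of individual j in q
        prob *= g_j ** c / math.factorial(c)
    return prob
```

For instance, drawing a 2-element sample from the uniform distribution over two individuals produces the population with one copy of each with probability $2!\cdot 0.5\cdot 0.5=0.5$ .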

2.3. Stopping Criteria for Finding All Minimal Elements of $\Omega $

We now consider the following multiobjective optimization problem. Let $\Omega $ be the finite search space defined in Subsection 2.1, and let $f\mathrm{:}\Omega \to F$ be the function to be minimized, where $F=\left\{f\left(\omega \right):\omega \in \Omega \right\}$ and $\left(F\mathrm{,}\preccurlyeq \right)$ is a partially ordered set. An element ${x}^{\ast}\in F$ is called a minimal element of $\left(F\mathrm{,}\preccurlyeq \right)$ if there is no $x\in F$ such that $x\prec {x}^{\ast}$ , where the relation $\prec $ is defined by

$\left(x\prec y\right)\mathrm{:}\iff \left(x\preccurlyeq y\wedge x\ne y\right)\mathrm{.}$ (10)

The set of all minimal elements of $F$ is denoted by Min $\left(F\mathrm{,}\preccurlyeq \right)$ . We define the set of optimal solutions in our multiobjective problem as follows:

${\Omega}^{\ast}={\text{Min}}_{f}\left(\Omega ,\preccurlyeq \right):=\left\{\omega \in \Omega :f\left(\omega \right)\in \text{Min}\left(f\left(\Omega \right),\preccurlyeq \right)\right\}\mathrm{.}$ (11)

In particular, if $F$ is a finite subset of the Euclidean space ${\mathbb{R}}^{k}$ , and $f=\left({f}_{1},\cdots ,{f}_{k}\right)$ , where each component of $f$ is being minimized independently, then the relation $\preccurlyeq $ in $F$ can be defined by

$\left(x\preccurlyeq y\right)\mathrm{:}\iff \left({x}_{i}\le {y}_{i},i=1,\cdots ,k\right)\mathrm{.}$

In this case, ${\Omega}^{\ast}$ is the set of all Pareto optimal solutions of the respective multiobjective optimization problem.
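For a finite $\Omega $ , the set (11) can be computed by exhaustive pairwise comparison. A minimal Python sketch under the componentwise order (helper names are ours):

```python
def dominates(x, y):
    """x < y in the sense of (10): x <= y componentwise and x != y."""
    return all(a <= b for a, b in zip(x, y)) and x != y

def minimal_solutions(omega, f):
    """Formula (11): points of omega whose f-value is minimal in f(omega)."""
    values = [f(w) for w in omega]
    return [w for w, fw in zip(omega, values)
            if not any(dominates(v, fw) for v in values)]
```

For instance, with $f\left(x\right)=\left({x}^{2},{\left(x-2\right)}^{2}\right)$ on $\Omega =\left\{0,1,2,3\right\}$ , the minimal solutions are 0, 1 and 2, while 3 is dominated.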

In this section, we assume that the goal of RHS is to find all elements of ${\Omega}^{\ast}$ . Suppose that the set ${\Omega}^{\ast}$ of minimal solutions has the following form:

${\Omega}^{\ast}=\left\{{j}_{1},{j}_{2},\cdots ,{j}_{m}\right\},$ (12)

where the (possibly unknown) number $m$ of these solutions is bounded from above by some known positive integer $M$ . We say that all elements of ${\Omega}^{\ast}$ have been found in the first $t$ iterations if, for each $l\in \left\{1,\cdots ,m\right\}$ , there exists $s\in \left\{1,\cdots ,t\right\}$ such that ${\tau}^{s}{\left(\stackrel{^}{p}\right)}_{{j}_{l}}>0$ . This means that each minimal solution is a member of some population generated in the first $t$ iterations.

Theorem 1 ( [8] , Thm. 6.1) We consider the general model of genetic algorithm, described in Subsection 2.2, being a special case of the RHS algorithm with the heuristic $\mathcal{G}$ given by (8). Suppose that there exists a number $\beta \in \left(\mathrm{0,1}\right)$ satisfying

${u}_{i,j}\ge \beta ,\text{\hspace{1em}}\forall i\in \Omega ,\text{}j\in {\Omega}^{\ast}.$ (13)

Let $M$ and $t$ be two positive integers satisfying the inequality

$M{\left(1-\beta \right)}^{rt}<1.$ (14)

Let ${\Omega}^{\ast}$ be of the form (12) with $m\le M$ . Then the probability of finding all elements of ${\Omega}^{\ast}$ in the first $t$ iterations is at least

$1-M{\left(1-\beta \right)}^{rt}.$ (15)

Corollary 2 ( [8] , Cor. 2) We consider the same model of algorithm as in Theorem 1. Suppose that condition (13) holds for some $\beta \in \left(0,1\right)$ . Let $M$ be a given upper bound for the cardinality of ${\Omega}^{\ast}$ . For any $\delta \in \left(\mathrm{0,1}\right)$ , we denote by ${t}_{\mathrm{min}}^{\ast}\left(\delta \right)$ the smallest number of iterations required to guarantee that all elements of ${\Omega}^{\ast}$ have been found with probability $\delta $ . Then

${t}_{\mathrm{min}}^{\ast}\left(\delta \right)\le \lceil \frac{\mathrm{ln}\left(1-\delta \right)-\mathrm{ln}M}{r\mathrm{ln}\left(1-\beta \right)}\rceil ,$ (16)

where $\lceil x\rceil $ is the smallest integer greater than or equal to $x$ .
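The right-hand side of (16) is straightforward to evaluate. A small sketch (the function name is ours):

```python
import math

def t_min_upper_bound(delta, M, r, beta):
    """Upper bound (16) on t*_min(delta): a number of iterations sufficient
    to find all minimal elements with the prescribed probability delta."""
    return math.ceil((math.log(1 - delta) - math.log(M))
                     / (r * math.log(1 - beta)))
```

With $\delta =0.99$ , $r=200$ , $M=64001$ and $\beta =1/M$ (the parameter choice made for Example 7 in Section 5), the bound evaluates to 5016 iterations.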

2.4. Construction of the Set of Minimal Elements

The results of Section 2.3 give no practical way of constructing the set ${\Omega}^{\ast}$ . Different elements of this set are members of different populations generated by the genetic algorithm, and cannot be easily identified. To give an effective way of constructing ${\Omega}^{\ast}$ , some modification of the RHS is necessary.

The algorithm presented below is a combination of the RHS and the base VV (van Veldhuizen) algorithm described in ( [11] , 3.1). Suppose we have some RHS satisfying the assumptions of Theorem 1. It generates a sequence (1) of populations, all of them being members of ${\Lambda}_{r}$ . For each $p\in {\Lambda}_{r}$ , we define the set of individuals represented in population $p$ :

$\text{set}\left(p\right):=\left\{\omega \in \Omega :{p}_{\omega}\ne 0\right\}\mathrm{.}$ (17)

Then we construct a sequence $\left\{{D}_{t}\right\}$ of subsets of $\Omega $ as follows:

${D}_{t}:=\text{set}\left({\tau}^{t}\left(\stackrel{^}{p}\right)\right)\text{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}t=0,1,\cdots ,$ (18)

where ${\tau}^{0}:=id$ is the identity mapping. Finally, we define another sequence $\left\{{E}_{t}\right\}$ of sets recursively by

${E}_{0}:={\text{Min}}_{f}\left({D}_{0}\mathrm{,}\preccurlyeq \right)\mathrm{,}$ (19)

${E}_{t+1}:={\text{Min}}_{f}\left({E}_{t}\cup {D}_{t+1}\mathrm{,}\preccurlyeq \right)\text{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}t=0,1,\cdots ,$ (20)

where we have used the notation ${\text{Min}}_{f}$ as in (11). Formulas (19) and (20) define the VV algorithm.
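One step of the recursion (19)-(20) can be sketched as follows (an illustration with names of our own choosing, assuming hashable decision points and a tuple-valued $f$ ):

```python
def dominates(x, y):
    """x < y in the sense of (10): x <= y componentwise and x != y."""
    return all(a <= b for a, b in zip(x, y)) and x != y

def vv_step(E, D, f):
    """Formula (20): E_{t+1} = Min_f(E_t union D_{t+1}).
    Called with E = [], it gives E_0 of formula (19)."""
    candidates = list(dict.fromkeys(list(E) + list(D)))  # union, no duplicates
    values = {w: f(w) for w in candidates}
    return [w for w in candidates
            if not any(dominates(values[v], values[w]) for v in candidates)]
```

The archive $E_t$ thus never retains a dominated point: each newly sampled population $D_{t+1}$ is merged and the minimal elements are recomputed.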

It is shown in ( [11] , Prop. 1) that the sets $f\left({E}_{t}\right)$ converge with probability 1 to $\text{Min}\left(F\mathrm{,}\preccurlyeq \right)$ in the sense of some metric. However, as the authors comment, “The size of the sets ${E}_{t}$ will finally grow to the size of the set of minimal elements. Since this size may be huge, this base algorithm offers only limited usefulness in practice”. In fact, our considerations can have practical value only if the cardinality of ${\Omega}^{\ast}$ is relatively small. For continuous multiobjective optimization problems, such a situation can be achieved by choosing a suitable discretization.

Our final result is the following theorem which shows that, with a prescribed probability, the sets ${E}_{t}$ constructed by the VV algorithm are equal to ${\Omega}^{\ast}$ for $t$ sufficiently large.

Theorem 3 ( [8] , Thm. 7.1) Let the assumptions of Corollary 2 be satisfied. Then, with probability $\delta $ , we have

${\Omega}^{\ast}={E}_{t},\text{\hspace{1em}}\forall t\ge {t}_{\mathrm{min}}^{\ast}\left(\delta \right).$ (21)

3. Generation of $\epsilon $ -Efficient Solutions for a Continuous Problem

Let $f\mathrm{:}X\to {\mathbb{R}}^{l}$ be a given mapping, where $X$ is a closed and bounded subset of ${\mathbb{R}}^{k}$ . We consider the following multiobjective optimization problem:

$\mathrm{min}\left\{f\left(x\right):x\in X\right\}.$ (22)

To solve this problem means to find all Pareto optimal (efficient) points of $X$ with respect to the partial order relation in ${\mathbb{R}}^{l}$ defined by

$\left(u\preccurlyeq v\right)\mathrm{:}\iff \left({u}_{i}\le {v}_{i},i=1,\cdots ,l\right)\mathrm{.}$ (23)

However, in practical situations this can be very difficult or even impossible. Therefore, we shall consider a discretized version of problem (22).

For any given $\eta >0$ , we say that a subset $\Omega $ of ${\mathbb{R}}^{k}$ is an $\eta $ -discretization of $X$ if

$\Omega \subset X\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}X\subset {\displaystyle \underset{z\in \Omega}{\cup}B\left(z,\eta \right)},$ (24)

where $B\left(x,\eta \right):=\left\{y\in {\mathbb{R}}^{k}:\Vert y-x\Vert <\eta \right\}$ . Since $X$ is compact, we can always find a finite $\eta $ -discretization of $X$ . The discretized multiobjective optimization problem can now be formulated as follows:

$\mathrm{min}\left\{f\left(x\right):x\in \Omega \right\},$ (25)

where the same relation (23) is considered, but now in the finite set $f\left(\Omega \right)$ .

It is natural to ask whether an exact solution of problem (25) yields some sort of approximate solution of problem (22). One of the cases where a positive answer can be given is described in the proposition below. Before formulating it, we must define $\epsilon $ -efficient solutions, following ( [4] , Definition 2.3 (ii)).

Let $\epsilon =\left({\epsilon}_{1},\cdots ,{\epsilon}_{l}\right)\in {\mathbb{R}}^{l}$ be such that ${\epsilon}_{i}>0\text{}\left(i=1,\cdots ,l\right)$ . We say that a point $\stackrel{\xaf}{x}\in X$ is an $\epsilon $ -efficient solution of problem (22) if there is no $x\in X$ such that

$f\left(x\right)\prec f\left(\stackrel{\xaf}{x}\right)-\epsilon \mathrm{,}$ (26)

where the relation “ $\prec $ ” is defined by formula (10).
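This definition can be turned into a refutation test on any finite sample of $X$ : exhibiting one $x$ with $f\left(x\right)\prec f\left(\stackrel{\xaf}{x}\right)-\epsilon $ disproves $\epsilon $ -efficiency, while failing to find one on a sample is of course not a proof, since $X$ is continuous. A hedged sketch (function names are ours):

```python
def strictly_less(x, y):
    """Relation (10) on objective vectors: x <= y componentwise and x != y."""
    return all(a <= b for a, b in zip(x, y)) and x != y

def eps_efficiency_witness(xbar, sample, f, eps):
    """Return some x in `sample` satisfying condition (26),
    i.e. f(x) < f(xbar) - eps, or None if no sampled point refutes xbar."""
    shifted = tuple(fi - ei for fi, ei in zip(f(xbar), eps))
    for x in sample:
        if strictly_less(f(x), shifted):
            return x
    return None
```

With $f\left(x\right)=\left({x}^{2},{\left(x-2\right)}^{2}\right)$ and $\epsilon =\left(50,50\right)$ (the data of Example 7 in Section 5), the point $\stackrel{\xaf}{x}=20$ is refuted by $x=1$ , while $\stackrel{\xaf}{x}=1$ admits no refuting point at all.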

Proposition 4 Let $f=\left({f}_{1},\cdots ,{f}_{l}\right):X\to {\mathbb{R}}^{l}$ where $X$ is compact and each function ${f}_{i}$ is Lipschitz continuous with constant ${K}_{i}>0\text{}\left(i=1,\cdots ,l\right)$ . Let $\epsilon \in {\mathbb{R}}^{l}$ be such that ${\epsilon}_{i}>0\text{}\left(i=1,\cdots ,l\right)$ . Define

$\eta :=\mathrm{min}\left\{\frac{{\epsilon}_{i}}{{K}_{i}}:i=1,\cdots ,l\right\},$ (27)

and let $\Omega $ be an $\eta $ -discretization of $X$ . Denote by ${\Omega}^{\ast}$ the set of all Pareto optimal solutions of problem (25) (i.e., ${\Omega}^{\ast}={\text{Min}}_{f}\left(\Omega \mathrm{,}\preccurlyeq \right)$ as in formula (11)). Then every point $\stackrel{\xaf}{x}\in {\Omega}^{\ast}$ is an $\epsilon $ -efficient solution of problem (22).

Proof. Let $\stackrel{\xaf}{x}\in {\Omega}^{\ast}$ . Suppose to the contrary that $\stackrel{\xaf}{x}$ is not an $\epsilon $ -efficient solution of (22). Then there exists $x\in X$ such that (26) holds. In particular, we have

${f}_{i}\left(x\right)\le {f}_{i}\left(\stackrel{\xaf}{x}\right)-{\epsilon}_{i}\text{,}\text{\hspace{0.17em}}\text{for all}\text{\hspace{0.17em}}i\in \left\{1,\cdots ,l\right\}.$ (28)

By the second inclusion in (24), there exists $z\in \Omega $ such that $\Vert z-x\Vert <\eta $ . Therefore, using (27) and (28), we obtain, for all $i\in \left\{1,\cdots ,l\right\}$ ,

$\begin{array}{c}{f}_{i}\left(z\right)\le {f}_{i}\left(x\right)+\left|{f}_{i}\left(z\right)-{f}_{i}\left(x\right)\right|\\ \le {f}_{i}\left(x\right)+{K}_{i}\Vert z-x\Vert \\ <{f}_{i}\left(x\right)+{K}_{i}\eta \\ \le {f}_{i}\left(\stackrel{\xaf}{x}\right)-{\epsilon}_{i}+{K}_{i}\eta \le {f}_{i}\left(\stackrel{\xaf}{x}\right),\end{array}$

which gives $f\left(z\right)\prec f\left(\stackrel{\xaf}{x}\right)$ with $z\in \Omega $ and thus contradicts the assumption that $\stackrel{\xaf}{x}\in {\Omega}^{\ast}$ . $\u25a0$

4. The Main Algorithm

Consider the multiobjective optimization problem (22), where the constraint set $X$ is a box defined by

$X:={\displaystyle \underset{i=1}{\overset{k}{\prod}}\left[{\alpha}_{i},{\beta}_{i}\right]},$ (29)

where ${\alpha}_{i}<{\beta}_{i}\text{}\left(i=1,\cdots ,k\right)$ . Suppose that the assumptions of Proposition 4 are satisfied. We want to specify an $\eta $ -discretization of $X$ . We construct the set $\Omega $ as follows:

$\Omega :=\left\{x\in {\mathbb{R}}^{k}:{x}_{i}={\alpha}_{i}+\frac{{t}_{i}}{{k}_{i}}\left({\beta}_{i}-{\alpha}_{i}\right),{t}_{i}=0,1,\cdots ,{k}_{i},i=1,\cdots ,k\right\},$ (30)

where ${k}_{i}\text{}\left(i=1,\cdots ,k\right)$ are given positive integers.
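The grid (30) can be generated coordinate-wise; a minimal sketch (the function name is ours):

```python
import itertools

def box_grid(alpha, beta, k):
    """The set Omega of (30): for each coordinate i, the k[i]+1 equally spaced
    points alpha[i] + (t / k[i]) * (beta[i] - alpha[i]), t = 0, ..., k[i];
    Omega is the Cartesian product of these axes."""
    axes = [[a + t * (b - a) / ki for t in range(ki + 1)]
            for a, b, ki in zip(alpha, beta, k)]
    return list(itertools.product(*axes))
```

By Proposition 5 below, choosing each ${k}_{i}$ so large that $\left({\beta}_{i}-{\alpha}_{i}\right)/{k}_{i}<2\eta $ makes this grid an $\eta $ -discretization of $X$ in the ${l}_{\infty}$ norm.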

Proposition 5 For every given $\eta >0$ , it is possible to choose the numbers ${k}_{i}$ so large that the set $\Omega $ defined by (30) is an $\eta $ -discretization of $X$ .

Proof. The inclusion $\Omega \subset X$ is obvious. We now prove the second inclusion in (24). For simplicity, we consider the ${l}_{\infty}$ norm in ${\mathbb{R}}^{k}$ :

${\Vert x\Vert}_{\infty}:=\underset{1\le i\le k}{\mathrm{max}}\left|{x}_{i}\right|.$ (31)

Let us choose ${k}_{i}$ so that

$\frac{1}{{k}_{i}}\left({\beta}_{i}-{\alpha}_{i}\right)<2\eta .$ (32)

Take any $x\in X$ . Then, for each $i\in \left\{\mathrm{1,}\cdots \mathrm{,}k\right\}$ , there exists ${s}_{i}\in \left\{0,1,\cdots ,{k}_{i}\right\}$ such that the number ${z}_{i}$ defined by

${z}_{i}:={\alpha}_{i}+\frac{{s}_{i}}{{k}_{i}}\left({\beta}_{i}-{\alpha}_{i}\right)$ (33)

satisfies

$\left|{x}_{i}-{z}_{i}\right|\le \frac{1}{2{k}_{i}}\left({\beta}_{i}-{\alpha}_{i}\right)<\eta .$

Then the vector $z:=\left({z}_{1},\cdots ,{z}_{k}\right)\in \Omega $ is such that

${\Vert x-z\Vert}_{\infty}:=\underset{1\le i\le k}{\mathrm{max}}\left|{x}_{i}-{z}_{i}\right|<\eta ,$

which completes the proof. $\u25a0$

In the sequel we consider the following simple evolutionary algorithm which is a special case of the algorithm described in Subsection 2.4. It does not contain selection and crossover. The mutation process is very simple and consists in replacing the current population by another randomly chosen population of the same size. However, the stopping criteria described above still hold for this algorithm because their proofs make use of the properties of the mutation alone.

Algorithm 1 Parameters: $\delta >0$ (for the stopping criterion), $\epsilon \in {\mathbb{R}}^{l}$ (for defining $\eta $ -discretization).

1) Set $t:=0$ .

2) Choose randomly a population ${D}_{0}$ consisting of $r$ elements of $\Omega $ .

3) Construct the set ${E}_{0}$ by formula (19).

4) Mutate the population ${D}_{t}$ by replacing it by another randomly chosen population ${D}_{t+1}$ consisting of $r$ elements of $\Omega $ .

5) Construct the set ${E}_{t+1}$ by formula (20).

6) If $t+1\ge {t}_{\mathrm{min}}^{\ast}\left(\delta \right)$ , then stop and define $\stackrel{\xaf}{\Omega}:={E}_{t+1}$ .

7) Increase $t$ by 1 and go to Step 4.
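The steps above can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: it assumes $\Omega $ is given as a finite list of hashable points, $f$ returns tuples, and it takes $M=\text{card}\Omega $ and $\beta =1/M$ as in Proposition 6.

```python
import math
import random

def dominates(x, y):
    """x < y in the sense of (10)."""
    return all(a <= b for a, b in zip(x, y)) and x != y

def min_f(candidates, f):
    """Min_f of (11) over a finite list of points."""
    vals = [(w, f(w)) for w in candidates]
    return [w for w, fw in vals if not any(dominates(v, fw) for _, v in vals)]

def algorithm_1(omega, f, r, delta, rng=random):
    M = len(omega)                                    # card(Omega)
    beta = 1.0 / M                                    # uniform mutation probability
    t_stop = math.ceil((math.log(1 - delta) - math.log(M))
                       / (r * math.log(1 - beta)))    # bound (16) on t*_min(delta)
    D = rng.choices(omega, k=r)                       # Step 2: random initial population
    E = min_f(list(set(D)), f)                        # Step 3: E_0 by (19)
    for _ in range(t_stop):                           # Steps 4-7
        D = rng.choices(omega, k=r)                   # Step 4: fresh random population
        E = min_f(list(set(E) | set(D)), f)           # Step 5: E_{t+1} by (20)
    return E                                          # Step 6: Omega-bar
```

The "mutation" here is simply the replacement of the current population by a uniformly random one, as described above; the stopping bound (16) is computed once up front from $M$ , $r$ , $\beta $ and $\delta $ .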

Proposition 6 After stopping Algorithm 1, the equality $\stackrel{\xaf}{\Omega}={\Omega}^{\ast}$ holds with probability $\delta $ , and consequently, $\stackrel{\xaf}{\Omega}$ consists entirely of $\epsilon $ -efficient solutions of problem (22) with probability $\delta $ .

Proof. Apply Theorem 3 and Corollary 2 with $M:=\text{card}\Omega $ and $\beta :=1/M$ (we assume the equal probability $1/M$ of mutating $i$ to $j$ for every $i\mathrm{,}j\in \Omega $ ). $\u25a0$

5. Computational Examples

Below we report on testing the algorithm described above on some examples taken from the literature. To find the set of minimal elements (i.e., nondominated elements) of finite sets in Steps 3 and 5, we have used the simple algorithm for classifying a population according to non-domination; see Section 4.3.1 of [12] .

Example 7 (Problem SCH in Table I of [13] )

$\mathrm{min}\left({f}_{1}\left(x\right),{f}_{2}\left(x\right)\right),$

where ${f}_{1}\left(x\right)={x}^{2},\mathrm{}{f}_{2}\left(x\right)={\left(x-2\right)}^{2},\mathrm{}x\in \left[-{10}^{3},{10}^{3}\right]\mathrm{.}$

As stated in Table I of [13] , any point $x\in \left[\mathrm{0,2}\right]$ is a Pareto optimal solution of this problem. Let $X=\left[-{10}^{3},{10}^{3}\right].$ Since each of the functions ${f}_{i},\mathrm{}i=1,2,$ is continuously differentiable on $X\mathrm{,}$ which is closed and bounded, each ${f}_{i}$ is Lipschitz continuous on $X\mathrm{.}$ Here $\frac{\text{d}{f}_{1}\left(x\right)}{\text{d}x}=2x$ and $\frac{\text{d}{f}_{2}\left(x\right)}{\text{d}x}=2\left(x-2\right)\mathrm{.}$ Hence, ${\mathrm{sup}}_{x\in X}\left|\frac{\text{d}{f}_{1}\left(x\right)}{\text{d}x}\right|\le 2000$ and ${\mathrm{sup}}_{x\in X}\left|\frac{\text{d}{f}_{2}\left(x\right)}{\text{d}x}\right|\le 2004.$ Therefore, we can take the Lipschitz constants ${K}_{i}=2004,\text{}i=1,2,$ so that

$\left|{f}_{i}\left(y\right)-{f}_{i}\left(z\right)\right|\le {K}_{i}\left|y-z\right|\mathrm{,}\text{\hspace{0.17em}}\text{for all}\text{\hspace{0.17em}}y,z\in X\mathrm{.}$

Let $\epsilon =\left({\epsilon}_{1},{\epsilon}_{2}\right)=\left(50,50\right)\mathrm{.}$ Then, from (27), we have $\eta =\frac{25}{1002}.$ In formula (30), let ${k}_{1}=64\times {10}^{3}.$ Hence the cardinality of $\Omega $ is $\text{card}\left(\Omega \right)=64\times {10}^{3}+1$ and $\frac{1}{{k}_{1}}\left({\beta}_{1}-{\alpha}_{1}\right)=\frac{1}{32},$ so inequality (32) is satisfied. Suppose that the population size is $r=200.$ For the stopping criterion, we take $\delta =0.99$ and compute ${t}_{\mathrm{min}}^{\ast}\left(\delta \right)=5016$ . After 5016 iterations of Algorithm 1, the resulting set $\stackrel{\xaf}{\Omega}$ is the following:

$\stackrel{\xaf}{\Omega}=\left\{\begin{array}{c}0,1,2,\frac{1}{32},\frac{1}{16},\frac{3}{32},\frac{1}{8},\frac{5}{32},\frac{3}{16},\frac{7}{32},\frac{1}{4},\frac{9}{32},\frac{5}{16},\frac{11}{32},\frac{3}{8},\frac{13}{32},\frac{7}{16},\frac{15}{32},\\ \frac{1}{2},\frac{17}{32},\frac{9}{16},\frac{19}{32},\frac{5}{8},\frac{21}{32},\frac{11}{16},\frac{23}{32},\frac{3}{4},\frac{25}{32},\frac{13}{16},\frac{27}{32},\frac{7}{8},\frac{29}{32},\frac{15}{16},\\ \frac{31}{32},\frac{33}{32},\frac{17}{16},\frac{35}{32},\frac{9}{8},\frac{37}{32},\frac{19}{16},\frac{39}{32},\frac{5}{4},\frac{41}{32},\frac{21}{16},\frac{43}{32},\frac{11}{8},\frac{45}{32},\frac{23}{16},\frac{47}{32},\frac{3}{2},\\ \frac{49}{32},\frac{25}{16},\frac{51}{32},\frac{13}{8},\frac{53}{32},\frac{27}{16},\frac{55}{32},\frac{7}{4},\frac{57}{32},\frac{29}{16},\frac{59}{32},\frac{15}{8},\frac{61}{32},\frac{31}{16},\frac{63}{32}\end{array}\right\}.$ (34)

Remarks:

1) One should remember that the number ${t}_{\mathrm{min}}^{\ast}\left(\delta \right)$ depends on the prescribed probability $\delta $ . We have run Algorithm 1 many times, up to 10,000 iterations, and have observed the following behavior of the set ${E}_{t+1}$ : it has always become the set (34) somewhere between iterations 1155 and 1330, and has not changed in the later iterations. This means that the theoretically computed number of 5016 iterations gives the correct set $\stackrel{\xaf}{\Omega}$ (in the sense that it cannot be further improved), but in fact far fewer iterations are sufficient to obtain the same result.

2) The cardinality of $\stackrel{\xaf}{\Omega}$ is $\text{card}\left(\stackrel{\xaf}{\Omega}\right)=65$ . Each element of $\stackrel{\xaf}{\Omega}$ belongs to the interval $\left[\mathrm{0,2}\right]$ , and hence is a Pareto efficient solution.

3) According to the performance measure Diversity Metric $\Delta $ (see Section B, page 188 in [13] ), the mean and variance of $\Delta $ for Algorithm 1 are $0.1014490343$ and $0.09539251009,$ respectively, where ${d}_{f}={d}_{l}=0.$ Since this mean is the smallest, our algorithm finds a better spread of solutions than any other algorithm included in Table III of [13] ; see Figure 1.

Example 8 (Problem FON in Table I of [13] ). Consider the following optimization problem:

$\mathrm{min}\left({f}_{1}\left(x\right),\mathrm{}{f}_{2}\left(x\right)\right),$

where

${f}_{1}\left(x\right)=1-\mathrm{exp}\left(-{\displaystyle \underset{i=1}{\overset{3}{\sum}}}{\left({x}_{i}-\frac{1}{\sqrt{3}}\right)}^{2}\right),\mathrm{}{f}_{2}\left(x\right)=1-\mathrm{exp}\left(-{\displaystyle \underset{i=1}{\overset{3}{\sum}}}{\left({x}_{i}+\frac{1}{\sqrt{3}}\right)}^{2}\right),\mathrm{}$

with variable bounds ${x}_{1}\mathrm{,}{x}_{2}\mathrm{,}{x}_{3}\in \left[-\mathrm{4,4}\right]\mathrm{.}$

Table I of [13] states that every point $\left({x}_{1}\mathrm{,}{x}_{2}\mathrm{,}{x}_{3}\right)$ satisfying the condition

${x}_{1}={x}_{2}={x}_{3}\in \left[-\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}}\right]$ (35)

is a Pareto optimal solution of this problem. Let $X={\left[-4,4\right]}^{3}$ . Since each of the functions ${f}_{i},\mathrm{}i=1,2,$ is continuously differentiable on $X$ , which is closed and bounded, each ${f}_{i}$ is Lipschitz continuous on $X\mathrm{.}$ We denote by $\nabla {f}_{i}\left(x\right)$ the gradient vector of ${f}_{i}$ at $x$ :

$\nabla {f}_{i}\left(x\right)={\left(\frac{\partial {f}_{i}\left(x\right)}{\partial {x}_{1}},\mathrm{}\frac{\partial {f}_{i}\left(x\right)}{\partial {x}_{2}},\mathrm{}\frac{\partial {f}_{i}\left(x\right)}{\partial {x}_{3}}\right)}^{\text{T}},\mathrm{}i=1,2.$

Then

${\Vert \nabla {f}_{i}\left(x\right)\Vert}_{\infty}=\underset{1\le j\le 3}{\mathrm{max}}\left|\frac{\partial {f}_{i}\left(x\right)}{\partial {x}_{j}}\right|,\mathrm{}i=1,2.$

Figure 1. True PF and nondominated solutions by New Algorithm on SCH.

Note that ${{\displaystyle \mathrm{sup}}}_{x\in X}{\Vert \nabla {f}_{i}\left(x\right)\Vert}_{\infty}\le 1$ for $i=1,2$ . For any $y\mathrm{,}z\in X$ , there exists $u\in \left[y,z\right]$ such that

$\begin{array}{c}\left|{f}_{i}\left(y\right)-{f}_{i}\left(z\right)\right|=\left|\langle \nabla {f}_{i}\left(u\right),y-z\rangle \right|=\left|{\displaystyle \underset{j=1}{\overset{3}{\sum}}}\frac{\partial {f}_{i}\left(u\right)}{\partial {x}_{j}}\left({y}_{j}-{z}_{j}\right)\right|\\ \le {\displaystyle \underset{j=1}{\overset{3}{\sum}}}\left|\frac{\partial {f}_{i}\left(u\right)}{\partial {x}_{j}}\right|\left|{y}_{j}-{z}_{j}\right|\le 3{\Vert \nabla {f}_{i}\left(u\right)\Vert}_{\infty}{\Vert y-z\Vert}_{\infty}\\ \le 3\underset{x\in X}{\mathrm{sup}}{\Vert \nabla {f}_{i}\left(x\right)\Vert}_{\infty}{\Vert y-z\Vert}_{\infty}.\end{array}$ (36)

Therefore, we can take the Lipschitz constants ${K}_{i}=3,\text{}i=1,2,$ so that

$\left|{f}_{i}\left(y\right)-{f}_{i}\left(z\right)\right|\le {K}_{i}{\Vert y-z\Vert}_{\infty}\mathrm{,}\text{\hspace{0.17em}}\text{for all}\text{\hspace{0.17em}}y\mathrm{,}z\in X\mathrm{.}$ (37)

Let $\epsilon =\left({\epsilon}_{1},{\epsilon}_{2}\right)=\left(\frac{3}{5},\frac{3}{5}\right).$ Then, from (27), we have $\eta =\frac{1}{5}.$ In formula (30), let ${k}_{i}=50,\text{}i=1,2,3.$ Hence the cardinality of $\Omega $ is $\text{card}\left(\Omega \right)={51}^{3}=132651$ and $\frac{1}{{k}_{i}}\left({\beta}_{i}-{\alpha}_{i}\right)=\frac{4}{25},$ so inequality (32) is satisfied. Suppose that the population size is $r=200$ . For the stopping criterion, we take $\delta =0.99$ and compute ${t}_{\mathrm{min}}^{\ast}\left(\delta \right)=10878$ . After 10878 iterations of Algorithm 1, the resulting set $\stackrel{\xaf}{\Omega}$ is the following:

(38)

Remarks:

1) In practical tests, the set ${E}_{t+1}$ has always become the set (38) somewhere between iterations 3475 and 3500, and has not changed in the later iterations.

2) The cardinality of $\stackrel{\xaf}{\Omega}$ is card $\left(\stackrel{\xaf}{\Omega}\right)=57.$ The points in $\stackrel{\xaf}{\Omega}$ which satisfy condition (35) are Pareto optimal but the other elements of $\stackrel{\xaf}{\Omega}$ are not optimal. However, it follows from Proposition 6 that all elements of $\stackrel{\xaf}{\Omega}$ are $\epsilon $ -efficient solutions with probability $\delta $ .

3) According to the performance measure Diversity Metric $\Delta$ (see Section B, page 188 in [13]), the mean and variance of $\Delta $ for Algorithm 1 are $0.06078996663$ and $0.4859115201,$ respectively, where ${d}_{f}={d}_{l}=0.01343253265.$ Hence our algorithm finds a better spread of solutions than any other algorithm included in Table III of [13] (see Figure 2), since its mean value of $\Delta$ is the smallest.
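For reference, the metric $\Delta$ used in this remark (and in the POL example below) can be sketched as a short function; this is a minimal implementation of the formula given in [13], where `front` is the obtained nondominated set sorted along the Pareto front and `d_f`, `d_l` are the extreme-point distances computed externally, as in the remark above:

```python
import math

def diversity_metric(front, d_f, d_l):
    """Diversity metric Delta of Deb et al. [13].

    front: nondominated objective vectors sorted along the front;
    d_f, d_l: distances from the two extreme obtained solutions to the
    extreme points of the true Pareto front.
    """
    d = [math.dist(a, b) for a, b in zip(front, front[1:])]  # consecutive gaps
    d_bar = sum(d) / len(d)                                  # mean gap
    num = d_f + d_l + sum(abs(di - d_bar) for di in d)
    den = d_f + d_l + len(d) * d_bar
    return num / den

# A perfectly uniform front with matching extremes gives Delta = 0:
print(diversity_metric([(0, 0), (1, 0), (2, 0)], 0.0, 0.0))  # 0.0
```

Smaller values of $\Delta$ thus indicate a more uniform spread, which is why the smallest mean in Table III of [13] identifies the best-spreading algorithm.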

Example 9 (Problem POL in Table I of [13] ).

$min\left({f}_{1}\left(x\right)\mathrm{,}{f}_{2}\left(x\right)\right)\mathrm{,}$

where

${f}_{1}\left(x\right)=\left[1+{\left({A}_{1}-{B}_{1}\right)}^{2}+{\left({A}_{2}-{B}_{2}\right)}^{2}\right],\text{ }{f}_{2}\left(x\right)=\left[{\left({x}_{1}+3\right)}^{2}+{\left({x}_{2}+1\right)}^{2}\right],$

${A}_{1}=0.5\mathrm{sin}1-2\mathrm{cos}1+\mathrm{sin}2-1.5\mathrm{cos}2,$ $\text{}{A}_{2}=1.5\mathrm{sin}1-\mathrm{cos}1+2\mathrm{sin}2-0.5\mathrm{cos}2,$

${B}_{1}=0.5\mathrm{sin}{x}_{1}-2\mathrm{cos}{x}_{1}+\mathrm{sin}{x}_{2}-1.5\mathrm{cos}{x}_{2},$

${B}_{2}=1.5\mathrm{sin}{x}_{1}-\mathrm{cos}{x}_{1}+2\mathrm{sin}{x}_{2}-0.5\mathrm{cos}{x}_{2},$ with variable bounds ${x}_{1}\mathrm{,}{x}_{2}\in \left[-\text{\pi}\mathrm{,}\text{\pi}\right]\mathrm{.}$
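The POL objectives can be transcribed directly from these definitions; a minimal sketch:

```python
import math

# Constants A1, A2 from the problem definition
A1 = 0.5 * math.sin(1) - 2 * math.cos(1) + math.sin(2) - 1.5 * math.cos(2)
A2 = 1.5 * math.sin(1) - math.cos(1) + 2 * math.sin(2) - 0.5 * math.cos(2)

def pol(x1, x2):
    """POL objectives (f1, f2) for x1, x2 in [-pi, pi]."""
    B1 = 0.5 * math.sin(x1) - 2 * math.cos(x1) + math.sin(x2) - 1.5 * math.cos(x2)
    B2 = 1.5 * math.sin(x1) - math.cos(x1) + 2 * math.sin(x2) - 0.5 * math.cos(x2)
    f1 = 1 + (A1 - B1) ** 2 + (A2 - B2) ** 2
    f2 = (x1 + 3) ** 2 + (x2 + 1) ** 2
    return f1, f2

# At (x1, x2) = (1, 2) we get B1 = A1 and B2 = A2, so f1 attains its minimum 1;
# at the feasible point (x1, x2) = (-3, -1), f2 attains its minimum 0.
print(pol(1, 2)[0])    # 1.0
print(pol(-3, -1)[1])  # 0.0
```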

POL is a problem with two nonconvex Pareto fronts that are disconnected in both the objective and decision variable spaces, see [13] . The true set of Pareto-optimal solutions is difficult to know in this problem. Figure 3 illustrates that Algorithm 1 is able to discover the two disconnected Pareto fronts that lie on the boundaries of the search space.

Let $X=\left[-\text{\pi},\text{\pi}\right]\times \left[-\text{\pi},\text{\pi}\right].$ Since each of the functions ${f}_{i}$, $i=1,2,$ is continuously differentiable on the closed and bounded set $X,$ each of them is Lipschitz continuous on $X.$ By using a computer program, it is possible to show that ${\mathrm{sup}}_{x\in X}{\Vert \nabla {f}_{1}\left(x\right)\Vert}_{\infty}\le 34$ and ${\mathrm{sup}}_{x\in X}{\Vert \nabla {f}_{2}\left(x\right)\Vert}_{\infty}\le 13$ . Using an estimate similar to (36), but with two variables, we find that, with the Lipschitz constants ${K}_{1}=68$ and ${K}_{2}=26$, we have

$\left|{f}_{i}\left(y\right)-{f}_{i}\left(z\right)\right|\le {K}_{i}{\Vert y-z\Vert}_{\infty},\text{ for all }y,z\in X,\text{ }i=1,2.$
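The two gradient bounds can be reproduced by grid sampling. This is a numerical estimate on a finite grid, not a proof; for $f_2$ the supremum can also be read off directly, since $\nabla f_2(x)=\big(2(x_1+3),\,2(x_2+1)\big)$ gives $\sup_{x\in X}{\Vert \nabla f_2(x)\Vert}_{\infty}=2(3+\text{\pi})\approx 12.28\le 13$:

```python
import math

A1 = 0.5 * math.sin(1) - 2 * math.cos(1) + math.sin(2) - 1.5 * math.cos(2)
A2 = 1.5 * math.sin(1) - math.cos(1) + 2 * math.sin(2) - 0.5 * math.cos(2)

def grad_f1_inf(x1, x2):
    # f1 = 1 + (A1 - B1)^2 + (A2 - B2)^2, so by the chain rule
    # df1/dx_j = -2 (A1 - B1) dB1/dx_j - 2 (A2 - B2) dB2/dx_j.
    B1 = 0.5 * math.sin(x1) - 2 * math.cos(x1) + math.sin(x2) - 1.5 * math.cos(x2)
    B2 = 1.5 * math.sin(x1) - math.cos(x1) + 2 * math.sin(x2) - 0.5 * math.cos(x2)
    dB1 = (0.5 * math.cos(x1) + 2 * math.sin(x1), math.cos(x2) + 1.5 * math.sin(x2))
    dB2 = (1.5 * math.cos(x1) + math.sin(x1), 2 * math.cos(x2) + 0.5 * math.sin(x2))
    return max(abs(-2 * (A1 - B1) * dB1[j] - 2 * (A2 - B2) * dB2[j]) for j in (0, 1))

def grad_f2_inf(x1, x2):
    # f2 = (x1 + 3)^2 + (x2 + 1)^2
    return max(abs(2 * (x1 + 3)), abs(2 * (x2 + 1)))

n = 200
pts = [-math.pi + 2 * math.pi * j / n for j in range(n + 1)]
sup1 = max(grad_f1_inf(a, b) for a in pts for b in pts)
sup2 = max(grad_f2_inf(a, b) for a in pts for b in pts)
print(sup1, sup2)  # both estimates fall within the stated bounds 34 and 13
```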

Figure 2. True PF and nondominated solutions by New Algorithm on FON.

Let $\epsilon =\left({\epsilon}_{1},{\epsilon}_{2}\right)=\left(5/2,1\right).$ Then, from (27), we have $\eta =\frac{5}{136}.$ In formula

(30), let ${k}_{i}=100,$ $i=1,2.$ Hence the cardinality of $\Omega $ is

$\text{card}\left(\Omega \right)={101}^{2}=10201$ and $\frac{1}{{k}_{i}}\left({\beta}_{i}-{\alpha}_{i}\right)=\frac{\text{\pi}}{50},$ and therefore inequality (32) is

satisfied. Suppose that the population size is $r=200$ . For the stopping criterion, we take $\delta =0.99$ and compute ${t}_{\mathrm{min}}^{\ast}\left(\delta \right)=706$ . After 706 iterations of Algorithm 1, the resulting set $\stackrel{\xaf}{\Omega}$ is the following:

(39)

Remarks:

1) In practical tests, the set ${E}_{t+1}$ has always become the set (39) somewhere between iterations 285 and 350, and has not changed in the later iterations.

Figure 3. True PF and nondominated solutions by New Algorithm on POL.

2) The cardinality of $\stackrel{\xaf}{\Omega}$ is $\text{card}\left(\stackrel{\xaf}{\Omega}\right)=75.$ It follows from Proposition 6 that all elements of $\stackrel{\xaf}{\Omega}$ are $\epsilon $ -efficient solutions with probability $\delta $ .

3) According to the performance measure Diversity Metric $\Delta$ (see Section B, page 188 in [13]), the mean and variance of $\Delta $ for Algorithm 1 are $0.5021982345$ and $0.7382353788,$ respectively, where ${d}_{f}=0.1063348336$ and ${d}_{l}=0.01974325126$ for the left Pareto front in Figure 3, and ${d}_{f}=0.0139738762$ and ${d}_{l}=0.1941428847$ for the right Pareto front in Figure 3. Hence, among the algorithms in Table III of [13], a better spread of solutions is achieved only by the real-coded NSGA-II; the spread of solutions obtained by our algorithm is the next best for this problem.

6. Conclusion

We have presented a new evolutionary method for generating $\epsilon $ -efficient solutions of a continuous multiobjective programming problem. This was achieved by discretizing the problem and then using a genetic algorithm. Some probabilistic stopping criteria were used for this procedure to obtain, with a prescribed probability, all minimal solutions for the discretized problem, which are $\epsilon $ -efficient solutions for the original problem. This article contains the main underlying theory and only some preliminary numerical computations pertaining to this method.

Acknowledgements

The authors are grateful to an anonymous referee for his/her comments which have improved the quality of the paper.

Cite this paper

Rahmo, E.-D. and Studniarski, M. (2017) Generating Epsilon-Efficient Solutions in Multiobjective Optimization by Genetic Algorithm. Applied Mathematics, 8, 395-409. https://doi.org/10.4236/am.2017.83032

References

- 1. Ruzika, S. and Wiecek, M.M. (2005) Approximation Methods in Multiobjective Programming. Journal of Optimization Theory and Applications, 126, 473-501. https://doi.org/10.1007/s10957-005-5494-4
- 2. Ghaznavi-Ghosoni, B.A., Khorram, E. and Soleimani-damaneh, M. (2013) Scalarization for Characterization of Approximate Strong/Weak/Proper Efficiency in Multi-Objective Optimization. Optimization, 62, 703-720. https://doi.org/10.1080/02331934.2012.668190
- 3. Loridan, P. (1984) ε-Solutions in Vector Minimization Problems. Journal of Optimization Theory and Applications, 42, 265-276. https://doi.org/10.1007/BF00936165
- 4. Engau, A. and Wiecek, M.M. (2007) Generating ε-Efficient Solutions in Multiobjective Programming. European Journal of Operational Research, 177, 1566-1579. https://doi.org/10.1016/j.ejor.2005.10.023
- 5. Engau, A. and Wiecek, M.M. (2007) Exact Generation of Epsilon-Efficient Solutions in Multiple Objective Programming. OR Spectrum, 29, 335-350. https://doi.org/10.1007/s00291-006-0044-5
- 6. Laumanns, M., Thiele, L., Deb, K. and Zitzler, E. (2002) Combining Convergence and Diversity in Evolutionary Multiobjective Optimization. Evolutionary Computation, 10, 263-282. https://doi.org/10.1162/106365602760234108
- 7. Schutze, O., Laumanns, M., Tantar, E., Coello Coello, C.A. and Talbi, E.-G. (2007) Convergence of Stochastic Search Algorithms to Gap-Free Pareto Front Approximations. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2007), 892-899.
- 8. Studniarski, M. (2011) Finding All Minimal Elements of a Finite Partially Ordered Set by Genetic Algorithm with a Prescribed Probability. Numerical Algebra, Control and Optimization, 1, 389-398. https://doi.org/10.3934/naco.2011.1.389
- 9. Vose, M.D. (1999) The Simple Genetic Algorithm: Foundations and Theory. MIT Press, Cambridge.
- 10. Reeves, C.R. and Rowe, J.E. (2003) Genetic Algorithms—Principles and Perspectives: A Guide to GA Theory. Kluwer, Boston.
- 11. Rudolph, G. and Agapie, A. (2000) Convergence Properties of Some Multi-Objective Evolutionary Algorithms. In: Zalzala, A., et al., Eds., Proceedings of the 2000 Congress on Evolutionary Computation (CEC 2000), Vol. 2, IEEE Press, Piscataway, 1010-1016.
- 12. Osman, M.S., Abo-Sinna, M.A. and Mousa, A.A. (2005) An Effective Genetic Algorithm Approach to Multiobjective Resource Allocation Problems (MORAPs). Applied Mathematics and Computation, 163, 755-768. https://doi.org/10.1016/j.amc.2003.10.057
- 13. Deb, K., Pratap, A., Agarwal, S. and Meyarivan, T. (2002) A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6, 182-197. https://doi.org/10.1109/4235.996017