**Applied Mathematics**

Vol.08 No.02(2017), Article ID:74337,10 pages

10.4236/am.2017.82013

A New Global Scalarization Method for Multiobjective Optimization with an Arbitrary Ordering Cone

El-Desouky Rahmo^{1,2*}, Marcin Studniarski^{3}

^{1}Department of Mathematics, Faculty of Science, Taif University, Khurma, Kingdom of Saudi Arabia

^{2}Mathematics Department, Faculty of Science, Mansoura University, Mansoura, Egypt

^{3}Faculty of Mathematics and Computer Science, University of Łódź, Łódź, Poland

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 24, 2016; Accepted: February 21, 2017; Published: February 24, 2017

ABSTRACT

We propose a new scalarization method which consists in constructing, for a given multiobjective optimization problem, a single scalarization function, whose global minimum points are exactly vector critical points of the original problem. This equivalence holds globally and enables one to use global optimization algorithms (for example, classical genetic algorithms with “roulette wheel” selection) to produce multiple solutions of the multiobjective problem. In this article we prove the mentioned equivalence and show that, if the ordering cone is polyhedral and the function being optimized is piecewise differentiable, then computing the values of a scalarization function reduces to solving a quadratic programming problem. We also present some preliminary numerical results pertaining to this new method.

**Keywords:**

Multiobjective Optimization, Scalarization Function, Generalized Jacobian, Vector Critical Point

1. Introduction

Scalarization is one of the most commonly used methods of solving multiobjective optimization problems. It consists in replacing the original multiobjective problem by a scalar optimization problem, or a family of scalar optimization problems, which is, in a certain sense, equivalent to the original problem. The existing scalarization methods can be divided into two groups:

1) Methods that use some representation of a given multiobjective problem as a parametrized family of scalar optimization problems. Such scalarization methods should have the following two properties (see [1] , p. 77): (i) an optimal solution of each scalarized problem is efficient (in some sense) for the original multiobjective problem, (ii) every efficient solution of the multiobjective problem can be obtained as an optimal solution of an appropriate scalarized problem by adjusting the parameter value. Some examples of possible scalarizations of this kind are given, for instance, in [1] (pp. 77-78) and [2] .

2) Methods that use local equivalence of a multiobjective optimization problem and some scalar optimization problem whose formulation depends on a given point. Such equivalence enables one to solve the multiobjective problem locally by using necessary and/or sufficient optimality conditions formulated for the scalar problem (for examples of such an approach, see [3] , Thm. 1 and [4] , Prop. 2.1 and 2.2).

There are also scalarization approaches which combine properties of both groups such as the Pascoletti-Serafini scalarization [5] (for a survey of different scalarization methods, see [6] , Chapter 2; for adaptive algorithms using different scalarizations, see [6] , Chapter 4; for scalarizations in the context of variable ordering structures, see [7] , Chapters 4 and 5).

In this paper, we propose a new scalarization method different from the above-mentioned ones. It consists in constructing, for a given multiobjective optimization problem, a single scalarization function, whose global minimum points are exactly vector critical points in the sense of [8] for the original problem. This equivalence holds globally and enables one to use global optimization algorithms designed for scalar-valued problems (for example, classical genetic algorithms with “roulette wheel” selection) to solve the original multiobjective problem. We also show that, if we consider an order defined by a polyhedral cone and the function being optimized is piecewise differentiable, then computing the values of a scalarization function reduces to solving a quadratic programming problem.

So far, the term “scalarization function” has been used for a scalar-valued function defined on the image space of an optimization problem, which transforms a vector-valued objective function into a scalar-valued one (see [9] , Thm. 1.1). However, by using such a scalarization, we are able to find only some (usually a small part) of the Pareto solutions, or efficient points, of the original multiobjective optimization problem, while the other Pareto solutions are lost. In contrast to this approach, our scalarization function is defined on the space of feasible solutions of the original problem and attains its minimum (zero) value on the set of vector critical points for this problem. The set of vector critical points is larger than the set of efficient solutions and can serve as an approximation of the latter.

The purpose of this research is to describe the idea of our new scalarization method and to present some underlying theory for the case of an unconstrained multiobjective optimization problem. The extension to constrained optimization is also possible and will be the subject of further investigations.

2. A Global Scalarization Function for an Arbitrary Ordering Cone

Let
$\Omega $ be an open set in
${\mathbb{R}}^{n}$ , and let
$f=\left({f}_{1},\cdots ,{f}_{p}\right):\Omega \to {\mathbb{R}}^{p}$ be a locally Lipschitzian vector function. Suppose that C is a closed convex pointed cone in
${\mathbb{R}}^{p}$ with nonempty interior. We denote by C^{+} the positive polar cone to C, i.e.,

${C}^{+}:=\left\{z\in {\mathbb{R}}^{p}:\langle z,y\rangle \ge 0,\forall y\in C\right\},$ (1)

where $\langle \cdot \mathrm{,}\cdot \rangle $ is the usual inner product in ${\mathbb{R}}^{p}$ . The partial order relation in ${\mathbb{R}}^{p}$ is defined by

$y\preccurlyeq z\text{\hspace{0.17em}}\text{if and only if}\text{\hspace{0.17em}}z-y\in C,$ (2)

for all $y\mathrm{,}z\in {\mathbb{R}}^{p}$ . We consider the following multiobjective optimization problem:

$\text{minimize}\text{\hspace{0.17em}}f\left(x\right)\text{\hspace{0.17em}}\text{subject}\text{\hspace{0.17em}}\text{to}\text{\hspace{0.17em}}x\in \Omega \mathrm{.}$ (3)

Definition 1 [10] We define the (Clarke’s) generalized Jacobian of f at $\stackrel{\xaf}{x}\in \Omega $ as follows:

$\partial f\left(\stackrel{\xaf}{x}\right)\mathrm{:}=\text{co}\left\{\underset{n\to \infty}{\mathrm{lim}}Jf\left({x}_{n}\right)\mathrm{:}{x}_{n}\to \stackrel{\xaf}{x}\text{,}Jf\left({x}_{n}\right)\text{exists}\right\}\mathrm{,}$ (4)

where $Jf\left(x\right)$ denotes the usual Jacobian matrix of f at x whenever f is Fréchet differentiable at x, and “co” denotes the convex hull of a set.

We will denote by ${\mathbb{R}}^{p\times n}$ the vector space of all $p\times n$ real matrices. It follows from ( [10] , Prop. 2.6.2(a)) that $\partial f\left(\stackrel{\xaf}{x}\right)$ is a nonempty convex compact subset of ${\mathbb{R}}^{p\times n}$ . The calculation of Clarke’s generalized Jacobian in the general case can be quite difficult due to the lack of exact calculus rules. For piecewise differentiable functions, however, there is a representation of the generalized Jacobian as the convex hull of a finite number of Jacobian matrices, which was obtained by Scholtes in [11] . To formulate this result, we need some additional definitions.

Definition 2 Let Ω be an open subset of ${\mathbb{R}}^{n}$ and let ${f}^{i}\mathrm{:}\Omega \to {\mathbb{R}}^{p}$ , $i=1,\cdots ,k$ , be a collection of continuous functions.

(i) A function $f\mathrm{:}\Omega \to {\mathbb{R}}^{p}$ is said to be a continuous selection of the functions ${f}^{1}\mathrm{,}\cdots \mathrm{,}{f}^{k}$ on the set $U\subset \Omega $ if f is continuous on U and $f\left(x\right)\in \left\{{f}^{1}\left(x\right),\cdots ,{f}^{k}\left(x\right)\right\}$ for every $x\in U$ .

(ii) A function
$f\mathrm{:}\Omega \to {\mathbb{R}}^{p}$ is called a PC^{1}-function if for every
$\stackrel{\xaf}{x}\in \Omega $ there exists an open neighborhood
$U\subset \Omega $ and a finite number of C^{1}-functions
${f}^{i}:U\to {\mathbb{R}}^{p},i=1,\cdots ,k$ , such that f is a continuous selection of
${f}^{1}\mathrm{,}\cdots \mathrm{,}{f}^{k}$ on U. In this case, we call
${f}^{1}\mathrm{,}\cdots \mathrm{,}{f}^{k}$ the selection functions for f at
$\stackrel{\xaf}{x}$ .

(iii) Let
$f\mathrm{:}\Omega \to {\mathbb{R}}^{p}$ be a PC^{1}-function and let
$\stackrel{\xaf}{x}\in U\subset \Omega $ (U open). Suppose that f is a continuous selection of
${f}^{1}\mathrm{,}\cdots \mathrm{,}{f}^{k}$ on U. We define the set of essentially active indices for f at
$\stackrel{\xaf}{x}$ as follows:

${I}_{f}^{e}\left(\stackrel{\xaf}{x}\right):=\left\{i\in \left\{\mathrm{1,}\cdots \mathrm{,}k\right\}\mathrm{:}\stackrel{\xaf}{x}\in \text{cl}\left(\mathrm{int}\left\{x\in U:f\left(x\right)={f}^{i}\left(x\right)\right\}\right)\right\}\mathrm{.}$ (5)

Proposition 3 ( [11] , Prop. 4.3.1) If Ω is an open subset of
${\mathbb{R}}^{n}$ and
$f\mathrm{:}\Omega \to {\mathbb{R}}^{p}$ is a PC^{1}-function with C^{1} selection functions
${f}^{i}:U\to {\mathbb{R}}^{p},i=1,\cdots ,k$ , where
$\stackrel{\xaf}{x}\in U\subset \Omega $ , then

$\partial f\left(\stackrel{\xaf}{x}\right)=\text{co}\left\{J{f}^{i}\left(\stackrel{\xaf}{x}\right)\mathrm{:}i\in {I}_{f}^{e}\left(\stackrel{\xaf}{x}\right)\right\}\mathrm{.}$ (6)
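To illustrate Proposition 3, consider the classical one-dimensional example $f\left(x\right)=|x|$ with $n=p=1$ (this worked example is not part of the original text, but is standard):

```latex
f(x) = |x|, \qquad f^1(x) = x, \qquad f^2(x) = -x, \qquad U = \mathbb{R}.
% Both selection functions are essentially active at \bar{x} = 0, since
\{x : f(x) = f^1(x)\} = [0,\infty), \qquad \{x : f(x) = f^2(x)\} = (-\infty,0],
% and 0 belongs to the closure of the interior of each of these sets, so
I_f^e(0) = \{1,2\}, \qquad
\partial f(0) = \operatorname{co}\{Jf^1(0),\, Jf^2(0)\}
             = \operatorname{co}\{1,-1\} = [-1,1],
% which agrees with Clarke's generalized gradient of |x| at 0.
```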

Definition 4 [8] Let $\stackrel{\xaf}{x}\in \Omega $ . We say that

(i) $\stackrel{\xaf}{x}$ is a vector critical point for problem (3) if there exist $z\in {C}^{+}\backslash \left\{{0}_{p}\right\}$ and $A\in \partial f\left(\stackrel{\xaf}{x}\right)$ such that

${z}^{\text{T}}A={0}_{n},$ (7)

where ${0}_{n}$ is the zero vector in ${\mathbb{R}}^{n}$ ;

(ii) $\stackrel{\xaf}{x}$ is an efficient solution for (3) if

$\left(f\left(\Omega \right)-f\left(\stackrel{\xaf}{x}\right)\right)\cap \left(-C\right)=\left\{{0}_{p}\right\};$ (8)

(iii) $\stackrel{\xaf}{x}$ is a weakly efficient solution for (3) if

$\left(f\left(\Omega \right)-f\left(\stackrel{\xaf}{x}\right)\right)\cap \left(-\mathrm{int}C\right)=\varnothing ;$ (9)

(iv) $\stackrel{\xaf}{x}$ is a local weakly efficient solution for (3) if there exists a neighborhood U of $\stackrel{\xaf}{x}$ such that

$\left(f\left(\Omega \cap U\right)-f\left(\stackrel{\xaf}{x}\right)\right)\cap \left(-\mathrm{int}C\right)=\varnothing .$ (10)

It is obvious that the implications $\left(\text{ii}\right)\Rightarrow \left(\text{iii}\right)\Rightarrow \left(\text{iv}\right)$ hold in Definition 4. The implication $\left(\text{iv}\right)\Rightarrow \left(\text{i}\right)$ (for locally Lipschitzian f) follows from [12] (Thm. 5.1 (i)(b)). Some opposite implications can be obtained under additional assumptions of generalized convexity type. In particular, Gutiérrez et al. [8] have identified the class of pseudoinvex functions for which $\left(\text{i}\right)\Rightarrow \left(\text{iii}\right)$ holds, and the class of strong pseudoinvex functions for which $\left(\text{i}\right)\Rightarrow \left(\text{ii}\right)$ holds.

Definition 5 [13] Let C be a nontrivial convex cone in ${\mathbb{R}}^{p}$ . A nonempty convex subset B of C is called a base for C if each nonzero element $z\in C$ has a unique representation of the form $z=\lambda b$ with $\lambda >0$ and $b\in B$ .

Remark 6 If B is a base of the nontrivial convex cone C, then ${0}_{p}\notin B$ .

Lemma 7 (a finite-dimensional version of [13] , Lemma 2.2.17) Let C be a nontrivial closed convex cone in ${\mathbb{R}}^{p}$ with $\mathrm{int}C\ne \varnothing $ . If $\stackrel{\xaf}{y}\in \mathrm{int}C$ , then the set

$B:=\left\{z\in {C}^{+}:\langle z,\stackrel{\xaf}{y}\rangle =1\right\}$ (11)

is a compact base for ${C}^{+}$ .

In the sequel, we consider a fixed vector $\stackrel{\xaf}{y}\in \mathrm{int}C$ and a base B for ${C}^{+}$ defined by (11). In order to define a global scalarization function for problem (3), we first consider the following mapping $h:{\mathbb{R}}^{p}\times {\mathbb{R}}^{p\times n}\to {\mathbb{R}}^{n}$ :

$h\left(y,A\right):={y}^{\text{T}}A.$ (12)

Lemma 8 A point $\stackrel{\xaf}{x}\in \Omega $ is a vector critical point for problem (3) if and only if

${0}_{n}\in h\left(B\times \partial f\left(\stackrel{\xaf}{x}\right)\right)\mathrm{.}$ (13)

Proof. If $\stackrel{\xaf}{x}\in \Omega $ is a vector critical point for problem (3), then equality (7) holds for some $z\in {C}^{+}\backslash \left\{{0}_{p}\right\}$ and $A\in \partial f\left(\stackrel{\xaf}{x}\right)$ . Since B is a base for ${C}^{+}$ , there exist $\lambda \mathrm{>0}$ and $b\in B$ such that $z=\lambda b$ . Then, by (7),

$h\left(b,A\right)={b}^{\text{T}}A={0}_{n},$ (14)

so that (13) holds. Conversely, if (13) holds, then (14) is true for some $b\in B$ and $A\in \partial f\left(\stackrel{\xaf}{x}\right)$ ; by Definition 5 and Remark 6, we have $b\in {C}^{+}\backslash \left\{{0}_{p}\right\}$ . Taking $z=b$ in Definition 4, we see that $\stackrel{\xaf}{x}$ is a vector critical point for (3). $\square $

For a nonempty subset S of ${\mathbb{R}}^{n}$ , let $\text{d}\left(\cdot \mathrm{,}S\right)\mathrm{:}{\mathbb{R}}^{n}\to \mathbb{R}$ be the distance function of S, defined as follows:

$\text{d}\left(x,S\right):=\mathrm{inf}\left\{\Vert x-u\Vert :u\in S\right\},$ (15)

where $\Vert \cdot \Vert $ denotes the Euclidean norm. We now introduce the following scalarization function $s:\Omega \to \left[0,+\infty \right)$ :

$s\left(x\right):=\text{d}\left({0}_{n},h\left(B\times \partial f\left(x\right)\right)\right).$ (16)

Note that $s$ depends on the choice of $\stackrel{\xaf}{y}$ . The name “scalarization function” is justified by the following.

Theorem 9 A point $\stackrel{\xaf}{x}\in \Omega $ is a vector critical point for problem (3) if and only if $s\left(\stackrel{\xaf}{x}\right)=0$ .

Proof. If $\stackrel{\xaf}{x}$ is a vector critical point for (3), then by Lemma 8, condition (13) holds, which gives $s\left(\stackrel{\xaf}{x}\right)=0$ . Conversely, suppose that $s\left(\stackrel{\xaf}{x}\right)=0$ . Since h is continuous and the sets B and $\partial f\left(\stackrel{\xaf}{x}\right)$ are compact in ${\mathbb{R}}^{p}$ and ${\mathbb{R}}^{p\times n}$ , respectively, the set $h\left(B\times \partial f\left(\stackrel{\xaf}{x}\right)\right)$ is also compact; hence it is closed. Therefore, the equality $s\left(\stackrel{\xaf}{x}\right)=0$ implies condition (13). $\square $

Having defined the scalarization function s, we can now replace problem (3) by the following scalar optimization problem:

$\text{minimize}\text{\hspace{0.17em}}s\left(x\right)\text{\hspace{0.17em}}\text{subject}\text{\hspace{0.17em}}\text{to}\text{\hspace{0.17em}}x\in \Omega \mathrm{.}$ (17)

Obviously, problems (3) and (17) are not equivalent because there may exist vector critical points which are not (weakly) efficient solutions for (3). Nevertheless, by solving problem (17) we can obtain some approximation of the set of solutions to (3).

Computing the distance function in (16) is not easy in the general case, but under additional assumptions on both C and f, it is possible to apply some existing algorithms to perform this task. The details are described below.

Definition 10 ( [14] , p. 170) A convex set D in ${\mathbb{R}}^{p}$ is called polyhedral if it can be expressed as the intersection of some finite collection of closed half-spaces, that is, there exist vectors ${b}^{i}\in {\mathbb{R}}^{p}$ and numbers ${\beta}_{i}$ such that

$D=\left\{y\in {\mathbb{R}}^{p}:\langle y,{b}^{i}\rangle \le {\beta}_{i},i=1,\cdots ,m\right\}.$ (18)

A convex cone which is a polyhedral set is called a polyhedral cone.

Theorem 11 Suppose that the ordering cone C in
${\mathbb{R}}^{p}$ is polyhedral and the function
$f\mathrm{:}\Omega \to {\mathbb{R}}^{p}$ is PC^{1}. Let
$\stackrel{\xaf}{y}\in \mathrm{int}C$ , let B be a base for
${C}^{+}$ defined by (11) and let h be the function defined by (12). Then, for each
$x\in \Omega $ , the set
$h\left(B\times \partial f\left(x\right)\right)$ is polyhedral, or equivalently, it can be represented as the convex hull of a finite number of points in
${\mathbb{R}}^{n}$ .

Proof. It follows from ( [14] , Thm. 19.1) that a convex set D in ${\mathbb{R}}^{p}$ is polyhedral if and only if it is finitely generated, which means that there exist vectors ${a}^{1},\cdots ,{a}^{l}$ such that, for a fixed integer k, $0\le k\le l$ , D consists of all the vectors of the form

$x={\lambda}_{1}{a}^{1}+\cdots +{\lambda}_{k}{a}^{k}+{\lambda}_{k+1}{a}^{k+1}+\cdots +{\lambda}_{l}{a}^{l},$ (19)

where

${\lambda}_{1}+\cdots +{\lambda}_{k}=1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\lambda}_{i}\ge 0\text{for}i=1,\cdots ,l.$ (20)

In particular, if D is bounded, then no ${\lambda}_{i}$ can be arbitrarily large, which implies that $k=l$ , and conditions (19) - (20) reduce to

$x\in \text{co}\left\{{a}^{1}\mathrm{,}\cdots \mathrm{,}{a}^{k}\right\}\mathrm{.}$

By assumption, C is polyhedral, hence, by [14] (Corollary 19.2.2), ${C}^{+}$ is also a polyhedral cone, which implies that its base B is a polyhedral set. By Proposition 3, $\partial f\left(x\right)$ is the convex hull of a finite collection of $p\times n$ matrices, so it is a polyhedral set in ${\mathbb{R}}^{p\times n}$ . It is easy to prove that the Cartesian product of two polyhedral sets is a polyhedral set and that the image of a polyhedral set under a linear transformation is a polyhedral set (see [15] , Proposition A.3.4). Therefore, $h\left(B\times \partial f\left(x\right)\right)$ is a polyhedral set in ${\mathbb{R}}^{n}$ . $\square $

Theorem 11 reduces the problem of computing the values $s\left(x\right)$ given by (16) to the problem of computing the Euclidean projection of ${0}_{n}$ onto the polyhedron $h\left(B\times \partial f\left(x\right)\right)$ . This is a particular case of a quadratic programming problem (see [16] , p. 398). There are also specialized algorithms designed for computing such projections (see [17] [18] ).
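The projection described above can be sketched in a few lines of code. The following is a minimal numerical sketch, not from the paper: the function name is my own, and a plain Frank-Wolfe iteration with exact line search stands in for the specialized projection algorithms of [17] [18].

```python
import numpy as np

def dist_origin_to_hull(points, iters=2000):
    """Euclidean distance from the origin to co{p_1, ..., p_k}.

    Once h(B x df(x)) is known through its finitely many generators
    (Theorem 11), the value s(x) is the optimal value of the QP
        minimize ||sum_i lam_i p_i||^2  over the unit simplex.
    Here the QP is solved by a plain Frank-Wolfe iteration.
    """
    P = np.asarray(points, dtype=float)   # k x n array of generators p_i
    x = P.mean(axis=0)                    # feasible start: the barycentre
    for _ in range(iters):
        i = int(np.argmin(P @ x))         # vertex minimizing the linearized cost
        d = P[i] - x                      # Frank-Wolfe direction
        dd = d @ d
        if dd == 0.0:
            break
        # exact line search for min over g in [0, 1] of ||x + g*d||^2
        g = min(1.0, max(0.0, -(x @ d) / dd))
        x = x + g * d
    return float(np.linalg.norm(x))

# the segment joining (1, 0) and (0, 1): the closest point to 0 is (1/2, 1/2)
print(dist_origin_to_hull([[1.0, 0.0], [0.0, 1.0]]))   # about 0.7071
```

Since the objective never increases along the exact line search, the iteration is safe to run for a fixed number of steps; for a production code one would use a dedicated QP solver instead.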

3. The Case of Two Objectives

For two objectives, under differentiability assumptions, it is possible to find some representation of the scalarization function s in terms of the gradients
$\nabla {f}_{1}$ and
$\nabla {f}_{2}$ . Let p = 2 and suppose that the mapping
$f=\left({f}_{1},{f}_{2}\right)$ is continuously differentiable on
${\mathbb{R}}^{n}$ . Denote by
$\nabla {f}_{i}\left(x\right)$ the gradient of f_{i} at x (i = 1, 2). Then (4) implies

$\partial f\left(\stackrel{\xaf}{x}\right)=\left\{Jf\left(\stackrel{\xaf}{x}\right)\right\}=\left[\begin{array}{c}\nabla {f}_{1}\left(\stackrel{\xaf}{x}\right)\\ \nabla {f}_{2}\left(\stackrel{\xaf}{x}\right)\end{array}\right].$ (21)

The following theorem will help to compute the scalarization function (16) for bi-objective problems.

Theorem 12 Let p = 2, $\stackrel{\xaf}{y}\in \mathrm{int}C$ , and let B be the compact base for ${C}^{+}$ defined by (11). Then there exist vectors ${b}^{i}=\left({b}_{1}^{i},{b}_{2}^{i}\right)\in B,i=1,2$ , such that

$h\left(B\times \partial f\left(\stackrel{\xaf}{x}\right)\right)=\text{co}\left\{{b}_{1}^{1}\nabla {f}_{1}\left(\stackrel{\xaf}{x}\right)+{b}_{2}^{1}\nabla {f}_{2}\left(\stackrel{\xaf}{x}\right),{b}_{1}^{2}\nabla {f}_{1}\left(\stackrel{\xaf}{x}\right)+{b}_{2}^{2}\nabla {f}_{2}\left(\stackrel{\xaf}{x}\right)\right\}\mathrm{.}$ (22)

Proof. It follows from (11) that B is a subset of some line in ${\mathbb{R}}^{2}$ . Moreover, by Lemma 7, B is compact and convex, so it must be a closed line segment. Denote by ${b}^{1}$ and ${b}^{2}$ the endpoints of B. Using (21) and the linearity of h with respect to the first argument, we obtain

$\begin{array}{c}h\left(B\times \partial f\left(\stackrel{\xaf}{x}\right)\right)=h\left(\text{co}\left\{{b}^{1},{b}^{2}\right\}\times \left\{Jf\left(\stackrel{\xaf}{x}\right)\right\}\right)\\ =h\left(\left\{\left(\lambda {b}^{1}+\left(1-\lambda \right){b}^{2},Jf\left(\stackrel{\xaf}{x}\right)\right):0\le \lambda \le 1\right\}\right)\\ =\left\{\lambda h\left({b}^{1},Jf\left(\stackrel{\xaf}{x}\right)\right)+\left(1-\lambda \right)h\left({b}^{2},Jf\left(\stackrel{\xaf}{x}\right)\right):0\le \lambda \le 1\right\}\\ =\text{co}\left\{h\left({b}^{1},Jf\left(\stackrel{\xaf}{x}\right)\right),h\left({b}^{2},Jf\left(\stackrel{\xaf}{x}\right)\right)\right\}\\ =\text{co}\left\{{b}_{1}^{1}\nabla {f}_{1}\left(\stackrel{\xaf}{x}\right)+{b}_{2}^{1}\nabla {f}_{2}\left(\stackrel{\xaf}{x}\right),{b}_{1}^{2}\nabla {f}_{1}\left(\stackrel{\xaf}{x}\right)+{b}_{2}^{2}\nabla {f}_{2}\left(\stackrel{\xaf}{x}\right)\right\}.\text{}\square \end{array}$

Pareto Optimization

We now consider the case of classical Pareto optimization, i.e., when $C={\mathbb{R}}_{+}^{2}$ . We have ${C}^{+}=C$ . Let $\stackrel{\xaf}{y}=\left(1,1\right)\in \mathrm{int}C$ ; then, by Lemma 7, the set

$B:=\left\{z\in {C}^{+}:{z}_{1}+{z}_{2}=1\right\}$

is a compact base for ${C}^{+}$ , and B is the closed line segment joining the two points ${b}^{1}=\left(1,0\right)$ and ${b}^{2}=\left(0,1\right)$ . According to Theorem 12, we have

$h\left(B\times \partial f\left(x\right)\right)=\text{co}\left\{\nabla {f}_{1}\left(x\right)\mathrm{,}\nabla {f}_{2}\left(x\right)\right\}\mathrm{,}$

hence, the scalarization function has the form

$s\left(x\right)=\text{d}\left(0,\text{co}\left\{\nabla {f}_{1}\left(x\right),\nabla {f}_{2}\left(x\right)\right\}\right).$

For any point $x\in {\mathbb{R}}^{n}$ , there are two possible cases:

(i) $\nabla {f}_{1}\left(x\right)=\nabla {f}_{2}\left(x\right)$ . Then $s\left(x\right)=\Vert \nabla {f}_{1}\left(x\right)\Vert =\Vert \nabla {f}_{2}\left(x\right)\Vert $ .

(ii) $\nabla {f}_{1}\left(x\right)\ne \nabla {f}_{2}\left(x\right)$ . Then $s\left(x\right)$ is the distance from 0 to the line segment S joining $\nabla {f}_{1}\left(x\right)$ and $\nabla {f}_{2}\left(x\right)$ .

We now consider case (ii). The line L passing through $\nabla {f}_{1}\left(x\right)$ and $\nabla {f}_{2}\left(x\right)$ is parametrized as $L\left(t\right)=b+ta$ where $b:=\nabla {f}_{1}\left(x\right)$ is a point on the line, and $a:=\nabla {f}_{2}\left(x\right)-\nabla {f}_{1}\left(x\right)$ is the line direction. The closest point on the line L to 0 is the projection of 0 onto L which is equal to

$q:=b+{t}_{0}a\text{,}\text{\hspace{0.17em}}\text{where}\text{\hspace{0.17em}}{t}_{0}=-\frac{\langle a,b\rangle}{\langle a,a\rangle}=-\frac{\langle a,b\rangle}{{\Vert a\Vert}^{2}}\mathrm{.}$

Using the same parametrization, we can represent the line segment S as follows:

$S=\left\{b+ta:0\le t\le 1\right\}\mathrm{.}$

Therefore, if ${t}_{0}\le 0$ , then the point in S closest to 0 is b. Similarly, if ${t}_{0}\ge 1$ , then the point in S closest to 0 is $b+a$ . Finally, if $0<{t}_{0}<1$ , then the point in S closest to 0 is q. Hence, the function s can be described as follows:

$s\left(x\right)=\{\begin{array}{lll}\Vert b\Vert \hfill & \text{if}\hfill & {t}_{0}\le \mathrm{0,}\hfill \\ \Vert b+{t}_{0}a\Vert \hfill & \text{if}\hfill & 0<{t}_{0}<\mathrm{1,}\hfill \\ \Vert b+a\Vert \hfill & \text{if}\hfill & {t}_{0}\ge 1.\hfill \end{array}$ (23)

Taking into account the definitions of $a$ and $b$ above, we see that this scalarization function depends on the values of gradients of ${f}_{1}$ and ${f}_{2}$ only, so it is easily computable.
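Formula (23) translates directly into code. The following is a hedged sketch (the function name is illustrative; the gradients are passed in as already-evaluated vectors):

```python
import numpy as np

def s_pareto(grad1, grad2):
    """Scalarization function (23) for C = R^2_+ (Pareto ordering).

    grad1, grad2 are the gradient vectors of f1 and f2 at a point x;
    the value is the distance from the origin to the segment joining them.
    """
    b = np.asarray(grad1, dtype=float)        # b := grad f1(x)
    a = np.asarray(grad2, dtype=float) - b    # a := grad f2(x) - grad f1(x)
    aa = a @ a
    if aa == 0.0:                             # case (i): the gradients coincide
        return float(np.linalg.norm(b))
    t0 = -(a @ b) / aa                        # projection parameter onto the line L
    t0 = min(1.0, max(0.0, t0))               # clamp to the segment, as in (23)
    return float(np.linalg.norm(b + t0 * a))

# distance from 0 to the segment joining (1, 0) and (0, 1)
print(s_pareto([1.0, 0.0], [0.0, 1.0]))   # about 0.7071
```

Clamping t0 to [0, 1] merges the three cases of (23) into a single expression.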

Example 13 (problem FON in [19] , p. 187) Let $f=\left({f}_{1},{f}_{2}\right):{\mathbb{R}}^{3}\to {\mathbb{R}}^{2}$ be defined by

${f}_{1}\left(x\right)=1-\mathrm{exp}\left(-{\displaystyle \underset{i=1}{\overset{3}{\sum}}}{\left({x}_{i}-\frac{1}{\sqrt{3}}\right)}^{2}\right),$ (24)

${f}_{2}\left(x\right)=1-\mathrm{exp}\left(-{\displaystyle \underset{i=1}{\overset{3}{\sum}}}{\left({x}_{i}+\frac{1}{\sqrt{3}}\right)}^{2}\right).$ (25)

The authors of [19] consider problem (3), where $\Omega ={\left[-4,4\right]}^{3}$ , and state that the set of efficient (Pareto) solutions for this problem is equal to the set of points $x=\left({x}_{1},{x}_{2},{x}_{3}\right)$ satisfying

${x}_{1}={x}_{2}={x}_{3}\in \left[-1/\sqrt{3},1/\sqrt{3}\right].$ (26)

Here the set $\Omega $ is closed (unlike in the rest of this paper), but this constraint is in fact inessential, and the problem can also be considered on the whole space ${\mathbb{R}}^{3}$ . Computing the partial derivatives of ${f}_{1}$ and ${f}_{2}$ , we obtain from (24) - (25)

$\frac{\partial {f}_{1}}{\partial {x}_{j}}\left(x\right)=2\left({x}_{j}-\frac{1}{\sqrt{3}}\right)\mathrm{exp}\left(-{\displaystyle \underset{i=1}{\overset{3}{\sum}}}{\left({x}_{i}-\frac{1}{\sqrt{3}}\right)}^{2}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,2,3,$ (27)

$\frac{\partial {f}_{2}}{\partial {x}_{j}}\left(x\right)=2\left({x}_{j}+\frac{1}{\sqrt{3}}\right)\mathrm{exp}\left(-{\displaystyle \underset{i=1}{\overset{3}{\sum}}}{\left({x}_{i}+\frac{1}{\sqrt{3}}\right)}^{2}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,2,3.$ (28)
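As an illustrative cross-check (a sketch, not the authors' Maple program; function names are my own), formulae (27) - (28) can be combined with (23):

```python
import numpy as np

C3 = 1.0 / np.sqrt(3.0)

def grad_f1(x):
    """Gradient of f1, formula (27)."""
    x = np.asarray(x, dtype=float)
    return 2.0 * (x - C3) * np.exp(-np.sum((x - C3) ** 2))

def grad_f2(x):
    """Gradient of f2, formula (28)."""
    x = np.asarray(x, dtype=float)
    return 2.0 * (x + C3) * np.exp(-np.sum((x + C3) ** 2))

def s(x):
    """Scalarization function via (23): distance from 0 to the
    segment joining grad_f1(x) and grad_f2(x)."""
    b = grad_f1(x)
    a = grad_f2(x) - b
    aa = a @ a
    if aa == 0.0:                       # case (i): equal gradients
        return float(np.linalg.norm(b))
    t0 = min(1.0, max(0.0, -(a @ b) / aa))
    return float(np.linalg.norm(b + t0 * a))
```

On the Pareto set (26) both gradients are multiples of the vector (1, 1, 1) with opposite signs, so the segment between them passes through the origin and s vanishes there, e.g. at (0, 0, 0), while s is positive at points off the set, e.g. at (1, 0, 0).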

We have designed a program in Maple to compute $s\left(x\right)$ , using formulae (23) and (27) - (28). This program consists of three nested loops for the values of the variables ${x}_{1}\mathrm{,}{x}_{2}\mathrm{,}{x}_{3}$ , each variable taking values from −4 to 4 in steps of 0.01. We have obtained $s\mathrm{(}x\mathrm{)=0}$ for each x satisfying (26), and $s\left(x\right)>0$ for all other points x. However, there are some points x for which the values $s\left(x\right)$ are very small; the smallest value obtained is

$s\left(4,4,4\right)=s\left(-4,-4,-4\right)=\alpha :=0.79802094823\times {10}^{-26}.$ (29)

Apart from the Pareto optimal solutions (26), no other points with $s\left(x\right)<\alpha $ were found.

This example shows that one must be careful when using global optimization algorithms to minimize s because points like the ones appearing in (29) can be easily misclassified as vector critical points.

4. Conclusion

We have presented a new scalarization method for solving multiobjective optimization problems which is based on computing the Euclidean distance from the origin to some subset determined by the generalized Jacobian of the mapping being optimized. This article contains the main underlying theory and only some preliminary numerical computations pertaining to this method. More numerical results will be presented in a subsequent paper.

Acknowledgements

The authors are grateful to an anonymous referee for his/her comments which have improved the quality of the paper.

Cite this paper

Rahmo, E.-D. and Studniarski, M. (2017) A New Global Scalarization Method for Multiobjective Optimization with an Arbitrary Ordering Cone. Applied Mathematics, 8, 154-163. https://doi.org/10.4236/am.2017.82013

References

- 1. Ehrgott, M. and Gandibleux, X. (Eds.) (2002) Multiple Criteria Optimization: State of the Art Annotated Bibliography Surveys. Kluwer, Boston.
- 2. Wierzbicki, A.P. (1986) On the Completeness and Constructiveness of Parametric Characterizations to Vector Optimization Problems. Operations-Research-Spektrum, 8, 73-87. https://doi.org/10.1007/BF01719738
- 3. Bakhtin, V.I. and Gorokhovik, V.V. (2010) First and Second Order Optimality Conditions for Vector Optimization Problems on Metric Spaces. Proceedings of the Steklov Institute of Mathematics, 269, S28-S39. https://doi.org/10.1134/s0081543810060040
- 4. Ginchev, I., Guerraggio, A. and Rocca, M. (2005) Isolated Minimizers and Proper Efficiency for C0,1 Constrained Vector Optimization Problems. Journal of Mathematical Analysis and Applications, 309, 353-368. https://doi.org/10.1016/j.jmaa.2005.01.041
- 5. Pascoletti, A. and Serafini, P. (1984) Scalarizing Vector Optimization Problems. Journal of Optimization Theory and Applications, 42, 499-524. https://doi.org/10.1007/BF00934564
- 6. Eichfelder, G. (2008) Adaptive Scalarization Methods in Multiobjective Optimization. Springer, Berlin. https://doi.org/10.1007/978-3-540-79159-1
- 7. Eichfelder, G. (2014) Variable Ordering Structures in Vector Optimization. Springer, Berlin.
- 8. Gutiérrez, C., Jiménez, B., Novo, V. and Ruiz-Garzon, G. (2016) Vector Critical Points and Efficiency in Vector Optimization with Lipschitz Functions. Optimization Letters, 10, 47-62. https://doi.org/10.1007/s11590-015-0850-2
- 9. Nakayama, H., Yun, Y. and Yoon, M. (2009) Sequential Approximate Multiobjective Optimization Using Computational Intelligence. Springer, Berlin.
- 10. Clarke, F.H. (1983) Optimization and Nonsmooth Analysis. John Wiley & Sons, New York.
- 11. Scholtes, S. (2012) Introduction to Piecewise Differentiable Equations. Springer, New York. https://doi.org/10.1007/978-1-4614-4340-7
- 12. Guerraggio, A. and Luc, T. (2001) Optimality Conditions for C1,1 Vector Optimization Problems. Journal of Optimization Theory and Applications, 109, 615-629. https://doi.org/10.1023/A:1017519922669
- 13. Gopfert, A., Riahi, H., Tammer, Ch. and Zalinescu, C. (2003) Variational Methods in Partially Ordered Spaces. Springer, New York.
- 14. Rockafellar, R.T. (1970) Convex Analysis. Princeton University Press, Princeton.
- 15. Lange, K. (2013) Optimization. 2nd Edition, Springer, New York. https://doi.org/10.1007/978-1-4614-5838-8
- 16. Boyd, S. and Vandenberghe, L. (2004) Convex Optimization. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511804441
- 17. Arioli, M., Laratta, A. and Menchi, O. (1984) Numerical Computation of the Projection of a Point onto a Polyhedron. Journal of Optimization Theory and Applications, 43, 495-525. https://doi.org/10.1007/BF00935003
- 18. Mückeley, M. (1992) Computing the Vector in the Convex Hull of a Finite Set of Points Having Minimal Length. Optimization, 26, 15-26. https://doi.org/10.1080/02331939208843839
- 19. Deb, K., Pratap, A., Agarwal, S. and Meyarivan, T. (2002) A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6, 182-197. https://doi.org/10.1109/4235.996017