
Journal of Data Analysis and Information Processing
Vol.06 No.03(2018), Article ID:86790,15 pages
10.4236/jdaip.2018.63008

Fuzzy Regression Model Based on Fuzzy Distance Measure

Jun Deng, Qiujun Lu

University of Shanghai for Science and Technology, Shanghai, China

Received: July 16, 2018; Accepted: August 19, 2018; Published: August 22, 2018

ABSTRACT

Some existing fuzzy regression methods impose special requirements on the object of study, such as assuming that the observed values are symmetric triangular fuzzy numbers or constraining the regression parameters to be non-negative. In this paper, we propose a left-right fuzzy regression method that is applicable to various forms of observed values. We present a fuzzy distance and a partial order between two left-right (LR) fuzzy numbers, take the mean fuzzy distance between the observed and estimated values as the mean fuzzy error, and then minimize the mean fuzzy error to obtain the regression parameters. We adopt two criteria, the mean fuzzy error (compared by means of the partial order) and SSE, to compare the performance of our proposed method with that of other methods. Finally, four different types of numerical examples are given to illustrate the feasibility and wide applicability of the proposed method.

Keywords:

LR Fuzzy Number, LR Fuzzy Distance Measure, Mean Fuzzy Error, Fuzzy Regression

1. Introduction

As a widely used statistical method, regression analysis plays an increasingly important role in model building and predictive evaluation. Traditional regression analysis requires precise data, but in practice the information obtained is often imprecise. In this regard, Zadeh founded fuzzy set theory in 1965, which provided theoretical support for fuzzy regression analysis. In 1980, Dubois proposed using LR fuzzy numbers to represent the inputs, outputs or parameters of a system. On this basis, scholars combined it with the extension principle to promote the application of the fuzzy linear regression (FLR) model to practical problems.

Tanaka was the first to set up an FLR model whose parameters were fuzzy numbers; it adopted a linear programming method for parameter estimation with the criterion of minimizing a fuzziness index. They regarded the fuzzy parameters as reflecting the estimation deviations in the linear function. However, Tanaka’s model cannot be widely applied because it requires crisp input, while in some cases the input is a fuzzy number. This approach was later improved by Sakawa, who proposed a multi-objective linear regression analysis with fuzzy input and fuzzy output, and established an interactive decision-making method based on linear programming to solve the multi-objective linear programming problem.

In 1988, Diamond set up an FLR model with triangular fuzzy numbers, adopting the least squares method for parameter estimation with the criterion of minimizing the deviations between the observed and estimated values. In 2002, Wu proposed a method for obtaining fuzzy least squares estimators based on the extension principle in fuzzy set theory. Starting from the usual least squares estimators, he also constructed the membership functions of the fuzzy least squares estimators. In the same year, Yang combined linear programming based methods with fuzzy least squares methods and proposed two methods for estimating fuzzy parameters: approximate-distance and interval-distance fuzzy least squares. Modarres regarded the predictive power of the linear programming approach as unsatisfactory and the computation of the fuzzy least squares method as complicated; they therefore developed three mathematical programming models, called the risk-neutral, risk-averse and risk-seeking problems.

In recent years, scholars have improved and extended fuzzy regression methods, but some methods are not rigorous in their error estimation, and some impose special requirements on the object of study, such as requiring the observations to be symmetric triangular fuzzy numbers or constraining the regression parameters to be non-negative. Moreover, in multivariate fuzzy linear regression, their results are unsatisfactory when there is a huge difference in magnitude between two inputs. Therefore, Roldan proposed the concept of a family of fuzzy semi-distances between fuzzy numbers and combined it with the least squares method to obtain regression parameters. Then G. Alfonso introduced a fuzzy regression procedure involving a class of fuzzy numbers defined by certain level sets, called finite fuzzy numbers.

The main purpose of this paper is to introduce a fuzzy regression method using the fuzzy distance between left-right (LR) fuzzy numbers. To this end, Section 2 presents some preliminary theory on the left-right fuzzy distance and the partial order. In Section 3, some properties of the LR fuzzy distance are studied and a concrete formula for calculating the fuzzy distance is determined. In Section 4, the fuzzy distance is used as the mean fuzzy error, the minimized mean fuzzy error is taken as the objective function, and stepwise regression is used to solve it. In Section 5, we apply the proposed model and earlier models to four different types of examples and compare the models by SSE and by the mean fuzzy error under the partial order.

2. Preliminaries

We will use the following notation about fuzzy numbers. Let $\mathbb{I}=\left[0,1\right]$, ${ℝ}_{0}^{-}=\left(-\infty ,0\right]$ and ${ℝ}_{0}^{+}=\left[0,\infty \right)$ .

Definition 2.1. (Aguilar ) A fuzzy set on $ℝ$ is a map $A:ℝ\to \mathbb{I}$ . A fuzzy number (FN, for short) on $ℝ$ is a fuzzy set $A$ on $ℝ$ such that, for all $\alpha \in \left[0,1\right]$, the α-level set (or α-cut) ${A}_{\alpha }=\left\{x\in ℝ:A\left(x\right)\ge \alpha \right\}$ is a non-empty, closed subinterval of $ℝ$ . The kernel of an FN $A$ is $\mathrm{ker}A={A}_{1}$ . We will only consider FNs with compact support; the support of $A$ is the closure $\text{supp}A=\stackrel{¯}{\left\{x\in ℝ:A\left(x\right)>0\right\}}$ .

Let $F$ be the set of all FNs (with compact support). Thus, for each $\alpha \in \mathbb{I}$ the α-level set ${A}_{\alpha }$ of $A$ is a compact subinterval of $ℝ$ that can be expressed as ${A}_{\alpha }=\left[{\underset{_}{a}}_{\alpha },{\stackrel{¯}{a}}_{\alpha }\right]$, where ${\underset{_}{a}}_{\alpha }$ is the inferior extreme and ${\stackrel{¯}{a}}_{\alpha }$ is the superior extreme of the interval ${A}_{\alpha }$ . Following this notation, we will also denote the support of $A$ by ${A}_{0}=\left[{\underset{_}{a}}_{0},{\stackrel{¯}{a}}_{0}\right]$ . The number ${D}_{c}A=\left({\underset{_}{a}}_{1}+{\stackrel{¯}{a}}_{1}\right)/2$ is the center of the FN $A$, and its radius is $\text{spr}A=\left({\stackrel{¯}{a}}_{1}-{\underset{_}{a}}_{1}\right)/2$ .

Proposition 2.1. (Wu  ) Let $A$ and $B$ be two fuzzy numbers. Then $A\oplus B$ and $A\otimes B$ are fuzzy numbers. Furthermore, we have

${\left(A\oplus B\right)}_{\alpha }=\left[{\underset{_}{a}}_{\alpha }+{\underset{_}{b}}_{\alpha },{\stackrel{¯}{a}}_{\alpha }+{\stackrel{¯}{b}}_{\alpha }\right]$ (1)

$\begin{array}{l}{\left(A\otimes B\right)}_{\alpha }=\left[\mathrm{min}\left({\underset{_}{a}}_{\alpha }\cdot {\underset{_}{b}}_{\alpha },{\underset{_}{a}}_{\alpha }\cdot {\stackrel{¯}{b}}_{\alpha },{\stackrel{¯}{a}}_{\alpha }\cdot {\underset{_}{b}}_{\alpha },{\stackrel{¯}{a}}_{\alpha }\cdot {\stackrel{¯}{b}}_{\alpha }\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{max}\left({\underset{_}{a}}_{\alpha }\cdot {\underset{_}{b}}_{\alpha },{\underset{_}{a}}_{\alpha }\cdot {\stackrel{¯}{b}}_{\alpha },{\stackrel{¯}{a}}_{\alpha }\cdot {\underset{_}{b}}_{\alpha },{\stackrel{¯}{a}}_{\alpha }\cdot {\stackrel{¯}{b}}_{\alpha }\right)\right]\end{array}$ (2)
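The α-cut arithmetic in (1) and (2) translates directly into code. A minimal sketch in Python; the pair representation of a cut and the helper names are our own illustration, not part of the paper:

```python
# Sketch of the alpha-cut arithmetic in (1) and (2).
# An alpha-cut [lo, hi] is represented as a pair (lo, hi).

def cut_add(a, b):
    """(A + B)_alpha = [a_lo + b_lo, a_hi + b_hi], as in (1)."""
    return (a[0] + b[0], a[1] + b[1])

def cut_mul(a, b):
    """(A x B)_alpha from (2): min/max over the four endpoint products."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))
```

For instance, multiplying the cuts [-1, 2] and [3, 4] gives [-4, 8], since the smallest and largest endpoint products are -4 and 8.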

Definition 2.2. (Dubois  ) A (generalized) left-right fuzzy number (LRFN) is an FN $A={\left({a}_{1}/{a}_{2}/{a}_{3}/{a}_{4}\right)}_{LR}$, where ${a}_{1},{a}_{2},{a}_{3},{a}_{4}\in ℝ$ (corners of $A$ ), ${a}_{1}\le {a}_{2}\le {a}_{3}\le {a}_{4}$, defined by

$A\left(x\right)=\left\{\begin{array}{ll}L\left(\frac{x-{a}_{1}}{{a}_{2}-{a}_{1}}\right)& {a}_{1}<x<{a}_{2}\\ 1& {a}_{2}\le x\le {a}_{3}\\ R\left(\frac{{a}_{4}-x}{{a}_{4}-{a}_{3}}\right)& {a}_{3}<x<{a}_{4}\\ 0& \text{otherwise}\end{array}\right.$ (3)

where $L,R:\mathbb{I}\to \mathbb{I}$ are strictly increasing, continuous mappings such that $L\left(0\right)=R\left(0\right)=0$ and $L\left(1\right)=R\left(1\right)=1$ . Clearly, the kernel of $A$ is $\left[{a}_{2},{a}_{3}\right]$ and its support is $\left[{a}_{1},{a}_{4}\right]$ . Let ${F}_{LR}$ be the family of all LRFNs.

Triangular fuzzy numbers (TFNs) are special cases of LRFNs with $L\left(x\right)=R\left(x\right)=x$ for all $x\in \mathbb{I}$ and ${a}_{1}\le {a}_{2}={a}_{3}\le {a}_{4}$ . For short, we will denote a triangular fuzzy number by $A={\left({a}_{1}/{a}_{2}/{a}_{4}\right)}_{T}$ . Let ${F}_{T}$ be the family of all TFNs.

Proposition 2.2. Given $\alpha \in \mathbb{I}$, strictly increasing continuous mappings $L,R:\mathbb{I}\to \mathbb{I}$, and ${a}_{1}\le {a}_{2}\le {a}_{3}\le {a}_{4}\in ℝ$, there exists a unique LRFN $A$ such that

${\underset{_}{a}}_{\alpha }=\underset{_}{a}\left(\alpha \right)={a}_{1}+\left({a}_{2}-{a}_{1}\right){L}^{-1}\left(\alpha \right)$ (4)

${\stackrel{¯}{a}}_{\alpha }=\stackrel{¯}{a}\left(\alpha \right)={a}_{4}-\left({a}_{4}-{a}_{3}\right){R}^{-1}\left(\alpha \right)$ (5)
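A minimal sketch of (4) and (5), assuming the inverse shape functions are supplied directly; the identity defaults correspond to a triangular fuzzy number, where $L=R=\mathrm{id}$:

```python
# Sketch of (4)-(5): alpha-cut endpoints of an LRFN A = (a1/a2/a3/a4)_LR.
# L_inv and R_inv are the inverse shape functions; the defaults (identity)
# correspond to a triangular fuzzy number.

def alpha_cut(a1, a2, a3, a4, alpha, L_inv=lambda t: t, R_inv=lambda t: t):
    lo = a1 + (a2 - a1) * L_inv(alpha)   # (4)
    hi = a4 - (a4 - a3) * R_inv(alpha)   # (5)
    return lo, hi
```

At α = 0 this recovers the support [a1, a4] and at α = 1 the kernel [a2, a3], as expected.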

Definition 2.3. (Alfonso ) A function $X:\Omega \to {F}_{LR}$ is a left-right fuzzy random variable if its representation ${\left({x}_{1}/{x}_{2}/{x}_{3}/{x}_{4}\right)}_{LR}:\Omega \to {ℝ}^{4}$ is a random vector. The expected value of a left-right fuzzy random variable $\stackrel{˜}{X}$ is the unique fuzzy set $E\left[\stackrel{˜}{X}\right]$ in ${F}_{LR}$ whose representation is ${\left(E{x}_{1}/E{x}_{2}/E{x}_{3}/E{x}_{4}\right)}_{LR}$ .

A partition of the interval $\mathbb{I}$ is a set $P=\left\{{\delta }_{0},{\delta }_{1},\cdots ,{\delta }_{n}\right\}$ such that $0={\delta }_{0}<{\delta }_{1}<\cdots <{\delta }_{n}=1$ . The simplest partition of $\mathbb{I}$ is ${P}_{0}=\left\{0={\delta }_{0}<{\delta }_{1}=1\right\}$ . If $P={\left\{{\delta }_{i}\right\}}_{i=0}^{n}$ is a partition of $\mathbb{I}$ and $f:S\to ℝ$ is a mapping defined on $S\supseteq \mathbb{I}$, we will denote, for all $i\in \left\{1,2,\cdots ,n\right\}$,

$\Delta {f}_{i}=\Delta {f}_{\left[{\delta }_{i-1},{\delta }_{i}\right]}=\left[f\left({\delta }_{i}\right)-f\left({\delta }_{i-1}\right)\right]/\left({\delta }_{i}-{\delta }_{i-1}\right)$ (6)

Therefore, applying (6) to the α-cut endpoint functions (4)-(5) of $A={\left({a}_{1}/{a}_{2}/{a}_{3}/{a}_{4}\right)}_{LR}$, we have

$\Delta {\underset{_}{a}}_{i}=\Delta {\underset{_}{a}}_{\left[{\delta }_{i-1},{\delta }_{i}\right]}=\frac{\underset{_}{a}\left({\delta }_{i}\right)-\underset{_}{a}\left({\delta }_{i-1}\right)}{{\delta }_{i}-{\delta }_{i-1}}=\frac{\left({a}_{2}-{a}_{1}\right)\left[{L}^{-1}\left({\delta }_{i}\right)-{L}^{-1}\left({\delta }_{i-1}\right)\right]}{{\delta }_{i}-{\delta }_{i-1}}$ (7)

$\Delta {\stackrel{¯}{a}}_{i}=\Delta {\stackrel{¯}{a}}_{\left[{\delta }_{i-1},{\delta }_{i}\right]}=\frac{\stackrel{¯}{a}\left({\delta }_{i}\right)-\stackrel{¯}{a}\left({\delta }_{i-1}\right)}{{\delta }_{i}-{\delta }_{i-1}}=\frac{\left({a}_{4}-{a}_{3}\right)\left[{R}^{-1}\left({\delta }_{i-1}\right)-{R}^{-1}\left({\delta }_{i}\right)\right]}{{\delta }_{i}-{\delta }_{i-1}}$ (8)
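These difference quotients can be computed directly from the corners. A sketch under the same conventions as above (inverse shape functions passed in, identity by default):

```python
# Sketch of (7)-(8): difference quotients of the alpha-cut endpoints over
# one subinterval [d_prev, d_i] of a partition of [0, 1].

def delta_lower(a1, a2, d_prev, d_i, L_inv=lambda t: t):
    # (7): (a2 - a1) * [L^{-1}(d_i) - L^{-1}(d_prev)] / (d_i - d_prev)
    return (a2 - a1) * (L_inv(d_i) - L_inv(d_prev)) / (d_i - d_prev)

def delta_upper(a3, a4, d_prev, d_i, R_inv=lambda t: t):
    # (8): (a4 - a3) * [R^{-1}(d_prev) - R^{-1}(d_i)] / (d_i - d_prev)
    return (a4 - a3) * (R_inv(d_prev) - R_inv(d_i)) / (d_i - d_prev)
```

Note that the lower quotient is non-negative and the upper quotient non-positive, reflecting that the left endpoint rises and the right endpoint falls as α increases.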

Definition 2.4. (Hierro  ) Let ${0}_{S}$ be a point of a set S provided with a partial order $⊑$ . Consider the set ${S}_{0,⊑}^{+}=\left\{x\in S{:0}_{S}⊑x\right\}$ and let $s:{S}_{0,⊑}^{+}×{S}_{0,⊑}^{+}\to {S}_{0,⊑}^{+}$ be a mapping. A distance function on $\left(S{,0}_{S},⊑,s\right)$ (or a metric) is a mapping $\rho :S×S\to {S}_{0,⊑}^{+}$ verifying, for all $x,y,z\in S$, we have

1) $\rho \left(x,x\right)={0}_{S}$ ;

2) if $\rho \left(x,y\right)={0}_{S}$, then $x=y$ ;

3) $\rho \left(x,y\right)=\rho \left(y,x\right)$ ;

4) $\rho \left(x,z\right)⊑s\left(\rho \left(x,y\right),\rho \left(y,z\right)\right)$ .

We also say that $\left(S,{0}_{S},s\right)$ is a metric space w.r.t. the partial order $⊑$ . The function $\rho$ is:

1) a pseudometric if it satisfies 1), 3) and 4);

2) a semimetric (on $\left(S{,0}_{S}\right)$ ) if it satisfies 1) - 3);

3) a pseudosemimetric (on $\left(S{,0}_{S}\right)$ ) if it satisfies 1) and 3).

Definition 2.5. (Hierro ) For $A,B\in {F}_{LR}$, define $A\preccurlyeq B$ w.r.t. $\left({D}_{c},P\right)$ if and only if ${D}_{c}A\le {D}_{c}B$, $\text{spr}A\le \text{spr}B$, $\Delta {\underset{_}{a}}_{i}\le \Delta {\underset{_}{b}}_{i}$ and $\Delta {\stackrel{¯}{a}}_{i}\ge \Delta {\stackrel{¯}{b}}_{i}$ for all $i\in \left\{1,2,\cdots ,n\right\}$ .

Theorem 2.1. (Hierro ) For $A,B,C\in {F}_{LR}$, if $A\preccurlyeq B$, then $A+C\preccurlyeq B+C$ and $rA\preccurlyeq rB$ for all $r>0$ .

Theorem 2.2. (Hierro ) The relation $\preccurlyeq$ on ${F}_{LR}$ w.r.t. $\left({D}_{c},P\right)$ is reflexive, transitive and antisymmetric; hence $\preccurlyeq$ is a partial order on ${F}_{LR}$ .

Definition 2.6. (Hierro ) Let $q,{q}_{0}\ge 0$, let ${q}_{1},{q}_{2}:\mathbb{I}\to \mathbb{I}$ be two standard negations on $\mathbb{I}$ and let ${\varphi }_{0}:ℝ×ℝ\to {ℝ}_{0}^{+}$, $\psi :{ℝ}_{0}^{+}×{ℝ}_{0}^{+}\to {ℝ}_{0}^{+}$, ${\left\{{\varphi }_{i}:{ℝ}_{0}^{+}×{ℝ}_{0}^{+}\to {ℝ}_{0}^{+}\right\}}_{i=1}^{n}$ and ${\left\{{\phi }_{i}:{ℝ}_{0}^{-}×{ℝ}_{0}^{-}\to {ℝ}_{0}^{+}\right\}}_{i=1}^{n}$ be pseudo-semimetrics on their respective domains. For $A,B\in {F}_{LR}$ and $\alpha \in \mathbb{I}$, define

${\underset{_}{D\left(A,B\right)}}_{\alpha }={q}_{0}{\varphi }_{0}\left({D}_{c}A,{D}_{c}B\right)-q\psi \left(sprA,sprB\right)-{q}_{1}\left(\alpha \right)\sum _{i=1}^{n}\text{ }{\varphi }_{i}\left(\Delta {\underset{_}{a}}_{i},\Delta {\underset{_}{b}}_{i}\right)$ (9)

${\overline{D\left(A,B\right)}}_{\alpha }={q}_{0}{\varphi }_{0}\left({D}_{c}A,{D}_{c}B\right)+q\psi \left(sprA,sprB\right)+{q}_{2}\left(\alpha \right)\sum _{i=1}^{n}\text{ }{\phi }_{i}\left(\Delta {\overline{a}}_{i},\Delta {\overline{b}}_{i}\right)$ (10)

and let $D\left(A,B\right)$ be the unique LRFN determined by these α-cuts.

3. Several Characterizations of the Fuzzy Distance and a Distance Measure Between LRFNs

Theorem 3.1. If ${q}_{1},{q}_{2}$ are the standard negation, $A,B\in {F}_{LR}$, then $D\left(A,B\right)\in {F}_{LR}$ . In addition, if ${q}_{0},q>0$, then $D:{F}_{LR}×{F}_{LR}\to {F}_{LR}$ is a semimetric on $\left({F}_{LR},\stackrel{˜}{0}\right)$ .

Proposition 3.1. (Hierro  ) If ${q}_{0},q>0$, and ${q}_{1},{q}_{2}$ are the standard negation, then D verifies the following properties for $A,B,C\in F$ :

1) $D\left(A,A\right)=\stackrel{˜}{0}$ ;

2) if $D\left(A,B\right)=\stackrel{˜}{0}$, then $A=B$ ;

3) $D\left(A,B\right)=D\left(B,A\right)$ ;

4) $D\left(A,C\right)\preccurlyeq D\left(A,B\right)+D\left(B,C\right)$ .

whatever the metrics ${\varphi }_{0},\psi ,{\varphi }_{i},{\phi }_{i}$ and the partition $P$ .

Therefore, D is a metric on $\left({F}_{LR},\stackrel{˜}{0}\right)$ .

Proposition 3.2. Let D be a metric on $\left({F}_{LR},\stackrel{˜}{0}\right)$, $A,B,C\in {F}_{LR}$, then we have:

1) $D\left(\stackrel{˜}{1},\stackrel{˜}{0}\right)={q}_{0}{\varphi }_{0}\left(1,0\right)$ ;

2) If $A\preccurlyeq B\preccurlyeq C,\rho \left(x,y\right)\le \rho \left(x,z\right)$ when $|y-x|\le |z-x|$ where $\rho ={\varphi }_{0},\psi ,{\varphi }_{i},{\phi }_{i}$, then $D\left(A,B\right)\preccurlyeq D\left(A,C\right)$, $D\left(B,C\right)\preccurlyeq D\left(A,C\right)$ ;

3) If $\rho \left(kx,ky\right)=k\rho \left(x,y\right)$ where $\rho ={\varphi }_{0},\psi ,{\varphi }_{i},{\phi }_{i},k\in {ℝ}^{+}$, then $D\left(kA,kB\right)=kD\left(A,B\right)$ .

Proposition 3.3. $A={\left({a}_{1}/{a}_{2}/{a}_{3}/{a}_{4}\right)}_{LR}$ and $B={\left({b}_{1}/{b}_{2}/{b}_{3}/{b}_{4}\right)}_{LR}$ satisfy $A\preccurlyeq B$ w.r.t. $\left({D}_{c},P\right)$ if and only if ${a}_{2}+{a}_{3}\le {b}_{2}+{b}_{3}$, ${a}_{3}-{a}_{2}\le {b}_{3}-{b}_{2}$, ${a}_{2}-{a}_{1}\le {b}_{2}-{b}_{1}$, ${a}_{4}-{a}_{3}\le {b}_{4}-{b}_{3}$ .
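Proposition 3.3 gives a purely corner-based test for the partial order, which is easy to sketch in code (the 4-tuple representation is our own convention):

```python
# Sketch of Proposition 3.3: A precedes B under the partial order if and
# only if four inequalities hold on the corners (a1, a2, a3, a4), (b1, b2, b3, b4).

def precedes(a, b):
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a2 + a3 <= b2 + b3 and      # centers:  D_c A <= D_c B
            a3 - a2 <= b3 - b2 and      # radii:    spr A <= spr B
            a2 - a1 <= b2 - b1 and      # left spreads
            a4 - a3 <= b4 - b3)         # right spreads
```

This is the comparison used later to rank the mean fuzzy errors of competing models.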

Theorem 3.2. Assume that, in Definition 2.6, we choose $q={q}_{0}=1$, take ${q}_{1}\left(\alpha \right)={q}_{2}\left(\alpha \right)=1-\alpha$ (the standard negation), ${\varphi }_{0}\left(x,y\right)=\psi \left(x,y\right)={\left(x-y\right)}^{2}$, and ${\varphi }_{i}\left(x,y\right)={\phi }_{i}\left(x,y\right)={\left({\delta }_{i}-{\delta }_{i-1}\right)}^{2}{\left(x-y\right)}^{2}$ for all $i\in \left\{1,2,\cdots ,n\right\}$ on their respective domains. For $A={\left({a}_{1}/{a}_{2}/{a}_{3}/{a}_{4}\right)}_{LR}$ and $B={\left({b}_{1}/{b}_{2}/{b}_{3}/{b}_{4}\right)}_{LR}$, we define the distance measure $D\left(A,B\right)$ as:

$D\left(A,B\right)={\left({d}_{1}/{d}_{2}/{d}_{3}/{d}_{4}\right)}_{LR}$ (11)

where

$\begin{array}{l}{d}_{1}=\left({a}_{2}-{b}_{2}\right)\left({a}_{3}-{b}_{3}\right)-{\left[\left({a}_{2}-{a}_{1}\right)-\left({b}_{2}-{b}_{1}\right)\right]}^{2}\\ {d}_{2}=\left({a}_{2}-{b}_{2}\right)\left({a}_{3}-{b}_{3}\right)\\ {d}_{3}=\left[{\left({a}_{3}-{b}_{3}\right)}^{2}+{\left({a}_{2}-{b}_{2}\right)}^{2}\right]/2\\ {d}_{4}=\left[{\left({a}_{3}-{b}_{3}\right)}^{2}+{\left({a}_{2}-{b}_{2}\right)}^{2}\right]/2+{\left[\left({a}_{4}-{a}_{3}\right)-\left({b}_{4}-{b}_{3}\right)\right]}^{2}\end{array}$ (12)

Proof. Let $A={\left({a}_{1}/{a}_{2}/{a}_{3}/{a}_{4}\right)}_{LR}$ and $B={\left({b}_{1}/{b}_{2}/{b}_{3}/{b}_{4}\right)}_{LR}$ . We deduce the result from (9) and (10). First, notice that

${q}_{0}{\varphi }_{0}\left({D}_{c}A,{D}_{c}B\right)={\left({D}_{c}A-{D}_{c}B\right)}^{2}={\left[\left({a}_{2}+{a}_{3}\right)-\left({b}_{2}+{b}_{3}\right)\right]}^{2}/4$ (13)

$q\psi \left(sprA,sprB\right)={\left(sprA-sprB\right)}^{2}={\left[\left({a}_{3}-{a}_{2}\right)-\left({b}_{3}-{b}_{2}\right)\right]}^{2}/4$ (14)

On the other hand, by (7)

$\begin{array}{c}{\varphi }_{i}\left(\Delta {\underset{_}{a}}_{i},\Delta {\underset{_}{b}}_{i}\right)={\left({\delta }_{i}-{\delta }_{i-1}\right)}^{2}{\left(\Delta {\underset{_}{a}}_{i}-\Delta {\underset{_}{b}}_{i}\right)}^{2}\\ ={\left[\left({a}_{2}-{a}_{1}\right)-\left({b}_{2}-{b}_{1}\right)\right]}^{2}{\left[{L}^{-1}\left({\delta }_{i}\right)-{L}^{-1}\left({\delta }_{i-1}\right)\right]}^{2}\end{array}$ (15)

Similarly by (8)

${\phi }_{i}\left(\Delta {\stackrel{¯}{a}}_{i},\Delta {\stackrel{¯}{b}}_{i}\right)={\left[\left({a}_{4}-{a}_{3}\right)-\left({b}_{4}-{b}_{3}\right)\right]}^{2}{\left[{R}^{-1}\left({\delta }_{i-1}\right)-{R}^{-1}\left({\delta }_{i}\right)\right]}^{2}$ (16)

Then we have

$\begin{array}{c}{\underset{_}{D\left(A,B\right)}}_{\alpha }={\left[\left({a}_{2}+{a}_{3}\right)-\left({b}_{2}+{b}_{3}\right)\right]}^{2}/4-{\left[\left({a}_{3}-{a}_{2}\right)-\left({b}_{3}-{b}_{2}\right)\right]}^{2}/4\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-{q}_{1}\left(\alpha \right){\left[\left({a}_{2}-{a}_{1}\right)-\left({b}_{2}-{b}_{1}\right)\right]}^{2}\sum _{i=1}^{n}{\left[{L}^{-1}\left({\delta }_{i}\right)-{L}^{-1}\left({\delta }_{i-1}\right)\right]}^{2}\end{array}$ (17)

$\begin{array}{c}{\stackrel{¯}{D\left(A,B\right)}}_{\alpha }={\left[\left({a}_{2}+{a}_{3}\right)-\left({b}_{2}+{b}_{3}\right)\right]}^{2}/4+{\left[\left({a}_{3}-{a}_{2}\right)-\left({b}_{3}-{b}_{2}\right)\right]}^{2}/4\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{q}_{2}\left(\alpha \right){\left[\left({a}_{4}-{a}_{3}\right)-\left({b}_{4}-{b}_{3}\right)\right]}^{2}\underset{i=1}{\overset{n}{\sum }}{\left[{R}^{-1}\left({\delta }_{i-1}\right)-{R}^{-1}\left({\delta }_{i}\right)\right]}^{2}\end{array}$ (18)

Finally, by (17)-(18) we have

$\begin{array}{l}{d}_{1}={\underset{_}{D\left(A,B\right)}}_{0}=\left({a}_{2}-{b}_{2}\right)\left({a}_{3}-{b}_{3}\right)-{\left[\left({a}_{2}-{a}_{1}\right)-\left({b}_{2}-{b}_{1}\right)\right]}^{2}\\ {d}_{2}={\underset{_}{D\left(A,B\right)}}_{1}=\left({a}_{2}-{b}_{2}\right)\left({a}_{3}-{b}_{3}\right)\\ {d}_{3}={\overline{D\left(A,B\right)}}_{1}=\left[{\left({a}_{3}-{b}_{3}\right)}^{2}+{\left({a}_{2}-{b}_{2}\right)}^{2}\right]/2\\ {d}_{4}={\overline{D\left(A,B\right)}}_{0}=\left[{\left({a}_{3}-{b}_{3}\right)}^{2}+{\left({a}_{2}-{b}_{2}\right)}^{2}\right]/2+{\left[\left({a}_{4}-{a}_{3}\right)-\left({b}_{4}-{b}_{3}\right)\right]}^{2}\end{array}$ (19)
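The corner formula (12) translates directly into code. A minimal sketch, with each fuzzy number represented by its corner 4-tuple:

```python
# Sketch of the distance measure (11)-(12) between two LRFNs, each given
# by its corner 4-tuple (a1, a2, a3, a4).

def fuzzy_distance(a, b):
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    d2 = (a2 - b2) * (a3 - b3)                 # kernel term
    d1 = d2 - ((a2 - a1) - (b2 - b1)) ** 2     # minus left-spread mismatch
    d3 = ((a3 - b3) ** 2 + (a2 - b2) ** 2) / 2
    d4 = d3 + ((a4 - a3) - (b4 - b3)) ** 2     # plus right-spread mismatch
    return (d1, d2, d3, d4)                    # D(A,B) = (d1/d2/d3/d4)_LR
```

Note that D(A, A) = (0/0/0/0), the fuzzy zero, as required by property 1) of Proposition 3.1.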

4. The Fuzzy Regression Procedure Based on LRFN

In this section, the regression methodology that minimizes the mean fuzzy error as the objective function is introduced. The regression parameters are obtained by the least squares method based on Theorem 3.2, and Proposition 3.3 is used to determine which model has the least fuzzy error.

Let $\stackrel{˜}{X}={\left({\stackrel{˜}{x}}_{1},\cdots ,{\stackrel{˜}{x}}_{N}\right)}^{\prime }$ be a random vector with N LR fuzzy inputs. Taking the four corners of each fuzzy input as explanatory variables, we obtain a crisp random vector ${X}^{\prime }=\left({x}_{1,1},{x}_{1,2},{x}_{1,3},{x}_{1,4},\cdots ,{x}_{N,1},{x}_{N,2},{x}_{N,3},{x}_{N,4}\right)$, which we relabel as $X=\left({x}_{1},{x}_{2},{x}_{3},\cdots ,{x}_{4N}\right)$ . The LR fuzzy response variable is $\stackrel{˜}{Y}={\left({y}_{1}/{y}_{2}/{y}_{3}/{y}_{4}\right)}_{LR}$ (suppose there are n samples). We will analyze the relationship between $\stackrel{˜}{Y}$ and X. The regression model we consider can be formalized as:

$\stackrel{˜}{Y}={\left({\stackrel{^}{y}}_{1}+{\epsilon }_{1}/{\stackrel{^}{y}}_{2}+{\epsilon }_{2}/{\stackrel{^}{y}}_{3}+{\epsilon }_{3}/{\stackrel{^}{y}}_{4}+{\epsilon }_{4}\right)}_{LR},$ (20)

where ${\epsilon }_{1},{\epsilon }_{2},{\epsilon }_{3},{\epsilon }_{4}$ are the residuals (i.e., real-valued random variables such that $E\left[{\epsilon }_{1}|X\right]=E\left[{\epsilon }_{2}|X\right]=E\left[{\epsilon }_{3}|X\right]=E\left[{\epsilon }_{4}|X\right]=0$ ) and the estimated variable $\stackrel{^}{\stackrel{˜}{Y}}={\left({\stackrel{^}{y}}_{1}/{\stackrel{^}{y}}_{2}/{\stackrel{^}{y}}_{3}/{\stackrel{^}{y}}_{4}\right)}_{LR}$ is formalized as:

$\begin{array}{l}{\stackrel{^}{y}}_{1}={b}_{0,1}+\underset{i=1}{\overset{4N}{\sum }}\text{ }{b}_{i,1}{x}_{i},\text{ }{\stackrel{^}{y}}_{2}={b}_{0,2}+\underset{i=1}{\overset{4N}{\sum }}\text{ }{b}_{i,2}{x}_{i},\\ {\stackrel{^}{y}}_{3}={b}_{0,3}+\underset{i=1}{\overset{4N}{\sum }}\text{ }{b}_{i,3}{x}_{i},\text{ }{\stackrel{^}{y}}_{4}={b}_{0,4}+\underset{i=1}{\overset{4N}{\sum }}\text{ }{b}_{i,4}{x}_{i},\end{array}$ (21)

where ${b}_{0,k},{b}_{i,k}$ ( $k=1,2,3,4$ indexes the kth corner) are the regression parameters for ${\stackrel{^}{y}}_{1},{\stackrel{^}{y}}_{2},{\stackrel{^}{y}}_{3}$ and ${\stackrel{^}{y}}_{4}$, respectively.
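Since each corner in (21) is an ordinary linear function of the crisp regressors, the parameters can be sketched as four ordinary least squares fits, one per corner. The matrix layout and names below are our own illustration, not the paper's code:

```python
# Sketch of (21): fit each corner of the fuzzy response as a separate
# linear function of the crisp regressors (the corners of the fuzzy inputs).
import numpy as np

def fit_corners(X, Y):
    """X: (n, 4N) crisp regressor matrix; Y: (n, 4) matrix of response corners.
    Returns a (4N + 1, 4) coefficient matrix: row 0 holds the intercepts
    b_{0,k}, the remaining rows hold b_{i,k}, one column per corner k."""
    Xd = np.column_stack([np.ones(len(X)), X])      # prepend intercept column
    coef, *_ = np.linalg.lstsq(Xd, Y, rcond=None)   # OLS, solved per column
    return coef

# Toy data (illustrative): one regressor, each corner exactly linear in x.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
Y = np.column_stack([1 + 2 * X[:, 0], 2 + 2 * X[:, 0],
                     3 + 2 * X[:, 0], 4 + 2 * X[:, 0]])
coef = fit_corners(X, Y)
```

On this exact toy data the fit recovers intercepts (1, 2, 3, 4) and a common slope 2 for all four corners.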

Considering the distance measure D defined in (11), we minimize the distance between the observed and estimated values. In other words, we use the mean fuzzy error $\epsilon$ as the objective function, formalized as:

$\mathrm{min}:\epsilon =\frac{1}{n}\underset{j=1}{\overset{n}{\sum }}\text{ }D\left({\stackrel{˜}{Y}}_{j},{\stackrel{^}{\stackrel{˜}{Y}}}_{j}\right)$ (22)

that is, we look for a function $\stackrel{^}{\stackrel{˜}{Y}}={\left({\stackrel{^}{y}}_{1}/{\stackrel{^}{y}}_{2}/{\stackrel{^}{y}}_{3}/{\stackrel{^}{y}}_{4}\right)}_{LR}$ such that the mean fuzzy error $\epsilon$ in (22) is as small as possible w.r.t. the partial order $\preccurlyeq$ on ${F}_{LR}$ . If the objective regression function $\stackrel{^}{\stackrel{˜}{Y}}$ is given by (21), then the mean fuzzy error (23) is:

$\begin{array}{l}\mathrm{min}:\epsilon =\frac{1}{n}\underset{j=1}{\overset{n}{\sum }}{\left({\epsilon }_{j,1}/{\epsilon }_{j,2}/{\epsilon }_{j,3}/{\epsilon }_{j,4}\right)}_{LR}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}={\left(\frac{1}{n}\underset{j=1}{\overset{n}{\sum }}\text{ }{\epsilon }_{j,1}/\frac{1}{n}\underset{j=1}{\overset{n}{\sum }}\text{ }{\epsilon }_{j,2}/\frac{1}{n}\underset{j=1}{\overset{n}{\sum }}\text{ }{\epsilon }_{j,3}/\frac{1}{n}\underset{j=1}{\overset{n}{\sum }}\text{ }{\epsilon }_{j,4}\right)}_{LR}\end{array}$ (23)

where

${\epsilon }_{j,1}={\underset{_}{D\left(\stackrel{˜}{Y},\stackrel{^}{\stackrel{˜}{Y}}\right)}}_{j,0}=\left({y}_{j,2}-{\stackrel{^}{y}}_{j,2}\right)\left({y}_{j,3}-{\stackrel{^}{y}}_{j,3}\right)-{\left[\left({y}_{j,2}-{y}_{j,1}\right)-\left({\stackrel{^}{y}}_{j,2}-{\stackrel{^}{y}}_{j,1}\right)\right]}^{2}$

${\epsilon }_{j,2}={\underset{_}{D\left(\stackrel{˜}{Y},\stackrel{^}{\stackrel{˜}{Y}}\right)}}_{j,1}=\left({y}_{j,2}-{\stackrel{^}{y}}_{j,2}\right)\left({y}_{j,3}-{\stackrel{^}{y}}_{j,3}\right)$

${\epsilon }_{j,3}={\stackrel{¯}{D\left(\stackrel{˜}{Y},\stackrel{^}{\stackrel{˜}{Y}}\right)}}_{j,1}=\left[{\left({y}_{j,3}-{\stackrel{^}{y}}_{j,3}\right)}^{2}+{\left({y}_{j,2}-{\stackrel{^}{y}}_{j,2}\right)}^{2}\right]/2$

$\begin{array}{l}{\epsilon }_{j,4}={\stackrel{¯}{D\left(\stackrel{˜}{Y},\stackrel{^}{\stackrel{˜}{Y}}\right)}}_{j,0}=\left[{\left({y}_{j,3}-{\stackrel{^}{y}}_{j,3}\right)}^{2}+{\left({y}_{j,2}-{\stackrel{^}{y}}_{j,2}\right)}^{2}\right]/2\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\left[\left({y}_{j,4}-{y}_{j,3}\right)-\left({\stackrel{^}{y}}_{j,4}-{\stackrel{^}{y}}_{j,3}\right)\right]}^{2}\end{array}$

As a consequence, we must minimize each component to obtain the optimal solution. In summary, ${y}_{1},{y}_{2},{y}_{3},{y}_{4}$, the four corners of $\stackrel{˜}{Y}$, can be related to ${x}_{i}\left(i=1,2,\cdots ,4N\right)$ .

For each of the possible combinations of ${y}_{1},{y}_{2},{y}_{3},{y}_{4}$ we calculate a mean fuzzy error using the previous equation. Finally, we sort the fuzzy models using the partial order $\preccurlyeq$ and, for the optimal solution of the fuzzy regression problem, we choose the fuzzy model with the lowest fuzzy error.

The main purpose of this paper is to develop an LR fuzzy methodology that is both easy to understand and powerful. Accordingly, we use the stepwise linear regression method to solve the problem. Stepwise regression is a method of fitting regression models in which the choice of predictive variables is carried out by an automatic procedure: before each new variable is introduced, unimportant explanatory variables are eliminated so that only significant variables remain in the regression equation. The final set of explanatory variables is optimal. We propose to consider models obtained by this approach in the solution of the fuzzy problem.

Applying the previous methodology, we obtain the fuzzy regression models $\stackrel{^}{\stackrel{˜}{Y}}$ . To evaluate the goodness-of-fit of the different models, we consider the following two criteria:

1) Mean fuzzy error $\epsilon$ given by (23);

2) $SSE=\frac{1}{4}{\sum }_{k=1}^{4}{\sum }_{j=1}^{n}{\left({y}_{j,k}-{\stackrel{^}{y}}_{j,k}\right)}^{2}$ .
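Both criteria can be sketched in code given the observed and estimated corners; the `fuzzy_distance` helper restates the corner formula (12) so the snippet is self-contained:

```python
# Sketch of the two goodness-of-fit criteria: the mean fuzzy error (23)
# and the crisp SSE from 2). Fuzzy numbers are corner 4-tuples.
import numpy as np

def fuzzy_distance(a, b):
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    d2 = (a2 - b2) * (a3 - b3)
    d1 = d2 - ((a2 - a1) - (b2 - b1)) ** 2
    d3 = ((a3 - b3) ** 2 + (a2 - b2) ** 2) / 2
    d4 = d3 + ((a4 - a3) - (b4 - b3)) ** 2
    return np.array([d1, d2, d3, d4])

def mean_fuzzy_error(Y_obs, Y_hat):
    """(23): componentwise average of the fuzzy distances over the n samples."""
    return np.mean([fuzzy_distance(y, yh) for y, yh in zip(Y_obs, Y_hat)], axis=0)

def sse(Y_obs, Y_hat):
    """2): SSE = (1/4) * sum over corners k and samples j of squared errors."""
    return float(np.sum((np.asarray(Y_obs) - np.asarray(Y_hat)) ** 2) / 4)
```

The mean fuzzy error is itself an LRFN (a corner 4-vector here), so competing models are ranked with the partial order of Proposition 3.3; SSE is a single crisp number.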

5. Numerical Example

Example 1. Triangular fuzzy observations

In this example, the fuzzy input-output data from Sakawa  are used. One explanatory variable and the dependent variable for all observations are represented as triangular fuzzy numbers, as listed in the left part of Table 1.

We use these data to regress the fuzzy response variable $\stackrel{˜}{Y}={\left({y}_{1}/{y}_{2}/{y}_{4}\right)}_{T}$ on the fuzzy explanatory variable $\stackrel{˜}{X}={\left({x}_{1}/{x}_{2}/{x}_{4}\right)}_{T}$ . This problem reduces to one that can be solved with the methodology described in the previous sections by considering the crisp random vector $X=\left({x}_{1},{x}_{2},{x}_{4}\right)$ as the explanatory random variable.

In order to find a suitable fuzzy regression model capable of expressing the statistical relationship between $\stackrel{˜}{Y}$ and X, we apply the methodology explained previously, using the stepwise method to select the explanatory variables needed to build the model.

Therefore, the proposed method (PM) with the lowest error is given by

Table 1. Comparison of the estimations errors from various models and criteria for Example 1.

${\stackrel{^}{\stackrel{˜}{Y}}}_{PM}={\left(3.572+{x}_{1}-0.481{x}_{2}/3.572+0.519{x}_{2}/3.572-0.481{x}_{2}+{x}_{4}\right)}_{T}$,

following Sakawa  ’s ( $\alpha =0.5$ ) work, their model (SYM) is constructed as

${\stackrel{^}{\stackrel{˜}{Y}}}_{SY}={\left(3.031/3.20/3.371\right)}_{T}+{\left(0.498/0.579/0.659\right)}_{T}\stackrel{˜}{X}$,

Yang  ’s model (YLM) as

${\stackrel{^}{\stackrel{˜}{Y}}}_{YL}={\left(3.203/3.497/3.788\right)}_{T}+{\left(0.525/0.529/0.534\right)}_{T}\stackrel{˜}{X}$,

Kao  ’s model (KCM) as

${\stackrel{^}{\stackrel{˜}{Y}}}_{KC}=3.565+0.522\stackrel{˜}{X}+{\left(-0.962/-0.011/0.938\right)}_{T}$,

Chen  ’s model (CHM) as

${\stackrel{^}{\stackrel{˜}{Y}}}_{CH}={\left(3.272/3.572/3.872\right)}_{T}+0.519\stackrel{˜}{X}$,

Wu  ’s ( $\alpha =0.6$ ) model (WuM) as

${\stackrel{^}{\stackrel{˜}{Y}}}_{Wu}=3.57+0.5196\stackrel{˜}{X}$,

Modarres  ’s model (MEM) as

${\stackrel{^}{\stackrel{˜}{Y}}}_{ME}={\left(3.278/3.511/3.744\right)}_{T}+{\left(0.544/0.553/0.562\right)}_{T}\stackrel{˜}{X}$ .

Figure 1 depicts the observed values and the fuzzy estimated values of each model. The X axis represents the index of the input, not the input value. The Y axis represents the value of the output $\stackrel{^}{\stackrel{˜}{Y}}={\left({\stackrel{^}{y}}_{1}/{\stackrel{^}{y}}_{2}/{\stackrel{^}{y}}_{4}\right)}_{T}$ . In view of the fact that a fuzzy number cannot be represented as a crisp point in a coordinate system, we use a line segment with three vertices to represent a fuzzy number, in which the segment from top to bottom represents the right support value ( ${\stackrel{^}{y}}_{4}$ ), the

Figure 1. The predictive value of each model by Example 1.

kernel value ( ${\stackrel{^}{y}}_{2}$ ) and the left support value ( ${\stackrel{^}{y}}_{1}$ ), respectively. Therefore, the closer the graph of a model is to the output, the better its fitting effect. The right part of Table 1 also lists the estimation errors of the earlier models based on the two criteria, $\epsilon$ and SSE. Ranking $\epsilon$ by Proposition 3.3 and comparing SSE, the table indicates that the performance of the proposed approach is satisfactory among the models in terms of total estimation error under both criteria.

Example 2. One crisp explanatory variable and non-triangular fuzzy response variable

From Example 1, it was found that when the data were single-variable symmetric triangular fuzzy numbers, the performance of these models was similar, except for SYM. So a few changes were made to the data of this example: the input was a single-variable real number and the output was an LR fuzzy number.

Since SYM performed poorly in Example 1, it is no longer included in the comparison.

Following Yang  ’s work, their model is constructed as

${\stackrel{^}{\stackrel{˜}{Y}}}_{YL}={\left(3.185/3.850/4.516\right)}_{LR}+{\left(0.919/0.924/0.928\right)}_{LR}X$,

in addition, Wu and Chen, respectively, built their fuzzy regression models as

${\stackrel{^}{\stackrel{˜}{Y}}}_{Wu}=3.85+0.924X,\left(\alpha =0.6\right)$,

${\stackrel{^}{\stackrel{˜}{Y}}}_{CH}=0.924X+{\left(3.15/3.85/4.55\right)}_{LR}$,

the model of Kao [12] as

${\stackrel{^}{\stackrel{˜}{Y}}}_{KC}=3.806+0.927X+{\left(-0.681/0.019/0.719\right)}_{LR}$,

and the proposed method’s (PM) model as

${\stackrel{^}{\stackrel{˜}{Y}}}_{PM}={\left(3.185+0.919x/3.85+0.924x/4.516+0.928x\right)}_{LR}$ .
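Because the input in this example is crisp, the PM estimate reduces to evaluating three ordinary linear expressions, one per component of the LR fuzzy output. The sketch below does exactly that with the coefficients quoted above; the input value is illustrative, not one of the paper's observations.

```python
# PM estimate for Example 2: each component of the LR fuzzy output
# is a separate linear function of the crisp input x.
def pm_estimate(x):
    y1 = 3.185 + 0.919 * x   # left support value
    y2 = 3.850 + 0.924 * x   # kernel value
    y4 = 4.516 + 0.928 * x   # right support value
    return (y1, y2, y4)

print(pm_estimate(10.0))
```

Since the left-support slope is the smallest and the right-support slope the largest, the spread of the estimate grows with x, so the components stay correctly ordered for non-negative inputs.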

It can be seen from Figure 2 that the estimated values of the several models are almost the same, and the right part of Table 2 shows that PM, YLM, CHM and KCM have almost the same fuzzy error, while the fuzzy error of WuM is relatively large. Moreover, the estimated value of Wu’s model is still crisp in this case, presumably because the parameters of WuM are crisp. The parameters of the PM and YLM models are all fuzzy numbers, so these models can be applied more widely.

Example 3. Two explanatory variables; explanatory and response variables both asymmetrical

Since the input of the first two examples was a single variable, two independent variables were taken in this example. The input and output were triangular fuzzy numbers, and the numerical difference between the two independent variables was large. The data set consists of 15 fuzzy observations with two fuzzy explanatory variables ${\stackrel{˜}{X}}_{1}={\left({x}_{1,1}/{x}_{1,2}/{x}_{1,4}\right)}_{T}$, ${\stackrel{˜}{X}}_{2}={\left({x}_{2,1}/{x}_{2,2}/{x}_{2,4}\right)}_{T}$ and one fuzzy response variable $\stackrel{˜}{Y}$, and is listed in the left part of Table 3, from Chen [13].

Figure 2. The predictive value of each model by Example 2.

Table 2. Comparison of the estimation errors from various models and criteria for Example 2.

Table 3. Comparison of the estimation errors from various models and criteria for Example 3.

In Example 2, the YLM, CHM and PM models had the same fuzzy error, while KCM had a larger one; KCM was therefore not as good as PM and was removed from the comparison. Following the work of Yang and Lin [7], their model is constructed as

${\stackrel{^}{\stackrel{˜}{Y}}}_{YL}=12.726+0.49{\stackrel{˜}{X}}_{1}+0.007{\stackrel{˜}{X}}_{2}$,

in addition, Wu [6] and Chen [13], respectively, built up their fuzzy regression models as

${\stackrel{^}{\stackrel{˜}{Y}}}_{Wu}=3.453+0.496{\stackrel{˜}{X}}_{1}+0.009{\stackrel{˜}{X}}_{2},\left(\alpha =1\right)$,

${\stackrel{^}{\stackrel{˜}{Y}}}_{CH}=0.507{\stackrel{˜}{X}}_{1}+0.009{\stackrel{˜}{X}}_{2}+{\left(-18.167/0.06/10.592\right)}_{T}$ .

We use these data to regress the fuzzy response variable $\stackrel{˜}{Y}$ on the fuzzy explanatory variables ${\stackrel{˜}{X}}_{1}={\left({x}_{1,1}/{x}_{1,2}/{x}_{1,4}\right)}_{T}$, ${\stackrel{˜}{X}}_{2}={\left({x}_{2,1}/{x}_{2,2}/{x}_{2,4}\right)}_{T}$ . The problem reduces to one solvable with the methodology described in the previous sections by treating the crisp random vector $X=\left({x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5},{x}_{6}\right)$ as the explanatory random variable. We then used the stepwise method to select the explanatory variables.
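The reduction just described can be sketched as follows: the six crisp components of the two fuzzy inputs are stacked into one design matrix, and each component of the fuzzy output is then regressed on it by ordinary least squares. This is a minimal illustration of the idea with synthetic data, not the paper's estimation procedure, which additionally applies stepwise variable selection.

```python
import numpy as np

# Stack the six crisp components (x11, x12, x14, x21, x22, x24) of two
# triangular fuzzy inputs into a design matrix, then fit one output
# component by ordinary least squares.
rng = np.random.default_rng(0)
n = 15                                        # 15 observations, as in Example 3
X = rng.uniform(1.0, 10.0, size=(n, 6))       # crisp component vector per observation
X1 = np.column_stack([np.ones(n), X])         # add intercept column

# Hypothetical true coefficients: the kernel of the output depends on a
# few components of the inputs (noiseless, for illustration only).
true_beta = np.array([3.0, 0.5, 0.0, 0.0, 0.2, 0.0, 0.0])
y_kernel = X1 @ true_beta

beta_hat, *_ = np.linalg.lstsq(X1, y_kernel, rcond=None)
print(np.round(beta_hat, 3))
```

The same fit is repeated for the left and right support components, giving three crisp regressions that together form the fuzzy estimate.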

Proposed Method:

${\stackrel{^}{\stackrel{˜}{Y}}}_{PM}={\left({\stackrel{^}{y}}_{1}/{\stackrel{^}{y}}_{2}/{\stackrel{^}{y}}_{4}\right)}_{T}$

where

${\stackrel{^}{y}}_{1}=-1.128+0.345{x}_{2,1}+0.009{x}_{2,2}$,

${\stackrel{^}{y}}_{2}=3.453+0.496{x}_{2,1}+0.009{x}_{2,2}$,

${\stackrel{^}{y}}_{4}=6.596+0.594{x}_{1,1}-0.0009{x}_{2,1}+0.009{x}_{2,2}+0.241{x}_{4,1}$ .

From the perspective of fuzzy error, PM was significantly better than the other models. It can be seen from Figure 3 that there was little difference among the fitted kernel values of these models, but the estimated support values differed considerably from the observed ones: the support values ( ${\stackrel{^}{y}}_{1}$ , ${\stackrel{^}{y}}_{4}$ ) of the estimates from CHM, YLM and WuM were much wider than those of the observations.

Example 4. Real-life data

When the input was a single crisp variable, PM performed about as well as the other models; when the input was fuzzy and multivariate, PM was significantly better than

Figure 3. The predictive value of each model by Example 3.

other models. Thus, it makes sense to check whether our model also performs well when the input is crisp and multivariate. A set of real-life data from Wei [14], listed in Table 4, is adopted in this example to demonstrate the proposed solution approach for the fuzzy regression problem. Here we give a numerical example of the relationship between the heat $\stackrel{˜}{Y}$ released by a certain cement and two chemical components ${X}_{1}$, ${X}_{2}$ .

Following the work of Yang and Lin [7], their model is constructed as

$\begin{array}{l}{\stackrel{^}{\stackrel{˜}{Y}}}_{YL}={\left(49.414/52.577/57.921\right)}_{T}+{\left(1.243/1.468/1.616\right)}_{T}{X}_{1}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\left(0.653/0.662/0.701\right)}_{T}{X}_{2}\end{array}$

Chen [13] built up the fuzzy regression model

${\stackrel{^}{\stackrel{˜}{Y}}}_{CH}=1.854{X}_{1}+0.795{X}_{2}+{\left(38.016/43.324/51.632\right)}_{T}$

In addition, the model of Kao [12] is

${\stackrel{^}{\stackrel{˜}{Y}}}_{KC}=51.394+1.481{X}_{1}+0.674{X}_{2}+{\left(-4.807/0.501/8.808\right)}_{T}$

Proposed Method:

${\stackrel{^}{\stackrel{˜}{Y}}}_{PM}={\left({\stackrel{^}{y}}_{1}/{\stackrel{^}{y}}_{2}/{\stackrel{^}{y}}_{4}\right)}_{T}$

where

${\stackrel{^}{y}}_{1}=48.998+1.237{X}_{1}+0.663{X}_{2}$

${\stackrel{^}{y}}_{2}=52.577+1.468{X}_{1}+0.663{X}_{2}$

${\stackrel{^}{y}}_{4}=57.921+1.616{X}_{1}+0.701{X}_{2}$

Table 5 lists the estimation errors of the earlier models based on the two criteria, $\epsilon$ and SSE, and Figure 4 depicts the observed values and the fuzzy estimated values of each model. As in Example 2, the input of this example is crisp, and the mean fuzzy error $\epsilon$ and SSE of our model are slightly greater than or equal to those of YLM. What differs from Example 2 is that this example has two independent variables, which distinguishes YLM and PM from CHM and KCM: the errors of CHM and KCM are clearly larger than those of YLM and PM.
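Both criteria can be computed once a distance between fuzzy numbers is fixed. The sketch below uses Diamond's squared distance between triangular fuzzy numbers (the sum of squared differences of the left, kernel and right values) as a stand-in; the paper's $\epsilon$ is based on its own fuzzy distance measure, so the data here are illustrative placeholders that only show the mechanics of an SSE-style comparison.

```python
# SSE between observed and estimated triangular fuzzy numbers using
# Diamond's squared distance:
#   d^2(A, B) = (a1 - b1)^2 + (a2 - b2)^2 + (a4 - b4)^2
def diamond_sq(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def sse(observed, estimated):
    return sum(diamond_sq(o, e) for o, e in zip(observed, estimated))

obs = [(3.5, 4.0, 4.5), (5.0, 5.5, 6.0)]   # illustrative observations
est = [(3.4, 4.1, 4.4), (5.2, 5.5, 5.9)]   # illustrative estimates
print(sse(obs, est))
```

Models can then be ranked by comparing these totals, exactly as the tables in this section compare $\epsilon$ and SSE across YLM, CHM, KCM and PM.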

Table 4. Crisp input and fuzzy output data set for Example 4.

Table 5. Comparison of the estimation errors from various models and criteria for Example 4.

Figure 4. The predictive value of each model by Example 4.

6. Conclusion

The previous four types of examples suggest that when the data structure is simple (in Example 1 the data were single-variable symmetric triangular fuzzy numbers; in Example 2 the input was a single crisp variable and the output an LR fuzzy number; in Example 4 the input was multivariate and crisp and the output an LR fuzzy number), PM was not significantly better than the other models, but neither was it inferior to them. When the data structure is complex, as in Example 3, where the input was a multivariate LR fuzzy number and the output was also an LR fuzzy number, PM was significantly better than the other models. This may be because the previous models did not account for the influence of the left and right values of the input on the kernel value of the output, or of the kernel value of the input on the left and right values of the output. In practice, more than one factor usually affects the dependent variable, which suggests that our model has a wider range of application.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Deng, J. and Lu, Q.J. (2018) Fuzzy Regression Model Based on Fuzzy Distance Measure. Journal of Data Analysis and Information Processing, 6, 126-140. https://doi.org/10.4236/jdaip.2018.63008

References

1. Zadeh, L.A. (1965) Fuzzy Sets. Information and Control, 8, 338-353. https://doi.org/10.1016/S0019-9958(65)90241-X

2. Dubois, D. and Prade, H. (1978) Operations on Fuzzy Numbers. International Journal of Systems Science, 9, 613-626. https://doi.org/10.1080/00207727808941724

3. Tanaka, H., Uejima, S. and Asai, K. (1982) Linear Regression Analysis with Fuzzy Model. IEEE Transactions on Systems, Man and Cybernetics, 12, 903-907. https://doi.org/10.1109/TSMC.1982.4308925

4. Sakawa, M. and Yano, H. (1992) Multiobjective Fuzzy Linear Regression Analysis for Fuzzy Input-Output Data. Fuzzy Sets and Systems, 63, 191-206. https://doi.org/10.1016/0165-0114(92)90175-4

5. Diamond, P. (1988) Fuzzy Least Squares. Information Sciences, 46, 141-157. https://doi.org/10.1016/0020-0255(88)90047-3

6. Wu, H.C. (2003) Linear Regression Analysis for Fuzzy Input and Output Data Using the Extension Principle. Computers and Mathematics with Applications, 45, 1849-1859. https://doi.org/10.1016/S0898-1221(03)90006-X

7. Yang, M.S. and Lin, T.S. (2002) Fuzzy Least-Squares Linear Regression Analysis for Fuzzy Input-Output Data. Fuzzy Sets and Systems, 126, 389-399. https://doi.org/10.1016/S0165-0114(01)00066-5

8. Modarres, M., Nasrabadi, E. and Nasrabadi, M.M. (2008) Fuzzy Linear Regression Analysis from the Point of View of Risk. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 12, 635-649. https://doi.org/10.1142/S0218488504003120

9. Aguilar-Peña, C. and Martínez-Moreno, J. (2016) A Family of Fuzzy Distance Measures of Fuzzy Numbers. Soft Computing, 20, 237-250. https://doi.org/10.1007/s00500-014-1497-0

10. Alfonso, G., Hierro, A.F.R.L.D. and Roldán, C. (2017) A Fuzzy Regression Model Based on Finite Fuzzy Numbers and Its Application to Real-World Financial Data. Journal of Computational and Applied Mathematics, 318, 47-58. https://doi.org/10.1016/j.cam.2016.12.001

11. Hierro, A.F.R.L.D., Martínez-Moreno, J. and Aguilar-Peña, C. (2016) Estimation of a Fuzzy Regression Model Using Fuzzy Distances. IEEE Transactions on Fuzzy Systems, 24, 344-359. https://doi.org/10.1109/TFUZZ.2015.2455533

12. Kao, C. and Chyu, C.L. (2003) Least-Squares Estimates in Fuzzy Regression Analysis. European Journal of Operational Research, 148, 426-435. https://doi.org/10.1016/S0377-2217(02)00423-X

13. Chen, L.H. and Hsueh, C.C. (2009) Fuzzy Regression Models Using the Least-Squares Method Based on the Concept of Distance. IEEE Transactions on Fuzzy Systems, 17, 1259-1272. https://doi.org/10.1109/TFUZZ.2009.2026891

14. Wei, L. and Liu, R. (2002) Multivariate Linear Least Squares Regression Coefficient of Unsymmetrical Exponential Fuzzy Numbers. Journal of Ningxia University (Natural Edition), 23, 9-14.