Theoretical Economics Letters, 2011, 1, 21-27
doi:10.4236/tel.2011.12006 Published Online August 2011 (http://www.scirp.org/journal/tel)
Copyright © 2011 SciRes. TEL
Decentralized Policy in Resolving Environmental Disputes
with Private Information
Huei-Chin Lin
Department of Economics, National Dong Hwa University, Hualien, Taiwan, China
E-mail: hclin@mail.ndhu.edu.tw
Received July 3, 2011; revised August 1, 2011; accepted August 7, 2011
Abstract
We design a private-information game that incorporates independent experts’ assistance. With the better information provided by the experts, the mistrust of the uninformed party may be dissolved, and an effective and efficient resolution outcome may be reached. We investigate the conditions under which the experts’ information helps the economy reach an efficient outcome or an effective resolution.
Keywords: Conflict Resolution, Extensive-Form Game, Implementation, Strategic Information Transmission,
Asymmetric Information
1. Introduction
For the last few decades, environmental regulation has been one of the major issues of government policy and international collaboration. Countless conferences have been held to address the global warming (or climate change) problem since the adoption of the Kyoto Protocol in 1997. The protocol has the power to commit the signing countries to reducing GHG (greenhouse gas) emissions. However, it did not enter into force until February 16, 2005, and, unfortunately, it will expire at the end of 2012. A new framework for GHG reduction needs to be negotiated before 2012, but after the failed attempt at the 2009 Copenhagen Summit, a gap between the Kyoto Protocol and any new commitment is inevitable. Reaching an agreement is difficult and time-consuming in almost every international conference: there are conflicts between economic interests and concerns about the environment, global or otherwise, as well as political struggles between interest groups within each country.
Climate change has devastating effects on many aspects of our economic and political life. In addition to rising sea levels, glacier retreat and disappearance, and life-threatening storms and heat waves, extreme weather also damages not only the agricultural sector but also other industries that depend on a stable supply of natural resources, such as water for processing and cleaning. Apart from climate change, many countries also face civil disputes over resource redistribution and local environmental issues, such as who is entitled to what rights, or who should get more.
Environmental disputes may even cause political instability, depending on their scope and scale, and subsequently create economic problems. A dispute is inevitably inefficient because of the time, energy, and resources it wastes; if it remains unresolved, the loss can be large. Many disputes are unresolved simply because of distrust caused by private information. Even when an agreement is reached, distrust may lengthen the process, causing inefficiency.
To eliminate this type of inefficiency, Moore and Repullo [1] implement subgame perfect outcomes, and Abreu and Sen [2] modify and extend the M-R model to a wider range of subgame perfect implementation problems. Subgame perfect implementation, however, loses its power when information is incomplete, and disputes caused by private information usually fall under the incomplete information framework. Within that framework, Baliga [3] and Bergin and Sen [4] provide sufficient and necessary conditions for sequential equilibrium implementation. However, their models are rather restrictive in their applications and do not address the specific informational restrictions (e.g., highly asymmetric and hard-to-verify information) that apply to environmental disputes and conflict resolution.
We modify the analytical framework and conditions, extending them to conflict resolution with independent experts serving as the verification mechanism and as a deterrent against cheating. In reality, civil courts resolve many such disputes, and distrust is the main reason the parties file suit in the first place. In court, expert witnesses are used to verify information and evidence, and the case is settled in or out of court. Since environmental disputes involve complicated technical details, the resolution outcome requires experts’ verification. We devise a mechanism that closely resembles court practice and uses a third-party expert as a credible threat to force the would-be deceiver to reveal the true information, so that the dispute can be resolved “out of court” without losing efficiency. This mechanism may apply to international dispute cases, including the negotiation of GHG reduction commitments.
2. The Notation and Definitions
Suppose, in the dispute, only one involved party has pri-
vate information (for example, he can afford to abate
more pollution) that he tries to conceal in order to gain a
more favorable outcome. We denote this private information as the type of nature, θ ∈ Θ. Let Θ denote the finite feasible type profile, Θ = {θ_1, ..., θ_k}, where k is the total number of mutually exclusive possible types. We assume only player j knows the type; that is, only player j can distinguish θ_l from θ_m for all 1 ≤ l, m ≤ k and all l ≠ m. In this one-sided incomplete information game, the sequential equilibrium can be defined through the following definitions.
Definition: An assessment in an extensive form game is a pair (σ, μ), where σ is the strategy, σ: H → A, H is the set of possible histories of moves, h ∈ H, and A is the action space, a ∈ A. Player i's strategy is σ_i(a_i | h, θ), and her belief system is μ(θ | h, σ).
Definition PR: In a game of perfect recall (PR), no player ever forgets any information he knew earlier or any action he has chosen previously.
Definition SR: In an extensive form game with perfect recall, an assessment (σ, μ) is sequentially rational (SR) if, for every player i ∈ N and every history of information set h ∈ H,
E[u_i(σ_i, σ_{-i} | h, θ, μ)] ≥ E[u_i(σ'_i, σ_{-i} | h, θ, μ)],
for all strategy profiles σ'_i, where u is the payoff function.
Definition BR: The belief system is updated by using Bayes' rule (BR): for all i, j ∈ N and a ∈ A, if there exists σ' with σ'_i > 0 and μ(θ' | h) > 0, then for all σ_j(a_j | h, θ'),
μ(θ' | h, a) = μ(θ' | h) σ_j(a_j | h, θ') / Σ_θ μ(θ | h) σ_j(a_j | h, θ).
Definition: An assessment (σ, μ) is consistent if there is a sequence of assessments {(σ^n, μ^n)}_{n=1}^∞ that converges to (σ, μ), where σ^n is a perfectly mixed behavior strategy, i.e., σ_i^n(a_i | h) > 0 for all h and a, and μ^n is (uniquely) defined from σ^n by Bayes' rule.
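A minimal numerical sketch of the Bayes'-rule posterior update used in Definition BR and in the consistency definition above; the two types, the strategies, and the numbers are hypothetical illustrations, not values from the paper:

```python
# Hypothetical two-type illustration of the Bayes'-rule update in Definition BR.
# mu[theta] is the prior belief mu(theta | h); sigma_j[theta][a] is player j's
# strategy sigma_j(a_j | h, theta). The posterior follows
#   mu(theta | h, a) = mu(theta | h) * sigma_j(a | h, theta) / sum_t mu(t | h) * sigma_j(a | h, t).

def bayes_update(mu, sigma_j, a):
    """Return the posterior belief over types after observing action a."""
    denom = sum(mu[t] * sigma_j[t][a] for t in mu)
    if denom == 0:            # zero-probability event: Bayes' rule does not apply
        return None
    return {t: mu[t] * sigma_j[t][a] / denom for t in mu}

mu = {"low_cost": 0.5, "high_cost": 0.5}                    # prior over player j's types
sigma_j = {"low_cost": {"abate": 0.9, "pollute": 0.1},      # informed player's strategy
           "high_cost": {"abate": 0.2, "pollute": 0.8}}
print(bayes_update(mu, sigma_j, "abate"))   # {'low_cost': 0.818..., 'high_cost': 0.181...}
```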
Definition SE: An assessment (σ, μ) is a sequential equilibrium (SE) of a finite extensive form game with PR if it is SR and consistent.
In most environmental dispute cases, the information is too technical and complex for the uninformed parties to grasp. However, with an expert's trustworthy verification or the voluntary disclosure of player j, uninformed parties may rule out some types and then update their beliefs. A partitional information structure describes such cases exactly.
Definition for Partitional Information (PI): Information is partitional if player j's type profile can be partitioned into mutually disjoint subsets by player i, such as Π_i = {P_1, ..., P_m}, m ≤ k, where P_l ∩ P_{l'} = ∅ for all l, l' = 1, ..., m with l ≠ l', and ∪_{l=1}^m P_l = Θ. Player i can distinguish P_l from P_{l'}, l ≠ l', but she cannot differentiate the components within the same partition. The prior distribution over the partition is μ_i = (μ_{i1}, ..., μ_{im}).
Definition: Π' is a finer partition (FPI) than Π if Π' is at least as fine as Π, which means that for every P_l ∈ Π and P'_{l'} ∈ Π', either P'_{l'} ⊆ P_l or P'_{l'} ∩ P_l = ∅ is satisfied.
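A small sketch of the refinement test in the FPI definition; the type labels and the two partitions are hypothetical examples chosen only to illustrate the subset-or-disjoint criterion:

```python
# Hypothetical check of the FPI definition: p_fine is at least as fine as p_coarse
# if every cell of p_fine is either contained in, or disjoint from, every cell of p_coarse.

def is_finer(p_fine, p_coarse):
    """Return True if partition p_fine refines partition p_coarse."""
    return all(cell_f <= cell_c or not (cell_f & cell_c)
               for cell_f in p_fine for cell_c in p_coarse)

p_coarse = [{"t1", "t2"}, {"t3", "t4"}]          # player i's initial partition of the types
p_fine = [{"t1"}, {"t2"}, {"t3", "t4"}]          # partition after the expert's report
print(is_finer(p_fine, p_coarse))   # True: the report separates t1 from t2
```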
Economic decisions are made based on the information partition structure available at the time of decision. Since player i cannot distinguish the states within each partition, i.e., θ_l ∈ P_l, the decisions are the same within each P_l.
Definition: The information structure is nonexclusive (NEI) if ∩_{j≠i} Π_j ⊆ Π_i for all i ∈ N, where N = {1, 2, ..., n} and n ≥ 3.
Under the NEI structure, no player has exclusive private information. NEI is a special case of partitional information, and complete information is a special case of the NEI structure, in which the number of elements in each partition is 1.
Information can be renewed when new evidence is introduced by the expert's report. A credence probability is the subjective belief that a player puts on the expert's report. If a player believes the expert's report is sincere, she will make her decision based on the report. A proper reward structure can influence the sincerity of the expert's report; the expert may also care so much about his reputation that he would not be insincere even with little reward. Suppose the reward for the expert's report is v_p(m), where m is the report, m ∈ M, and M is the set of possible reports. Let e denote the expert's effort level, e ∈ {0, 1}. With e = 1, the expert can make a sincere (accurate) report at a cost of c. A risk-neutral expert chooses an effort level to maximize his expected payoff.
Definition S1: A report is called “sincere” if e = 1.
Definition S2: A reward structure is sincerity-inducing (SI) if v_p(m | e = 1) ≥ v_p(m | e), for all e ∈ {0, 1}.
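A minimal sketch of the risk-neutral expert's effort choice under Definitions S1 and S2; the reward values and the effort cost below are illustrative assumptions, not parameters from the paper:

```python
# Hypothetical illustration of a sincerity-inducing (SI) reward structure:
# the risk-neutral expert chooses effort e in {0, 1}; e = 1 produces a sincere
# report at cost c, and v_p[e] is the reward attached to the resulting report.

def expert_effort(v_p, c):
    """Return the effort level that maximizes the expert's expected payoff."""
    payoff = {e: v_p[e] - c * e for e in (0, 1)}
    return max(payoff, key=payoff.get)

v_p = {0: 1.0, 1: 3.0}     # reward for an insincere vs. a sincere report (assumed SI)
c = 0.5                    # cost of the effort needed for a sincere report
print(expert_effort(v_p, c))   # 1: the SI reward makes the sincere report optimal
```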
Suppose player i puts a credence probability, η_i, on the expert's report. The expected payoff of player i is then v_i = η_i E[u_i(σ | h, θ, μ)]. Here we assume η_i is exogenous or predetermined.
Based on the initial information, the uninformed player forms a prior belief about the opponent's strategy. When new information is released, either by the expert's report or by the opponent's voluntary disclosure, the uninformed party uses this new information to form a better strategy against her opponent. This new information is an FPI, which can rule out more improbable type profiles. An updating rule should incorporate this new information, so with a new FPI, updating starts with a new prior (based on the new FPI), while the old prior is discarded.
Considering the refinement of Grossman and Perry [5], we can assign positive probabilities to test the credibility of off-the-equilibrium-path (OTEP) strategies. When the uninformed parties obtain a new FPI with a new prior belief, the old information partition and prior belief system must be discarded; otherwise a credible OTEP deviation might be found with the support of a positive probability on some profiles that do not exist in the new information set. The definition of the new updating rule can be constructed as follows:
Definition (NUR): An updating rule (μ, μ') for a new information structure is credible if the following conditions hold. Let θ ∈ P ∈ Π, θ' ∈ P' ∈ Π', and θ'' ∈ P'' ∈ Π''.
(1) If Π' is at least as fine as Π, then the probability distribution over the partition, i.e. μ, changes to a new prior μ', whose support is contained in the support of μ.
(2) If an unexpected move a' (an OTEP move) occurs, and there exists a set K ⊆ Θ (or K = Θ if no new information is available), such that
(2.1) v_i(a', σ'' | μ'', K) ≥ v_i(a', σ' | μ', K), and
(2.2) v_i(a', σ'' | μ'', K) ≥ v_i(a, σ | μ, K),
then (σ', μ') = (σ'', μ'') on K.
(3) If the move is on the equilibrium path, i.e. a, Bayes' rule is used to obtain (σ', μ').
(4) If the belief becomes degenerate, it remains degenerate. That is, if μ(θ | h) = 1, then prob(θ) = 1 for every subsequent history of h; similarly, if μ(θ | h) = 0, then prob(θ) = 0 for every subsequent history of h.
Our new SE is defined henceforth with this definition of NUR. Under NUR, the SE is stronger, because when a player updates her belief, she deals not only with zero-probability events but also with a new information partition.
3. Necessary and Sufficient Conditions
Necessary and sufficient conditions are required before a working mechanism can be constructed to implement the truth-revealing SE outcome.
3.1. Necessary Condition
For the truth-revealing purpose, our necessary condition is a revised version of condition C in [1], condition α in [2], and condition B in [3]. We need some definitions before constructing our necessary condition.
Definition (N1): An allocation x is a function x: Θ → X, where X is a finite set of possible allocations.
Definition (N2): A choice function is a subset of all possible allocations.
Definition (N3): A deception for player i is a strategy α_i: Θ_i → Θ_i. A deception set is D_i, with α_i ∈ D_i, i ∈ N, and D = D_1 × ... × D_n.
Definition (IC): A choice function f(θ) satisfies the incentive compatibility (IC) condition if and only if, for all θ_i, θ'_i ∈ Θ_i,
v_i(f(θ_i, θ_{-i}) | θ_i, θ_{-i}) ≥ v_i(f(θ'_i, θ_{-i}) | θ_i, θ_{-i}).
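A toy sketch of the IC check in Definition (IC); the two types, the choice function, and the payoff table are hypothetical and serve only to illustrate the inequality:

```python
# Hypothetical incentive-compatibility check: for every true type and every
# misreport, the player must weakly prefer the allocation obtained by reporting
# the truth, v_i(f(theta_i) | theta_i) >= v_i(f(theta_i') | theta_i).

def is_incentive_compatible(types, f, v):
    """f maps a reported type to an allocation; v[(allocation, true_type)] is the payoff."""
    return all(v[(f[t], t)] >= v[(f[r], t)] for t in types for r in types)

types = ["low_cost", "high_cost"]
f = {"low_cost": "strict_abatement", "high_cost": "mild_abatement"}   # choice function
v = {("strict_abatement", "low_cost"): 4, ("mild_abatement", "low_cost"): 3,
     ("strict_abatement", "high_cost"): 1, ("mild_abatement", "high_cost"): 2}
print(is_incentive_compatible(types, f, v))   # True for this illustrative payoff table
```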
Now we can define the necessary condition and its associated proposition.
Condition A: Let f be the choice function and α the deception such that f(α(θ)) ≠ f(θ), where f(θ) is the choice set of truth-revealing outcomes and f(α(θ)) is the choice set of deception outcomes. For each x ∈ f(α(θ)), there exist
1) type profiles θ_i ∈ Π_i, where Π_i is the information partition for player i, i ∈ N, and
2) a finite sequence of strategies a^0 = x, a^1, ..., a^L, a^{L+1} ∈ A, and also
3) a sequence of probability measures {μ^0, μ^1, ..., μ^L, μ^{L+1}},
such that, for each agent j(k) ∈ N, k = 0, ..., L, and a^k ∈ f(α(θ)), f satisfies:
(A1) v_{j(k)}(a^{k+1} | θ, μ^k) ≤ v_{j(k)}(a^k | θ, μ^k), and
(A2) v_{j(L)}(a^{L+1} | θ, μ^{L+1}) ≥ v_{j(L)}(a^L | θ, μ^L),
where μ denotes some belief system which supports this deviation.
We can derive proposition 1 by using condition A.
Proposition 1. If a truth-revealing choice function f is implemented as the SE outcome, then it satisfies condition A.
Proof: Assume that f is implemented as a SE by an extensive mechanism g. Let SE(g, σ, μ) denote the sequential equilibrium of the game g with the associated equilibrium assessment (σ, μ). Thus, for all θ ∈ Θ, f(θ) is implemented in SE(g, σ, μ) with the support of (μ, σ, h), where μ is the prior probability distribution of the game g, and h ∈ H.
The inequality relation in (A1) of condition A is quite straightforward. Suppose that f is implemented in SE(g, σ, μ) with the support of (μ, σ, h). Let k be the first point where agent j(k) deviates from the equilibrium path. Condition (A1) shows that a deviation from an equilibrium strategy to the next stage is not as profitable as staying on the equilibrium path: the expected payoff from the deviation, v_{j(k)}(a^{k+1} | θ, μ^k), is less than or equal to the expected payoff from staying on the equilibrium path, v_{j(k)}(a^k | θ, μ^k). This condition must hold for all k for (σ, μ) to be a SE.
Suppose some agents play a deception α which implements x ∈ f(α(θ)). For this deception to be non-optimal, it must be profitable for some type of agent, say j(L), to defect from the deception; thus, the deception is upset. This is shown as a “preference reversal” condition in the (A2) part of condition A. That is, a deviation from the deception would generate an expected payoff, v_{j(L)}(a^{L+1} | θ, μ^{L+1}), that is greater than or equal to the expected payoff from the deception, v_{j(L)}(a^L | θ, μ^L). So condition (A2) makes sure that no deception is profitable in a sequential game.
Suppose there exists some assessment (σ', μ') such that (σ', μ') is in SE(g, σ', μ') with the associated choice function f(α(θ)); that is, suppose that the deception is an optimal outcome. Then, from (A1),
v_{j(k)}(a^{k+1} | x', μ^k) ≤ v_{j(k)}(a^k | x', μ^k).
Since x' ∈ f(α(θ)) and x' ≠ x, f cannot be an outcome in SE(g, σ', μ'), which contradicts the initial assumption that f is implemented in SE(g, σ, μ). Q. E. D.
The necessary condition eliminates the deception α played by the informed player. For the deception to be non-optimal, it must be worthwhile for some type of agent to defect from the deception. Condition A allows a sequence of strategies in which some agents play the deception until stage L, at which point the deceiver faces a preference reversal in the next stage. Condition A is only a necessary condition because it does not consider the effect on the posterior belief and the associated strategies when the deception was played in a previous stage. If the deception is played and there are consistent beliefs supporting it, the deception outcome may not be ruled out.
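A toy numerical illustration of the condition A logic; the payoffs are hypothetical, the beliefs μ^k are suppressed, and (A1) is checked only up to stage L − 1 for simplicity (in the paper, (A1) and (A2) are evaluated under different beliefs):

```python
# Toy illustration (hypothetical numbers) of Condition A: along the sequence
# a^0 = x, a^1, ..., a^L, each deviation to the next stage is weakly unprofitable
# (A1), while at stage L the deceiving type faces a preference reversal (A2),
# so the deception cannot survive as an equilibrium outcome.

stay = [5.0, 4.0, 3.0]    # v_{j(k)}(a^k | theta, mu^k)     for k = 0, 1, 2 (= L)
move = [4.5, 3.5, 3.8]    # v_{j(k)}(a^{k+1} | theta, mu^k) for k = 0, 1, 2 (= L)

L = len(stay) - 1
a1_holds = all(move[k] <= stay[k] for k in range(L))   # (A1) for k = 0, ..., L-1
a2_holds = move[L] >= stay[L]                          # (A2): reversal at stage L
print(a1_holds, a2_holds)   # True True: the deception is upset at stage L
```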
3.2. Sufficient Condition
To achieve a truth-revealing dispute resolution, we need a sufficient condition that allows us to design a mechanism using the expert's report as a credible threat to implement the truth-revealing SE outcome. We use a weak domain restriction and a restriction on the posterior beliefs under deception to rule out the possibility of deception. The following definition and condition are the necessary parts of our sufficient condition.
Definition DR: A choice function f satisfies the domain restriction if not all agents have the same ranking over all outcomes.
Condition B follows the same reasoning as the “posterior reversal” condition in [4].
Condition B: Let f(θ) be the choice set of the truth-revealing outcomes, which satisfies the domain restriction. For each deception α ∈ D, there exist an associated outcome set, f(α(θ)), and a supporting posterior belief, μ'. Suppose x ∈ f(θ), x' ∈ f(α(θ)), and x ≠ x'; then
(B1) v_i(x, θ | σ_{-i}, μ) ≥ v_i(x', θ | σ_{-i}, μ'), for all i ∈ N and θ ∈ Θ, where σ_{-i} is the truth-telling strategy set for all other players except player i.
Suppose there exist two constant allocations, y, z ∈ X, and suppose that
(B1-1) there exists a consistent belief, μ', which supports truthful reporting with a new information partition Π'. Let Π' denote the information partition supported by μ'. For all i, j ∈ N, i ≠ j, and θ ∈ Θ, if truth-telling was the strategy in the previous stage, then v_i(y, θ | μ') ≥ v_i(z, θ | μ') and v_j(y, θ | μ') ≥ v_j(z, θ | μ');
(B1-2) for all consistent beliefs which support a deception (reporting θ' instead of θ), if the deception occurs, then there exist some type θ'' ∈ Θ and a supporting consistent belief μ'', such that v_i(z, θ'' | μ'') ≥ v_i(y, θ'' | μ'') and v_j(z, θ'' | μ'') ≥ v_j(y, θ'' | μ'') for all θ'' ∈ Θ;
(B2) for all outcomes x* ∈ X and θ ∈ Θ, v_i(x*, θ | μ') ≥ v_i(x, θ | μ').

Under this condition, the posterior reversal condition
identifies the properties of posterior distribution which
separate the beliefs of truth-telling from deception. The
posterior distribution translates the variations in beliefs
into variation in the distribution over outcomes. At some
point, the belief under truth-telling (i.e.,
) is separated
from the beliefs under deception (i.e.,
). Condition
(B1) shows that if player i challenges and push the game
into 2nd stage, then y will be the equilibrium under
truth-telling, and it will be z if player j plays deception.
Condition (B2) shows that the challenger will change her
counter offer to *
x
after the elicitation of the private
information under condition (B1).
Condition B can be simplified when we introduce a new player, the expert. Although the expert has perfect information about the type profile, this information structure is similar to, but not really, a true NEI structure, because uninformed players will be informed only after a sincere report or a voluntary revelation by the informed player. However, with a proper reward structure, the expert is more likely to make a sincere report, which becomes a formidable and credible threat to deter the deception. Our sufficient condition is constructed as condition D.
Condition D: Let f(θ) be the choice set of truth-revealing outcomes. Let i denote the uninformed player and j the informed player. For each deception α ∈ D, there exist an associated outcome set f(α(θ)) and a posterior belief μ'. Suppose x ∈ f(θ), x' ∈ f(α(θ)), and x ≠ x'; then
(D1) v_i(x, θ | σ_{-i}, μ) ≥ v_i(x', θ | α_i, σ_{-i}, μ'), for all i ∈ N and θ ∈ Θ, where σ_{-i} is the truth-telling strategy set for all other players except player i, and α_i is the deception played by player i.
If the truth is disputed in the previous stage, and there exist two constant allocations as the final outcome, y, z ∈ X, then suppose that
(D2) there exists a consistent probability η for the uninformed player to believe and rely on the expert's report, η ∈ (0, 1]. Suppose there also exists a new information partition associated with a consistent belief, μ', which denotes the belief supporting truthful reporting, such that
D2. 1) for all s ∈ N, x* ∈ X, and θ ∈ Θ, if truth-telling was the strategy in the previous stage, then v_s(x*, θ | μ) ≥ v_s(y, θ' | μ') ≥ v_s(z, θ' | μ'), and
D2. 2) for all the consistent beliefs which support a deception, i.e. μ'', if a deception occurs, then there exists some type profile θ'' ∈ Θ such that, for all s ∈ N and x* ∈ X, v_s(z, θ'' | μ'') ≤ v_s(y, θ'' | μ'') ≤ v_s(x*, θ'' | μ'').
D2. 3) Once the updated belief becomes degenerate, it remains degenerate.
With condition D, we can derive proposition 2.
Proposition 2. If a truth-revealing choice function f satisfies condition D and the domain restriction in definition DR, then f can be implemented as a SE.
Our remark: The proof of proposition 2 is quite similar to the proof of proposition 1, except for the restrictions placed on the posterior belief, which depend on whether a deception was played previously. With the domain restriction, a dispute is probable, and a SE resolution is thus desirable. Suppose f is a choice function in SE(g, σ, μ) with the support of (μ, η), where η is the probability of getting an expert's report that could add new information to the updating of a new consistent belief, μ'. If for each deception α there exists an outcome f(α(θ)) with a supporting posterior belief μ', and for each x ∈ f(θ), x' ∈ f(α(θ)), and x ≠ x', condition (D1) prescribes that truth-telling is preferable to any other strategy, i.e., it is the IC condition for truth-telling. If a deception is suspected and the game is pushed to the next stage, condition (D2) ensures that, when the probability of posterior verification is positive, i.e. η ∈ (0, 1], a deception results in a preference reversal with some possibly worse outcomes. Thus, deception is not profitable for the informed player. If truth-telling was the strategy in the previous stage, challenging the truth would not be profitable either, as described in condition D2. 1). Consequently, neither deception nor challenge can ever be an equilibrium strategy in a sequential game if condition D is satisfied.
Proof: Suppose f is a truth-telling SE outcome, i.e. f ∈ SE(g, σ, μ), and f satisfies condition D, which means, according to (D1), that for all i ∈ N and θ ∈ Θ,
v_i(x, θ | σ_{-i}, μ) ≥ v_i(x', θ | α_i, σ_{-i}, μ'),
where x ∈ f(θ), x' ∈ f(α(θ)), x ≠ x', and σ_{-i} is the equilibrium strategy of telling the truth for all other players. Now suppose f(α(θ)) also satisfies condition D and is a SE outcome. Then v_i(x', θ | ·) ≥ v_i(x, θ | ·) must be satisfied according to condition (D1), which contradicts the initial assumption that f satisfies condition D and is implemented in SE. So f and f(α(θ)) cannot both satisfy condition D and be in the same equilibrium set. This provides the first part of the contradiction argument proving proposition 2.
Next we need to eliminate the possibility of a deviation from the equilibrium strategies for both the informed and the uninformed players.
Suppose there exists a SE strategy supported by a belief μ'' (a belief supporting a previous deception); the game then ends with the final outcome (z, μ''). Condition D2. 2) shows that (z, μ'') cannot be the SE outcome, because v_s(x*, θ'' | μ'') ≥ v_s(y, θ'' | μ''), v_s(z, θ'' | μ'') for all s ∈ N and x* ∈ X. So condition D2. 2) contradicts the assumption that a SE supported by a previous deception could exist. This is the second contradiction proving that only the truth-telling strategy is a SE.
So, in order to gain a higher payoff, i.e. v(x, θ), the informed player will not deviate from the equilibrium strategy of truth-telling in the first stage. Since the equilibrium strategy is truth-telling, could “challenging the truth” be a SE outcome?
Suppose, in equilibrium, a challenge is issued by the uninformed player while no deception was played in the previous stage. The final outcome will then be (y, μ'), which cannot be the SE outcome, because by condition D2. 1), v_s(x*, θ | μ) ≥ v_s(y, θ' | μ'), v_s(z, θ' | μ') for all s ∈ N; that is, “challenging the truth” does not benefit the challenger. This provides the final contradiction: when the equilibrium strategy is telling the truth in the previous stage, condition D2. 1) contradicts the assumption that a challenge could be a SE strategy.
So neither deception nor challenge can ever be a SE strategy in the first stage, and f (a truth-revealing SE) will be implemented. Q. E. D.
3.3. Mechanism: An Example
We can construct many types of mechanisms to resolve environmental conflicts and disputes. Suppose game G consists of the following stages:
Stage j.0: To elicit player j's private information, player i forms her prior belief (μ, Π).
Stage j.i: Player j announces his type, θ_j, and player i simultaneously announces either “agree” or “challenge”. If player i announces “agree”, f(θ_j) is chosen and implemented, the game ends here, and no more information will be extracted; therefore, no more sunk costs are spent. If player i issues a “challenge”, she suspects player j's announcement. Player i then has two options after issuing the “challenge”:
(1) By announcing a credence probability η as the mixed strategy measure, she chooses to randomize between eliciting and believing in the expert's report to make her final offer (i.e., proceeding to stage j.p) and breaking off the negotiation (i.e., the game ends here).
(2) Player j is allowed to make another announcement, θ'_j ≠ θ_j. Player i can either “agree” and implement the resolution outcome according to θ'_j and μ' (the new supporting beliefs), or “challenge” again. If she agrees, the game ends here. If she challenges, player j pays a larger share toward hiring the expert, and the game proceeds to stage j.p.
Stage j.p: An expert is hired to reveal the true state and to choose a pair of worse outcomes, (y, z), according to condition (D2). Player i and player j share the cost of hiring the expert. At this stage, player j can be punished according to the deception made in stage j.i, and player i can also pay a penalty for issuing unnecessary challenges. The game ends here.
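A compact sketch of how the stages of game G could be simulated; the type labels, the simple challenge rule, and the cost-sharing details below are illustrative assumptions, not part of the mechanism's formal definition:

```python
# Hypothetical walk-through of game G. The informed player j announces a type;
# the uninformed player i either agrees (stage j.i ends the game at f(theta_j))
# or challenges, which leads to the expert stage j.p, where the true type is
# revealed, one of the worse outcomes (y, z) is imposed, and the expert's cost
# is shared. All labels and numbers are illustrative.

def play_game(true_type, announced_type, i_challenges, expert_cost=1.0):
    if not i_challenges:
        return f"implement f({announced_type})"          # stage j.i: agreement, game ends
    # stage j.p: the expert reveals the true state; deception is punished via z,
    # an unnecessary challenge is penalised via y; both sides share the expert's cost.
    if announced_type != true_type:
        return f"implement z, punish j, split expert cost {expert_cost}"
    return f"implement y, penalise i for the challenge, split expert cost {expert_cost}"

print(play_game("high_cost", "high_cost", i_challenges=False))  # truthful report, accepted
print(play_game("high_cost", "low_cost",  i_challenges=True))   # deception, challenged
```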
This mechanism can be applied to resolve environmental disputes as well as other kinds of conflict, as long as the dispute is caused by deception alone and the only way to reach a resolution is to reveal the truth. There are also some case studies on similar types of conflict resolution, together with mechanism suggestions, in Lin [6]. We now discuss some environmental dispute cases which could be resolved by our mechanism.
3.4. The Applicable Example Explained
One example of a possible application is the dispute between Formosa Plastics (FP) and Texas' local environmental watchdog, the Calhoun County Resource Watch (CCRW), ongoing since the late 1980s. Texas needed FP to boost its plummeting economy in the 1980s, but the waste water discharged by FP would cause a huge degradation in the quality and quantity of the shrimp in Lavaca Bay (the third-largest fishing ground in the United States at that time). CCRW's president took some extreme measures to stop FP's operations; for example, she undertook a hunger strike of more than 40 days and sank her boat on the spot at the time of FP's effluent discharge. After the news exposure, FP had to pay huge fines for the violations. CCRW suspected FP of covering up spills, silencing workers, flouting the EPA, and dumping highly toxic chemicals into the air, land and sea. FP claimed it was willing to put forward a plan of further abatement, but CCRW did not trust the company enough to negotiate. So the war between them went on for years before they had to sit down and talk in order to solve the problem. Our mechanism would make the information revealed to all parties involved, and the resolution would start from there. This actually happened in 1993, when an outside expert trusted by CCRW joined the negotiation process and an agreement was signed. However, there was no law or legal mechanism for the disputing parties to conduct such a resolution process, so the 1993 agreement was really accidental. Thus, when another dispute started in 2002, the local activist had to chain herself to one of the plant's towers.
Our mechanism could be applied in the following fashion: a law is enacted to require all disputing parties to form a resolution committee (which acts like an arbitrator), and the law gives this committee the legal right to put forward a set of “rewards” and “penalties” according to our sufficient and necessary conditions; the resolution process starts from there. When the true information is undoubtedly revealed, an agreement will be signed, just like the 1993 agreement between CCRW and FP, and the dispute is resolved.
Global disputes such as climate change and GHG reduction are too complex for a single mechanism to resolve, but as long as the true information is revealed to all parties concerned, no one can morally condemn a country that truly cannot afford the costs of abatement. When there is no deception and no private agenda across the negotiation table, the parties might find a solution that reduces GHG and, in the meantime, preserves each country's economy as well. The contribution of our theory is to eliminate the possible deception which may worsen the disputes and make everyone worse off in the end.
4. Conclusions
We have presented the basic model and the sufficient and necessary conditions for a proper mechanism to implement a perfect-information (truth-revealing) SE outcome and to resolve disputes caused by information asymmetry. We have also shown some possible applications.
Our model can be applied to a wide variety of interesting cases, such as externality and compensation mechanisms, conflict resolution, negotiation, and bargaining problems, provided the conflict is caused by deception alone. The independent third-party experts serve as an option to catch deception if necessary, even though they may never be called upon in SE implementation, since the essence of our model is to get the information revealed in the first stage so that the dispute is resolved then and there.
5. References
[1] J. Moore and R. Repullo, “Subgame Perfect Implementation,” Econometrica, Vol. 56, No. 5, 1988, pp. 1191-1220. doi:10.2307/1911364
[2] D. Abreu and A. Sen, “Subgame Perfect Implementation:
A Necessary and Almost Sufficient Condition,” Journal
of Economic Theory, Vol. 50, No. 2, 1990, pp. 285-299.
doi:10.1016/0022-0531(90)90003-3
[3] S. Baliga, “Implementation in Economic Environments with Incomplete Information: The Use of Multi-Stage Games,” Games and Economic Behavior, Vol. 27, No. 2, 1999, pp. 173-183. doi:10.1006/game.1998.0667
[4] J. Bergin and A. Sen, “Extensive Form Implementation in Incomplete Information Environments,” Journal of Economic Theory, Vol. 80, No. 2, 1998, pp. 222-256. doi:10.1006/jeth.1997.2388
[5] S. J. Grossman and M. Perry, “Perfect Sequential Equi-
librium,” Journal of Economic Theory, Vol. 39, No. 1,
1986, pp. 97-119. doi:10.1016/0022-0531(86)90022-0
[6] H. C. Lin, “Strategic Information and Bargaining: The Case of Environmental Concerns,” Ph.D. Thesis, University of Wisconsin-Madison, Madison, 1999.