**Theoretical Economics Letters**

Vol.07 No.06(2017), Article ID:79470,20 pages

10.4236/tel.2017.76113

Normative Utility Models for Pareto Scalar Equilibria in n-Person, Semi-Cooperative Games in Strategic Form^{*}

H. W. Corley

Center on Stochastic Modeling, Optimization, & Statistics (COSMOS), The University of Texas at Arlington, Arlington, USA

Copyright © 2017 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: August 4, 2017; Accepted: September 27, 2017; Published: September 30, 2017

ABSTRACT

Semi-cooperative games in strategic form are considered in which either a negotiation among the n players determines their actions or else an arbitrator specifies them. Methods are presented for selecting such action profiles by using multiple-objective optimization techniques. In particular, a scalar equilibrium (SE) is an action profile for the n players that maximizes a utility function over the acceptable joint actions. Thus the selection of “solutions” to the game involves the selection of an acceptable utility function. In a greedy SE, the goal is to assign individual actions giving each player the largest payoff jointly possible. In a compromise SE, the goal is to make individual player payoffs equitable, while a satisficing SE achieves a target payoff level while weighting each player for possible additional payoff. These SEs are formally defined and shown to be Pareto optimal over the acceptable joint actions of the players. The advantage of these SEs is that they involve only pure strategies that are easily computed. Examples are given, including some well-known coordination games, and the worst-case time complexity for obtaining these SEs is shown to be linear in the number of individual payoffs in the payoff matrix. Finally, the SEs of this paper are checked against some standard game-theoretic bargaining axioms.

**Keywords:**

Pareto Equilibrium, Scalar Equilibrium, Semi-Cooperative Games, Utility Transformation, Normative Game Theory, Nash Equilibrium, Berge Equilibrium, Compromise Equilibrium, Greedy Equilibrium, Satisficing Equilibrium, Arbitration

*Pareto Scalar Equilibria for n-Person Games.

1. Introduction

Game theory is the study of strategic interactions among n rational decision makers called agents (or players), whose decisions affect each other. Its systematic development began with von Neumann and Morgenstern [1] , who described both non-cooperative and cooperative games. These categories have become one approach for classifying games [2] . Either type involves a solution concept to recommend, predict, or explain player choices. Depending on the use of these solutions, game theory can also be divided into normative, predictive, and descriptive branches. The normative use of game theory is to recommend ideal decisions that the participants should make in a given game. The predictive application is to predict the choices of the participants. On the other hand, descriptive game theory, which involves empirical data, attempts to explain the actual behavior of these decision makers. Of course, both classification schemes are inexact. For example, in the latter taxonomy, the normative, predictive, and descriptive uses are interrelated. Descriptive game theory can lead to better mathematical models that give better recommendations and predictions. However, some decision theorists―including von Neumann [1] , Savage [3] , and Aumann [4] ―believe that the principal use of game theory is normative.

The purpose of this paper is to present normative models for games with both cooperative and noncooperative aspects. In each model, all players are assumed to have the same notion of rationality, which differs among the models. Solutions to each model are obtained by solving a scalar optimization problem to avoid the difficulties associated with the usual game theoretic equilibria. Moreover, these solutions involve only pure strategies. The models developed here could also be used by an arbitrator to prescribe an action profile for the game. In this section, we will first summarize the basic ideas of both non-cooperative and cooperative games. Next, we review the literature on games with both competitive and cooperative aspects, including that on arbitration for such decision problems. Then the restriction of our solutions to pure strategies will be discussed, and finally the notion of a scalar equilibrium will be defined.

Modern game theory as described in Myerson [5] and Maschler et al. [6] is predominantly non-cooperative. A non-cooperative game involves two or more utility-maximizing players. The key feature is that it focuses on the actions of the individual players. Non-cooperative game theory requires the solution concept to be a Nash equilibrium [7] [8] [9] . In other words, rational players are considered selfish. They act in their individual self-interest in the sense that each player’s strategy would maximize his expected payoff for the strategy profile of the other $n-1$ players. Thus in a Nash equilibrium (NE) no player can improve his expected payoff by unilaterally changing his pure or mixed strategy. An NE always exists in mixed strategies but may not be Pareto optimal. Moreover, there may be multiple pure or mixed NEs, and various refinements such as properness have been proposed to eliminate implausible equilibria.

Social dilemmas [10] [11] [12] illustrate that the selfish behavior manifested in NEs may conflict with group or team interests. In the Prisoner’s Dilemma, for example, both players do better if both cooperate. To accommodate such situations, Berge [13] proposed a pure-strategy solution that was formalized by Zhukovskiy [14] . A Berge equilibrium (BE) is a pure strategy profile in which each group of $n-1$ players chooses strategies that maximize the remaining player’s payoff. For a game with unselfish players invoking this Golden Rule rationale, a BE is thus an equilibrium since no unilateral change of strategy by any player can improve another player’s payoff. Such mutual support has been studied in Colman [15] , Corley and Kwain [16] , Corley [17] , and the references therein. The mixed BE is called a dual equilibrium to the NE in the latter two references because of the duality discussed there. Regardless, with this interpretation a pure or mixed BE can be considered a mutually cooperative or altruistic solution concept for non-cooperative games when each player’s goal is to help the remaining players as opposed to himself. However, the BE remains a solution concept for non-cooperative games since it is each player’s individual choice to be altruistic. Moreover, the players are rational in the following sense: a person is rational if the person makes decisions and acts in a manner consistent with his or her stated objective. In the BE model, it is assumed that each player’s objective is to be mutually supportive if possible.

In contrast, cooperative (or coalitional) game theory focuses on groups of the n players, rather than individual players themselves. Players form coalitions in cooperative games so that the members can receive more benefit than they could individually. Cooperative game theory focuses on predicting the coalitions that will form, the joint actions the coalitions will take, the resulting collective payoffs, and the binding agreements the coalitions will make [18] . Given this information, the cooperative model is concerned with identifying a fair allocation of benefits to the n players. Different solution concepts for cooperative games define fairness differently and thus assign different payoffs to the individual players. Common solution concepts include the stable set of von Neumann and Morgenstern [1] , the Shapley value [19] , and the core of Gillies [20] , as well as the kernel of Davis and Maschler [21] and the nucleolus of Schmeidler [22] .

The essential difference between non-cooperative and cooperative game theory is that non-cooperative games focus on what individuals can do acting alone, while cooperative games focus on what groups can accomplish if they work together. Contracts must be self-enforcing in non-cooperative games, whereas players can make enforceable contracts in cooperative games. However, the contracts in cooperative games are not enforced internally, but externally by an outside party such as an arbitrator. In this paper a class of hybrid games in strategic form, with aspects of both non-cooperative and cooperative games, is called semi-cooperative. Such games may involve either negotiation by the players or external arbitration.

An early example of such a game was considered by Nash [23] , who presented a unique solution for a two-person bargaining problem in strategic form with complete information. This solution in mixed strategies involves a competitive component, in which neither player will accept less than his status quo payoff, as well as a cooperative component modeled by the Nash product, in which the players share in the benefits of cooperation. In [24] Nash extended this work to two-person demand games involving threats. Raiffa [25] defined an arbitration scheme as a mapping from a given generalized two-person game into a unique element of the set of mixed strategies for the game. Different imposed conditions gave different mappings that yield mixed strategies to be used either as a bargaining solution or as strategies imposed by an arbitrator on the players. Rosenthal [26] developed an arbitration model for two-person cooperative games in strategic form in which an arbitrator selects a single Pareto-optimal pure strategy reflecting the relative strengths of the players. Kalai and Rosenthal [27] gave schemes for an arbitrator to use under ignorance and surveyed the game-theoretical arbitration literature to that point.

More recent treatments of semi-cooperative games include Bacharach et al. [28] , who consider a team reasoning approach to strategic games in which the players make decisions best for their team of players and not simply themselves. Hart and Mas-Colell [29] studied cooperation in n-person strategic-form games and developed a multistage bargaining procedure based on proposer commitment to obtain a subgame-perfect equilibrium that approaches Pareto efficiency. Diskin et al. [30] extended Raiffa’s arbitration model to an iterative procedure that converges to a Pareto optimum of the bargaining set for n players. Cao [31] modified the Hart-Mas-Colell procedure by delaying the realization of all threats to the end of the game so that the procedure would be consistent with the min-max solution in two-person zero-sum games. Finally, for two-person strategic-form games Kalai and Kalai [32] proposed a solution concept for cooperation called the cooperative-competitive (or coco) value. Their approach was developed either as a bargaining solution or for an arbitrator to obtain fair mixed strategies for a two-person game in strategic form. This coco value combines the two players’ payoff matrices into a max-max cooperative component as well as a max-min competitive one. They also provide an extensive literature survey of semi-cooperative games.

We next justify seeking only pure strategies here by describing the difficulties with mixed strategies. Historically, according to both von Neumann [1] and later Nash [24] , a randomizing process is an essential ingredient in the concept of a mixed strategy. According to Nash [24] , “the use of mixed strategies involves deliberate decisions to randomize, to decide between alternative possibilities by using a randomizing process involving specified probabilities.” However, Aumann [4] considered the concept of mixed strategies to be “intuitively problematic” since people rarely make decisions by lottery. Thus randomization lacks behavioral support. This difficulty is compounded, Aumann argued, by the fact that people are usually unable to generate random outcomes without some type of random or pseudo-random number generator.

Rubinstein [33] later described two interpretations of mixed strategies. The first was based on the purification theorem of Harsanyi [34] . The basic idea behind purification is that a mixed strategy merely reflects a player’s lack of knowledge of the other players’ information and decision-making process. In this view, random choices are seen as consequences of non-specified, payoff-irrelevant exogenous factors, which to Rubinstein was unsatisfactory. Rubinstein’s second interpretation considers the game’s players as standing for a large population of agents with n subpopulations, the members of which habitually choose specific actions, i.e., pure strategies. In other words, the fractions of the whole population choosing different strategies resemble a mixed strategy. But then, Rubinstein argued, no reason is provided for the individual agents’ choices. An alternate interpretation, perhaps the most intuitive, is that the probability of a player choosing a given action in a mixed strategy is the fraction of time that player would choose it in a long series of repeated games. Unfortunately, games are often not repeated and certainly not enough for a frequentist interpretation of probability. Hence, Aumann and Brandenburger [35] interpret a mixed strategy for player i as a representation of the beliefs of i’s opponents about the action that player i will choose. However, opponents will likely have different beliefs, especially if they have different notions of rationality, so the predictive power of a mixed strategy is limited since a Nash equilibrium becomes an equilibrium in beliefs rather than actions.

Thus the interpretation of mixed strategies is controversial. The principal advantage of mixed strategies seems to be theoretical. They provide NEs when none exist in pure strategies and allow for the development of a rich mathematical theory as opposed to a combinatorial one. Nonetheless, practitioners are frequently ambivalent towards mixed strategies, which are currently difficult (if not impossible) to compute except for relatively small games. Moreover, practitioners often view the mixed strategy solution concept as incomplete since it does not specify why and how players randomize their decisions.

In view of the above discussion, this paper develops pure-strategy solution concepts for n-person, strategic-form, semi-cooperative games with complete information and no threats. The solutions are normative in that they are suggestions for assigning action profiles to the players, under the assumptions of the models. Each is the pure-strategy solution to a scalar optimization problem. Such a solution is designated as a scalar equilibrium and provides a rational assignment in either a binding agreement among the players or a decision by an arbitrator. However, the negotiation process itself is not modeled here. If the players cannot reach an agreement, a model of this paper could also be used by an arbitrator to yield a fair and rational decision. The latter situation would be similar to two bargaining players having an arbitrator impose either the Nash [23] , Kalai-Smorodinsky [36] , or Kalai [37] bargaining solution. The work here restricts such bargaining problems to pure strategies. If an arbitrator is employed, however, the solutions proposed here are not game-theoretical concepts in a strict sense since the players do not choose their own actions.

For the semi-cooperative games described above, a scalar equilibrium (SE) is defined as an assignment of actions to the players using a decision criterion that maximizes an appropriate utility function of the players’ payoff profiles over the players’ acceptable joint actions. The choice of this utility function would be the result of a binding agreement among the players or an arbitrator’s decision. It is designated an equilibrium only in the sense that no player would change his action because of the agreement or the arbitrator’s decision. In other words, for an n-person, strategic-form game, the agreed-upon utility function T associates a scalar value to each payoff profile in the payoff matrix. A payoff profile with a maximum utility is considered an optimal action profile. In the case of ties, an action profile would be chosen according to secondary criteria, which are not studied here. As an agreement among the players, an SE could be either externally or internally enforceable. In the case of a treaty, for example, an external enforcement mechanism could be an international court, while an internal enforcement mechanism could be a war or trade sanctions. In an arbitration, the arbitrator would obviously be the enforcement mechanism. An SE need not be a NE or a BE.

A semi-cooperative game as considered here has both competitive and cooperative aspects. It is competitive in the sense that each player wants a good payoff, possibly meaning either a fair one or a large one. It is reasonable to assume that no player would agree to an action profile giving him less than his pure-strategy security level. Nor―as argued by Aumann [38] , Rubinstein [33] , and Bacharach et al. [28] ―would reasonable players accept a payoff profile that was Pareto dominated. Otherwise, there would be an alternate profile for which some players’ payoffs would be improved without diminishing any player’s. On the other hand, the games of this paper are also cooperative in the sense that ideally an agreement among the players is required to select T. The goal here is to reduce such semi-cooperative games to the selection of a reasonable T either by the players or by an arbitrator if the players are unable to reach an agreement. Such a T would then determine the players’ actions, and thus the approach is normative.

In summary, the SE approach extends the classic bargaining results of Nash [23] , Kalai-Smorodinsky [36] , and Kalai [37] , for example, to n players and to different decision criteria. It also restricts previous results for semi-cooperative games to the more difficult and practical problem of obtaining pure strategies. In particular, the SE approach addresses four problematic areas of non-cooperative game theory.

1) An SE, which always exists, assigns a specific action to each player, as opposed to a mixed strategy that is difficult to calculate, interpret, and implement.

2) An SE can be obtained quickly as the maximum of a finite number of scalar values, as opposed to the computational effort required to determine mixed strategy NEs. A particular T might yield an approximation to a mixed NE or BE. In the latter case, even a mixed BE may not exist [17] . However, such an approximation is satisfactory since implicit in the equilibrium refinement literature is the idea that a game is not a complete description of a strategic situation but rather an approximation itself.

3) SEs do not require any particular notion of rationality for the individual players. An assignment is rational if it maximizes the utility function T selected by the players. For example, a “rational” action profile could be one that gives each player a “fair” payoff relative to all the players. Another could model the team concept of Bacharach et al. [28] . T would capture the players’ notions of rationality, which could conceivably differ, though this possibility is not considered in this paper.

4) For games with multiple NEs or BEs, one could also use T to refine these equilibria and choose one with the highest feasible value of T.

The paper is organized as follows. In Section 2 we present preliminary notation, definitions, and results. In Section 3 we develop the greedy SE (GSE), establish that a GSE is Pareto optimal for a semi-cooperative game in strategic form, and present two-person examples, including coordination, Prisoner’s Dilemma, and Chicken games, to compare the GSE with other solution concepts. In addition, it is shown that a GSE is computationally tractable. In Section 4 we define a compromise SE and then a satisficing SE in Section 5. Both are again Pareto optimal and computationally tractable, and examples are presented. In Section 6 we state some standard bargaining axioms and note those satisfied by the SEs here. In Section 7 we offer conclusions.

2. Preliminaries

Consider first a standard n-person, non-cooperative game in strategic form for pure strategies. Let ${G}_{n}=\langle I,{\left({S}_{i}\right)}_{i\in I},{\left({u}_{i}\right)}_{i\in I}\rangle $ denote such a game, where $I=\left\{1,\cdots ,n\right\}$ is the set of players and ${S}_{i}$ is the finite set of ${m}_{i}$ actions for player i. For an action profile $s=\left({s}_{1},\cdots ,{s}_{n}\right)\in S={\displaystyle \underset{i\in I}{\prod}{S}_{i}}$ , ${u}_{i}\left(s\right)$ is the von Neumann-Morgenstern (VNM) utility of player i, and the payoff matrix consists of the n-tuples $u\left(s\right)=\left({u}_{1}\left(s\right),\cdots ,{u}_{n}\left(s\right)\right)$ ordered in the usual way. It is well known that VNM utilities are both ordinal and cardinal but that these utilities are usually incomparable between players. In other words, for any strategies $s,{s}^{\prime},{s}^{\u2033}\in S$ and $i\ne j$ , ${u}_{i}\left(s\right)-{u}_{j}\left(s\right)$ is not well defined but ${u}_{i}\left({s}^{\prime}\right)-{u}_{i}\left({s}^{\u2033}\right)$ is. We assume here that ${G}_{n}$ is a TU game, i.e., one with transferable utility. This assumption means that the players have a common currency, or numeraire, valued equally by all players and that there is no wealth effect. Thus all players derive the same utility from the same currency level. In such a currency (e.g., dollars), it follows that the players have quasi-linear utility functions in $\left({s}_{1},\cdots ,{s}_{n}\right)\in S$ . Furthermore, we assume that the utilities of each player $i\in I$ have been standardized by transformations of the form ${u}_{i}\left(s\right)={\alpha}_{i}{\stackrel{^}{u}}_{i}\left(s\right)+{\beta}_{i}$ for scalars ${\alpha}_{i}>0$ and ${\beta}_{i}$ , which yield an equivalent game [5] , so that each player’s VNM utility for $C is C utils and a difference of one util is as significant for any player as for any other. We call this Assumption U on the utilities.

The Nash and Berge equilibria are next defined as in [17] for pure strategies to contrast with an SE defined below.

Definition 2.1. The action profile ${s}^{*}$ is an NE for ${G}_{n}$ if and only if ${u}_{i}\left({s}^{*}\right)=\underset{{s}_{i}\in {S}_{i}}{\mathrm{max}}{u}_{i}\left({s}_{i},{s}_{-i}^{*}\right),\forall i\in I.$

Definition 2.2. The action profile ${s}^{*}$ is a BE for ${G}_{n}$ if and only if ${u}_{i}\left({s}^{*}\right)=\underset{{s}_{-i}\in {S}_{-i}}{\mathrm{max}}{u}_{i}\left({s}_{i}{}^{*},{s}_{-i}\right),\forall i\in I.$

Now let the n-person, pure-strategy, strategic-form, semi-cooperative game ${\Gamma}_{n}$ corresponding to ${G}_{n}$ be denoted by ${\Gamma}_{n}=\langle I,{\left({S}_{i}\right)}_{i\in I},{\left({u}_{i}\right)}_{i\in I},\Omega ,T\rangle $ , where $I,S,{S}_{i},{s}_{i},{u}_{i}$ are as in ${G}_{n}$ . Only the action profiles $s=\left({s}_{1},\cdots ,{s}_{n}\right)$ in $\Omega $ are deemed acceptable to the n players, so the feasible set $\Omega $ for ${\Gamma}_{n}$ should incorporate the pure-strategy security level. Finally, $T:u\left(\Omega \right)\to {R}^{1}$ is a utility function defined according to the particular situation modeled by a game ${\Gamma}_{n}$ .

Definition 2.3. An action profile $\stackrel{^}{s}$ is a security profile for ${\Gamma}_{n}$ if and only if ${\stackrel{^}{s}}_{i}\in \underset{{s}_{i}\in {S}_{i}}{\mathrm{arg}\mathrm{max}}\underset{{s}_{-i}\in {S}_{-i}}{\mathrm{min}}{u}_{i}\left({s}_{i},{s}_{-i}\right),\forall i\in I.$ For each player $i\in I$ , ${\stackrel{^}{s}}_{i}$ is called a security action and ${L}_{i}=\underset{{s}_{i}\in {S}_{i}}{\mathrm{max}}\underset{{s}_{-i}\in {S}_{-i}}{\mathrm{min}}{u}_{i}\left({s}_{i},{s}_{-i}\right)$ is called the security level.

The value ${L}_{i}$ is the least payoff that player i can be guaranteed to receive from his action in the game, regardless of the other players’ actions. It is possible that ${u}_{i}\left(\stackrel{^}{s}\right)>{L}_{i}$ since ${\stackrel{^}{s}}_{-i}$ is not necessarily a worst response to ${\stackrel{^}{s}}_{i}$ in a security profile. Regardless, it is reasonable to assume that no player would agree to an action profile in which he received less than ${L}_{i}$ . Define $\Psi =\left\{s\in S:{u}_{i}\left(s\right)\ge {L}_{i},\forall i\in I\right\}$ . It follows that $\Omega \subseteq \Psi $ . Leyton-Brown and Shoham [39] , for example, show that any pure NE of the non-cooperative game ${G}_{n}$ is a member of $\Psi $ .
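As a concrete illustration, the security levels of Definition 2.3 and the set $\Psi $ can be computed by enumerating the payoff matrix. The sketch below uses a hypothetical Prisoner’s Dilemma parameterization (with ${a}_{1}$ and ${b}_{1}$ taken as the defecting actions) that is consistent with the security levels ${L}_{1}={L}_{2}=1$ of Example 3.5 below; the specific payoff numbers are an assumption, since the entries of Figure 3 are not reproduced here.

```python
from itertools import product

# Assumed PD payoffs, consistent with Example 3.5 (a1, b1 = defect).
payoffs = {
    ("a1", "b1"): (1, 1), ("a1", "b2"): (5, 0),
    ("a2", "b1"): (0, 5), ("a2", "b2"): (3, 3),
}
actions = [("a1", "a2"), ("b1", "b2")]  # S_1 and S_2

def security_levels(payoffs, actions):
    """L_i = max over s_i of min over s_{-i} of u_i(s_i, s_{-i}) (Definition 2.3)."""
    n = len(actions)
    levels = []
    for i in range(n):
        others = actions[:i] + actions[i + 1:]
        levels.append(max(
            min(payoffs[rest[:i] + (s_i,) + rest[i:]][i]
                for rest in product(*others))
            for s_i in actions[i]
        ))
    return levels

L = security_levels(payoffs, actions)
# Psi: profiles giving every player at least his security level.
psi = {s for s, u in payoffs.items() if all(u[i] >= L[i] for i in range(len(L)))}
print(L)            # [1, 1]
print(sorted(psi))  # [('a1', 'b1'), ('a2', 'b2')]
```

With these assumed payoffs, $\Psi $ consists of the two diagonal profiles, matching the feasible set $\Omega $ stated in Example 3.5.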

An SE is an action profile ${s}^{*}\in \Omega $ determined from a decision criterion involving an aggregate utility function $T:u\left(\Omega \right)\to {R}^{1}$ for all players in $I$ that induces a preference relation ${\le}_{T}$ on $u\left(\Omega \right)$ as described in [40] . In particular, for all ${s}^{\prime},{s}^{\u2033}\in \Omega $ ,

i) $u\left({s}^{\prime}\right){<}_{T}u\left({s}^{\u2033}\right)$ if $T\left[u\left({s}^{\prime}\right)\right]<T\left[u\left({s}^{\u2033}\right)\right]$ ,

ii) $u\left({s}^{\prime}\right){=}_{T}u\left({s}^{\u2033}\right)$ if $T\left[u\left({s}^{\prime}\right)\right]=T\left[u\left({s}^{\u2033}\right)\right]$ ,

iii) $u\left({s}^{\prime}\right){\le}_{T}u\left({s}^{\u2033}\right)$ if either $u\left({s}^{\prime}\right){<}_{T}u\left({s}^{\u2033}\right)$ or $u\left({s}^{\prime}\right){=}_{T}u\left({s}^{\u2033}\right)$ .

We next maximize the aggregate utility function $T\circ u=T\left[u\left(s\right)\right]$ over $\Omega $ according to this preference relation and assign action profiles to the players based on this decision criterion. Such an approach is consistent in the sense that ${\le}_{T}$ is complete and transitive. An SE is now formally defined.

Definition 2.4. Let $T:u\left(\Omega \right)\to {R}^{1}$ be the utility function for a semi-cooperative game ${\Gamma}_{n}$ . The joint action profile ${s}^{*}\in \Omega $ is an SE for ${\Gamma}_{n}$ if and only if $T\left[u\left(s\right)\right]\le T\left[u\left({s}^{*}\right)\right]$ for all $s\in \Omega $ . Thus ${s}^{*}$ is an SE if and only if ${s}^{*}$ maximizes the aggregate utility function composition $T\circ u=T\left[u\left(s\right)\right]$ over $\Omega $ .
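Once $\Omega $ and $T$ are specified, Definition 2.4 reduces to a finite maximization. A minimal sketch follows; the dictionary encoding and the utilitarian choice of $T$ are illustrative assumptions, and the three payoff profiles are cells taken from Example 3.4 below.

```python
def scalar_equilibria(omega, u, T):
    """Return all profiles in omega maximizing the aggregate utility T(u(s)).

    omega : iterable of acceptable action profiles
    u     : maps a profile to its payoff n-tuple
    T     : maps a payoff n-tuple to a scalar (the agreed-upon utility)
    """
    omega = list(omega)
    best = max(T(u(s)) for s in omega)
    return [s for s in omega if T(u(s)) == best]

# Three cells of Example 3.4's payoff matrix, used here for illustration.
payoffs = {("a1", "b1"): (3, 4), ("a2", "b3"): (7, 4), ("a3", "b3"): (6, 6)}
# An example utilitarian T; Sections 3-5 use other choices of T.
ses = scalar_equilibria(payoffs, payoffs.get, sum)
print(ses)  # [('a3', 'b3')]
```

Ties in the maximization return multiple SEs, which would then be resolved by the secondary criteria or the arbitrator mentioned below.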

If ${\Gamma}_{n}$ has multiple SEs resulting from ties in the maximization, it is assumed that a negotiation among the players, similar to the one stipulating $T$ and $\Omega $ , will choose a single ${s}^{*}$ . If the game ${\Gamma}_{n}$ is arbitrated, the arbitrator will select $T,\Omega $ , and a single SE. A further significant application of the SE approach is its use as a refinement mechanism for multiple equilibria of ${G}_{n}$ . For example, $\Omega $ could be taken as the set of pure NEs for ${G}_{n}$ , all of which are in $\Psi $ , with $T$ specified either by the players, by an arbitrator, or possibly by the game’s model builder for normative purposes. Then the number of NEs could be reduced to those maximizing $T\circ u$ over $\Omega $ .

For a certain class of $T$ , an SE ${s}^{*}$ for ${\Gamma}_{n}$ will be Pareto optimal. To establish this fact, the following definitions and results are used.

Definition 2.5. For the game ${\Gamma}_{n}$ , the action profile ${s}^{\u2033}\in \Omega $ dominates ${s}^{\prime}\in \Omega $ if and only if ${u}_{i}\left({s}^{\prime}\right)\le {u}_{i}\left({s}^{\u2033}\right),\forall i\in I$ and ${u}_{j}\left({s}^{\prime}\right)<{u}_{j}\left({s}^{\u2033}\right)$ for some $j$ . An action profile ${s}^{*}\in \Omega $ is a Pareto maximum for ${\Gamma}_{n}$ if ${s}^{*}$ is not dominated by any $s\in \Omega $ . A Pareto maximum ${s}^{*}$ is said to be Pareto maximal.
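The dominance relation of Definition 2.5 can be checked directly, and the Pareto maxima of a finite $\Omega $ found by pairwise comparison. The sketch below reuses the three cells known from Example 3.4 as assumed illustrative payoffs.

```python
def pareto_maxima(omega, u):
    """Return the profiles in omega that are not dominated (Definition 2.5)."""
    omega = list(omega)

    def dominates(s2, s1):
        # s2 dominates s1: weakly better for all players, strictly for some.
        u1, u2 = u(s1), u(s2)
        return (all(a <= b for a, b in zip(u1, u2))
                and any(a < b for a, b in zip(u1, u2)))

    return [s for s in omega if not any(dominates(t, s) for t in omega)]

# Three cells of Example 3.4's payoff matrix, used here for illustration.
payoffs = {("a1", "b1"): (3, 4), ("a2", "b3"): (7, 4), ("a3", "b3"): (6, 6)}
print(pareto_maxima(payoffs, payoffs.get))  # [('a2', 'b3'), ('a3', 'b3')]
```

Here $\left({a}_{1},{b}_{1}\right)$ is dominated by $\left({a}_{2},{b}_{3}\right)$ , while the other two profiles are incomparable and hence both Pareto maximal.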

Definition 2.6. For the game ${\Gamma}_{n}$ , the utility function $T:u\left(\Omega \right)\to {R}^{1}$ is said to be strictly increasing on $u\left(\Omega \right)$ if and only if $T\left[u\left({s}^{\prime}\right)\right]<T\left[u\left({s}^{\u2033}\right)\right]$ for any ${s}^{\prime},{s}^{\u2033}\in \Omega $ for which ${s}^{\u2033}$ dominates ${s}^{\prime}$ .

An immediate consequence of Definitions 2.5 and 2.6 is the next result.

Lemma 2.7. If ${s}^{\ast}$ is an SE for ${\Gamma}_{n}$ and $T$ is strictly increasing on $u\left(\Omega \right)$ , then ${s}^{\ast}$ is Pareto maximal for ${\Gamma}_{n}$ .

Proof. We prove the contrapositive. If ${s}^{\ast}\in \Omega $ is not a Pareto maximum, there exists ${s}^{\prime}\in \Omega $ that dominates ${s}^{\ast}$ . But since $T$ is strictly increasing, it follows from Definition 2.6 that $T\left[u\left({s}^{\ast}\right)\right]<T\left[u\left({s}^{\prime}\right)\right]$ . Thus ${s}^{\ast}$ is not an SE for ${\Gamma}_{n}$ , establishing the result. ∎

3. Greedy Scalar Equilibrium

It is now assumed that each player is greedy and wants a payoff as high as jointly possible. A greedy scalar equilibrium (GSE) for ${\Gamma}_{n}$ is defined as follows. For $\Omega =\Psi $ and ${M}_{i}=\underset{s\in \Omega}{\mathrm{max}}{u}_{i}\left(s\right)$ , consider the utility function ${T}_{G}:u\left(\Omega \right)\to {R}^{1}$ for which

${T}_{G}\left[u\left(s\right)\right]={\displaystyle \underset{i\in I}{\prod}\frac{1}{{M}_{i}-{u}_{i}\left(s\right)+1}}\text{,}s\in \Omega $ (1)

Because of Assumption U, the multiplication in (1) defines a reasonable aggregate utility function ${T}_{G}\circ u$ on $\Omega $ . The number 1 in the denominators of (1) prevents a division by 0 if any ${u}_{i}\left(s\right)={M}_{i}$ .

Definition 3.1. The pure strategy profile ${s}^{\ast}$ is a GSE for ${\Gamma}_{n}$ if and only if ${s}^{\ast}$ maximizes the aggregate utility function ${T}_{G}\circ u={T}_{G}\left[u\left(s\right)\right]$ over $\Omega $ .
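Definition 3.1 can be computed by direct enumeration of (1) over $\Omega $ . The sketch below assumes the Prisoner’s Dilemma of Example 3.5, with $\Omega =\left\{\left({a}_{1},{b}_{1}\right),\left({a}_{2},{b}_{2}\right)\right\}$ and respective payoffs $\left(1,1\right)$ and $\left(3,3\right)$ ; these numbers are an assumption consistent with that example, since the entries of Figure 3 are not reproduced here.

```python
import math

# Assumed acceptable profiles and payoffs, consistent with Example 3.5.
omega = {("a1", "b1"): (1, 1), ("a2", "b2"): (3, 3)}

# M_i = max over omega of player i's payoff.
M = [max(u[i] for u in omega.values()) for i in range(2)]

def T_G(u):
    """Greedy utility (1): product of 1/(M_i - u_i + 1); the +1 avoids division by 0."""
    return math.prod(1.0 / (M[i] - u[i] + 1) for i in range(len(M)))

scores = {s: T_G(u) for s, u in omega.items()}  # T_G value for each profile
gse = max(scores, key=scores.get)               # profile maximizing T_G over omega
print(gse)  # ('a2', 'b2')
```

Under these assumed payoffs, ${T}_{G}$ equals $1/9$ at $\left({a}_{1},{b}_{1}\right)$ and $1$ at $\left({a}_{2},{b}_{2}\right)$ , so the GSE is mutual cooperation, matching the conclusion of Example 3.5.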

From Definition 3.1, a GSE always exists even though a pure NE modeling player selfishness may not. Moreover, from (1), it follows that $0<{T}_{G}\left[u\left(s\right)\right]\le 1$ for all $s\in \Omega $ . Maximizing ${T}_{G}\left[u\left(s\right)\right]$ over $\Omega $ requires that each ${u}_{i}\left({s}^{*}\right)$ be as close to ${M}_{i}$ as jointly possible. This maximization of (1) is a discrete version of maximizing $f\left({x}_{1},\cdots ,{x}_{n}\right)={\displaystyle \underset{i\in I}{\prod}\frac{1}{{x}_{i}+1}}$ over the region ${x}_{i}\ge 0,\forall i\in I$ . In this continuous version, $\frac{\partial f}{\partial {x}_{i}}<0,\forall i\in I$ , over the feasible region, so the maximum occurs at the point ${x}_{i}=0,\forall i\in I$ . We now establish that a GSE is a Pareto maximum.

Result 3.2. If ${s}^{\ast}$ is a GSE for ${\Gamma}_{n}$ , then ${s}^{\ast}$ is Pareto maximal for ${\Gamma}_{n}$ .

Proof. By Lemma 2.7, it suffices to show that ${T}_{G}:u\left(\Omega \right)\to {R}^{1}$ is strictly increasing. Let ${s}^{\prime},{s}^{\u2033}\in \Omega $ be pure strategy profiles such that ${s}^{\u2033}$ dominates ${s}^{\prime}$ . Thus

$0<\frac{1}{{M}_{i}-{u}_{i}\left({s}^{\prime}\right)+1}\le \frac{1}{{M}_{i}-{u}_{i}\left({s}^{\u2033}\right)+1},\forall i\in I$ and $0<\frac{1}{{M}_{j}-{u}_{j}\left({s}^{\prime}\right)+1}<\frac{1}{{M}_{j}-{u}_{j}\left({s}^{\u2033}\right)+1}$ for some $j\in I$ . Since these fractions are all positive, ${T}_{G}\left[u\left({s}^{\prime}\right)\right]={\displaystyle \underset{i\in I}{\prod}\frac{1}{{M}_{i}-{u}_{i}\left({s}^{\prime}\right)+1}}<{\displaystyle \underset{i\in I}{\prod}\frac{1}{{M}_{i}-{u}_{i}\left({s}^{\u2033}\right)+1}}={T}_{G}\left[u\left({s}^{\u2033}\right)\right]$ , and ${T}_{G}$ is strictly increasing on $u\left(\Omega \right)$ . ∎

A special case of Result 3.2 is the following corollary.

Corollary 3.3. If ${s}^{\ast}$ is a GSE for ${\Gamma}_{n}$ , then ${s}^{\ast}$ is not dominated by any pure NE of the associated ${G}_{n}$ .

On the other hand, the next example shows that a GSE for ${\Gamma}_{n}$ can dominate a pure NE for ${G}_{n}$ . This fact and Corollary 3.3 raise the possibility that a GSE for ${\Gamma}_{n}$ models player selfishness better than a pure NE for ${G}_{n}$ .

Example 3.4. Consider the two-person prescriptive game ${\Gamma}_{2}$ with the $3\times 3$ payoff matrix in Figure 1 below, where for simplicity we write $S=\left\{\left({a}_{i},{b}_{j}\right):i,j=1,2,3\right\}$. In this case, the security levels are ${L}_{1}=2$ and ${L}_{2}=1$, so $\Omega =S$. Immediately from Figure 1, ${M}_{1}=7$ and ${M}_{2}=6$. We calculate ${T}_{G}\left[u\left(s\right)\right]$ for each cell of Figure 1; the GSE corresponds to the underlined value in the scalar matrix of ${T}_{G}\left[u\left(s\right)\right]$ values shown in Figure 2.

From Figure 1 and Figure 2, the unique GSE for ${\Gamma}_{2}$ is $\left({a}_{3},{b}_{3}\right)$ with payoff profile $\left(6,6\right)$. The two pure NEs for ${G}_{2}$ are $\left({a}_{1},{b}_{1}\right)$ and $\left({a}_{2},{b}_{3}\right)$ with payoff profiles $\left(3,4\right)$ and $\left(7,4\right)$, respectively, while the two BEs are $\left({a}_{1},{b}_{1}\right)$ and $\left({a}_{3},{b}_{3}\right)$ with payoff vectors $\left(3,4\right)$ and $\left(6,6\right)$, respectively. This example illustrates that a GSE for ${\Gamma}_{n}$ is not necessarily an NE for ${G}_{n}$ and vice versa. Moreover, the GSE $\left({a}_{3},{b}_{3}\right)$ dominates the NE $\left({a}_{1},{b}_{1}\right)$. There is also a BE that is a

Figure 1. Payoff matrix for Example 3.4.

Figure 2. Scalar matrix for Example 3.4.

GSE. Finally, we note that if the payoff for $\left({a}_{3},{b}_{3}\right)$ were changed to $\left(6,5\right)$ and Figure 2 were recalculated, then the scalar value ${T}_{G}\left[\left(6,5\right)\right]=0.2500$, so $\left({a}_{3},{b}_{3}\right)$ would no longer be a GSE; in that case no BE is a GSE and no GSE is a BE.

Example 3.5. Consider now the classic two-person Prisoner’s Dilemma (PD) game with the payoff matrix of Figure 3. The associated scalar matrix for ${\Gamma}_{2}$ is shown in Figure 4. In this case, ${L}_{1}=1$ and ${L}_{2}=1$. Thus $\Omega =\left\{\left({a}_{1},{b}_{1}\right),\left({a}_{2},{b}_{2}\right)\right\}$, and the symbol × denotes that a strategy profile is not in $\Omega $. From Figure 4, $\left({a}_{2},{b}_{2}\right)$ is the unique GSE, which is not an NE. However, it is the unique BE. Thus in this example, $\left({a}_{2},{b}_{2}\right)$ is both a greedy and a mutually supportive action profile. Being mutually supportive in PD is better for each player than being selfish in the sense of an NE.
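The feasibility filtering and greedy scalarization of this example can be sketched in Python. Since the Figure 3 payoffs are not reproduced in this text, the block below assumes the standard PD payoffs $\left(1,1\right)$, $\left(5,0\right)$, $\left(0,5\right)$, $\left(3,3\right)$, which yield the security levels, feasible set, and GSE described above.

```python
from itertools import product

# Assumed standard PD payoffs (Figure 3 is not reproduced in this text):
# a1/b1 = Defect, a2/b2 = Cooperate.
U = {('a1', 'b1'): (1, 1), ('a1', 'b2'): (5, 0),
     ('a2', 'b1'): (0, 5), ('a2', 'b2'): (3, 3)}

A, B = ['a1', 'a2'], ['b1', 'b2']

# Security levels L_i: each player's maximin payoff.
L1 = max(min(U[(a, b)][0] for b in B) for a in A)
L2 = max(min(U[(a, b)][1] for a in A) for b in B)

# Feasible set Omega: profiles giving every player at least the security level.
Omega = [s for s in product(A, B) if U[s][0] >= L1 and U[s][1] >= L2]

# M_i = best feasible payoff for player i, then the greedy scalarization T_G.
M1 = max(U[s][0] for s in Omega)
M2 = max(U[s][1] for s in Omega)
T_G = {s: 1.0 / ((M1 - U[s][0] + 1) * (M2 - U[s][1] + 1)) for s in Omega}

gse = max(T_G, key=T_G.get)
print(L1, L2, sorted(Omega), gse)  # the GSE is the cooperative profile (a2, b2)
```

With these assumed payoffs, the off-diagonal cells leave one player below the security level 1 and so fall outside $\Omega $, and the cooperative cell attains ${T}_{G}=1$.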

Example 3.6. For $1<x<3$, Figure 5 is the payoff matrix of a Hawk-Dove game ${G}_{2}$ in which two countries vie for a contested resource. The Hawk pure strategy involves an escalated fight or even war for the resource, while the Dove pure strategy eschews such a fight. When $1<x<3$, ${L}_{1}=1$ and ${L}_{2}=1$, so $\Omega =\left\{\left({a}_{1},{b}_{2}\right),\left({a}_{2},{b}_{1}\right),\left({a}_{2},{b}_{2}\right)\right\}$. The pure NEs are $\left({a}_{1},{b}_{2}\right)$ and $\left({a}_{2},{b}_{1}\right)$ for ${G}_{2}$, and the BE is $\left({a}_{1},{b}_{1}\right)$, which is not feasible. The scalar matrix for ${\Gamma}_{2}$ is shown in Figure 6. For $1<x<2.2679$, $\left({a}_{1},{b}_{2}\right)$ and $\left({a}_{2},{b}_{1}\right)$ are GSEs as well as NEs. In this case, the greedy decision criterion has one player fighting and the other not. Such a situation could lead to eventual retaliation by the Dove country and years of turmoil. When $2.2679<x<3$, however, the GSE is $\left({a}_{2},{b}_{2}\right)$, while $\left({a}_{1},{b}_{2}\right)$ and $\left({a}_{2},{b}_{1}\right)$ are NEs. Hence, the GSE approach recognizes that a larger payoff $x>2.2679$ would result in a greedy Dove-Dove

Figure 3. Payoff matrix of Example 3.5.

Figure 4. Scalar matrix of Example 3.5.

Figure 5. Payoff matrix of Example 3.6.

Figure 6. Scalar matrix of Example 3.6.

strategy and avoid war, whereas the non-cooperative NE strategy does not.

Computational Procedure 3.7. A general procedure for obtaining the GSEs for ${\Gamma}_{n}$ is outlined below. ${T}_{G}\left[u\left(s\right)\right]$ will be maximized over $S$ and not $\Omega $ . However, for any $s\in S$ such that ${u}_{i}\left(s\right)<{L}_{i}$ , the method will replace ${u}_{i}\left(s\right)$ by a very negative number to ensure that an ${s}^{\ast}$ maximizing ${T}_{G}\left[u\left(s\right)\right]$ over $S$ also maximizes it over $\Omega $ .

Step 1. Enumerate the $s\in S$ (i.e., the cells of the payoff matrix) as $1,\cdots ,{\displaystyle \underset{j\in I}{\prod}{m}_{j}}$, and let $Q$ be a positive number much larger than any $\left|{u}_{i}\left(s\right)\right|$ in the matrix. Read a single player’s payoff at a time from each cell, in the order $1,\cdots ,n$. The length of the input is thus $N=n{\displaystyle \underset{j\in I}{\prod}{m}_{j}}$ numbers. As these numbers are read, if any ${u}_{i}\left(s\right)<{L}_{i}$, set ${u}_{i}\left(s\right)=-Q$ so that $s$ cannot maximize ${T}_{G}\left[u\left(s\right)\right]$. After all cells have been read and all replacements have been made, every $n$ numbers from the beginning of the input list represent an n-tuple $\left({u}_{1}\left(s\right),\cdots ,{u}_{n}\left(s\right)\right)$ for some $s\in S$, and every action profile $s\notin \Omega $ will have at least one ${u}_{i}\left(s\right)$ with value $-Q$.

Step 2. For each $i\in I$, compute ${M}_{i}=\underset{s\in \Omega}{\mathrm{max}}{u}_{i}\left(s\right)$, which is the maximum of the individual inputs $i,i+n,i+2n,\cdots ,i+n\left({\displaystyle \underset{j\in I}{\prod}{m}_{j}}-1\right)$, one or more of which have value at least ${L}_{i}$.

Step 3. For each of the ${\displaystyle \underset{j\in I}{\prod}{m}_{j}}$ cells, i.e., the joint actions $s\in S$, compute ${T}_{G}\left[u\left(s\right)\right]={\displaystyle \underset{i\in I}{\prod}\frac{1}{{M}_{i}-{u}_{i}\left(s\right)+1}}$.

Step 4. Find the action profiles ${s}^{*}\in S$ that maximize ${T}_{G}\left[u\left(s\right)\right]$ in Step 3.
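The four steps above can be sketched in Python as follows. The function name `find_gses` and the matrix representation are illustrative, not from the paper, and the sketch builds $\Omega $ and the maxima ${M}_{i}$ explicitly rather than via the $-Q$ sentinel, which yields the same maximizers of ${T}_{G}$.

```python
def find_gses(payoffs, L):
    """All greedy scalar equilibria of a finite game in strategic form.

    payoffs maps each action profile s (a tuple) to its payoff n-tuple
    (u_1(s), ..., u_n(s)); L holds the players' security levels.
    Illustrative sketch of Computational Procedure 3.7.
    """
    n = len(L)
    # Step 1: the feasible set Omega of profiles meeting every security level.
    omega = [s for s, us in payoffs.items()
             if all(us[i] >= L[i] for i in range(n))]
    # Step 2: M_i = max over Omega of player i's payoff.
    M = [max(payoffs[s][i] for s in omega) for i in range(n)]
    # Step 3: the greedy scalarization T_G for each feasible cell.
    T = {}
    for s in omega:
        t = 1.0
        for i in range(n):
            t /= M[i] - payoffs[s][i] + 1
        T[s] = t
    # Step 4: return every maximizer of T_G.
    best = max(T.values())
    return [s for s, t in T.items() if t == best]

# Hypothetical 2x2 game (both security levels equal 1): the GSE is the
# mutually cooperative cell.
payoffs = {(1, 1): (1, 1), (1, 2): (5, 0), (2, 1): (0, 5), (2, 2): (3, 3)}
print(find_gses(payoffs, L=[1, 1]))  # -> [(2, 2)]
```

Each payoff is visited a bounded number of times, consistent with the linear worst-case complexity established next.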

The worst-case time complexity of obtaining all GSEs for ${\Gamma}_{n}$ is now shown to be linear in the size of the input data. Recall that in the standard Random Access Machine (RAM) model [41], each addition, subtraction, multiplication, division, replacement, if statement, and call is considered to take one time step.

Result 3.8. The worst-case time complexity for obtaining all GSEs for ${\Gamma}_{n}$ is $O\left(N\right)$ for $N=n{\displaystyle \underset{j\in I}{\prod}{m}_{j}}$ .

Proof. Without loss of generality it may be assumed that all players have the same number of actions, so let ${m}_{i}=M,\forall i\in I$, in which case $N=n{M}^{n}$ is the size of the input. The maximum possible number of replacements in Step 1 is at most $n{M}^{n}$, with time complexity $O\left(n{M}^{n}\right)$. Finding all $n$ maxima ${M}_{i}$ in Step 2 has complexity $O\left(n{M}^{n}\right)$, since each maximum is taken over ${M}^{n}$ inputs [42]. With all ${M}_{i}$ computed, finding each term of the product in Step 3 for a given $s\in S$ takes $3n$ time steps, and forming the product takes $n-1$ further multiplications. Since there are ${M}^{n}$ joint actions $s\in S$, determining the product in Step 3 for each $s\in S$ has complexity $O\left(n{M}^{n}\right)$. Next, finding the GSEs by taking the maximum in Step 4 over all $s\in S$ has complexity $O\left({M}^{n}\right)$. It follows that all GSEs can be obtained in time complexity $O\left(n{M}^{n}\right)+O\left(n{M}^{n}\right)+O\left(n{M}^{n}\right)+O\left({M}^{n}\right)$, which is $O\left(n{M}^{n}\right)$, establishing the result.■

It is worth noting that the problem of determining whether pure NEs exist for a strategic-form game and then finding them is $O\left(N\mathrm{log}N\right)$ as shown by Gottlob et al. [43]. A similar result holds for BEs [44]. However, finding a mixed NE is a PPAD-complete problem [45], a different type of complexity class than NP-completeness but still strong evidence of intractability [46].

4. Compromise Scalar Equilibrium

Assume now that each player wants a fair payoff, or compromise, as compared with the other players. Again, because of Assumption U, a reasonable notion of fairness is proposed as a compromise scalar equilibrium (CSE) for ${\Gamma}_{n}$, which is defined in a manner similar to Definition 3.1. For $\Omega =\Psi $,

${m}_{i}=\underset{s\in \Omega}{\mathrm{min}}{u}_{i}\left(s\right)$ , and ${M}_{i}=\underset{s\in \Omega}{\mathrm{max}}{u}_{i}\left(s\right)$ , consider the aggregate utility function ${T}_{C}:u\left(\Omega \right)\to {R}^{1}$ for which

${T}_{C}\left[u\left(s\right)\right]={\displaystyle \underset{i=1}{\overset{n}{\prod}}\frac{{u}_{i}\left(s\right)-{m}_{i}+1}{{M}_{i}-{m}_{i}+1}},\quad s\in \Omega $ (2)

From the definitions of ${m}_{i}$ and ${M}_{i}$ it follows that $0<{T}_{C}\left[u\left(s\right)\right]\le 1$ for all $s\in \Omega $. The number 1 in the numerators of (2) prevents ${T}_{C}\left[u\left(s\right)\right]$ from being 0 if ${u}_{i}\left(s\right)={m}_{i}$ for some $i$, while the number 1 in the denominators prevents a division by 0 if ${m}_{i}={M}_{i}$. Maximizing ${T}_{C}\left[u\left(s\right)\right]$ over $\Omega $ requires that each ${u}_{i}\left({s}^{*}\right)$ be as close to ${M}_{i}$ as jointly possible. Thus the players with ${M}_{i}>{m}_{i}$ will receive payoffs in approximately the same percentile of their payoff ranges over the feasible action profiles, while players with ${M}_{i}={m}_{i}$ will receive ${M}_{i}$. For this decision criterion, the outcome can be construed as an equitable compromise between the players’ selfishness and unselfishness. A CSE differs substantially from the fairness equilibrium of Rabin [47] for two players and from other notions of fairness in Korth [48]. However, ${T}_{C}\left[u\left(s\right)\right]$ can be considered a discrete analog of the Nash product for the two-person bargaining problem [23]. More precisely, maximizing ${T}_{C}\left[u\left(s\right)\right]$ over $\Omega $ is a discrete version of maximizing

$f\left({x}_{1},\cdots ,{x}_{n}\right)={\displaystyle \underset{i\in I}{\prod}{x}_{i}}$ over the region $0<{x}_{i}\le 1,\forall i\in I$ , in which case $\frac{\partial f}{\partial {x}_{i}}>0$

for $0<{x}_{i}\le 1,\forall i\in I$ , and the maximum is at ${x}_{i}=1,\forall i\in I$ . The following definition extends one in Corley et al. [49] .

Definition 4.1. The pure strategy profile ${s}^{\ast}$ is a CSE for ${\Gamma}_{n}$ if and only if ${s}^{\ast}$ maximizes the aggregate utility function ${T}_{C}\circ u={T}_{C}\left[u\left(s\right)\right]$ over $\Omega $ .

${T}_{C}$ is easily shown to be strictly increasing on $u\left(\Omega \right)$ , much as in the proof of Result 3.2, so the next result follows directly from Lemma 2.7.

Result 4.2. If ${s}^{\ast}$ is a CSE for ${\Gamma}_{n}$ , then ${s}^{\ast}$ is Pareto maximal for ${\Gamma}_{n}$ .

Obviously a CSE always exists for ${\Gamma}_{n}$. A computational procedure to obtain all CSEs is a simple modification of Computational Procedure 3.7 in which any ${u}_{i}\left(s\right)<{L}_{i}$ is now replaced by ${u}_{i}\left(s\right)={m}_{i}-1$ and ${T}_{C}$ is used instead of ${T}_{G}$. This procedure again has worst-case time complexity $O\left(N\right)$ for $N=n{\displaystyle \underset{j\in I}{\prod}{m}_{j}}$.
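The CSE computation can be sketched in the same style as before. Since Figure 3 is not reproduced in this text, the block again assumes the standard PD payoffs of the earlier sketch; with them, the cooperative cell attains ${T}_{C}=1$, matching Example 4.3 below.

```python
# Assumed standard PD payoffs (Figure 3 is not reproduced in this text):
# a1/b1 = Defect, a2/b2 = Cooperate.
U = {('a1', 'b1'): (1, 1), ('a1', 'b2'): (5, 0),
     ('a2', 'b1'): (0, 5), ('a2', 'b2'): (3, 3)}
L = (1, 1)  # security levels, as in Example 3.5

# Feasible set Omega and the payoff ranges [m_i, M_i] over it.
omega = [s for s, us in U.items() if all(u >= l for u, l in zip(us, L))]
m = [min(U[s][i] for s in omega) for i in range(2)]
M = [max(U[s][i] for s in omega) for i in range(2)]

def T_C(s):
    """Compromise scalarization (2): product of shifted percentile positions."""
    t = 1.0
    for i in range(2):
        t *= (U[s][i] - m[i] + 1) / (M[i] - m[i] + 1)
    return t

cse = max(omega, key=T_C)
print(cse, T_C(cse))  # the cooperative profile attains the maximum T_C = 1.0
```

The only changes from the GSE sketch are the extra minima ${m}_{i}$ and the scalarization itself, mirroring the modification described above.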

Example 4.3. Consider again the two-person PD game ${G}_{2}$ of Figure 3 above. The associated scalar matrix for ${\Gamma}_{2}$ is shown in Figure 7. Thus $\left({a}_{2},{b}_{2}\right)$ is the unique CSE, which is not an NE but is a BE.

Example 4.4. Consider again the Hawk-Dove game of Figure 5, with the associated compromise scalar matrix in Figure 8. Now $\left({a}_{2},{b}_{2}\right)$ is the unique CSE for $1.7321<x<3$, which gives a larger range of $x$ for avoiding war by compromise than by greed, as in Example 3.6.

5. Satisficing Scalar Equilibrium

Aspiration levels are widely used in decision theory [50] [51] and will be used here in a satisficing scalar equilibrium (SSE) unrelated to the satisficing games of Stirling [52] . The SSE achieves four objectives.

1) An SSE gives each player $i\in I$ at least the targeted payoff level ${p}_{i}$ required for that player to accept a binding agreement for an action profile ${s}^{\ast}$. It is assumed that ${p}_{i}\ge {L}_{i},\forall i\in I$, with some ${p}_{k}>{L}_{k}$ to distinguish the aspiration levels from the security levels, which are always obtainable.

2) The SSE model focuses the players or arbitrator on the parameters ${p}_{i}$ and ${d}_{i}$ . For example, if all ${p}_{i}$ cannot be simultaneously satisfied, they can be modified by negotiation.

Figure 7. Scalar matrix of Example 4.3.

Figure 8. Scalar matrix of Example 4.4.

3) An SSE can give certain players higher relative payoffs than others. Each player $i\in I$ is assigned a weighting factor ${d}_{i}>0$. If ${d}_{j}>{d}_{k}$, then player $j$ will receive a higher payoff than player $k$ if possible.

4) For any ${d}_{i}>0,\forall i\in I$ , an SSE is Pareto optimal over the joint actions achieving the players’ aspiration levels.

Definition 5.1. For any $p=\left({p}_{1},\cdots ,{p}_{n}\right),d=\left({d}_{1},\cdots ,{d}_{n}\right)\in {R}^{n}$, where ${d}_{i}>0,\forall i\in I$, let ${T}_{W}\left[u\left(s\right)\right]={\displaystyle \underset{i\in I}{\sum}{d}_{i}{u}_{i}\left(s\right)}$, $s\in \Omega $, where

$\Omega =\left\{s\in S:{u}_{i}\left(s\right)\ge {p}_{i},i\in I\right\}$. Then the action profile ${s}^{\ast}$ is an SSE for the game ${\Gamma}_{n}$ if and only if ${s}^{\ast}$ maximizes the aggregate utility function ${T}_{W}\circ u={T}_{W}\left[u\left(s\right)\right]$ over $\Omega $.

Result 5.2. For all $p=\left({p}_{1},\cdots ,{p}_{n}\right),d=\left({d}_{1},\cdots ,{d}_{n}\right)\in {R}^{n}$ , where ${d}_{i}>0,\forall i\in I$ , any ${s}^{*}\in S$ solving

$\begin{array}{l}\underset{s\in S}{\text{maximize}}\text{\hspace{0.17em}}{\displaystyle \underset{i\in I}{\sum}{d}_{i}{u}_{i}\left(s\right)}\\ \text{subject}\text{\hspace{0.17em}}\text{to}\text{\hspace{0.17em}}{u}_{i}\left(s\right)\ge {p}_{i},\forall i\in I,\end{array}$ (3)

is an SSE and Pareto maximal for ${\Gamma}_{n}$ .

Proof. Let ${d}_{i}>0,\forall i\in I$, and let ${s}^{\prime},{s}^{\prime\prime}\in \Omega $ be such that ${s}^{\prime}$ dominates ${s}^{\prime\prime}$. Then it immediately follows that $0<{\displaystyle \underset{i\in I}{\sum}{d}_{i}\left[{u}_{i}\left({s}^{\prime}\right)-{u}_{i}\left({s}^{\prime\prime}\right)\right]}={\displaystyle \underset{i\in I}{\sum}{d}_{i}{u}_{i}\left({s}^{\prime}\right)}-{\displaystyle \underset{i\in I}{\sum}{d}_{i}{u}_{i}\left({s}^{\prime\prime}\right)}$, from which ${\displaystyle \underset{i\in I}{\sum}{d}_{i}{u}_{i}\left({s}^{\prime}\right)}>{\displaystyle \underset{i\in I}{\sum}{d}_{i}{u}_{i}\left({s}^{\prime\prime}\right)}$. Thus ${T}_{W}$ is strictly increasing on $u\left(\Omega \right)$, and the result follows from Lemma 2.7.■

One method of determining feasible aspiration levels ${p}_{i}$ satisfied by at least one $s\in S$ in Definition 5.1 is to select the weights ${d}_{i}>0,\forall i\in I$ , and then to

$\underset{s\in S}{\text{maximize}}{\displaystyle \underset{i\in I}{\sum}{d}_{i}}{u}_{i}\left(s\right)$ . The aspiration levels ${p}_{i}={u}_{i}\left({s}^{*}\right),\forall i\in I$ are then feasible. Moreover, this method could be construed as fair if ${d}_{i}=1,\forall i\in I$ . The approach suggests the following counterpart to Result 5.2.

Result 5.3. If ${s}^{\ast}$ is Pareto maximal for ${\Gamma}_{n}$ , then for any ${d}_{i}>0,\forall i\in I$ , ${s}^{\ast}$ is an SSE for the aspiration levels ${p}_{i}={u}_{i}\left({s}^{*}\right),\forall i\in I$ .

Proof. Let ${s}^{*}$ be Pareto maximal for ${\Gamma}_{n}$. Then ${s}^{*}$ is obviously feasible for (3) with ${p}_{i}={u}_{i}\left({s}^{*}\right),\forall i\in I$. Now suppose there exist ${{d}^{\prime}}_{i}>0,\forall i\in I$, for which ${s}^{*}$ does not solve (3). Then for some ${s}^{\prime}\in S$ satisfying ${u}_{i}\left({s}^{\prime}\right)\ge {p}_{i}={u}_{i}\left({s}^{*}\right),\forall i\in I$, it follows that ${\displaystyle \underset{i\in I}{\sum}{{d}^{\prime}}_{i}{u}_{i}\left({s}^{\prime}\right)}>{\displaystyle \underset{i\in I}{\sum}{{d}^{\prime}}_{i}{u}_{i}\left({s}^{*}\right)}$ and hence ${\displaystyle \underset{i\in I}{\sum}{{d}^{\prime}}_{i}\left[{u}_{i}\left({s}^{\prime}\right)-{u}_{i}\left({s}^{*}\right)\right]}>0$. But since ${{d}^{\prime}}_{i}>0,\forall i\in I$, and ${u}_{i}\left({s}^{\prime}\right)\ge {u}_{i}\left({s}^{*}\right),\forall i\in I$, it follows that ${u}_{k}\left({s}^{\prime}\right)>{u}_{k}\left({s}^{*}\right)$ for some $k$. Thus ${s}^{*}$ is not Pareto maximal for ${\Gamma}_{n}$, in contradiction to the assumption. It follows that for any ${d}_{i}>0,\forall i\in I$, ${s}^{*}$ solves (3) and is an SSE for the aspiration levels ${p}_{i}={u}_{i}\left({s}^{*}\right),\forall i\in I$.■

Example 5.4. Consider now the payoff matrix of Example 3.4 in Figure 1. Note that $\left({p}_{1},{p}_{2}\right)=\left(7,5\right)$ yields no feasible action profiles. When

$\left({p}_{1},{p}_{2}\right)=\left(5,4\right)$ , problem (3) has a solution, and the feasible $s\in \Omega $ are $\left({a}_{2},{b}_{3}\right)$

Figure 9. Scalar matrix for Example 3.4.

with payoff profile $\left(7,4\right)$, $\left({a}_{3},{b}_{2}\right)$ with $\left(7,4\right)$, and $\left({a}_{3},{b}_{3}\right)$ with $\left(6,6\right)$. Setting ${d}_{1}={d}_{2}=1$ in (3) gives the unique SSE ${s}^{*}=\left({a}_{3},{b}_{3}\right)$ with payoff profile $\left(6,6\right)$. For $\left({p}_{1},{p}_{2}\right)=\left(5,4\right)$ and $\left({d}_{1},{d}_{2}\right)=\left(0.7,0.3\right)$, however, the scalar matrix of ${T}_{W}\left[u\left(s\right)\right]$ values for the feasible payoffs is shown in Figure 9, where cells with × have infeasible payoffs. The unique SSE is $\left({a}_{2},{b}_{3}\right)$ with payoff profile $\left(7,4\right)$. Its scalar value is underlined.
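Since the feasible set for problem (3) is finite, the SSE selection reduces to maximizing the weighted sum over the feasible cells. The block below sketches this on the feasible profiles listed above (the full Figure 1 matrix is not reproduced here); the helper `sse_set` is illustrative and returns every maximizer, since distinct profiles can tie.

```python
# Feasible profiles for (p1, p2) = (5, 4) and their payoffs, as listed in
# Example 5.4 (the full Figure 1 matrix is not reproduced here).
feasible = {('a2', 'b3'): (7, 4), ('a3', 'b2'): (7, 4), ('a3', 'b3'): (6, 6)}

def sse_set(feasible, d):
    """All maximizers of T_W(s) = sum_i d_i * u_i(s) over the feasible cells."""
    vals = {s: sum(di * ui for di, ui in zip(d, us))
            for s, us in feasible.items()}
    best = max(vals.values())
    return [s for s, v in vals.items() if v == best]

print(sse_set(feasible, (1.0, 1.0)))  # equal weights select (a3, b3), payoffs (6, 6)
```

Weights such as $\left(0.7,0.3\right)$ instead tilt the selection toward the profiles with payoff profile $\left(7,4\right)$, as in the second case of the example.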

For appropriate aspiration levels, an SSE always exists for ${\Gamma}_{n}$. Computational Procedure 3.7 can again be modified to determine whether an SSE exists and, if so, to obtain them all. To do so, ${L}_{i}$ is replaced by ${p}_{i}$ and ${T}_{G}$ by ${T}_{W}$ for a given set of ${d}_{i}>0$. Whenever ${u}_{i}\left(s\right)<{p}_{i}$, the replacement ${u}_{i}\left(s\right)=-Q$ in the maximization of (3) ensures that $s$ will not be an SSE. This modified procedure also has worst-case time complexity $O\left(N\right)$ for $N=n{\displaystyle \underset{j\in I}{\prod}{m}_{j}}$.

6. Axiomatic Considerations

A set of standard axioms for bargaining solutions will now be applied to the semi-cooperative models studied here. Nash [23] proposed four axioms stated below as Axioms 1 - 4 for the game ${\Gamma}_{n}=\langle I,{\left({S}_{i}\right)}_{i\in I},{\left({u}_{i}\right)}_{i\in I},\Omega ,T\rangle $ . Axiom 5 was formulated by Kalai [37] . We add Axiom 6, which seems reasonable.

Axiom 1 (Pareto Efficiency). If ${s}^{*}\in S$ is an SE for ${\Gamma}_{n}=\langle I,{\left({S}_{i}\right)}_{i\in I},{\left({u}_{i}\right)}_{i\in I},\Omega ,T\rangle $ , then $u\left({s}^{*}\right)=\left({u}_{1}\left({s}^{*}\right),\cdots ,{u}_{n}\left({s}^{*}\right)\right)$ is Pareto optimal over $\Omega $ . In other words, no player’s payoff is better for another strategy $s\in \Omega $ without at least one other player’s being worse.

Axiom 2 (Symmetry). If the players are indistinguishable, the solution will not discriminate between them. The names of the players do not matter. In other words, if ${s}^{*}=\left({s}_{1}^{*},\cdots ,{s}_{n}^{*}\right)\in S$ is an SE for ${\Gamma}_{n}$ with a feasible set $\Omega $ and if the indices $1,\cdots ,n$ are permuted in $\left({s}_{1}^{*},\cdots ,{s}_{n}^{*}\right)$, then the resulting strategy profile is an SE for the permuted game ${{\Gamma}^{\prime}}_{n}$ in which the indices are similarly permuted.

Axiom 3 (Invariance to Affine Transformations). If ${s}^{*}\in S$ is an SE for ${\Gamma}_{n}=\langle I,{\left({S}_{i}\right)}_{i\in I},{\left({u}_{i}\right)}_{i\in I},\Omega ,T\rangle $ , then ${s}^{*}$ is an SE for the game ${{\Gamma}^{\prime}}_{n}=\langle I,{\left({S}_{i}\right)}_{i\in I},{\left({\alpha}_{i}{u}_{i}\left(s\right)+{\beta}_{i}\right)}_{i\in I},\Omega ,T\rangle $ for any real numbers ${\alpha}_{i}>0$ and ${\beta}_{i}$ . In other words, an affine transformation ${\alpha}_{i}{u}_{i}\left(s\right)+{\beta}_{i},{\alpha}_{i}>0,\forall i\in I$ of the utilities and aspiration levels will not change the SEs.

Axiom 4 (Independence of Irrelevant Alternatives). If ${s}^{*}\in S$ is an SE for ${\Gamma}_{n}=\langle I,{\left({S}_{i}\right)}_{i\in I},{\left({u}_{i}\right)}_{i\in I},\Omega ,T\rangle $ with feasible set $\Omega $ and if ${s}^{*}\in \Lambda \subset \Omega $ , then ${s}^{*}$ is an SE for ${{\Gamma}^{\prime}}_{n}=\langle I,{\left({S}_{i}\right)}_{i\in I},{\left({u}_{i}\right)}_{i\in I},\Lambda ,T\rangle $ with feasible set $\Lambda $ .

Axiom 5 (Resource Monotonicity). If ${s}^{*}\in S$ is an SE for ${\Gamma}_{n}=\langle I,{\left({S}_{i}\right)}_{i\in I},{\left({u}_{i}\right)}_{i\in I},\Omega ,T\rangle $ with feasible set $\Omega $ and if ${s}^{**}\in S$ is an SE for a ${{\Gamma}^{\prime}}_{n}=\langle I,{\left({S}_{i}\right)}_{i\in I},{\left({u}_{i}\right)}_{i\in I},\Lambda ,T\rangle $ with feasible set $\Lambda \supset \Omega $ , then ${u}_{i}\left({s}^{*}\right)\le {u}_{i}\left({s}^{**}\right),\forall i\in I$ . In other words, if additional feasible action profiles are considered, a new solution should be at least as good for all agents as the previous solution.

Axiom 6 (Individual Rationality). No player would agree to a payoff lower than his guaranteed security level, nor would a reasonable arbitrator impose such a payoff.

The Nash two-person bargaining solution [23] satisfies the above axioms except Axiom 5 if the disagreement level is taken as a type of security level. The two-person bargaining solution of Kalai [37] satisfies all the axioms except Axiom 4 under the same assumption. However, the solution of Kalai and Smorodinsky [36] satisfies Axioms 1 - 3 but not Axioms 4 - 5, again under the same assumption.

The SEs of this paper satisfy Axioms 1, 2, 4, and 6 but not Axioms 3 and 5. However, an SSE is easily seen to be invariant under affine transformations of the form $\alpha {u}_{i}\left(s\right)+{\beta}_{i},\alpha >0,\forall i\in I$, where the same $\alpha $ is applied to every ${u}_{i}$, provided the aspiration levels are also transformed. Moreover, a GSE and a CSE would be invariant were it not for the 1s added to prevent a numerator or denominator from being 0. It is likely that useful transformations $T$ other than ${T}_{G},{T}_{C}$, and ${T}_{W}$ can be formulated to satisfy Axioms 1 - 4. On the other hand, satisfying Axiom 5 presents a difficulty: aggregating all individual utilities into some $T\left[u\left(s\right)\right]$ does not guarantee that the monotonicity requirement of Axiom 5 on the individual utilities ${u}_{i}\left(s\right),\forall i\in I$, will be satisfied.

7. Conclusion

The normative SE approach presented here extends classical bargaining models to semi-cooperative games with alternative decision criteria for the players or an arbitrator. In effect, this paper attempts to reduce the negotiations of semi-cooperative games to the new approach of selecting an appropriate utility function $T$, of which three were offered. It also restricts previous work to the pure strategies actually required for implementation. In doing so, an SE can be obtained quickly as the maximum of a finite number of scalar values, as opposed to the computational effort required to determine mixed equilibria. Future work should focus on formulating utility functions $T$ that model decision criteria besides the greedy, compromise, and satisficing ones presented in this paper. The next step would then be to form an aggregate utility function $T$ by combining individual utility functions for players with different notions of rationality.

Cite this paper

Corley, H.W. (2017) Normative Utility Models for Pareto Scalar Equilibria in n-Person, Semi-Cooperative Games in Strategic Form. Theoretical Economics Letters, 7, 1667-1686. https://doi.org/10.4236/tel.2017.76113

References

- 1. Von Neumann, J. and Morgenstern, O. (1944) Theory of Games and Economic Behavior. Princeton University Press, Princeton.
- 2. Rapoport, A. (1989) Decision Theory and Decision Behaviour: Normative and Descriptive Approaches. Springer-Verlag, Berlin.
- 3. Savage, L. (1954) The Foundations of Statistics. John Wiley and Sons, New York.
- 4. Aumann, R.J. (1985) What Is Game Theory Trying to Accomplish? In: Arrow, K. and Honkapohja, S., Eds., Frontiers of Economics, Basil Blackwell, Oxford, 5-46.
- 5. Myerson, R. (1991) Game Theory: Analysis of Conflict. Harvard University Press, Harvard.
- 6. Maschler, M., Solan, E. and Zamir, S. (2013) Game Theory. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511794216
- 7. Nash, J. (1950) Equilibrium Points in n-Person Games. Proceedings of the National Academy of Science, 36, 48-49. https://doi.org/10.1073/pnas.36.1.48
- 8. Nash, J. (1951) Non-Cooperative Games. The Annals of Mathematics, 54, 286-295. https://doi.org/10.2307/1969529
- 9. Serrano, R. (2005) Fifty Years of the Nash Program, 1953-2003. Investigaciones Económicas, 29, 219-258.
- 10. Poundstone, W. (2011) Prisoner’s Dilemma. Penguin Random House, New York.
- 11. Beckenkamp, M. (2006) A Game-Theoretic Taxonomy of Social Dilemmas. Central European Journal of Operations Research, 14, 337-353. https://doi.org/10.1007/s10100-006-0008-5
- 12. Kollock, P. (1998) Social Dilemmas: The Anatomy of Cooperation. Annual Review of Sociology, 24, 183-214. https://doi.org/10.1146/annurev.soc.24.1.183
- 13. Berge, C. (1957) Théorie Générale des Jeux à n Personnes (General Theory of Games for n Players). Gauthier-Villars, Paris.
- 14. Zhukovskiy, V.I. (1985) Some Problems of Nonantagonistic Differential Games. In: Kenderov, P., Ed., Matematicheskie Metody v Issledovanie Operacij, Bulgarian Academy of Sciences, Sofia, 103-195. (In Russian)
- 15. Colman, A.M., Körner, T.W., Musy, O. and Tazdaï, T. (2011) Mutual Support in Games: Some Properties of Berge Equilibria. Journal of Mathematical Psychology, 55, 166-175. https://doi.org/10.1016/j.jmp.2011.02.001
- 16. Corley, H.W. and Kwain, P. (2014) A Cooperative Dual to the Nash Equilibrium for Two-Person Prescriptive Games. Journal of Applied Mathematics, 2014, Article ID: 806794, 4 p.
- 17. Corley, H.W. (2015) A Mixed Cooperative Dual to the Nash Equilibrium. Game Theory, 2015, Article ID: 647246, 7 p.
- 18. Chakravarty, S.R., Mitra, M. and Sarkar, P. (2015) A Course on Cooperative Game Theory. Cambridge University Press, Cambridge.
- 19. Shapley, L. (1953) A Value for n-Person Games. In: Kuhn, H. and Tucker, A., Eds., Contributions to the Theory of Games II, Princeton University Press, Princeton, 307-317. https://doi.org/10.1515/9781400881970-018
- 20. Gillies, D.B. (1959) Solutions to General Zero-Sum Games. In: Tucker, A. and Luce, R., Eds., Contributions to the Theory of Games IV, Princeton University Press, Princeton, 47-85.
- 21. Davis, M. and Maschler, M. (1965) The Kernel of a Cooperative Game. Naval Research Logistics Quarterly, 12, 223-259. https://doi.org/10.1002/nav.3800120303
- 22. Schmeidler, D. (1969) The Nucleolus of a Characteristic Function Game. SIAM Journal of Applied Mathematics, 17, 1163-1170. https://doi.org/10.1137/0117107
- 23. Nash, J. (1950) The Bargaining Problem. Econometrica, 18, 155-162. https://doi.org/10.2307/1907266
- 24. Nash, J. (1953) Two-Person Cooperative Games. Econometrica, 21, 128-140. https://doi.org/10.2307/1906951
- 25. Raiffa, H. (1953) Arbitration Schemes for Generalized Two Person Games. In: Kuhn, H. and Tucker, A., Eds., Contributions to the Theory of Games II, Princeton University Press, Princeton, 361-387. https://doi.org/10.1515/9781400881970-022
- 26. Rosenthal, R. (1976) An Arbitration Model for Strategic-Form Games. Mathematics of Operations Research, 1, 82-88. https://doi.org/10.1287/moor.1.1.82
- 27. Kalai, E. and Rosenthal, R.W. (1978) Arbitration of Two-Party Disputes under Ignorance. International Journal of Game Theory, 7, 65-72. https://doi.org/10.1007/BF01753235
- 28. Bacharach, M., Gold, N. and Sugden R. (2006) Beyond Individual Choice: Teams and Frames in Game Theory. Princeton University Press, Princeton.
- 29. Hart, S. and Mas-Colell, A. (2010) Bargaining and Cooperation in Strategic Form Games. Journal of the European Economic Association, 8, 7-33. https://doi.org/10.1111/j.1542-4774.2010.tb00493.x
- 30. Diskin, A., Koppel, M. and Samet, D. (2011) Generalized Raiffa Solutions. Games and Economic Behavior, 73, 452-445. https://doi.org/10.1016/j.geb.2011.04.002
- 31. Cao, Z. (2013) Bargaining and Cooperation in Strategic Form Games with Suspended Realizations of Threats. Social Choice and Welfare, 41, 337-358. https://doi.org/10.1007/s00355-012-0686-y
- 32. Kalai, A. and Kalai, E. (2013) Cooperation in Strategic Games Revisited. The Quarterly Journal of Economics, 128, 917-966. https://doi.org/10.1093/qje/qjs074
- 33. Rubinstein, A. (1991) Comments on the Interpretation of Game Theory. Econometrica, 59, 909-924. https://doi.org/10.2307/2938166
- 34. Harsanyi, J. (1973) Games with Randomly Disturbed Payoffs: A New Rationale for Mixed-Strategy Equilibrium Points. International Journal of Game Theory, 2, 1-23. https://doi.org/10.1007/BF01737554
- 35. Aumann, R.J. and Brandenburger, A. (1995) Epistemic Conditions for Nash Equilibrium. Econometrica, 63, 1161-1180. https://doi.org/10.2307/2171725
- 36. Kalai, E. and Smorodinsky, M. (1975) Other Solutions to Nash’s Bargaining Problem. Econometrica, 43, 513-518. https://doi.org/10.2307/1914280
- 37. Kalai, E. (1977) Proportional Solutions to Bargaining Situations: Intertemporal utility Comparisons. Econometrica, 45, 1623-1630. https://doi.org/10.2307/1913954
- 38. Aumann, R.J. (1959) Acceptable Points in General Cooperative n-Person Games. In: Tucker, A. and Luce, R., Eds., Contributions to the Theory of Games IV, Princeton University Press, Princeton. https://doi.org/10.1515/9781400882168-018
- 39. Leyton-Brown, K. and Shoham, Y. (2008) Essentials of Game Theory: A Concise, Multidisciplinary Introduction. Morgan & Claypool, Williston.
- 40. Barbera, S., Hammond, P. and Seidl, C., Eds. (1999) Handbook of Utility Theory— Volume 1. Principles. Springer-Verlag, Berlin.
- 41. Skiena, S.S. (2008) The Algorithm Design Manual. 2nd Edition, Springer-Verlag, Berlin. https://doi.org/10.1007/978-1-84800-070-4
- 42. Blum, M., Floyd, R.W., Pratt, V.R., Rivest, R.L. and Tarjan, R.E. (1973) Time Bounds for Selection. Journal of Computer and System Sciences, 7, 448-461. https://doi.org/10.1016/S0022-0000(73)80033-9
- 43. Gottlob, G., Greco, G. and Scarcello, F. (2005) Pure Nash Equilibria: Hard and Easy Games. Journal of Artificial Intelligence Research, 24, 357-406.
- 44. Corley, H.W. and Kwain, P. (2015) An Algorithm for Computing All Berge Equilibria. Game Theory, 2015, Article ID: 862842, 2 p.
- 45. Papadimitriou, C. (1994) On the Complexity of the Parity Argument and Other Inefficient Proofs of Existence. Journal of Computer and System Sciences, 48, 498-532. https://doi.org/10.1016/S0022-0000(05)80063-7
- 46. Nisan, N., Roughgarden, T., Tardos, E. and Vazirani, V., Eds. (2007) Algorithmic Game Theory. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511800481
- 47. Rabin, M. (1993) Incorporating Fairness into Game Theory and Economics. The American Economic Review, 83, 1281-1302.
- 48. Korth, C. (2009) Fairness in Bargaining and Markets. Springer-Verlag, Berlin. https://doi.org/10.1007/978-3-642-02253-1
- 49. Corley, H.W., Charoensri, S. and Engsuwan, N. (2014) A Scalar Compromise Equilibrium for n-Person Prescriptive Games. Natural Science, 6, 1103-1107. https://doi.org/10.4236/ns.2014.613098
- 50. Selten, R. (1998) Aspiration Adaptation Theory. Journal of Mathematical Psychology, 42, 191-214. https://doi.org/10.1006/jmps.1997.1205
- 51. Schniederjans, M. (1995) Goal Programming: Methodology and Applications. Springer-Verlag, Berlin. https://doi.org/10.1007/978-1-4615-2229-4
- 52. Stirling, W. (2002) Satisficing Equilibria: A Non-Classical Theory of Games and Decisions. Autonomous Agents and Multi-Agent Systems, 5, 305-328. https://doi.org/10.1023/A:1015556407380