Stat4DS / Homework 02

1. Background: Cooperative Games
1.1 Generalities
A game in the sense of game theory is an abstract mathematical model of a scenario in which agents of some sort interact. It
is abstract in the sense that irrelevant detail is omitted: the game aims to capture only those features of the scenario that are
relevant to the decisions that must be made by players within the game.
Agents can be anything depending on the application: typically real humans (e.g. investors, political parties, etc.), but also
signals/towers in a communication network, genes in a biological setup or even variables in a predictive/learning problem.
The form of games we are interested in is the most basic and widely-studied model of cooperative games.
More specifically, our games are populated by a (non-empty) set P = {1, . . . , p} of agents: the players of the game. A
coalition C is simply any subset of the player-set P. The grand coalition is the set P of all players.
All this said, let’s define what we mean by a characteristic function game. In the following, with 2^P we will denote the power-set, that is, the set of all subsets, of P.
Definition 1. A characteristic function game G is given by a pair (P, ν), where P = {1, . . . , p} is a finite, non-empty set of
agents, and ν : 2^P → R is a characteristic function, which maps each coalition C ⊆ P to a real number ν(C).
The number ν(C) is usually referred to as the value of the coalition C.
There are many possible interpretations for the characteristic function, but note right away that characteristic function games
assign values to a coalition as a whole, and not to its individual members. That is, the model behind a characteristic function
game does not tell you how the coalition value ν(C) should be divided among the members of C.
In fact, the question of how to divide the coalition value is a fundamental research topic in cooperative game theory.¹

¹ An implicit assumption here: the coalition value ν(C) can be divided among the members of C in any way that the members of C choose. Formally, games with this property are said to be transferable utility games (TU games).
Notice also that, from a computational point of view, the naïve representation of a characteristic function game that consists in explicitly listing every coalition C together with the associated value ν(C), being of the order 2^p in size, is not practical
unless the number of players is very small. On the other hand, most real-life interactions admit an encoding of size polynomial
in p; such an encoding provides an implicit description of the characteristic function.
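To make the size argument concrete, here is a minimal Python sketch (not part of the original text): it builds the explicit 2^p-entry table for a toy characteristic function and contrasts it with the implicit encoding given by the function itself. The helper names explicit_table and nu_toy, and the toy game, are our own illustrative choices.

    from itertools import combinations

    def explicit_table(players, nu):
        """Naive representation: list every coalition C together with its value nu(C).

        The resulting dictionary has 2^p entries, which quickly becomes
        impractical as the number of players p grows."""
        table = {}
        for k in range(len(players) + 1):
            for C in combinations(players, k):
                table[frozenset(C)] = nu(frozenset(C))
        return table

    # Implicit encoding: an ordinary function of the coalition, whose description
    # stays small no matter how many players there are (toy choice: squared size).
    def nu_toy(C):
        return len(C) ** 2

    players = [1, 2, 3]
    print(len(explicit_table(players, nu_toy)))  # 2^3 = 8 coalitions listed explicitly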
As an example, consider modeling the decision-making process in voting bodies.
Example 1. (weighted voting games)
• A country has 101 seats in its parliament, and each representative belongs to one of four parties: Liberal (L, 40 seats), Moderate (M, 22 seats), Conservative (C, 30 seats), or Green (G, 9 seats).
• The parliament needs to decide how to allocate “1 billion euros” of discretionary spending.
• The decision is made by a simple majority vote, and we assume that all representatives vote along the party lines.
• Parties can form coalitions; a coalition has value “1 billion euros” IF it can win the vote no matter what the other
parties do, and value 0 otherwise.
Hence, after some thinking, we see that the associated 4-player game has P = {L, M, C, G} and characteristic function

    ν(S) =
      0       if #S ≤ 1, or #S = 2 and G ∈ S,
      10^9    otherwise,

where #S denotes the cardinality of the coalition S. Indeed, any coalition containing at least two of L, M, C controls at least 22 + 30 = 52 ≥ 51 seats and thus wins the vote, whereas a single party, possibly joined by G, controls at most 40 + 9 = 49 seats and cannot.
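As a sanity check on the formula above, here is a small Python sketch (our own, with illustrative names such as seats and nu_voting): it encodes the game implicitly through the seat counts and verifies that this agrees with the closed form on all 2^4 = 16 coalitions.

    from itertools import combinations

    seats = {"L": 40, "M": 22, "C": 30, "G": 9}   # seat counts from Example 1
    MAJORITY = 51                                  # simple majority of 101 seats
    VALUE = 10**9                                  # "1 billion euros"

    def nu_voting(S):
        """Implicit encoding: a coalition is worth 10^9 iff it controls a majority of seats."""
        return VALUE if sum(seats[p] for p in S) >= MAJORITY else 0

    def nu_closed_form(S):
        """Closed form given above: the losing coalitions are the empty set,
        the singletons, and the two-party coalitions containing G."""
        S = set(S)
        return 0 if len(S) <= 1 or (len(S) == 2 and "G" in S) else VALUE

    parties = list(seats)
    for k in range(len(parties) + 1):
        for S in combinations(parties, k):
            assert nu_voting(S) == nu_closed_form(S)
    print("closed form matches the seat-count definition on all 16 coalitions")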
1.2 Solution Concepts: the Shapley Value
A key problem in game theory is to try to understand what the outcomes of a game will be. In our cooperative framework,
an outcome of a game G consists of two parts:
1. a coalition structure, that is, a partition CS = {C1, . . . , Cm} of the player-set P = {1, . . . , p} into coalitions;
2. a payoff vector x = (x1, . . . , xp) ∈ R^p for a coalition structure CS = {C1, . . . , Cm}, which distributes the value ν(Cj) of each coalition among its members. Any legit payoff vector x must satisfy the following natural conditions (a quick programmatic check is sketched right after the list):
• xj ≥ 0 for all j ∈ P;
• Σ_{r ∈ Cj} xr ≤ ν(Cj) for any j ∈ {1, . . . , m}.
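Here is a minimal sketch of such a check, assuming coalitions are given as Python sets and payoffs as a dictionary; the function name is_valid_outcome and the toy usage below are our own, not anything prescribed by the text.

    def is_valid_outcome(coalition_structure, x, nu, players):
        """Check that (CS, x) is a legit outcome: CS partitions the player set,
        payoffs are non-negative, and payoffs are feasible within each coalition."""
        members = [j for C in coalition_structure for j in C]
        if sorted(members) != sorted(players):      # CS must be a partition of P
            return False
        if any(x[j] < 0 for j in players):          # x_j >= 0 for every player
            return False
        return all(sum(x[j] for j in C) <= nu(C)    # payoffs within C_j cannot exceed nu(C_j)
                   for C in coalition_structure)

    # Toy usage with the weighted voting game of Example 1 (restated for self-containment).
    seats = {"L": 40, "M": 22, "C": 30, "G": 9}
    nu_voting = lambda S: 10**9 if sum(seats[p] for p in S) >= 51 else 0

    CS = [{"L", "M"}, {"C", "G"}]
    x = {"L": 5e8, "M": 5e8, "C": 0.0, "G": 0.0}
    print(is_valid_outcome(CS, x, nu_voting, list(seats)))  # True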
Now, any partition of agents into coalitions and any payoff vector that respects this partition corresponds to a plausible
outcome of a characteristic function game. However, not all outcomes are equally desirable.
For instance, if all agents contribute equally to the value of a coalition, a payoff vector that allocates the entire payoff to just
one of the agents is less appealing than the one that shares the profits equally among all agents.
Similarly, an outcome that pushes all agents to work together is (typically) preferable to an outcome that some of the agents
want to deviate from.
More broadly, one can evaluate outcomes according to two criteria: fairness (i.e., how well each agent’s payoff reflects his
contribution), and stability (i.e., what are the incentives for the agents to stay in the coalition structure). These criteria give
rise to two families of solution concepts, whose most notable members are the Shapley Value and the Core, respectively.
Given its broad success for defining variable importance in supervised learning problems (see here, here, here, here, and also here, just to mention a few), in the following we will focus on the former, the Shapley Value, forged in the ’50s by the one and only Lloyd S. Shapley.
The Shapley value is a solution concept that is usually formulated with respect to the grand coalition: it defines a way of
distributing the value ν(P) that could be obtained by the grand coalition, and it is based on the intuition that the payment
that each agent receives should be proportional to his contribution.
Idea v1.0: a naïve implementation of this idea would be to pay each agent according to how much he increases the value of
the coalition of all other players when he joins it, i.e., set the payoff of player j to ν(P) − ν(P\{j}).
Problem: Under this payoff scheme the total payoff assigned to the agents may differ from the value of the grand coalition.
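A two-player toy game (our own illustration, not taken from the text) makes the problem concrete: with ν({1}) = ν({2}) = 0 and ν({1, 2}) = 1, the naïve scheme pays each player 1 − 0 = 1, for a total of 2 ≠ ν(P) = 1.

    # Toy 2-player game: only the grand coalition has value (illustrative numbers).
    nu = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({1, 2}): 1}

    P = frozenset({1, 2})
    naive = {j: nu[P] - nu[P - {j}] for j in P}   # Idea v1.0 payoffs
    print(sum(naive.values()), nu[P])             # 2 vs. 1: the scheme over-distributes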
Idea v2.0: we can fix an ordering of the agents and pay each agent according to how much he contributes to the coalition
formed by his predecessors in this ordering: Agent 1 receives ν({1}), Agent 2 receives ν({1, 2}) − ν({1}), and so on.
It is easy to see that this payoff scheme distributes the value of the grand coalition among the agents.
Problem: agents that play symmetric roles in the game may receive different payoffs depending on their position in the order.
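Continuing the same toy game (again our own illustration): players 1 and 2 are perfectly symmetric, yet the marginal-vector scheme of Idea v2.0 rewards whichever of them happens to come second in the ordering.

    # Same toy 2-player game: nu is 1 for the pair and 0 otherwise.
    nu = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({1, 2}): 1}

    def marginal_vector(order):
        """Idea v2.0: pay each agent its contribution to the coalition of its predecessors."""
        x, coalition = {}, frozenset()
        for j in order:
            x[j] = nu[coalition | {j}] - nu[coalition]
            coalition |= {j}
        return x

    print(marginal_vector((1, 2)))   # player 2 gets everything
    print(marginal_vector((2, 1)))   # player 1 gets everything
    # Averaging the two vectors gives (1/2, 1/2), which is exactly Shapley's fix.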
Shapley’s insight: the dependence on the agents ordering can be eliminated by averaging over all possible orderings
(or permutations) of the players.
Now, to formally define the Shapley value, we need some additional notation.
1. Fix a characteristic function game G = (P, ν) and let ΠP be the set of all permutations of P. Given a specific permutation
π ∈ ΠP , we denote by Sπ(j) the set of all predecessors of player j in π, i.e., we set Sπ(j) = {r ∈ P such that π(r) < π(j)}.
For example, if P = {1, 2, 3}, then

    ΠP = { (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1) };

moreover, if π = (3, 1, 2), then Sπ(3) = ∅, Sπ(1) = {3} and Sπ(2) = {1, 3}.
2. The marginal contribution of an agent j with respect to a permutation π in a game G = (P, ν) is denoted by ∆^G_π(j) and measures how much player j increases the value of the coalition consisting of its predecessors in π. More specifically:

    ∆^G_π(j) = ν( Sπ(j) ∪ {j} ) − ν( Sπ(j) ).     (1)
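The Shapley value of player j is then obtained by averaging the marginal contributions in (1) over all p! orderings, as anticipated above. The brute-force Python sketch below (our own code, with illustrative names, feasible only for small p) first reproduces the predecessor sets of the worked example and then applies the averaging to the weighted voting game of Example 1.

    from itertools import permutations

    def predecessors(pi, j):
        """S_pi(j): the players that appear before j in the ordering pi."""
        return frozenset(pi[:pi.index(j)])

    # Quick check against the worked example with P = {1, 2, 3} and pi = (3, 1, 2):
    pi = (3, 1, 2)
    print(predecessors(pi, 3), predecessors(pi, 1), predecessors(pi, 2))
    # frozenset()  frozenset({3})  frozenset({1, 3})

    def shapley_values(players, nu):
        """Brute-force Shapley value: average the marginal contribution (1) of each
        player over all permutations of the player set."""
        perms = list(permutations(players))
        phi = {j: 0.0 for j in players}
        for pi in perms:
            for j in pi:
                S = predecessors(pi, j)
                phi[j] += nu(S | {j}) - nu(S)
        return {j: phi[j] / len(perms) for j in phi}

    # Weighted voting game of Example 1 (restated so the snippet is self-contained).
    seats = {"L": 40, "M": 22, "C": 30, "G": 9}
    nu = lambda S: 10**9 if sum(seats[p] for p in S) >= 51 else 0

    print(shapley_values(list(seats), nu))
    # G never turns a losing coalition into a winning one, so its Shapley value is 0;
    # L, M and C play symmetric roles and each receives 10^9 / 3.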