N-INRG and B-INRG Classes¶
This module contains the classes used for the game-theoretical analysis of interdependent network restoration.
- class gameclasses.BayesianGame(L, net, v_r, act_rduc=None)¶
This class models a Bayesian restoration game for a given time step and finds Bayes Nash equilibria. This class inherits from NormalGame.
- fundamental_types¶
List of fundamental types of players. Currently, it consists of two types:
Cooperative (C) player, which prefers cooperative and partially cooperative actions.
Non-cooperative (N) player, which prefers non-cooperative actions.
- Type
list
- states¶
Dictionary of the state(s) of the game.
- Type
dict
- states_payoffs¶
Dictionary of payoff matrices, one per state of the game.
- Type
dict
- types¶
Dictionary of players’ type(s).
- Type
dict
- bayesian_players¶
List of Bayesian players of the game, comprising combinations of players and their types.
- Type
list
- build_bayesian_game(save_model=None, suffix='')¶
This function constructs the Bayesian restoration game.
- Parameters
save_model (str, optional) – Directory to which the game should be written as a .txt file. The default is None, which prevents writing to file.
suffix (str, optional) – An optional suffix that is added to the file name. The default is ‘’.
- Returns
None.
- create_bayesian_players()¶
This function creates one player for each combination of a player and its types.
- Returns
- Return type
None.
- label_actions(action)¶
This function returns the type of an input action in accordance with
fundamental_types
: A ‘C’ (cooperative) action consists of one or several actions, some of which are relevant, i.e., repairing damaged nodes on which other players depend. A ‘C’ action may include an ‘OA’ action as well, but it cannot be only a single ‘OA’ action.
An ‘N’ (non-cooperative) action is either ‘NA’ or ‘OA’.
- Parameters
action (tuple) – An action.
- Returns
label – The action type.
- Return type
str
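A minimal sketch of this labeling rule, assuming a hypothetical encoding in which an action is a tuple of elementary moves and `dependee_nodes` is the set of damaged nodes other players depend on (the function name and arguments are illustrative, not the module’s internals):

```python
def label_action(action, dependee_nodes):
    """Label an action 'C' (cooperative) or 'N' (non-cooperative).

    action: tuple of elementary moves; 'NA' means no action and 'OA'
    aggregates repairs that do not affect other players.
    dependee_nodes: set of damaged nodes on which other players depend.
    """
    # a lone 'NA' or 'OA' is non-cooperative by definition
    if action in (('NA',), ('OA',)):
        return 'N'
    # cooperative if at least one move repairs a dependee node
    repairs = [a for a in action if a not in ('NA', 'OA')]
    return 'C' if any(a in dependee_nodes for a in repairs) else 'N'
```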
- set_states()¶
This function sets the states based on
fundamental_types
and, for each state, computes the payoff matrix of all players by doubling the payoff of actions that are not consistent with the player’s type.
Todo
Games: refine how to reduce the importance of actions that are not consistent with the player’s type.
- Returns
- Return type
None.
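The doubling rule can be sketched as follows; the data layout (profiles mapped to per-player cost lists) and the `labels` callback are assumptions for illustration, since payoffs here are costs and doubling a cost penalizes type-inconsistent actions:

```python
def doubled_state_payoffs(payoffs, types, labels):
    """Payoff matrix for one state of the Bayesian game: each player's
    cost is doubled for any action whose label disagrees with the
    player's type in that state (illustrative data layout).

    payoffs: {action_profile: [cost_player_0, cost_player_1, ...]}
    types:   list of each player's type in this state, 'C' or 'N'
    labels:  function mapping a single player's action to its label
    """
    state = {}
    for profile, costs in payoffs.items():
        state[profile] = [
            2 * c if labels(a) != t else c  # penalize inconsistent actions
            for a, c, t in zip(profile, costs, types)
        ]
    return state
```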
- set_types(beliefs)¶
This function sets players’ types based on the beliefs it receives. Currently, it can interpret the following signals:
Uninformed (‘U’): Players do not have any information about other players, and assign equal probability to other players’ types.
False consensus (‘F’): players exhibit the false consensus effect (consensus bias), believing that other players are likely to share their own type.
Inverse false consensus (‘I’): players exhibit the inverse false consensus effect, believing that other players are unlikely to share their own type.
- Parameters
beliefs (dict) – The collection of beliefs for all players.
- Returns
- Return type
None.
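A sketch of how the three signals could translate into type probabilities. The bias strength of 0.75 and the function signature are assumptions for illustration; the module’s actual probabilities for ‘F’ and ‘I’ may differ:

```python
def assign_type_beliefs(signal, own_type, types=('C', 'N')):
    """Probability a player assigns to each possible type of another
    player, given a belief signal (illustrative values).

    'U': uninformed, uniform over all types
    'F': false consensus, weight shifted toward the player's own type
    'I': inverse false consensus, weight shifted away from the own type
    """
    n = len(types)
    if signal == 'U':
        return {t: 1.0 / n for t in types}
    bias = 0.75  # assumed bias strength, for illustration only
    if signal == 'F':
        return {t: bias if t == own_type else (1 - bias) / (n - 1)
                for t in types}
    if signal == 'I':
        return {t: (1 - bias) if t == own_type else bias / (n - 1)
                for t in types}
    raise ValueError(f"unknown belief signal: {signal!r}")
```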
- class gameclasses.GameSolution(L, sol, actions)¶
This class extracts the solution of the normal game from the gambit solution object and saves it.
- players¶
List of players, set based on ‘L’ from input variables of
GameSolution
- Type
list
- gambit_sol¶
Solutions to the game computed by gambit, set based on ‘sol’ from input variables of
GameSolution
- Type
list
- sol¶
Dictionary of solutions of the normal game, including actions, their probabilities, payoffs, and the total cost, set by
extract_solution().
- Type
dict
- extract_solution(actions)¶
This function extracts the solution of the normal game from gambit’s solution structure.
- Parameters
actions (dict) – Possible actions for each player.
- Returns
sol – Dictionary of solutions of the normal game, including actions, their probabilities, payoffs, and the total cost.
- Return type
dict
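The unpacking step can be sketched roughly as below. The flat-vector layout (all players’ strategy probabilities concatenated in player order) is an assumption about gambit’s profile representation, and the function name is hypothetical:

```python
def extract_profile(probabilities, actions):
    """Split a flat probability vector into per-player action
    probabilities, keeping only actions played with positive probability.

    probabilities: flat list, players' strategies concatenated in order
    actions: {player: [action, ...]} defining the slice lengths
    """
    sol, start = {}, 0
    for player, acts in actions.items():
        probs = probabilities[start:start + len(acts)]
        sol[player] = {a: p for a, p in zip(acts, probs) if p > 0}
        start += len(acts)
    return sol
```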
- class gameclasses.InfrastructureGame(params)¶
This class is employed to find the restoration strategy for an interdependent network using game-theoretic methods over a given time horizon.
- equib_alg¶
Algorithm to solve the game, set based on [‘EQUIBALG’] in
params
Options: enumpure_solve, enummixed_solve, lcp_solve, lp_solve, simpdiv_solve, ipa_solve, gnm_solve
- Type
str
- magnitude¶
Magnitude parameter of the current simulation, set based on [‘MAGNITUDE’] in
params
- Type
int
- net¶
Object that stores network information, set based on
params
using set_network()
- time_steps¶
Number of time steps of the restoration process, set based on
params
using set_time_steps()
- Type
int
- objs¶
Dictionary of game objects (
NormalGame
) for all time steps of the restoration process.
- Type
dict of
NormalGame
- judgments¶
Object that stores the judgment attitude of agents, which is only needed for computing the resource allocation when using an auction. It is therefore not used in building or solving the games.
- Type
- results¶
Object that stores the restoration strategies for all time steps of the restoration process
- Type
- resource¶
Model that allocates resources and stores the resource allocation data for all time steps of the restoration process
- Type
- res_alloc_type¶
Resource allocation method
- Type
str
- v_r¶
Dictionary that stores the number of available resources, \(R_c\), for all time steps of the restoration process
- Type
dict
- output_dir¶
Directory to which the results are written, set by
set_out_dir()
- Type
str
- run_game(print_cmd=True, compute_optimal=False, save_results=True, plot=False, save_model=False)¶
Runs the infrastructure restoration game for a given number of
time_steps
- Parameters
print_cmd (bool, optional) – Should the game solution be printed to the command line. The default is True.
compute_optimal (bool, optional) – Should the optimal restoration action be found in each time step. The default is False.
save_results (bool, optional) – Should the results and game be written to file. The default is True.
plot (bool, optional) – Should the payoff matrix be plotted (only for 2-players games). The default is False.
save_model (bool, optional) – Should the games and indp models to compute payoffs be written to file. The default is False.
- Returns
None
- save_object_to_file()¶
Writes the object to file using pickle
- Returns
None.
- save_results_to_file()¶
Writes results to file
- Returns
None.
- set_network(params)¶
Checks if the network object exists and, if so, makes a deep copy of it to preserve the initial network object, which is used for all simulations and must not be altered.
- Parameters
params (dict) – Dictionary of input parameters
- Returns
Network Object
- Return type
- set_out_dir(root)¶
This function generates and sets the directory to which the results are written
- Parameters
root (str) – Root directory to write results
- Returns
output_dir – Directory to which the results are written
- Return type
str
- set_payoff_dir(root)¶
This function generates and sets the directory to which the past results were written, from which the payoffs for the first time step are read
- Parameters
root (str) – Root directory to read past results
- Returns
payoff_dir – Directory from which the payoffs are read
- Return type
str
- static set_time_steps(T, num_iter)¶
Checks that the window length is equal to one, as the current version of the games is devised based on iterative INDP.
Todo
Games: Expand the code to imitate td-INDP
- Parameters
T (int) – Window length
num_iter (int) – Number of time steps
- Returns
num_iter – Number of time steps.
- Return type
int
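A minimal sketch of the check described above (the error message is illustrative; the actual implementation may handle the failure differently):

```python
def set_time_steps(T, num_iter):
    """Validate the rolling-horizon window length: the current games are
    built on iterative INDP, so only a window length of one is allowed."""
    if T != 1:
        raise ValueError(
            "The current version of the games supports only T=1 "
            "(iterative INDP)."
        )
    return num_iter
```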
- class gameclasses.NormalGame(L, net, v_r, act_rduc=None)¶
This class models a normal (strategic) restoration game for a given time step and finds pure- and mixed-strategy Nash equilibria.
- players¶
List of players, set based on ‘L’ from input variables of
NormalGame
- Type
list
- net¶
Object that stores network information, set based on ‘net’ from input variables of
NormalGame
- v_r¶
Dictionary that stores the number of available resources, \(R_c\), for the current time step, set based on ‘v_r’ from input variables of
NormalGame
- Type
dict
- dependee_nodes¶
Dictionary of all dependee nodes in the network
- Type
dict
- actions¶
Dictionary of all relevant restoration actions (including ‘No Action (NA)’ and possibly ‘Other Action (OA)’), which are used as the possible moves by players, set by
find_actions()
- Type
dict
- first_actions¶
List of the first action of each player in
actions
, which is used to check if any action is left for any of the players.
- Type
list
- actions_reduced¶
Provisional: if True, at least one agent has more than 1000 actions, which are hence reduced.
- Type
bool
- payoffs¶
Dictionary of payoffs for all possible action profiles, calculated by solving INDP or the corresponding flow problem. It is populated by
compute_payoffs().
- Type
dict
- solving_time¶
Time to solve the game using
solve_game()
- Type
float
- normgame¶
Normal game object defined by gambit. It is populated by
build_game().
- Type
gambit.Game
- solution¶
Solution of the normal game. It is populated by
solve_game().
- Type
- chosen_equilibrium¶
Action chosen from Nash equilibria (if there are more than one) as the action of the current time step to proceed to the next step. It is populated by
choose_equilibrium().
The chosen solution is the one with the lowest total cost, assuming that after many games (as supposed in NE) players know which one has the overall lowest total cost. If there is more than one minimum-total-cost equilibrium, one of them is chosen randomly.
If the chosen NE is a mixed strategy, the strategy with the highest probability is chosen. If there is more than one such strategy, one of them is chosen randomly.
Todo
Games: refine the way the game solution is chosen in each time step
- Type
dict
- optimal_solution¶
Optimal Solution from INDP. It is populated by
find_optimal_solution()
.- Type
dict
- build_game(save_model=None, suffix='')¶
This function constructs the normal restoration game.
- Parameters
save_model (str, optional) – Directory to which the game should be written as a .txt file. The default is None, which prevents writing to file.
suffix (str, optional) – An optional suffix that is added to the file name. The default is ‘’.
- Returns
None.
- choose_equilibrium(preferred_players=None)¶
Chooses one action from pure or mixed strategies.
- Parameters
preferred_players (dict, optional) – The dictionary of players that should be included in or excluded from the NE-choosing process. The default is None, which means all players are considered in choosing payoffs.
- Returns
- Return type
None.
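The selection rule (lowest total cost, random tie-breaking) can be sketched as below. The equilibrium data layout, with each equilibrium carrying a `'total_cost'` key, is an assumption for illustration:

```python
import random

def pick_equilibrium(equilibria, seed=None):
    """Pick one Nash equilibrium per the rule described above: the one
    with the lowest total cost, breaking ties uniformly at random.

    equilibria: list of dicts, each with a 'total_cost' entry
    (hypothetical layout, not the module's actual solution structure).
    """
    rng = random.Random(seed)
    min_cost = min(eq['total_cost'] for eq in equilibria)
    candidates = [eq for eq in equilibria if eq['total_cost'] == min_cost]
    return rng.choice(candidates)  # random tie-break among minima
```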
- compute_payoffs(save_model=None, payoff_dir=None)¶
This function finds all possible combinations of actions and their corresponding payoffs considering resource limitations
- Parameters
save_model (list, optional) – The folder name and the current time step, which are needed to create the folder that contains the INDP models for computing payoffs. The default is None, which prevents saving models to file.
payoff_dir (list, optional) – Address to the file containing past results including payoff values for the first time step. For other time steps, the payoff value may have been computed based on different initial conditions. The default is None.
- Returns
None.
- find_actions()¶
This function finds all relevant restoration actions for each player
Todo
Games: Add the geographical interdependency to finding actions, which means considering arc repairs as independent actions rather than aggregating them in the ‘OA’ action.
- Returns
actions – Dictionary of all relevant restoration actions.
- Return type
dict
- find_optimal_solution()¶
Computes the centralized, optimal solution corresponding to the normal restoration game using INDP
- Returns
None.
- flow_problem(action)¶
Solves a flow problem for a given combination of actions
Repairs of damaged arcs and of damaged non-dependee nodes are removed from the actions and collected under “Other Action (OA)”, since they do not affect other agents’ actions.
To find the OA payoff, I solve an INDP problem with fixed node values. For example, assume player 1’s relevant damaged nodes (damaged dependee nodes) are \([1, 2, 3]\), and player 2’s relevant damaged nodes are \([4, 5, 6]\). For an action profile \(\{[1, OA], [5, 6, OA]\}\), I set \(1\) to be repaired and \(2, 3\) to stay damaged. Similarly, I set \(5, 6\) to be repaired and \(4\) to stay damaged. Then, I solve INDP under these restrictions.
Also, I restrict the number of resources for each layer. Say, if 2 resources are available for each layer (i.e., \(R_c=2\)), I impose this restriction too. Effectively, in solving INDP for player 1, I assume that \(1\) is repaired, \(2, 3\) must not be repaired, and the repair of non-relevant elements is decided by INDP.
Furthermore, if there are, for example, \(R_c=3\) for each player, but the action profile is \(\{[1], [5, 6]\}\), then I solve INDP by restricting \(R_c\) to 1 for player 1 and to 2 for player 2. Therefore, I make sure that only the nodes in the action profile are considered repaired.
Moreover, by removing arcs from the actions, I am ignoring the geographical interdependency among layers.
Todo
Games: Add the geographical interdependency to computation of payoff values
- Parameters
action (tuple) – Action profile for which the payoff is computed.
- Returns
None.
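The per-player resource restriction described above can be sketched as follows; the action encoding and function name are assumptions, and the handling of ‘OA’ (leaving the full \(R_c\) available so INDP can decide the remaining repairs) is my reading of the text:

```python
def per_player_resources(action_profile, R_c):
    """Resource cap imposed on each player's INDP sub-problem.

    If a player's action lists only explicit repairs (no 'OA'), the cap
    is the number of those repairs, so only the listed nodes are
    considered repaired. If the action includes 'OA', the full R_c
    remains available and INDP decides the non-relevant repairs.
    """
    caps = {}
    for player, action in action_profile.items():
        explicit = [a for a in action if a not in ('OA', 'NA')]
        caps[player] = R_c if 'OA' in action else min(len(explicit), R_c)
    return caps
```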
- solve_game(method='enumerate_pure', print_to_cmd=False, game_info=None)¶
This function solves the normal restoration game given a solving method
- Parameters
method (str, optional) – Method to solve the normal game. The default is ‘enumerate_pure’. Options: enumerate_pure, enumerate_mixed_2p, linear_complementarity_2p, linear_programming_2p, simplicial_subdivision, iterated_polymatrix_approximation, global_newton_method
print_to_cmd (bool, optional) – Should the found equilibria be written to console. The default is False.
game_info (list, optional) – The list of information about the game that should be solved. The first item in the list is a gambit.Game object similar to normgame. The second item is a list of the players of the game similar to players. The third item is a dictionary that contains the actions of each player similar to actions. The default is None, which is equivalent to passing [self.normgame, self.players, self.actions].
- Returns
None.
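The method names accepted here appear to correspond one-to-one to the gambit solver names listed under equib_alg; a hypothetical lookup table making that correspondence explicit (the mapping is inferred from the two option lists in this document, not confirmed by the source code):

```python
# Assumed mapping from solve_game() method names to the gambit solver
# names listed under equib_alg (inferred correspondence, for illustration).
METHOD_TO_SOLVER = {
    'enumerate_pure': 'enumpure_solve',
    'enumerate_mixed_2p': 'enummixed_solve',
    'linear_complementarity_2p': 'lcp_solve',
    'linear_programming_2p': 'lp_solve',
    'simplicial_subdivision': 'simpdiv_solve',
    'iterated_polymatrix_approximation': 'ipa_solve',
    'global_newton_method': 'gnm_solve',
}
```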