Utility classes

This page is organized as follows:

Objectives

This module exposes some utility classes that can be used, for example, to run experiments and store some information about saved agents.

Detailed Documentation by class

Classes:

EpisodeStatistics(env[, name_stats])

This class allows you to serialize / de-serialize some information about the data of a given environment.

ScoreICAPS2021(env[, env_seeds, ...])

This class allows you to compute the same score as the one computed for the ICAPS 2021 competitions.

ScoreL2RPN2020(env[, env_seeds, ...])

This class allows you to compute the same score as the one computed for the L2RPN 2020 competitions.

ScoreL2RPN2022(env[, env_seeds, ...])

This class implements the score used for the L2RPN 2022 competition, taking place in the context of the WCCI 2022 competition.

ScoreL2RPN2023(env[, env_seeds, ...])

This class allows you to compute the same score as the one computed for the L2RPN 2023 competitions.

class grid2op.utils.EpisodeStatistics(env, name_stats=None)[source]

Bases: object

This class allows you to serialize / de-serialize some information about the data of a given environment.

Its use happens in two steps:

  • EpisodeStatistics.compute() where you run some experiments to generate some data. Be careful: some data (for example obs.a_or, obs.rho etc.) depend on the agent you use! This needs to be performed at least once.

  • EpisodeStatistics.get() retrieves the stored information and returns a numpy array in which each row represents a step.

Note that it does not check which agent you use. If you want statistics on more than one agent, please use the name_stats keyword argument when you create the EpisodeStatistics object.

Examples

A basic use of this class is the following:

import grid2op
from grid2op.utils import EpisodeStatistics
env = grid2op.make("l2rpn_case14_sandbox")

stats = EpisodeStatistics(env)

#################################
# This needs to be done only once
stats.compute(nb_scenario=100)   # this will take a while to compute in most cases
################################

rhos_, scenario_ids = stats.get("rho")
load_p_, scenario_ids = stats.get("load_p")

# do something with them

If you want some statistics for different agents, you might also consider giving names to the way they are saved, as follows:

import grid2op
from grid2op.utils import EpisodeStatistics
from grid2op.Parameters import Parameters
env = grid2op.make("l2rpn_case14_sandbox")

nb_scenario = 8

# for example, a simple do nothing agent
stats_dn = EpisodeStatistics(env, name_stats="do_nothing")
stats_dn.compute(nb_scenario=nb_scenario)   # this will take a while to compute in most cases

# you can also change the parameters
param = Parameters()
param.NO_OVERFLOW_DISCONNECTION = True
stats_no_overflow = EpisodeStatistics(env, name_stats="no_overflow")
stats_no_overflow.compute(nb_scenario=nb_scenario, parameters=param)   # this will take a while to compute in most cases

# or use a different agent
my_agent = ...  # use any grid2op agent you want here
stats_custom_agent = EpisodeStatistics(env, name_stats="custom_agent")
stats_custom_agent.compute(nb_scenario=nb_scenario, agent=my_agent)   # this will take a while to compute in most cases

# and then you can retrieve the statistics
rho_dn, ids = stats_dn.get("rho")
rho_no_overflow, ids = stats_no_overflow.get("rho")
rho_custom_agent, ids = stats_custom_agent.get("rho")

Notes

The observations computed depend highly on the agent and on the stochastic parts of the environment, such as the maintenance or the opponent. We strongly recommend using the env_seeds and agent_seeds keyword arguments when calling the EpisodeStatistics.compute() function.

Methods:

__init__(env[, name_stats])

clean_all_stats(env)

Has possibly huge side effects

clear_all()

Has side effects

clear_episode_data()

Has side effects

compute([agent, parameters, nb_scenario, ...])

This function will save (to be later used with EpisodeStatistics.get_statistics()) all the observations at all time steps, for a given number of scenarios (see the nb_scenario attribute).

get(attribute_name)

This function assumes that you previously ran EpisodeStatistics.compute() to gather the observations.

get_metadata()

Return the metadata as a dictionary.

get_name_dir(name_stats)

Return the name of the folder in which the statistics will be saved.

get_name_file(observation_attribute)

Get the name of the file used to save a given attribute name.

list_stats(env)

List all the statistics that have been computed for this environment.

Attributes:

__weakref__

list of weak references to the object (if defined)

__init__(env, name_stats=None)[source]
__weakref__

list of weak references to the object (if defined)

staticmethod clean_all_stats(env)[source]

Has possibly huge side effects

Warning

/!\ Be extremely careful /!\

This function cleans all the statistics that have been computed for this environment.

This cannot be undone: it is permanent, and it is equivalent to calling EpisodeStatistics.clear_all() on every statistics ever computed for this environment.

clear_all()[source]

Has side effects

Warning

/!\ Be careful /!\

Clear the whole statistics directory.

This is permanent. If you want this data to be available again, you will need to run an expensive EpisodeStatistics.compute() again.

Once done, this cannot be undone.

clear_episode_data()[source]

Has side effects

Warning

/!\ Be careful /!\

To save space, it clears the data for each episode.

This is permanent. If you want this data to be available again, you will need to run an expensive EpisodeStatistics.compute() again.

Notes

It clears all directories inside the “statistics” directory.

compute(agent=None, parameters=None, nb_scenario=1, scores_func=None, max_step=-1, env_seeds=None, agent_seeds=None, nb_process=1, pbar=False)[source]

This function will save (to be later used with EpisodeStatistics.get_statistics()) all the observations at all time steps, for a given number of scenarios (see the nb_scenario attribute).

This is useful when you want to store, at a given place, some information to use later on with your agent.

Notes

Depending on its parameters (mainly the environment, the agent and the number of scenarios computed) this function might take a really long time to compute.

However, you only need to run it once (unless you delete its results with EpisodeStatistics.clear_all() or EpisodeStatistics.clear_episode_data()).

Results might also take a lot of space on the hard drive (possibly a few GB, as all information of all observations encountered is stored).

Parameters:
  • agent (grid2op.Agent.BaseAgent) – The agent you want to use to generate the statistics. Note that the statistics are highly dependent on the agent. For now only one set of statistics is computed. If you run a different agent, previous results will be erased.

  • parameters (grid2op.Parameters.Parameters) – The parameters you want to use when computing these statistics

  • nb_scenario (int) – Number of scenarios on which the statistics will be computed

  • scores_func (grid2op.Reward.BaseReward) – A reward used to compute the score of an Agent (it can now be a dictionary of BaseReward)

  • max_step (int) – Maximum number of steps you want to compute (see grid2op.Runner.Runner.run())

  • env_seeds (list) – List of seeds used for the environment (for reproducible results) (see grid2op.Runner.Runner.run())

  • agent_seeds (list) – List of seeds used for the agent (for reproducible results) (see grid2op.Runner.Runner.run()).

  • nb_process (int) – Number of process to use (see grid2op.Runner.Runner.run())

  • pbar (bool) – Whether a progress bar is displayed (see grid2op.Runner.Runner.run())

get(attribute_name)[source]

This function assumes that you previously ran EpisodeStatistics.compute() to gather the observations.

It retrieves the information about the observations that was previously stored on disk.

Parameters:

attribute_name (str) – The name of the attribute of an observation on which you want some information.

Returns:

  • values (numpy.ndarray) – All the values of the “attribute_name” for all the observations obtained when running EpisodeStatistics.compute(). It has the shape (nb step, dim_attribute).

  • ids (numpy.ndarray) – The scenario ids to which the “values” rows belong. It has the same number of rows as “values” but only one column, containing an integer. If two rows have the same id, then they come from the same scenario.

get_metadata()[source]

Return the metadata as a dictionary.

staticmethod get_name_dir(name_stats)[source]

Return the name of the folder in which the statistics will be saved.

get_name_file(observation_attribute)[source]

Get the name of the file used to save a given attribute name.

staticmethod list_stats(env)[source]

List all the statistics that have been computed for this environment.

class grid2op.utils.ScoreICAPS2021(env, env_seeds=None, agent_seeds=None, nb_scenario=16, min_losses_ratio=0.8, verbose=0, max_step=-1, nb_process_stats=1, scale_alarm_score=100.0, weight_op_score=0.7, weight_alarm_score=0.3, add_nb_highres_sim=False)[source]

Bases: ScoreL2RPN2020

This class allows you to compute the same score as the one computed for the ICAPS 2021 competitions.

It uses some “EpisodeStatistics” of the environment to compute these scores. These statistics, if not available, are computed at initialization.

When used a second time, this information is reused.

This score is the combination of the ScoreL2RPN2020 score and some extra scores based on the alarm feature.

Examples

This class can be used as follows:

import grid2op
from grid2op.utils import ScoreICAPS2021
from grid2op.Agent import DoNothingAgent

env = grid2op.make("l2rpn_case14_sandbox")
nb_scenario = 2
my_score = ScoreICAPS2021(env,
                          nb_scenario=nb_scenario,
                          env_seeds=[0 for _ in range(nb_scenario)],
                          agent_seeds=[0 for _ in range(nb_scenario)]
                          )

my_agent = DoNothingAgent(env.action_space)
print(my_score.get(my_agent))

Notes

To prevent overfitting, we strongly recommend using grid2op.Environment.Environment.train_val_split() and applying this function on the resulting validation set only.

Also note that computing the statistics and evaluating an agent on a whole dataset of multiple GB can take a really long time and a lot of memory. This, again, argues in favor of using this function only on a validation set.

We also strongly recommend setting the seeds of your agent (agent_seeds) and of the environment (env_seeds) if you want to use this feature. Reproducibility is really important if you want to make progress.

Warning

The triggering (or not) of the recomputation of the statistics is not perfect for now. We recommend always using the same seeds (the env_seeds and agent_seeds keyword arguments of this function) and the same parameters (env.parameters) for a given environment.

You might need to clean them manually, by calling the ScoreL2RPN2020.clear_all() function, if you change any of these.

Methods:

__init__(env[, env_seeds, agent_seeds, ...])

__init__(env, env_seeds=None, agent_seeds=None, nb_scenario=16, min_losses_ratio=0.8, verbose=0, max_step=-1, nb_process_stats=1, scale_alarm_score=100.0, weight_op_score=0.7, weight_alarm_score=0.3, add_nb_highres_sim=False)[source]
class grid2op.utils.ScoreL2RPN2020(env, env_seeds=None, agent_seeds=None, nb_scenario=16, min_losses_ratio=0.8, verbose=0, max_step=-1, nb_process_stats=1, scores_func=<class 'grid2op.Reward.l2RPNSandBoxScore.L2RPNSandBoxScore'>, score_names=None, add_nb_highres_sim=False)[source]

Bases: object

This class allows you to compute the same score as the one computed for the L2RPN 2020 competitions.

It uses some “EpisodeStatistics” of the environment to compute these scores. These statistics, if not available, are computed at initialization.

When used a second time, this information is reused.

Examples

This class can be used as follows:

import grid2op
from grid2op.utils import ScoreL2RPN2020
from grid2op.Agent import DoNothingAgent

env = grid2op.make("l2rpn_case14_sandbox")
nb_scenario = 2
my_score = ScoreL2RPN2020(env,
                          nb_scenario=nb_scenario,
                          env_seeds=[0 for _ in range(nb_scenario)],
                          agent_seeds=[0 for _ in range(nb_scenario)]
                          )

my_agent = DoNothingAgent(env.action_space)
print(my_score.get(my_agent))

Notes

To prevent overfitting, we strongly recommend using grid2op.Environment.Environment.train_val_split() and applying this function on the resulting validation set only.

Also note that computing the statistics and evaluating an agent on a whole dataset of multiple GB can take a really long time and a lot of memory. This, again, argues in favor of using this function only on a validation set.

We also strongly recommend setting the seeds of your agent (agent_seeds) and of the environment (env_seeds) if you want to use this feature. Reproducibility is really important if you want to make progress.

Warning

The triggering (or not) of the recomputation of the statistics is not perfect for now. We recommend always using the same seeds (the env_seeds and agent_seeds keyword arguments of this function) and the same parameters (env.parameters) for a given environment.

You might need to clean them manually, by calling the ScoreL2RPN2020.clear_all() function, if you change any of these.

Methods:

__init__(env[, env_seeds, agent_seeds, ...])

clear_all()

Has side effects

get(agent[, path_save, nb_process])

Get the score of the agent depending on what has been computed.

Attributes:

__weakref__

list of weak references to the object (if defined)

__init__(env, env_seeds=None, agent_seeds=None, nb_scenario=16, min_losses_ratio=0.8, verbose=0, max_step=-1, nb_process_stats=1, scores_func=<class 'grid2op.Reward.l2RPNSandBoxScore.L2RPNSandBoxScore'>, score_names=None, add_nb_highres_sim=False)[source]
__weakref__

list of weak references to the object (if defined)

clear_all()[source]

Has side effects

Warning

/!\ Be careful /!\

Clear the whole statistics directory for the 3 different sets of statistics used for the score. It will remove the previously computed statistics.

Once done, this cannot be undone.

get(agent, path_save=None, nb_process=1)[source]

Get the score of the agent depending on what has been computed.

TODO The plots will be done later.

Parameters:
  • agent (grid2op.Agent.BaseAgent) – The agent you want to score

  • path_save (str) – the path where you want to store the logs of your agent.

  • nb_process (int) – Number of process to use for the evaluation

Returns:

  • all_scores (list) – List of the scores of your agent, one per scenario

  • ts_survived (list) – List of the number of steps your agent successfully managed for each scenario

  • total_ts (list) – Total number of steps for each scenario
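For instance, the three returned lists can be combined into a per-scenario summary (a sketch; the helper name and the sample values are purely illustrative):

```python
def survival_summary(all_scores, ts_survived, total_ts):
    """Summarize the three lists returned by ScoreL2RPN2020.get()."""
    lines = []
    for i, (score, survived, total) in enumerate(
            zip(all_scores, ts_survived, total_ts)):
        lines.append(f"scenario {i}: score={score:.2f}, "
                     f"survived {survived}/{total} steps")
    return lines

# e.g. with values shaped like the output of my_score.get(my_agent)
print("\n".join(survival_summary([60.0, -10.0], [2016, 500], [2016, 2016])))
```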

class grid2op.utils.ScoreL2RPN2022(env, env_seeds=None, agent_seeds=None, nb_scenario=16, min_losses_ratio=0.8, verbose=0, max_step=-1, nb_process_stats=1, scores_func=<class 'grid2op.Reward.l2rpn_wcci2022_scorefun.L2RPNWCCI2022ScoreFun'>, score_names=None, add_nb_highres_sim=False)[source]

Bases: ScoreL2RPN2020

This class implements the score used for the L2RPN 2022 competition, taking place in the context of the WCCI 2022 competition.

Methods:

__init__(env[, env_seeds, agent_seeds, ...])

__init__(env, env_seeds=None, agent_seeds=None, nb_scenario=16, min_losses_ratio=0.8, verbose=0, max_step=-1, nb_process_stats=1, scores_func=<class 'grid2op.Reward.l2rpn_wcci2022_scorefun.L2RPNWCCI2022ScoreFun'>, score_names=None, add_nb_highres_sim=False)[source]
class grid2op.utils.ScoreL2RPN2023(env, env_seeds=None, agent_seeds=None, nb_scenario=16, min_losses_ratio=0.8, verbose=0, max_step=-1, nb_process_stats=1, scores_func={'assistant_confidence': <class 'grid2op.Reward._alertTrustScore._AlertTrustScore'>, 'grid_operational_cost': <class 'grid2op.Reward.l2RPNSandBoxScore.L2RPNSandBoxScore'>, 'new_renewable_sources_usage': <class 'grid2op.Reward._newRenewableSourcesUsageScore._NewRenewableSourcesUsageScore'>}, score_names=['grid_operational_cost_scores', 'assistant_confidence_scores', 'new_renewable_sources_usage_scores'], add_nb_highres_sim=False, scale_assistant_score=100.0, scale_nres_score=100.0, weight_op_score=0.6, weight_assistant_score=0.25, weight_nres_score=0.15, min_nres_score=-100, min_assistant_score=-300)[source]

Bases: ScoreL2RPN2020

This class allows you to compute the same score as the one computed for the L2RPN 2023 competitions.

It uses some “EpisodeStatistics” of the environment to compute these scores. These statistics, if not available, are computed at initialization.

When used a second time, this information is reused.

This score is the combination of the ScoreL2RPN2020 score and some extra scores based on the assistant feature (alert) and on the use of new renewable energy sources.

Examples

This class can be used as follows:

import grid2op
from grid2op.utils import ScoreL2RPN2023
from grid2op.Agent import DoNothingAgent

env = grid2op.make("l2rpn_case14_sandbox")
nb_scenario = 2
my_score = ScoreL2RPN2023(env,
                          nb_scenario=nb_scenario,
                          env_seeds=[0 for _ in range(nb_scenario)],
                          agent_seeds=[0 for _ in range(nb_scenario)]
                          )

my_agent = DoNothingAgent(env.action_space)
print(my_score.get(my_agent))

Notes

To prevent overfitting, we strongly recommend using grid2op.Environment.Environment.train_val_split() and applying this function on the resulting validation set only.

Also note that computing the statistics and evaluating an agent on a whole dataset of multiple GB can take a really long time and a lot of memory. This, again, argues in favor of using this function only on a validation set.

We also strongly recommend setting the seeds of your agent (agent_seeds) and of the environment (env_seeds) if you want to use this feature. Reproducibility is really important if you want to make progress.

Warning

The triggering (or not) of the recomputation of the statistics is not perfect for now. We recommend always using the same seeds (the env_seeds and agent_seeds keyword arguments of this function) and the same parameters (env.parameters) for a given environment.

You might need to clean them manually, by calling the ScoreL2RPN2020.clear_all() function, if you change any of these.

Methods:

__init__(env[, env_seeds, agent_seeds, ...])

__init__(env, env_seeds=None, agent_seeds=None, nb_scenario=16, min_losses_ratio=0.8, verbose=0, max_step=-1, nb_process_stats=1, scores_func={'assistant_confidence': <class 'grid2op.Reward._alertTrustScore._AlertTrustScore'>, 'grid_operational_cost': <class 'grid2op.Reward.l2RPNSandBoxScore.L2RPNSandBoxScore'>, 'new_renewable_sources_usage': <class 'grid2op.Reward._newRenewableSourcesUsageScore._NewRenewableSourcesUsageScore'>}, score_names=['grid_operational_cost_scores', 'assistant_confidence_scores', 'new_renewable_sources_usage_scores'], add_nb_highres_sim=False, scale_assistant_score=100.0, scale_nres_score=100.0, weight_op_score=0.6, weight_assistant_score=0.25, weight_nres_score=0.15, min_nres_score=-100, min_assistant_score=-300)[source]
