Make: Using pre defined Environments

Objectives

The function defined in this module is the easiest and most convenient way to create a valid grid2op.Environment.Environment.

To get started with such an environment, you can simply do:

import grid2op
env = grid2op.make("l2rpn_case14_sandbox")

You can consult the different notebooks in the getting_started directory of this package for more information on how to use it.

Created Environments should behave exactly like a gym environment. If you notice any unwanted behavior, please open an issue in the official grid2op repository: Grid2Op

The environment created with this method should be fully compatible with the gym framework: if you are developing a new “Reinforcement Learning” algorithm and used the OpenAI Gym framework to do so, you can port your code in a few minutes (basically this consists of adapting the input and output dimensions of your BaseAgent) and make it work with a Grid2Op environment. An example of such modifications is exposed in the getting_started/ notebooks.
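
As a rough illustration, here is a minimal sketch of such a gym-like interaction loop written with plain grid2op calls (the agent here is just a placeholder that always plays the “do nothing” action):

import grid2op

env = grid2op.make("l2rpn_case14_sandbox")

obs = env.reset()
done = False
total_reward = 0.0
while not done:
    # replace this "do nothing" action with your agent's decision
    action = env.action_space({})
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("Episode finished, cumulated reward: {}".format(total_reward))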

Usage

There are two main ways to use the make() function. The first one is to directly pass the name of the environment you want to use:

import grid2op
env = grid2op.make("l2rpn_case14_sandbox")

This will create the environment known as “l2rpn_case14_sandbox” with all default parameters. If this environment has not been downloaded yet, the first call to this function will download it and store it in a cache on your system (see the section Cache manipulation for more information); afterwards it will use the downloaded environment.

If your computer does not have internet access, or you prefer to download things manually, it is also possible to provide the full absolute path of your dataset. On Linux / Unix (including macOS) machines this will be something like

import grid2op
env = grid2op.make("/full/path/where/the/env/is/located/l2rpn_case14_sandbox")

And on Windows-based machines this will look like:

import grid2op
env = grid2op.make("C:\\where\\the\\env\\is\\located\\l2rpn_case14_sandbox")

In both cases it will load the environment named “l2rpn_case14_sandbox” (provided that you found a way to get it on your machine), located at the path “/full/path/where/the/env/is/located/l2rpn_case14_sandbox” (or “C:\\where\\the\\env\\is\\located\\l2rpn_case14_sandbox”).
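
If you prefer to build that path programmatically (for example to keep your script portable across operating systems), a small sketch could look like this (the location used below is purely illustrative; adapt it to wherever you actually stored the environment folder):

import grid2op
from pathlib import Path

# purely illustrative location: adapt it to where the environment folder really is
env_dir = Path.home() / "data_grid2op" / "l2rpn_case14_sandbox"
env = grid2op.make(str(env_dir))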

Important notes

As of version 0.8.0, the “make()” function has been updated in grid2op. This function, which replaces the previous implementation (renamed make_old()), merges the behaviour of the “grid2op.download” script and the “make_old” function.

It has the following behavior:

  1. if you specify a full path to a local environment (containing the chronics and the default parameters), it will be used (see section Usage)

  2. if you specify the name of an environment that you have already downloaded, it will use this environment (NB: currently no check is made to see whether the environment has been updated remotely, which can happen if we realize there were some issues with it). If you want to update the environments you downloaded, please use grid2op.update_env()

  3. you are expected to provide an environment name (if you don’t know what this is just put “l2rpn_case14_sandbox”)

  4. if the flag test is set to False (default behaviour) and none of the above conditions are met, make() will download the data of this environment locally the first time it is called. If you don’t want to download anything, you can pass the flag test=True (in this case only a small sample of time series will be available; we don’t recommend doing that at all!)

  5. if test=True (NON-default behaviour) nothing will be downloaded, and make() will attempt to use a pre-defined environment provided with the python package. We want to emphasize that, because the environments provided with this package contain only very little data, they are not suitable for learning a consistent agent / controller. That is why a warning is issued in this case. Also, keep in mind that if you don’t pass test=True then you will not be able to use these environments provided with the package. Setting “test=True” is NOT recommended for most usages. Have a look at the section Usage for more details on how to use make, especially if you don’t have an internet connection (see also the sketch after this list).

  6. if no valid environment is found, make() throws an EnvError.
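
The sketch below illustrates points 4 to 6 when no internet connection is available: it falls back on the small environment shipped with the package (test=True) and catches the EnvError raised when no valid environment can be found (EnvError lives in grid2op.Exceptions):

import grid2op
from grid2op.Exceptions import EnvError

try:
    # only a small sample of time series, not suitable for training a real agent
    env = grid2op.make("l2rpn_case14_sandbox", test=True)
except EnvError as exc:
    print("No valid environment could be created: {}".format(exc))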

Cache manipulation

Editing the file ~/.grid2opconfig.json allows you to change the data cache location. Programmatically, it can be done with change_local_dir().

Call get_current_local_dir() to get the local cache directory location.

You can list the environments in the local cache directory by calling list_available_local_env() and list all environments that can be downloaded with list_available_remote_env() (NB: list_available_remote_env() requires an internet connection).

import grid2op
print("The current local directory where the environment are downloaded is \n{}"
      "".format(grid2op.get_current_local_dir()))
print("The environments available without necessary download are: \n{}"
      "".format(grid2op.list_available_local_env()))
print("I can download these environments from the internet: \n{}"
      "".format(grid2op.list_available_remote_env()))

NB: if you change the cache directory, all previously downloaded environments will no longer be visible to grid2op, but they will not be removed from your local hard drive. This is why we don’t recommend changing this folder unless you have a valid reason to do so.

Customize your environment

When you create it, you can change different parameters of the environment. We summarize here all the parameters that can be modified at the creation of your environment. We recommend you look at the section Parameters of make_from_dataset_path() for more information about the effect of these attributes. NB: arguments preceded by a * are listed for exhaustiveness. They are technical arguments and should not be modified unless you have a reason to. For example, in the context of the L2RPN competition, we don’t recommend modifying them.

  • dataset_path: used to specify the name (or the path) of the environment you want to load

  • backend: an initialized backend that will carry out the computations related to the power system [mainly used if you want to change from PandapowerBackend (default) to a different one, e.g. LightSim2Grid]

  • reward_class: change the type of reward you want to use for your agent (see section Reward for more information).

  • other_reward: tell “env.step” to return additional “rewards” (see section Reward for more information).

  • difficulty, param: control the difficulty level of the game (might not always be available)

  • chronics_class, data_feeding_kwargs: further customization to how the data will be generated, see section Optimize the data pipeline for more information

  • n_busbar: (int, default 2) [new in version 1.9.9] see section Substations for more information

  • * chronics_path, data_feeding: to overload the default path for the data (not recommended)

  • * action_class: which action class your agent is allowed to use (not recommended).

  • * gamerules_class: the rules that are checked to declare an action legal / illegal (not recommended)

  • * volagecontroler_class: how the voltages are set on the grid (not recommended)

  • * grid_path: the path where the default powergrid properties are stored (not recommended)

  • * observation_class, kwargs_observation: which type of observation do you use (not recommended)

  • * opponent_action_class, opponent_class, opponent_init_budget, opponent_budget_per_ts, opponent_budget_class, opponent_space_type, kwargs_opponent: all configuration for the opponent. (not recommended)

  • * has_attention_budget, attention_budget_class, kwargs_attention_budget: all configuration for the “alarm” / “attention budget” parameters. (not recommended)

More information about the “customization” of the environment, especially to optimize the I/O or to manipulate which data you interact with, is available in the Environment module (Usage section).

Warning

Don’t modify the action class

We do not recommend modifying the keyword arguments starting with *, and especially the action_class.

You can customize an environment with:

import grid2op
env = grid2op.make(dataset_path,
                   backend=...,  # put a compatible backend here
                   reward_class=...,  # change the reward function, see BaseReward
                   other_reward={key: reward_func}, # with `key` being strings and `reward_func` inheriting from BaseReward
                   difficulty=...,  # str or ints
                   param=...,  # any Parameters (from grid2op.Parameters import Parameters)
                   # etc.
                   )
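
For example, a more concrete sketch could be the following (it assumes the optional lightsim2grid package is installed; L2RPNReward is just one of the reward classes shipped with grid2op):

import grid2op
from grid2op.Reward import L2RPNReward
from lightsim2grid import LightSimBackend  # optional, faster power flow backend

env = grid2op.make("l2rpn_case14_sandbox",
                   backend=LightSimBackend(),  # replaces the default PandapowerBackend
                   reward_class=L2RPNReward,   # reward returned by env.step
                   )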

See documentation of grid2op.MakeEnv.make_from_dataset_path() for more information about all these parameters.

Detailed Documentation by class

Functions:

change_local_dir(new_path)

This function will change the path where datasets are read from / written to.

get_current_local_dir()

This function allows you to get the directory in which grid2op will download the datasets.

list_available_local_env()

This function returns the environments that are available locally.

list_available_remote_env()

This function returns the list of available environments.

list_available_test_env()

This function lists the environments available through "grid2op.make(..., test=True)", which are the environments used for testing purposes, but available without the need to download any data.

make(dataset, *[, test, logger, ...])

This function is a shortcut to rapidly create some (pre-defined) environments within the grid2op framework.

make_from_dataset_path([dataset_path, ...])

INTERNAL USE ONLY

make_old([name_env])

INTERNAL USE ONLY

update_env([env_name])

This function allows you to retrieve the latest version of some of the files used to create the environment.

grid2op.MakeEnv.change_local_dir(new_path)[source]

This function will change the path where datasets are read from / written to.

The previous datasets will be left in the previous configuration folder and will not be accessible by other grid2op functions such as “make”, for example.

Parameters:

new_path (str) – The new path in which to download the datasets.

Examples

To set the download path, and the path where grid2op will look for available local environments, you can do:

import grid2op
local_dir = ...  # should be a valid path on your machine
grid2op.change_local_dir(local_dir)

# check it has worked:
print(f"Data about grid2op downloaded environments are now stored in: "{grid2op.get_current_local_dir()}"")
grid2op.MakeEnv.get_current_local_dir()[source]

This function allows you to get the directory in which grid2op will download the datasets. This path can be modified with the “.grid2opconfig.json” file.

Returns:

res – The current path where the data is downloaded.

Return type:

str

Examples

import grid2op
print(f"Data about grid2op downloaded environments are stored in: "{grid2op.get_current_local_dir()}"")
grid2op.MakeEnv.list_available_local_env()[source]

This function returns the environments that are available locally. It does not return the environments that are included in the package.

Returns:

res – a sorted list of locally available environments.

Return type:

list

Examples

import grid2op
li = grid2op.list_available_local_env()
li_fmt = '\n * '.join(li)
print(f"The locally available environments (without downloading anything) are: \n * {li_fmt}")
grid2op.MakeEnv.list_available_remote_env()[source]

This function returns the list of available environments. It returns all the environments; those you have already downloaded will be listed here as well.

Returns:

res – a sorted list of environments that can be downloaded.

Return type:

list

Examples

A usage example is

import grid2op
li = grid2op.list_available_remote_env()
li_fmt = '\n * '.join(li)
print(f"The available environments are: \n * {li_fmt}")
grid2op.MakeEnv.list_available_test_env()[source]

This function lists the environments available through “grid2op.make(…, test=True)”, which are the environments used for testing purposes, but available without the need to download any data.

The “test” environments are provided with the grid2op package.

Returns:

res – a sorted list of available environments for testing / illustration purposes.

Return type:

list

Examples

import grid2op
li = grid2op.list_available_test_env()

env = grid2op.make(li[0], test=True)
grid2op.MakeEnv.make(dataset: str | PathLike, *, test: bool = False, logger: Logger | None = None, experimental_read_from_local_dir: bool = False, n_busbar=2, _add_to_name: str = '', _compat_glop_version: str | None = None, **kwargs) Environment[source]

This function is a shortcut to rapidly create some (pre-defined) environments within the grid2op framework.

Other environments, with different powergrids, will be made available in the future and will be easily downloadable using this function.

It mimics the gym.make function.

Changed in version 1.9.3: Removed the possibility to call this function with positional arguments (keyword arguments are now enforced)

New in version 1.10.0: The n_busbar parameter

Parameters:
  • dataset (str or path) – Name of the environment you want to create

  • test (bool) – Whether you want to use a test environment (NOT recommended). Use at your own risk.

  • logger – If you want to use a specific logger for environment and all other grid2op objects, you can put it here. This feature is still under development.

  • experimental_read_from_local_dir (bool) – Grid2op “embeds” the grid description into the description of the classes themselves. By default this is done “on the fly” (when the environment is created), but for some use cases (especially ones involving multiprocessing or “pickle”) it might not be easily usable. If you encounter issues with pickle or multiprocessing, you can set this flag to True. See the doc of grid2op.Environment.BaseEnv.generate_classes() for more information.

  • n_busbar (int) – Number of independent busbars allowed per substation. By default it’s 2.

  • kwargs – Other keyword arguments to give more control on the environment you are creating. See the Parameters section of make_from_dataset_path().

  • _add_to_name – Internal, do not use (and can only be used when setting “test=True”). If experimental_read_from_local_dir is set to True, this has no effect.

  • _compat_glop_version – Internal, do not use (and can only be used when setting “test=True”)

Returns:

env – The created environment.

Return type:

grid2op.Environment.Environment

Examples

If you want to create the environment “l2rpn_case14_sandbox”:

NB: the first time you type this command, the dataset (approximately 300 MB for this one) will be downloaded from the internet; sizes vary per dataset.
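
The call itself is the same as in the Usage section above:

import grid2op
env = grid2op.make("l2rpn_case14_sandbox")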

grid2op.MakeEnv.make_from_dataset_path(dataset_path='/', logger=None, experimental_read_from_local_dir=False, n_busbar=2, _add_to_name='', _compat_glop_version=None, **kwargs) Environment[source]

INTERNAL USE ONLY

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Prefer using the grid2op.make() function.

This function is a shortcut to rapidly create environments within the grid2op framework. We don’t recommend using this function directly. Prefer using the make() function.

It mimics the gym.make function.

Parameters:
  • dataset_path (str) – Path to the dataset folder

  • logger – Something to pass to grid2op environment to be used as logger.

  • param (grid2op.Parameters.Parameters, optional) – Type of parameters used for the Environment. Parameters define how the powergrid problem is cast into a Markov decision process.

  • backend (grid2op.Backend.Backend, optional) – The backend to use for the computation. If provided, it must be an instance of grid2op.Backend.Backend.

  • n_busbar (int) – Number of independent busbars allowed per substation. By default it’s 2.

  • action_class (type, optional) – Type of BaseAction the BaseAgent will be able to perform. If provided, it must be a subclass of grid2op.BaseAction.BaseAction

  • observation_class (type, optional) – Type of BaseObservation the BaseAgent will receive. If provided, It must be a subclass of grid2op.BaseAction.BaseObservation

  • reward_class (type, optional) – Type of reward signal the BaseAgent will receive. If provided, It must be a subclass of grid2op.BaseReward.BaseReward

  • other_rewards (dict, optional) – Used to return additional “rewards”, besides the main one, in the “info” value returned after a call to env.step.

  • gamerules_class (type, optional) – Type of “Rules” the BaseAgent needs to comply with. Rules are here to model some operational constraints. If provided, it must be a subclass of grid2op.RulesChecker.BaseRules

  • data_feeding_kwargs (dict, optional) – Dictionary that is used to build the data_feeding (chronics) objects.

  • chronics_class (type, optional) – The type of chronics that represents the dynamics of the Environment created. Usually they come from different folders.

  • data_feeding (type, optional) – The type of chronics handler you want to use.

  • volagecontroler_class (type, optional) – The type of grid2op.VoltageControler.VoltageControler to use.

  • chronics_path (str) – Path where to look for the chronics dataset (optional)

  • grid_path (str, optional) – The path where the powergrid is located. If provided it must be a string, and point to a valid file present on the hard drive.

  • difficulty (str, optional) – The difficulty level. If present, it starts from “0”, the “easiest” but least realistic mode. In the case of the dataset being used in the L2RPN competition, the level used for the competition is “competition” (the “hardest” and most realistic mode). If multiple difficulty levels are available, the most realistic one (the “hardest”) is the default choice.

  • opponent_space_type (type, optional) – The type of opponent space to use. If provided, it must be a subclass of OpponentSpace.

  • opponent_action_class (type, optional) – The action class used for the opponent. The opponent will not be able to use actions that are invalid with the given action class. It defaults to grid2op.Action.DontAct which forbids any type of action.

  • opponent_class (type, optional) – The opponent class to use. The default class is grid2op.Opponent.BaseOpponent which is a type of opponent that does nothing.

  • opponent_init_budget (float, optional) – The initial budget of the opponent. It defaults to 0.0 which means the opponent cannot perform any action if this is not modified.

  • opponent_attack_duration (int, optional) – The number of time steps an attack from the opponent lasts.

  • opponent_attack_cooldown (int, optional) – The number of time steps the opponent has to wait between two attacks.

  • opponent_budget_per_ts (float, optional) – The increase of the opponent budget per time step. At each time step the opponent sees its budget increase. It defaults to 0.0.

  • opponent_budget_class (type, optional) – defaults: grid2op.Opponent.UnlimitedBudget

  • kwargs_observation (dict) –

    Key words used to initialize the observation. For example, in the case of NoisyObservation, it might be the standard error of each underlying distribution. It might be more complicated for other types of custom observations, but it should be deep copiable.

    Each observation will be initialized (by the observation_space) with:

    obs = observation_class(obs_env=self.obs_env,
                            action_helper=self.action_helper_env,
                            random_prng=self.space_prng,
                            **kwargs_observation  # <- this kwargs is used here
                           )
    

  • observation_backend_class – The class used to build the observation backend (used for Simulator obs.simulate and obs.get_forecasted_env). If provided, this should be a type / class and not an instance of this class. (by default it’s None)

  • observation_backend_kwargs – The keyword arguments used to build the observation backend (used for Simulator, obs.simulate and obs.get_forecasted_env). This should be a dictionary. (by default it’s None)

  • _add_to_name – Internal, used for test only. Do not attempt to modify under any circumstances.

  • _compat_glop_version – Internal, used for test only. Do not attempt to modify under any circumstances.

Returns:

env – The created environment with the given properties.

Return type:

grid2op.Environment.Environment

grid2op.MakeEnv.make_old(name_env='case14_realistic', **kwargs)[source]

INTERNAL USE ONLY

Warning

/!\ Internal, do not use unless you know what you are doing /!\

(DEPRECATED) This function is a shortcut to rapidly create some (pre-defined) environments within the grid2op framework.

For now, only the environment corresponding to the IEEE “case14” powergrid, with some pre-defined chronics, is available.

Other environments, with different powergrids will be made available in the future.

It mimics the gym.make function.

Parameters:
  • name_env (str) – Name of the environment to create.

  • param (grid2op.Parameters.Parameters, optional) – Type of parameters used for the Environment. Parameters define how the powergrid problem is cast into a Markov decision process.

  • backend (grid2op.Backend.Backend, optional) – The backend to use for the computation. If provided, it must be an instance of grid2op.Backend.Backend.

  • action_class (type, optional) – Type of BaseAction the BaseAgent will be able to perform. If provided, it must be a subclass of grid2op.BaseAction.BaseAction

  • observation_class (type, optional) – Type of BaseObservation the BaseAgent will receive. If provided, It must be a subclass of grid2op.BaseAction.BaseObservation

  • reward_class (type, optional) – Type of reward signal the BaseAgent will receive. If provided, It must be a subclass of grid2op.BaseReward.BaseReward

  • gamerules_class (type, optional) – Type of “Rules” the BaseAgent needs to comply with. Rules are here to model some operational constraints. If provided, it must be a subclass of grid2op.RulesChecker.BaseRules

  • grid_path (str, optional) – The path where the powergrid is located. If provided it must be a string, and point to a valid file present on the hard drive.

  • data_feeding_kwargs (dict, optional) – Dictionary that is used to build the data_feeding (chronics) objects.

  • chronics_class (type, optional) – The type of chronics that represents the dynamics of the Environment created. Usually they come from different folders.

  • data_feeding (type, optional) – The type of chronics handler you want to use.

  • chronics_path (str) – Path where to look for the chronics dataset.

  • volagecontroler_class (type, optional) – The type of grid2op.VoltageControler.VoltageControler to use.

  • other_rewards (dict, optional) – Dictionary with other rewards we might want to look at during training. It is given as a dictionary whose keys are the names of the rewards and whose values are classes representing the new variables.

Returns:

env – The created environment.

Return type:

grid2op.Environment.Environment

grid2op.MakeEnv.update_env(env_name=None)[source]

This function allows you to retrieve the latest version of some of the files used to create the environment.

Files can be, for example, “config.py”, “prod_charac.csv” or “difficulty_levels.json”.

Parameters:

env_name (str) – The name of the environment for which you want to update the configuration files (it must be an environment you have already downloaded). If None, it will look for updates for all the locally available environments.

Examples

Here is an example of how to update your environments:

import grid2op
grid2op.update_env()
# it will download the files "config.py" or "prod_charac.csv" or "difficulty_levels.json"
# of your local environment to match the latest version available.
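
You can also restrict the update to a single environment by passing its name (it must be an environment you have already downloaded), for example:

import grid2op
grid2op.update_env(env_name="l2rpn_case14_sandbox")  # only update this environment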

Still having trouble finding the information? Do not hesitate to open a GitHub issue about the documentation at this link: Documentation issue template