Time series (formerly called “chronics”)


Objectives

This module handles everything related to input data that is not structural.

In the Grid2Op vocabulary, a “GridValue” (or “Chronics”) is an object that provides the data used to change the input parameters of a power flow from one time step to the next.

It is a more generic terminology. Modifications that can be performed by a GridValue object include, but are not limited to:

  • injections such as:

    • generators active production setpoint

    • generators voltage setpoint

    • loads active consumption

    • loads reactive consumption

  • structural information such as:

    • planned outage: powerline disconnection anticipated in advance

    • hazards: powerline disconnection that cannot be anticipated, for example due to a windstorm.

Any powergrid modification that can be performed using a grid2op.Action.BaseAction can be implemented in the form of a GridValue.

The same mechanism as for grid2op.Action.BaseAction or grid2op.Observation.BaseObservation is pursued here. All state modifications made by the grid2op.Environment must derive from GridValue. It is not recommended to create them directly; rather, use the ChronicsHandler for that purpose.

Note that the values returned by a GridValue are backend dependent. A GridValue object should always return the data in the order expected by the grid2op.Backend, regardless of the order in which data are given in the files or generated by the data generation process.

This implies that changing the backend will change the output of GridValue. More information about this is given in the description of the GridValue.initialize() method.

Finally, a feature that sets grid2op apart from other Reinforcement Learning problems is the possibility to use “forecasts”. This optional feature can be accessed via the grid2op.Observation.BaseObservation, mainly through the grid2op.Observation.BaseObservation.simulate() method. The data used to generate these forecasts come from the grid2op.GridValue and are detailed in the GridValue.forecasts() method.

More control on the chronics

We explained, in the description of the grid2op.Environment (section Time series Customization and following), how to gain more control over which chronics is used, which steps are used within a chronics, etc. We will not detail it again here; please refer to that page for more information.

However, know that you can have very detailed control over which chronics are used.

Choosing the right chronics can also lead to large gains in computation time. This is particularly true if you want to benefit the most from HPC, for example. More detail is given in the Optimize the data pipeline section. In summary, you can:

  • set the “chunk” size (the amount of data read from disk: instead of reading an entire scenario, you read from the hard drive only a certain amount of data at a time, see grid2op.Chronics.ChronicsHandler.set_chunk_size()). You can use it with env.chronics_handler.set_chunk_size(100)

  • cache all the chronics and use them from memory (instead of reading them from the hard drive, see grid2op.Chronics.MultifolderWithCache). You can do this with env = grid2op.make(…, chronics_class=MultifolderWithCache)

Finally, if you need to study machine learning in a “regular” fashion, with a train / validation / test split, you can use the env.train_val_split or env.train_val_split_random functions to do that. See an example usage in the section Splitting into training, validation, test scenarios.

Detailed Documentation by class

Classes:

ChangeNothing([time_interval, max_iter, ...])

INTERNAL

ChronicsHandler([chronicsClass, ...])

Represents a Chronics handler that returns a grid state.

FromChronix2grid(env_path, with_maintenance)

This class of "chronix" allows you to use the chronix2grid package to generate data "on the fly" rather than having to read it from the hard drive.

FromHandlers(path, load_p_handler, ...[, ...])

This class allows you to use the grid2op.Chronics.handlers.BaseHandler (and all the derived classes, see Time Series Handlers) to generate the "input time series" of the environment.

FromMultiEpisodeData(path, li_ep_data[, ...])

This class allows you to replay episodes that have previously been run using a runner.

FromNPY(load_p, load_q, prod_p[, prod_v, ...])

This class allows you to generate chronics compatible with grid2op when the data are provided in numpy format.

FromOneEpisodeData(path, ep_data[, ...])

This class allows you to use the grid2op.Chronics.handlers.BaseHandler to read back data stored in grid2op.Episode.EpisodeData.

GridStateFromFile(path[, sep, ...])

INTERNAL

GridStateFromFileWithForecasts(path[, sep, ...])

An extension of GridStateFromFile that implements the "forecast" functionality.

GridStateFromFileWithForecastsWithMaintenance(path)

An extension of GridStateFromFileWithForecasts that implements the maintenance chronic generator on the fly (maintenance are not read from files, but are rather generated when the chronics is created).

GridStateFromFileWithForecastsWithoutMaintenance(path)

INTERNAL

GridValue([time_interval, max_iter, ...])

This is the base class for every kind of data for the _grid.

Multifolder(path[, time_interval, ...])

The classes GridStateFromFile and GridStateFromFileWithForecasts implement the reading of a single folder representing a single episode.

MultifolderWithCache(path[, time_interval, ...])

This class is a particular type of Multifolder that, instead of reading everything from disk each time, stores it in memory.

class grid2op.Chronics.ChangeNothing(time_interval=datetime.timedelta(seconds=300), max_iter=-1, start_datetime=datetime.datetime(2019, 1, 1, 0, 0), chunk_size=None, **kwargs)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Do not attempt to create an object of this class. This is initialized by the environment at its creation.

This set of classes is mainly internal.

We don’t recommend changing anything in these classes, unless you want to code a custom “chronics class”.

This class is the most basic class to modify the powergrid values. It does nothing aside from increasing GridValue.max_iter and GridValue.current_datetime.

Examples

Usage example (note that you normally don’t have to do this yourself):

import grid2op
from grid2op.Chronics import ChangeNothing

env_name = "l2rpn_case14_sandbox"  # or any other name
# env = grid2op.make(env_name, data_feeding_kwargs={"gridvalueClass": ChangeNothing})
env = grid2op.make(env_name, chronics_class=ChangeNothing)

It can also be used with the “blank” environment:

import grid2op
from grid2op.Chronics import ChangeNothing
env = grid2op.make("blank",
                   test=True,
                   grid_path=EXAMPLE_CASEFILE,
                   chronics_class=ChangeNothing,
                   action_class=TopologyAndDispatchAction)

Methods:

check_validity(backend)

INTERNAL

initialize(order_backend_loads, ...[, ...])

This function is used to initialize the data generator.

load_next()

INTERNAL

next_chronics()

INTERNAL

check_validity(backend)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is called at the creation of the environment to ensure the Backend and the chronics are consistent with one another.

A call to this method ensures that the actions that will be sent to the current grid2op.Environment can be properly implemented by its grid2op.Backend. This specific method checks that the dimensions of all vectors are consistent.

Parameters:

backend (grid2op.Backend.Backend) – The backend used by the grid2op.Environment.Environment

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

This function is used to initialize the data generator. It can be used to load scenarios, or to initialize noise if scenarios are generated on the fly. It must also initialize GridValue.maintenance_time, GridValue.maintenance_duration and GridValue.hazard_duration.

This function should also increment GridValue.curr_iter by 1 each time it is called.

The GridValue is what makes the connection between the data (generally in the shape of files on the hard drive) and the power grid. One of the main advantages of the Grid2Op package is its ability to change the tool that computes the load flows. Generally, such a grid2op.Backend expects data in a specific format given by the way its internal powergrid is represented; in particular, the “same” objects can have different names and different positions. To ensure that the same chronics produce the same results on every backend (i.e., regardless of the order in which the Backend expects the data, the outcome of the powerflow is the same), we encourage the user to provide a file that maps the name of each object in the chronics to the name of the same object in the backend.

This is done with the “names_chronics_to_backend” dictionary that has the following keys:

  • “loads”

  • “prods”

  • “lines”

The value associated with each of these keys is in turn a mapping dictionary from the chronics to the backend. This means that each key of these sub-dictionaries is the name of one column in the files, and each value is the corresponding name of this same object in the backend. An example is provided below.

Parameters:
  • order_backend_loads (numpy.ndarray, dtype:str) – Ordered name, in the Backend, of the loads. It is required that a grid2op.Backend object always output the informations in the same order. This array gives the name of the loads following this order. See the documentation of grid2op.Backend for more information about this.

  • order_backend_prods (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for generators.

  • order_backend_lines (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for powerlines.

  • order_backend_subs (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for substations.

  • names_chronics_to_backend (dict) – See in the description of the method for more information about its format.

Examples

For example, suppose we have a grid2op.Backend with:

  • substation ids start from 0 to N-1 (N being the number of substations in the powergrid)

  • loads are named “load_i” with “i” the id of the substation to which each is connected

  • generator units are named “gen_i” (i still being the id of the substation to which it is connected)

  • powerlines are named “i_j” if they connect substation i to substation j

And on the other side, we have some files with the following conventions:

  • substations are numbered from 1 to N

  • loads are named “i_C” with i being the substation to which it is connected

  • generators are named “i_G” with i being the id of the substation to which it is connected

  • powerlines are named “i_j_k” where i is the origin substation, j the extremity substation and “k” is a unique identifier of this powerline in the powergrid.

In this case, instead of renaming the powergrid (in the backend) or the data files, it is advised to build the following elements and initialize the object gridval of type GridValue with:

gridval = GridValue()  # Note: this code won't execute because "GridValue" is an abstract class
order_backend_loads = ['load_1', 'load_2', 'load_13', 'load_3', 'load_4', 'load_5', 'load_8', 'load_9',
                         'load_10', 'load_11', 'load_12']
order_backend_prods = ['gen_1', 'gen_2', 'gen_5', 'gen_7', 'gen_0']
order_backend_lines = ['0_1', '0_4', '8_9', '8_13', '9_10', '11_12', '12_13', '1_2', '1_3', '1_4', '2_3',
                           '3_4', '5_10', '5_11', '5_12', '3_6', '3_8', '4_5', '6_7', '6_8']
order_backend_subs = ['sub_0', 'sub_1', 'sub_10', 'sub_11', 'sub_12', 'sub_13', 'sub_2', 'sub_3', 'sub_4',
                          'sub_5', 'sub_6', 'sub_7', 'sub_8', 'sub_9']
names_chronics_to_backend = {"loads": {"2_C": 'load_1', "3_C": 'load_2',
                                           "14": 'load_13', "4_C": 'load_3', "5_C": 'load_4',
                                           "6_C": 'load_5', "9_C": 'load_8', "10_C": 'load_9',
                                           "11_C": 'load_10', "12_C": 'load_11',
                                           "13_C": 'load_12'},
                                 "lines": {'1_2_1': '0_1', '1_5_2': '0_4', '9_10_16': '8_9', '9_14_17': '8_13',
                                          '10_11_18': '9_10', '12_13_19': '11_12', '13_14_20': '12_13',
                                           '2_3_3': '1_2', '2_4_4': '1_3', '2_5_5': '1_4', '3_4_6': '2_3',
                                           '4_5_7': '3_4', '6_11_11': '5_10', '6_12_12': '5_11',
                                           '6_13_13': '5_12', '4_7_8': '3_6', '4_9_9': '3_8', '5_6_10': '4_5',
                                          '7_8_14': '6_7', '7_9_15': '6_8'},
                                 "prods": {"1_G": 'gen_0', "3_G": "gen_2", "6_G": "gen_5",
                                           "2_G": "gen_1", "8_G": "gen_7"},
                                }
gridval.initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend)
load_next()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is automatically called by the “env.step” function. It loads the next information about the grid state (load p and load q, prod p and prod v as well as some maintenance or hazards information)

Generate the next values, either by reading from a file or by generating them on the fly, and return a dictionary compatible with the grid2op.BaseAction class allowed for the Environment.

More information about this dictionary can be found at grid2op.BaseAction.update().

As a (quick) reminder: this dictionary has for keys:

  • “injection” (optional): a dictionary with keys (optional) “load_p”, “load_q”, “prod_p”, “prod_v”

  • “hazards” (optional) : the outages suffered by the grid

  • “maintenance” (optional) : the maintenance operations planned on the grid for the current time step.

Returns:

  • timestamp (datetime.datetime) – The current timestamp for which the modifications have been generated.

  • dict_ (dict) – Always empty, indicating it does nothing (in this case)

  • maintenance_time (numpy.ndarray, dtype:int) – Information about the next planned maintenance. See GridValue.maintenance_time for more information.

  • maintenance_duration (numpy.ndarray, dtype:int) – Information about the duration of next planned maintenance. See GridValue.maintenance_duration for more information.

  • hazard_duration (numpy.ndarray, dtype:int) – Information about the current hazard. See GridValue.hazard_duration for more information.

  • prod_v (numpy.ndarray, dtype:float) – the (stored) value of the generator voltage setpoint

Raises:

StopIteration – if the chronics is over

next_chronics()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Move to the next “chronics”, representing the next “level” if we make the parallel with video games.

A call to this function should at least reset the internal counters (such as the current datetime and the current iteration number).

class grid2op.Chronics.ChronicsHandler(chronicsClass=<class 'grid2op.Chronics.changeNothing.ChangeNothing'>, time_interval=datetime.timedelta(seconds=300), max_iter=-1, **kwargs)[source]

Represents a Chronics handler that returns a grid state.

As stated previously, it is not recommended to create an object directly from the class GridValue. This utility ensures that such objects are created properly.

The types of chronics used can be specified in the ChronicsHandler.chronicsClass attribute.

chronicsClass

Type of chronics that will be loaded and generated. Default is ChangeNothing (NB: the class, and not an object / instance of the class, should be sent here.) This should be a class derived from GridValue.

Type:

type, optional

kwargs

key word arguments that will be used to build new chronics.

Type:

dict, optional

max_iter

Maximum number of iterations per episode.

Type:

int, optional

real_data

An instance of type given by ChronicsHandler.chronicsClass.

Type:

GridValue

path

path where the data are located.

Type:

str (or None)

Methods:

get_name()

This method retrieves a unique name that is used to serialize episode data on disk.

max_episode_duration()

returns:

max_duration -- The maximum duration of the current episode

next_time_step()

This method returns the modification of the powergrid at the next time step for the same episode.

seed(seed)

Seed the chronics handler and the GridValue that is used to generate the chronics.

set_max_iter(max_iter)

This function is used to set the maximum number of iterations possible before the chronics ends.

get_name()[source]

This method retrieves a unique name that is used to serialize episode data on disk.

See definition of EpisodeData for more information about this method.

max_episode_duration()[source]
Returns:

max_duration – The maximum duration of the current episode

Return type:

int

Notes

Using this function (which we do not recommend), you will receive “-1” for an “infinite duration”; otherwise you will receive a positive integer.

next_time_step()[source]

This method returns the modification of the powergrid at the next time step for the same episode.

See definition of GridValue.load_next() for more information about this method.

seed(seed)[source]

Seed the chronics handler and the GridValue that is used to generate the chronics.

Parameters:

seed (int) – Set the seed for this instance and for the data it holds

Returns:

  • seed (int) – The seed used for this object

  • seed_chronics (int) – The seed used for the real data

set_max_iter(max_iter: int)[source]

This function is used to set the maximum number of iterations possible before the chronics ends.

You can reset this by setting it to -1.

Parameters:

max_iter (int) – The maximum number of steps that can be done before reaching the end of the episode

class grid2op.Chronics.FromChronix2grid(env_path: PathLike, with_maintenance: bool, with_loss: bool = True, time_interval: timedelta = datetime.timedelta(seconds=300), max_iter: int = 2016, start_datetime: datetime = datetime.datetime(2019, 1, 1, 0, 0), chunk_size: int | None = None, **kwargs)[source]

This class of “chronix” allows you to use the chronix2grid package to generate data “on the fly” rather than having to read it from the hard drive.

New in version 1.6.6.

Warning

It requires the chronix2grid package to be installed; please install it with:

pip install grid2op[chronix2grid]

And visit https://github.com/bdonnot/chronix2grid#installation for more installation details (in particular you need the coinor-cbc software on your machine)

As of writing, this class is really slow compared to reading data from the hard drive. Indeed, generating a week of data at the 5-minute time resolution (i.e. the data for a “standard” episode) takes roughly 40-45 s for the l2rpn_wcci_2022 environment (based on the IEEE 118 grid).

Notes

It requires lots of extra metadata to use this class. As of writing, only the l2rpn_wcci_2022 is compatible with it.

Examples

To use it (though we do not recommend it) you can do:

import grid2op
from grid2op.Chronics import FromChronix2grid
env_nm = "l2rpn_wcci_2022"  # only compatible environment at time of writing

env = grid2op.make(env_nm,
                   chronics_class=FromChronix2grid,
                   data_feeding_kwargs={"env_path": os.path.join(grid2op.get_current_local_dir(), env_nm),
                                        "with_maintenance": True,  # whether to include maintenance (optional)
                                        "max_iter": 2 * 288,  # duration (in number of steps) of the data generated (optional)
                                        }
                   )

Before using it, please consult the Generate and use an “infinite” data section of the document, that provides a much faster way to do this.

Methods:

check_validity(backend)

INTERNAL

done()

INTERNAL

forecasts()

By default, forecasts are only made 1 step ahead.

get_id()

Utility to get the name of the path where the data currently looked at are located, if the data are files.

initialize(order_backend_loads, ...[, ...])

This function is used to initialize the data generator.

load_next()

INTERNAL

max_timestep()

This method returned the maximum timestep that the current episode can last.

next_chronics()

INTERNAL

tell_id(id_[, previous])

Tell the backend to use one folder for the chronics in particular.

check_validity(backend: Backend | None) None[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is called at the creation of the environment to ensure the Backend and the chronics are consistent with one another.

A call to this method ensures that the actions that will be sent to the current grid2op.Environment can be properly implemented by its grid2op.Backend. This specific method checks that the dimensions of all vectors are consistent.

Parameters:

backend (grid2op.Backend.Backend) – The backend used by the grid2op.Environment.Environment

done()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Compared to GridValue.done(), an episode can be over for several main reasons:

  • GridValue.max_iter has been reached

  • There are no data in the numpy array.

  • i_end has been reached

The episode is done if one of the above conditions is met.

Returns:

res – Whether the episode has reached its end or not.

Return type:

bool

forecasts()[source]

By default, forecasts are only made 1 step ahead.

We could change that. Do not hesitate to make a feature request (https://github.com/rte-france/Grid2Op/issues/new?assignees=&labels=enhancement&template=feature_request.md&title=) if that is necessary for you.

get_id() str[source]

Utility to get the name of the path where the data currently looked at are located, if the data are files.

This could also be used to return a unique identifier for the generated chronics, even in the case where they are generated on the fly, for example by returning a hash of the seed.

Returns:

res – A unique identifier of the chronics generated for this episode. For example, if the chronics comes from a specific folder, this could be the path to this folder.

Return type:

str

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

This function is used to initialize the data generator. It can be used to load scenarios, or to initialize noise if scenarios are generated on the fly. It must also initialize GridValue.maintenance_time, GridValue.maintenance_duration and GridValue.hazard_duration.

This function should also increment GridValue.curr_iter by 1 each time it is called.

The GridValue is what makes the connection between the data (generally in the shape of files on the hard drive) and the power grid. One of the main advantages of the Grid2Op package is its ability to change the tool that computes the load flows. Generally, such a grid2op.Backend expects data in a specific format given by the way its internal powergrid is represented; in particular, the “same” objects can have different names and different positions. To ensure that the same chronics produce the same results on every backend (i.e., regardless of the order in which the Backend expects the data, the outcome of the powerflow is the same), we encourage the user to provide a file that maps the name of each object in the chronics to the name of the same object in the backend.

This is done with the “names_chronics_to_backend” dictionary that has the following keys:

  • “loads”

  • “prods”

  • “lines”

The value associated with each of these keys is in turn a mapping dictionary from the chronics to the backend. This means that each key of these sub-dictionaries is the name of one column in the files, and each value is the corresponding name of this same object in the backend. An example is provided below.

Parameters:
  • order_backend_loads (numpy.ndarray, dtype:str) – Ordered name, in the Backend, of the loads. It is required that a grid2op.Backend object always output the informations in the same order. This array gives the name of the loads following this order. See the documentation of grid2op.Backend for more information about this.

  • order_backend_prods (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for generators.

  • order_backend_lines (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for powerlines.

  • order_backend_subs (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for substations.

  • names_chronics_to_backend (dict) – See in the description of the method for more information about its format.

Examples

For example, suppose we have a grid2op.Backend with:

  • substation ids start from 0 to N-1 (N being the number of substations in the powergrid)

  • loads are named “load_i” with “i” the id of the substation to which each is connected

  • generator units are named “gen_i” (i still being the id of the substation to which it is connected)

  • powerlines are named “i_j” if they connect substation i to substation j

And on the other side, we have some files with the following conventions:

  • substations are numbered from 1 to N

  • loads are named “i_C” with i being the substation to which it is connected

  • generators are named “i_G” with i being the id of the substation to which it is connected

  • powerlines are named “i_j_k” where i is the origin substation, j the extremity substation and “k” is a unique identifier of this powerline in the powergrid.

In this case, instead of renaming the powergrid (in the backend) or the data files, it is advised to build the following elements and initialize the object gridval of type GridValue with:

gridval = GridValue()  # Note: this code won't execute because "GridValue" is an abstract class
order_backend_loads = ['load_1', 'load_2', 'load_13', 'load_3', 'load_4', 'load_5', 'load_8', 'load_9',
                         'load_10', 'load_11', 'load_12']
order_backend_prods = ['gen_1', 'gen_2', 'gen_5', 'gen_7', 'gen_0']
order_backend_lines = ['0_1', '0_4', '8_9', '8_13', '9_10', '11_12', '12_13', '1_2', '1_3', '1_4', '2_3',
                           '3_4', '5_10', '5_11', '5_12', '3_6', '3_8', '4_5', '6_7', '6_8']
order_backend_subs = ['sub_0', 'sub_1', 'sub_10', 'sub_11', 'sub_12', 'sub_13', 'sub_2', 'sub_3', 'sub_4',
                          'sub_5', 'sub_6', 'sub_7', 'sub_8', 'sub_9']
names_chronics_to_backend = {"loads": {"2_C": 'load_1', "3_C": 'load_2',
                                           "14": 'load_13', "4_C": 'load_3', "5_C": 'load_4',
                                           "6_C": 'load_5', "9_C": 'load_8', "10_C": 'load_9',
                                           "11_C": 'load_10', "12_C": 'load_11',
                                           "13_C": 'load_12'},
                                 "lines": {'1_2_1': '0_1', '1_5_2': '0_4', '9_10_16': '8_9', '9_14_17': '8_13',
                                          '10_11_18': '9_10', '12_13_19': '11_12', '13_14_20': '12_13',
                                           '2_3_3': '1_2', '2_4_4': '1_3', '2_5_5': '1_4', '3_4_6': '2_3',
                                           '4_5_7': '3_4', '6_11_11': '5_10', '6_12_12': '5_11',
                                           '6_13_13': '5_12', '4_7_8': '3_6', '4_9_9': '3_8', '5_6_10': '4_5',
                                          '7_8_14': '6_7', '7_9_15': '6_8'},
                                 "prods": {"1_G": 'gen_0', "3_G": "gen_2", "6_G": "gen_5",
                                           "2_G": "gen_1", "8_G": "gen_7"},
                                }
gridval.initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend)
load_next()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is automatically called by the “env.step” function. It loads the next information about the grid state (load p and load q, prod p and prod v as well as some maintenance or hazards information)

Generate the next values, either by reading from a file or by generating them on the fly, and return a dictionary compatible with the grid2op.BaseAction class allowed for the Environment.

More information about this dictionary can be found at grid2op.BaseAction.update().

As a (quick) reminder: this dictionary has for keys:

  • “injection” (optional): a dictionary with keys (optional) “load_p”, “load_q”, “prod_p”, “prod_v”

  • “hazards” (optional) : the outages suffered by the grid

  • “maintenance” (optional) : the maintenance operations planned on the grid for the current time step.

Returns:

  • timestamp (datetime.datetime) – The current timestamp for which the modifications have been generated.

  • dict_ (dict) – Always empty, indicating it does nothing (in this case)

  • maintenance_time (numpy.ndarray, dtype:int) – Information about the next planned maintenance. See GridValue.maintenance_time for more information.

  • maintenance_duration (numpy.ndarray, dtype:int) – Information about the duration of next planned maintenance. See GridValue.maintenance_duration for more information.

  • hazard_duration (numpy.ndarray, dtype:int) – Information about the current hazard. See GridValue.hazard_duration for more information.

  • prod_v (numpy.ndarray, dtype:float) – the (stored) value of the generator voltage setpoint

Raises:

StopIteration – if the chronics is over

max_timestep()[source]

This method returns the maximum number of time steps that the current episode can last. Note that if the grid2op.BaseAgent performs a bad action that leads to a game over, the episode can last fewer steps.

Returns:

res – -1 if possibly infinite length or a positive integer representing the maximum duration of this episode

Return type:

int

next_chronics()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Move to the next “chronics”, representing the next “level” if we make the parallel with video games.

A call to this function should at least reset the internal counters (such as the current datetime and the current iteration number).

tell_id(id_, previous=False)[source]

Tell the backend to use one particular folder for the chronics. This method is mainly used when the GridValue object can deal with many folders. In this case, this method is used by the grid2op.Runner to indicate which chronics to load for the current simulated episode.

This is important to ensure reproducibility, especially in parallel computation settings.

This should also be used in case of generation “on the fly” of the chronics to ensure the same property.

By default it does nothing.

Note

As of grid2op 1.6.4, this function now accepts the return value of self.get_id().

class grid2op.Chronics.FromHandlers(path, load_p_handler, load_q_handler, gen_p_handler, gen_v_handler, maintenance_handler=None, hazards_handler=None, load_p_for_handler=None, load_q_for_handler=None, gen_p_for_handler=None, gen_v_for_handler=None, time_interval=datetime.timedelta(seconds=300), sep=';', max_iter=-1, start_datetime=datetime.datetime(2019, 1, 1, 0, 0), chunk_size=None, h_forecast=(5,))[source]

This class allows you to use the grid2op.Chronics.handlers.BaseHandler (and all the derived classes, see Time Series Handlers) to generate the “input time series” of the environment.

This class does nothing in particular besides making sure the “formalism” of the Handlers can be adapted to generate compliant grid2op data.

See also

Time Series Handlers for more information

In order to use the handlers you need to:

  • tell grid2op that you are going to generate time series from “handlers” by using FromHandlers class

  • for each type of data (“gen_p”, “gen_v”, “load_p”, “load_q”, “maintenance”, “gen_p_forecasted”, “load_p_forecasted”, “load_q_forecasted” and “gen_v_forecasted”) you need to provide a way to “handle” this type of data: you need a specific handler.

You need at least to provide handlers for the environment data types (“gen_p”, “gen_v”, “load_p”, “load_q”).

If you do not provide handlers for some data (e.g. for “maintenance”, “gen_p_forecasted”, “load_p_forecasted”, “load_q_forecasted” and “gen_v_forecasted”) then it will be treated like “change nothing”:

  • there will be no maintenance if you do not provide a handler for maintenance

  • for forecasts it’s a bit different… You will benefit from forecasts if at least one handler generates some (though we do not recommend doing so). In that case, the “missing handlers” will be treated as “no data available, keep the value as it was last time”

Warning

You cannot mix all types of handlers with each other. We wrote in the description of each Handler some conditions for them to work well together.

Examples

You can use the handlers this way:

import grid2op
from grid2op.Chronics import FromHandlers
from grid2op.Chronics.handlers import CSVHandler, DoNothingHandler, PerfectForecastHandler
env_name = "l2rpn_case14_sandbox"

env = grid2op.make(env_name,
                   data_feeding_kwargs={"gridvalueClass": FromHandlers,
                                        "gen_p_handler": CSVHandler("prod_p"),
                                        "load_p_handler": CSVHandler("load_p"),
                                        "gen_v_handler": DoNothingHandler("prod_v"),
                                        "load_q_handler": CSVHandler("load_q"),
                                        "gen_p_for_handler": PerfectForecastHandler("prod_p_forecasted"),
                                        "load_p_for_handler": PerfectForecastHandler("load_p_forecasted"),
                                        "load_q_for_handler": PerfectForecastHandler("load_q_forecasted"),
                                        }
                   )

obs = env.reset()

# and now you can use "env" as any grid2op environment.

More examples are given in the Time Series Handlers section.

Notes

For the environment data, the handlers are called in the order: “load_p”, “load_q”, “gen_p” and finally “gen_v”. They are called at most once per step (per handler).

Then the maintenance (and hazards) data are generated with the appropriate handler.

Finally, the forecast handlers are called after the environment data (and the maintenance data), once per step and per horizon. Horizons are processed “in order” (all data “for 5 minutes”, then all data “for 10 minutes”, then all data “for 15 minutes”, etc.). For a given horizon, like for the environment, the handlers are called in the order: “load_p”, “load_q”, “gen_p” and “gen_v”.
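The calling order described above can be sketched with a few lines of plain Python (a conceptual illustration only, not the actual grid2op implementation; the horizon values are examples):

```python
# Conceptual sketch of the order in which FromHandlers queries its handlers
# at each step (illustration only, not grid2op internals).
env_order = ["load_p", "load_q", "gen_p", "gen_v"]
horizons_min = [5, 10, 15]  # example forecast horizons (minutes ahead)

def call_order(env_order, horizons_min):
    calls = []
    # 1. environment data, at most once per handler, in a fixed order
    calls += [("env", name) for name in env_order]
    # 2. then maintenance (and hazards) data
    calls.append(("env", "maintenance"))
    # 3. forecasts: all data for the first horizon, then the next, etc.
    for h in horizons_min:
        calls += [(f"forecast_{h}min", name) for name in env_order]
    return calls

order = call_order(env_order, horizons_min)
print(order[:5])
```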

About the seeding, the handlers are seeded in the order:

  • load_p

  • load_q

  • gen_p

  • gen_v

  • maintenance

  • hazards

  • load_p_for

  • load_q_for

  • gen_p_for

  • gen_v_for

Each individual handler will have its own pseudo random generator and the same seed will be used regardless of the presence / absence of other handlers.

For example, regardless of whether you have a maintenance_handler, if you call env.seed(0) the load_p_for_handler will behave exactly the same: it will generate the same numbers whether maintenance is present or not.
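This behaviour can be obtained by drawing one sub-seed per handler slot, in the fixed order above, whether or not the corresponding handler is present. A minimal sketch of that design (an assumption about the mechanism for illustration, not the actual grid2op code):

```python
import random

# fixed seeding order, whether or not each handler exists
HANDLER_ORDER = ["load_p", "load_q", "gen_p", "gen_v", "maintenance",
                 "hazards", "load_p_for", "load_q_for", "gen_p_for", "gen_v_for"]

def seed_handlers(master_seed, present):
    """Draw one sub-seed per slot in a fixed order, so each handler's
    seed does not depend on which other handlers are present."""
    rng = random.Random(master_seed)
    seeds = {}
    for name in HANDLER_ORDER:
        sub_seed = rng.randrange(2**31)   # drawn for every slot...
        if name in present:
            seeds[name] = sub_seed        # ...but only kept if the handler exists
    return seeds

with_maint = seed_handlers(0, {"load_p", "maintenance", "load_p_for"})
without_maint = seed_handlers(0, {"load_p", "load_p_for"})
# load_p_for gets the same sub-seed in both cases
assert with_maint["load_p_for"] == without_maint["load_p_for"]
```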

Methods:

check_validity(backend)

INTERNAL

done()

INTERNAL

fast_forward(nb_timestep)

INTERNAL

forecasts()

INTERNAL

get_id()

Utility to get the name of the path where the data are looked for, if data are files.

get_kwargs(dict_)

Overload this function if you want to pass some data when building a new instance of this class.

initialize(order_backend_loads, ...[, ...])

This function is used to initialize the data generator.

load_next()

INTERNAL

max_timestep()

This method returns the maximum number of time steps that the current episode can last.

next_chronics()

INTERNAL

sample_next_chronics([probabilities])

This is used to sample the next chronics with given probabilities.

seed(seed)

INTERNAL

set_chunk_size(new_chunk_size)

This parameter allows you to set, if the data generation process supports it, the amount of data that is read at once.

shuffle([shuffler])

This method can be overridden if the data represented by this object need to be shuffled.

check_validity(backend)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is called at the creation of the environment to ensure the Backend and the chronics are consistent with one another.

A call to this method ensures that the actions that will be sent to the current grid2op.Environment can be properly implemented by its grid2op.Backend. This specific method checks that the dimensions of all vectors are consistent.

Parameters:

backend (grid2op.Backend.Backend) – The backend used by the grid2op.Environment.Environment

done()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Use the ChronicsHandler for such purpose

Whether the episode is over or not.

Returns:

done – True means the episode has arrived at the end (no more data to generate); False means that the episode is not over yet.

Return type:

bool

fast_forward(nb_timestep)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Prefer using grid2op.Environment.BaseEnv.fast_forward_chronics()

This method allows you to skip some time steps at the beginning of the chronics.

This is useful at the beginning of training, if you want your agent to learn on more diverse scenarios. Indeed, the data provided in the chronics usually always start at the same datetime (for example Jan 1st at 00:00). This can lead to suboptimal exploration, as during this phase only a few time steps are managed by the agent, so in general these few time steps will correspond to grid states around Jan 1st at 00:00.

Parameters:

nb_timestep (int) – Number of time step to “fast forward”
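As an illustration of the diversity argument above, skipping a random number of 5-minute steps moves the episode start date (a plain stdlib sketch; in a real setup you would pass such a number to grid2op.Environment.BaseEnv.fast_forward_chronics()):

```python
import random
import datetime

start = datetime.datetime(2019, 1, 1, 0, 0)   # chronics always begin here
step = datetime.timedelta(minutes=5)          # one time step
rng = random.Random(0)

# skip a random number of steps at each reset, so episodes do not all
# begin at Jan 1st 00:00 (here: up to one week of 5-minute steps)
nb_timestep = rng.randrange(7 * 24 * 12)
episode_start = start + nb_timestep * step
print(episode_start)
```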

forecasts()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Use the ChronicsHandler for such purpose

This method is used to generate the forecasts that are made available to the grid2op.BaseAgent. These forecasts behave the same way as a list of tuples like the one returned by the GridValue.load_next() method.

The way they are generated depends on the GridValue class. If no forecasts are made available, then the empty list should be returned.

Returns:

res – Each element of this list has the same type as what is returned by GridValue.load_next().

Return type:

list

get_id() str[source]

Utility to get the name of the path where the data are looked for, if data are files.

This could also be used to return a unique identifier of the generated chronics, even in the case where they are generated on the fly, for example by returning a hash of the seed.

Returns:

res – A unique identifier of the chronics generated for this episode. For example, if the chronics comes from a specific folder, this could be the path to this folder.

Return type:

str

get_kwargs(dict_)[source]

Overload this function if you want to pass some data when building a new instance of this class.

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

This function is used to initialize the data generator. It can be used to load scenarios, or to initialize the noise if scenarios are generated on the fly. It must also initialize GridValue.maintenance_time, GridValue.maintenance_duration and GridValue.hazard_duration.

This function should also increment GridValue.curr_iter by 1 each time it is called.

The GridValue is what makes the connection between the data (generally in the shape of files on the hard drive) and the power grid. One of the main advantages of the Grid2Op package is its ability to change the tool that computes the load flows. Generally, such a grid2op.Backend expects data in a specific format that is given by the way its internal powergrid is represented; in particular, the “same” objects can have different names and different positions. To ensure that the same chronics produce the same results on every backend (ie regardless of the order in which the Backend is expecting the data, the outcome of the powerflow is the same), we encourage the user to provide a file that maps the name of each object in the chronics to the name of the same object in the backend.

This is done with the “names_chronics_to_backend” dictionary that has the following keys:

  • “loads”

  • “prods”

  • “lines”

The value associated with each of these keys is in turn a mapping dictionary from the chronics to the backend. This means that each key of these sub-dictionaries is the name of one column in the files, and each value is the corresponding name of this same object in the backend. An example is provided below.

Parameters:
  • order_backend_loads (numpy.ndarray, dtype:str) – Ordered names, in the Backend, of the loads. It is required that a grid2op.Backend object always outputs the information in the same order. This array gives the names of the loads following this order. See the documentation of grid2op.Backend for more information about this.

  • order_backend_prods (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for generators.

  • order_backend_lines (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for powerlines.

  • order_backend_subs (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for substations.

  • names_chronics_to_backend (dict) – See in the description of the method for more information about its format.

Examples

For example, suppose we have a grid2op.Backend with:

  • substation ids start from 0 to N-1 (N being the number of substations in the powergrid)

  • loads named “load_i” with “i” the id of the substation to which it is connected

  • generator units named “gen_i” (i still being the id of the substation to which it is connected)

  • powerlines named “i_j” if they connect substation i to substation j

And on the other side, we have some files with the following conventions:

  • substations are numbered from 1 to N

  • loads are named “i_C” with i being the substation to which it is connected

  • generators are named “i_G” with i being the id of the substation to which it is connected

  • powerlines are named “i_j_k” where i is the origin substation, j the extremity substation and “k” is a unique identifier of this powerline in the powergrid.

In this case, instead of renaming the powergrid (in the backend) or the data files, it is advised to build the following elements and initialize the object gridval of type GridValue with:

gridval = GridValue()  # Note: this code won't execute because "GridValue" is an abstract class
order_backend_loads = ['load_1', 'load_2', 'load_13', 'load_3', 'load_4', 'load_5', 'load_8', 'load_9',
                         'load_10', 'load_11', 'load_12']
order_backend_prods = ['gen_1', 'gen_2', 'gen_5', 'gen_7', 'gen_0']
order_backend_lines = ['0_1', '0_4', '8_9', '8_13', '9_10', '11_12', '12_13', '1_2', '1_3', '1_4', '2_3',
                           '3_4', '5_10', '5_11', '5_12', '3_6', '3_8', '4_5', '6_7', '6_8']
order_backend_subs = ['sub_0', 'sub_1', 'sub_10', 'sub_11', 'sub_12', 'sub_13', 'sub_2', 'sub_3', 'sub_4',
                          'sub_5', 'sub_6', 'sub_7', 'sub_8', 'sub_9']
names_chronics_to_backend = {"loads": {"2_C": 'load_1', "3_C": 'load_2',
                                           "14": 'load_13', "4_C": 'load_3', "5_C": 'load_4',
                                           "6_C": 'load_5', "9_C": 'load_8', "10_C": 'load_9',
                                           "11_C": 'load_10', "12_C": 'load_11',
                                           "13_C": 'load_12'},
                                 "lines": {'1_2_1': '0_1', '1_5_2': '0_4', '9_10_16': '8_9', '9_14_17': '8_13',
                                          '10_11_18': '9_10', '12_13_19': '11_12', '13_14_20': '12_13',
                                           '2_3_3': '1_2', '2_4_4': '1_3', '2_5_5': '1_4', '3_4_6': '2_3',
                                           '4_5_7': '3_4', '6_11_11': '5_10', '6_12_12': '5_11',
                                           '6_13_13': '5_12', '4_7_8': '3_6', '4_9_9': '3_8', '5_6_10': '4_5',
                                          '7_8_14': '6_7', '7_9_15': '6_8'},
                                 "prods": {"1_G": 'gen_0', "3_G": "gen_2", "6_G": "gen_5",
                                           "2_G": "gen_1", "8_G": "gen_7"},
                                }
gridval.initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend)
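To make the role of this mapping concrete, here is a small self-contained sketch (reusing a subset of the names above, with made-up values) showing how one row of chronics data can be reordered into the backend's order using such a dictionary:

```python
# Illustration: reorder one row of chronics data into the backend's order,
# using a "names_chronics_to_backend"-style mapping (values are made up).
names_chronics_to_backend = {"loads": {"2_C": "load_1", "3_C": "load_2", "14": "load_13"}}
order_backend_loads = ["load_1", "load_2", "load_13"]

chronics_columns = ["14", "2_C", "3_C"]            # column order found in the files
row = {"14": 21.0, "2_C": 5.5, "3_C": 9.2}         # one time step of load_p

# translate chronics names -> backend names, then follow the backend order
backend_name = names_chronics_to_backend["loads"]
by_backend = {backend_name[col]: row[col] for col in chronics_columns}
reordered = [by_backend[name] for name in order_backend_loads]
print(reordered)  # [5.5, 9.2, 21.0] -- values now follow order_backend_loads
```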
load_next()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is automatically called by the “env.step” function. It loads the next information about the grid state (load p and load q, prod p and prod v, as well as some maintenance or hazards information).

Generate the next values, either by reading from a file or by generating them on the fly, and return a dictionary compatible with the grid2op.BaseAction class allowed for the Environment.

More information about this dictionary can be found at grid2op.BaseAction.update().

As a (quick) reminder: this dictionary has for keys:

  • “injection” (optional): a dictionary with keys (optional) “load_p”, “load_q”, “prod_p”, “prod_v”

  • “hazards” (optional): the outages suffered by the grid

  • “maintenance” (optional): the maintenance operations planned on the grid for the current time step.

Returns:

  • timestamp (datetime.datetime) – The current timestamp for which the modifications have been generated.

  • dict_ (dict) – Always empty in this case, indicating that no modification is passed through this dictionary.

  • maintenance_time (numpy.ndarray, dtype:int) – Information about the next planned maintenance. See GridValue.maintenance_time for more information.

  • maintenance_duration (numpy.ndarray, dtype:int) – Information about the duration of next planned maintenance. See GridValue.maintenance_duration for more information.

  • hazard_duration (numpy.ndarray, dtype:int) – Information about the current hazard. See GridValue.hazard_duration for more information.

  • prod_v (numpy.ndarray, dtype:float) – the (stored) value of the generator voltage setpoint

Raises:

StopIteration – if the chronics is over

max_timestep()[source]

This method returns the maximum number of time steps that the current episode can last. Note that if the grid2op.BaseAgent performs a bad action that leads to a game over, then the episode can last less.

Returns:

res – -1 if possibly infinite length or a positive integer representing the maximum duration of this episode

Return type:

int

next_chronics()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Move to the next “chronics”, representing the next “level” if we make the parallel with video games.

A call to this function should at least reset the internal state of the time series (for example the current datetime and the iteration counter).

sample_next_chronics(probabilities=None)[source]

this is used to sample the next chronics used with given probabilities

Parameters:

probabilities (np.ndarray) – Array of (positive) numbers with the same size as the number of chronics in the cache. If it does not sum to one, it is rescaled so that it sums to one.

Returns:

selected – The integer that was selected.

Return type:

int

Examples

Let’s assume the folder names in your chronics are “Scenario_august_dummy” and “Scenario_february_dummy”. For the sake of the example, we want the environment to load the month of February 75% of the time and the month of August 25% of the time.

import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1", test=True)  # don't add "test=True" if
# you don't want to perform a test.

# check at which month will belong each observation
for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it prints "8" (if the chronics is from August) or "2" (if the
    # chronics is from February), each with a probability of 50%

env.seed(0)  # for reproducible experiment
for i in range(10):
    _ = env.chronics_handler.sample_next_chronics([0.25, 0.75])
    obs = env.reset()
    print(obs.month)
    # it prints "2" with probability 0.75 and "8" with probability 0.25
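The rescaling described above can be sketched in plain Python (a conceptual illustration of the normalization and weighted sampling, not the grid2op internals):

```python
import random

def sample_chronics(n_chronics, probabilities, rng=None):
    """Pick one chronics index; probabilities are rescaled to sum to one."""
    rng = rng or random.Random()
    total = sum(probabilities)
    normalized = [p / total for p in probabilities]  # rescale to sum to one
    return rng.choices(range(n_chronics), weights=normalized, k=1)[0]

weights = [1, 3]            # does not sum to one: rescaled to [0.25, 0.75]
rng = random.Random(0)
draws = [sample_chronics(2, weights, rng) for _ in range(10_000)]
print(sum(d == 1 for d in draws) / len(draws))  # close to 0.75
```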
seed(seed)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\ We do not recommend to use this function outside of the two examples given in the description of this class.

Set the seed of the source of pseudo random number used for this RandomObject.

Parameters:

seed (int) – The seed to be set.

Returns:

res – The associated tuple of seeds used. Tuples are returned because in some cases, multiple objects are seeded with the same call to RandomObject.seed()

Return type:

tuple

set_chunk_size(new_chunk_size)[source]

This parameter allows you to set, if the data generation process supports it, the amount of data that is read at once. It can help speed up the computation process by giving more control over the I/O operations.

Parameters:

new_chunk_size (int) – The chunk size (ie the number of rows that will be read on each data set at the same time)
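The idea behind the chunk size can be illustrated with a generic stdlib sketch (not the actual grid2op reader): instead of loading the whole file at once, rows are read chunk_size at a time:

```python
import csv
import io

def read_in_chunks(file_obj, chunk_size):
    """Yield lists of at most `chunk_size` rows from a ';'-separated file."""
    reader = csv.reader(file_obj, delimiter=";")
    chunk = []
    for row in reader:
        chunk.append(row)
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk  # last, possibly smaller, chunk

data = io.StringIO("1;2\n3;4\n5;6\n7;8\n9;10\n")
chunks = list(read_in_chunks(data, chunk_size=2))
print([len(c) for c in chunks])  # [2, 2, 1]
```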

shuffle(shuffler=None)[source]

This method can be overridden if the data represented by this object need to be shuffled.

By default it does nothing.

Parameters:

shuffler (object) – Any function that can be used to shuffle the data.

class grid2op.Chronics.FromMultiEpisodeData(path, li_ep_data: List[str | Path | EpisodeData | Tuple[str, str]], time_interval=datetime.timedelta(seconds=300), sep=';', max_iter=-1, start_datetime=datetime.datetime(2019, 1, 1, 0, 0), chunk_size=None, list_perfect_forecasts=None, **kwargs)[source]

This class allows you to replay episodes that have previously been run using a runner.

It is an extension of the class FromOneEpisodeData but with multiple episodes.

See also

grid2op.Chronics.FromOneEpisodeData if you want to use only one episode

Warning

It has the same limitation as grid2op.Chronics.FromOneEpisodeData, including:

  • forecasts are not saved, so they cannot be retrieved with this class. You can however use obs.simulate, and in this case it will yield perfect forecasts.

  • to make sure you are running exactly the same episode, you need to create the environment with the grid2op.Opponent.FromEpisodeDataOpponent opponent

Examples

You can use this class this way:

First, you generate some data by running episodes with a do-nothing or reco powerline agent, preferably episodes that go until the end of your time series:

import grid2op
from grid2op.Runner import Runner
from grid2op.Agent import RecoPowerlineAgent

path_agent = ....
nb_episode = ...
env_name = "l2rpn_case14_sandbox"  # or any other name
env = grid2op.make(env_name, etc.)

# optional (change the parameters so the agent is more likely to reach the end of the chronics)
param = env.parameters
param.NO_OVERFLOW_DISCONNECTION = True
env.change_parameters(param)
env.reset()
# end optional

runner = Runner(**env.get_params_for_runner(),
                agentClass=RecoPowerlineAgent)
runner.run(nb_episode=nb_episode,
           path_save=path_agent)

And then you can load it back and run the exact same environment with the same time series, the same attacks etc. with:

import grid2op
from grid2op.Chronics import FromMultiEpisodeData
from grid2op.Opponent import FromEpisodeDataOpponent
from grid2op.Episode import EpisodeData

path_agent = ....  # same as above
env_name = .... # same as above

# path_agent is the path where data coming from a grid2op runner are stored
# NB it should come from a do nothing agent, or at least
# an agent that does not modify the injections (no redispatching, curtailment, storage)
li_episode = EpisodeData.list_episode(path_agent)

env = grid2op.make(env_name,
                   chronics_class=FromMultiEpisodeData,
                   data_feeding_kwargs={"li_ep_data": li_episode},
                   opponent_class=FromEpisodeDataOpponent,
                   opponent_attack_cooldown=1,
              )
# li_ep_data in this case is a list of anything that is accepted by `FromOneEpisodeData`

obs = env.reset()

# and now you can use "env" as any grid2op environment.

Methods:

check_validity(backend)

INTERNAL

done()

INTERNAL

fast_forward(nb_timestep)

INTERNAL

forecasts()

INTERNAL

get_id()

Utility to get the name of the path where the data are looked for, if data are files.

initialize(order_backend_loads, ...[, ...])

This function is used to initialize the data generator.

load_next()

INTERNAL

max_timestep()

This method returns the maximum number of time steps that the current episode can last.

next_chronics()

INTERNAL

tell_id(id_num[, previous])

Tell the backend to use one particular folder for the chronics.

check_validity(backend)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is called at the creation of the environment to ensure the Backend and the chronics are consistent with one another.

A call to this method ensures that the actions that will be sent to the current grid2op.Environment can be properly implemented by its grid2op.Backend. This specific method checks that the dimensions of all vectors are consistent.

Parameters:

backend (grid2op.Backend.Backend) – The backend used by the grid2op.Environment.Environment

done()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Use the ChronicsHandler for such purpose

Whether the episode is over or not.

Returns:

done – True means the episode has arrived at the end (no more data to generate); False means that the episode is not over yet.

Return type:

bool

fast_forward(nb_timestep)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Prefer using grid2op.Environment.BaseEnv.fast_forward_chronics()

This method allows you to skip some time steps at the beginning of the chronics.

This is useful at the beginning of training, if you want your agent to learn on more diverse scenarios. Indeed, the data provided in the chronics usually always start at the same datetime (for example Jan 1st at 00:00). This can lead to suboptimal exploration, as during this phase only a few time steps are managed by the agent, so in general these few time steps will correspond to grid states around Jan 1st at 00:00.

Parameters:

nb_timestep (int) – Number of time step to “fast forward”

forecasts()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Use the ChronicsHandler for such purpose

This method is used to generate the forecasts that are made available to the grid2op.BaseAgent. These forecasts behave the same way as a list of tuples like the one returned by the GridValue.load_next() method.

The way they are generated depends on the GridValue class. If no forecasts are made available, then the empty list should be returned.

Returns:

res – Each element of this list has the same type as what is returned by GridValue.load_next().

Return type:

list

get_id() str[source]

Utility to get the name of the path where the data are looked for, if data are files.

This could also be used to return a unique identifier of the generated chronics, even in the case where they are generated on the fly, for example by returning a hash of the seed.

Returns:

res – A unique identifier of the chronics generated for this episode. For example, if the chronics comes from a specific folder, this could be the path to this folder.

Return type:

str

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

This function is used to initialize the data generator. It can be used to load scenarios, or to initialize the noise if scenarios are generated on the fly. It must also initialize GridValue.maintenance_time, GridValue.maintenance_duration and GridValue.hazard_duration.

This function should also increment GridValue.curr_iter by 1 each time it is called.

The GridValue is what makes the connection between the data (generally in the shape of files on the hard drive) and the power grid. One of the main advantages of the Grid2Op package is its ability to change the tool that computes the load flows. Generally, such a grid2op.Backend expects data in a specific format that is given by the way its internal powergrid is represented; in particular, the “same” objects can have different names and different positions. To ensure that the same chronics produce the same results on every backend (ie regardless of the order in which the Backend is expecting the data, the outcome of the powerflow is the same), we encourage the user to provide a file that maps the name of each object in the chronics to the name of the same object in the backend.

This is done with the “names_chronics_to_backend” dictionary that has the following keys:

  • “loads”

  • “prods”

  • “lines”

The value associated with each of these keys is in turn a mapping dictionary from the chronics to the backend. This means that each key of these sub-dictionaries is the name of one column in the files, and each value is the corresponding name of this same object in the backend. An example is provided below.

Parameters:
  • order_backend_loads (numpy.ndarray, dtype:str) – Ordered names, in the Backend, of the loads. It is required that a grid2op.Backend object always outputs the information in the same order. This array gives the names of the loads following this order. See the documentation of grid2op.Backend for more information about this.

  • order_backend_prods (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for generators.

  • order_backend_lines (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for powerlines.

  • order_backend_subs (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for substations.

  • names_chronics_to_backend (dict) – See in the description of the method for more information about its format.

Examples

For example, suppose we have a grid2op.Backend with:

  • substation ids start from 0 to N-1 (N being the number of substations in the powergrid)

  • loads named “load_i” with “i” the id of the substation to which it is connected

  • generator units named “gen_i” (i still being the id of the substation to which it is connected)

  • powerlines named “i_j” if they connect substation i to substation j

And on the other side, we have some files with the following conventions:

  • substations are numbered from 1 to N

  • loads are named “i_C” with i being the substation to which it is connected

  • generators are named “i_G” with i being the id of the substation to which it is connected

  • powerlines are named “i_j_k” where i is the origin substation, j the extremity substation and “k” is a unique identifier of this powerline in the powergrid.

In this case, instead of renaming the powergrid (in the backend) or the data files, it is advised to build the following elements and initialize the object gridval of type GridValue with:

gridval = GridValue()  # Note: this code won't execute because "GridValue" is an abstract class
order_backend_loads = ['load_1', 'load_2', 'load_13', 'load_3', 'load_4', 'load_5', 'load_8', 'load_9',
                         'load_10', 'load_11', 'load_12']
order_backend_prods = ['gen_1', 'gen_2', 'gen_5', 'gen_7', 'gen_0']
order_backend_lines = ['0_1', '0_4', '8_9', '8_13', '9_10', '11_12', '12_13', '1_2', '1_3', '1_4', '2_3',
                           '3_4', '5_10', '5_11', '5_12', '3_6', '3_8', '4_5', '6_7', '6_8']
order_backend_subs = ['sub_0', 'sub_1', 'sub_10', 'sub_11', 'sub_12', 'sub_13', 'sub_2', 'sub_3', 'sub_4',
                          'sub_5', 'sub_6', 'sub_7', 'sub_8', 'sub_9']
names_chronics_to_backend = {"loads": {"2_C": 'load_1', "3_C": 'load_2',
                                           "14": 'load_13', "4_C": 'load_3', "5_C": 'load_4',
                                           "6_C": 'load_5', "9_C": 'load_8', "10_C": 'load_9',
                                           "11_C": 'load_10', "12_C": 'load_11',
                                           "13_C": 'load_12'},
                                 "lines": {'1_2_1': '0_1', '1_5_2': '0_4', '9_10_16': '8_9', '9_14_17': '8_13',
                                          '10_11_18': '9_10', '12_13_19': '11_12', '13_14_20': '12_13',
                                           '2_3_3': '1_2', '2_4_4': '1_3', '2_5_5': '1_4', '3_4_6': '2_3',
                                           '4_5_7': '3_4', '6_11_11': '5_10', '6_12_12': '5_11',
                                           '6_13_13': '5_12', '4_7_8': '3_6', '4_9_9': '3_8', '5_6_10': '4_5',
                                          '7_8_14': '6_7', '7_9_15': '6_8'},
                                 "prods": {"1_G": 'gen_0', "3_G": "gen_2", "6_G": "gen_5",
                                           "2_G": "gen_1", "8_G": "gen_7"},
                                }
gridval.initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend)
load_next()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is automatically called by the “env.step” function. It loads the next information about the grid state (load p and load q, prod p and prod v, as well as some maintenance or hazards information).

Generate the next values, either by reading from a file or by generating them on the fly, and return a dictionary compatible with the grid2op.BaseAction class allowed for the Environment.

More information about this dictionary can be found at grid2op.BaseAction.update().

As a (quick) reminder: this dictionary has for keys:

  • “injection” (optional): a dictionary with keys (optional) “load_p”, “load_q”, “prod_p”, “prod_v”

  • “hazards” (optional): the outages suffered by the grid

  • “maintenance” (optional): the maintenance operations planned on the grid for the current time step.

Returns:

  • timestamp (datetime.datetime) – The current timestamp for which the modifications have been generated.

  • dict_ (dict) – Always empty in this case, indicating that no modification is passed through this dictionary.

  • maintenance_time (numpy.ndarray, dtype:int) – Information about the next planned maintenance. See GridValue.maintenance_time for more information.

  • maintenance_duration (numpy.ndarray, dtype:int) – Information about the duration of next planned maintenance. See GridValue.maintenance_duration for more information.

  • hazard_duration (numpy.ndarray, dtype:int) – Information about the current hazard. See GridValue.hazard_duration for more information.

  • prod_v (numpy.ndarray, dtype:float) – the (stored) value of the generator voltage setpoint

Raises:

StopIteration – if the chronics is over

max_timestep()[source]

This method returns the maximum number of time steps that the current episode can last. Note that if the grid2op.BaseAgent performs a bad action that leads to a game over, then the episode can last less.

Returns:

res – -1 if possibly infinite length or a positive integer representing the maximum duration of this episode

Return type:

int

next_chronics()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Move to the next “chronics”, representing the next “level” if we make the parallel with video games.

A call to this function should at least reset the internal state of the time series (for example the current datetime and the iteration counter).

tell_id(id_num, previous=False)[source]

Tell the backend to use one particular folder for the chronics. This method is mainly used when the GridValue object can deal with many folders. In this case, this method is used by the grid2op.Runner to indicate which chronics to load for the current simulated episode.

This is important to ensure reproducibility, especially in parallel computation settings.

This should also be used in case of generation “on the fly” of the chronics to ensure the same property.

By default it does nothing.

Note

As of grid2op 1.6.4, this function now accepts the return value of self.get_id().

class grid2op.Chronics.FromNPY(load_p: ndarray, load_q: ndarray, prod_p: ndarray, prod_v: ndarray | None = None, hazards: ndarray | None = None, maintenance: ndarray | None = None, load_p_forecast: ndarray | None = None, load_q_forecast: ndarray | None = None, prod_p_forecast: ndarray | None = None, prod_v_forecast: ndarray | None = None, time_interval: timedelta = datetime.timedelta(seconds=300), max_iter: int = -1, start_datetime: datetime = datetime.datetime(2019, 1, 1, 0, 0), chunk_size: int | None = None, i_start: int | None = None, i_end: int | None = None, **kwargs)[source]

This class allows you to generate chronics compatible with grid2op when the data are provided in numpy format.

It also makes it possible to start the chronics at a different time than the original one, and to end them before the end of the data.

It is thus much more flexible in its usage than the default chronics, but it is also more error prone. For example, it does not check the order of the loads / generators that you provide.

Warning

It assumes the order of the elements is consistent with the powergrid backend! It will not attempt to reorder the columns of the dataset.
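Concretely, the arrays given to FromNPY are expected to have one row per time step and one column per element, with columns already in the backend's order. A toy numpy sketch (the sizes below are illustrative, not tied to any particular grid):

```python
import numpy as np

n_steps, n_load, n_gen = 288, 11, 5  # one day at 5-minute steps (toy sizes)
rng = np.random.default_rng(0)
# rows: time steps; columns: individual loads / generators, in backend order
load_p = rng.uniform(5., 15., size=(n_steps, n_load)).astype(np.float32)
load_q = (0.7 * load_p).astype(np.float32)  # toy reactive consumption
prod_p = rng.uniform(0., 80., size=(n_steps, n_gen)).astype(np.float32)
prod_v = np.full((n_steps, n_gen), 142., dtype=np.float32)  # constant setpoint
```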

Note

The effects of “i_start” and “i_end” are persistent. If you set them once, they affect the object even after “env.reset()” is called. If you want to modify them, you need to use the FromNPY.change_i_start() and FromNPY.change_i_end() methods (and call env.reset()!)

TODO: implement methods to change the loads / production “based on sampling” (online sampling instead of only reading data)

TODO: implement the possibility to simulate maintenance / hazards “on the fly”

TODO: implement hazards!

Examples

Usage example, for what you don’t really have to do:

import grid2op
from grid2op.Chronics import FromNPY

# first retrieve the data that you want; the easiest way is to create an environment and read the data from it.
env_name = "l2rpn_case14_sandbox"  # for example
env_ref = grid2op.make(env_name)
# retrieve the data
load_p = 1.0 * env_ref.chronics_handler.real_data.data.load_p
load_q = 1.0 * env_ref.chronics_handler.real_data.data.load_q
prod_p = 1.0 * env_ref.chronics_handler.real_data.data.prod_p
prod_v = 1.0 * env_ref.chronics_handler.real_data.data.prod_v

# now create an environment with these chronics:
env = grid2op.make(env_name,
                   chronics_class=FromNPY,
                   data_feeding_kwargs={"i_start": 5,  # start at the "step" 5 NB first step is first observation, available with `obs = env.reset()`
                                        "i_end": 18,  # end index: data after that will not be considered (excluded as per python convention)
                                        "load_p": load_p,
                                        "load_q": load_q,
                                        "prod_p": prod_p,
                                        "prod_v": prod_v
                                        # other parameters includes
                                        # maintenance
                                        # load_p_forecast
                                        # load_q_forecast
                                        # prod_p_forecast
                                        # prod_v_forecast
                                        })

# you can use env normally, including in runners
obs = env.reset()
# obs.load_p is load_p[5] (because you set "i_start" = 5, by default it's 0)

You can, after creation, change the data with:

# create env as above

# retrieve some new values that you would like
new_load_p = ...
new_load_q = ...
new_prod_p = ...
new_prod_v = ...

# change the values
env.chronics_handler.real_data.change_chronics(new_load_p, new_load_q, new_prod_p, new_prod_v)
obs = env.reset()  # mandatory if you want the change to be taken into account
# obs.load_p is new_load_p[5]  (or rather new_load_p[env.chronics_handler.real_data._i_start])

Methods:

change_chronics([new_load_p, new_load_q, ...])

Allows to change the data used by this class.

change_forecasts([new_load_p, new_load_q, ...])

Allows to change the data used by this class in the "obs.simulate" function.

change_i_end(new_i_end)

Allows to change the "i_end".

change_i_start(new_i_start)

Allows to change the "i_start".

check_validity(backend)

INTERNAL

done()

INTERNAL

forecasts()

By default, forecasts are only made 1 step ahead.

get_id()

To return a unique ID of the chronics, we use a hash function (blake2b), but it outputs a name that is too long (64 characters or so).

initialize(order_backend_loads, ...[, ...])

This function is used to initialize the data generator.

load_next()

INTERNAL

max_timestep()

This method returns the maximum number of time steps the current episode can last.

next_chronics()

INTERNAL

change_chronics(new_load_p: ndarray | None = None, new_load_q: ndarray | None = None, new_prod_p: ndarray | None = None, new_prod_v: ndarray | None = None)[source]

Allows to change the data used by this class.

Warning

This has an effect only after “env.reset” has been called !

Parameters:
  • new_load_p (np.ndarray, optional) – change the load_p. Defaults to None (= do not change).

  • new_load_q (np.ndarray, optional) – change the load_q. Defaults to None (= do not change).

  • new_prod_p (np.ndarray, optional) – change the prod_p. Defaults to None (= do not change).

  • new_prod_v (np.ndarray, optional) – change the prod_v. Defaults to None (= do not change).

Examples

import grid2op
from grid2op.Chronics import FromNPY
# create an environment as in this class description (in short: )

load_p = ...  # find somehow a suitable "load_p" array: rows represent time, columns the individual load
load_q = ...
prod_p = ...
prod_v = ...

# now create an environment with these chronics:
env = grid2op.make(env_name,
                   chronics_class=FromNPY,
                   data_feeding_kwargs={"load_p": load_p,
                                        "load_q": load_q,
                                        "prod_p": prod_p,
                                        "prod_v": prod_v}
                   )
obs = env.reset()  # obs.load_p is load_p[0] (or rather load_p[env.chronics_handler.real_data._i_start])

new_load_p = ...  # find somehow a new suitable "load_p"
new_load_q = ...
new_prod_p = ...
new_prod_v = ...

env.chronics_handler.real_data.change_chronics(new_load_p, new_load_q, new_prod_p, new_prod_v)
# has no effect at this stage

obs = env.reset()  # now has some effect !
# obs.load_p is new_load_p[0]  (or rather new_load_p[env.chronics_handler.real_data._i_start])
change_forecasts(new_load_p: ndarray | None = None, new_load_q: ndarray | None = None, new_prod_p: ndarray | None = None, new_prod_v: ndarray | None = None)[source]

Allows to change the data used by this class in the “obs.simulate” function.

Warning

This has an effect only after “env.reset” has been called !

Parameters:
  • new_load_p (np.ndarray, optional) – change the load_p_forecast. Defaults to None (= do not change).

  • new_load_q (np.ndarray, optional) – change the load_q_forecast. Defaults to None (= do not change).

  • new_prod_p (np.ndarray, optional) – change the prod_p_forecast. Defaults to None (= do not change).

  • new_prod_v (np.ndarray, optional) – change the prod_v_forecast. Defaults to None (= do not change).

Examples

import grid2op
from grid2op.Chronics import FromNPY
# create an environment as in this class description (in short: )

load_p = ...  # find somehow a suitable "load_p" array: rows represent time, columns the individual load
load_q = ...
prod_p = ...
prod_v = ...
load_p_forecast = ...
load_q_forecast = ...
prod_p_forecast = ...
prod_v_forecast = ...

env = grid2op.make(env_name,
                   chronics_class=FromNPY,
                   data_feeding_kwargs={"load_p": load_p,
                                        "load_q": load_q,
                                        "prod_p": prod_p,
                                        "prod_v": prod_v,
                                        "load_p_forecast": load_p_forecast,
                                        "load_q_forecast": load_q_forecast,
                                        "prod_p_forecast": prod_p_forecast,
                                        "prod_v_forecast": prod_v_forecast
                                        })

new_load_p_forecast = ...  # find somehow a new suitable "load_p_forecast"
new_load_q_forecast = ...
new_prod_p_forecast = ...
new_prod_v_forecast = ...

env.chronics_handler.real_data.change_forecasts(new_load_p_forecast, new_load_q_forecast, new_prod_p_forecast, new_prod_v_forecast)
# has no effect at this stage

obs = env.reset()  # now has some effect !
sim_o, *_ = obs.simulate(env.action_space())  # sim_o.load_p has the values of new_load_p_forecast[0]
change_i_end(new_i_end: int | None)[source]

Allows to change the “i_end”.

Warning

It only has an effect after “env.reset()” is called.

Examples

import grid2op
from grid2op.Chronics import FromNPY
# create an environment as in this class description (in short: )

load_p = ...  # find somehow a suitable "load_p" array: rows represent time, columns the individual load
load_q = ...
prod_p = ...
prod_v = ...

# now create an environment with these chronics:
env = grid2op.make(env_name,
                   chronics_class=FromNPY,
                   data_feeding_kwargs={"load_p": load_p,
                                        "load_q": load_q,
                                        "prod_p": prod_p,
                                        "prod_v": prod_v}
                   )
obs = env.reset()

env.chronics_handler.real_data.change_i_end(150)
obs = env.reset()
# indeed `env.chronics_handler.real_data._i_end` has been changed to 150.
# the scenario length will be at most 150 !

# to undo all changes (and use the defaults) you can:
# env.chronics_handler.real_data.change_i_end(None)
change_i_start(new_i_start: int | None)[source]

Allows to change the “i_start”.

Warning

It only has an effect after “env.reset()” is called.

Examples

import grid2op
from grid2op.Chronics import FromNPY
# create an environment as in this class description (in short: )

load_p = ...  # find somehow a suitable "load_p" array: rows represent time, columns the individual load
load_q = ...
prod_p = ...
prod_v = ...

# now create an environment with these chronics:
env = grid2op.make(env_name,
                   chronics_class=FromNPY,
                   data_feeding_kwargs={"load_p": load_p,
                                        "load_q": load_q,
                                        "prod_p": prod_p,
                                        "prod_v": prod_v}
                   )
obs = env.reset()  # obs.load_p is load_p[0] (or rather load_p[env.chronics_handler.real_data._i_start])

env.chronics_handler.real_data.change_i_start(10)
obs = env.reset()  # obs.load_p is load_p[10]
# indeed `env.chronics_handler.real_data._i_start` has been changed to 10.

# to undo all changes (and use the defaults) you can:
# env.chronics_handler.real_data.change_i_start(None)
check_validity(backend: Backend | None) None[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is called at the creation of the environment to ensure the Backend and the chronics are consistent with one another.

A call to this method ensures that the actions that will be sent to the current grid2op.Environment can be properly implemented by its grid2op.Backend. This specific method checks that the dimensions of all vectors are consistent.

Parameters:

backend (grid2op.Backend.Backend) – The backend used by the grid2op.Environment.Environment

done()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Compared to GridValue.done(), with this class an episode can be over for three main reasons:

  • GridValue.max_iter has been reached

  • There are no more data in the numpy arrays.

  • i_end has been reached

The episode is done if one of the above conditions is met.

Returns:

res – Whether the episode has reached its end or not.

Return type:

bool
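The three stopping conditions above can be sketched as follows (a simplified illustration under stated assumptions, not the actual implementation; the function and variable names are hypothetical, with curr_iter counting elapsed steps and n_rows the number of rows of the arrays):

```python
def episode_done(curr_iter, max_iter, n_rows, i_start, i_end):
    """Sketch of the termination test: True once any stopping condition holds."""
    if max_iter != -1 and curr_iter >= max_iter:
        return True  # GridValue.max_iter has been reached
    last = n_rows if i_end is None else min(i_end, n_rows)
    # no more data in the numpy arrays, or i_end has been reached
    return i_start + curr_iter >= last
```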

forecasts()[source]

By default, forecasts are only made 1 step ahead.

We could change that. Do not hesitate to make a feature request (https://github.com/rte-france/Grid2Op/issues/new?assignees=&labels=enhancement&template=feature_request.md&title=) if that is necessary for you.

get_id() str[source]

To return a unique ID of the chronics, we use a hash function (blake2b), but it outputs a name that is too long (64 characters or so). So we hash it again with md5 to get an id of reasonable length (32 characters)

Returns:

the hash of the arrays (load_p, load_q, etc.) in the chronics

Return type:

str
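The double-hash idea can be sketched like this (an illustration of the principle, not the exact grid2op code; short_id is a hypothetical helper):

```python
import hashlib
import numpy as np

def short_id(*arrays):
    """Hash the raw bytes of the arrays with blake2b, then shorten with md5."""
    h = hashlib.blake2b()
    for arr in arrays:
        h.update(np.ascontiguousarray(arr).tobytes())
    # md5 of the 64-byte blake2b digest gives a 32-character hex id
    return hashlib.md5(h.digest()).hexdigest()
```

The same arrays always yield the same id, so it can serve as a compact, deterministic identifier of a chronics.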

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

This function is used to initialize the data generator. It can be used to load scenarios, or to initialize the noise if scenarios are generated on the fly. It must also initialize GridValue.maintenance_time, GridValue.maintenance_duration and GridValue.hazard_duration.

This function should also increment GridValue.curr_iter by 1 each time it is called.

The GridValue is what makes the connection between the data (generally files on the hard drive) and the power grid. One of the main advantages of the Grid2Op package is its ability to change the tool that computes the load flows. Generally, such a grid2op.Backend expects data in a specific format, given by the way its internal powergrid is represented; in particular, the “same” objects can have different names and different positions. To ensure that the same chronics produce the same results on every backend (i.e. regardless of the order in which the Backend expects the data, the outcome of the powerflow is the same), we encourage the user to provide a file that maps the name of each object in the chronics to the name of the same object in the backend.

This is done with the “names_chronics_to_backend” dictionary, which has the following keys:

  • “loads”

  • “prods”

  • “lines”

The value associated with each of these keys is in turn a mapping dictionary from the chronics to the backend: each key of these sub-dictionaries is the name of a column in the files, and each value is the name of the same object in the backend. An example is provided below.

Parameters:
  • order_backend_loads (numpy.ndarray, dtype:str) – Ordered names, in the Backend, of the loads. It is required that a grid2op.Backend object always outputs the information in the same order. This array gives the names of the loads following this order. See the documentation of grid2op.Backend for more information about this.

  • order_backend_prods (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for generators.

  • order_backend_lines (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for powerlines.

  • order_backend_subs (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for substations.

  • names_chronics_to_backend (dict) – See in the description of the method for more information about its format.

Examples

For example, suppose we have a grid2op.Backend with:

  • substation ids start from 0 to N-1 (N being the number of substations in the powergrid)

  • loads are named “load_i”, with “i” the substation to which each is connected

  • generator units are named “gen_i” (i still being the id of the substation to which each is connected)

  • powerlines are named “i_j” if they connect substation i to substation j

And on the other side, we have some files with the following conventions:

  • substations are numbered from 1 to N

  • loads are named “i_C”, with i being the substation to which each is connected

  • generators are named “i_G”, with i being the id of the substation to which each is connected

  • powerlines are named “i_j_k”, where i is the origin substation, j the extremity substation and “k” a unique identifier of this powerline in the powergrid.

In this case, instead of renaming the powergrid (in the backend) or the data files, it is advised to build the following elements and initialize the object gridval of type GridValue with:

gridval = GridValue()  # Note: this code won't execute because "GridValue" is an abstract class
order_backend_loads = ['load_1', 'load_2', 'load_13', 'load_3', 'load_4', 'load_5', 'load_8', 'load_9',
                         'load_10', 'load_11', 'load_12']
order_backend_prods = ['gen_1', 'gen_2', 'gen_5', 'gen_7', 'gen_0']
order_backend_lines = ['0_1', '0_4', '8_9', '8_13', '9_10', '11_12', '12_13', '1_2', '1_3', '1_4', '2_3',
                           '3_4', '5_10', '5_11', '5_12', '3_6', '3_8', '4_5', '6_7', '6_8']
order_backend_subs = ['sub_0', 'sub_1', 'sub_10', 'sub_11', 'sub_12', 'sub_13', 'sub_2', 'sub_3', 'sub_4',
                          'sub_5', 'sub_6', 'sub_7', 'sub_8', 'sub_9']
names_chronics_to_backend = {"loads": {"2_C": 'load_1', "3_C": 'load_2',
                                           "14": 'load_13', "4_C": 'load_3', "5_C": 'load_4',
                                           "6_C": 'load_5', "9_C": 'load_8', "10_C": 'load_9',
                                           "11_C": 'load_10', "12_C": 'load_11',
                                           "13_C": 'load_12'},
                                 "lines": {'1_2_1': '0_1', '1_5_2': '0_4', '9_10_16': '8_9', '9_14_17': '8_13',
                                          '10_11_18': '9_10', '12_13_19': '11_12', '13_14_20': '12_13',
                                           '2_3_3': '1_2', '2_4_4': '1_3', '2_5_5': '1_4', '3_4_6': '2_3',
                                           '4_5_7': '3_4', '6_11_11': '5_10', '6_12_12': '5_11',
                                           '6_13_13': '5_12', '4_7_8': '3_6', '4_9_9': '3_8', '5_6_10': '4_5',
                                          '7_8_14': '6_7', '7_9_15': '6_8'},
                                 "prods": {"1_G": 'gen_0', "3_G": "gen_2", "6_G": "gen_5",
                                           "2_G": "gen_1", "8_G": "gen_7"},
                                }
gridval.initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend)
load_next()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is automatically called by the “env.step” function. It loads the next information about the grid state (load p and load q, prod p and prod v as well as some maintenance or hazards information)

Generate the next values, either by reading them from a file or by generating them on the fly, and return a dictionary compatible with the grid2op.BaseAction class allowed for the Environment.

More information about this dictionary can be found at grid2op.BaseAction.update().

As a (quick) reminder: this dictionary has for keys:

  • “injection” (optional): a dictionary with keys (optional) “load_p”, “load_q”, “prod_p”, “prod_v”

  • “hazards” (optional): the outages suffered by the grid

  • “maintenance” (optional) : the maintenance operations planned on the grid for the current time step.

Returns:

  • timestamp (datetime.datetime) – The current timestamp for which the modifications have been generated.

  • dict_ (dict) – Always empty, indicating a “do nothing” action (for this case)

  • maintenance_time (numpy.ndarray, dtype:int) – Information about the next planned maintenance. See GridValue.maintenance_time for more information.

  • maintenance_duration (numpy.ndarray, dtype:int) – Information about the duration of next planned maintenance. See GridValue.maintenance_duration for more information.

  • hazard_duration (numpy.ndarray, dtype:int) – Information about the current hazard. See GridValue.hazard_duration for more information.

  • prod_v (numpy.ndarray, dtype:float) – the (stored) value of the generator voltage setpoint

Raises:

StopIteration – if the chronics is over

max_timestep()[source]

This method returns the maximum number of time steps the current episode can last. Note that if the grid2op.BaseAgent performs a bad action that leads to a game over, then the episode can last less.

Returns:

res – -1 if possibly infinite length or a positive integer representing the maximum duration of this episode

Return type:

int

next_chronics()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Move to the next “chronics”, representing the next “level” if we draw a parallel with video games.

A call to this function should at least reset the internal state of the object (for example the current iteration counter).

class grid2op.Chronics.FromOneEpisodeData(path, ep_data: str | Path | EpisodeData | Tuple[str, str], time_interval=datetime.timedelta(seconds=300), sep=';', max_iter=-1, start_datetime=datetime.datetime(2019, 1, 1, 0, 0), chunk_size=None, list_perfect_forecasts=None, **kwargs)[source]

This class allows you to use the grid2op.Chronics.handlers.BaseHandler to read back data stored in grid2op.Episode.EpisodeData

It can be used if you want to loop indefinitely through one episode.

New in version 1.9.4.

TODO: there will only be “perfect” forecasts, as the original forecasts are not stored!

Warning

Original forecasts are not stored by the runner. This is why you cannot use the same information as available in the original “obs.simulate”.

However, you can still use PERFECT FORECASTS if you want to, by providing the extra parameter “list_perfect_forecasts=[forecast_horizon_1, forecast_horizon_2, etc.]” when you build this class (see examples below).

Danger

If you want the created environment to behave exactly like the original environment, make sure to generate the data using a “do nothing” agent.

If the agent modified the injections (e.g. with redispatching, curtailment or storage), then the resulting time series will “embed” these modifications: they will NOT match the original ones

Danger

If you load an episode data with an opponent, make sure also to build your environment with grid2op.Opponent.FromEpisodeDataOpponent and assign opponent_attack_cooldown=1 (see example below) otherwise you might end up with different time series than what you initially had in the EpisodeData.

Note

As this class reads a previously played episode from the hard drive, we strongly encourage you to build it from a complete episode (and not from an agent that games over after a few steps), for example by using the “RecoPowerlineAgent” and the NO_OVERFLOW_DISCONNECTION parameter (see example below)

See also

grid2op.Chronics.FromMultiEpisodeData if you want to use multiple episode data

Examples

You can use this class this way:

First, you generate some data by running an episode with a do-nothing or reco-powerline agent, preferably an episode that goes until the end of your time series:

import grid2op
from grid2op.Runner import Runner
from grid2op.Agent import RecoPowerlineAgent

path_agent = ....
env_name = "l2rpn_case14_sandbox"  # or any other name
env = grid2op.make(env_name, etc.)

# optional (change the parameters so that overflows do not lead to disconnections)
param = env.parameters
param.NO_OVERFLOW_DISCONNECTION = True
env.change_parameters(param)
env.reset()
# end optional

runner = Runner(**env.get_params_for_runner(),
                agentClass=RecoPowerlineAgent)
runner.run(nb_episode=1,
           path_save=path_agent)

And then you can load it back and rerun the exact same environment, with the same time series, the same attacks, etc., with:

import grid2op
from grid2op.Chronics import FromOneEpisodeData
from grid2op.Opponent import FromEpisodeDataOpponent
from grid2op.Episode import EpisodeData

path_agent = ....  # same as above
env_name = .... # same as above

# path_agent is the path where data coming from a grid2op runner are stored
# NB it should come from a do nothing agent, or at least
# an agent that does not modify the injections (no redispatching, curtailment, storage)
li_episode = EpisodeData.list_episode(path_agent)
ep_data = li_episode[0]

env = grid2op.make(env_name,
                   chronics_class=FromOneEpisodeData,
                   data_feeding_kwargs={"ep_data": ep_data},
                   opponent_class=FromEpisodeDataOpponent,
                   opponent_attack_cooldown=1,
              )
# ep_data can be either a tuple of 2 elements (like above)
# or a full path to a saved episode
# or directly an object of type EpisodeData

obs = env.reset()

# and now you can use "env" as any grid2op environment.

If you want to include perfect forecasts (unfortunately you cannot retrieve the original forecasts) you can do:

# same as above

env = grid2op.make(env_name,
            chronics_class=FromOneEpisodeData,
            data_feeding_kwargs={"ep_data": ep_data,
                                 "list_perfect_forecasts": (5, 10, 15)},
            opponent_class=FromEpisodeDataOpponent,
            opponent_attack_cooldown=1,
        )
# it creates an environment with perfect forecasts available for the next step (5),
# the step afterwards (10) and again the following one (15)

Methods:

check_validity(backend)

INTERNAL

done()

INTERNAL

fast_forward(nb_timestep)

INTERNAL

forecasts()

Retrieve PERFECT forecast from this time series generator.

get_id()

Utility to get the name of the path where the data are looked at, if the data are files.

get_kwargs(dict_)

Overload this function if you want to pass some data when building a new instance of this class.

initialize(order_backend_loads, ...[, ...])

This function is used to initialize the data generator.

load_next()

INTERNAL

max_timestep()

This method returns the maximum number of time steps the current episode can last.

next_chronics()

INTERNAL

sample_next_chronics([probabilities])

This is used to sample the next chronics with the given probabilities

seed(seed)

INTERNAL

shuffle([shuffler])

This method can be overridden if the data represented by this object need to be shuffled.

check_validity(backend)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is called at the creation of the environment to ensure the Backend and the chronics are consistent with one another.

A call to this method ensures that the actions that will be sent to the current grid2op.Environment can be properly implemented by its grid2op.Backend. This specific method checks that the dimensions of all vectors are consistent.

Parameters:

backend (grid2op.Backend.Backend) – The backend used by the grid2op.Environment.Environment

done()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Use the ChronicsHandler for such a purpose

Whether the episode is over or not.

Returns:

done – True means the episode has arrived at its end (no more data to generate); False means that the episode is not over yet.

Return type:

bool

fast_forward(nb_timestep)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Prefer using grid2op.Environment.BaseEnv.fast_forward_chronics()

This method allows you to skip some time steps at the beginning of the chronics.

This is useful at the beginning of training, if you want your agent to learn on more diverse scenarios. Indeed, the data provided in the chronics usually always start at the same datetime (for example Jan 1st at 00:00). This can lead to suboptimal exploration, as during this phase only a few time steps are managed by the agent, and these few time steps will in general correspond to grid states around Jan 1st at 00:00.

Parameters:

nb_timestep (int) – Number of time steps to “fast forward”

forecasts()[source]

Retrieve PERFECT forecast from this time series generator.

Danger

These are perfect forecasts and NOT the original forecasts.

Notes

As the forecast information is not stored by the grid2op runner, it is NOT POSSIBLE to retrieve the forecast information used by the “original” env (the one that generated the EpisodeData).

This class, however, can generate perfect forecasts thanks to the list_perfect_forecasts kwarg you can set at building time: the agent will see into the future when using these forecasts.
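Conceptually, a “perfect” forecast at horizon h is just the realized series shifted h steps into the future, as in this toy numpy sketch (illustrative only; note that in the class examples above the horizons of list_perfect_forecasts are given in minutes, not steps):

```python
import numpy as np

load_p = np.arange(10.0).reshape(10, 1)  # toy realized series: 10 steps, 1 load
horizons = [1, 2, 3]                     # forecast horizons, in number of steps
# a forecast made at step t for horizon h is the value realized at step t + h
perfect = {h: load_p[h:] for h in horizons}
```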

get_id() str[source]

Utility to get the name of the path where the data are looked at, if the data are files.

This could also be used to return a unique identifier to the generated chronics even in the case where they are generated on the fly, for example by return a hash of the seed.

Returns:

res – A unique identifier of the chronics generated for this episode. For example, if the chronics comes from a specific folder, this could be the path to this folder.

Return type:

str

get_kwargs(dict_)[source]

Overload this function if you want to pass some data when building a new instance of this class.

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

This function is used to initialize the data generator. It can be used to load scenarios, or to initialize the noise if scenarios are generated on the fly. It must also initialize GridValue.maintenance_time, GridValue.maintenance_duration and GridValue.hazard_duration.

This function should also increment GridValue.curr_iter by 1 each time it is called.

The GridValue is what makes the connection between the data (generally files on the hard drive) and the power grid. One of the main advantages of the Grid2Op package is its ability to change the tool that computes the load flows. Generally, such a grid2op.Backend expects data in a specific format, given by the way its internal powergrid is represented; in particular, the “same” objects can have different names and different positions. To ensure that the same chronics produce the same results on every backend (i.e. regardless of the order in which the Backend expects the data, the outcome of the powerflow is the same), we encourage the user to provide a file that maps the name of each object in the chronics to the name of the same object in the backend.

This is done with the “names_chronics_to_backend” dictionary, which has the following keys:

  • “loads”

  • “prods”

  • “lines”

The value associated with each of these keys is in turn a mapping dictionary from the chronics to the backend: each key of these sub-dictionaries is the name of a column in the files, and each value is the name of the same object in the backend. An example is provided below.

Parameters:
  • order_backend_loads (numpy.ndarray, dtype:str) – Ordered names, in the Backend, of the loads. It is required that a grid2op.Backend object always outputs the information in the same order. This array gives the names of the loads following this order. See the documentation of grid2op.Backend for more information about this.

  • order_backend_prods (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for generators.

  • order_backend_lines (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for powerlines.

  • order_backend_subs (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for substations.

  • names_chronics_to_backend (dict) – See in the description of the method for more information about its format.

Examples

For example, suppose we have a grid2op.Backend with:

  • substation ids start from 0 to N-1 (N being the number of substations in the powergrid)

  • loads are named “load_i”, with “i” the substation to which each is connected

  • generator units are named “gen_i” (i still being the id of the substation to which each is connected)

  • powerlines are named “i_j” if they connect substation i to substation j

And on the other side, we have some files with the following conventions:

  • substations are numbered from 1 to N

  • loads are named “i_C”, with i being the substation to which each is connected

  • generators are named “i_G”, with i being the id of the substation to which each is connected

  • powerlines are named “i_j_k”, where i is the origin substation, j the extremity substation and “k” a unique identifier of this powerline in the powergrid.

In this case, instead of renaming the powergrid (in the backend) or the data files, it is advised to build the following elements and initialize the object gridval of type GridValue with:

gridval = GridValue()  # Note: this code won't execute because "GridValue" is an abstract class
order_backend_loads = ['load_1', 'load_2', 'load_13', 'load_3', 'load_4', 'load_5', 'load_8', 'load_9',
                         'load_10', 'load_11', 'load_12']
order_backend_prods = ['gen_1', 'gen_2', 'gen_5', 'gen_7', 'gen_0']
order_backend_lines = ['0_1', '0_4', '8_9', '8_13', '9_10', '11_12', '12_13', '1_2', '1_3', '1_4', '2_3',
                           '3_4', '5_10', '5_11', '5_12', '3_6', '3_8', '4_5', '6_7', '6_8']
order_backend_subs = ['sub_0', 'sub_1', 'sub_10', 'sub_11', 'sub_12', 'sub_13', 'sub_2', 'sub_3', 'sub_4',
                          'sub_5', 'sub_6', 'sub_7', 'sub_8', 'sub_9']
names_chronics_to_backend = {"loads": {"2_C": 'load_1', "3_C": 'load_2',
                                           "14": 'load_13', "4_C": 'load_3', "5_C": 'load_4',
                                           "6_C": 'load_5', "9_C": 'load_8', "10_C": 'load_9',
                                           "11_C": 'load_10', "12_C": 'load_11',
                                           "13_C": 'load_12'},
                                 "lines": {'1_2_1': '0_1', '1_5_2': '0_4', '9_10_16': '8_9', '9_14_17': '8_13',
                                          '10_11_18': '9_10', '12_13_19': '11_12', '13_14_20': '12_13',
                                           '2_3_3': '1_2', '2_4_4': '1_3', '2_5_5': '1_4', '3_4_6': '2_3',
                                           '4_5_7': '3_4', '6_11_11': '5_10', '6_12_12': '5_11',
                                           '6_13_13': '5_12', '4_7_8': '3_6', '4_9_9': '3_8', '5_6_10': '4_5',
                                          '7_8_14': '6_7', '7_9_15': '6_8'},
                                 "prods": {"1_G": 'gen_0', "3_G": "gen_2", "6_G": "gen_5",
                                           "2_G": "gen_1", "8_G": "gen_7"},
                                }
gridval.initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend)
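To make the mapping above concrete, here is a small self-contained sketch (plain Python, with made-up values) of how such a dictionary lets one row of chronics data, given in file order, be reordered into the order expected by the backend:

```python
names_chronics_to_backend = {"prods": {"1_G": "gen_0", "2_G": "gen_1",
                                       "3_G": "gen_2", "6_G": "gen_5",
                                       "8_G": "gen_7"}}
order_backend_prods = ["gen_1", "gen_2", "gen_5", "gen_7", "gen_0"]

file_header = ["1_G", "2_G", "3_G", "6_G", "8_G"]  # column names in the csv
file_row = [82.0, 10.5, 20.0, 30.0, 40.0]          # one time step of prod_p (made up)

# translate each csv column name into its backend name, then reorder
pos_in_backend = {name: i for i, name in enumerate(order_backend_prods)}
reordered = [0.0] * len(file_row)
for col, val in zip(file_header, file_row):
    reordered[pos_in_backend[names_chronics_to_backend["prods"][col]]] = val

print(reordered)  # data now follows the backend order gen_1, gen_2, ...
```

This is only an illustration of the principle; the actual reordering is done internally by grid2op when GridValue.initialize() is called.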
load_next()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is automatically called by the “env.step” function. It loads the next information about the grid state (load p and load q, prod p and prod v as well as some maintenance or hazards information)

Generate the next values, either by reading them from a file or by generating them on the fly, and return a dictionary compatible with the grid2op.BaseAction class allowed for the Environment.

More information about this dictionary can be found at grid2op.BaseAction.update().

As a (quick) reminder: this dictionary has for keys:

  • “injection” (optional): a dictionary with keys (optional) “load_p”, “load_q”, “prod_p”, “prod_v”

  • “hazards” (optional): the outages suffered by the grid

  • “maintenance” (optional) : the maintenance operations planned on the grid for the current time step.

Returns:

  • timestamp (datetime.datetime) – The current timestamp for which the modifications have been generated.

  • dict_ (dict) – Always empty, indicating it does nothing (in this case)

  • maintenance_time (numpy.ndarray, dtype:int) – Information about the next planned maintenance. See GridValue.maintenance_time for more information.

  • maintenance_duration (numpy.ndarray, dtype:int) – Information about the duration of next planned maintenance. See GridValue.maintenance_duration for more information.

  • hazard_duration (numpy.ndarray, dtype:int) – Information about the current hazard. See GridValue.hazard_duration for more information.

  • prod_v (numpy.ndarray, dtype:float) – the (stored) value of the generator voltage setpoint

Raises:

StopIteration – if the chronics is over

max_timestep()[source]

This method returns the maximum number of time steps that the current episode can last. Note that if the grid2op.BaseAgent performs a bad action that leads to a game over, then the episode can last less.

Returns:

res – -1 if possibly infinite length or a positive integer representing the maximum duration of this episode

Return type:

int

next_chronics()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Move to the next “chronics”, representing the next “level” if we make the parallel with video games.

A call to this function should at least reset the internal state of the chronics.

sample_next_chronics(probabilities=None)[source]

This is used to sample the next chronics with the given probabilities.

Parameters:

probabilities (np.ndarray) – Array of floats with the same size as the number of chronics in the cache. If it does not sum to one, it is rescaled so that it does.

Returns:

selected – The integer that was selected.

Return type:

int

Examples

Let’s assume the folders of your chronics are named “Scenario_august_dummy” and “Scenario_february_dummy”. For the sake of the example, we want the environment to pick the month of February 75% of the time and the month of August 25% of the time.

import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1", test=True)  # remove "test=True"
# if you don't want to run in test mode

# check at which month will belong each observation
for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it prints either "8" (if the chronics is from August) or
    # "2" (if the chronics is from February), each with probability 50%

env.seed(0)  # for reproducible experiment
for i in range(10):
    _ = env.chronics_handler.sample_next_chronics([0.25, 0.75])
    obs = env.reset()
    print(obs.month)
    # it prints "2" with probability 0.75 and "8" with probability 0.25
seed(seed)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\ We do not recommend to use this function outside of the two examples given in the description of this class.

Set the seed of the source of pseudo random number used for this RandomObject.

Parameters:

seed (int) – The seed to be set.

Returns:

res – The associated tuple of seeds used. Tuples are returned because in some cases, multiple objects are seeded with the same call to RandomObject.seed()

Return type:

tuple

shuffle(shuffler=None)[source]

This method can be overridden if the data that are represented by this object need to be shuffled.

By default it does nothing.

Parameters:

shuffler (object) – Any function that can be used to shuffle the data.
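As a hedged illustration, a “shuffler” is simply a function that takes the sequence of chronics and returns it reordered; the function name and the seeding below are illustrative only:

```python
import random

def my_shuffler(chronics_list):
    # return a reordered copy of the available chronics;
    # seeded here only to make the example reproducible
    out = list(chronics_list)
    random.Random(42).shuffle(out)
    return out

# usage sketch (assuming an initialized environment, not executed here):
# env.chronics_handler.shuffle(shuffler=my_shuffler)
print(my_shuffler(["Scenario_a", "Scenario_b", "Scenario_c"]))
```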

class grid2op.Chronics.GridStateFromFile(path, sep=';', time_interval=datetime.timedelta(seconds=300), max_iter=-1, start_datetime=datetime.datetime(2019, 1, 1, 0, 0), chunk_size=None)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Do not attempt to create an object of this class. This is initialized by the environment at its creation.

Read the injection values from a file stored on the hard drive. More details about the files are provided in the GridStateFromFile.initialize() method.

This class reads only files stored as csv. The header of the csv is mandatory and should contain the names of the objects. These names should either be matched to the names of the same objects in the backend using the names_chronics_to_backend argument passed to GridStateFromFile.initialize() (see GridValue.initialize() for more information), or match the names of the objects in the backend directly.

When the grid value is initialized, all present csv files are read, sorted in an order compatible with the backend and extracted as numpy arrays.

For now, the current date and times are not read from file. It is mandatory that the chronics start at 00:00 and that their first time stamp corresponds to January 1st, 2019.

Chronics read from these files don’t implement the “forecast” value.

Only one episode is stored in these values. If the end of the episode is reached and another one should start, then it will loop from the beginning.

It reads the following files from the “path” location specified:

  • “prod_p.csv”: for each time step, this file contains the value of the active production of each generator of the grid (it counts as many rows as the number of time steps, plus its header, and as many columns as the number of generators on the grid). The header must contain the names of the generators used to map their values on the grid. Values must be convertible to floating point and the column separator of this file should be a semicolon ; (unless you specify a “sep” when loading this class)

  • “prod_v.csv”: same as “prod_p.csv” but for the production voltage setpoint.

  • “load_p.csv”: same as “prod_p.csv” but for the load active value (number of columns = number of loads)

  • “load_q.csv”: same as “prod_p.csv” but for the load reactive value (number of columns = number of loads)

  • “maintenance.csv”: indicates whether or not there is a maintenance for a given powerline (column) at each time step (row).

  • “hazards.csv”: indicates whether or not there is a hazard for a given powerline (column) at each time step (row).

  • “start_datetime.info”: the time stamp (date and time) at which the chronic is starting.

  • “time_interval.info”: the amount of time between two consecutive steps (e.g. 5 mins, or 1h)

If a file is missing, it is understood as “this value will not be modified”. For example, if the file “prod_v.csv” is not present, it is equivalent to never modifying the production voltage setpoint.

Unless the attribute GridStateFromFile.sep is modified, the above tables should be semicolon (;) separated.
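As a sketch of the expected layout (using only the standard library; generator names and values are made up), a minimal “prod_p.csv” with two generators and two time steps could be written as:

```python
import csv
import os
import tempfile

# header = generator names (used for the backend mapping),
# then one row per time step, ';' separated
rows = [["gen_0", "gen_1"],   # header
        [81.2, 42.0],         # active production at time step 0
        [80.9, 41.5]]         # active production at time step 1

path = os.path.join(tempfile.mkdtemp(), "prod_p.csv")
with open(path, "w", newline="") as f:
    csv.writer(f, delimiter=";").writerows(rows)

with open(path) as f:
    header = f.readline().strip().split(";")
print(header)  # ['gen_0', 'gen_1']
```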

path

The path of the folder where the data are stored. It is recommended to use an absolute path rather than a relative one.

Type:

str

load_p

All the values of the load active values

Type:

numpy.ndarray, dtype: float

load_q

All the values of the load reactive values

Type:

numpy.ndarray, dtype: float

prod_p

All the productions setpoint active values.

Type:

numpy.ndarray, dtype: float

prod_v

All the productions setpoint voltage magnitude values.

Type:

numpy.ndarray, dtype: float

hazards

This vector represents the possible hazards. It is understood as: True there is a hazard for the given powerline, False there is not.

Type:

numpy.ndarray, dtype: bool

maintenance

This vector represents the possible maintenance. It is understood as: True there is a maintenance for the given powerline, False there is not.

Type:

numpy.ndarray, dtype: bool

current_index

The index of the last observation sent to the grid2op.Environment.

Type:

int

sep

The csv columns separator. By defaults it’s “;”

Type:

str, optional

names_chronics_to_backend

This dictionary maps the name of each object (line extremity, generator or load) to the name of the same object in the backend. See the help of GridValue.initialize() for more information.

Type:

dict

Methods:

check_validity(backend)

INTERNAL

done()

INTERNAL

get_id()

Utility to get the path where the data are currently looked at, if the data are files.

initialize(order_backend_loads, ...[, ...])

INTERNAL

load_next()

INTERNAL

max_timestep()

This method returns the maximum timestep that the current episode can last.

next_chronics()

INTERNAL

set_chunk_size(new_chunk_size)

This parameter allows one to set, if the data generation process supports it, the amount of data that is read at the same time.

split_and_save(datetime_beg, datetime_end, ...)

You can use this function to save the values of the chronics in a format that will be loadable by GridStateFromFile

check_validity(backend)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is called at the creation of the environment to ensure the Backend and the chronics are consistent with one another.

A call to this method ensures that the actions that will be sent to the current grid2op.Environment can be properly implemented by its grid2op.Backend. This specific method checks that the dimensions of all vectors are consistent.

Parameters:

backend (grid2op.Backend.Backend) – The backend used by the grid2op.Environment.Environment

done()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Compared to GridValue.done(), an episode can be over for two main reasons:

The episode is done if one of the above conditions is met.

Returns:

res – Whether the episode has reached its end or not.

Return type:

bool

get_id() → str[source]

Utility to get the path where the data are currently looked at, if the data are files.

This could also be used to return a unique identifier of the generated chronics even when they are generated on the fly, for example by returning a hash of the seed.

Returns:

res – A unique identifier of the chronics generated for this episode. For example, if the chronics comes from a specific folder, this could be the path to this folder.

Return type:

str

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Called at the creation of the environment.

In this function, the numpy arrays are read from the csv files using the pandas.DataFrame engine.

In order to be valid, the folder located at GridStateFromFile.path can contain:

All these csv files must have the same separator, specified by GridStateFromFile.sep. If one of these files is missing, it is equivalent to the “change nothing” behavior.

If a file named “start_datetime.info” is present, then it will be used to initialize GridStateFromFile.start_datetime. If this file exists, it should contain only one row, with the initial datetime in the “%Y-%m-%d %H:%M” format.

If a file named “time_interval.info” is present, then it will be used to initialize the GridStateFromFile.time_interval attribute. If this file exists, it should contain only one row, with the time interval in the “%H:%M” format. Only time deltas composed of hours and minutes are supported (the time delta cannot go above 23 hours 55 minutes and cannot be smaller than 1 minute)
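A hedged sketch of how such a “time_interval.info” row can be parsed into a timedelta (the "%H:%M" format is taken from the paragraph above; the function name is illustrative, not grid2op’s):

```python
import datetime

def parse_time_interval(text):
    # "00:05" -> timedelta of 5 minutes; only hours and minutes are kept
    t = datetime.datetime.strptime(text.strip(), "%H:%M")
    return datetime.timedelta(hours=t.hour, minutes=t.minute)

print(parse_time_interval("00:05"))
```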

The first row of these csv files is understood as the name of the object concerned by each column. Either this name is present in the grid2op.Backend, in which case no modification is performed, or it is not found in the backend, in which case the “names_chronics_to_backend” parameter must specify how to interpret it. See the help of GridValue.initialize() for more information about this dictionary.

All files should have the same number of rows.

Parameters:

See the help of GridValue.initialize() for a detailed description of the parameters.

load_next()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is automatically called by the “env.step” function. It loads the next information about the grid state (load p and load q, prod p and prod v as well as some maintenance or hazards information)

Generate the next values, either by reading them from a file or by generating them on the fly, and return a dictionary compatible with the grid2op.BaseAction class allowed for the Environment.

More information about this dictionary can be found at grid2op.BaseAction.update().

As a (quick) reminder: this dictionary has for keys:

  • “injection” (optional): a dictionary with keys (optional) “load_p”, “load_q”, “prod_p”, “prod_v”

  • “hazards” (optional): the outages suffered by the grid

  • “maintenance” (optional) : the maintenance operations planned on the grid for the current time step.

Returns:

  • timestamp (datetime.datetime) – The current timestamp for which the modifications have been generated.

  • dict_ (dict) – Always empty, indicating it does nothing (in this case)

  • maintenance_time (numpy.ndarray, dtype:int) – Information about the next planned maintenance. See GridValue.maintenance_time for more information.

  • maintenance_duration (numpy.ndarray, dtype:int) – Information about the duration of next planned maintenance. See GridValue.maintenance_duration for more information.

  • hazard_duration (numpy.ndarray, dtype:int) – Information about the current hazard. See GridValue.hazard_duration for more information.

  • prod_v (numpy.ndarray, dtype:float) – the (stored) value of the generator voltage setpoint

Raises:

StopIteration – if the chronics is over

max_timestep()[source]

This method returns the maximum number of time steps that the current episode can last. Note that if the grid2op.BaseAgent performs a bad action that leads to a game over, then the episode can last less.

Returns:

res – -1 if possibly infinite length or a positive integer representing the maximum duration of this episode

Return type:

int

next_chronics()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Move to the next “chronics”, representing the next “level” if we make the parallel with video games.

A call to this function should at least reset the internal state of the chronics.

set_chunk_size(new_chunk_size)[source]

This parameter allows one to set, if the data generation process supports it, the amount of data that is read at the same time. It can help speed up the computation by giving more control over the IO operations.

Parameters:

new_chunk_size (int) – The chunk size (ie the number of rows that will be read on each data set at the same time)
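Conceptually, what the chunk size controls can be sketched as follows (a plain-Python analogue, not the actual grid2op implementation):

```python
def read_in_chunks(rows, chunk_size):
    # instead of loading all rows at once, hand them out block by block,
    # which is what set_chunk_size enables on the underlying csv reader
    for i in range(0, len(rows), chunk_size):
        yield rows[i:i + chunk_size]

chunks = list(read_in_chunks(list(range(10)), 4))
print(chunks)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```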

split_and_save(datetime_beg, datetime_end, path_out)[source]

You can use this function to save the values of the chronics in a format that will be loadable by GridStateFromFile

Notes

Prefer using the Multifolder.split_and_save() that handles different chronics

Parameters:
  • datetime_beg (str) – Time stamp of the beginning of the data you want to save (time stamp in “%Y-%m-%d %H:%M” format)

  • datetime_end (str) – Time stamp of the end of the data you want to save (time stamp in “%Y-%m-%d %H:%M” format)

  • path_out (str) – Location where to save the data
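The selection that split_and_save performs can be sketched in plain Python (an illustrative analogue, not the grid2op code): a row is kept when its timestamp, reconstructed from the start datetime and the time interval, falls between the two bounds.

```python
import datetime

def select_rows(rows, start_datetime, time_interval, datetime_beg, datetime_end):
    # keep the rows whose timestamp lies in [datetime_beg, datetime_end)
    fmt = "%Y-%m-%d %H:%M"
    beg = datetime.datetime.strptime(datetime_beg, fmt)
    end = datetime.datetime.strptime(datetime_end, fmt)
    return [row for i, row in enumerate(rows)
            if beg <= start_datetime + i * time_interval < end]

rows = [10.0, 11.0, 12.0, 13.0]   # made-up values, one per 5-minute step
kept = select_rows(rows,
                   datetime.datetime(2019, 1, 1, 0, 0),
                   datetime.timedelta(minutes=5),
                   "2019-01-01 00:05", "2019-01-01 00:15")
print(kept)  # the rows at 00:05 and 00:10
```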

class grid2op.Chronics.GridStateFromFileWithForecasts(path, sep=';', time_interval=datetime.timedelta(seconds=300), max_iter=-1, chunk_size=None, h_forecast=(5,))[source]

An extension of GridStateFromFile that implements the “forecast” functionality.

Forecasts are also read from a file. For this class, only one forecast per time step is read. The “forecast” present in the file at row i is the one available at the corresponding time step, and is thus valid for the grid state at the next time step.

To have more advanced forecasts, this class could be overridden.

load_p_forecast

Array used to store the forecasts of the load active values.

Type:

numpy.ndarray, dtype: float

load_q_forecast

Array used to store the forecasts of the load reactive values.

Type:

numpy.ndarray, dtype: float

prod_p_forecast

Array used to store the forecasts of the generator active production setpoint.

Type:

numpy.ndarray, dtype: float

prod_v_forecast

Array used to store the forecasts of the generator voltage magnitude setpoint.

Type:

numpy.ndarray, dtype: float

Methods:

check_validity(backend)

INTERNAL

forecasts()

This is the major difference between GridStateFromFileWithForecasts and GridStateFromFile.

get_id()

Utility to get the path where the data are currently looked at, if the data are files.

initialize(order_backend_loads, ...[, ...])

The same conditions as for GridStateFromFile.initialize also apply to GridStateFromFileWithForecasts.load_p_forecast, GridStateFromFileWithForecasts.load_q_forecast, GridStateFromFileWithForecasts.prod_p_forecast, and GridStateFromFileWithForecasts.prod_v_forecast.

check_validity(backend)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is called at the creation of the environment to ensure the Backend and the chronics are consistent with one another.

A call to this method ensures that the actions that will be sent to the current grid2op.Environment can be properly implemented by its grid2op.Backend. This specific method checks that the dimensions of all vectors are consistent.

Parameters:

backend (grid2op.Backend.Backend) – The backend used by the grid2op.Environment.Environment

forecasts()[source]

This is the major difference between GridStateFromFileWithForecasts and GridStateFromFile. It returns non-empty forecasts.

As explained in GridValue.forecasts(), forecasts are made of a list of tuples, each tuple having exactly 2 elements:

  1. Is the time stamp of the forecast

  2. A grid2op.BaseAction representing the modification of the powergrid after the forecast.

For this class, only the forecast of the next time step is given, and only for the injections and maintenance.

Return type:

See GridValue.forecasts() for more information.
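The shape of the returned value can be sketched as follows (timestamps, keys and values are illustrative; see GridValue.forecasts() for the authoritative description):

```python
import datetime

# one (timestamp, action-dict) pair: the forecast for the next time step only
now = datetime.datetime(2019, 1, 1, 0, 0)
step = datetime.timedelta(minutes=5)
forecasts = [(now + step,
              {"injection": {"load_p": [21.0, 87.2], "prod_p": [54.3, 53.9]}})]

timestamp, dict_ = forecasts[0]
print(timestamp)                   # the time stamp the forecast is valid for
print(sorted(dict_["injection"]))  # the forecasted injection keys
```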

get_id() → str[source]

Utility to get the path where the data are currently looked at, if the data are files.

This could also be used to return a unique identifier of the generated chronics even when they are generated on the fly, for example by returning a hash of the seed.

Returns:

res – A unique identifier of the chronics generated for this episode. For example, if the chronics comes from a specific folder, this could be the path to this folder.

Return type:

str

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

The same conditions as for GridStateFromFile.initialize also apply to GridStateFromFileWithForecasts.load_p_forecast, GridStateFromFileWithForecasts.load_q_forecast, GridStateFromFileWithForecasts.prod_p_forecast, and GridStateFromFileWithForecasts.prod_v_forecast.

Parameters:

See the help of GridValue.initialize() for a detailed description of the parameters.

class grid2op.Chronics.GridStateFromFileWithForecastsWithMaintenance(path, sep=';', time_interval=datetime.timedelta(seconds=300), max_iter=-1, chunk_size=None, h_forecast=(5,))[source]

An extension of GridStateFromFileWithForecasts that implements the maintenance chronic generator on the fly (maintenance are not read from files, but are rather generated when the chronics is created).

maintenance_starting_hour

The hour at which every maintenance will start

Type:

int

maintenance_ending_hour

The hour at which every maintenance will end (we suppose the maintenance ends on the same day, for now)

Type:

int

line_to_maintenance

Array used to store the names of the lines that can be in maintenance

Type:

array, dtype: string

daily_proba_per_month_maintenance

Array used to store, for each month, the daily probability that each line is in maintenance

Type:

array, dtype: float

max_daily_number_per_month_maintenance

Array used to store the maximum number of maintenance operations per day for each month

Type:

array, dtype: int

Methods:

initialize(order_backend_loads, ...[, ...])

The same conditions as for GridStateFromFile.initialize also apply to GridStateFromFileWithForecasts.load_p_forecast, GridStateFromFileWithForecasts.load_q_forecast, GridStateFromFileWithForecasts.prod_p_forecast, and GridStateFromFileWithForecasts.prod_v_forecast.

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

The same conditions as for GridStateFromFile.initialize also apply to GridStateFromFileWithForecasts.load_p_forecast, GridStateFromFileWithForecasts.load_q_forecast, GridStateFromFileWithForecasts.prod_p_forecast, and GridStateFromFileWithForecasts.prod_v_forecast.

Parameters:

See the help of GridValue.initialize() for a detailed description of the parameters.

class grid2op.Chronics.GridStateFromFileWithForecastsWithoutMaintenance(path, sep=';', time_interval=datetime.timedelta(seconds=300), max_iter=-1, chunk_size=None, h_forecast=(5,))[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This class is made mainly for debugging. And it is not well tested.

Behaves exactly like “GridStateFromFileWithForecasts” but ignores all maintenance and hazards

Examples

You can use it as follow:

import grid2op
from grid2op.Chronics import GridStateFromFileWithForecastsWithoutMaintenance

env = grid2op.make(ENV_NAME,  # ENV_NAME is a placeholder: use your environment name
                   data_feeding_kwargs={"gridvalueClass": GridStateFromFileWithForecastsWithoutMaintenance},
                   )

# even if there are maintenance in the environment, they will not be used.

Methods:

initialize(order_backend_loads, ...[, ...])

The same conditions as for GridStateFromFile.initialize also apply to GridStateFromFileWithForecasts.load_p_forecast, GridStateFromFileWithForecasts.load_q_forecast, GridStateFromFileWithForecasts.prod_p_forecast, and GridStateFromFileWithForecasts.prod_v_forecast.

load_next()

INTERNAL

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

The same conditions as for GridStateFromFile.initialize also apply to GridStateFromFileWithForecasts.load_p_forecast, GridStateFromFileWithForecasts.load_q_forecast, GridStateFromFileWithForecasts.prod_p_forecast, and GridStateFromFileWithForecasts.prod_v_forecast.

Parameters:

See the help of GridValue.initialize() for a detailed description of the parameters.

load_next()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is automatically called by the “env.step” function. It loads the next information about the grid state (load p and load q, prod p and prod v as well as some maintenance or hazards information)

Generate the next values, either by reading them from a file or by generating them on the fly, and return a dictionary compatible with the grid2op.BaseAction class allowed for the Environment.

More information about this dictionary can be found at grid2op.BaseAction.update().

As a (quick) reminder: this dictionary has for keys:

  • “injection” (optional): a dictionary with keys (optional) “load_p”, “load_q”, “prod_p”, “prod_v”

  • “hazards” (optional): the outages suffered by the grid

  • “maintenance” (optional) : the maintenance operations planned on the grid for the current time step.

Returns:

  • timestamp (datetime.datetime) – The current timestamp for which the modifications have been generated.

  • dict_ (dict) – Always empty, indicating it does nothing (in this case)

  • maintenance_time (numpy.ndarray, dtype:int) – Information about the next planned maintenance. See GridValue.maintenance_time for more information.

  • maintenance_duration (numpy.ndarray, dtype:int) – Information about the duration of next planned maintenance. See GridValue.maintenance_duration for more information.

  • hazard_duration (numpy.ndarray, dtype:int) – Information about the current hazard. See GridValue.hazard_duration for more information.

  • prod_v (numpy.ndarray, dtype:float) – the (stored) value of the generator voltage setpoint

Raises:

StopIteration – if the chronics is over

class grid2op.Chronics.GridValue(time_interval=datetime.timedelta(seconds=300), max_iter=-1, start_datetime=datetime.datetime(2019, 1, 1, 0, 0), chunk_size=None)[source]

This is the base class for every kind of data for the grid.

It allows the grid2op.Environment to perform powergrid modifications that make the “game” time dependent.

It is not recommended to create GridValue objects directly, but rather to use the grid2op.Environment.chronics_handler for such a purpose. This is made in an attempt to make sure GridValue.initialize() is called. Before this initialization, it is not recommended to use any GridValue object.

The method GridValue.next_chronics() should be used between two epoch of the game. If there are no more data to be generated from this object, then GridValue.load_next() should raise a StopIteration exception and a call to GridValue.done() should return True.

In grid2op, the productions and loads (and hazards or maintenance) can be stored in this type of “GridValue”. This class will map things generated (or read from a file) and assign each element of the powergrid its proper value at each time step.

time_interval

Time interval between 2 consecutive timestamps. Default 5 minutes.

Type:

datetime.timedelta

start_datetime

The datetime of the first timestamp of the scenario.

Type:

datetime.datetime

current_datetime

The timestamp of the current scenario.

Type:

datetime.datetime

max_iter

Maximum number of data points to generate for one episode.

Type:

int

curr_iter

Duration of the current episode.

Type:

int

maintenance_time

Number of time steps until the next maintenance takes place, with the following convention:

  • -1 no maintenance is planned for the foreseeable future

  • 0 a maintenance is taking place

  • 1, 2, 3 … a maintenance will take place in 1, 2, 3, … time step

Some examples are given in GridValue.maintenance_time_1d().

Type:

numpy.ndarray, dtype:int

maintenance_duration

Duration of the next maintenance. 0 means no maintenance is happening. If a maintenance is planned for a given powerline, this number decreases each time step, until reaching 0 when the maintenance is over. Note that if a maintenance is planned (see GridValue.maintenance_time) this number indicates how long the maintenance will last, and does not imply anything about whether the maintenance is currently taking place (= there can be a positive number here without the powerline being removed from the grid for maintenance reasons). Some examples are given in GridValue.maintenance_duration_1d().

Type:

numpy.ndarray, dtype:int

hazard_duration

Duration of the next hazard. 0 means no hazard is happening. If a hazard is taking place for a given powerline, this number decreases each time step, until reaching 0 when the hazard is over. Contrary to GridValue.maintenance_duration, if a component of this vector is higher than 1, it means that the powerline is out of service. Some examples are given in GridValue.get_hazard_duration_1d().

Type:

numpy.ndarray, dtype:int

Methods:

check_validity(backend)

INTERNAL

done()

INTERNAL

fast_forward(nb_timestep)

INTERNAL

forecasts()

INTERNAL

get_hazard_duration_1d(hazard)

This function allows one to transform a 1d numpy array of maintenance (or hazards), which specifies:

get_id()

Utility to get the path where the data are currently looked at, if the data are files.

get_kwargs(dict_)

Overload this function if you want to pass some data when building a new instance of this class.

get_maintenance_duration_1d(maintenance)

This function allows one to transform a 1d numpy array of maintenance (or hazards), which specifies:

get_maintenance_time_1d(maintenance)

This function allows one to transform a 1d numpy array of maintenance, which specifies:

initialize(order_backend_loads, ...)

This function is used to initialize the data generator.

load_next()

INTERNAL

max_timestep()

This method returns the maximum timestep that the current episode can last.

next_chronics()

INTERNAL

sample_next_chronics([probabilities])

This is used to sample the next chronics with the given probabilities

set_chunk_size(new_chunk_size)

This parameter allows one to set, if the data generation process supports it, the amount of data that is read at the same time.

set_filter(filter_fun)

Assign a filtering function to remove some chronics the next time "reset_cache" is called.

shuffle([shuffler])

This method can be overridden if the data that are represented by this object need to be shuffled.

tell_id(id_num[, previous])

Tell the backend to use one folder for the chronics in particular.

abstractmethod check_validity(backend)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is called at the creation of the environment to ensure the Backend and the chronics are consistent with one another.

A call to this method ensures that the actions that will be sent to the current grid2op.Environment can be properly implemented by its grid2op.Backend. This specific method checks that the dimensions of all vectors are consistent.

Parameters:

backend (grid2op.Backend.Backend) – The backend used by the grid2op.Environment.Environment

done()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Use the ChronicsHandler for such a purpose

Whether the episode is over or not.

Returns:

done – True means the episode has arrived at its end (no more data to generate), False means that the episode is not over yet.

Return type:

bool

fast_forward(nb_timestep)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Prefer using grid2op.Environment.BaseEnv.fast_forward_chronics()

This method allows you to skip some time steps at the beginning of the chronics.

This is useful at the beginning of the training, if you want your agent to learn on more diverse scenarios. Indeed, the data provided in the chronics usually always start at the same datetime (for example Jan 1st at 00:00). This can lead to suboptimal exploration, as during this phase only a few time steps are managed by the agent, so in general these few time steps will correspond to the grid state around Jan 1st at 00:00.

Parameters:

nb_timestep (int) – Number of time steps to “fast forward”

forecasts()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Use the ChronicsHandler for such a purpose

This method is used to generate the forecasts that are made available to the grid2op.BaseAgent. These forecasts behave the same way as the list of tuples returned by the GridValue.load_next() method.

The way they are generated depends on the GridValue class. If no forecasts are made available, then an empty list should be returned.

Returns:

res – Each element of this list having the same type as what is returned by GridValue.load_next().

Return type:

list

staticmethod get_hazard_duration_1d(hazard)[source]

This function transforms a 1d numpy array maintenance (or hazards), in which:

  • 0 there is no maintenance at this time step

  • 1 there is a maintenance at this time step

Into the representation in terms of “hazard duration” as specified in GridValue.hazard_duration, which is:

  • 0 no foreseeable hazard will affect this powerline

  • 1, 2 etc. is the number of time steps the hazard will still last (it is positive only while a hazard affects a given powerline)

Compared to GridValue.get_maintenance_duration_1d(), how long a hazard will last is only known once the hazard has occurred.

Parameters:

hazard (numpy.ndarray) – 1 dimensional array representing the time series of the hazards (0 there is no hazard, 1 there is a hazard at this time step)

Returns:

hazard_duration – Array representing the time series of the duration of the next foreseeable hazard.

Return type:

numpy.ndarray

Examples

If no hazard occurs:

import numpy as np
from grid2op.Chronics import GridValue

hazard = np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0])
hazard_duration = GridValue.get_hazard_duration_1d(hazard)
assert np.all(hazard_duration == np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]))

If a hazard of 3 time steps occurs starting at time step 6 (index 5, indices start at 0)

hazard = np.array([0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0])
hazard_duration = GridValue.get_hazard_duration_1d(hazard)
assert np.all(hazard_duration == np.array([0,0,0,0,0,3,2,1,0,0,0,0,0,0,0,0]))

If a hazard of 3 time steps occurs starting at time step 6 (index 5, indices start at 0), and a second one of 2 time steps at time step 13

hazard = np.array([0,0,0,0,0,1,1,1,0,0,0,0,1,1,0,0,0])
hazard_duration = GridValue.get_hazard_duration_1d(hazard)
assert np.all(hazard_duration == np.array([0,0,0,0,0,3,2,1,0,0,0,0,2,1,0,0,0]))
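The transformation illustrated above can be sketched in plain numpy as follows. This is an illustrative re-implementation of the documented semantics, not the library's own code:

```python
import numpy as np

def hazard_duration_1d(hazard):
    # For each step inside a hazard, the value is the number of remaining
    # time steps of that hazard (counting the current one); elsewhere it is 0.
    hazard = np.asarray(hazard, dtype=int)
    res = np.zeros_like(hazard)
    i, n = 0, hazard.shape[0]
    while i < n:
        if hazard[i] == 1:
            j = i
            while j < n and hazard[j] == 1:
                j += 1
            # the hazard spans [i, j): remaining durations j-i, j-i-1, ..., 1
            res[i:j] = np.arange(j - i, 0, -1)
            i = j
        else:
            i += 1
    return res
```

Running it on the documented examples reproduces the expected outputs.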
get_id() str[source]

Utility to get the path of the data currently looked at, if the data are files.

This could also be used to return a unique identifier of the generated chronics, even when they are generated on the fly, for example by returning a hash of the seed.

Returns:

res – A unique identifier of the chronics generated for this episode. For example, if the chronics comes from a specific folder, this could be the path to this folder.

Return type:

str
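The hash-of-the-seed idea mentioned above could look like this for chronics generated on the fly. This is a hypothetical sketch: MyGeneratedChronics and its seed attribute are illustrative, not part of grid2op:

```python
import hashlib

class MyGeneratedChronics:  # in practice this would derive from GridValue
    """Hypothetical on-the-fly generator, identified by a hash of its seed."""

    def __init__(self, seed: int):
        self.seed = seed

    def get_id(self) -> str:
        # a stable, unique identifier for the generated episode
        return hashlib.sha256(str(self.seed).encode("utf-8")).hexdigest()
```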

get_kwargs(dict_)[source]

Overload this function if you want to pass some data when building a new instance of this class.

staticmethod get_maintenance_duration_1d(maintenance)[source]

This function transforms a 1d numpy array maintenance (or hazards), in which:

  • 0 there is no maintenance at this time step

  • 1 there is a maintenance at this time step

Into the representation in terms of “next maintenance duration” as specified in GridValue.maintenance_duration which is:

  • 0 no foreseeable maintenance operation will be performed

  • 1, 2 etc. is the number of time steps the next maintenance will last (it can be positive even when no maintenance is currently being performed)

Parameters:

maintenance (numpy.ndarray) – 1 dimensional array representing the time series of the maintenance (0 there is no maintenance, 1 there is a maintenance at this time step)

Returns:

maintenance_duration – Array representing the time series of the duration of the next foreseeable maintenance.

Return type:

numpy.ndarray

Examples

If no maintenance is planned:

import numpy as np
from grid2op.Chronics import GridValue

maintenance = np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0])
maintenance_duration = GridValue.get_maintenance_duration_1d(maintenance)
assert np.all(maintenance_duration == np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]))

If a maintenance of 3 time steps is planned starting at time step 6 (index 5, indices start at 0)

maintenance = np.array([0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0])
maintenance_duration = GridValue.get_maintenance_duration_1d(maintenance)
assert np.all(maintenance_duration == np.array([3,3,3,3,3,3,2,1,0,0,0,0,0,0,0,0]))

If a maintenance of 3 time steps is planned starting at time step 6 (index 5, indices start at 0), and a second one of 2 time steps at time step 13

maintenance = np.array([0,0,0,0,0,1,1,1,0,0,0,0,1,1,0,0,0])
maintenance_duration = GridValue.get_maintenance_duration_1d(maintenance)
assert np.all(maintenance_duration == np.array([3,3,3,3,3,3,2,1,2,2,2,2,2,1,0,0,0]))
staticmethod get_maintenance_time_1d(maintenance)[source]

This function transforms a 1d numpy array maintenance, in which:

  • 0 there is no maintenance at this time step

  • 1 there is a maintenance at this time step

Into the representation in terms of “next maintenance time” as specified in GridValue.maintenance_time which is:

  • -1 no foreseeable maintenance operation will be performed

  • 0 a maintenance operation is being performed

  • 1, 2 etc. is the number of time step the next maintenance will be performed.

Parameters:

maintenance (numpy.ndarray) – 1 dimensional array representing the time series of the maintenance (0 there is no maintenance, 1 there is a maintenance at this time step)

Returns:

maintenance_time – Array representing the time series of the number of time steps until the next foreseeable maintenance.

Return type:

numpy.ndarray

Examples

If no maintenance is planned:

import numpy as np
from grid2op.Chronics import GridValue

maintenance_time = GridValue.get_maintenance_time_1d(np.array([0 for _ in range(10)]))
assert np.all(maintenance_time == np.array([-1 for _ in range(10)]))

If a maintenance of 3 time steps is planned starting at time step 6 (index 5, indices start at 0)

maintenance = np.array([0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0])
maintenance_time = GridValue.get_maintenance_time_1d(maintenance)
assert np.all(maintenance_time == np.array([5,4,3,2,1,0,0,0,-1,-1,-1,-1,-1,-1,-1,-1]))

If a maintenance of 3 time steps is planned starting at time step 6 (index 5, indices start at 0), and a second one of 2 time steps at time step 13

maintenance = np.array([0,0,0,0,0,1,1,1,0,0,0,0,1,1,0,0,0])
maintenance_time = GridValue.get_maintenance_time_1d(maintenance)
assert np.all(maintenance_time == np.array([5,4,3,2,1,0,0,0,4,3,2,1,0,0,-1,-1,-1]))
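The “next maintenance time” encoding above can likewise be sketched in plain numpy. Again, this is an illustrative re-implementation of the documented semantics, not the library's code:

```python
import numpy as np

def maintenance_time_1d(maintenance):
    # -1 when no maintenance is foreseeable, 0 during a maintenance,
    # otherwise the number of time steps until the next maintenance starts.
    maintenance = np.asarray(maintenance, dtype=int)
    n = maintenance.shape[0]
    res = np.full(n, -1, dtype=int)
    next_maint = -1  # index of the next maintenance step, scanning backwards
    for i in range(n - 1, -1, -1):
        if maintenance[i] == 1:
            next_maint = i
            res[i] = 0
        elif next_maint != -1:
            res[i] = next_maint - i
    return res
```

Running it on the documented examples reproduces the expected outputs.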
abstractmethod initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend)[source]

This function is used to initialize the data generator. It can be used to load scenarios, or to initialize the noise if scenarios are generated on the fly. It must also initialize GridValue.maintenance_time, GridValue.maintenance_duration and GridValue.hazard_duration.

This function should also increment GridValue.curr_iter by 1 each time it is called.

The GridValue is what makes the connection between the data (generally in the shape of files on the hard drive) and the power grid. One of the main advantages of the Grid2Op package is its ability to change the tool that computes the load flows. Generally, such a grid2op.Backend expects data in a specific format that is given by the way its internal powergrid is represented, and in particular, the “same” objects can have different names and different positions. To ensure that the same chronics produce the same results on every backend (ie regardless of the order in which the Backend expects the data, the outcome of the powerflow is the same), we encourage the user to provide a file that maps the name of each object in the chronics to the name of the same object in the backend.

This is done with the “names_chronics_to_backend” dictionary that has the following keys:

  • “loads”

  • “prods”

  • “lines”

The value associated with each of these keys is in turn a mapping dictionary from the chronics to the backend. This means that each key of these sub-dictionaries is the name of one column in the files, and each value is the name of this same object in the backend. An example is provided below.

Parameters:
  • order_backend_loads (numpy.ndarray, dtype:str) – Ordered names, in the Backend, of the loads. It is required that a grid2op.Backend object always outputs the information in the same order. This array gives the names of the loads following this order. See the documentation of grid2op.Backend for more information about this.

  • order_backend_prods (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for generators.

  • order_backend_lines (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for powerlines.

  • order_backend_subs (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for substations.

  • names_chronics_to_backend (dict) – See in the description of the method for more information about its format.

Examples

For example, suppose we have a grid2op.Backend with:

  • substation ids start from 0 to N-1 (N being the number of substations in the powergrid)

  • loads are named “load_i” with “i” the id of the substation to which the load is connected

  • generator units are named “gen_i” (i still being the id of the substation to which the unit is connected)

  • powerlines are named “i_j” if they connect substation i to substation j

And on the other side, we have some files with the following conventions:

  • substations are numbered from 1 to N

  • loads are named “i_C” with i being the substation to which it is connected

  • generators are named “i_G” with i being the id of the substation to which the generator is connected

  • powerlines are named “i_j_k” where i is the origin substation, j the extremity substation and “k” is a unique identifier of this powerline in the powergrid.

In this case, instead of renaming the powergrid (in the backend) or the data files, it is advised to build the following elements and initialize the object gridval of type GridValue with:

gridval = GridValue()  # Note: this code won't execute because "GridValue" is an abstract class
order_backend_loads = ['load_1', 'load_2', 'load_13', 'load_3', 'load_4', 'load_5', 'load_8', 'load_9',
                         'load_10', 'load_11', 'load_12']
order_backend_prods = ['gen_1', 'gen_2', 'gen_5', 'gen_7', 'gen_0']
order_backend_lines = ['0_1', '0_4', '8_9', '8_13', '9_10', '11_12', '12_13', '1_2', '1_3', '1_4', '2_3',
                           '3_4', '5_10', '5_11', '5_12', '3_6', '3_8', '4_5', '6_7', '6_8']
order_backend_subs = ['sub_0', 'sub_1', 'sub_10', 'sub_11', 'sub_12', 'sub_13', 'sub_2', 'sub_3', 'sub_4',
                          'sub_5', 'sub_6', 'sub_7', 'sub_8', 'sub_9']
names_chronics_to_backend = {"loads": {"2_C": 'load_1', "3_C": 'load_2',
                                           "14": 'load_13', "4_C": 'load_3', "5_C": 'load_4',
                                           "6_C": 'load_5', "9_C": 'load_8', "10_C": 'load_9',
                                           "11_C": 'load_10', "12_C": 'load_11',
                                           "13_C": 'load_12'},
                                 "lines": {'1_2_1': '0_1', '1_5_2': '0_4', '9_10_16': '8_9', '9_14_17': '8_13',
                                          '10_11_18': '9_10', '12_13_19': '11_12', '13_14_20': '12_13',
                                           '2_3_3': '1_2', '2_4_4': '1_3', '2_5_5': '1_4', '3_4_6': '2_3',
                                           '4_5_7': '3_4', '6_11_11': '5_10', '6_12_12': '5_11',
                                           '6_13_13': '5_12', '4_7_8': '3_6', '4_9_9': '3_8', '5_6_10': '4_5',
                                          '7_8_14': '6_7', '7_9_15': '6_8'},
                                 "prods": {"1_G": 'gen_0', "3_G": "gen_2", "6_G": "gen_5",
                                           "2_G": "gen_1", "8_G": "gen_7"},
                                }
gridval.initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend)
abstractmethod load_next()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is automatically called by the “env.step” function. It loads the next information about the grid state (load p and load q, prod p and prod v as well as some maintenance or hazards information)

Generate the next values, either by reading them from a file or by generating them on the fly, and return a dictionary compatible with the grid2op.BaseAction class allowed for the Environment.

More information about this dictionary can be found at grid2op.BaseAction.update().

As a (quick) reminder: this dictionary has for keys:

  • “injection” (optional): a dictionary with keys (optional) “load_p”, “load_q”, “prod_p”, “prod_v”

  • “hazards” (optional) : the outage suffered from the _grid

  • “maintenance” (optional) : the maintenance operations planned on the grid for the current time step.

Returns:

  • timestamp (datetime.datetime) – The current timestamp for which the modifications have been generated.

  • dict_ (dict) – The dictionary of modifications to apply to the grid (it can be empty, indicating that nothing changes at this time step)

  • maintenance_time (numpy.ndarray, dtype:int) – Information about the next planned maintenance. See GridValue.maintenance_time for more information.

  • maintenance_duration (numpy.ndarray, dtype:int) – Information about the duration of next planned maintenance. See GridValue.maintenance_duration for more information.

  • hazard_duration (numpy.ndarray, dtype:int) – Information about the current hazard. See GridValue.hazard_duration for more information.

  • prod_v (numpy.ndarray, dtype:float) – the (stored) value of the generator voltage setpoint

Raises:

StopIteration – if the chronics are over

max_timestep()[source]

This method returns the maximum number of time steps that the current episode can last. Note that if the grid2op.BaseAgent performs a bad action that leads to a game over, the episode can end earlier.

Returns:

res – -1 if possibly infinite length or a positive integer representing the maximum duration of this episode

Return type:

int

abstractmethod next_chronics()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Move to the next “chronics”, representing the next “level” if we make the parallel with video games.

A call to this function should at least restart:

sample_next_chronics(probabilities=None)[source]

This is used to sample the next chronics with the given probabilities.

Parameters:

probabilities (np.ndarray) – Array of non-negative numbers with the same size as the number of chronics in the cache. If it does not sum to one, it is rescaled so that it does.

Returns:

selected – The integer that was selected.

Return type:

int

Examples

Let’s assume in your chronics, the folder names are “Scenario_august_dummy”, and “Scenario_february_dummy”. For the sake of the example, we want the environment to load the month of february 75% of the time and the month of august 25% of the time.

import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1", test=True)  # don't add "test=True" if
# you don't want to perform a test.

# check at which month will belong each observation
for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it alternately prints "8" (if the chronics is from august) or
    # "2" (if the chronics is from february), each with a probability of 50%

env.seed(0)  # for reproducible experiment
for i in range(10):
    _ = env.chronics_handler.sample_next_chronics([0.25, 0.75])
    obs = env.reset()
    print(obs.month)
    # it prints "2" with probability 0.75 and "8" with probability 0.25
set_chunk_size(new_chunk_size)[source]

This parameter sets, if the data generation process supports it, the amount of data that is read at once. It can help speed up the computation process by adding more control on the I/O operations.

Parameters:

new_chunk_size (int) – The chunk size (ie the number of rows that will be read on each data set at the same time)

set_filter(filter_fun)[source]

Assign a filtering function to remove some chronics the next time “reset_cache” is called.

NB filter_fun is applied to all elements of Multifolder.subpaths. If it returns True for an element, this element will be put in the cache; if it returns False, this data will NOT be put in the cache.

NB this has no effect until Multifolder.reset is called.

Notes

As of now, this has no effect unless the chronics are generated using Multifolder or MultifolderWithCache

Examples

Let’s assume in your chronics, the folder names are “Scenario_august_dummy”, and “Scenario_february_dummy”. For the sake of the example, we want the environment to loop only through the month of february, because why not. Then we can do the following:

import re
import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1", test=True)  # don't add "test=True" if
# you don't want to perform a test.

# check at which month will belong each observation
for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it alternately prints "8" (if the chronics is from august) or
    # "2" (if the chronics is from february)

# to see where the chronics are located
print(env.chronics_handler.subpaths)

# keep only the month of february
env.chronics_handler.set_filter(lambda path: re.match(".*february.*", path) is not None)
env.chronics_handler.reset()  # if you don't do that it will not have any effect

for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it always prints "2" (representing february)
shuffle(shuffler=None)[source]

This method can be overridden if the data represented by this object need to be shuffled.

By default it does nothing.

Parameters:

shuffler (object) – Any function that can be used to shuffle the data.

tell_id(id_num, previous=False)[source]

Tell the backend to use one particular folder for the chronics. This method is mainly used when the GridValue object can deal with many folders. In this case, this method is used by the grid2op.Runner to indicate which chronics to load for the current simulated episode.

This is important to ensure reproducibility, especially in parallel computation settings.

This should also be used in case of generation “on the fly” of the chronics to ensure the same property.

By default it does nothing.

Note

As of grid2op 1.6.4, this function now accepts the return value of self.get_id().

class grid2op.Chronics.Multifolder(path, time_interval=datetime.timedelta(seconds=300), start_datetime=datetime.datetime(2019, 1, 1, 0, 0), gridvalueClass=<class 'grid2op.Chronics.gridStateFromFile.GridStateFromFile'>, sep=';', max_iter=-1, chunk_size=None, filter_func=None, **kwargs)[source]

The classes GridStateFromFile and GridStateFromFileWithForecasts implement the reading of a single folder representing a single episode.

This class is here to “loop” between different episodes. Each one is stored in a folder readable by GridStateFromFile or one of its derivatives (eg. GridStateFromFileWithForecasts).

Chronics are always read in alpha-numeric order for this class. This means that if the folder is not modified, the data are always loaded in the same order, regardless of the grid2op.Backend, grid2op.BaseAgent or grid2op.Environment.

Note

Most grid2op environments, by default, use this type of “chronics”, read from the hard drive.

gridvalueClass

Type of class used to read the data from the disk. It defaults to GridStateFromFile.

Type:

type, optional

data

Data that will be loaded and used to produced grid state and forecasted values.

Type:

GridStateFromFile

path: str

Path where the folders of the episodes are stored.

sep: str

Columns separator, forwarded to Multifolder.data when it’s built at the beginning of each episode.

subpaths: list

List of all the episodes that can be “played”. It’s a sorted list of all the directories in Multifolder.path. Each one should contain data in a format that is readable by MultiFolder.gridvalueClass.

Methods:

available_chronics()

return the list of available chronics.

check_validity(backend)

This method checks that the data loaded can be properly read and understood by the grid2op.Backend.

done()

Tells the grid2op.Environment if the episode is over.

fast_forward(nb_timestep)

INTERNAL

forecasts()

The representation of the forecasted grid state(s), if any.

get_id()

Full absolute path of the current folder used for the current episode.

get_kwargs(dict_)

Overload this function if you want to pass some data when building a new instance of this class.

init_subpath()

Read the content of the main directory and initialize the subpaths where the data could be located.

initialize(order_backend_loads, ...[, ...])

This function is used to initialize the data generator.

load_next()

Load the next data from the current episode.

max_timestep()

This method returned the maximum timestep that the current episode can last.

next_chronics()

INTERNAL

reset()

Rebuild the Multifolder._order.

sample_next_chronics([probabilities])

This function should be called before "next_chronics".

set_chunk_size(new_chunk_size)

This parameter sets, if the data generation process supports it, the amount of data that is read at once.

set_filter(filter_fun)

Assign a filtering function to remove some chronics the next time "reset_cache" is called.

shuffle([shuffler])

This method is used to have better control on the order in which the subfolders containing the episodes are processed.

split_and_save(datetime_beg, datetime_end, ...)

This function allows you to split the data (keeping only the data between datetime_beg and datetime_end) and to save it on your local machine.

tell_id(id_num[, previous])

This tells this chronics to load for the next episode.

Attributes:

chronics_used

return the full path of the chronics currently in use.

available_chronics()[source]

return the list of available chronics.

Examples

# TODO

check_validity(backend)[source]

This method checks that the data loaded can be properly read and understood by the grid2op.Backend.

Parameters:

backend (grid2op.Backend) – The backend used for the experiment.

Returns:

property chronics_used

return the full path of the chronics currently in use.

done()[source]

Tells the grid2op.Environment if the episode is over.

Returns:

res – Whether or not the episode, represented by MultiFolder.data, is over.

Return type:

bool

fast_forward(nb_timestep)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Prefer using grid2op.Environment.BaseEnv.fast_forward_chronics()

This method allows you to skip some time steps at the beginning of the chronics.

This is useful at the beginning of the training, if you want your agent to learn on more diverse scenarios. Indeed, the data provided in the chronics usually always start at the same datetime (for example Jan 1st at 00:00). This can lead to suboptimal exploration, as during this phase only a few time steps are managed by the agent, so in general these few time steps will correspond to the grid state around Jan 1st at 00:00.

Parameters:

nb_timestep (int) – Number of time steps to “fast forward”

forecasts()[source]

The representation of the forecasted grid state(s), if any.

Returns:

See the return type of GridStateFromFile.forecasts (or of MultiFolder.gridvalueClass if it has been changed) for more information.

get_id() str[source]

Full absolute path of the current folder used for the current episode.

Returns:

res – Path from which the data are generated for the current episode.

Return type:

str

get_kwargs(dict_)[source]

Overload this function if you want to pass some data when building a new instance of this class.

init_subpath()[source]

Read the content of the main directory and initialize the subpaths where the data could be located.

This is useful, for example, if you generated data and want to be able to use them.

NB this has no effect until Multifolder.reset is called.

Warning

By default, it will only consider data that are present at creation time. If you add data afterwards, you need to call this function (and do a reset)

Examples

A “typical” usage of this function can be the following workflow.

Start a script to train an agent (say “train_agent.py”):

import os
import grid2op
from lightsim2grid import LightSimBackend  # highly recommended for speed !

env_name = "l2rpn_wcci_2022"  # only compatible with what comes next (at time of writing)
env = grid2op.make(env_name, backend=LightSimBackend())

# now train an agent
# see l2rpn_baselines package for more information, for example
# l2rpn-baselines.readthedocs.io/
from l2rpn_baselines.PPO_SB3 import train
nb_iter = 10000  # train for that many iterations
agent_name = "WhateverIWant"  # or any other name
agent_path = os.path.expanduser("~")  # or anywhere else on your computer
trained_agent = train(env,
                      iterations=nb_iter,
                      name=agent_name,
                      save_path=agent_path)

On another script (say “generate_data.py”), you can generate more data:

import grid2op
env_name = "l2rpn_wcci_2022"  # only compatible with what comes next (at time of writing)
env = grid2op.make(env_name)
env.generate_data(nb_year=50)  # generates 50 years of data
# (takes roughly 50s per week, around 45mins per year, in this case 50 * 45 mins = lots of minutes)

Let the script generating the data run normally (don’t interrupt it). And from time to time, in the script “train_agent.py” you can do:

# reload the generated data
env.chronics_handler.init_subpath()
env.chronics_handler.reset()

# retrain the agent taking into account new data
trained_agent = train(env,
                      iterations=nb_iter,
                      name=agent_name,
                      save_path=agent_path,
                      load_path=agent_path
                      )

# the script to generate data is still running, you can reload some data again
env.chronics_handler.init_subpath()
env.chronics_handler.reset()

# retrain the agent
trained_agent = train(env,
                      iterations=nb_iter,
                      name=agent_name,
                      save_path=agent_path,
                      load_path=agent_path
                      )

# etc.

You run both scripts “at the same time” for this workflow to be efficient.

To recap:

  • the script “generate_data.py” will… generate data

  • these data will be reloaded from time to time by the script “train_agent.py”

Warning

Do not delete data between calls to env.chronics_handler.init_subpath() and env.chronics_handler.reset(), and even less so during training !

If you want to delete data (for example not to overload your hard drive) you should remove them right before calling env.chronics_handler.init_subpath().

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

This function is used to initialize the data generator. It can be used to load scenarios, or to initialize the noise if scenarios are generated on the fly. It must also initialize GridValue.maintenance_time, GridValue.maintenance_duration and GridValue.hazard_duration.

This function should also increment GridValue.curr_iter by 1 each time it is called.

The GridValue is what makes the connection between the data (generally in the shape of files on the hard drive) and the power grid. One of the main advantages of the Grid2Op package is its ability to change the tool that computes the load flows. Generally, such a grid2op.Backend expects data in a specific format that is given by the way its internal powergrid is represented, and in particular, the “same” objects can have different names and different positions. To ensure that the same chronics produce the same results on every backend (ie regardless of the order in which the Backend expects the data, the outcome of the powerflow is the same), we encourage the user to provide a file that maps the name of each object in the chronics to the name of the same object in the backend.

This is done with the “names_chronics_to_backend” dictionary that has the following keys:

  • “loads”

  • “prods”

  • “lines”

The value associated with each of these keys is in turn a mapping dictionary from the chronics to the backend. This means that each key of these sub-dictionaries is the name of one column in the files, and each value is the name of this same object in the backend. An example is provided below.

Parameters:
  • order_backend_loads (numpy.ndarray, dtype:str) – Ordered names, in the Backend, of the loads. It is required that a grid2op.Backend object always outputs the information in the same order. This array gives the names of the loads following this order. See the documentation of grid2op.Backend for more information about this.

  • order_backend_prods (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for generators.

  • order_backend_lines (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for powerlines.

  • order_backend_subs (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for substations.

  • names_chronics_to_backend (dict) – See in the description of the method for more information about its format.

Examples

For example, suppose we have a grid2op.Backend with:

  • substation ids start from 0 to N-1 (N being the number of substations in the powergrid)

  • loads are named “load_i” with “i” the id of the substation to which the load is connected

  • generator units are named “gen_i” (i still being the id of the substation to which the unit is connected)

  • powerlines are named “i_j” if they connect substation i to substation j

And on the other side, we have some files with the following conventions:

  • substations are numbered from 1 to N

  • loads are named “i_C” with i being the substation to which it is connected

  • generators are named “i_G” with i being the id of the substation to which the generator is connected

  • powerlines are named “i_j_k” where i is the origin substation, j the extremity substation and “k” is a unique identifier of this powerline in the powergrid.

In this case, instead of renaming the powergrid (in the backend) or the data files, it is advised to build the following elements and initialize the object gridval of type GridValue with:

gridval = GridValue()  # Note: this code won't execute because "GridValue" is an abstract class
order_backend_loads = ['load_1', 'load_2', 'load_13', 'load_3', 'load_4', 'load_5', 'load_8', 'load_9',
                         'load_10', 'load_11', 'load_12']
order_backend_prods = ['gen_1', 'gen_2', 'gen_5', 'gen_7', 'gen_0']
order_backend_lines = ['0_1', '0_4', '8_9', '8_13', '9_10', '11_12', '12_13', '1_2', '1_3', '1_4', '2_3',
                           '3_4', '5_10', '5_11', '5_12', '3_6', '3_8', '4_5', '6_7', '6_8']
order_backend_subs = ['sub_0', 'sub_1', 'sub_10', 'sub_11', 'sub_12', 'sub_13', 'sub_2', 'sub_3', 'sub_4',
                          'sub_5', 'sub_6', 'sub_7', 'sub_8', 'sub_9']
names_chronics_to_backend = {"loads": {"2_C": 'load_1', "3_C": 'load_2',
                                           "14": 'load_13', "4_C": 'load_3', "5_C": 'load_4',
                                           "6_C": 'load_5', "9_C": 'load_8', "10_C": 'load_9',
                                           "11_C": 'load_10', "12_C": 'load_11',
                                           "13_C": 'load_12'},
                                 "lines": {'1_2_1': '0_1', '1_5_2': '0_4', '9_10_16': '8_9', '9_14_17': '8_13',
                                          '10_11_18': '9_10', '12_13_19': '11_12', '13_14_20': '12_13',
                                           '2_3_3': '1_2', '2_4_4': '1_3', '2_5_5': '1_4', '3_4_6': '2_3',
                                           '4_5_7': '3_4', '6_11_11': '5_10', '6_12_12': '5_11',
                                           '6_13_13': '5_12', '4_7_8': '3_6', '4_9_9': '3_8', '5_6_10': '4_5',
                                          '7_8_14': '6_7', '7_9_15': '6_8'},
                                 "prods": {"1_G": 'gen_0', "3_G": "gen_2", "6_G": "gen_5",
                                           "2_G": "gen_1", "8_G": "gen_7"},
                                }
gridval.initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend)
load_next()[source]

Load the next data from the current episode. It loads the next time step for the current episode.

Returns:

max_timestep()[source]

This method returns the maximum number of time steps the current episode can last. Note that if the grid2op.BaseAgent performs a bad action that leads to a game over, the episode can last fewer steps.

Returns:

res – -1 if possibly infinite length or a positive integer representing the maximum duration of this episode

Return type:

int

next_chronics()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Move to the next “chronics”, representing the next “level” if we make the parallel with video games.

A call to this function should at least restart:

reset()[source]

Rebuild the Multifolder._order. This should be called after a call to Multifolder.set_filter() has been performed.

Warning

This “reset” is different from env.reset. It should only be called after the filtering function has been set (with Multifolder.set_filter()).

This “reset” only resets which chronics are used by the environment.

Returns:

new_order – The selected chronics paths after a call to this method.

Return type:

numpy.ndarray, dtype: str

Notes

Unless explicitly mentioned, for example by Multifolder.set_filter(), you should not use this function. It will erase any selection of chronics, any shuffling, etc.

sample_next_chronics(probabilities=None)[source]

This function should be called before “next_chronics”. It can be used to sample the next chronics non-uniformly.

Parameters:

probabilities (np.ndarray) – Array of numbers with the same size as the number of chronics in the cache. If it does not sum to one, it is rescaled so that it does.

Returns:

selected – The integer that was selected.

Return type:

int

Examples

Let’s assume in your chronics, the folder names are “Scenario_august_dummy”, and “Scenario_february_dummy”. For the sake of the example, we want the environment to pick the month of february 75% of the time and the month of august 25% of the time.

import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1", test=True)  # don't add "test=True" if
# you don't want to perform a test.

# check at which month will belong each observation
for i in range(10):
    obs = env.reset()
    print(obs.month)
    # by default it alternates between "8" (chronics from august) and
    # "2" (chronics from february), i.e. a 50% / 50% split

env.seed(0)  # for reproducible experiment
for i in range(10):
    _ = env.chronics_handler.sample_next_chronics([0.25, 0.75])
    obs = env.reset()
    print(obs.month)
    # it prints "2" with probability 0.75 and "8" with probability 0.25
set_chunk_size(new_chunk_size)[source]

This parameter allows you to set, if the data generation process supports it, the amount of data that is read at the same time. It can help speed up the computation by giving more control over the I/O operations.

Parameters:

new_chunk_size (int) – The chunk size (i.e. the number of rows that will be read from each data set at the same time)

set_filter(filter_fun)[source]

Assign a filtering function to remove some chronics; it takes effect the next time “reset_cache” is called.

NB filter_fun is applied to each element of Multifolder.subpaths. If it returns True the corresponding data will be put in the cache, if False this data will NOT be put in the cache.

NB this has no effect until Multifolder.reset is called.

Examples

Let’s assume in your chronics, the folder names are “Scenario_august_dummy”, and “Scenario_february_dummy”. For the sake of the example, we want the environment to loop only through the month of february, because why not. Then we can do the following:

import re
import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1", test=True)  # don't add "test=True" if
# you don't want to perform a test.

# check at which month will belong each observation
for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it alternately prints "8" (if the chronics is from august) or
    # "2" (if the chronics is from february)

# to see where the chronics are located
print(env.chronics_handler.subpaths)

# keep only the month of february
env.chronics_handler.set_filter(lambda path: re.match(".*february.*", path) is not None)
env.chronics_handler.reset()  # if you don't do that it will not have any effect

for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it always prints "2" (representing february)
shuffle(shuffler=None)[source]

This method is used to have better control over the order in which the subfolders containing the episodes are processed.

It can focus the evaluation on one specific folder, shuffle the folders, use only a subset of them etc. See the examples for more information.

Parameters:

shuffler (object) – shuffler should be a function that is called on MultiFolder.subpaths and shuffles them. It can also be used to remove some paths if needed (see example).

Returns:

new_order – The order in which the chronics will be looped through

Return type:

numpy.ndarray, dtype: str

Examples

If you want to simply shuffle the data you can do:

# create an environment
import numpy as np
import grid2op
env_name = "l2rpn_case14_sandbox"
env = grid2op.make(env_name)

# shuffle the chronics (uniformly at random, without duplication)
env.chronics_handler.shuffle()
# use the environment as you want, here do 10 episode with the selected data
for i in range(10):
    obs = env.reset()
    print(f"Path of the chronics used: {env.chronics_handler.data.path}")
    done = False
    while not done:
        act = ...
        obs, reward, done, info = env.step(act)

# re shuffle them (still uniformly at random, without duplication)
env.chronics_handler.shuffle()

# use the environment as you want, here do 10 episode with the selected data
for i in range(10):
    obs = env.reset()
    print(f"Path of the chronics used: {env.chronics_handler.data.path}")
    done = False
    while not done:
        act = ...
        obs, reward, done, info = env.step(act)

If you want to use only a subset of the path, say for example the path with index 1, 5, and 6

# create an environment
import numpy as np
import grid2op
env_name = "l2rpn_case14_sandbox"
env = grid2op.make(env_name)

# select the chronics (here 5 at random amongst the 10 "last" chronics of the environment)
nb_chron = len(env.chronics_handler.chronics_used)
chron_id_to_keep = np.random.choice(np.arange(nb_chron - 10, nb_chron), size=5, replace=False)
env.chronics_handler.shuffle(lambda x: chron_id_to_keep)

# use the environment as you want, here do 10 episode with the selected data
for i in range(10):
    obs = env.reset()
    print(f"Path of the chronics used: {env.chronics_handler.data.path}")
    done = False
    while not done:
        act = ...
        obs, reward, done, info = env.step(act)

# re shuffle them (uniformly at random, without duplication, among the chronics "selected" above.)
env.chronics_handler.shuffle()

# use the environment as you want, here do 10 episode with the selected data
for i in range(10):
    obs = env.reset()
    print(f"Path of the chronics used: {env.chronics_handler.data.path}")
    done = False
    while not done:
        act = ...
        obs, reward, done, info = env.step(act)

Warning

Though it is possible to use this “shuffle” function to only use some chronics, we highly recommend you have a look at the sections Time series Customization or Splitting into training, validation, test scenarios. It is likely that you will find a better way to do what you want to do there. Use this last example with care, then.

Warning

As stated in MultiFolder.reset(), any call to env.chronics_handler.reset will remove anything related to shuffling, including the selection of chronics!

split_and_save(datetime_beg, datetime_end, path_out)[source]

This function allows you to split the data (keeping only the data between datetime_beg and datetime_end) and save it on your local machine. This is especially handy if you want to extract only a piece of the dataset we provide, for example.

Parameters:
  • datetime_beg (dict) – Keys are the name (id) of the scenarios you want to save. Values are the corresponding starting date and time (in “%Y-%m-%d %H:%M” format). See example for more information.

  • datetime_end (dict) –

    keys must be the same as in the “datetime_beg” argument.

    See example for more information

  • path_out (str) – The path where the data will be stored.

Examples

Here is a short example on how to use it

import grid2op
import os
env = grid2op.make("l2rpn_case14_sandbox")

env.chronics_handler.real_data.split_and_save({"004": "2019-01-08 02:00",
                                     "005": "2019-01-30 08:00",
                                     "006": "2019-01-17 00:00",
                                     "007": "2019-01-17 01:00",
                                     "008": "2019-01-21 09:00",
                                     "009": "2019-01-22 12:00",
                                     "010": "2019-01-27 19:00",
                                     "011": "2019-01-15 12:00",
                                     "012": "2019-01-08 13:00",
                                     "013": "2019-01-22 00:00"},
                                    {"004": "2019-01-11 02:00",
                                     "005": "2019-02-01 08:00",
                                     "006": "2019-01-18 00:00",
                                     "007": "2019-01-18 01:00",
                                     "008": "2019-01-22 09:00",
                                     "009": "2019-01-24 12:00",
                                     "010": "2019-01-29 19:00",
                                     "011": "2019-01-17 12:00",
                                     "012": "2019-01-10 13:00",
                                     "013": "2019-01-24 00:00"},
                          path_out=os.path.join("/tmp"))
tell_id(id_num, previous=False)[source]

This tells the chronics handler which chronics to load for the next episode. By default, if id_num is greater than the number of episodes, it is equivalent to restarting from the first one: episodes are played indefinitely in the same order.

Parameters:
  • id_num (int | str) – Id of the chronics to load.

  • previous – Whether to set the id to the one just before id_num or not (note that in general you do want to set to the previous value, as calling this function has an impact only after env.reset() is called)

class grid2op.Chronics.MultifolderWithCache(path, time_interval=datetime.timedelta(seconds=300), start_datetime=datetime.datetime(2019, 1, 1, 0, 0), gridvalueClass=<class 'grid2op.Chronics.gridStateFromFile.GridStateFromFile'>, sep=';', max_iter=-1, chunk_size=None, filter_func=None, **kwargs)[source]

This class is a particular type of Multifolder that, instead of reading everything from disk each time, stores it in memory.

For now it is only compatible (because it is the only case where it presents some interest) with GridValue classes inheriting from GridStateFromFile.

The function MultifolderWithCache.reset() will redo the cache from scratch. You can filter which type of data will be cached or not with the MultifolderWithCache.set_filter() function.

NB Efficient use of this class can dramatically increase the speed of the learning algorithm, especially at the beginning, when lots of data are read from the hard drive and the agent reaches a game over after a few time steps (typically, data are given by month, so 30*288 >= 8600 time steps, while during exploration an agent usually performs fewer than a few dozen steps, leading to more time spent reading the 8600 rows than computing those few dozen steps).

Danger

When you create an environment with this chronics class (e.g. by doing env = make(..., chronics_class=MultifolderWithCache)), the “cache” is not preloaded; only the first scenario is loaded in memory (to save loading time).

In order to load everything, you NEED to call env.chronics_handler.reset(), which, by default, will load every scenario into memory. If you want to filter some data, for example by reading only the scenarios of december, you can use the set_filter method.

A typical workflow (at the start of your program) when using this class is then:

  1. create the environment: env = make(…,chronics_class=MultifolderWithCache)

  2. (optional but recommended) select some scenarios: env.chronics_handler.real_data.set_filter(lambda x: re.match(".*december.*", x) is not None)

  3. load the data in memory: env.chronics_handler.reset()

  4. do whatever you want using env

Note

After creation (anywhere in your code), you can use other scenarios by calling the set_filter function again:

  1. select other scenarios: env.chronics_handler.real_data.set_filter(lambda x: re.match(".*january.*", x) is not None)

  2. load the data in memory: env.chronics_handler.reset()

  3. do whatever you want using env

Examples

This is how this class can be used:

import re
from grid2op import make
from grid2op.Chronics import MultifolderWithCache
env = make(...,chronics_class=MultifolderWithCache)

# set the chronics to limit to one week of data (lower memory footprint)
env.chronics_handler.set_max_iter(7*288)
# assign a filter, use only chronics that have "december" in their name
env.chronics_handler.real_data.set_filter(lambda x: re.match(".*december.*", x) is not None)
# create the cache
env.chronics_handler.reset()

# and now you can use it as you would do any gym environment:
my_agent = ...
obs = env.reset()
done = False
reward = env.reward_range[0]
while not done:
    act = my_agent.act(obs, reward, done)
    obs, reward, done, info = env.step(act)  # and step will NOT load any data from disk.

Methods:

get_kwargs(dict_)

Overload this function if you want to pass some data when building a new instance of this class.

initialize(order_backend_loads, ...[, ...])

This function is used to initialize the data generator.

load_next()

Load the next data from the current episode.

max_timestep()

This method returns the maximum timestep that the current episode can last.

reset()

Rebuild the cache as if it were built from scratch.

seed(seed)

This seeds both the MultiFolderWithCache (which has an impact for example on MultiFolder.sample_next_chronics()) and each data present in the cache.

set_filter(filter_fun)

Assign a filtering function to remove some chronics from the next time a call to "reset_cache" is called.

get_kwargs(dict_)[source]

Overload this function if you want to pass some data when building a new instance of this class.

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

This function is used to initialize the data generator. It can be used to load scenarios, or to initialize the noise if scenarios are generated on the fly. It must also initialize GridValue.maintenance_time, GridValue.maintenance_duration and GridValue.hazard_duration.

This function should also increment GridValue.curr_iter by 1 each time it is called.

The GridValue is what makes the connection between the data (generally files on the hard drive) and the power grid. One of the main advantages of the Grid2Op package is its ability to change the tool that computes the load flows. Generally, such a grid2op.Backend expects data in a specific format, given by the way its internal powergrid is represented; in particular, the “same” objects can have different names and different positions. To ensure that the same chronics produce the same results on every backend (i.e. regardless of the order in which the Backend expects the data, the outcome of the powerflow is the same), we encourage the user to provide a file that maps the name of an object in the chronics to the name of the same object in the backend.

This is done with the “names_chronics_to_backend” dictionary, which has the following keys:

  • “loads”

  • “prods”

  • “lines”

The value associated with each of these keys is in turn a mapping dictionary from the chronics to the backend. This means that each key of these sub-dictionaries is the name of one column in the files, and each value is the name of this same object in the backend. An example is provided below.

Parameters:
  • order_backend_loads (numpy.ndarray, dtype:str) – Ordered names, in the Backend, of the loads. It is required that a grid2op.Backend object always outputs the information in the same order. This array gives the names of the loads following this order. See the documentation of grid2op.Backend for more information about this.

  • order_backend_prods (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for generators.

  • order_backend_lines (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for powerlines.

  • order_backend_subs (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for substations.

  • names_chronics_to_backend (dict) – See in the description of the method for more information about its format.

Examples

For example, suppose we have a grid2op.Backend with:

  • substation ids start from 0 to N-1 (N being the number of substations in the powergrid)

  • loads are named “load_i” with “i” the id of the substation to which they are connected

  • generator units are named “gen_i” (i still being the id of the substation to which they are connected)

  • powerlines are named “i_j” if they connect substation i to substation j

And on the other side, we have some files with the following conventions:

  • substations are numbered from 1 to N

  • loads are named “i_C” with i being the id of the substation to which they are connected

  • generators are named “i_G” with i being the id of the substation to which they are connected

  • powerlines are named “i_j_k” where i is the origin substation, j the extremity substation and “k” is a unique identifier of this powerline in the powergrid.

In this case, instead of renaming the powergrid (in the backend) or the data files, it is advised to build the following elements and initialize an object gridval of type GridValue with:

gridval = GridValue()  # Note: this code won't execute because "GridValue" is an abstract class
order_backend_loads = ['load_1', 'load_2', 'load_13', 'load_3', 'load_4', 'load_5', 'load_8', 'load_9',
                         'load_10', 'load_11', 'load_12']
order_backend_prods = ['gen_1', 'gen_2', 'gen_5', 'gen_7', 'gen_0']
order_backend_lines = ['0_1', '0_4', '8_9', '8_13', '9_10', '11_12', '12_13', '1_2', '1_3', '1_4', '2_3',
                           '3_4', '5_10', '5_11', '5_12', '3_6', '3_8', '4_5', '6_7', '6_8']
order_backend_subs = ['sub_0', 'sub_1', 'sub_10', 'sub_11', 'sub_12', 'sub_13', 'sub_2', 'sub_3', 'sub_4',
                          'sub_5', 'sub_6', 'sub_7', 'sub_8', 'sub_9']
names_chronics_to_backend = {"loads": {"2_C": 'load_1', "3_C": 'load_2',
                                           "14": 'load_13', "4_C": 'load_3', "5_C": 'load_4',
                                           "6_C": 'load_5', "9_C": 'load_8', "10_C": 'load_9',
                                           "11_C": 'load_10', "12_C": 'load_11',
                                           "13_C": 'load_12'},
                                 "lines": {'1_2_1': '0_1', '1_5_2': '0_4', '9_10_16': '8_9', '9_14_17': '8_13',
                                          '10_11_18': '9_10', '12_13_19': '11_12', '13_14_20': '12_13',
                                           '2_3_3': '1_2', '2_4_4': '1_3', '2_5_5': '1_4', '3_4_6': '2_3',
                                           '4_5_7': '3_4', '6_11_11': '5_10', '6_12_12': '5_11',
                                           '6_13_13': '5_12', '4_7_8': '3_6', '4_9_9': '3_8', '5_6_10': '4_5',
                                          '7_8_14': '6_7', '7_9_15': '6_8'},
                                 "prods": {"1_G": 'gen_0', "3_G": "gen_2", "6_G": "gen_5",
                                           "2_G": "gen_1", "8_G": "gen_7"},
                                }
gridval.initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend)
load_next()[source]

Load the next data from the current episode. It loads the next time step for the current episode.

Returns:

max_timestep()[source]

This method returns the maximum number of time steps the current episode can last. Note that if the grid2op.BaseAgent performs a bad action that leads to a game over, the episode can last fewer steps.

Returns:

res – -1 if possibly infinite length or a positive integer representing the maximum duration of this episode

Return type:

int

reset()[source]

Rebuild the cache as if it were built from scratch. This call might take a while to process.

Danger

You NEED to call this function (with env.chronics_handler.reset()) if you use the MultiFolderWithCache class in your experiments.

Warning

If a seed is set (see MultiFolderWithCache.seed()) then all the data in the cache are also seeded when this method is called.

seed(seed: int)[source]

This seeds both the MultiFolderWithCache (which has an impact for example on MultiFolder.sample_next_chronics()) and each data present in the cache.

Parameters:

seed (int) – The seed to use

set_filter(filter_fun)[source]

Assign a filtering function to remove some chronics; it takes effect the next time “reset_cache” is called.

NB filter_fun is applied to each element of Multifolder.subpaths. If it returns True the corresponding data will be put in the cache, if False this data will NOT be put in the cache.

NB this has no effect until Multifolder.reset is called.

Examples

Let’s assume in your chronics, the folder names are “Scenario_august_dummy”, and “Scenario_february_dummy”. For the sake of the example, we want the environment to loop only through the month of february, because why not. Then we can do the following:

import re
import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1", test=True)  # don't add "test=True" if
# you don't want to perform a test.

# check at which month will belong each observation
for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it alternately prints "8" (if the chronics is from august) or
    # "2" (if the chronics is from february)

# to see where the chronics are located
print(env.chronics_handler.subpaths)

# keep only the month of february
env.chronics_handler.set_filter(lambda path: re.match(".*february.*", path) is not None)
env.chronics_handler.reset()  # if you don't do that it will not have any effect

for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it always prints "2" (representing february)

If you still can’t find what you’re looking for, try one of the following pages:

Still having trouble finding the information? Do not hesitate to open a GitHub issue about the documentation at this link: Documentation issue template