Chronics

Objectives

This module handles everything related to input data that is not structural.

In the Grid2Op vocabulary, a “GridValue” or “Chronics” is something that provides data to change the input parameters of a power flow from one time step to the next.

It is a rather generic terminology. Modifications that can be performed by a GridValue object include, but are not limited to:

  • injections such as:

    • generators active production setpoint

    • generators voltage setpoint

    • loads active consumption

    • loads reactive consumption

  • structural information such as:

    • planned outage: powerline disconnection anticipated in advance

    • hazards: powerline disconnection that cannot be anticipated, for example due to a windstorm.

Any powergrid modification that can be performed using a grid2op.Action.BaseAction can be implemented in the form of a GridValue.

The same mechanism as for grid2op.Action.BaseAction or grid2op.Observation.BaseObservation is used here. All state modifications made by the grid2op.Environment must derive from GridValue. It is not recommended to create such objects directly; rather, use the ChronicsHandler for that purpose.

Note that the values returned by a GridValue are backend dependent. A GridValue object should always return the data in the order expected by the grid2op.Backend, regardless of the order in which data are given in the files or generated by the data generation process.

This implies that changing the backend will change the output of GridValue. More information about this is given in the description of the GridValue.initialize() method.

Finally, a feature distinguishing grid2op from other Reinforcement Learning problems is the possibility to use “forecasts”. This optional feature can be accessed via grid2op.Observation.BaseObservation, mainly through the grid2op.Observation.BaseObservation.simulate() method. The data used to generate these forecasts come from the grid2op.Chronics.GridValue and are detailed in the GridValue.forecasts() method.

More control on the chronics

We explained, in the description of grid2op.Environment (section Chronics Customization and following), how to gain more control over which chronics are used, which steps are used within a chronics, etc. We will not detail this here again; please refer to that page for more information.

However, be aware that you can control in great detail which chronics are used.

Choosing the right chronics can also lead to large gains in computation time. This is particularly true if you want to get the most out of HPC, for example. More detail is given in the Optimize the data pipeline section. In summary:

  • set the “chunk” size (the amount of data read from the disk: instead of reading an entire scenario, you read only a certain amount of data at a time from the hard drive, see grid2op.Chronics.ChronicsHandler.set_chunk_size()); you can use it with env.chronics_handler.set_chunk_size(100)

  • cache all the chronics and use them from memory (instead of reading them from the hard drive, see grid2op.Chronics.MultifolderWithCache); you can do this with env = grid2op.make(…, chronics_class=MultifolderWithCache)
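
To illustrate what the “chunk” mechanism does, here is the equivalent behaviour with plain pandas (this documentation states that grid2op reads its csv data with the pandas.DataFrame engine; the tiny in-memory file below is a stand-in for a real prod_p.csv):

```python
import io
import pandas as pd

# a stand-in for a (tiny) "prod_p.csv": 6 time steps, 2 generators,
# semicolon separated as grid2op expects by default
csv_data = "gen_0;gen_1\n1.0;2.0\n1.1;2.1\n1.2;2.2\n1.3;2.3\n1.4;2.4\n1.5;2.5\n"

# reading with chunksize=2 mimics set_chunk_size(2): only 2 rows are
# materialized in memory at a time, instead of the whole file
chunks = list(pd.read_csv(io.StringIO(csv_data), sep=";", chunksize=2))
print(len(chunks))  # 3 chunks of 2 rows each
```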

Finally, if you need to study machine learning in a “regular” fashion, with a train / validation / test split, you can use the env.train_val_split or env.train_val_split_random functions to do that. See an example usage in the section Splitting into training, validation, test scenarios.
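
The principle of such a split can be sketched on plain scenario names (the helper below is purely illustrative and is not the grid2op implementation):

```python
import random

def split_scenarios(names, pct_val=0.2, pct_test=0.2, seed=0):
    """Illustrative, reproducible train / validation / test split of scenario names."""
    rng = random.Random(seed)
    shuffled = names[:]  # keep the input untouched
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * pct_val)
    n_test = int(len(shuffled) * pct_test)
    return (shuffled[n_val + n_test:],        # train
            shuffled[:n_val],                 # validation
            shuffled[n_val:n_val + n_test])   # test

scenarios = [f"Scenario_{i:04d}" for i in range(10)]
train, val, test = split_scenarios(scenarios)
```

Fixing the seed makes the split reproducible, which is what env.train_val_split_random also aims at.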

Detailed Documentation by class

Classes:

ChangeNothing([time_interval, max_iter, ...])

INTERNAL

ChronicsHandler([chronicsClass, ...])

Represents a Chronics handler that returns a grid state.

GridStateFromFile(path[, sep, ...])

INTERNAL

GridStateFromFileWithForecasts(path[, sep, ...])

An extension of GridStateFromFile that implements the "forecast" functionality.

GridStateFromFileWithForecastsWithMaintenance(path)

An extension of GridStateFromFileWithForecasts that implements the maintenance chronic generator on the fly (maintenance are not read from files, but are rather generated when the chronics is created).

GridStateFromFileWithForecastsWithoutMaintenance(path)

INTERNAL

GridValue([time_interval, max_iter, ...])

This is the base class for every kind of data for the grid.

Multifolder(path[, time_interval, ...])

The classes GridStateFromFile and GridStateFromFileWithForecasts implement the reading of a single folder representing a single episode.

MultifolderWithCache(path[, time_interval, ...])

This class is a particular type of Multifolder that, instead of reading everything from disk each time, stores it in memory.

ReadPypowNetData(path[, sep, time_interval, ...])

DEPRECATED, this class is no longer used nor tested.

class grid2op.Chronics.ChangeNothing(time_interval=datetime.timedelta(0, 300), max_iter=-1, start_datetime=datetime.datetime(2019, 1, 1, 0, 0), chunk_size=None, **kargs)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Do not attempt to create an object of this class. This is initialized by the environment at its creation.

This set of classes is mainly internal.

We don’t recommend changing anything in these classes, unless you want to code a custom “chronics class”.

This class is the most basic class used to modify powergrid values. It does nothing aside from increasing GridValue.max_iter and GridValue.current_datetime.

Examples

Usage example (for something you normally don’t have to do yourself):

import grid2op
from grid2op.Chronics import ChangeNothing

env_name = ...
env = grid2op.make(env_name, data_feeding_kwargs={"gridvalueClass": ChangeNothing})

It can also be used with the “blank” environment:

import grid2op
from grid2op.Chronics import ChangeNothing
from grid2op.Action import TopologyAndDispatchAction

EXAMPLE_CASEFILE = ...  # path to the file describing the powergrid
env = grid2op.make("blank",
                   test=True,
                   grid_path=EXAMPLE_CASEFILE,
                   chronics_class=ChangeNothing,
                   action_class=TopologyAndDispatchAction)

Methods:

check_validity(backend)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is called at the creation of the environment to ensure the Backend and the chronics are consistent with one another.

A call to this method ensures that the actions that will be sent to the current grid2op.Environment can be properly implemented by its grid2op.Backend. This specific method checks that the dimensions of all vectors are consistent.

Parameters

backend (grid2op.Backend.Backend) – The backend used by the grid2op.Environment.Environment

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

This function is used to initialize the data generator. It can be used to load scenarios, or to initialize the noise if scenarios are generated on the fly. It must also initialize GridValue.maintenance_time, GridValue.maintenance_duration and GridValue.hazard_duration.

This function should also increment GridValue.curr_iter by 1 each time it is called.

The GridValue is what makes the connection between the data (generally in the shape of files on the hard drive) and the power grid. One of the main advantages of the Grid2Op package is its ability to change the tool that computes the load flows. Generally, such a grid2op.Backend expects data in a specific format that is given by the way its internal powergrid is represented, and in particular, the “same” objects can have different names and different positions. To ensure that the same chronics produce the same results on every backend (i.e. regardless of the order in which the Backend expects the data, the outcome of the powerflow is the same), we encourage the user to provide a file that maps the names of the objects in the chronics to the names of the same objects in the backend.

This is done with the “names_chronics_to_backend” dictionary that has the following keys:

  • “loads”

  • “prods”

  • “lines”

The value associated with each of these keys is in turn a mapping dictionary from the chronics to the backend. This means that each key of these sub-dictionaries is the name of one column in the files, and each value is the corresponding name of this same object in the backend. An example is provided below.

Parameters
  • order_backend_loads (numpy.ndarray, dtype:str) – Ordered names, in the Backend, of the loads. It is required that a grid2op.Backend object always outputs the information in the same order. This array gives the names of the loads following this order. See the documentation of grid2op.Backend for more information about this.

  • order_backend_prods (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for generators.

  • order_backend_lines (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for powerlines.

  • order_backend_subs (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for substations.

  • names_chronics_to_backend (dict) – See in the description of the method for more information about its format.

Examples

For example, suppose we have a grid2op.Backend with:

  • substation ids start at 0 and go to N-1 (N being the number of substations in the powergrid)

  • loads are named “load_i”, with “i” the id of the substation to which they are connected

  • generator units are named “gen_i” (i still being the id of the substation to which they are connected)

  • powerlines are named “i_j” if they connect substation i to substation j

And on the other side, we have some files with the following conventions:

  • substations are numbered from 1 to N

  • loads are named “i_C”, with i being the id of the substation to which they are connected

  • generators are named “i_G”, with i being the id of the substation to which they are connected

  • powerlines are named “i_j_k”, where i is the origin substation, j the extremity substation and “k” is a unique identifier of this powerline in the powergrid.

In this case, instead of renaming the powergrid (in the backend) or the data files, it is advised to build the following elements and initialize the object gridval of type GridValue with:

gridval = GridValue()  # Note: this code won't execute because "GridValue" is an abstract class
order_backend_loads = ['load_1', 'load_2', 'load_13', 'load_3', 'load_4', 'load_5', 'load_8', 'load_9',
                       'load_10', 'load_11', 'load_12']
order_backend_prods = ['gen_1', 'gen_2', 'gen_5', 'gen_7', 'gen_0']
order_backend_lines = ['0_1', '0_4', '8_9', '8_13', '9_10', '11_12', '12_13', '1_2', '1_3', '1_4', '2_3',
                       '3_4', '5_10', '5_11', '5_12', '3_6', '3_8', '4_5', '6_7', '6_8']
order_backend_subs = ['sub_0', 'sub_1', 'sub_10', 'sub_11', 'sub_12', 'sub_13', 'sub_2', 'sub_3', 'sub_4',
                      'sub_5', 'sub_6', 'sub_7', 'sub_8', 'sub_9']
names_chronics_to_backend = {"loads": {"2_C": 'load_1', "3_C": 'load_2',
                                       "14": 'load_13', "4_C": 'load_3', "5_C": 'load_4',
                                       "6_C": 'load_5', "9_C": 'load_8', "10_C": 'load_9',
                                       "11_C": 'load_10', "12_C": 'load_11',
                                       "13_C": 'load_12'},
                             "lines": {'1_2_1': '0_1', '1_5_2': '0_4', '9_10_16': '8_9', '9_14_17': '8_13',
                                       '10_11_18': '9_10', '12_13_19': '11_12', '13_14_20': '12_13',
                                       '2_3_3': '1_2', '2_4_4': '1_3', '2_5_5': '1_4', '3_4_6': '2_3',
                                       '4_5_7': '3_4', '6_11_11': '5_10', '6_12_12': '5_11',
                                       '6_13_13': '5_12', '4_7_8': '3_6', '4_9_9': '3_8', '5_6_10': '4_5',
                                       '7_8_14': '6_7', '7_9_15': '6_8'},
                             "prods": {"1_G": 'gen_0', "3_G": "gen_2", "6_G": "gen_5",
                                       "2_G": "gen_1", "8_G": "gen_7"},
                             }
gridval.initialize(order_backend_loads, order_backend_prods, order_backend_lines,
                   order_backend_subs, names_chronics_to_backend)

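
The role of this mapping can be checked with a few lines of plain Python: given one row of chronics data, keyed by the chronics names, the values are re-emitted in the order the backend expects (the numeric values below are made up):

```python
# one row of "load_p" data, keyed by the *chronics* names
chronics_row = {"2_C": 21.0, "3_C": 87.0, "14": 7.9}

# excerpt of the "loads" sub-dictionary shown above: chronics name -> backend name
loads_map = {"2_C": "load_1", "3_C": "load_2", "14": "load_13"}

# order in which the backend expects the loads
order_backend_loads = ["load_1", "load_2", "load_13"]

# invert the mapping, then emit the values in backend order
backend_to_chronics = {v: k for k, v in loads_map.items()}
row_backend_order = [chronics_row[backend_to_chronics[name]] for name in order_backend_loads]
print(row_backend_order)  # [21.0, 87.0, 7.9]
```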
load_next()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is automatically called by the “env.step” function. It loads the next information about the grid state (load p and load q, prod p and prod v, as well as some maintenance or hazards information).

Generates the next values, either by reading them from a file or by generating them on the fly, and returns a dictionary compatible with the grid2op.BaseAction class allowed for the Environment.

More information about this dictionary can be found at grid2op.BaseAction.update().

As a (quick) reminder: this dictionary has for keys:

  • “injection” (optional): a dictionary with keys (optional) “load_p”, “load_q”, “prod_p”, “prod_v”

  • “hazards” (optional) : the outages suffered by the grid

  • “maintenance” (optional) : the maintenance operations planned on the grid for the current time step.

Returns

  • timestamp (datetime.datetime) – The current timestamp for which the modifications have been generated.

  • dict_ (dict) – Always empty, indicating that nothing is done.

  • maintenance_time (numpy.ndarray, dtype:int) – Information about the next planned maintenance. See GridValue.maintenance_time for more information.

  • maintenance_duration (numpy.ndarray, dtype:int) – Information about the duration of next planned maintenance. See GridValue.maintenance_duration for more information.

  • hazard_duration (numpy.ndarray, dtype:int) – Information about the current hazard. See GridValue.hazard_duration for more information.

  • prod_v (numpy.ndarray, dtype:float) – the (stored) value of the generator voltage setpoint

Raises

StopIteration – if the chronics is over
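
The StopIteration protocol described above can be mimicked with a minimal, purely illustrative stand-in (this is not a real GridValue subclass):

```python
import datetime

class TinyChronics:
    """Illustrative stand-in mimicking the load_next() protocol."""
    def __init__(self, n_steps):
        self.n_steps = n_steps
        self.curr_iter = 0
        self.current_datetime = datetime.datetime(2019, 1, 1)

    def load_next(self):
        if self.curr_iter >= self.n_steps:
            raise StopIteration  # the chronics is over
        self.curr_iter += 1
        self.current_datetime += datetime.timedelta(minutes=5)
        return self.current_datetime, {}  # empty dict: "change nothing"

chron = TinyChronics(n_steps=3)
steps = []
try:
    while True:
        steps.append(chron.load_next())
except StopIteration:
    pass  # this is how the environment detects the end of the episode
```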

next_chronics()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Move to the next “chronics”, representing the next “level” if we draw a parallel with video games.

A call to this function should at least restart:

class grid2op.Chronics.ChronicsHandler(chronicsClass=<class 'grid2op.Chronics.ChangeNothing.ChangeNothing'>, time_interval=datetime.timedelta(0, 300), max_iter=-1, **kwargs)[source]

Represents a Chronics handler that returns a grid state.

As stated previously, it is not recommended to create an object directly from the class GridValue. This utility ensures that such objects are properly created.

The types of chronics used can be specified in the ChronicsHandler.chronicsClass attribute.

chronicsClass

Type of chronics that will be loaded and generated. Default is ChangeNothing (NB: the class itself, and not an object / instance of the class, should be passed here). This should be a class derived from GridValue.

Type

type, optional

kwargs

Keyword arguments that will be used to build new chronics.

Type

dict, optional

max_iter

Maximum number of iterations per episode.

Type

int, optional

real_data

An instance of type given by ChronicsHandler.chronicsClass.

Type

GridValue

path

path where the data are located.

Type

str (or None)

Methods:

get_name()[source]

This method retrieves a unique name that is used to serialize episode data on disk.

See definition of EpisodeData for more information about this method.

max_episode_duration()[source]

Returns

max_duration – The maximum duration of the current episode

Return type

int

Notes

If you use this function (which we do not recommend), you will receive “-1” for “infinite duration”; otherwise you will receive a positive integer.

next_time_step()[source]

This method returns the modification of the powergrid at the next time step for the same episode.

See definition of GridValue.load_next() for more information about this method.

seed(seed)[source]

Seed the chronics handler and the GridValue that is used to generate the chronics.

Parameters

seed (int) – Set the seed for this instance and for the data it holds

Returns

  • seed (int) – The seed used for this object

  • seed_chronics (int) – The seed used for the real data

set_max_iter(max_iter)[source]

This function is used to set the maximum number of iterations possible before the chronics ends.

Parameters

max_iter (int) – The maximum number of steps that can be done before reaching the end of the episode

class grid2op.Chronics.GridStateFromFile(path, sep=';', time_interval=datetime.timedelta(0, 300), max_iter=-1, start_datetime=datetime.datetime(2019, 1, 1, 0, 0), chunk_size=None)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Do not attempt to create an object of this class. This is initialized by the environment at its creation.

Read the injection values from a file stored on the hard drive. More detail about the files is provided in the GridStateFromFile.initialize() method.

This class reads only files stored as csv. The header of the csv is mandatory and should give the names of the objects. These names should either be matched to the names of the same objects in the backend using the names_chronics_to_backend argument passed to GridStateFromFile.initialize() (see GridValue.initialize() for more information), or match the names of the objects in the backend.

When the grid value is initialized, all the csv files present are read, sorted in an order compatible with the backend, and extracted as numpy arrays.

For now, the current date and times are not read from file. It is mandatory that the chronics start at 00:00 and that the first time stamp corresponds to January 1st, 2019.

Chronics read from these files don’t implement the “forecast” value.

Only one episode is stored in these values. If the end of the episode is reached and another one should start, it will loop from the beginning.

It reads the following files from the “path” location specified:

  • “prod_p.csv”: for each time step, this file contains the value of the active production of each generator of the grid (it has as many rows as the number of time steps, plus a header) and as many columns as the number of generators on the grid. The header must contain the names of the generators used to map their values on the grid. Values must be convertible to floating point, and the column separator of this file should be the semicolon ; (unless you specify a “sep” when loading this class)

  • “prod_v.csv”: same as “prod_p.csv” but for the production voltage setpoint.

  • “load_p.csv”: same as “prod_p.csv” but for the load active value (number of columns = number of loads)

  • “load_q.csv”: same as “prod_p.csv” but for the load reactive value (number of columns = number of loads)

  • “maintenance.csv”: indicates whether or not there is a maintenance operation for a given powerline (column) at each time step (row).

  • “hazards.csv”: indicates whether or not there is a hazard for a given powerline (column) at each time step (row).

  • “start_datetime.info”: the time stamp (date and time) at which the chronic is starting.

  • “time_interval.info”: the amount of time between two consecutive steps (e.g. 5 mins, or 1h)

If a file is missing, it is understood as “this value will not be modified”. For example, if the file “prod_v.csv” is not present, it is equivalent to never modifying the production voltage setpoint.

Unless the attribute GridStateFromFile.sep is modified, the above tables should be semicolon (;) separated.
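
A minimal folder in this format can be produced with the standard library alone. The file names and the semicolon separator follow the description above; the numeric values are made up, and only two of the files are written:

```python
import csv
import os
import tempfile

def write_chronics_csv(path, name, header, rows):
    """Write one semicolon-separated chronics file, e.g. prod_p.csv."""
    with open(os.path.join(path, name), "w", newline="") as f:
        writer = csv.writer(f, delimiter=";")
        writer.writerow(header)   # object names, used to map columns to the grid
        writer.writerows(rows)    # one row per time step

folder = tempfile.mkdtemp()
# 3 time steps, 2 generators / 2 loads (toy values)
write_chronics_csv(folder, "prod_p.csv", ["gen_0", "gen_1"],
                   [[80.0, 120.0], [81.5, 118.0], [79.0, 121.0]])
write_chronics_csv(folder, "load_p.csv", ["load_0", "load_1"],
                   [[21.0, 87.0], [22.0, 85.0], [20.5, 88.0]])

print(sorted(os.listdir(folder)))  # ['load_p.csv', 'prod_p.csv']
```

Since “prod_v.csv”, “load_q.csv”, “maintenance.csv” and “hazards.csv” are absent, the corresponding values are simply never modified, per the rule above.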

path

The path of the folder where the data are stored. It is recommended to use absolute paths, not relative paths.

Type

str

load_p

All the values of the load active values

Type

numpy.ndarray, dtype: float

load_q

All the values of the load reactive values

Type

numpy.ndarray, dtype: float

prod_p

All the productions setpoint active values.

Type

numpy.ndarray, dtype: float

prod_v

All the productions setpoint voltage magnitude values.

Type

numpy.ndarray, dtype: float

hazards

This vector represents the possible hazards. It is understood as: True, there is a hazard for the given powerline; False, there is not.

Type

numpy.ndarray, dtype: bool

maintenance

This vector represents the possible maintenance. It is understood as: True, there is a maintenance for the given powerline; False, there is not.

Type

numpy.ndarray, dtype: bool

current_index

The index of the last observation sent to the grid2op.Environment.

Type

int

sep

The csv column separator. By default it is “;”.

Type

str, optional

names_chronics_to_backend

This dictionary maps the name of each object (line extremity, generator or load) to the same object in the backend. See the help of GridValue.initialize() for more information.

Type

dict

Methods:

check_validity(backend)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is called at the creation of the environment to ensure the Backend and the chronics are consistent with one another.

A call to this method ensures that the actions that will be sent to the current grid2op.Environment can be properly implemented by its grid2op.Backend. This specific method checks that the dimensions of all vectors are consistent.

Parameters

backend (grid2op.Backend.Backend) – The backend used by the grid2op.Environment.Environment

done()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Compared to GridValue.done(), an episode can be over for 2 main reasons:

The episode is done if one of the above conditions is met.

Returns

res – Whether the episode has reached its end or not.

Return type

bool

get_id() → str[source]

Utility to get the name of the path where the data currently looked at are located, if the data come from files.

This could also be used to return a unique identifier of the generated chronics, even in the case where they are generated on the fly, for example by returning a hash of the seed.

Returns

res – A unique identifier of the chronics generated for this episode. For example, if the chronics comes from a specific folder, this could be the path to this folder.

Return type

str

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Called at the creation of the environment.

In this function, the numpy arrays are read from the csv files using the pandas.DataFrame engine.

In order to be valid, the folder located at GridStateFromFile.path can contain:

All these csv must have the same separator, specified by GridStateFromFile.sep. If one of these files is missing, it is equivalent to the “change nothing” behaviour for that value.

If a file named “start_datetime.info” is present, it will be used to initialize GridStateFromFile.start_datetime. If this file exists, it should contain only one row, with the initial datetime in the “%Y-%m-%d %H:%M” format.

If a file named “time_interval.info” is present, it will be used to initialize the GridStateFromFile.time_interval attribute. If this file exists, it should contain only one row, with the time interval in the “%H:%M” format. Only timedeltas composed of hours and minutes are supported (the time delta cannot exceed 23 hours 55 minutes and cannot be smaller than 1 minute).
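
The two formats can be parsed with the standard datetime module; the code below only illustrates the formats, it is not grid2op's own parsing code:

```python
import datetime

# content of a "start_datetime.info" file: one row, "%Y-%m-%d %H:%M" format
start_datetime = datetime.datetime.strptime("2019-01-01 00:00", "%Y-%m-%d %H:%M")

# content of a "time_interval.info" file: one row, "%H:%M" format;
# only hours and minutes, so it is converted to a timedelta by hand
parsed = datetime.datetime.strptime("00:05", "%H:%M")
time_interval = datetime.timedelta(hours=parsed.hour, minutes=parsed.minute)

print(start_datetime + time_interval)  # 2019-01-01 00:05:00
```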

The first row of these csv files is understood as the name of the object concerned by each column. Either this name is present in the grid2op.Backend, in which case no modification is performed, or it is not found in the backend, in which case how to interpret it must be specified in the “names_chronics_to_backend” parameter. See the help of GridValue.initialize() for more information about this dictionary.

All files should have the same number of rows.

See the help of GridValue.initialize() for a detailed description of the parameters.

load_next()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is automatically called by the “env.step” function. It loads the next information about the grid state (load p and load q, prod p and prod v, as well as some maintenance or hazards information).

Generates the next values, either by reading them from a file or by generating them on the fly, and returns a dictionary compatible with the grid2op.BaseAction class allowed for the Environment.

More information about this dictionary can be found at grid2op.BaseAction.update().

As a (quick) reminder: this dictionary has for keys:

  • “injection” (optional): a dictionary with keys (optional) “load_p”, “load_q”, “prod_p”, “prod_v”

  • “hazards” (optional) : the outages suffered by the grid

  • “maintenance” (optional) : the maintenance operations planned on the grid for the current time step.

Returns

  • timestamp (datetime.datetime) – The current timestamp for which the modifications have been generated.

  • dict_ (dict) – Always empty, indicating that nothing is done.

  • maintenance_time (numpy.ndarray, dtype:int) – Information about the next planned maintenance. See GridValue.maintenance_time for more information.

  • maintenance_duration (numpy.ndarray, dtype:int) – Information about the duration of next planned maintenance. See GridValue.maintenance_duration for more information.

  • hazard_duration (numpy.ndarray, dtype:int) – Information about the current hazard. See GridValue.hazard_duration for more information.

  • prod_v (numpy.ndarray, dtype:float) – the (stored) value of the generator voltage setpoint

Raises

StopIteration – if the chronics is over

next_chronics()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Move to the next “chronics”, representing the next “level” if we draw a parallel with video games.

A call to this function should at least restart:

set_chunk_size(new_chunk_size)[source]

This method allows setting, if the data generation process supports it, the amount of data that is read at a time. It can help speed up the computation process by giving more control over the I/O operations.

Parameters

new_chunk_size (int) – The chunk size (i.e. the number of rows that will be read from each data set at a time)

split_and_save(datetime_beg, datetime_end, path_out)[source]

You can use this function to save the values of the chronics in a format that will be loadable by GridStateFromFile.

Notes

Prefer using Multifolder.split_and_save(), which handles different chronics.

Parameters
  • datetime_beg (str) – Time stamp of the beginning of the data you want to save (time stamp in “%Y-%m-%d %H:%M” format)

  • datetime_end (str) – Time stamp of the end of the data you want to save (time stamp in “%Y-%m-%d %H:%M” format)

  • path_out (str) – Location where to save the data

class grid2op.Chronics.GridStateFromFileWithForecasts(path, sep=';', time_interval=datetime.timedelta(0, 300), max_iter=-1, chunk_size=None)[source]

An extension of GridStateFromFile that implements the “forecast” functionality.

Forecasts are also read from a file. For this class, only one forecast per time step is read. The “forecast” present in the file at row i is the one available at the corresponding time step, i.e. valid for the grid state at the next time step.

To have more advanced forecasts, this class could be overridden.

load_p_forecast

Array used to store the forecasts of the load active values.

Type

numpy.ndarray, dtype: float

load_q_forecast

Array used to store the forecasts of the load reactive values.

Type

numpy.ndarray, dtype: float

prod_p_forecast

Array used to store the forecasts of the generator active production setpoint.

Type

numpy.ndarray, dtype: float

prod_v_forecast

Array used to store the forecasts of the generator voltage magnitude setpoint.

Type

numpy.ndarray, dtype: float

maintenance_forecast

Array used to store the forecasts of the maintenance operations.

Type

numpy.ndarray, dtype: float

Methods:

check_validity(backend)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is called at the creation of the environment to ensure the Backend and the chronics are consistent with one another.

A call to this method ensures that the actions that will be sent to the current grid2op.Environment can be properly implemented by its grid2op.Backend. This specific method checks that the dimensions of all vectors are consistent.

Parameters

backend (grid2op.Backend.Backend) – The backend used by the grid2op.Environment.Environment

forecasts()[source]

This is the major difference between GridStateFromFileWithForecasts and GridStateFromFile: it returns non-empty forecasts.

As explained in GridValue.forecasts(), forecasts are made of a list of tuples, each tuple having exactly 2 elements:

  1. The time stamp of the forecast

  2. A grid2op.BaseAction representing the modification of the powergrid after the forecast

For this class, only the forecast of the next time step is given, and only for the injections and maintenance.

Returns

Return type

See GridValue.forecasts() for more information.
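
The shape of the returned value can be sketched with plain Python objects (illustrative only: in grid2op the second element of each tuple is a grid2op.BaseAction, built from a dictionary such as the one below, and the numeric values are made up):

```python
import datetime

# one forecast: (timestamp it is valid for, modification of the grid)
forecast_for_next_step = (
    datetime.datetime(2019, 1, 1, 0, 5),
    {"injection": {"load_p": [21.2, 86.5], "prod_p": [80.3, 119.1]}},
)

# forecasts() returns a list of such tuples (a single one for this class,
# since only the next time step is forecast)
res = [forecast_for_next_step]
```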

get_id() → str[source]

Utility to get the name of the path where the data currently looked at are located, if the data come from files.

This could also be used to return a unique identifier of the generated chronics, even in the case where they are generated on the fly, for example by returning a hash of the seed.

Returns

res – A unique identifier of the chronics generated for this episode. For example, if the chronics comes from a specific folder, this could be the path to this folder.

Return type

str

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

The same condition as GridStateFromFile.initialize applies also for GridStateFromFileWithForecasts.load_p_forecast, GridStateFromFileWithForecasts.load_q_forecast, GridStateFromFileWithForecasts.prod_p_forecast, GridStateFromFileWithForecasts.prod_v_forecast and GridStateFromFileWithForecasts.maintenance_forecast.

See the help of GridValue.initialize() for a detailed description of the parameters.

class grid2op.Chronics.GridStateFromFileWithForecastsWithMaintenance(path, sep=';', time_interval=datetime.timedelta(0, 300), max_iter=-1, chunk_size=None)[source]

An extension of GridStateFromFileWithForecasts that implements the maintenance chronic generator on the fly (maintenance are not read from files, but are rather generated when the chronics is created).

maintenance_starting_hour

The hour at which every maintenance will start

Type

int

maintenance_ending_hour

The hour at which every maintenance will end (we assume maintenance ends on the same day, for now).

Type

int

line_to_maintenance

Array used to store the names of the lines that can be in maintenance

Type

array, dtype: string

daily_proba_per_month_maintenance

Array used to store the probability that each line is in maintenance on a given day, for each month

Type

array, dtype: float

max_daily_number_per_month_maintenance

Array used to store the maximum number of maintenance operations per day, for each month

Type

array, dtype: int

Methods:

initialize(order_backend_loads, ...[, ...])

The same condition as GridStateFromFile.initialize applies also for GridStateFromFileWithForecasts.load_p_forecast, GridStateFromFileWithForecasts.load_q_forecast, GridStateFromFileWithForecasts.prod_p_forecast, GridStateFromFileWithForecasts.prod_v_forecast and GridStateFromFileWithForecasts.maintenance_forecast.

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

The same condition as GridStateFromFile.initialize applies also for GridStateFromFileWithForecasts.load_p_forecast, GridStateFromFileWithForecasts.load_q_forecast, GridStateFromFileWithForecasts.prod_p_forecast, GridStateFromFileWithForecasts.prod_v_forecast and GridStateFromFileWithForecasts.maintenance_forecast.

See the help of GridValue.initialize() for a detailed description of the parameters.

class grid2op.Chronics.GridStateFromFileWithForecastsWithoutMaintenance(path, sep=';', time_interval=datetime.timedelta(0, 300), max_iter=- 1, chunk_size=None)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This class is made mainly for debugging. And it is not well tested.

Behaves exactly like GridStateFromFileWithForecasts but ignores all maintenance and hazards.

Examples

You can use it as follows:

import grid2op
from grid2op.Chronics import GridStateFromFileWithForecastsWithoutMaintenance

env = grid2op.make(ENV_NAME,
                   data_feeding_kwargs={"gridvalueClass": GridStateFromFileWithForecastsWithoutMaintenance},
                   )

# even if there are maintenance in the environment, they will not be used.

Methods:

initialize(order_backend_loads, ...[, ...])

The same conditions as for GridStateFromFile.initialize also apply to GridStateFromFileWithForecasts.load_p_forecast, GridStateFromFileWithForecasts.load_q_forecast, GridStateFromFileWithForecasts.prod_p_forecast, GridStateFromFileWithForecasts.prod_v_forecast and GridStateFromFileWithForecasts.maintenance_forecast.

load_next()

INTERNAL

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

The same conditions as for GridStateFromFile.initialize also apply to GridStateFromFileWithForecasts.load_p_forecast, GridStateFromFileWithForecasts.load_q_forecast, GridStateFromFileWithForecasts.prod_p_forecast, GridStateFromFileWithForecasts.prod_v_forecast and GridStateFromFileWithForecasts.maintenance_forecast.

See the help of GridValue.initialize() for a detailed description of the parameters.

load_next()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is automatically called by the “env.step” function. It loads the next information about the grid state (load p and load q, prod p and prod v as well as some maintenance or hazards information)

Generate the next values, either by reading from a file or by generating them on the fly, and return a dictionary compatible with the grid2op.BaseAction class allowed for the Environment.

More information about this dictionary can be found at grid2op.BaseAction.update().

As a (quick) reminder: this dictionary has for keys:

  • “injection” (optional): a dictionary with keys (optional) “load_p”, “load_q”, “prod_p”, “prod_v”

  • “hazards” (optional) : the outage suffered from the _grid

  • “maintenance” (optional) : the maintenance operations planned on the grid for the current time step.

Returns

  • timestamp (datetime.datetime) – The current timestamp for which the modifications have been generated.

  • dict_ (dict) – Always empty, indicating that no modification is performed.

  • maintenance_time (numpy.ndarray, dtype:int) – Information about the next planned maintenance. See GridValue.maintenance_time for more information.

  • maintenance_duration (numpy.ndarray, dtype:int) – Information about the duration of next planned maintenance. See GridValue.maintenance_duration for more information.

  • hazard_duration (numpy.ndarray, dtype:int) – Information about the current hazard. See GridValue.hazard_duration for more information.

  • prod_v (numpy.ndarray, dtype:float) – the (stored) value of the generator voltage setpoint

Raises

StopIteration – if the chronics is over

class grid2op.Chronics.GridValue(time_interval=datetime.timedelta(0, 300), max_iter=- 1, start_datetime=datetime.datetime(2019, 1, 1, 0, 0), chunk_size=None)[source]

This is the base class for every kind of data for the _grid.

It allows the grid2op.Environment to perform powergrid modifications that make the “game” time dependent.

It is not recommended to directly create GridValue objects, but rather to use the grid2op.Environment.chronics_handler for such a purpose. This is made in an attempt to make sure GridValue.initialize() is called. Before this initialization, it is not recommended to use any GridValue object.

The method GridValue.next_chronics() should be used between two epochs of the game. If there are no more data to be generated from this object, then GridValue.load_next() should raise a StopIteration exception and a call to GridValue.done() should return True.

In grid2op, the productions and loads (and hazards or maintenance) can be stored in this type of “GridValue”. This class will map the data generated (or read from a file) and assign each element of the powergrid its proper value at each time step.

time_interval

Time interval between 2 consecutive timestamps. Default 5 minutes.

Type

datetime.timedelta

start_datetime

The datetime of the first timestamp of the scenario.

Type

datetime.datetime

current_datetime

The timestamp of the current scenario.

Type

datetime.datetime

max_iter

Maximum number of time steps to generate for one episode.

Type

int

curr_iter

Current iteration (number of time steps elapsed) in the episode.

Type

int

maintenance_time

Number of time steps before the next maintenance takes place, with the following convention:

  • -1 no maintenance is planned for the foreseeable future

  • 0 a maintenance is taking place

  • 1, 2, 3 … a maintenance will take place in 1, 2, 3, … time steps

Some examples are given in GridValue.get_maintenance_time_1d().

Type

numpy.ndarray, dtype:int

maintenance_duration

Duration of the next maintenance. 0 means no maintenance is planned. If a maintenance is planned for a given powerline, this number decreases each time step, until reaching 0 when the maintenance is over. Note that if a maintenance is planned (see GridValue.maintenance_time) this number indicates how long the maintenance will last, and does not imply that a maintenance is currently taking place (i.e. this number can be positive without the powerline being removed from the grid for maintenance). Some examples are given in GridValue.get_maintenance_duration_1d().

Type

numpy.ndarray, dtype:int

hazard_duration

Duration of the next hazard. 0 means no hazard is happening. If a hazard is taking place for a given powerline, this number decreases each time step, until reaching 0 when the hazard is over. Contrary to GridValue.maintenance_duration, if a component of this vector is positive, it means that the powerline is out of service. Some examples are given in GridValue.get_hazard_duration_1d().

Type

numpy.ndarray, dtype:int

Methods:

abstractmethod check_validity(backend)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is called at the creation of the environment to ensure the Backend and the chronics are consistent with one another.

A call to this method ensures that the action that will be sent to the current grid2op.Environment can be properly implemented by its grid2op.Backend. This specific method checks that the dimensions of all vectors are consistent.

Parameters

backend (grid2op.Backend.Backend) – The backend used by the grid2op.Environment.Environment

done()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Use the ChronicsHandler for such a purpose.

Whether the episode is over or not.

Returns

done – True means the episode has reached its end (no more data to generate), False means the episode is not over yet.

Return type

bool

fast_forward(nb_timestep)[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Prefer using grid2op.Environment.BaseEnv.fast_forward_chronics()

This method allows you to skip some time steps at the beginning of the chronics.

This is useful at the beginning of the training, if you want your agent to learn on more diverse scenarios. Indeed, the data provided in the chronics usually always start at the same datetime (for example Jan 1st at 00:00). This can lead to suboptimal exploration: during this phase, the agent only manages a few time steps, so in general these few time steps will correspond to grid states around Jan 1st at 00:00.

Parameters

nb_timestep (int) – Number of time step to “fast forward”

forecasts()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Use the ChronicsHandler for such a purpose.

This method is used to generate the forecasts that are made available to the grid2op.BaseAgent. These forecasts behave the same way as the list of tuples returned by the GridValue.load_next() method.

The way they are generated depends on the GridValue class. If no forecasts are made available, then an empty list should be returned.

Returns

res – Each element of this list having the same type as what is returned by GridValue.load_next().

Return type

list

staticmethod get_hazard_duration_1d(hazard)[source]

This function allows to transform a 1d numpy array of hazards, where:

  • 0 means there is no hazard at this time step

  • 1 means there is a hazard at this time step

into the representation in terms of “hazard duration” as specified in GridValue.hazard_duration, which is:

  • 0 no foreseeable hazard is taking place

  • 1, 2 etc. is the number of time steps the current hazard will last (it is positive only while a hazard

    affects a given powerline)

Compared to GridValue.get_maintenance_duration_1d(), the duration of a hazard is only known once the hazard occurs.

Parameters

hazard (numpy.ndarray) – 1 dimensional array representing the time series of the hazards (0 there is no hazard, 1 there is a hazard at this time step)

Returns

hazard_duration – Array representing the time series of the durations of the foreseeable hazards.

Return type

numpy.ndarray

Examples

If no hazards occur:

hazard = np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0])
hazard_duration = GridValue.get_hazard_duration_1d(hazard)
assert np.all(hazard_duration == np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]))

If a hazard of 3 time steps occurs starting at time step 6 (index 5, since indices start at 0):

hazard = np.array([0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0])
hazard_duration = GridValue.get_hazard_duration_1d(hazard)
assert np.all(hazard_duration == np.array([0,0,0,0,0,3,2,1,0,0,0,0,0,0,0,0]))

If a hazard of 3 time steps occurs starting at time step 6 (index 5, since indices start at 0), and a second one of 2 time steps at time step 13:

hazard = np.array([0,0,0,0,0,1,1,1,0,0,0,0,1,1,0,0,0])
hazard_duration = GridValue.get_hazard_duration_1d(hazard)
assert np.all(hazard_duration == np.array([0,0,0,0,0,3,2,1,0,0,0,0,2,1,0,0,0]))
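The conversion illustrated above can be sketched in plain Python (a hypothetical re-implementation for illustration only; the real method is GridValue.get_hazard_duration_1d() and operates on numpy arrays):

```python
def hazard_duration_1d(hazard):
    """Convert a 0/1 hazard time series into the 'hazard duration' convention:
    while a hazard affects the line, the value counts down to 1; it is 0 otherwise."""
    n = len(hazard)
    res = [0] * n
    run = 0  # length of the hazard block built while scanning right to left
    for t in range(n - 1, -1, -1):
        if hazard[t]:
            if t == n - 1 or not hazard[t + 1]:
                run = 0  # a new hazard block starts (seen from the right)
            run += 1
            res[t] = run
    return res

# a hazard of 3 steps starting at index 5
assert hazard_duration_1d([0, 0, 0, 0, 0, 1, 1, 1, 0, 0]) == [0, 0, 0, 0, 0, 3, 2, 1, 0, 0]
```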
get_id() → str[source]

Utility to get the path of the data currently in use, if the data come from files.

This could also be used to return a unique identifier of the generated chronics, even when they are generated on the fly, for example by returning a hash of the seed.

Returns

res – A unique identifier of the chronics generated for this episode. For example, if the chronics comes from a specific folder, this could be the path to this folder.

Return type

str
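For chronics generated on the fly, one way to build such an identifier is to hash the seed, as suggested above (a minimal sketch; the helper name is hypothetical, not part of grid2op):

```python
import hashlib

def chronics_id_from_seed(seed: int) -> str:
    # Hypothetical helper: a stable identifier derived from the generation seed
    return hashlib.sha256(str(seed).encode("utf-8")).hexdigest()[:16]

# The identifier is deterministic: the same seed always maps to the same id
assert chronics_id_from_seed(42) == chronics_id_from_seed(42)
assert chronics_id_from_seed(42) != chronics_id_from_seed(43)
```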

staticmethod get_maintenance_duration_1d(maintenance)[source]

This function allows to transform a 1d numpy array of maintenance, where:

  • 0 means there is no maintenance at this time step

  • 1 means there is a maintenance at this time step

into the representation in terms of “next maintenance duration” as specified in GridValue.maintenance_duration, which is:

  • 0 no foreseeable maintenance operation will be performed

  • 1, 2 etc. is the number of time steps the next maintenance will last (it can be positive even when

    no maintenance is currently being performed)

Parameters

maintenance (numpy.ndarray) – 1 dimensional array representing the time series of the maintenance (0 there is no maintenance, 1 there is a maintenance at this time step)

Returns

maintenance_duration – Array representing the time series of the duration of the next foreseeable maintenance.

Return type

numpy.ndarray

Examples

If no maintenance is planned:

maintenance = np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0])
maintenance_duration = GridValue.get_maintenance_duration_1d(maintenance)
assert np.all(maintenance_duration == np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]))

If a maintenance of 3 time steps is planned starting at time step 6 (index 5, since indices start at 0):

maintenance = np.array([0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0])
maintenance_duration = GridValue.get_maintenance_duration_1d(maintenance)
assert np.all(maintenance_duration == np.array([3,3,3,3,3,3,2,1,0,0,0,0,0,0,0,0]))

If a maintenance of 3 time steps is planned starting at time step 6 (index 5, since indices start at 0), and a second one of 2 time steps at time step 13:

maintenance = np.array([0,0,0,0,0,1,1,1,0,0,0,0,1,1,0,0,0])
maintenance_duration = GridValue.get_maintenance_duration_1d(maintenance)
assert np.all(maintenance_duration == np.array([3,3,3,3,3,3,2,1,2,2,2,2,2,1,0,0,0]))
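The convention in the examples above can be sketched in plain Python (a hypothetical re-implementation for illustration only; the real method is GridValue.get_maintenance_duration_1d() and operates on numpy arrays):

```python
def maintenance_duration_1d(maintenance):
    """Convert a 0/1 maintenance time series into the 'next maintenance duration'
    convention: full duration of the upcoming block before it starts, a countdown
    during the block, and 0 when nothing is foreseeable."""
    n = len(maintenance)
    res = [0] * n
    run = 0  # length of the maintenance block seen so far, scanning right to left
    for t in range(n - 1, -1, -1):
        if maintenance[t]:
            if t == n - 1 or not maintenance[t + 1]:
                run = 0  # a new maintenance block starts (seen from the right)
            run += 1
            res[t] = run
        else:
            res[t] = run  # full duration of the next block ahead (0 if none)
    return res

# a maintenance of 3 steps starting at index 5
assert maintenance_duration_1d([0, 0, 0, 0, 0, 1, 1, 1, 0, 0]) == [3, 3, 3, 3, 3, 3, 2, 1, 0, 0]
```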
staticmethod get_maintenance_time_1d(maintenance)[source]

This function allows to transform a 1d numpy array of maintenance, where:

  • 0 means there is no maintenance at this time step

  • 1 means there is a maintenance at this time step

into the representation in terms of “next maintenance time” as specified in GridValue.maintenance_time, which is:

  • -1 no foreseeable maintenance operation will be performed

  • 0 a maintenance operation is being performed

  • 1, 2 etc. is the number of time steps before the next maintenance is performed

Parameters

maintenance (numpy.ndarray) – 1 dimensional array representing the time series of the maintenance (0 there is no maintenance, 1 there is a maintenance at this time step)

Returns

maintenance_time – Array representing the time series of the number of time steps before the next foreseeable maintenance.

Return type

numpy.ndarray

Examples

If no maintenance is planned:

maintenance_time = GridValue.get_maintenance_time_1d(np.array([0 for _ in range(10)]))
assert np.all(maintenance_time == np.array([-1  for _ in range(10)]))

If a maintenance of 3 time steps is planned starting at time step 6 (index 5, since indices start at 0):

maintenance = np.array([0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0])
maintenance_time = GridValue.get_maintenance_time_1d(maintenance)
assert np.all(maintenance_time == np.array([5,4,3,2,1,0,0,0,-1,-1,-1,-1,-1,-1,-1,-1]))

If a maintenance of 3 time steps is planned starting at time step 6 (index 5, since indices start at 0), and a second one of 2 time steps at time step 13:

maintenance = np.array([0,0,0,0,0,1,1,1,0,0,0,0,1,1,0,0,0])
maintenance_time = GridValue.get_maintenance_time_1d(maintenance)
assert np.all(maintenance_time == np.array([5,4,3,2,1,0,0,0,4,3,2,1,0,0,-1,-1,-1]))
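The convention in the examples above can be sketched in plain Python (a hypothetical re-implementation for illustration only; the real method is GridValue.get_maintenance_time_1d() and operates on numpy arrays):

```python
def maintenance_time_1d(maintenance):
    """Convert a 0/1 maintenance time series into the 'next maintenance time'
    convention: -1 when nothing is foreseeable, 0 during a maintenance,
    otherwise the number of steps before the next one starts."""
    n = len(maintenance)
    res = [-1] * n
    next_start = -1  # index of the closest maintenance step at or after t
    for t in range(n - 1, -1, -1):
        if maintenance[t]:
            next_start = t
        res[t] = next_start - t if next_start >= 0 else -1
    return res

# a maintenance of 3 steps starting at index 5
assert maintenance_time_1d([0, 0, 0, 0, 0, 1, 1, 1, 0, 0]) == [5, 4, 3, 2, 1, 0, 0, 0, -1, -1]
```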
abstractmethod initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend)[source]

This function is used to initialize the data generator. It can be used to load scenarios, or to initialize the noise if scenarios are generated on the fly. It must also initialize GridValue.maintenance_time, GridValue.maintenance_duration and GridValue.hazard_duration.

This function should also increment GridValue.curr_iter by 1 each time it is called.

The GridValue is what makes the connection between the data (generally files on the hard drive) and the power grid. One of the main advantages of the Grid2Op package is its ability to change the tool that computes the load flows. Generally, such a grid2op.Backend expects data in a specific format, given by the way its internal powergrid is represented; in particular, the “same” objects can have different names and different positions. To ensure that the same chronics produce the same results on every backend (i.e. regardless of the order in which the Backend expects the data, the outcome of the powerflow is the same), we encourage the user to provide a file that maps the name of each object in the chronics to the name of the same object in the backend.

This is done with the “names_chronics_to_backend” dictionary, which has the following keys:

  • “loads”

  • “prods”

  • “lines”

The value associated with each of these keys is in turn a mapping dictionary from the chronics to the backend. Each key of these sub-dictionaries is the name of one column in the files, and each value is the name of the corresponding object in the backend. An example is provided below.

Parameters
  • order_backend_loads (numpy.ndarray, dtype:str) – Ordered names, in the Backend, of the loads. It is required that a grid2op.Backend object always outputs the information in the same order. This array gives the names of the loads following this order. See the documentation of grid2op.Backend for more information about this.

  • order_backend_prods (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for generators.

  • order_backend_lines (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for powerlines.

  • order_backend_subs (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for substations.

  • names_chronics_to_backend (dict) – See in the description of the method for more information about its format.

Examples

For example, suppose we have a grid2op.Backend with:

  • substation ids start at 0 and go to N-1 (N being the number of substations in the powergrid)

  • loads are named “load_i”, with “i” the id of the substation to which they are connected

  • generator units are named “gen_i” (i still being the id of the substation to which they are connected)

  • powerlines are named “i_j” if they connect substation i to substation j

And on the other side, we have some files with the following conventions:

  • substations are numbered from 1 to N

  • loads are named “i_C” with i being the substation to which it is connected

  • generators are named “i_G”, with i being the id of the substation to which they are connected

  • powerlines are named “i_j_k”, where i is the origin substation, j the extremity substation, and “k” a unique identifier of this powerline in the powergrid.

In this case, instead of renaming the powergrid (in the backend) or the data files, it is advised to build the following elements and initialize the object gridval of type GridValue with:

gridval = GridValue()  # Note: this code won't execute because "GridValue" is an abstract class
order_backend_loads = ['load_1', 'load_2', 'load_13', 'load_3', 'load_4', 'load_5', 'load_8', 'load_9',
                         'load_10', 'load_11', 'load_12']
order_backend_prods = ['gen_1', 'gen_2', 'gen_5', 'gen_7', 'gen_0']
order_backend_lines = ['0_1', '0_4', '8_9', '8_13', '9_10', '11_12', '12_13', '1_2', '1_3', '1_4', '2_3',
                           '3_4', '5_10', '5_11', '5_12', '3_6', '3_8', '4_5', '6_7', '6_8']
order_backend_subs = ['sub_0', 'sub_1', 'sub_10', 'sub_11', 'sub_12', 'sub_13', 'sub_2', 'sub_3', 'sub_4',
                          'sub_5', 'sub_6', 'sub_7', 'sub_8', 'sub_9']
names_chronics_to_backend = {"loads": {"2_C": 'load_1', "3_C": 'load_2',
                                           "14": 'load_13', "4_C": 'load_3', "5_C": 'load_4',
                                           "6_C": 'load_5', "9_C": 'load_8', "10_C": 'load_9',
                                           "11_C": 'load_10', "12_C": 'load_11',
                                           "13_C": 'load_12'},
                                 "lines": {'1_2_1': '0_1', '1_5_2': '0_4', '9_10_16': '8_9', '9_14_17': '8_13',
                                          '10_11_18': '9_10', '12_13_19': '11_12', '13_14_20': '12_13',
                                           '2_3_3': '1_2', '2_4_4': '1_3', '2_5_5': '1_4', '3_4_6': '2_3',
                                           '4_5_7': '3_4', '6_11_11': '5_10', '6_12_12': '5_11',
                                           '6_13_13': '5_12', '4_7_8': '3_6', '4_9_9': '3_8', '5_6_10': '4_5',
                                          '7_8_14': '6_7', '7_9_15': '6_8'},
                                 "prods": {"1_G": 'gen_0', "3_G": "gen_2", "6_G": "gen_5",
                                           "2_G": "gen_1", "8_G": "gen_7"},
                                }
gridval.initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs,
                   names_chronics_to_backend)
abstractmethod load_next()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

This is automatically called by the “env.step” function. It loads the next information about the grid state (load p and load q, prod p and prod v as well as some maintenance or hazards information)

Generate the next values, either by reading from a file or by generating them on the fly, and return a dictionary compatible with the grid2op.BaseAction class allowed for the Environment.

More information about this dictionary can be found at grid2op.BaseAction.update().

As a (quick) reminder: this dictionary has for keys:

  • “injection” (optional): a dictionary with keys (optional) “load_p”, “load_q”, “prod_p”, “prod_v”

  • “hazards” (optional) : the outage suffered from the _grid

  • “maintenance” (optional) : the maintenance operations planned on the grid for the current time step.

Returns

  • timestamp (datetime.datetime) – The current timestamp for which the modifications have been generated.

  • dict_ (dict) – Always empty, indicating that no modification is performed.

  • maintenance_time (numpy.ndarray, dtype:int) – Information about the next planned maintenance. See GridValue.maintenance_time for more information.

  • maintenance_duration (numpy.ndarray, dtype:int) – Information about the duration of next planned maintenance. See GridValue.maintenance_duration for more information.

  • hazard_duration (numpy.ndarray, dtype:int) – Information about the current hazard. See GridValue.hazard_duration for more information.

  • prod_v (numpy.ndarray, dtype:float) – the (stored) value of the generator voltage setpoint

Raises

StopIteration – if the chronics is over
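The StopIteration contract can be illustrated with a minimal stand-in (a hypothetical class, not part of grid2op, that mimics the load_next()/done() interplay):

```python
class DummyGridValue:
    """Hypothetical minimal data source following the GridValue iteration contract."""

    def __init__(self, n_steps):
        self._n = n_steps
        self.curr_iter = 0

    def load_next(self):
        if self.curr_iter >= self._n:
            raise StopIteration  # no more data: the episode is over
        self.curr_iter += 1
        return self.curr_iter  # real implementations return the tuple described above

    def done(self):
        return self.curr_iter >= self._n

chron = DummyGridValue(3)
steps = []
while True:
    try:
        steps.append(chron.load_next())
    except StopIteration:
        break  # this is how the environment detects the end of the episode
assert steps == [1, 2, 3] and chron.done()
```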

max_timestep()[source]

This method returns the maximum number of time steps that the current episode can last. Note that if the grid2op.BaseAgent performs a bad action that leads to a game over, then the episode can last fewer steps.

Returns

res – -1 if possibly infinite length or a positive integer representing the maximum duration of this episode

Return type

int

abstractmethod next_chronics()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Move to the next “chronics”, representing the next “level” if we make the parallel with video games.

A call to this function should at least restart:

sample_next_chronics(probabilities=None)[source]

This is used to sample the next chronics with the given probabilities.

Parameters

probabilities (np.ndarray) – Array of weights (one per chronics in the cache). If it does not sum to one, it is rescaled such that it sums to one.

Returns

selected – The integer that was selected.

Return type

int

Examples

Let’s assume in your chronics, the folder names are “Scenario_august_dummy”, and “Scenario_february_dummy”. For the sake of the example, we want the environment to loop 75% of the time to the month of february and 25% of the time to the month of august.

import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1", test=True)  # don't add "test=True" if
# you don't want to perform a test.

# check at which month will belong each observation
for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it alternates between printing "8" (chronics from august) and
    # "2" (chronics from february)

env.seed(0)  # for reproducible experiment
for i in range(10):
    _ = env.chronics_handler.sample_next_chronics([0.25, 0.75])
    obs = env.reset()
    print(obs.month)
    # it prints "2" with probability 0.75 and "8" with probability 0.25
set_chunk_size(new_chunk_size)[source]

This parameter allows to set, if the data generation process supports it, the amount of data that is read at once. It can help speed up the computation by giving more control over the IO operations.

Parameters

new_chunk_size (int) – The chunk size (i.e. the number of rows that will be read from each data set at a time)

set_filter(filter_fun)[source]

Assign a filtering function to remove some chronics from the cache the next time “reset_cache” is called.

NB filter_fun is applied to each element of Multifolder.subpaths. If it returns True the element will be put in the cache; if it returns False this data will NOT be put in the cache.

NB this has no effect until Multifolder.reset is called.

Notes

As of now, this has no effect unless the chronics are generated using Multifolder or MultifolderWithCache

Examples

Let’s assume in your chronics, the folder names are “Scenario_august_dummy”, and “Scenario_february_dummy”. For the sake of the example, we want the environment to loop only through the month of february, because why not. Then we can do the following:

import re
import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1", test=True)  # don't add "test=True" if
# you don't want to perform a test.

# check at which month will belong each observation
for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it alternates between printing "8" (chronics from august) and
    # "2" (chronics from february)

# to see where the chronics are located
print(env.chronics_handler.subpaths)

# keep only the month of february
env.chronics_handler.set_filter(lambda path: re.match(".*february.*", path) is not None)
env.chronics_handler.reset()  # if you don't do that it will not have any effect

for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it always prints "2" (representing february)
shuffle(shuffler=None)[source]

This method can be overridden if the data represented by this object need to be shuffled.

By default it does nothing.

Parameters

shuffler (object) – Any function that can be used to shuffle the data.
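A valid shuffler is any callable that receives the collection of episodes and returns a permutation of it, for example (a sketch; the function name is hypothetical, and it is seeded here only for reproducibility):

```python
import random

def my_shuffler(episodes):
    # Return a (seeded) random permutation of the episodes, leaving the input untouched
    episodes = list(episodes)
    random.Random(0).shuffle(episodes)
    return episodes

paths = ["Scenario_january", "Scenario_february", "Scenario_march"]
shuffled = my_shuffler(paths)
assert sorted(shuffled) == sorted(paths)  # same episodes, possibly different order
```

It could then be passed as the shuffler argument, e.g. env.chronics_handler.shuffle(my_shuffler).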

tell_id(id_num, previous=False)[source]

Tell the backend to use one particular folder for the chronics. This method is mainly used when the GridValue object can deal with many folders. In this case, it is used by the grid2op.Runner to indicate which chronics to load for the current simulated episode.

This is important to ensure reproducibility, especially in parallel computation settings.

This should also be used when chronics are generated “on the fly”, to ensure the same property.

By default it does nothing.

Note

As of grid2op 1.6.4, this function now accepts the return value of self.get_id().

class grid2op.Chronics.Multifolder(path, time_interval=datetime.timedelta(0, 300), start_datetime=datetime.datetime(2019, 1, 1, 0, 0), gridvalueClass=<class 'grid2op.Chronics.GridStateFromFile.GridStateFromFile'>, sep=';', max_iter=-1, chunk_size=None)[source]

The classes GridStateFromFile and GridStateFromFileWithForecasts implement the reading of a single folder representing a single episode.

This class is here to “loop” between different episodes, each one being stored in a folder readable by GridStateFromFile or one of its derivatives (e.g. GridStateFromFileWithForecasts).

Chronics are always read in alphanumeric order by this class. This means that if the folder is not modified, the data are always loaded in the same order, regardless of the grid2op.Backend, grid2op.BaseAgent or grid2op.Environment.

gridvalueClass

Type of class used to read the data from the disk. It defaults to GridStateFromFile.

Type

type, optional

data

Data that will be loaded and used to produce the grid states and forecasted values.

Type

GridStateFromFile

path: str

Path where the folders of the episodes are stored.

sep: str

Column separator, forwarded to Multifolder.data when it is built at the beginning of each episode.

subpaths: list

List of all the episodes that can be “played”. It is the sorted list of all the directories in Multifolder.path. Each one should contain data in a format readable by MultiFolder.gridvalueClass.

Methods:

check_validity(backend)

This method checks that the data loaded can be properly read and understood by the grid2op.Backend.

done()

Tells the grid2op.Environment if the episode is over.

forecasts()

The representation of the forecasted grid state(s), if any.

get_id()

Full absolute path of the current folder used for the current episode.

initialize(order_backend_loads, ...[, ...])

This function is used to initialize the data generator.

load_next()

Load the next data from the current episode.

max_timestep()

This method returns the maximum number of time steps that the current episode can last.

next_chronics()

INTERNAL

reset()

Rebuild the Multifolder._order.

sample_next_chronics([probabilities])

This function should be called before "next_chronics".

set_chunk_size(new_chunk_size)

This parameters allows to set, if the data generation process support it, the amount of data that is read at the same time.

set_filter(filter_fun)

Assign a filtering function to remove some chronics from the cache the next time "reset_cache" is called.

shuffle([shuffler])

This method is used to have a better control on the order in which the subfolder containing the episode are processed.

split_and_save(datetime_beg, datetime_end, ...)

This function allows you to split the data (keeping only the data between datetime_beg and datetime_end) and to save it on your local machine.

tell_id(id_num[, previous])

This tells this chronics to load for the next episode.

Attributes:

chronics_used

return the full path of the chronics currently in use.

check_validity(backend)[source]

This method checks that the data loaded can be properly read and understood by the grid2op.Backend.

Parameters

backend (grid2op.Backend) – The backend used for the experiment.

Returns

property chronics_used

return the full path of the chronics currently in use.

done()[source]

Tells the grid2op.Environment if the episode is over.

Returns

res – Whether or not the episode, represented by MultiFolder.data, is over.

Return type

bool

forecasts()[source]

The representation of the forecasted grid state(s), if any.

Returns

See the return type of GridStateFromFile.forecasts (or of MultiFolder.gridvalueClass if it has been changed) for more information.

get_id() → str[source]

Full absolute path of the current folder used for the current episode.

Returns

res – Path from which the data are generated for the current episode.

Return type

str

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

This function is used to initialize the data generator. It can be used to load scenarios, or to initialize the noise if scenarios are generated on the fly. It must also initialize GridValue.maintenance_time, GridValue.maintenance_duration and GridValue.hazard_duration.

This function should also increment GridValue.curr_iter by 1 each time it is called.

The GridValue is what makes the connection between the data (generally files on the hard drive) and the power grid. One of the main advantages of the Grid2Op package is its ability to change the tool that computes the load flows. Generally, such a grid2op.Backend expects data in a specific format, given by the way its internal powergrid is represented; in particular, the “same” objects can have different names and different positions. To ensure that the same chronics produce the same results on every backend (i.e. regardless of the order in which the Backend expects the data, the outcome of the powerflow is the same), we encourage the user to provide a file that maps the name of each object in the chronics to the name of the same object in the backend.

This is done with the “names_chronics_to_backend” dictionary, which has the following keys:

  • “loads”

  • “prods”

  • “lines”

The value associated with each of these keys is in turn a mapping dictionary from the chronics to the backend. Each key of these sub-dictionaries is the name of one column in the files, and each value is the name of the corresponding object in the backend. An example is provided below.

Parameters
  • order_backend_loads (numpy.ndarray, dtype:str) – Ordered names, in the Backend, of the loads. It is required that a grid2op.Backend object always outputs the information in the same order. This array gives the names of the loads following this order. See the documentation of grid2op.Backend for more information about this.

  • order_backend_prods (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for generators.

  • order_backend_lines (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for powerlines.

  • order_backend_subs (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for substations.

  • names_chronics_to_backend (dict) – See in the description of the method for more information about its format.

Examples

For example, suppose we have a grid2op.Backend with:

  • substation ids start from 0 to N-1 (N being the number of substations in the powergrid)

  • loads are named “load_i” with “i” the id of the substation to which it is connected

  • generator units are named “gen_i” (i still being the id of the substation to which it is connected)

  • powerlines are named “i_j” if they connect substation i to substation j

And on the other side, we have some files with the following conventions:

  • substations are numbered from 1 to N

  • loads are named “i_C” with i being the substation to which it is connected

  • generators are named “i_G” with i being the id of the substation to which it is connected

  • powerlines are named “i_j_k” where i is the origin substation, j the extremity substation and “k” is a unique identifier of this powerline in the powergrid.

In this case, instead of renaming the powergrid (in the backend) or the data files, it is advised to build the following elements and initialize the object gridval of type GridValue with:

gridval = GridValue()  # Note: this code won't execute because "GridValue" is an abstract class
order_backend_loads = ['load_1', 'load_2', 'load_13', 'load_3', 'load_4', 'load_5', 'load_8', 'load_9',
                         'load_10', 'load_11', 'load_12']
order_backend_prods = ['gen_1', 'gen_2', 'gen_5', 'gen_7', 'gen_0']
order_backend_lines = ['0_1', '0_4', '8_9', '8_13', '9_10', '11_12', '12_13', '1_2', '1_3', '1_4', '2_3',
                           '3_4', '5_10', '5_11', '5_12', '3_6', '3_8', '4_5', '6_7', '6_8']
order_backend_subs = ['sub_0', 'sub_1', 'sub_10', 'sub_11', 'sub_12', 'sub_13', 'sub_2', 'sub_3', 'sub_4',
                          'sub_5', 'sub_6', 'sub_7', 'sub_8', 'sub_9']
names_chronics_to_backend = {"loads": {"2_C": 'load_1', "3_C": 'load_2',
                                           "14": 'load_13', "4_C": 'load_3', "5_C": 'load_4',
                                           "6_C": 'load_5', "9_C": 'load_8', "10_C": 'load_9',
                                           "11_C": 'load_10', "12_C": 'load_11',
                                           "13_C": 'load_12'},
                                 "lines": {'1_2_1': '0_1', '1_5_2': '0_4', '9_10_16': '8_9', '9_14_17': '8_13',
                                          '10_11_18': '9_10', '12_13_19': '11_12', '13_14_20': '12_13',
                                           '2_3_3': '1_2', '2_4_4': '1_3', '2_5_5': '1_4', '3_4_6': '2_3',
                                           '4_5_7': '3_4', '6_11_11': '5_10', '6_12_12': '5_11',
                                           '6_13_13': '5_12', '4_7_8': '3_6', '4_9_9': '3_8', '5_6_10': '4_5',
                                          '7_8_14': '6_7', '7_9_15': '6_8'},
                                 "prods": {"1_G": 'gen_0', "3_G": "gen_2", "6_G": "gen_5",
                                           "2_G": "gen_1", "8_G": "gen_7"},
                                }
gridval.initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend)
load_next()[source]

Load the data for the next time step of the current episode.

max_timestep()[source]

This method returns the maximum number of time steps that the current episode can last. Note that if the grid2op.BaseAgent performs a bad action that leads to a game over, then the episode can last less.

Returns

res – -1 if possibly infinite length or a positive integer representing the maximum duration of this episode

Return type

int
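The -1 convention for possibly infinite episodes is easy to mishandle in a training loop; a minimal, hypothetical helper (episode_horizon is not part of grid2op) illustrates one way to translate it into an optional horizon:

```python
# Hypothetical helper (not part of grid2op): convert the max_timestep()
# convention, where -1 means "possibly infinite", into an Optional horizon.
from typing import Optional

def episode_horizon(max_ts: int) -> Optional[int]:
    """Return the episode horizon, or None when the length is unbounded."""
    return None if max_ts == -1 else max_ts

# -1 maps to "no known bound", any positive value passes through unchanged
print(episode_horizon(-1), episode_horizon(8064))
```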

next_chronics()[source]

INTERNAL

Warning

/!\ Internal, do not use unless you know what you are doing /!\

Move to the next “chronics”, representing the next “level” if we make the parallel with video games.

A call to this function should at least restart:

reset()[source]

Rebuild the Multifolder._order. This should be called after a call to Multifolder.set_filter() is performed.

Warning

This “reset” is different from env.reset. It should only be called after the filtering function has been set.

This “reset” only resets which chronics are used for the environment.

Returns

new_order – The selected chronics paths after a call to this method.

Return type

numpy.ndarray, dtype: str

Notes

Unless explicitly advised to, for example by Multifolder.set_filter(), you should not use this function. It will erase any selection of chronics, any shuffling, etc.

sample_next_chronics(probabilities=None)[source]

This function should be called before “next_chronics”. It can be used to sample the next chronics non-uniformly.

Parameters

probabilities (np.ndarray) – Array of numbers with the same size as the number of chronics in the cache. If it does not sum to one, it is rescaled so that it sums to one.

Returns

selected – The integer that was selected.

Return type

int
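The rescaling described above, applied when probabilities does not sum to one, amounts to dividing each weight by the total. A short numpy sketch of that normalization (not the actual grid2op implementation):

```python
import numpy as np

def normalize_probabilities(probabilities):
    """Rescale an array of non-negative weights so that it sums to one,
    mirroring the rescaling described for sample_next_chronics."""
    probabilities = np.asarray(probabilities, dtype=float)
    return probabilities / probabilities.sum()

# weights [1, 3] become probabilities [0.25, 0.75]
print(normalize_probabilities([1, 3]))
```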

Examples

Let’s assume the folders of your chronics are named “Scenario_august_dummy” and “Scenario_february_dummy”. For the sake of the example, we want the environment to pick the month of February 75% of the time and the month of August 25% of the time.

import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1", test=True)  # don't add "test=True" if
# you don't want to perform a test.

# check at which month will belong each observation
for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it alternatively prints "8" (if the chronics is from August) or
    # "2" (if the chronics is from February) with a probability of 50% / 50%

env.seed(0)  # for reproducible experiment
for i in range(10):
    _ = env.chronics_handler.sample_next_chronics([0.25, 0.75])
    obs = env.reset()
    print(obs.month)
    # it prints "2" with probability 0.75 and "8" with probability 0.25
set_chunk_size(new_chunk_size)[source]

This method allows you to set, if the data generation process supports it, the amount of data that is read at the same time. It can help speed up the computation process by giving more control over the I/O operations.

Parameters

new_chunk_size (int) – The chunk size (ie the number of rows that will be read from each data set at the same time)
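To see why reading in chunks matters, here is a small, self-contained sketch (plain Python with the standard library, not grid2op internals) of reading a semicolon-separated file a fixed number of rows at a time, which is the behaviour new_chunk_size controls:

```python
import csv
import io
from itertools import islice

def read_in_chunks(text, chunk_size):
    """Yield lists of at most chunk_size rows from CSV text,
    mimicking the chunked reads controlled by set_chunk_size."""
    reader = csv.reader(io.StringIO(text), delimiter=';')
    next(reader)  # skip the header row
    while True:
        chunk = list(islice(reader, chunk_size))
        if not chunk:
            break
        yield chunk

# 7 data rows read 3 at a time: chunks of sizes 3, 3 and 1
data = "load_1;load_2\n" + "\n".join(f"{i};{i + 1}" for i in range(7))
chunks = list(read_in_chunks(data, 3))
print([len(c) for c in chunks])
```

Only one chunk is held in memory at a time, which is why a well-chosen chunk size reduces both memory footprint and the time spent on any single read.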

set_filter(filter_fun)[source]

Assign a filtering function to remove some chronics the next time “reset_cache” is called.

NB filter_fun is applied to every element of Multifolder.subpaths. If it returns True the data will be put in the cache, if it returns False the data will NOT be put in the cache.

NB this has no effect until Multifolder.reset is called.

Examples

Let’s assume the folders of your chronics are named “Scenario_august_dummy” and “Scenario_february_dummy”. For the sake of the example, we want the environment to loop only through the month of February, because why not. Then we can do the following:

import re
import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1", test=True)  # don't add "test=True" if
# you don't want to perform a test.

# check at which month will belong each observation
for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it alternatively prints "8" (if the chronics is from August) or
    # "2" (if the chronics is from February)

# to see where the chronics are located
print(env.chronics_handler.subpaths)

# keep only the month of february
env.chronics_handler.set_filter(lambda path: re.match(".*february.*", path) is not None)
env.chronics_handler.reset()  # if you don't do that it will not have any effect

for i in range(10):
    obs = env.reset()
    print(obs.month)
    # it always prints "2" (representing february)
shuffle(shuffler=None)[source]

This method is used to have better control over the order in which the subfolders containing the episodes are processed.

It can focus the evaluation on one specific folder, shuffle the folders, use only a subset of them etc. See the examples for more information.

Parameters

shuffler (object) – shuffler should be a function that is called on MultiFolder.subpaths and shuffles them. It can also be used to remove some paths if needed (see example).

Returns

new_order – The order in which the chronics will be looped through

Return type

numpy.ndarray, dtype: str

Examples

If you want to simply shuffle the data you can do:

# create an environment
import numpy as np
import grid2op
env_name = "l2rpn_case14_sandbox"
env = grid2op.make(env_name)

# shuffle the chronics (uniformly at random, without duplication)
env.chronics_handler.shuffle()
# use the environment as you want, here do 10 episode with the selected data
for i in range(10):
    obs = env.reset()
    print(f"Path of the chronics used: {env.chronics_handler.data.path}")
    done = False
    while not done:
        act = ...
        obs, reward, done, info = env.step(act)

# re shuffle them (still uniformly at random, without duplication)
env.chronics_handler.shuffle()

# use the environment as you want, here do 10 episode with the selected data
for i in range(10):
    obs = env.reset()
    print(f"Path of the chronics used: {env.chronics_handler.data.path}")
    done = False
    while not done:
        act = ...
        obs, reward, done, info = env.step(act)

If you want to use only a subset of the path, say for example the path with index 1, 5, and 6

# create an environment
import numpy as np
import grid2op
env_name = "l2rpn_case14_sandbox"
env = grid2op.make(env_name)

# select the chronics (here 5 at random amongst the 10 "last" chronics of the environment)
nb_chron = len(env.chronics_handler.chronics_used)
chron_id_to_keep = np.random.choice(np.arange(nb_chron - 10, nb_chron), size=5, replace=False)
env.chronics_handler.shuffle(lambda x: chron_id_to_keep)

# use the environment as you want, here do 10 episode with the selected data
for i in range(10):
    obs = env.reset()
    print(f"Path of the chronics used: {env.chronics_handler.data.path}")
    done = False
    while not done:
        act = ...
        obs, reward, done, info = env.step(act)

# re shuffle them (uniformly at random, without duplication, among the chronics "selected" above.)
env.chronics_handler.shuffle()

# use the environment as you want, here do 10 episode with the selected data
for i in range(10):
    obs = env.reset()
    print(f"Path of the chronics used: {env.chronics_handler.data.path}")
    done = False
    while not done:
        act = ...
        obs, reward, done, info = env.step(act)

Warning

Though it is possible to use this “shuffle” function to only use some chronics, we highly recommend that you have a look at the sections Chronics Customization or Splitting into training, validation, test scenarios. It is likely that you will find a better way there to do what you want to do. Use this last example with care, then.

Warning

As stated in MultiFolder.reset(), any call to env.chronics_handler.reset will remove anything related to shuffling, including the selection of chronics!

split_and_save(datetime_beg, datetime_end, path_out)[source]

This function allows you to split the data (keeping only the data between datetime_beg and datetime_end) and to save it on your local machine. This is especially handy if you want to extract only a piece of the dataset we provide, for example.

Parameters
  • datetime_beg (dict) – Keys are the name id of the scenarios you want to save. Values are the corresponding starting date and time (in “%Y-%m-%d %H:%M” format). See example for more information.

  • datetime_end (dict) –

    keys must be the same as in the “datetime_beg” argument.

    See example for more information

  • path_out (str) – The path where the data will be stored.

Examples

Here is a short example of how to use it:

import grid2op
import os
env = grid2op.make()

env.chronics_handler.real_data.split_and_save({"004": "2019-01-08 02:00",
                                     "005": "2019-01-30 08:00",
                                     "006": "2019-01-17 00:00",
                                     "007": "2019-01-17 01:00",
                                     "008": "2019-01-21 09:00",
                                     "009": "2019-01-22 12:00",
                                     "010": "2019-01-27 19:00",
                                     "011": "2019-01-15 12:00",
                                     "012": "2019-01-08 13:00",
                                     "013": "2019-01-22 00:00"},
                                    {"004": "2019-01-11 02:00",
                                     "005": "2019-02-01 08:00",
                                     "006": "2019-01-18 00:00",
                                     "007": "2019-01-18 01:00",
                                     "008": "2019-01-22 09:00",
                                     "009": "2019-01-24 12:00",
                                     "010": "2019-01-29 19:00",
                                     "011": "2019-01-17 12:00",
                                     "012": "2019-01-10 13:00",
                                     "013": "2019-01-24 00:00"},
                          path_out=os.path.join("/tmp"))
tell_id(id_num, previous=False)[source]

This tells the chronics handler which chronics to load for the next episode. By default, if id_num is greater than the number of episodes, it is equivalent to restarting from the first one: episodes are played indefinitely in the same order.

Parameters
  • id_num (int | str) – Id of the chronics to load.

  • previous – Do you want to set to the previous value of this one or not? (note that in general you want to set to the previous value, as calling this function has an impact only after env.reset() is called)
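The wrap-around behaviour described above, where an id_num larger than the number of episodes restarts from the first one, boils down to a modulo. A hypothetical sketch (not the actual grid2op implementation, which also accepts a str id):

```python
def wrap_chronics_id(id_num: int, nb_chronics: int) -> int:
    """Map any requested id onto a valid chronics index, mirroring
    tell_id's 'restart from the first one' behaviour."""
    return id_num % nb_chronics

# with 10 chronics, asking for id 12 plays chronics 2
print(wrap_chronics_id(12, 10))
```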

class grid2op.Chronics.MultifolderWithCache(path, time_interval=datetime.timedelta(0, 300), start_datetime=datetime.datetime(2019, 1, 1, 0, 0), gridvalueClass=<class 'grid2op.Chronics.GridStateFromFile.GridStateFromFile'>, sep=';', max_iter=-1, chunk_size=None)[source]

This class is a particular type of Multifolder that, instead of reading everything from disk each time, stores it in memory.

For now it is only compatible with GridValue classes inheriting from GridStateFromFile (the only case where it presents some interest).

The function MultifolderWithCache.reset_cache() will redo the cache from scratch. You can filter which type of data will be cached or not with the MultifolderWithCache.set_filter() function.

NB Efficient use of this class can dramatically increase the speed of the learning algorithm, especially at the beginning, where lots of data are read from the hard drive while the agent “games over” after only a few time steps (typically, data are given by months, so 30*288 = 8640 time steps, while during exploration an agent usually performs less than a few dozen steps, leading to more time spent reading the 8640 rows than computing those few dozen steps).

Examples

This is how this class can be used:

import re
from grid2op import make
from grid2op.Chronics import MultifolderWithCache
env = make(..., chronics_class=MultifolderWithCache)

# set the chronics to limit to one week of data (lower memory footprint)
env.chronics_handler.set_max_iter(7*288)
# assign a filter, use only chronics that have "december" in their name
env.chronics_handler.real_data.set_filter(lambda x: re.match(".*december.*", x) is not None)
# create the cache
env.chronics_handler.real_data.reset_cache()

# and now you can use it as you would do any gym environment:
my_agent = ...
obs = env.reset()
done = False
reward = env.reward_range[0]
while not done:
    act = my_agent.act(obs, reward, done)
    obs, reward, done, info = env.step(act)  # and step will NOT load any data from disk.

Methods:

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

This function is used to initialize the data generator. It can be used to load scenarios, or to initialize the noise if scenarios are generated on the fly. It must also initialize GridValue.maintenance_time, GridValue.maintenance_duration and GridValue.hazard_duration.

This function should also increment GridValue.curr_iter by 1 each time it is called.

The GridValue is what makes the connection between the data (generally files on the hard drive) and the power grid. One of the main advantages of the Grid2Op package is its ability to change the tool that computes the load flows. Generally, such a grid2op.Backend expects data in a specific format dictated by the way its internal powergrid is represented; in particular, the “same” objects can have different names and different positions. To ensure that the same chronics produce the same results on every backend (ie that the outcome of the powerflow is the same regardless of the order in which the Backend expects the data), we encourage the user to provide a file that maps the name of each object in the chronics to the name of the same object in the backend.

This is done with the “names_chronics_to_backend” dictionary that has the following keys:

  • “loads”

  • “prods”

  • “lines”

The value associated with each of these keys is in turn a mapping dictionary from the chronics to the backend. This means that each key of these sub-dictionaries is the name of one column in the files, and each value is the name of the same object in the backend. An example is provided below.

Parameters
  • order_backend_loads (numpy.ndarray, dtype:str) – Ordered names, in the Backend, of the loads. It is required that a grid2op.Backend object always outputs the information in the same order. This array gives the names of the loads following this order. See the documentation of grid2op.Backend for more information about this.

  • order_backend_prods (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for generators.

  • order_backend_lines (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for powerlines.

  • order_backend_subs (numpy.ndarray, dtype:str) – Same as order_backend_loads, but for substations.

  • names_chronics_to_backend (dict) – See in the description of the method for more information about its format.

Examples

For example, suppose we have a grid2op.Backend with:

  • substation ids start from 0 to N-1 (N being the number of substations in the powergrid)

  • loads are named “load_i” with “i” the id of the substation to which it is connected

  • generator units are named “gen_i” (i still being the id of the substation to which it is connected)

  • powerlines are named “i_j” if they connect substation i to substation j

And on the other side, we have some files with the following conventions:

  • substations are numbered from 1 to N

  • loads are named “i_C” with i being the substation to which it is connected

  • generators are named “i_G” with i being the id of the substation to which it is connected

  • powerlines are named “i_j_k” where i is the origin substation, j the extremity substation and “k” is a unique identifier of this powerline in the powergrid.

In this case, instead of renaming the powergrid (in the backend) or the data files, it is advised to build the following elements and initialize the object gridval of type GridValue with:

gridval = GridValue()  # Note: this code won't execute because "GridValue" is an abstract class
order_backend_loads = ['load_1', 'load_2', 'load_13', 'load_3', 'load_4', 'load_5', 'load_8', 'load_9',
                         'load_10', 'load_11', 'load_12']
order_backend_prods = ['gen_1', 'gen_2', 'gen_5', 'gen_7', 'gen_0']
order_backend_lines = ['0_1', '0_4', '8_9', '8_13', '9_10', '11_12', '12_13', '1_2', '1_3', '1_4', '2_3',
                           '3_4', '5_10', '5_11', '5_12', '3_6', '3_8', '4_5', '6_7', '6_8']
order_backend_subs = ['sub_0', 'sub_1', 'sub_10', 'sub_11', 'sub_12', 'sub_13', 'sub_2', 'sub_3', 'sub_4',
                          'sub_5', 'sub_6', 'sub_7', 'sub_8', 'sub_9']
names_chronics_to_backend = {"loads": {"2_C": 'load_1', "3_C": 'load_2',
                                           "14": 'load_13', "4_C": 'load_3', "5_C": 'load_4',
                                           "6_C": 'load_5', "9_C": 'load_8', "10_C": 'load_9',
                                           "11_C": 'load_10', "12_C": 'load_11',
                                           "13_C": 'load_12'},
                                 "lines": {'1_2_1': '0_1', '1_5_2': '0_4', '9_10_16': '8_9', '9_14_17': '8_13',
                                          '10_11_18': '9_10', '12_13_19': '11_12', '13_14_20': '12_13',
                                           '2_3_3': '1_2', '2_4_4': '1_3', '2_5_5': '1_4', '3_4_6': '2_3',
                                           '4_5_7': '3_4', '6_11_11': '5_10', '6_12_12': '5_11',
                                           '6_13_13': '5_12', '4_7_8': '3_6', '4_9_9': '3_8', '5_6_10': '4_5',
                                          '7_8_14': '6_7', '7_9_15': '6_8'},
                                 "prods": {"1_G": 'gen_0', "3_G": "gen_2", "6_G": "gen_5",
                                           "2_G": "gen_1", "8_G": "gen_7"},
                                }
gridval.initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend)
reset()[source]

Rebuild the cache as if it were built from scratch. This call might take a while to process.

class grid2op.Chronics.ReadPypowNetData(path, sep=';', time_interval=datetime.timedelta(0, 300), max_iter=-1, chunk_size=None)[source]

DEPRECATED, this class is no longer used nor tested.

Methods:

initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs, names_chronics_to_backend=None)[source]

The same conditions as for GridStateFromFile.initialize also apply to GridStateFromFileWithForecasts.load_p_forecast, GridStateFromFileWithForecasts.load_q_forecast, GridStateFromFileWithForecasts.prod_p_forecast, GridStateFromFileWithForecasts.prod_v_forecast and GridStateFromFileWithForecasts.maintenance_forecast.

See the help of GridValue.initialize() for a detailed description of the parameters.

Still having trouble finding the information? Do not hesitate to open a GitHub issue about the documentation at this link: Documentation issue template