utils package

Submodules

utils.base_agents module

class utils.base_agents.BaseBatteryAgent(parameters=None)[source]

Bases: object

Base class for battery agents.

Parameters:

parameters (dict) – Dictionary containing the agent parameters.

act(*args, **kwargs)[source]

Return the do nothing action regardless of the input parameters.

Parameters:
  • *args – Arbitrary positional arguments.

  • **kwargs – Arbitrary keyword arguments.

Returns:

The action (do nothing) to be taken.

Return type:

action (int)

do_nothing_action()[source]

Return the do nothing action.

Returns:

The action (do nothing) to be taken.

Return type:

action (int)
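The documented behavior (a fixed do-nothing action regardless of the inputs) can be sketched as follows. The concrete action value (2, i.e. ‘idle’ in the battery action space used by RBCBatteryAgent below) is an assumption, not confirmed by these docs.

```python
# Minimal sketch of the documented base-agent behavior. The "do nothing"
# action value (2, assumed to mean 'idle') is a placeholder assumption.

class BaseBatteryAgent:
    def __init__(self, parameters=None):
        # parameters (dict): dictionary containing the agent parameters
        self.parameters = parameters or {}
        self.do_nothing = 2  # assumed 'idle' action index

    def do_nothing_action(self):
        """Return the do nothing action."""
        return self.do_nothing

    def act(self, *args, **kwargs):
        """Return the do nothing action regardless of the input parameters."""
        return self.do_nothing_action()

agent = BaseBatteryAgent()
print(agent.act("any", obs=[1, 2, 3]))  # prints 2
```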

class utils.base_agents.BaseHVACAgent(parameters=None)[source]

Bases: object

Base class for HVAC agents.

Parameters:

parameters (dict) – Dictionary containing the agent parameters.

do_nothing_action()[source]

Return the do nothing action.

Returns:

The action (do nothing) to be taken.

Return type:

action (int)

class utils.base_agents.BaseLoadShiftingAgent(parameters=None)[source]

Bases: object

Base class for load shifting agents.

Parameters:

parameters (dict) – Dictionary containing the agent parameters.

do_nothing_action()[source]

Return the do nothing action.

Returns:

The action (do nothing) to be taken.

Return type:

action (int)

utils.checkpoint_finder module

utils.checkpoint_finder.get_best_checkpoint(trial_dir: str, metric: str | None = 'episode_reward_mean', mode: str | None = 'max') → str[source]

Gets the best persistent checkpoint path of the provided trial.

Parameters:
  • trial_dir – The log directory of a trial instance, which contains all the checkpoints.

  • metric – The metric by which the best checkpoint is chosen, e.g. ‘episode_reward_mean’.

  • mode – One of [min, max].

Returns:

str (path to the best checkpoint)
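The actual function walks a Ray Tune trial directory; the core min/max selection it performs can be illustrated independently. The helper below is a hypothetical sketch of that selection logic, not the real implementation.

```python
import math

def pick_best_checkpoint(checkpoints, metric="episode_reward_mean", mode="max"):
    """Hypothetical sketch: given (path, metrics-dict) pairs, return the
    path whose metric value is best according to mode ('max' or 'min')."""
    if mode not in ("min", "max"):
        raise ValueError("mode must be one of ['min', 'max']")
    best_path = None
    best_val = -math.inf if mode == "max" else math.inf
    for path, metrics in checkpoints:
        val = metrics.get(metric)
        if val is None:
            continue  # skip checkpoints that did not log the metric
        if (mode == "max" and val > best_val) or (mode == "min" and val < best_val):
            best_path, best_val = path, val
    return best_path

ckpts = [
    ("trial/ckpt_1", {"episode_reward_mean": -120.0}),
    ("trial/ckpt_2", {"episode_reward_mean": -80.0}),
    ("trial/ckpt_3", {"episode_reward_mean": -95.0}),
]
print(pick_best_checkpoint(ckpts))  # prints trial/ckpt_2
```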

utils.dc_config module

This file contains the data center configurations that may be customized by the user for designing the data center. The use of this file has been deprecated. Any changes to this file will not be reflected in the actual data center design. Instead, modify utils/dc_config.json to design the data center.

utils.dc_config_reader module

This file is used to read the data center configuration from user inputs provided inside dc_config.json. It also performs some auxiliary steps to calculate the server power specifications based on the given parameters.

class utils.dc_config_reader.DC_Config(dc_config_file='dc_config.json', datacenter_capacity_mw=1)[source]

Bases: object

utils.helper_methods module

utils.helper_methods.f2c(t: float) → float[source]

Converts temperature in Fahrenheit to Celsius using the formula (5/9)*(t-32).

Parameters:

t (float) – Temperature in Fahrenheit.

Returns:

Temperature in Celsius.

Return type:

float
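As a quick illustrative sketch (using the standard conversion constant of 32):

```python
def f2c(t: float) -> float:
    """Convert a temperature from Fahrenheit to Celsius."""
    return (5.0 / 9.0) * (t - 32.0)

print(f2c(32.0))   # prints 0.0   (freezing point of water)
print(f2c(212.0))  # prints 100.0 (boiling point of water)
```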

class utils.helper_methods.pyeplus_callback[source]

Bases: DefaultCallbacks

Custom callbacks class that extends the DefaultCallbacks class.

Defines callback methods that are triggered at various points of the training process.

on_episode_end(*, worker, base_env, policies, episode, env_index, **kwargs)[source]

Method that is called at the end of each episode in the training process.

Calculates some metrics based on the user_data variables updated during the episode.

Parameters:
  • worker (Worker) – The worker object that is being used in the training process.

  • base_env (BaseEnv) – The base environment that is being used in the training process.

  • policies (Dict[str, Policy]) – The policies that are being used in the training process.

  • episode (MultiAgentEpisode) – The episode object that is being processed.

  • env_index (int) – The index of the environment within the worker task.

  • **kwargs – additional arguments that can be passed.

on_episode_start(*, worker, base_env, policies, episode, env_index, **kwargs)[source]

Method that is called at the beginning of each episode in the training process.

Initializes some user_data variables to be used later on.

Parameters:
  • worker (Worker) – The worker object that is being used in the training process.

  • base_env (BaseEnv) – The base environment that is being used in the training process.

  • policies (Dict[str, Policy]) – The policies that are being used in the training process.

  • episode (MultiAgentEpisode) – The episode object that is being processed.

  • env_index (int) – The index of the environment within the worker task.

  • **kwargs – additional arguments that can be passed.

on_episode_step(*, worker, base_env, episode, env_index, **kwargs)[source]

Method that is called at each step of each episode in the training process.

Updates some user_data variables to be used later on.

Parameters:
  • worker (Worker) – The worker object that is being used in the training process.

  • base_env (BaseEnv) – The base environment that is being used in the training process.

  • episode (MultiAgentEpisode) – The episode object that is being processed.

  • env_index (int) – The index of the environment within the worker task.

  • **kwargs – additional arguments that can be passed.

utils.make_envs_pyenv module

utils.make_envs_pyenv.make_bat_fwd_env(month, max_bat_cap_Mwh: float = 2.0, charging_rate: float = 0.5, max_dc_pw_MW: float = 7.23, dcload_max: float = 2.5, dcload_min: float = 0.1, n_fwd_steps: int = 4)[source]

Method to build the Battery environment.

Parameters:
  • month (int) – Month of the year in which the agent is training.

  • max_bat_cap_Mwh (float, optional) – Max battery capacity. Defaults to 2.0.

  • charging_rate (float, optional) – Charging rate of the battery. Defaults to 0.5.

Returns:

Battery environment.

Return type:

battery_env_fwd

utils.make_envs_pyenv.make_dc_pyeplus_env(month: int = 1, location: str = 'NYIS', dc_config_file: str = 'dc_config_file.json', datacenter_capacity_mw: int = 1, max_bat_cap_Mw: float = 2.0, add_cpu_usage: bool = True, add_CI: bool = True, episode_length_in_time: Timedelta | None = None, use_ls_cpu_load: bool = False, num_sin_cos_vars: int = 4)[source]

Method that creates the data center environment with the timeline, location, proper data files, gym specifications, and auxiliary methods.

Parameters:
  • month (int, optional) – The month of the year for which the Environment uses the weather and Carbon Intensity data. Defaults to 1.

  • location (str, optional) – The geographical location in a standard format for which Carbon Intensity files are accessed. Supported options are ‘NYIS’, ‘AZPS’, ‘BPAT’. Defaults to ‘NYIS’.

  • datacenter_capacity_mw (int, optional) – Maximum capacity (MW) of the data center. This value scales the number of servers installed in the data center. Defaults to 1.

  • max_bat_cap_Mw (float, optional) – The battery capacity in Megawatts for the installed battery. Defaults to 2.0.

  • add_cpu_usage (bool, optional) – Boolean Flag to indicate whether cpu usage is part of the environment statespace. Defaults to True.

  • add_CI (bool, optional) – Boolean Flag to indicate whether Carbon Intensity is part of the environment statespace. Defaults to True.

  • episode_length_in_time (pd.Timedelta, optional) – Length of an episode in terms of pandas time-delta object. Defaults to None.

  • use_ls_cpu_load (bool, optional) – Use the CPU workload value from a separate Load Shifting agent. This turns off reading the default CPU data. Defaults to False.

  • num_sin_cos_vars (int, optional) – Number of sine and cosine variables that will be added externally from the centralized data source. Defaults to 4.

Returns:

The environment instantiated with the particular month.

Return type:

envs.dc_gym.dc_gymenv

utils.make_envs_pyenv.make_ls_env(month, n_vars_ci: int = 4, n_vars_energy: int = 4, n_vars_battery: int = 1, queue_max_len: int = 500, test_mode=False)[source]

Method to build the Load shifting environment

Parameters:
  • month (int) – Month of the year in which the agent is training.

  • n_vars_energy (int, optional) – Number of variables from the Energy environment. Defaults to 4.

  • n_vars_battery (int, optional) – Number of variables from the Battery environment. Defaults to 1.

  • queue_max_len (int, optional) – The size of the queue where tasks are stored to be processed later. Defaults to 500.

Returns:

Load Shifting environment

Return type:

CarbonLoadEnv

utils.managers module

class utils.managers.CI_Manager(filename='', location='NYIS', init_day=0, future_steps=4, weight=0.1, desired_std_dev=5, timezone_shift=0)[source]

Bases: object

Manager of the carbon intensity data.

Parameters:
  • filename (str, optional) – Filename of the carbon intensity data. Defaults to ‘’.

  • location (str, optional) – Location identifier. Defaults to ‘NYIS’.

  • init_day (int, optional) – Initial day of the episode. Defaults to 0.

  • future_steps (int, optional) – Number of steps of the CI forecast. Defaults to 4.

  • weight (float, optional) – Weight value for coherent noise. Defaults to 0.1.

  • desired_std_dev (float, optional) – Desired standard deviation for coherent noise. Defaults to 5.

  • timezone_shift (int, optional) – Shift for the timezone. Defaults to 0.

get_current_ci()[source]
get_forecast_ci(steps=4)[source]
get_total_ci()[source]

Function to obtain the total carbon intensity

Returns:

Total carbon intensity

Return type:

List[float]

reset(init_day=None, init_hour=None)[source]

Reset CI_Manager to a specific initial day and hour.

Parameters:
  • init_day (int, optional) – Day to start from. If None, defaults to the initial day set during initialization.

  • init_hour (int, optional) – Hour to start from. If None, defaults to 0.

Returns:

Carbon intensity at the current time step. float: Normalized carbon intensity at the current time step and its forecast.

Return type:

float

step()[source]

Step CI_Manager

Returns:

Carbon intensity at the current time step. float: Normalized carbon intensity at the current time step and its forecast.

Return type:

float

class utils.managers.CoherentNoise(base, weight, desired_std_dev=0.1, scale=1)[source]

Bases: object

Class to add coherent noise to the data.

Parameters:
  • base (List[float]) – Base data

  • weight (float) – Weight of the noise to be added

  • desired_std_dev (float, optional) – Desired standard deviation. Defaults to 0.1.

  • scale (int, optional) – Scale factor applied to the generated noise. Defaults to 1.

generate(n_steps)[source]

Generate coherent noise

Parameters:

n_steps (int) – Length of the data to generate.

Returns:

Array of generated coherent noise.

Return type:

numpy.ndarray
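One common way to produce coherent (temporally correlated) noise is a cumulative-sum random walk rescaled to a target standard deviation. The sketch below assumes that construction; the actual class may differ in detail.

```python
import numpy as np

class CoherentNoise:
    """Sketch of a coherent noise generator. The cumulative-sum
    construction and the rescaling step are assumptions about the
    implementation, not confirmed by the docs."""

    def __init__(self, base, weight, desired_std_dev=0.1, scale=1):
        self.base = np.asarray(base, dtype=float)
        self.weight = weight
        self.desired_std_dev = desired_std_dev
        self.scale = scale

    def generate(self, n_steps):
        rng = np.random.default_rng(0)  # seeded here only for reproducibility
        walk = np.cumsum(rng.standard_normal(n_steps))  # correlated steps
        std = walk.std()
        if std > 0:
            walk = walk / std * self.desired_std_dev  # rescale to target std
        return self.weight * self.scale * walk

noise = CoherentNoise(base=[0.0], weight=1.0, desired_std_dev=5.0).generate(1000)
print(noise.shape, round(float(noise.std()), 2))  # prints (1000,) 5.0
```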

class utils.managers.Time_Manager(init_day=0, days_per_episode=30, timezone_shift=0)[source]

Bases: object

Class to manage the time dimension over an episode

Parameters:
  • init_day (int, optional) – Day to start from. Defaults to 0.

  • days_per_episode (int, optional) – Number of days that an episode would last. Defaults to 30.

  • timezone_shift (int, optional) – Shift for the timezone. Defaults to 0.

isterminal()[source]

Function to identify terminal state

Returns:

Signals if a state is terminal or not

Return type:

bool

reset(init_day=None, init_hour=None)[source]

Reset time manager to a specific initial day and hour.

Parameters:
  • init_day (int, optional) – Day to start from. If None, defaults to the initial day set during initialization.

  • init_hour (int, optional) – Hour to start from. If None, defaults to the timezone shift set during initialization.

Returns:

Sine and cosine of the current hour and day.

Return type:

List[float]

step()[source]

Step function for the time manager

Returns:

Current hour and day in sine and cosine form. bool: Signals whether the episode has reached the end.

Return type:

List[float]

class utils.managers.Weather_Manager(filename='', location='NY', init_day=0, weight=0.02, desired_std_dev=0.75, temp_column=6, rh_column=8, pres_column=9, timezone_shift=0)[source]

Bases: object

Manager of the weather data.

Where to obtain other weather files: https://climate.onebuilding.org/

Parameters:
  • filename (str, optional) – Filename of the weather data. Defaults to ‘’.

  • location (str, optional) – Location identifier. Defaults to ‘NY’.

  • init_day (int, optional) – Initial day of the year. Defaults to 0.

  • weight (float, optional) – Weight value for coherent noise. Defaults to 0.02.

  • desired_std_dev (float, optional) – Desired standard deviation for coherent noise. Defaults to 0.75.

  • temp_column (int, optional) – Column that contains the temperature data. Defaults to 6.

  • rh_column (int, optional) – Column that contains the relative humidity data. Defaults to 8.

  • pres_column (int, optional) – Column that contains the pressure data. Defaults to 9.

  • timezone_shift (int, optional) – Shift for the timezone. Defaults to 0.

get_current_weather()[source]
get_total_weather()[source]

Obtain the weather data as a list

Returns:

Total temperature data

Return type:

List[float]

reset(init_day=None, init_hour=None)[source]

Reset Weather_Manager to a specific initial day and hour.

Parameters:
  • init_day (int, optional) – Day to start from. If None, defaults to the initial day set during initialization.

  • init_hour (int, optional) – Hour to start from. If None, defaults to 0.

Returns:

Temperature at current step, normalized temperature at current step, wet bulb temperature at current step, normalized wet bulb temperature at current step.

Return type:

tuple

step()[source]

Step on the Weather_Manager

Returns:

Temperature at the current step. float: Normalized temperature at the current step.

Return type:

float

class utils.managers.Workload_Manager(workload_filename='', init_day=0, future_steps=4, weight=0.01, desired_std_dev=0.025, timezone_shift=0)[source]

Bases: object

get_current_workload()[source]
get_total_wkl()[source]

Get the total CPU workload data

Returns:

CPU data

Return type:

List[float]

reset(init_day=None, init_hour=None)[source]

Reset Workload_Manager to a specific initial day and hour.

Parameters:
  • init_day (int, optional) – Day to start from. If None, defaults to the initial day set during initialization.

  • init_hour (int, optional) – Hour to start from. If None, defaults to 0.

Returns:

CPU workload at current time step.

Return type:

float

scale_array(arr)[source]

Scales the input array so that approximately 90% of its values fall within the range of 0.2 to 0.8, based on the 5th and 95th percentiles.

Parameters:

arr (np.array) – The input numpy array to be scaled.

Returns:

The scaled numpy array.

Return type:

np.array
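The percentile-based scaling can be sketched as a linear map sending the 5th percentile to 0.2 and the 95th to 0.8. This is an illustrative assumption; the real implementation may differ (e.g. in clipping behavior).

```python
import numpy as np

def scale_array(arr):
    """Sketch: linearly map the 5th percentile to 0.2 and the 95th to 0.8,
    so roughly 90% of values land in [0.2, 0.8]. Assumed form, not the
    confirmed implementation."""
    arr = np.asarray(arr, dtype=float)
    p5, p95 = np.percentile(arr, [5, 95])
    if p95 == p5:
        return np.full_like(arr, 0.5)  # degenerate input: map everything to the middle
    return 0.2 + (arr - p5) * (0.8 - 0.2) / (p95 - p5)

x = np.linspace(0.0, 100.0, 101)
scaled = scale_array(x)
print(round(float(np.percentile(scaled, 5)), 2),
      round(float(np.percentile(scaled, 95)), 2))  # prints 0.2 0.8
```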

set_current_workload(workload)[source]
step()[source]

Step function for the Workload_Manager

Returns:

CPU workload at the current time step. float: Amount of daily flexible workload.

Return type:

float

utils.managers.normalize(v, min_v, max_v)[source]

Function to normalize values

Parameters:
  • v (float) – Value to be normalized

  • min_v (float) – Lower limit

  • max_v (float) – Upper limit

Returns:

Normalized value

Return type:

float
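Assuming the standard min-max form implied by the lower/upper limit parameters:

```python
def normalize(v, min_v, max_v):
    """Min-max normalization of v into [0, 1] given lower and upper limits.
    The standard (v - min) / (max - min) form is assumed."""
    return (v - min_v) / (max_v - min_v)

print(normalize(25.0, 0.0, 50.0))  # prints 0.5
```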

utils.managers.sc_obs(current_hour, current_day)[source]

Generate sine and cosine of the hour and day

Parameters:
  • current_hour (int) – Current hour of the day

  • current_day (int) – Current day of the year

Returns:

Sine and cosine of the hour and day

Return type:

List[float]
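This is the usual cyclical (sine/cosine) encoding of time features; the sketch below assumes periods of 24 hours and 365 days, which the docs do not state explicitly.

```python
import math

def sc_obs(current_hour, current_day):
    """Sketch of a cyclical encoding of hour-of-day and day-of-year.
    The period constants (24 and 365) are assumptions."""
    return [
        math.sin(2 * math.pi * current_hour / 24),
        math.cos(2 * math.pi * current_hour / 24),
        math.sin(2 * math.pi * current_day / 365),
        math.cos(2 * math.pi * current_day / 365),
    ]

print(sc_obs(0, 0))  # prints [0.0, 1.0, 0.0, 1.0]
```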

utils.managers.standarize(v)[source]

Function to standardize a list of values

Parameters:

v (List[float]) – Values to be standardized

Returns:

Standardized values

Return type:

float
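Standardization is conventionally the z-score transform (zero mean, unit variance); that form is assumed in this sketch.

```python
import numpy as np

def standarize(v):
    """Sketch: z-score standardization, (v - mean) / std. The exact
    form is an assumption about the implementation."""
    v = np.asarray(v, dtype=float)
    return (v - v.mean()) / v.std()

z = standarize([2.0, 4.0, 6.0, 8.0])
print(round(float(z.mean()), 10), round(float(z.std()), 10))  # prints 0.0 1.0
```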

utils.rbc_agents module

class utils.rbc_agents.RBCBatteryAgent(look_ahead=3, smooth_window=1, max_soc=0.9, min_soc=0.2)[source]

Bases: object

act(carbon_intensity_values, current_soc)[source]

Determine the action for the battery based on the carbon intensity forecast.

Parameters:
  • carbon_intensity_values (list) – Forecasted carbon intensity values.

  • current_soc (float) – Current state of charge of the battery.

Returns:

Action to be taken (0: ‘charge’, 1: ‘discharge’, 2: ‘idle’).

Return type:

int
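A rule-based policy of this shape typically compares the current carbon intensity against its look-ahead average: charge when CI is relatively low (and the battery is not full), discharge when it is relatively high (and the battery is not empty), otherwise idle. The decision rule below is an assumption about the implementation, not the confirmed logic.

```python
class RBCBatteryAgent:
    """Sketch of a rule-based battery controller. The mean-of-window
    comparison is an assumed decision rule."""

    def __init__(self, look_ahead=3, smooth_window=1, max_soc=0.9, min_soc=0.2):
        self.look_ahead = look_ahead
        self.smooth_window = smooth_window
        self.max_soc = max_soc
        self.min_soc = min_soc

    def act(self, carbon_intensity_values, current_soc):
        """Return 0 ('charge'), 1 ('discharge') or 2 ('idle')."""
        window = carbon_intensity_values[: self.look_ahead]
        avg_ci = sum(window) / len(window)
        current_ci = carbon_intensity_values[0]
        if current_ci < avg_ci and current_soc < self.max_soc:
            return 0  # CI is low now: charge for later
        if current_ci > avg_ci and current_soc > self.min_soc:
            return 1  # CI is high now: discharge to offset grid energy
        return 2      # otherwise stay idle

agent = RBCBatteryAgent()
print(agent.act([100.0, 300.0, 300.0], current_soc=0.5))  # prints 0 (charge)
```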

utils.reward_creator module

utils.reward_creator.custom_agent_reward(params: dict) → float[source]

A template for creating a custom agent reward function.

Parameters:

params (dict) – Dictionary containing custom parameters for reward calculation.

Returns:

Custom reward value. Currently returns 0.0 as a placeholder.

Return type:

float

utils.reward_creator.default_bat_reward(params: dict) → float[source]

Calculates a reward value based on the battery usage.

Parameters:

params (dict) – Dictionary containing parameters: total_energy_with_battery (float): Total energy with battery. norm_CI (float): Normalized Carbon Intensity. dcload_min (float): Minimum DC load. dcload_max (float): Maximum DC load.

Returns:

Reward value.

Return type:

float

utils.reward_creator.default_dc_reward(params: dict) → float[source]

Calculates a reward value based on the data center’s total ITE Load and CT Cooling load.

Parameters:

params (dict) – Dictionary containing parameters: data_center_total_ITE_Load (float): Total ITE Load of the data center. CT_Cooling_load (float): CT Cooling load of the data center. energy_lb (float): Lower bound of the energy. energy_ub (float): Upper bound of the energy.

Returns:

Reward value.

Return type:

float

utils.reward_creator.default_ls_reward(params: dict) → float[source]

Calculates a reward value based on normalized load shifting.

Parameters:

params (dict) – Dictionary containing parameters: norm_load_left (float): Normalized load left. out_of_time (bool): Indicator (alarm) whether the agent is in the last hour of the day. penalty (float): Penalty value.

Returns:

Reward value.

Return type:

float

utils.reward_creator.energy_PUE_reward(params: dict) → float[source]

Calculates a reward value based on Power Usage Effectiveness (PUE).

Parameters:

params (dict) – Dictionary containing parameters: total_energy_consumption (float): Total energy consumption of the data center. it_equipment_energy (float): Energy consumed by the IT equipment.

Returns:

Reward value.

Return type:

float
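PUE is total facility energy divided by IT equipment energy, with an ideal value of 1; a natural reward is its negative, so less cooling/overhead energy means a higher reward. The negated-PUE form below is an assumption about the actual reward.

```python
def energy_PUE_reward(params: dict) -> float:
    """Sketch: reward PUE values close to the ideal of 1.
    PUE = total facility energy / IT equipment energy. The negated-PUE
    form is an assumption, not the confirmed implementation."""
    total = params["total_energy_consumption"]
    it = params["it_equipment_energy"]
    pue = total / it
    return -pue  # PUE >= 1; lower overhead => higher (less negative) reward

print(energy_PUE_reward({"total_energy_consumption": 150.0,
                         "it_equipment_energy": 100.0}))  # prints -1.5
```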

utils.reward_creator.energy_efficiency_reward(params: dict) → float[source]

Calculates a reward value based on energy efficiency.

Parameters:

params (dict) – Dictionary containing parameters: ITE_load (float): The amount of energy spent on computation (useful work). total_energy_consumption (float): Total energy consumption of the data center.

Returns:

Reward value.

Return type:

float

utils.reward_creator.get_reward_method(reward_method: str = 'default_dc_reward')[source]

Maps the string identifier to the reward function

Parameters:

reward_method (string) – Identifier for the reward function.

Returns:

Reward function.

Return type:

function
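The mapping is a simple string-to-function registry. The sketch below uses hypothetical placeholder entries (only `custom_agent_reward`'s documented 0.0 placeholder is taken from these docs); the real module registers all the reward functions documented above.

```python
def custom_agent_reward(params: dict) -> float:
    return 0.0  # placeholder value, as documented

def default_dc_reward(params: dict) -> float:
    return 0.0  # hypothetical stand-in so the default identifier resolves

# Hypothetical registry illustrating the string -> function mapping.
REWARD_METHOD_MAP = {
    "default_dc_reward": default_dc_reward,
    "custom_agent_reward": custom_agent_reward,
}

def get_reward_method(reward_method: str = "default_dc_reward"):
    """Sketch of the lookup: resolve a string identifier to its reward
    function, rejecting unknown identifiers."""
    if reward_method not in REWARD_METHOD_MAP:
        raise ValueError(f"Unknown reward method: {reward_method}")
    return REWARD_METHOD_MAP[reward_method]

fn = get_reward_method("custom_agent_reward")
print(fn({}))  # prints 0.0
```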

utils.reward_creator.renewable_energy_reward(params: dict) → float[source]

Calculates a reward value based on the usage of renewable energy sources.

Parameters:

params (dict) – Dictionary containing parameters: renewable_energy_ratio (float): Ratio of energy coming from renewable sources. total_energy_consumption (float): Total energy consumption of the data center.

Returns:

Reward value.

Return type:

float

utils.reward_creator.temperature_efficiency_reward(params: dict) → float[source]

Calculates a reward value based on the efficiency of cooling in the data center.

Parameters:

params (dict) – Dictionary containing parameters: current_temperature (float): Current temperature in the data center. optimal_temperature_range (tuple): Tuple containing the minimum and maximum optimal temperatures for the data center.

Returns:

Reward value.

Return type:

float

utils.reward_creator.tou_reward(params: dict) → float[source]

Calculates a reward value based on the Time of Use (ToU) of energy.

Parameters:

params (dict) – Dictionary containing parameters: energy_usage (float): The energy usage of the agent. hour (int): The current hour of the day (24-hour format).

Returns:

Reward value.

Return type:

float

utils.reward_creator.water_usage_efficiency_reward(params: dict) → float[source]

Calculates a reward value based on the efficiency of water usage in the data center.

A lower value of water usage results in a higher reward, promoting sustainability and efficiency in water consumption.

Parameters:

params (dict) – Dictionary containing parameters: dc_water_usage (float): The amount of water used by the data center in a given period.

Returns:

Reward value. The reward is higher for lower values of water usage, promoting reduced water consumption.

Return type:

float

utils.rllib_callbacks module

class utils.rllib_callbacks.CustomCallbacks[source]

Bases: DefaultCallbacks

CustomCallbacks class that extends the DefaultCallbacks class and overrides its methods to customize the behavior of the callbacks during the RL training process.

on_episode_end(*, worker, base_env, policies, episode, env_index, **kwargs) → None[source]

Method that is called at the end of each episode in the training process. It calculates some metrics based on the updated user_data variables.

Parameters:
  • worker (Worker) – The worker object that is being used in the training process.

  • base_env (BaseEnv) – The base environment that is being used in the training process.

  • policies (Dict[str, Policy]) – The policies that are being used in the training process.

  • episode (MultiAgentEpisode) – The episode object that is being processed.

  • env_index (int) – The index of the environment within the worker task.

  • **kwargs – additional arguments that can be passed.

on_episode_start(*, worker, base_env, policies, episode, env_index, **kwargs) → None[source]

Method that is called at the beginning of each episode in the training process. It sets some user_data variables to be used later on.

Parameters:
  • worker (Worker) – The worker object that is being used in the training process.

  • base_env (BaseEnv) – The base environment that is being used in the training process.

  • policies (Dict[str, Policy]) – The policies that are being used in the training process.

  • episode (MultiAgentEpisode) – The episode object that is being processed.

  • env_index (int) – The index of the environment within the worker task.

  • **kwargs – additional arguments that can be passed.

on_episode_step(*, worker, base_env, episode, env_index, **kwargs) → None[source]

Method that is called at each step of each episode in the training process. It updates some user_data variables to be used later on.

Parameters:
  • worker (Worker) – The worker object that is being used in the training process.

  • base_env (BaseEnv) – The base environment that is being used in the training process.

  • episode (MultiAgentEpisode) – The episode object that is being processed.

  • env_index (int) – The index of the environment within the worker task.

  • **kwargs – additional arguments that can be passed.

utils.trim_and_respond module

Based on strategy #6, Supply Air Temperature Reset, from https://mepacademy.com/top-6-hvac-control-strategies-to-save-energy/#:~:text=Using%20Trim%20and%20Respond%20control,setpoint%20to%2055.2%C2%B0F.

class utils.trim_and_respond.trim_and_respond_ctrl(TandR_monitor_idx=6, TandR_monitor: str = 'avg_room_temp', TandR_monitor_limit: float = 27)[source]

Bases: object

action(obs)[source]
set_limit(x)[source]

utils.utils_cf module

utils.utils_cf.get_energy_variables(state)[source]

Obtain energy variables from the energy observation

Parameters:

state (List[float]) – agent_dc observation

Returns:

Subset of the agent_dc observation

Return type:

List[float]

utils.utils_cf.get_init_day(start_month=0)[source]

Obtain the initial day of the year to start the episode on

Parameters:

start_month (int, optional) – Starting month. Defaults to 0.

Returns:

Day of the year corresponding to the first day of the month

Return type:

int
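The computation is a cumulative sum of month lengths; the sketch below assumes a non-leap year and 0-indexed months and days, consistent with the default start_month=0.

```python
# Non-leap year assumed; months and returned day-of-year are 0-indexed
# (an assumption consistent with the documented default start_month=0).
DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def get_init_day(start_month=0):
    """Day of the year on which the given month starts."""
    return sum(DAYS_IN_MONTH[:start_month])

print(get_init_day(0))  # prints 0   (January 1st)
print(get_init_day(6))  # prints 181 (July 1st)
```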

utils.utils_cf.obtain_paths(location)[source]

Obtain the correct name for the data files

Parameters:

location (string) – Location identifier

Raises:

ValueError – If location identifier is not defined

Returns:

Naming for the data files

Return type:

List[string]

Module contents

Includes common utilities, controllers, rewards, wrappers and custom callbacks.