2.3. drrc.config module

Wrapper class for YAML configurations.

Use logger.INFO_1RUN for verbose output of this module’s functionality.

Code author: Gerrit Wellecke

class IndentDumper(stream, default_style=None, default_flow_style=False, canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, encoding=None, explicit_start=None, explicit_end=None, version=None, tags=None, sort_keys=True)[source]

Bases: Dumper

YAML dumper needed for correct printing of config

class Config(proj_path: Path)[source]

Bases: object

Configuration of a system from a YAML file

Initializes the config from a path

Parameters:

proj_path (pathlib.Path) – path to the project, given relative to the git repository’s root
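
A minimal usage sketch (the project path below is a placeholder, not part of the package):

from pathlib import Path
from drrc.config import Config

# "systems/my_system" is a hypothetical project directory, given relative
# to the git repository's root
conf = Config(Path("systems/my_system"))
print(conf.path)  # absolute path to the project's configuration YAML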

property path

Absolute path to the configuration YAML file for the current system.

Type:

pathlib.Path

property max_length

Maximum length of param_scan_list

Type:

int

static get_git_root() Path[source]

Get root of the current git repository

Returns:

absolute path to the git-root for the current system

Return type:

pathlib.Path

Warning

This does not really belong in this class.
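
Since this is a static method, it can be called without instantiating a Config; a short usage sketch:

from drrc.config import Config

# absolute path to the root of the repository this code runs in
repo_root = Config.get_git_root()
print(repo_root)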

get_data_dir() Path[source]

Get the absolute path to the current system’s data directory.

Returns:

absolute path to the data directory for the current system

Return type:

pathlib.Path

load_training_dataset(index: int) ndarray[source]

Load and return the training data.

Parameters:

index – The index of the training data set to load.

Returns:

One training data set with minimal temporal length needed for iterative timeseries predictions, i.e. temporal_length=(transient_steps + training_steps + 1).

Warning

This function should be modified to load only the variables that are needed, in order to save memory.

load_evalaluation_datasets() list[ndarray][source]

Load and return the evaluation data.

Returns:

A list of numpy arrays containing the first evaluation data sets.

Warning

This function should be modified to load only the variables that are needed, in order to save memory.
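
A hedged usage sketch for the two loaders (the project path is a placeholder; array shapes depend on the configured system):

from pathlib import Path
from drrc.config import Config

conf = Config(Path("systems/my_system"))  # placeholder project path

train = conf.load_training_dataset(index=0)   # a single ndarray
evals = conf.load_evalaluation_datasets()     # a list of ndarrays
print(train.shape, len(evals))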

parse_YAML() dict[source]

Parse the config.yml file to get the system’s parameters

Returns:

representation of the config as given in the YAML file

Return type:

dict
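
A short usage sketch; the "Simulation" key is only an example of a block that may exist in the YAML:

from pathlib import Path
from drrc.config import Config

conf = Config(Path("systems/my_system"))  # placeholder project path

params = conf.parse_YAML()
# the returned dict mirrors the YAML structure, e.g. a top-level block:
simulation_params = params.get("Simulation", {})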

write_metadata_HDF(f: File, *, keylist: list[str] = ['Simulation'], task_id: int | None = None, sub_task_id: int | None = None) None[source]

Write parameters to an open HDF5-file’s attributes

Parameters:
  • f (h5py.File) – HDF file in which metadata is to be written

  • keylist (list[str]) – list of keys to be written (default: Simulation)

  • task_id (int) – if a task_id is given, the corresponding cluster params are also written

  • sub_task_id (int) – if a sub_task_id is given, only the corresponding cluster params are written
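
A hedged usage sketch; the HDF5 file name and the task_id are placeholders:

import h5py
from pathlib import Path
from drrc.config import Config

conf = Config(Path("systems/my_system"))  # placeholder project path

with h5py.File("results.h5", "a") as f:  # placeholder file name
    # write the 'Simulation' block (and the cluster params of task 3)
    # into the file's attributes
    conf.write_metadata_HDF(f, keylist=["Simulation"], task_id=3)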

param_scan_list() list[list[dict]][source]

Return set of parameters for a cluster run

Returns:

list of lists of dictionaries, where each sublist is to be understood as a single job in a job array

Note

In order to run a simulation as a parameter scan, supply the Config YAML with the following block:

ParamScan:
    A:
        - "range"
        - [3]
    B:
        - "range"
        - [5, 10]
    C:
        - "range"
        - [0, 100, 10]
    D:
        - "list"
        - [1, 5, 13]

# useful shorthand for the above config:
ParamScan:
    A: ["range", [3]]
    B: ["range", [5, 10]]
    C: ["range", [0, 100, 10]]
    D: ["list", [1, 5, 13]]

The arguments to the parameters must be lists, where the first entry specifies the type of parameter specification and the second specifies the values. If specified as "list", the values are simply taken as given. If instead specified as "range", the values are passed to numpy.arange() by list unpacking, e.g. np.arange(*B). Similar specifications using linspace and geomspace are available; in those cases the values must be start, stop, number.

The above example would mean that

\[A \in \{0, 1, 2\} \,, B \in \{5, 6, 7, 8, 9\} \,, C \in \{0, 10, 20, \ldots, 90\} \,, D \in \{1, 5, 13\} \,.\]

A job will then be started for each combination of \(A, B, C, D\).
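
The expansion into the Cartesian product can be illustrated with a short, self-contained sketch; this only mirrors the semantics described above and is not the module’s implementation:

import itertools
import numpy as np

param_scan = {
    "A": ("range", [3]),
    "B": ("range", [5, 10]),
    "C": ("range", [0, 100, 10]),
    "D": ("list", [1, 5, 13]),
}

# expand each parameter to its set of values
values = {
    key: np.arange(*args) if kind == "range" else list(args)
    for key, (kind, args) in param_scan.items()
}

# one dictionary per job, i.e. one element of the Cartesian product
jobs = [dict(zip(values, combo)) for combo in itertools.product(*values.values())]
print(len(jobs))  # 3 * 5 * 10 * 3 = 450 jobs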

If you intend to run a single execution of a fixed set of parameters, you may set the following within the YAML:

ParamScan: null

param_scan_len() int[source]

Total iterations of a cluster run

Warning

This is deprecated! Use len(conf.param_scan_list()) instead

jobscript_datadir(output_type: str) Path[source]

Path to the raw data directory as defined in the YAML.

The cluster expects the data directory to be defined in the YAML in a block such as:

Saving:
    OutputDirectory: "path/to/data"

Parameters:

output_type (str) – The output type ('ValidTimes', 'RunTimes', 'Memory'), which defines the folder to be written into.

make_jobscript_datadir(*, output_type: str, copy_yaml: bool = False) None[source]

Create datadir as specified in Config if it doesn’t exist

Parameters:
  • output_type (str) – The output type ('ValidTimes', 'RunTimes', 'Memory'), which defines the folder to be written into.

  • copy_yaml (bool) – If set to True, the YAML will be copied to the created directory

Warning

If the directory already exists, an error will be raised to avoid overwriting previous data. The recommended procedure is to either delete old data or rename the YAML such that new data is written in a new directory.
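
A hedged usage sketch combining the two methods (the project path is a placeholder; 'ValidTimes' is one of the documented output types):

from pathlib import Path
from drrc.config import Config

conf = Config(Path("systems/my_system"))  # placeholder project path

# create the data directory and keep a copy of the YAML next to the data;
# raises if the directory already exists
conf.make_jobscript_datadir(output_type="ValidTimes", copy_yaml=True)

# the directory configured under Saving -> OutputDirectory for this output type
datadir = conf.jobscript_datadir("ValidTimes")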

generate_submission_script_from_YAML(*, output_type: str, template: Path | None = None) Path[source]

Creates a shell script that can then be passed to qsub or sbatch, based on the information contained in the Config.

Parameters:
  • output_type (str) – The output type ('ValidTimes', 'RunTimes', 'Memory'), which defines the folder to be written into.

  • template (pathlib.Path or None) – Optional path to a different template than drrc/templates/qsub.template

Note

For this to work, the configuration YAML must contain the following block:

Jobscript:
  # for qsub
  Type: "qsub"                 # specify to run at MPI-DS
  Cores: 4                     # number of cores per job (should be divisor of 32)
  max_job_count: 1000          # array job will have 1000 jobs
  # optional:
  force_parallel_queue: False  # force job to run on teutates-mvapich2.q (optional)

  # for slurm
  Type: "sbatch"         # specify submission command (template must have matching name!)
  max_job_count: 2000    # 2000 jobs will be submitted as an array
  tasks_per_job: 4       # each job will have 4 job steps
  cores_per_task: 4      # each job step will use 4 cores
  mem_per_task: 24       # and 24GB of memory
  cluster: "GWDG"        # use specific cluster options
  time: "00:05:00"       # each task will run at most 5 minutes

This will create parallel jobs with 4 cores per task. On qsub this will always fill nodes of 32 cores. When using SLURM, this additionally allows setting how many tasks should be run in each job (potentially allowing for smaller jobs / faster queueing).

The resulting shell script will always be placed in the data directory as returned by Config.jobscript_datadir() to ensure data is kept with the submission script.

Warning

There seems to be some configuration in place for cluster: "raven" and cluster: "viper" such that total memory must always be defined. I’m not quite sure yet how to write scripts for that. So right now this only works @GWDG!

Important

If using Type: "qsub":

Currently this is set up such that a single cluster node of 32 cores will receive as many jobs as it can fit. For optimal use of the cluster Cores should be a divisor of 32.

When choosing this, keep in mind that each cluster node has 192 GB of RAM.

If Cores is set to 1, this function assumes that the job will be submitted to the serial cluster and adapts the submission script accordingly. However, one may set the optional key force_parallel_queue: True to run 32 single-core jobs per node in the parallel queue.

Jobs are submitted using:

qsub Submit-job.sh

If using Type: "slurm":

In this case, RAM and partition must be defined in the YAML. Note that this is the RAM per CPU core. So in the above example SLURM will allocate \(4\cdot12\,\mathrm{GB}=48\,\mathrm{GB}\) of RAM.

The job can then be submitted using

sbatch Submit-job.sh
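
Putting the pieces together, a hedged end-to-end sketch (paths and output type are placeholders; submission itself is done manually with the scheduler command matching the configured Type):

from pathlib import Path
from drrc.config import Config

conf = Config(Path("systems/my_system"))  # placeholder project path

conf.make_jobscript_datadir(output_type="ValidTimes", copy_yaml=True)
script = conf.generate_submission_script_from_YAML(output_type="ValidTimes")

# then submit by hand, depending on the configured Jobscript Type:
#   qsub   Submit-job.sh    (Type: "qsub")
#   sbatch Submit-job.sh    (Type: "sbatch")
print(script)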