Using Climada on the Euler Cluster (ETH internal)


  1. Access to Euler

  2. Installation and working directories

  3. Pre-installed version of Climada

    1. Load dependencies

    2. Check installation

    3. Adjust the Climada configuration

    4. Run a job

  4. Working with Git Branches

    1. Load dependencies

    2. Create installation environment

    3. Check out sources

    4. Pip install Climada

    5. Check installation

    6. Adjust the Climada configuration

    7. Run a job

  5. Fallback: Conda installation

    1. Conda installation

    2. Check out sources

    3. Climada environment

    4. Adjust the Climada configuration

    5. Climada scripts

    6. Run a job

  6. Conda Deinstallation

    1. Conda

    2. Climada

Access to Euler

See the cluster wiki for details on how to register for and get started with Euler.

For all steps below, first log in to the cluster via SSH.

Installation and working directories

Please get familiar with the various Euler storage options. As a general rule, use /cluster/project for installation and /cluster/work for data processing.

For ETH WCR group members, the suggested installation and working directories are /cluster/project/climate/$USER and /cluster/work/climate/$USER, respectively. You may have to create the installation and working directories:

mkdir -p /cluster/project/climate/$USER
mkdir -p /cluster/work/climate/$USER

Pre-installed version of Climada

Climada is pre-installed and available in the default pip environment of Euler.

1. Load dependencies

module load gcc/6.3.0 python/3.8.5 gdal/3.1.2 geos/3.8.1 proj/7.2.1 libspatialindex/1.8.5 hdf5/1.10.1 netcdf/ eccodes/2.21.0 zlib/1.2.9

You need to execute this command every time you log in to Euler before Climada can be used. To save yourself from doing it manually, you can append it to the ~/.bashrc script, which is automatically executed upon logging in to Euler.
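As a sketch, appending it can be done like this (the module list is copied verbatim from the step above):

```shell
# append the module load command to ~/.bashrc so it runs at every login
echo 'module load gcc/6.3.0 python/3.8.5 gdal/3.1.2 geos/3.8.1 proj/7.2.1 libspatialindex/1.8.5 hdf5/1.10.1 netcdf/ eccodes/2.21.0 zlib/1.2.9' >> ~/.bashrc
```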

2. Check installation

python -c 'import climada; print(climada.__file__)'

should output something like this:


3. Adjust the Climada configuration

Edit a configuration file according to your needs (see Guide_Configuration). Create a climada.conf file, e.g., in /cluster/home/$USER/.config, with the following content:

{
    "local_data": {
        "system": "/cluster/work/climate/USERNAME/climada/data",
        "demo": "/cluster/project/climate/USERNAME/climada_python/data/demo",
        "save_dir": "/cluster/work/climate/USERNAME/climada/results"
    }
}

(Replace USERNAME with your nethz-id.)
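To avoid editing the paths by hand, the file can also be generated with your username filled in automatically; a minimal sketch, writing the same content as above to ~/.config/climada.conf:

```shell
# create the config directory and write climada.conf with $USER substituted into the paths
mkdir -p ~/.config
cat > ~/.config/climada.conf <<EOF
{
    "local_data": {
        "system": "/cluster/work/climate/$USER/climada/data",
        "demo": "/cluster/project/climate/$USER/climada_python/data/demo",
        "save_dir": "/cluster/work/climate/$USER/climada/results"
    }
}
EOF
```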

4. Run a job

Please see the Wiki for an overview of how to use bsub.

cd /cluster/work/climate/$USER  # change to the working directory
bsub [bsub-options*] python  # submit the job
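For illustration, a submission with explicit resource requests could look like the following; the script name and the resource values are placeholders, to be adjusted to your job:

```shell
# example: request 1 core, 4 h wall time, 2 GB of memory
# (my_climada_script.py is a hypothetical name for your own script)
bsub -n 1 -W 4:00 -R "rusage[mem=2048]" python my_climada_script.py
```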

Working with Git branches

If the Climada version of the default installation does not meet your needs, you can install Climada from a local Git repository.

1. Load dependencies

See Load dependencies above.

2. Create installation environment

python -m venv --system-site-packages /cluster/project/climate/$USER/climada_venv

3. Check out sources

cd /cluster/project/climate/$USER
git clone
cd climada_python
git checkout develop  # e.g., your branch of interest

4. Pip install Climada

source /cluster/project/climate/$USER/climada_venv/bin/activate
pip install -e /cluster/project/climate/$USER/climada_python

5. Check installation

cd /cluster/work/climate/$USER
python -c 'import climada; print(climada.__file__)'

should output exactly this (with $USER expanded to your username):


6. Adjust the Climada configuration

See Adjust the Climada configuration above.

7. Run a job

See Run a job above.

Fallback: Conda installation

If Climada cannot be installed through pip because of changed dependency requirements, it can still be installed through a Conda environment.

> WARNING: This approach is highly discouraged, as it imposes a heavy and mostly unnecessary burden on the file system of the cluster.

1. Conda Installation

Download or update to the latest version of Miniconda. Installation is done by executing the following steps:

cd /cluster/project/climate/USERNAME
miniconda3/bin/conda init

During the installation process of Miniconda, you are prompted to set the installation directory. Set it to /cluster/project/climate/USERNAME/miniconda3. Once the installation has finished, log out of Euler and log in again. The command prompt should now be prefixed with (base), indicating that the installation was successful and that you start in conda's base environment by default.

2. Check out sources

See Check out sources above.

3. Climada Environment

Create the conda environment:

cd /cluster/project/climate/USERNAME/climada_python
conda env create -f requirements/env_climada.yml --name climada_env
conda env update -n climada_env -f requirements/env_developer.yml

conda activate climada_env
conda install conda-build
conda develop .

4. Adjust the Climada configuration

See Adjust the Climada configuration above.

5. Climada Scripts

Create a bash script for executing python scripts in the climada environment:

#!/bin/bash
PYTHON_SCRIPT=$1  # first argument: the python script to execute
shift             # the remaining arguments are passed on to the python script
. ~/.bashrc
conda activate climada_env
python $PYTHON_SCRIPT "$@"
echo $PYTHON_SCRIPT completed

Make it executable:

chmod +x

Create a python script that executes climada code, e.g.,

import sys
# importing climada objects verifies that the environment is set up correctly
from climada import CONFIG, SYSTEM_DIR
from climada.util.test.test_finance import TestNetpresValue
print("the script ran with arguments", sys.argv)

6. Run a Job

Please see the Wiki:

With the scripts from above you can submit the python script as a job like this:

bsub [options] /path/to/ /path/to/ arg1 arg2

After the job has finished the lsf output file should look something like this:

Sender: LSF System <lsfadmin@eu-ms-010-32>
Subject: Job 161617875: <./ arg1 arg2> in cluster <euler> Done

Job <./ arg1 arg2> was submitted from host <eu-login-41> by user <USERNAME> in cluster <euler> at Thu Jan 28 14:10:15 2021
Job was executed on host(s) <eu-ms-010-32>, in queue <normal.4h>, as user <USERNAME> in cluster <euler> at Thu Jan 28 14:10:42 2021
</cluster/home/USERNAME> was used as the home directory.
</cluster/work/climate/USERNAME> was used as the working directory.
Started at Thu Jan 28 14:10:42 2021
Terminated at Thu Jan 28 14:10:53 2021
Results reported at Thu Jan 28 14:10:53 2021

Your job looked like:

# LSBATCH: User input
./ arg1 arg2

Successfully completed.

Resource usage summary:

    CPU time :                                   2.99 sec.
    Max Memory :                                 367 MB
    Average Memory :                             5.00 MB
    Total Requested Memory :                     1024.00 MB
    Delta Memory :                               657.00 MB
    Max Swap :                                   -
    Max Processes :                              5
    Max Threads :                                6
    Run time :                                   22 sec.
    Turnaround time :                            38 sec.

The output (if any) follows:

/cluster/project/climate/USERNAME/miniconda3/envs/climada/lib/python3.7/site-packages/pandas_datareader/compat/ FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
  from pandas.util.testing import assert_frame_equal
the script ran with arguments ['/path/to/', 'arg1', 'arg2']
/path/to/ completed

Conda Deinstallation

1. Conda

Remove the miniconda3 directory from the installation directory:

rm -rf /cluster/project/climate/USERNAME/miniconda3/

Delete the conda-related parts from /cluster/home/USERNAME/.bashrc, i.e., everything between and including the lines

# >>> conda initialize >>>
# <<< conda initialize <<<
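A sketch of doing this removal non-interactively with sed; it deletes everything from the opening to the closing marker, inclusive (keep a backup first; the touch is only a safeguard in case the file is missing):

```shell
touch ~/.bashrc             # safeguard: no-op if the file already exists
cp ~/.bashrc ~/.bashrc.bak  # keep a backup before editing
# delete the conda initialize block, marker lines included
sed -i '/# >>> conda initialize >>>/,/# <<< conda initialize <<</d' ~/.bashrc
```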

2. Climada

Remove the climada sources and config file:

rm -rf /cluster/project/climate/USERNAME/climada_python
rm -f /cluster/home/USERNAME/climada.conf /cluster/home/USERNAME/*/climada.conf