Using Climada on the Euler Cluster (ETH internal)¶
Access to Euler¶
See https://scicomp.ethz.ch/wiki/Getting_started_with_clusters for details on how to register at and get started with Euler.
For all steps below, first log in to the cluster via SSH.
Installation and working directories¶
Please, get familiar with the various Euler storage options: https://scicomp.ethz.ch/wiki/Storage_systems. As a general rule: use /cluster/project
for installation and /cluster/work
for data processing.
For ETH WCR group members, the suggested installation and working directories are /cluster/project/climate/$USER
and /cluster/work/climate/$USER
respectively. You may have to create these directories first:
mkdir -p /cluster/project/climate/$USER \
/cluster/work/climate/$USER
Pre-installed version of Climada¶
Climada is pre-installed and available in the default pip environment of Euler.
1. Load dependencies¶
env2lmod
module load gcc/6.3.0 python/3.8.5 gdal/3.1.2 geos/3.8.1 proj/7.2.1 libspatialindex/1.8.5 hdf5/1.10.1 netcdf/4.4.1.1 eccodes/2.21.0 zlib/1.2.9
You need to execute these two lines every time you log in to Euler before Climada can be used. To save yourself from doing this manually, you can append them to the ~/.bashrc script, which is executed automatically upon logging in to Euler.
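For example, you could append the two lines with a heredoc (a sketch; adjust the module list if Euler's toolchain versions change):

```shell
# Append the Climada module setup to ~/.bashrc so it runs on every login
cat >> ~/.bashrc <<'EOF'
env2lmod
module load gcc/6.3.0 python/3.8.5 gdal/3.1.2 geos/3.8.1 proj/7.2.1 libspatialindex/1.8.5 hdf5/1.10.1 netcdf/4.4.1.1 eccodes/2.21.0 zlib/1.2.9
EOF
```

The quoted 'EOF' delimiter prevents any shell expansion, so the lines are appended verbatim.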
2. Check installation¶
python -c 'import climada; print(climada.__file__)'
should output something like this:
/cluster/apps/nss/gcc-6.3.0/python/3.8.5/x86_64/lib64/python3.8/site-packages/climada/__init__.py
3. Adjust the Climada configuration¶
Edit a configuration file according to your needs (see Guide_Configuration). Create a climada.conf file, e.g. in /cluster/home/$USER/.config, with the following content:
{
"local_data": {
"system": "/cluster/work/climate/USERNAME/climada/data",
"demo": "/cluster/project/climate/USERNAME/climada_python/data/demo",
"save_dir": "/cluster/work/climate/USERNAME/climada/results"
}
}
(Replace USERNAME with your nethz-id.)
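Alternatively, the file can be written from the shell; with an unquoted heredoc, $USER expands to your nethz-id automatically (a sketch of the same configuration as above):

```shell
# Write the Climada configuration; $USER is expanded by the shell
mkdir -p ~/.config
cat > ~/.config/climada.conf <<EOF
{
    "local_data": {
        "system": "/cluster/work/climate/$USER/climada/data",
        "demo": "/cluster/project/climate/$USER/climada_python/data/demo",
        "save_dir": "/cluster/work/climate/$USER/climada/results"
    }
}
EOF
```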
4. Run a job¶
Please see the Wiki: https://scicomp.ethz.ch/wiki/Using_the_batch_system for an overview of how to use bsub.
cd /cluster/work/climate/$USER # change to the working directory
bsub [bsub-options*] python climada_job_script.py # submit the job
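As a concrete sketch, a single-core job with a four-hour time limit and 4 GB of memory could be submitted as follows (the resource values and log file name are illustrative; choose them to match your workload):

```shell
# -n: number of cores, -W: wall-clock limit (HH:MM),
# -R: memory request per core in MB, -oo: overwrite output log
bsub -n 1 -W 4:00 -R "rusage[mem=4096]" -oo climada_job.log python climada_job_script.py
```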
Working with Git branches¶
If the Climada version of the default installation does not fit your needs, you can install Climada from a local Git repository.
1. Load dependencies¶
See Load dependencies above.
2. Create installation environment¶
python -m venv --system-site-packages /cluster/project/climate/$USER/climada_venv
3. Checkout sources¶
cd /cluster/project/climate/$USER
git clone https://github.com/CLIMADA-project/climada_python.git
cd climada_python
git checkout develop # i.e., your branch of interest
4. Pip install Climada¶
source /cluster/project/climate/$USER/climada_venv/bin/activate
pip install -e /cluster/project/climate/$USER/climada_python
5. Check installation¶
cd /cluster/work/climate/$USER
python -c 'import climada; print(climada.__file__)'
should output exactly this (with $USER expanded to your username):
/cluster/project/climate/$USER/climada_python/climada/__init__.py
6. Adjust the Climada configuration¶
See Adjust the Climada configuration above.
7. Run a job¶
See Run a job above.
Fallback: Conda installation¶
If Climada cannot be installed through pip because of changed dependency requirements, it can still be installed through a Conda environment.
> WARNING: This approach is strongly discouraged, as it imposes a heavy and mostly unnecessary burden on the file system of the cluster.
1. Conda Installation¶
Download the latest version of Miniconda and install it by executing the following steps:
cd /cluster/project/climate/USERNAME
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
miniconda3/bin/conda init
rm Miniconda3-latest-Linux-x86_64.sh
During the Miniconda installation, you are prompted for the installation directory. Set it to /cluster/project/climate/USERNAME/miniconda3. Once the installation has finished, log out of Euler and back in. The command prompt should now be prefixed with (base), indicating that the installation was successful and that you log in to conda's base environment by default.
2. Checkout sources¶
See Checkout sources above.
3. Climada Environment¶
Create the conda environment:
cd /cluster/project/climate/USERNAME/climada_python
conda env create -f requirements/env_climada.yml --name climada_env
conda env update -n climada_env -f requirements/env_developer.yml
conda activate climada_env
conda install conda-build
conda develop .
4. Adjust the Climada configuration¶
See Adjust the Climada configuration above.
5. Climada Scripts¶
Create a bash script for executing python scripts in the climada environment, climadajob.sh:
#!/bin/bash
PYTHON_SCRIPT=$1
shift
. ~/.bashrc
conda activate climada_env
python "$PYTHON_SCRIPT" "$@"
echo $PYTHON_SCRIPT completed
Make it executable:
chmod +x climadajob.sh
Create a python script that executes climada code, e.g., climada_smoke_test.py:
import sys
from climada import CONFIG, SYSTEM_DIR
from climada.util.test.test_finance import TestNetpresValue
TestNetpresValue().test_net_pres_val_pass()
print(SYSTEM_DIR)
print(CONFIG.local_data.save_dir.str())
print("the script ran with arguments", sys.argv)
6. Run a Job¶
Please see the Wiki: https://scicomp.ethz.ch/wiki/Using_the_batch_system.
With the scripts from above you can submit the python script as a job like this:
bsub [options] /path/to/climadajob.sh /path/to/climada_smoke_test.py arg1 arg2
After the job has finished, the LSF output file should look something like this:
Sender: LSF System <lsfadmin@eu-ms-010-32>
Subject: Job 161617875: <./climada_job.sh climada_smoke_test.py arg1 arg2> in cluster <euler> Done
Job <./climada_job.sh climada_smoke_test.py arg1 arg2> was submitted from host <eu-login-41> by user <USERNAME> in cluster <euler> at Thu Jan 28 14:10:15 2021
Job was executed on host(s) <eu-ms-010-32>, in queue <normal.4h>, as user <USERNAME> in cluster <euler> at Thu Jan 28 14:10:42 2021
</cluster/home/USERNAME> was used as the home directory.
</cluster/work/climate/USERNAME> was used as the working directory.
Started at Thu Jan 28 14:10:42 2021
Terminated at Thu Jan 28 14:10:53 2021
Results reported at Thu Jan 28 14:10:53 2021
Your job looked like:
------------------------------------------------------------
# LSBATCH: User input
./climada_job.sh climada_smoke_test.py arg1 arg2
------------------------------------------------------------
Successfully completed.
Resource usage summary:
CPU time : 2.99 sec.
Max Memory : 367 MB
Average Memory : 5.00 MB
Total Requested Memory : 1024.00 MB
Delta Memory : 657.00 MB
Max Swap : -
Max Processes : 5
Max Threads : 6
Run time : 22 sec.
Turnaround time : 38 sec.
The output (if any) follows:
/cluster/project/climate/USERNAME/miniconda3/envs/climada/lib/python3.7/site-packages/pandas_datareader/compat/__init__.py:7: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
from pandas.util.testing import assert_frame_equal
/cluster/work/climate/USERNAME/climada/data
/cluster/work/climate/USERNAME/climada/results
the script ran with arguments ['/path/to/climada_smoke_test.py', 'arg1', 'arg2']
/path/to/climada_smoke_test.py completed
Conda Uninstallation¶
1. Conda¶
Remove the miniconda3 directory from the installation directory:
rm -rf /cluster/project/climate/USERNAME/miniconda3/
Delete the conda-related parts from /cluster/home/USERNAME/.bashrc, i.e., everything between
# >>> conda initialize >>>
# <<< conda initialize <<<
2. Climada¶
Remove the climada sources and config file:
rm -rf /cluster/project/climate/USERNAME/climada_python
rm -f /cluster/home/USERNAME/climada.conf /cluster/home/USERNAME/*/climada.conf