climada.util package#

climada.util.api_client module#

class climada.util.api_client.Download(*args, **kwargs)[source]#

Bases: Model

Database entry keeping track of downloaded files from the CLIMADA data API

url = <CharField: Download.url>#
path = <CharField: Download.path>#
startdownload = <DateTimeField: Download.startdownload>#
enddownload = <DateTimeField: Download.enddownload>#
exception Failed[source]#

Bases: Exception

The download failed for some reason.

DoesNotExist#

alias of DownloadDoesNotExist

id = <AutoField: Download.id>#
class climada.util.api_client.FileInfo(uuid: str, url: str, file_name: str, file_format: str, file_size: int, check_sum: str)[source]#

Bases: object

file data from CLIMADA data API.

uuid: str#
url: str#
file_name: str#
file_format: str#
file_size: int#
check_sum: str#
__init__(uuid: str, url: str, file_name: str, file_format: str, file_size: int, check_sum: str) None#
class climada.util.api_client.DataTypeInfo(data_type: str, data_type_group: str, status: str, description: str, properties: list, key_reference: list | None = None, version_notes: list | None = None)[source]#

Bases: object

data type meta data from CLIMADA data API.

data_type: str#
data_type_group: str#
status: str#
description: str#
properties: list#
key_reference: list = None#
version_notes: list = None#
__init__(data_type: str, data_type_group: str, status: str, description: str, properties: list, key_reference: list | None = None, version_notes: list | None = None) None#
class climada.util.api_client.DataTypeShortInfo(data_type: str, data_type_group: str)[source]#

Bases: object

data type name and group from CLIMADA data API.

data_type: str#
data_type_group: str#
__init__(data_type: str, data_type_group: str) None#
class climada.util.api_client.DatasetInfo(uuid: str, data_type: DataTypeShortInfo, name: str, version: str, status: str, properties: dict, files: list, doi: str, description: str, license: str, activation_date: str, expiration_date: str)[source]#

Bases: object

dataset data from CLIMADA data API.

uuid: str#
data_type: DataTypeShortInfo#
name: str#
version: str#
status: str#
properties: dict#
files: list#
doi: str#
description: str#
license: str#
activation_date: str#
expiration_date: str#
static from_json(jsono)[source]#

Creates a DatasetInfo object from the JSON object returned by the CLIMADA data API server.

Parameters:

jsono (dict)

Return type:

DatasetInfo

__init__(uuid: str, data_type: DataTypeShortInfo, name: str, version: str, status: str, properties: dict, files: list, doi: str, description: str, license: str, activation_date: str, expiration_date: str) None#
climada.util.api_client.checksize(local_path, fileinfo)[source]#

Checks sanity of downloaded file simply by comparing actual and registered size.

Parameters:
  • local_path (Path) – the downloaded file

  • fileinfo (FileInfo) – file information from CLIMADA data API

Raises:

Download.Failed – if the file is not what it’s supposed to be

climada.util.api_client.checkhash(local_path, fileinfo)[source]#

Checks sanity of downloaded file by comparing actual and registered check sum.

Parameters:
  • local_path (Path) – the downloaded file

  • fileinfo (FileInfo) – file information from CLIMADA data API

Raises:

Download.Failed – if the file is not what it’s supposed to be

class climada.util.api_client.Cacher(cache_enabled)[source]#

Bases: object

Utility class handling cached results from HTTP requests, enabling the API client to work in offline mode.

__init__(cache_enabled)[source]#

Constructor of Cacher.

Parameters:

cache_enabled (bool, None) – Default: None, in this case the value is taken from CONFIG.data_api.cache_enabled.

store(result, *args, **kwargs)[source]#

Stores the result of an API call in a local file.

The name of the file is the md5 hash of a string created from the call’s arguments, the content of the file is the call’s result in json format.

Parameters:
  • result (dict) – will be written in json format to the cached result file

  • *args (list of str)

  • **kwargs (list of dict of (str,str))

fetch(*args, **kwargs)[source]#

Reloads the result of an API call from a local file created by the corresponding call of self.store.

If no call with exactly the same arguments has been made in the past, the result is None.

Parameters:
  • *args (list of str)

  • **kwargs (list of dict of (str,str))

Return type:

dict or None
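The store/fetch pair above can be sketched as follows. This is an illustrative reimplementation under assumed details (the class and directory handling are hypothetical), not the actual Cacher code:

```python
import hashlib
import json
import tempfile
from pathlib import Path

class TinyCacher:
    """Illustrative sketch: cache call results under files named by the md5
    hash of the call's arguments, with the result stored as json."""

    def __init__(self, cache_dir):
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)

    def _cache_file(self, *args, **kwargs):
        # Build a deterministic key string from the call's arguments.
        key = "\t".join(list(args) + [f"{k}={v}" for k, v in sorted(kwargs.items())])
        return self.cache_dir / hashlib.md5(key.encode("utf-8")).hexdigest()

    def store(self, result, *args, **kwargs):
        # Write the result in json format to the cached result file.
        self._cache_file(*args, **kwargs).write_text(json.dumps(result))

    def fetch(self, *args, **kwargs):
        # Return None if no call with exactly these arguments was stored.
        path = self._cache_file(*args, **kwargs)
        return json.loads(path.read_text()) if path.exists() else None
```

Because the key is derived from the arguments alone, a later call with exactly the same arguments finds the earlier result even without a network connection.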

class climada.util.api_client.Client(cache_enabled=None)[source]#

Bases: object

Python wrapper around REST calls to the CLIMADA data API server.

MAX_WAITING_PERIOD = 6#
UNLIMITED = 100000#
DOWNLOAD_TIMEOUT = 3600#
QUERY_TIMEOUT = 300#
exception AmbiguousResult[source]#

Bases: Exception

Custom Exception for Non-Unique Query Result

exception NoResult[source]#

Bases: Exception

Custom Exception for No Query Result

exception NoConnection[source]#

Bases: Exception

To be raised if there is no internet connection and no cached result.

__init__(cache_enabled=None)[source]#

Constructor of Client.

Data API host and chunk_size (for download) are configurable values. Default values are ‘climada.ethz.ch’ and 8096 respectively.

Parameters:

cache_enabled (bool, optional) – This flag controls whether the api calls of this client are going to be cached to the local file system (location defined by CONFIG.data_api.cache_dir). If set to true, the client can reload the results from the cache in case there is no internet connection and thus work in offline mode. Default: None, in this case the value is taken from CONFIG.data_api.cache_enabled.

list_dataset_infos(data_type=None, name=None, version=None, properties=None, status='active')[source]#

Find all datasets matching the given parameters.

Parameters:
  • data_type (str, optional) – data_type of the dataset, e.g., 'litpop' or 'drought'

  • name (str, optional) – the name of the dataset

  • version (str, optional) – the version of the dataset; 'any' for all versions, 'newest' or None for the newest version meeting the requirements. Default: None

  • properties (dict, optional) – search parameters for dataset properties. Any property has a string for key and can be a string or a list of strings for value. Default: None

  • status (str, optional) – valid values are 'preliminary', 'active', 'expired', 'test_dataset' and None. Default: 'active'

Return type:

list of DatasetInfo
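As an illustration of the property matching described above, the following sketch filters plain dicts the way such a query plausibly works. The matching rule and all names are assumptions for illustration, not the server's actual implementation:

```python
def matches(dataset_properties, requested):
    """A dataset matches if, for every requested property, its value equals
    the requested string or is contained in the requested list of strings."""
    for key, value in requested.items():
        wanted = value if isinstance(value, list) else [value]
        if dataset_properties.get(key) not in wanted:
            return False
    return True

# Hypothetical dataset records standing in for DatasetInfo objects.
datasets = [
    {"name": "litpop_aut", "properties": {"country_iso3alpha": "AUT", "res_arcsec": "150"}},
    {"name": "litpop_che", "properties": {"country_iso3alpha": "CHE", "res_arcsec": "150"}},
]
hits = [ds["name"] for ds in datasets
        if matches(ds["properties"], {"country_iso3alpha": ["AUT", "CHE"], "res_arcsec": "150"})]
```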

get_dataset_info(data_type=None, name=None, version=None, properties=None, status='active')[source]#

Find the one dataset that matches the given parameters.

Parameters:
  • data_type (str, optional) – data_type of the dataset, e.g., 'litpop' or 'drought'

  • name (str, optional) – the name of the dataset

  • version (str, optional) – the version of the dataset. Default: newest version meeting the requirements

  • properties (dict, optional) – search parameters for dataset properties. Any property has a string for key and can be a string or a list of strings for value. Default: None

  • status (str, optional) – valid values are 'preliminary', 'active', 'expired', 'test_dataset' and None. Default: 'active'

Return type:

DatasetInfo

Raises:
  • AmbiguousResult – when there is more than one dataset matching the search parameters

  • NoResult – when there is no dataset matching the search parameters

get_dataset_info_by_uuid(uuid)[source]#

Returns the data from ‘https://climada.ethz.ch/data-api/v1/dataset/{uuid}’ as DatasetInfo object.

Parameters:

uuid (str) – the universal unique identifier of the dataset

Return type:

DatasetInfo

Raises:

NoResult – if the uuid is not valid

list_data_type_infos(data_type_group=None)[source]#

Returns all data types from the climada data API belonging to a given data type group.

Parameters:

data_type_group (str, optional) – name of the data type group, by default None

Return type:

list of DataTypeInfo

get_data_type_info(data_type)[source]#

Returns the metadata of the data type with the given name from the climada data API.

Parameters:

data_type (str) – data type name

Return type:

DataTypeInfo

Raises:

NoResult – if there is no such data type registered

download_dataset(dataset, target_dir=PosixPath('/home/docs/climada/data'), organize_path=True)[source]#

Download all files from a given dataset to a given directory.

Parameters:
  • dataset (DatasetInfo) – the dataset

  • target_dir (Path, optional) – target directory for download, by default climada.util.constants.SYSTEM_DIR

  • organize_path (bool, optional) – if set to True, the files will end up in subdirectories of target_dir: [target_dir]/[data_type_group]/[data_type]/[name]/[version]. Default: True

Returns:

  • download_dir (Path) – the path to the directory containing the downloaded files, will be created if organize_path is True

  • downloaded_files (list of Path) – the downloaded files themselves

Raises:

Exception – when one of the files cannot be downloaded
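The organize_path layout described above can be sketched as a simple path construction (an illustrative helper following the documented scheme, not part of the API):

```python
from pathlib import Path

def organized_path(target_dir, data_type_group, data_type, name, version):
    """Build [target_dir]/[data_type_group]/[data_type]/[name]/[version],
    the layout used when organize_path is True."""
    return Path(target_dir) / data_type_group / data_type / name / version

# Hypothetical dataset attributes, for illustration only.
dl_dir = organized_path("/home/docs/climada/data",
                        "exposures", "litpop", "LitPop_150arcsec_AUT", "v2")
```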

static purge_cache_db(local_path)[source]#

Removes entry from the sqlite database that keeps track of files downloaded by cached_download. This may be necessary in case a previous attempt has failed in an uncontrolled way (power outage or the like).

Parameters:

local_path (Path) – target destination

get_hazard(hazard_type, name=None, version=None, properties=None, status='active', dump_dir=PosixPath('/home/docs/climada/data'))[source]#

Queries the data API for hazard datasets of the given type, downloads the associated HDF5 files, and turns them into a climada.hazard.Hazard object.

Parameters:
  • hazard_type (str) – Type of climada hazard.

  • name (str, optional) – the name of the dataset

  • version (str, optional) – the version of the dataset. Default: newest version meeting the requirements

  • properties (dict, optional) – search parameters for dataset properties. Any property has a string for key and can be a string or a list of strings for value. Default: None

  • status (str, optional) – valid values are 'preliminary', 'active', 'expired', 'test_dataset' and None. Default: 'active'

  • dump_dir (str, optional) – Directory where the files should be downloaded. Default: SYSTEM_DIR. If the directory is the SYSTEM_DIR (as configured in climada.conf, e.g. ~/climada/data), the eventual target directory is organized into dump_dir > hazard_type > dataset name > version

Returns:

The combined hazard object

Return type:

climada.hazard.Hazard

to_hazard(dataset, dump_dir=PosixPath('/home/docs/climada/data'))[source]#

Downloads the HDF5 files belonging to the given dataset, reads them into Hazards, and concatenates them into a single climada.Hazard object.

Parameters:
  • dataset (DatasetInfo) – Dataset to download and read into climada.Hazard object.

  • dump_dir (str, optional) – Directory where the files should be downloaded. Default: SYSTEM_DIR (as configured in climada.conf, e.g. ~/climada/data). If the directory is the SYSTEM_DIR, the eventual target directory is organized into dump_dir > hazard_type > dataset name > version

Returns:

The combined hazard object

Return type:

climada.hazard.Hazard

get_exposures(exposures_type, name=None, version=None, properties=None, status='active', dump_dir=PosixPath('/home/docs/climada/data'))[source]#

Queries the data API for exposures datasets of the given type, downloads the associated HDF5 files, and turns them into a climada.entity.exposures.Exposures object.

Parameters:
  • exposures_type (str) – Type of climada exposures.

  • name (str, optional) – the name of the dataset

  • version (str, optional) – the version of the dataset. Default: newest version meeting the requirements

  • properties (dict, optional) – search parameters for dataset properties. Any property has a string for key and can be a string or a list of strings for value. Default: None

  • status (str, optional) – valid values are 'preliminary', 'active', 'expired', 'test_dataset' and None. Default: 'active'

  • dump_dir (str, optional) – Directory where the files should be downloaded. Default: SYSTEM_DIR. If the directory is the SYSTEM_DIR, the eventual target directory is organized into dump_dir > exposures_type > dataset name > version

Returns:

The combined exposures object

Return type:

climada.entity.exposures.Exposures

to_exposures(dataset, dump_dir=PosixPath('/home/docs/climada/data'))[source]#

Downloads the HDF5 files belonging to the given dataset, reads them into Exposures, and concatenates them into a single climada.Exposures object.

Parameters:
  • dataset (DatasetInfo) – Dataset to download and read into climada.Exposures objects.

  • dump_dir (str, optional) – Directory where the files should be downloaded. Default: SYSTEM_DIR (as configured in climada.conf, e.g. ~/climada/data). If the directory is the SYSTEM_DIR, the eventual target directory is organized into dump_dir > exposures_type > dataset name > version

Returns:

The combined exposures object

Return type:

climada.entity.exposures.Exposures

get_litpop(country=None, exponents=(1, 1), version=None, dump_dir=PosixPath('/home/docs/climada/data'))[source]#

Get a LitPop Exposures instance on a 150 arcsec grid with the default parameters: exponents = (1,1) and fin_mode = 'pc'.

Parameters:
  • country (str, optional) – Country name or iso3 codes for which to create the LitPop object. For creating a LitPop object over multiple countries, use get_litpop individually and concatenate using LitPop.concat, see Examples. If country is None a global LitPop instance is created. Default is None.

  • exponents (tuple of two integers, optional) – Defining power with which lit (nightlights) and pop (gpw) go into LitPop. To get nightlights^3 without population count: (3, 0). To use population count alone: (0, 1). Default: (1, 1)

  • version (str, optional) – the version of the dataset. Default: newest version meeting the requirements

  • dump_dir (str) – directory where the files should be downloaded. Default: SYSTEM_DIR

Returns:

default litpop Exposures object

Return type:

climada.entity.exposures.Exposures

Examples

Combined default LitPop object for Austria and Switzerland:

>>> client = Client()
>>> litpop_aut = client.get_litpop("AUT")
>>> litpop_che = client.get_litpop("CHE")
>>> litpop_comb = LitPop.concat([litpop_aut, litpop_che])
get_centroids(res_arcsec_land=150, res_arcsec_ocean=1800, extent=(-180, 180, -60, 60), country=None, version=None, dump_dir=PosixPath('/home/docs/climada/data'))[source]#

Get centroids from the API

Parameters:
  • res_arcsec_land (int) – resolution for land centroids in arcsec. Default is 150

  • res_arcsec_ocean (int) – resolution for ocean centroids in arcsec. Default is 1800

  • country (str) – country name, numeric code or iso code based on pycountry. Default is None (global).

  • extent (tuple) – Format (min_lon, max_lon, min_lat, max_lat) tuple. If min_lon > max_lon, the extent crosses the antimeridian and is [min_lon, 180] + [-180, max_lon]. Borders are inclusive. Default is (-180, 180, -60, 60).

  • version (str, optional) – the version of the dataset. Default: newest version meeting the requirements

  • dump_dir (str) – directory where the files should be downloaded. Default: SYSTEM_DIR

Returns:

Centroids from the api

Return type:

climada.hazard.centroids.Centroids

static get_property_values(dataset_infos, known_property_values=None, exclude_properties=None)[source]#

Returns a dictionary of possible values for properties of a data type, optionally given known property values.

Parameters:
  • dataset_infos (list of DataSetInfo) – as returned by list_dataset_infos

  • known_property_values (dict, optional) – dict {'property1': 'value1', 'property2': 'value2'}, to provide only a subset of property values that can be combined with the given properties.

  • exclude_properties (list of str, optional) – properties in this list will be excluded from the resulting dictionary, e.g., because they are strictly metadata and don’t provide any information essential to the dataset. Default: ‘creation_date’, ‘climada_version’

Returns:

of possible property values

Return type:

dict
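A minimal sketch of the value collection described above, using plain dicts in place of DatasetInfo objects (illustrative, not the library's implementation):

```python
def property_values(dataset_infos, exclude_properties=("creation_date", "climada_version")):
    """Collect, per property key, the sorted set of values occurring in the
    given dataset infos, skipping excluded (metadata-only) properties."""
    values = {}
    for info in dataset_infos:
        for key, val in info["properties"].items():
            if key in exclude_properties or val is None:
                continue
            values.setdefault(key, set()).add(val)
    return {key: sorted(vals) for key, vals in values.items()}

# Hypothetical dataset infos, for illustration only.
infos = [
    {"properties": {"country_iso3alpha": "AUT", "res_arcsec": "150", "creation_date": "2021-01-01"}},
    {"properties": {"country_iso3alpha": "CHE", "res_arcsec": "150", "creation_date": "2021-02-01"}},
]
possible = property_values(infos)
```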

static into_datasets_df(dataset_infos)[source]#

Convenience function providing a DataFrame of datasets with properties.

Parameters:

dataset_infos (list of DatasetInfo) – as returned by list_dataset_infos

Returns:

of datasets with their properties, as found by the query arguments

Return type:

pandas.DataFrame

static into_files_df(dataset_infos)[source]#

Convenience function providing a DataFrame of files aligned with the input datasets.

Parameters:

dataset_infos (list of DatasetInfo) – as returned by list_dataset_infos

Returns:

of the files’ information, including dataset information

Return type:

pandas.DataFrame

purge_cache(target_dir=PosixPath('/home/docs/climada/data'), keep_testfiles=True)[source]#

Removes downloaded dataset files from the given directory if they have been downloaded with the API client, if they are beneath the given directory and if one of the following is the case:

  • their status is neither ‘active’ nor ‘test_dataset’

  • their status is ‘test_dataset’ and keep_testfiles is set to False

  • their status is ‘active’ and they are outdated, i.e., there is a dataset with the same data_type and name but a newer version

Parameters:
  • target_dir (Path or str, optional) – files downloaded beneath this directory and empty subdirectories will be removed. default: SYSTEM_DIR

  • keep_testfiles (bool, optional) – if set to True, files from datasets with status ‘test_dataset’ will not be removed. default: True

get_dataset_file(**kwargs)[source]#

Convenience method. Combines get_dataset_info and download_dataset. Returns the path to a single file if the dataset has only one, otherwise raises an error.

Parameters:

**kwargs – arguments for get_dataset_info and download_dataset

Return type:

Path

climada.util.checker module#

climada.util.checker.size(exp_len, var, var_name)[source]#

Check if the length of a variable is the expected one.

Raises:

ValueError

climada.util.checker.shape(exp_row, exp_col, var, var_name)[source]#

Check if the shape of a variable is the expected one.

Raises:

ValueError

climada.util.checker.array_optional(exp_len, var, var_name)[source]#

Check if array has right size. Warn if array empty. Call check_size.

Parameters:
  • exp_len (int) – expected array size

  • var (np.array) – numpy array to check

  • var_name (str) – name of the variable. Used in error/warning msg

Raises:

ValueError

climada.util.checker.array_default(exp_len, var, var_name, def_val)[source]#

Check array has right size. Set default value if empty. Call check_size.

Parameters:
  • exp_len (int) – expected array size

  • var (np.array) – numpy array to check

  • var_name (str) – name of the variable. Used in error/warning msg

  • def_val (np.array) – numpy array used as default value

Raises:

ValueError

Return type:

Filled array
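The documented check-and-default behavior can be sketched as follows (an illustrative reimplementation, not the library code):

```python
import numpy as np

def array_default(exp_len, var, var_name, def_val):
    """Return var if it has the expected length, substitute def_val if it is
    empty, and raise ValueError otherwise (sketch of the documented contract)."""
    if var.size == 0:
        return def_val
    if var.size != exp_len:
        raise ValueError(f"Invalid {var_name} size: expected {exp_len}, got {var.size}.")
    return var
```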

climada.util.config module#

climada.util.constants module#

climada.util.constants.SYSTEM_DIR = PosixPath('/home/docs/climada/data')#

Folder containing the data used internally

climada.util.constants.DEMO_DIR = PosixPath('/home/docs/climada/demo/data')#

Folder containing the data used for tutorials

climada.util.constants.ENT_DEMO_TODAY = PosixPath('/home/docs/climada/demo/data/demo_today.xlsx')#

Entity demo present in xlsx format.

climada.util.constants.ENT_DEMO_FUTURE = PosixPath('/home/docs/climada/demo/data/demo_future_TEST.xlsx')#

Entity demo future in xlsx format.

climada.util.constants.HAZ_DEMO_MAT = PosixPath('/home/docs/climada/demo/data/atl_prob_nonames.mat')#

hurricanes from 1851 to 2011 over Florida with 100 centroids.

Type:

Hazard demo from climada in MATLAB

climada.util.constants.HAZ_DEMO_FL = PosixPath('/home/docs/climada/demo/data/SC22000_VE__M1.grd.gz')#

Raster file of flood over Venezuela. Model from GAR2015

climada.util.constants.ENT_TEMPLATE_XLS = PosixPath('/home/docs/climada/data/entity_template.xlsx')#

Entity template in xls format.

climada.util.constants.HAZ_TEMPLATE_XLS = PosixPath('/home/docs/climada/data/hazard_template.xlsx')#

Hazard template in xls format.

climada.util.constants.ONE_LAT_KM = 111.12#

Mean length of one degree of latitude, in km

climada.util.constants.EARTH_RADIUS_KM = 6371#

Earth radius in km

climada.util.constants.GLB_CENTROIDS_MAT = PosixPath('/home/docs/climada/data/GLB_NatID_grid_0360as_adv_2.mat')#

Global centroids

climada.util.constants.GLB_CENTROIDS_NC = PosixPath('/home/docs/climada/data/NatID_grid_0150as.nc')#

For backwards compatibility, it remains available under its old name.

climada.util.constants.ISIMIP_GPWV3_NATID_150AS = PosixPath('/home/docs/climada/data/NatID_grid_0150as.nc')#

Compressed version of National Identifier Grid in 150 arc-seconds from ISIMIP project, based on GPWv3. Location in ISIMIP repository:

ISIMIP2a/InputData/landuse_humaninfluences/population/ID_GRID/Nat_id_grid_ISIMIP.nc

More references:

climada.util.constants.NATEARTH_CENTROIDS = {150: PosixPath('/home/docs/climada/data/NatEarth_Centroids_150as.hdf5'), 360: PosixPath('/home/docs/climada/data/NatEarth_Centroids_360as.hdf5')}#

Global centroids at 150 and 360 arc-seconds resolution, including region ids from Natural Earth. The 360 AS file includes distance to coast from NASA.

climada.util.constants.RIVER_FLOOD_REGIONS_CSV = PosixPath('/home/docs/climada/data/NatRegIDs.csv')#

Look-up table for river flood module

climada.util.constants.TC_ANDREW_FL = PosixPath('/home/docs/climada/demo/data/ibtracs_global_intp-None_1992230N11325.csv')#

Tropical cyclone Andrew in Florida

climada.util.constants.HAZ_DEMO_H5 = PosixPath('/home/docs/climada/demo/data/tc_fl_1990_2004.h5')#

IBTrACS from 1990 to 2004 over Florida with 2500 centroids.

Type:

Hazard demo in hdf5 format

climada.util.constants.EXP_DEMO_H5 = PosixPath('/home/docs/climada/demo/data/exp_demo_today.h5')#

Exposures over Florida

climada.util.constants.WS_DEMO_NC = [PosixPath('/home/docs/climada/demo/data/fp_lothar_crop-test.nc'), PosixPath('/home/docs/climada/demo/data/fp_xynthia_crop-test.nc')]#

Winter storm in Europe files. These test files have been generated using the netCDF kitchen sink:

>>> ncks -d latitude,50.5,54.0 -d longitude,3.0,7.5 ./file_in.nc ./file_out.nc
climada.util.constants.TEST_UNC_OUTPUT_IMPACT = 'test_unc_output_impact'#

Demo uncertainty impact output

climada.util.constants.TEST_UNC_OUTPUT_COSTBEN = 'test_unc_output_costben'#

Demo uncertainty costben output

climada.util.coordinates module#

climada.util.coordinates.NE_EPSG = 4326#

Natural Earth CRS EPSG

climada.util.coordinates.NE_CRS = 'epsg:4326'#

Natural Earth CRS

climada.util.coordinates.DEM_NODATA = -9999#

Value to use for no data values in DEM, i.e., sea points

climada.util.coordinates.MAX_DEM_TILES_DOWN = 300#

Maximum DEM tiles to download

climada.util.coordinates.NEAREST_NEIGHBOR_THRESHOLD = 100#

Distance threshold in km for coordinate assignment. Nearest neighbors with greater distances are not considered.

climada.util.coordinates.latlon_to_geosph_vector(lat, lon, rad=False, basis=False)[source]#

Convert lat/lon coordinates to radial vectors (on geosphere)

Parameters:
  • lat, lon (ndarrays of floats, same shape) – Latitudes and longitudes of points.

  • rad (bool, optional) – If True, latitude and longitude are not given in degrees but in radians.

  • basis (bool, optional) – If True, also return an orthonormal basis of the tangent space at the given points in lat-lon coordinate system. Default: False.

Returns:

  • vn (ndarray of floats, shape (…, 3)) – Same shape as lat/lon input with additional axis for components.

  • vbasis (ndarray of floats, shape (…, 2, 3)) – Only present, if basis is True. Same shape as lat/lon input with additional axes for components of the two basis vectors.

climada.util.coordinates.lon_normalize(lon, center=0.0)[source]#

Normalizes degrees such that always -180 < lon - center <= 180

The input data is modified in place!

Parameters:
  • lon (np.array) – Longitudinal coordinates

  • center (float, optional) – Central longitude value to use instead of 0. If None, the central longitude is determined automatically.

Returns:

lon – Normalized longitudinal coordinates. Since the input lon is modified in place (!), the returned array is the same Python object (instead of a copy).

Return type:

np.array
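A sketch of the documented normalization, assuming the usual shift by multiples of 360 degrees (illustrative, not the library's actual code):

```python
import numpy as np

def lon_normalize(lon, center=0.0):
    """Shift each value by a multiple of 360 degrees so that
    -180 < lon - center <= 180 holds; modifies the array in place."""
    lon -= 360.0 * np.ceil((lon - center - 180.0) / 360.0)
    return lon
```

Because the subtraction is in-place, the returned array is the same object as the input, matching the documented behavior.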

climada.util.coordinates.lon_bounds(lon, buffer=0.0)[source]#

Bounds of a set of degree values, respecting the periodicity in longitude

The longitudinal upper bound may be 180 or larger to make sure that the upper bound is always larger than the lower bound. The lower longitudinal bound will never lie below -180 and it will only assume the value -180 if the specified buffering enforces it.

Note that, as a consequence of this, the returned bounds do not satisfy the inequality lon_min <= lon <= lon_max in general!

Usually, an application of this function is followed by a renormalization of longitudinal values around the longitudinal middle value:

>>> bounds = lon_bounds(lon)
>>> lon_mid = 0.5 * (bounds[0] + bounds[2])
>>> lon = lon_normalize(lon, center=lon_mid)
>>> np.all((bounds[0] <= lon) & (lon <= bounds[2]))

If the bounds cover a full circle (360 degrees), this function will always return (-180, 180), instead of (0, 360) or similar equivalent outputs.

Example

>>> lon_bounds(np.array([-179, 175, 178]))
(175, 181)
>>> lon_bounds(np.array([-179, 175, 178]), buffer=1)
(174, 182)
Parameters:
  • lon (np.array) – Longitudinal coordinates

  • buffer (float, optional) – Buffer to add to both sides of the bounding box. Default: 0.0.

Returns:

bounds – Bounding box of the given points.

Return type:

tuple (lon_min, lon_max)

climada.util.coordinates.latlon_bounds(lat, lon, buffer=0.0)[source]#

Bounds of a set of degree values, respecting the periodicity in longitude

See lon_bounds for more information about the handling of longitudinal values crossing the antimeridian.

Example

>>> latlon_bounds(np.array([0, -2, 5]), np.array([-179, 175, 178]))
(175, -2, 181, 5)
>>> latlon_bounds(np.array([0, -2, 5]), np.array([-179, 175, 178]), buffer=1)
(174, -3, 182, 6)
Parameters:
  • lat (np.array) – Latitudinal coordinates

  • lon (np.array) – Longitudinal coordinates

  • buffer (float, optional) – Buffer to add to all sides of the bounding box. Default: 0.0.

Returns:

bounds – Bounding box of the given points.

Return type:

tuple (lon_min, lat_min, lon_max, lat_max)

climada.util.coordinates.toggle_extent_bounds(bounds_or_extent)[source]#

Convert between the “bounds” and the “extent” description of a bounding box

The difference between the two conventions is in the order in which the bounds for each coordinate direction are given. To convert from one description to the other, the two central entries of the 4-tuple are swapped. Hence, the conversion is symmetric.

Parameters:

bounds_or_extent (tuple (a, b, c, d)) – Bounding box of the given points in “bounds” (or “extent”) convention.

Returns:

extent_or_bounds – Bounding box of the given points in “extent” (or “bounds”) convention.

Return type:

tuple (a, c, b, d)
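The swap of the two central entries can be sketched in a few lines (illustrative):

```python
def toggle_extent_bounds(bounds_or_extent):
    """Swap the two central entries of the 4-tuple, converting between the
    "bounds" (a, b, c, d) and "extent" (a, c, b, d) conventions."""
    a, b, c, d = bounds_or_extent
    return (a, c, b, d)
```

Since the same swap converts in either direction, applying the function twice returns the original tuple.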

climada.util.coordinates.dist_approx(lat1, lon1, lat2, lon2, log=False, normalize=True, method='equirect', units='km')[source]#

Compute approximation of geodistance in specified units

Several batches of points can be processed at once for improved performance. The distances of all (lat1, lon1)-points within a batch to all (lat2, lon2)-points within the same batch are computed, according to the formula:

result[k, i, j] = dist((lat1[k, i], lon1[k, i]), (lat2[k, j], lon2[k, j]))

Hence, each of lat1, lon1, lat2, lon2 is expected to be a 2-dimensional array and the resulting array will always be 3-dimensional.

Parameters:
  • lat1, lon1 (ndarrays of floats, shape (nbatch, nx)) – Latitudes and longitudes of first points.

  • lat2, lon2 (ndarrays of floats, shape (nbatch, ny)) – Latitudes and longitudes of second points.

  • log (bool, optional) – If True, return the tangential vectors at the first points pointing to the second points (Riemannian logarithm). Default: False.

  • normalize (bool, optional) – If False, assume that all longitudinal values lie within a single interval of size 360 (e.g., between -180 and 180, or between 0 and 360) and such that the shortest path between any two points does not cross the antimeridian according to that parametrization. If True, a suitable interval is determined using lon_bounds() and the longitudinal values are reparametrized accordingly using lon_normalize(). Note that this option has no effect when using the “geosphere” method because it is independent from the parametrization. Default: True

  • method (str, optional) – Specify an approximation method to use:

    • “equirect”: Distance according to sinusoidal projection. Fast, but inaccurate for large distances and high latitudes.

    • “geosphere”: Exact spherical distance. Much more accurate at all distances, but slow.

    Note that ellipsoidal distances would be even more accurate, but are currently not implemented. Default: “equirect”.

  • units (str, optional) – Specify a unit for the distance. One of:

    • “km”: distance in km.

    • “m”: distance in m.

    • “degree”: angular distance in decimal degrees.

    • “radian”: angular distance in radians.

    Default: “km”.

Returns:

  • dists (ndarray of floats, shape (nbatch, nx, ny)) – Approximate distances in specified units.

  • vtan (ndarray of floats, shape (nbatch, nx, ny, 2)) – If log is True, tangential vectors at first points in local lat-lon coordinate system.
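For a single pair of points, the "equirect" approximation can be sketched as follows. The library's function is batched and may differ in detail; this is an illustration of the projection idea only:

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def equirect_km(lat1, lon1, lat2, lon2):
    """Approximate distance in km between two lat/lon points (degrees):
    scale the longitudinal difference by the cosine of the mean latitude,
    then apply Pythagoras on the sphere's surface."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    dlon = (lon2 - lon1) * np.cos(0.5 * (lat1 + lat2))
    dlat = lat2 - lat1
    return EARTH_RADIUS_KM * np.sqrt(dlat**2 + dlon**2)
```

As the documentation notes, this is fast but loses accuracy for large distances and high latitudes, where the spherical ("geosphere") method is preferable.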

climada.util.coordinates.compute_geodesic_lengths(gdf)[source]#

Calculate the great circle (geodesic / spherical) lengths along any (complicated) line geometry object, based on the pyproj.Geod implementation.

Parameters:

gdf (gpd.GeoDataFrame with geometrical shapes of which to compute the length)

Returns:

series – a pandas series (column) with the great circle lengths of the objects in metres.

Return type:

pandas.Series

See also

dist_approx()

distance between individual lat/lon-points

Note

This implementation relies on non-projected (i.e. geographic coordinate systems that span the entire globe) CRS only, which results in sea-level distances and hence a certain (minor) level of distortion; cf. https://gis.stackexchange.com/questions/176442/what-is-the-real-distance-between-positions

climada.util.coordinates.get_gridcellarea(lat, resolution=0.5, unit='ha')[source]#

The area covered by a grid cell is calculated depending on the latitude

  • 1 degree = ONE_LAT_KM (111.12km at the equator)

  • longitudinal distance in km = ONE_LAT_KM*resolution*cos(lat)

  • latitudinal distance in km = ONE_LAT_KM*resolution

  • area = longitudinal distance * latitudinal distance

Parameters:
  • lat (np.array) – Latitude of the respective grid cell

  • resolution (int, optional) – raster resolution in degree (default: 0.5 degree)

  • unit (string, optional) – unit of the output area (default: ha, alternatives: m2, km2)
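The bullet-point formula can be sketched directly (illustrative; unit handling beyond 'ha' and 'km2' is omitted):

```python
import numpy as np

ONE_LAT_KM = 111.12  # km per degree of latitude

def gridcellarea(lat, resolution=0.5, unit="ha"):
    """Area covered by a grid cell at latitude lat (degrees), following the
    formula above: lon distance shrinks with cos(lat), lat distance is fixed."""
    lat_dist = ONE_LAT_KM * resolution
    lon_dist = ONE_LAT_KM * resolution * np.cos(np.radians(lat))
    area_km2 = lon_dist * lat_dist
    return area_km2 * 100.0 if unit == "ha" else area_km2  # 1 km2 = 100 ha
```

At 60 degrees latitude the cell area is half of the equatorial value, since cos(60°) = 0.5.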

climada.util.coordinates.grid_is_regular(coord)[source]#

Return True if grid is regular. If True, returns height and width.

Parameters:

coord (np.array) – Each row is a lat-lon-pair.

Returns:

  • regular (bool) – Whether the grid is regular. Only in this case, the following width and height are reliable.

  • height (int) – Height of the supposed grid.

  • width (int) – Width of the supposed grid.
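One way to sketch the regularity check, under the assumption that a regular grid is the full cross product of its unique latitudes and longitudes (the library may use different criteria):

```python
import numpy as np

def grid_is_regular(coord):
    """coord: array where each row is a lat-lon pair. Returns (regular,
    height, width); height and width are only reliable if regular is True."""
    lats, lons = np.unique(coord[:, 0]), np.unique(coord[:, 1])
    height, width = lats.size, lons.size
    regular = height * width == coord.shape[0]
    return regular, height, width
```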

climada.util.coordinates.convert_wgs_to_utm(lon, lat)[source]#

Get EPSG code of UTM projection for input point in EPSG 4326

Parameters:
  • lon (float) – longitude point in EPSG 4326

  • lat (float) – latitude of point (lat, lon) in EPSG 4326

Returns:

epsg_code – EPSG code of UTM projection.

Return type:

int
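The standard derivation of a UTM EPSG code from a WGS84 point can be sketched as follows (illustrative; the library may handle edge cases such as the Norway/Svalbard zone exceptions differently):

```python
def wgs_to_utm_epsg(lon, lat):
    """UTM zones are 6 degrees wide, numbered 1..60 from lon -180; EPSG codes
    start at 32600 for the northern hemisphere, 32700 for the southern."""
    zone = int((lon + 180.0) // 6.0) + 1
    return (32600 if lat >= 0 else 32700) + zone
```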

climada.util.coordinates.dist_to_coast(coord_lat, lon=None, highres=False, signed=False)[source]#

Read interpolated (signed) distance to coast (in m) from NASA data

Note: The NASA raster file is 300 MB and will be downloaded on first run!

Parameters:
  • coord_lat (GeoDataFrame or np.ndarray or float) – One of the following:

    • GeoDataFrame with geometry column in epsg:4326

    • np.array with two columns, first for latitude of each point and second with longitude in epsg:4326

    • np.array with one dimension containing latitudes in epsg:4326

    • float with a latitude value in epsg:4326

  • lon (np.ndarray or float, optional) – If given, one of the following:

    • np.array with one dimension containing longitudes in epsg:4326

    • float with a longitude value in epsg:4326

  • highres (bool, optional) – Use full resolution of NASA data (much slower). Default: False.

  • signed (bool) – If True, distance is signed with positive values off shore and negative values on land. Default: False

Returns:

dist – (Signed) distance to coast in meters.

Return type:

np.array

climada.util.coordinates.dist_to_coast_nasa(lat, lon, highres=False, signed=False)[source]#

Read interpolated (signed) distance to coast (in m) from NASA data

Note: The NASA raster file is 300 MB and will be downloaded on first run!

Parameters:
  • lat (np.array) – latitudes in epsg:4326

  • lon (np.array) – longitudes in epsg:4326

  • highres (bool, optional) – Use full resolution of NASA data (much slower). Default: False.

  • signed (bool) – If True, distance is signed with positive values off shore and negative values on land. Default: False

Returns:

dist – (Signed) distance to coast in meters.

Return type:

np.array

climada.util.coordinates.get_land_geometry(country_names=None, extent=None, resolution=10)[source]#

Get union of the specified (or all) countries or the points inside the extent.

Parameters:
  • country_names (list, optional) – list with ISO3 names of countries, e.g [‘ZWE’, ‘GBR’, ‘VNM’, ‘UZB’]

  • extent (tuple, optional) – (min_lon, max_lon, min_lat, max_lat)

  • resolution (float, optional) – 10, 50 or 110. Resolution in m. Default: 10m, i.e. 1:10.000.000

Returns:

geom – Polygonal shape of union.

Return type:

shapely.geometry.multipolygon.MultiPolygon

climada.util.coordinates.coord_on_land(lat, lon, land_geom=None)[source]#

Check if points are on land.

Parameters:
  • lat (np.array) – latitude of points in epsg:4326

  • lon (np.array) – longitude of points in epsg:4326

  • land_geom (shapely.geometry.multipolygon.MultiPolygon, optional) – If given, use these as profiles of land. Otherwise, the global landmass is used.

Returns:

on_land – Entries are True if corresponding coordinate is on land and False otherwise.

Return type:

np.array(bool)

climada.util.coordinates.nat_earth_resolution(resolution)[source]#

Check if resolution is available in Natural Earth. Build string.

Parameters:

resolution (int) – resolution in millions, 110 == 1:110.000.000.

Returns:

res_name – Natural Earth name of resolution (e.g. ‘110m’)

Return type:

str

Raises:

ValueError

climada.util.coordinates.get_country_geometries(country_names=None, extent=None, resolution=10, center_crs=True)[source]#

Natural Earth country boundaries within given extent

If no arguments are given, simply returns the whole natural earth dataset.

Take heed: we assume WGS84 as the CRS unless the Natural Earth download utility from cartopy starts including the projection information. (They are saving a whopping 147 bytes by omitting it.) Same goes for UTF.

If extent is provided and center_crs is True, longitude values in ‘geom’ will all lie within ‘extent’ longitude range. Therefore setting extent to e.g. [160, 200, -20, 20] will provide longitude values between 160 and 200 degrees.

Parameters:
  • country_names (list, optional) – list with ISO 3166 alpha-3 codes of countries, e.g [‘ZWE’, ‘GBR’, ‘VNM’, ‘UZB’]

  • extent (tuple, optional) – (min_lon, max_lon, min_lat, max_lat) Extent, assumed to be in the same CRS as the natural earth data.

  • resolution (float, optional) – 10, 50 or 110. Resolution in m. Default: 10m

  • center_crs (bool) – if True, the crs of the countries is centered such that longitude values in ‘geom’ will all lie within ‘extent’ longitude range. Default is True.

Returns:

geom – Natural Earth multipolygons of the specified countries, resp. the countries that lie within the specified extent.

Return type:

GeoDataFrame

climada.util.coordinates.get_region_gridpoints(countries=None, regions=None, resolution=150, iso=True, rect=False, basemap='natearth')[source]#

Get coordinates of gridpoints in specified countries or regions

Parameters:
  • countries (list, optional) – ISO 3166-1 alpha-3 codes of countries, or internal numeric NatID if iso is set to False.

  • regions (list, optional) – Region IDs.

  • resolution (float, optional) – Resolution in arc-seconds, either 150 (default) or 360.

  • iso (bool, optional) – If True, assume that countries are given by their ISO 3166-1 alpha-3 codes (instead of the internal NatID). Default: True.

  • rect (bool, optional) – If True, a rectangular box around the specified countries/regions is selected. Default: False.

  • basemap (str, optional) – Choose between different data sources. Currently available: “isimip” and “natearth”. Default: “natearth”.

Returns:

  • lat (np.array) – Latitude of points in epsg:4326.

  • lon (np.array) – Longitude of points in epsg:4326.

climada.util.coordinates.assign_grid_points(*args, **kwargs)[source]#

This function has been renamed, use match_grid_points instead.

climada.util.coordinates.match_grid_points(x, y, grid_width, grid_height, grid_transform)[source]#

To each coordinate in x and y, assign the closest centroid in the given raster grid

Make sure that your grid specification is relative to the same coordinate reference system as the x and y coordinates. In case of lon/lat coordinates, make sure that the longitudinal values are within the same longitudinal range (such as [-180, 180]).

If your grid is given by bounds instead of a transform, the functions rasterio.transform.from_bounds and pts_to_raster_meta might be helpful.

Parameters:
  • x, y (np.array) – x- and y-coordinates of points to assign coordinates to.

  • grid_width (int) – Width (number of columns) of the grid.

  • grid_height (int) – Height (number of rows) of the grid.

  • grid_transform (affine.Affine) – Affine transformation defining the grid raster.

Returns:

assigned_idx – Index into the flattened grid. Note that the value -1 is used to indicate that no matching coordinate has been found, even though -1 is a valid index in NumPy!

Return type:

np.array of size equal to the size of x and y
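A minimal sketch of the matching for a north-up grid, with the affine transform given as the plain coefficient tuple (xres, 0, xmin, 0, yres, ymax) with yres < 0; the real function accepts an affine.Affine and is not limited to this orientation. The helper name is hypothetical:

```python
import numpy as np

def match_grid_points_northup(x, y, grid_width, grid_height, transform):
    """Assign each (x, y) point to the closest cell of a north-up grid."""
    xres, _, xmin, _, yres, ymax = transform
    cols = np.floor((np.asarray(x, dtype=float) - xmin) / xres).astype(int)
    rows = np.floor((np.asarray(y, dtype=float) - ymax) / yres).astype(int)
    idx = rows * grid_width + cols  # index into the flattened grid
    outside = (cols < 0) | (cols >= grid_width) | (rows < 0) | (rows >= grid_height)
    idx[outside] = -1               # -1 flags "no matching cell", as documented above
    return idx

# 4 x 3 grid with 1 degree cells, upper left corner at (0, 3):
idx = match_grid_points_northup(
    np.array([0.5, 3.5, -1.0]), np.array([2.5, 0.5, 0.5]),
    4, 3, (1.0, 0.0, 0.0, 0.0, -1.0, 3.0),
)
```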

climada.util.coordinates.assign_coordinates(*args, **kwargs)[source]#

This function has been renamed, use match_coordinates instead.

climada.util.coordinates.match_coordinates(coords, coords_to_assign, distance='euclidean', threshold=100, **kwargs)[source]#

To each coordinate in coords, assign a matching coordinate in coords_to_assign

If there is no exact match for some entry, an attempt is made to assign the geographically nearest neighbor. If the distance to the nearest neighbor exceeds threshold, the index -1 is assigned.

Currently, the nearest neighbor matching works with lat/lon coordinates only. However, you can disable nearest neighbor matching by setting threshold to 0, in which case only exactly matching coordinates are assigned to each other.

Make sure that all coordinates are according to the same coordinate reference system. In case of lat/lon coordinates, the “haversine” distance is able to correctly compute the distance across the antimeridian. However, when exact matches are enforced with threshold=0, lat/lon coordinates need to be given in the same longitudinal range (such as (-180, 180)).

Parameters:
  • coords (np.array with two columns) – Each row is a geographical coordinate pair. The result’s size will match this array’s number of rows.

  • coords_to_assign (np.array with two columns) – Each row is a geographical coordinate pair. The result will be an index into the rows of this array. Make sure that these coordinates use the same coordinate reference system as coords.

  • distance (str, optional) – Distance to use for non-exact matching. Possible values are “euclidean”, “haversine” and “approx”. Default: “euclidean”

  • threshold (float, optional) – If the distance to the nearest neighbor exceeds threshold, the index -1 is assigned. Set threshold to 0 to disable nearest neighbor matching. Default: 100 (km)

  • kwargs (dict, optional) – Keyword arguments to be passed on to nearest-neighbor finding functions in case of non-exact matching with the specified distance.

Returns:

assigned_idx – Index into coords_to_assign. Note that the value -1 is used to indicate that no matching coordinate has been found, even though -1 is a valid index in NumPy!

Return type:

np.array of size equal to the number of rows in coords

Notes

By default, the ‘euclidean’ distance metric is used to find the nearest neighbors in case of non-exact matching. This method is fast for (quasi-)gridded data, but introduces inaccuracy, since distances in lat/lon coordinates are not equal to distances in meters on the Earth’s surface, in particular for higher latitudes and for distances larger than 100 km. If more accuracy is needed, please use the ‘haversine’ distance metric. This, however, is slower for (quasi-)gridded data.
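The nearest-neighbor fallback with a distance cutoff can be sketched as a brute-force haversine search. This is a hypothetical helper, not the library function: the library uses faster tree-based lookups, while the O(n*m) matrix below is only meant to make the -1 convention concrete:

```python
import numpy as np

def match_coords_haversine(coords, coords_to_assign, threshold_km=100.0):
    """Assign each coords row the nearest coords_to_assign row, or -1."""
    radius_km = 6371.0
    lat1, lon1 = np.radians(coords[:, [0]]), np.radians(coords[:, [1]])  # (n, 1)
    lat2 = np.radians(coords_to_assign[None, :, 0])                      # (1, m)
    lon2 = np.radians(coords_to_assign[None, :, 1])
    hav = (np.sin((lat2 - lat1) / 2) ** 2
           + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    dist = 2 * radius_km * np.arcsin(np.sqrt(hav))  # (n, m) great-circle distances, km
    nearest = dist.argmin(axis=1)
    within = dist[np.arange(coords.shape[0]), nearest] <= threshold_km
    return np.where(within, nearest, -1)            # -1: no neighbor within cutoff
```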

climada.util.coordinates.match_centroids(coord_gdf, centroids, distance='euclidean', threshold=100)[source]#

Assign to each coordinate point in coord_gdf the coordinate of its closest centroid. If the distance to the nearest centroid exceeds threshold, -1 is assigned. If the centroids are given as a raster and a coordinate point lies outside of it, -1 is assigned as well.

Parameters:
  • coord_gdf (gpd.GeoDataFrame) – GeoDataframe with defined latitude/longitude column and crs

  • centroids (Centroids) – (Hazard) centroids to match (as raster or vector centroids).

  • distance (str, optional) – Distance to use in case of vector centroids. Possible values are “euclidean”, “haversine” and “approx”. Default: “euclidean”

  • threshold (float, optional) – If the distance (in km) to the nearest neighbor exceeds threshold, the index -1 is assigned. Set threshold to 0, to disable nearest neighbor matching. Default: NEAREST_NEIGHBOR_THRESHOLD (100km)

See also

climada.util.coordinates.match_grid_points

method to associate centroids to coordinate points when centroids is a raster

climada.util.coordinates.match_coordinates

method to associate centroids to coordinate points

Notes

The default order of use is:

1. if a centroid raster is defined, assign to each gdf coordinate point the closest raster point.

2. if no raster is defined, assign centroids to the nearest neighbor using the euclidean metric.

Both cases can introduce inaccuracies for lat/lon coordinates, as distances in degrees differ from distances in meters on the Earth’s surface, in particular for higher latitudes and for distances larger than 100 km. If more accuracy is needed, please use the ‘haversine’ distance metric. This, however, is slower for (quasi-)gridded data and works only for non-gridded data.

climada.util.coordinates.region2isos(regions)[source]#

Convert region names to ISO 3166 alpha-3 codes of countries

Parameters:

regions (str or list of str) – Region name(s).

Returns:

isos – Sorted list of iso codes of all countries in specified region(s).

Return type:

list of str

climada.util.coordinates.country_to_iso(countries, representation='alpha3', fillvalue=None)[source]#

Determine ISO 3166 representation of countries

Example

>>> country_to_iso(840)
'USA'
>>> country_to_iso("United States", representation="alpha2")
'US'
>>> country_to_iso(["United States of America", "SU"], "numeric")
[840, 810]

Some geopolitical areas that are not covered by ISO 3166 are added in the “user-assigned” range of ISO 3166-compliant values:

>>> country_to_iso(["XK", "Dhekelia"], "numeric")  # XK for Kosovo
[983, 907]
Parameters:
  • countries (one of str, int, list of str, list of int) – Country identifiers: name, official name, alpha-2, alpha-3 or numeric ISO codes. Numeric representations may be specified as str or int.

  • representation (str (one of “alpha3”, “alpha2”, “numeric”, “name”), optional) – All countries are converted to this representation according to ISO 3166. Default: “alpha3”.

  • fillvalue (str or int or None, optional) – The value to assign if a country is not recognized by the given identifier. By default, a LookupError is raised. Default: None

Returns:

iso_list – ISO 3166 representation of countries. Will only return a list if the input is a list. Numeric representations are returned as integers.

Return type:

one of str, int, list of str, list of int

climada.util.coordinates.country_iso_alpha2numeric(iso_alpha)[source]#

Deprecated: Use country_to_iso with representation=”numeric” instead

climada.util.coordinates.country_natid2iso(natids, representation='alpha3')[source]#

Convert internal NatIDs to ISO 3166-1 alpha-3 codes

Parameters:
  • natids (int or list of int) – NatIDs of countries (or single ID) as used in ISIMIP’s version of the GPWv3 national identifier grid.

  • representation (str, one of “alpha3”, “alpha2” or “numeric”) – All countries are converted to this representation according to ISO 3166. Default: “alpha3”.

Returns:

iso_list – ISO 3166 representation of countries. Will only return a list if the input is a list. Numeric representations are returned as integers.

Return type:

one of str, int, list of str, list of int

climada.util.coordinates.country_iso2natid(isos)[source]#

Convert ISO 3166-1 alpha-3 codes to internal NatIDs

Parameters:

isos (str or list of str) – ISO codes of countries (or single code).

Returns:

natids – Will only return a list if the input is a list.

Return type:

int or list of int

climada.util.coordinates.natearth_country_to_int(country)[source]#

Integer representation (ISO 3166, if possible) of Natural Earth GeoPandas country row

Parameters:

country (GeoSeries) – Row from Natural Earth GeoDataFrame.

Returns:

iso_numeric – Integer representation of given country.

Return type:

int

climada.util.coordinates.get_country_code(lat, lon, gridded=False)[source]#

Provide numeric (ISO 3166) code for every point.

Oceans get the value zero. Areas that are not in ISO 3166 are given values in the range above 900 according to NATEARTH_AREA_NONISO_NUMERIC.

Parameters:
  • lat (np.array) – latitude of points in epsg:4326

  • lon (np.array) – longitude of points in epsg:4326

  • gridded (bool) – If True, interpolate precomputed gridded data which is usually much faster. Default: False.

Returns:

country_codes – Numeric code for each point.

Return type:

np.array(int)

climada.util.coordinates.get_admin1_info(country_names)[source]#

Provide Natural Earth registry info and shape files for admin1 regions

Parameters:

country_names (list or str) – string or list with strings, either ISO code or names of countries, e.g.: ['ZWE', 'GBR', 'VNM', 'UZB', 'Kenya', '051'] For example, for Armenia, all of the following inputs work: 'Armenia', 'ARM', 'AM', '051', 51

Returns:

  • admin1_info (dict) – Data according to records in Natural Earth database.

  • admin1_shapes (dict) – Shape according to Natural Earth.

climada.util.coordinates.get_admin1_geometries(countries)[source]#

Return the geometries, names and codes of the admin 1 regions in the given countries in a GeoDataFrame. If no admin 1 regions are defined, all regions in the countries are returned.

Parameters:

countries (list or str or int) – string or list with strings, either ISO code or names of countries, e.g.: ['ZWE', 'GBR', 'VNM', 'UZB', 'Kenya', '051'] For example, for Armenia, all of the following inputs work: 'Armenia', 'ARM', 'AM', '051', 51

Returns:

gdf – geopandas.GeoDataFrame instance with columns:

"admin1_name" (str)

name of admin 1 region

"iso_3166_2" (str)

iso code of admin 1 region

"geometry" (Polygon or MultiPolygon)

shape of admin 1 region as shapely geometry object

"iso_3n" (str)

numerical iso 3 code of country (admin 0)

"iso_3a" (str)

alphabetical iso 3 code of country (admin 0)

Return type:

GeoDataFrame

climada.util.coordinates.get_resolution_1d(coords, min_resol=1e-08)[source]#

Compute resolution of scalar grid

Parameters:
  • coords (np.array) – scalar coordinates

  • min_resol (float, optional) – minimum resolution to consider. Default: 1.0e-8.

Returns:

res – Resolution of given grid.

Return type:

float

climada.util.coordinates.get_resolution(*coords, min_resol=1e-08)[source]#

Compute resolution of n-d grid points

Parameters:
  • X, Y, … (np.array) – Scalar coordinates in each axis

  • min_resol (float, optional) – minimum resolution to consider. Default: 1.0e-8.

Returns:

resolution – Resolution in each coordinate direction.

Return type:

pair of floats
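One way such a resolution estimate can be sketched: take the smallest spacing between distinct coordinate values, ignoring differences below min_resol (resolution_1d is a hypothetical name; the real function may use a different estimator):

```python
import numpy as np

def resolution_1d(coords, min_resol=1e-8):
    """Smallest spacing between distinct coordinate values."""
    diffs = np.diff(np.unique(np.asarray(coords, dtype=float)))  # unique() also sorts
    diffs = diffs[diffs > min_resol]                             # drop numerical noise
    return float(diffs.min())

# Duplicates and ordering do not matter:
res = resolution_1d([1.0, 0.0, 2.5, 0.5, 1.0])
```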

climada.util.coordinates.pts_to_raster_meta(points_bounds, res)[source]#

Transform vector data coordinates to raster.

If a raster of the given resolution doesn’t exactly fit the given bounds, the raster might have slightly larger (but never smaller) bounds.

Parameters:
  • points_bounds (tuple) – points total bounds (xmin, ymin, xmax, ymax)

  • res (tuple) – resolution of output raster (xres, yres)

Returns:

  • nrows (int) – Number of rows.

  • ncols (int) – Number of columns.

  • ras_trans (affine.Affine) – Affine transformation defining the raster.
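The sizing logic can be sketched as follows, assuming the bounds describe the outer edges of the raster (a simplification of what the real function does): rounding up to whole cells yields the "slightly larger, never smaller" bounds described above.

```python
import math

def pts_to_raster_shape(points_bounds, res):
    """Hypothetical sketch of the raster sizing from bounds and resolution."""
    xmin, ymin, xmax, ymax = points_bounds
    xres, yres = res
    ncols = max(1, int(math.ceil((xmax - xmin) / abs(xres))))  # round extent up
    nrows = max(1, int(math.ceil((ymax - ymin) / abs(yres))))
    return nrows, ncols
```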

climada.util.coordinates.raster_to_meshgrid(transform, width, height)[source]#

Get coordinates of grid points in raster

Parameters:
  • transform (affine.Affine) – Affine transform defining the raster.

  • width (int) – Number of points in first coordinate axis.

  • height (int) – Number of points in second coordinate axis.

Returns:

  • x (np.array of shape (height, width)) – x-coordinates of grid points.

  • y (np.array of shape (height, width)) – y-coordinates of grid points.
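A sketch using plain affine coefficients (a, b, c, d, e, f), where x = a*col + b*row + c and y = d*col + e*row + f, evaluated at cell centers (col + 0.5, row + 0.5); the real function takes an affine.Affine object, and whether it evaluates at centers or corners should be checked against its output:

```python
import numpy as np

def raster_to_meshgrid(transform, width, height):
    """Coordinates of cell centers for an affine coefficient tuple."""
    a, b, c, d, e, f = transform
    cols, rows = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    return a * cols + b * rows + c, d * cols + e * rows + f

# 3 x 2 north-up grid with 1 degree cells, upper left corner at (0, 2):
x, y = raster_to_meshgrid((1.0, 0.0, 0.0, 0.0, -1.0, 2.0), 3, 2)
```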

climada.util.coordinates.to_crs_user_input(crs_obj)[source]#

Return a crs string or dictionary from an hdf5 file object.

Bytes are decoded to str. If the string starts with a ‘{’, it is assumed to be a string dumped from a dictionary, and ast is used to parse it.

Parameters:

crs_obj (int, dict or str or bytes) – the crs object to be converted user input

Returns:

A CRS representation to eventually be used as argument of rasterio.crs.CRS.from_user_input and pyproj.crs.CRS.from_user_input

Return type:

str or dict

Raises:

ValueError – if crs_obj is of an unsupported type

climada.util.coordinates.equal_crs(crs_one, crs_two)[source]#

Compare two crs

Parameters:
  • crs_one (dict, str or int) – user crs

  • crs_two (dict, str or int) – user crs

Returns:

equal – Whether the two specified CRS are equal according to rasterio.crs.CRS.from_user_input

Return type:

bool

climada.util.coordinates.read_raster(file_name, band=None, src_crs=None, window=None, geometry=None, dst_crs=None, transform=None, width=None, height=None, resampling='nearest')[source]#

Read bands of a raster file, setting masked values to 0.

Parameters:
  • file_name (str) – name of the file

  • band (list(int), optional) – band numbers to read. Default: [1]

  • window (rasterio.windows.Window, optional) – window to read

  • geometry (list of shapely.geometry, optional) – consider pixels only within these shapes

  • dst_crs (crs, optional) – reproject to given crs

  • transform (rasterio.Affine) – affine transformation to apply

  • width (int) – number of lons for transform

  • height (int) – number of lats for transform

  • resampling (int or str, optional) – Resampling method to use, encoded as an integer value (see rasterio.enums.Resampling). String values like “nearest” or “bilinear” are resolved to attributes of rasterio.enums.Resampling. Default: “nearest”

Returns:

  • meta (dict) – Raster meta (height, width, transform, crs).

  • data (np.array) – Each row corresponds to one band (raster points are flattened, can be reshaped to height x width).

climada.util.coordinates.read_raster_bounds(path, bounds, res=None, bands=None, resampling='nearest', global_origin=None, pad_cells=1.0)[source]#

Read raster file within given bounds at given resolution

By default, not only the grid cells of the destination raster whose cell centers fall within the specified bounds are selected, but one additional row/column of grid cells is added as a padding in each direction (pad_cells=1). This makes sure that the extent of the selected cell centers encloses the specified bounds.

The axis orientations (e.g. north to south, west to east) of the input data set are preserved.

Parameters:
  • path (str) – Path to raster file to open with rasterio.

  • bounds (tuple) – (xmin, ymin, xmax, ymax)

  • res (float or pair of floats, optional) – Resolution of output. Note that the orientation (sign) of these is overwritten by the input data set’s axis orientations (e.g. north to south, west to east). Default: Resolution of input raster file.

  • bands (list of int, optional) – Bands to read from the input raster file. Default: [1]

  • resampling (int or str, optional) – Resampling method to use, encoded as an integer value (see rasterio.enums.Resampling). String values like “nearest” or “bilinear” are resolved to attributes of rasterio.enums.Resampling. Default: “nearest”

  • global_origin (pair of floats, optional) – If given, align the output raster to a global reference raster with this origin. By default, the data set’s origin (according to its transform) is used.

  • pad_cells (float, optional) – The number of cells to add as a padding (in terms of the destination grid that is inferred from res and/or global_origin if those parameters are given). This defaults to 1 to make sure that applying methods like bilinear interpolation to the output of this function is well-defined everywhere within the specified bounds. Default: 1.0

Returns:

  • data (3d np.array) – First dimension is for the selected raster bands. Second dimension is y (lat) and third dimension is x (lon).

  • transform (rasterio.Affine) – Affine transformation defining the output raster data.

climada.util.coordinates.read_raster_sample(path, lat, lon, intermediate_res=None, method='linear', fill_value=None)[source]#

Read point samples from raster file.

Parameters:
  • path (str) – path of the raster file

  • lat (np.array of shape (npoints,)) – latitudes in file’s CRS

  • lon (np.array of shape (npoints,)) – longitudes in file’s CRS

  • intermediate_res (float or pair of floats, optional) – If given, the raster is not read in its original resolution but in the given one. This can increase performance for files of very high resolution.

  • method (str or pair of str, optional) – The interpolation method, passed to scipy.interpolate.interpn. Default: ‘linear’

  • fill_value (numeric, optional) – The value used outside of the raster bounds. Default: The raster’s nodata value or 0.

Returns:

values – Interpolated raster values for each given coordinate point.

Return type:

np.array of shape (npoints,)

climada.util.coordinates.read_raster_sample_with_gradients(path, lat, lon, intermediate_res=None, method=('linear', 'nearest'), fill_value=None)[source]#

Read point samples with computed gradients from raster file.

For convenience, and because this is the most common use case, the step sizes in the gradient computation are converted to meters if the raster’s CRS is EPSG:4326 (lat/lon).

For example, in case of an elevation data set, not only the heights, but also the slopes of the terrain in x- and y-direction are returned. In addition, if the CRS of the elevation data set is EPSG:4326 (lat/lon) and elevations are given in m, then distances are converted from degrees to meters, so that the unit of the returned slopes is “meters (height) per meter (distance)”.

Parameters:
  • path (str) – path of the raster file

  • lat (np.array of shape (npoints,)) – latitudes in file’s CRS

  • lon (np.array of shape (npoints,)) – longitudes in file’s CRS

  • intermediate_res (float or pair of floats, optional) – If given, the raster is not read in its original resolution but in the given one. This can increase performance for files of very high resolution.

  • method (str or pair of str, optional) – The interpolation methods for the data and its gradient, passed to scipy.interpolate.interpn. If a single string is given, the same interpolation method is used for both the data and its gradient. Default: (‘linear’, ‘nearest’)

  • fill_value (numeric, optional) – The value used outside of the raster bounds. Default: The raster’s nodata value or 0.

Returns:

  • values (np.array of shape (npoints,)) – Interpolated raster values for each given coordinate point.

  • gradient (np.array of shape (npoints, 2)) – The raster gradient at each of the given coordinate points. The first/second value in each row is the derivative in lat/lon direction (lat is first!).

climada.util.coordinates.interp_raster_data(data, interp_y, interp_x, transform, method='linear', fill_value=0)[source]#

Interpolate raster data, given as array and affine transform

Parameters:
  • data (np.array) – Array containing the values. The first two dimensions are always interpreted as corresponding to the y- and x-coordinates of the grid. Additional dimensions can be present in case of multi-band data.

  • interp_y (np.array) – y-coordinates of points (corresp. to first axis of data)

  • interp_x (np.array) – x-coordinates of points (corresp. to second axis of data)

  • transform (affine.Affine) – affine transform defining the raster

  • method (str, optional) – The interpolation method, passed to scipy.interpolate.interpn. Default: ‘linear’.

  • fill_value (numeric, optional) – The value used outside of the raster bounds. Default: 0.

Returns:

values – Interpolated raster values for each given coordinate point. If multi-band data is provided, the additional dimensions from data will also be present in this array.

Return type:

np.array

climada.util.coordinates.refine_raster_data(data, transform, res, method='linear', fill_value=0)[source]#

Refine raster data, given as array and affine transform

Parameters:
  • data (np.array) – 2d array containing the values

  • transform (affine.Affine) – affine transform defining the raster

  • res (float or pair of floats) – new resolution

  • method (str, optional) – The interpolation method, passed to scipy.interpolate.interpn. Default: ‘linear’.

Returns:

  • new_data (np.array) – 2d array containing the interpolated values.

  • new_transform (affine.Affine) – Affine transform defining the refined raster.

climada.util.coordinates.read_vector(file_name, field_name, dst_crs=None)[source]#

Read vector file format supported by fiona.

Parameters:
  • file_name (str) – vector file with format supported by fiona and ‘geometry’ field.

  • field_name (list(str)) – list of names of the columns with values.

  • dst_crs (crs, optional) – reproject to given crs

Returns:

  • lat (np.array) – Latitudinal coordinates.

  • lon (np.array) – Longitudinal coordinates.

  • geometry (GeoSeries) – Shape geometries.

  • value (np.array) – Values associated to each shape.

climada.util.coordinates.write_raster(file_name, data_matrix, meta, dtype=<class 'numpy.float32'>)[source]#

Write raster in GeoTiff format.

Parameters:
  • file_name (str) – File name to write.

  • data_matrix (np.array) – 2d raster data. Either containing one band, or every row is a band and the column represents the grid in 1d.

  • meta (dict) – rasterio meta dictionary containing raster properties: width, height, crs and transform must be present at least. Include compress=”deflate” for compressed output.

  • dtype (numpy dtype, optional) – A numpy dtype. Default: np.float32

climada.util.coordinates.points_to_raster(points_df, val_names=None, res=0.0, raster_res=0.0, crs='EPSG:4326', scheduler=None)[source]#

Compute raster (as data and transform) from GeoDataFrame.

Parameters:
  • points_df (GeoDataFrame) – contains columns latitude, longitude and those listed in the parameter val_names.

  • val_names (list of str, optional) – The names of columns in points_df containing values. The raster will contain one band per column. Default: [‘value’]

  • res (float, optional) – resolution of current data in units of latitude and longitude, approximated if not provided.

  • raster_res (float, optional) – desired resolution of the raster

  • crs (object (anything accepted by pyproj.CRS.from_user_input), optional) – If given, overwrites the CRS information given in points_df. If no CRS is explicitly given and there is no CRS information in points_df, the CRS is assumed to be EPSG:4326 (lat/lon). Default: None

  • scheduler (str) – used for dask map_partitions. “threads”, “synchronous” or “processes”

Returns:

  • data (np.array) – 3d array containing the raster values. The first dimension has the same size as val_names and represents the raster bands.

  • meta (dict) – Dictionary with ‘crs’, ‘height’, ‘width’ and ‘transform’ attributes.

climada.util.coordinates.subraster_from_bounds(transform, bounds)[source]#

Compute a subraster definition from a given reference transform and bounds.

The axis orientations (sign of resolution step sizes) in transform are not required to be north to south and west to east. The given orientation is preserved in the result.

Parameters:
  • transform (rasterio.Affine) – Affine transformation defining the reference grid.

  • bounds (tuple of floats (xmin, ymin, xmax, ymax)) – Bounds of the subraster in units and CRS of the reference grid.

Returns:

  • dst_transform (rasterio.Affine) – Subraster affine transformation. The axis orientations of the input transform (e.g. north to south, west to east) are preserved.

  • dst_shape (tuple of ints (height, width)) – Number of pixels of subraster in vertical and horizontal direction.
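For a north-up reference grid (xres > 0, yres < 0), the snapping can be sketched with floor/ceil in units of the reference resolution; the real function also preserves other axis orientations:

```python
import math

def subraster_from_bounds(transform, bounds):
    """Snap the requested bounds outward to the reference grid lines."""
    xres, _, xmin_ref, _, yres, ymax_ref = transform
    xmin, ymin, xmax, ymax = bounds
    col0 = math.floor((xmin - xmin_ref) / xres)
    col1 = math.ceil((xmax - xmin_ref) / xres)
    row0 = math.floor((ymax - ymax_ref) / yres)  # yres < 0: rows run southwards
    row1 = math.ceil((ymin - ymax_ref) / yres)
    dst_transform = (xres, 0.0, xmin_ref + col0 * xres,
                     0.0, yres, ymax_ref + row0 * yres)
    return dst_transform, (row1 - row0, col1 - col0)  # transform, (height, width)

# 1 degree global reference grid with origin (-180, 90):
dst_transform, dst_shape = subraster_from_bounds(
    (1.0, 0.0, -180.0, 0.0, -1.0, 90.0), (-10.5, -5.5, 10.5, 5.5)
)
```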

climada.util.coordinates.align_raster_data(source, src_crs, src_transform, dst_crs=None, dst_resolution=None, dst_bounds=None, global_origin=(-180, 90), resampling='nearest', conserve=None, **kwargs)[source]#

Reproject 2D np.ndarray to be aligned to a reference grid.

This function ensures that reprojected data with the same dst_resolution and global_origins are aligned to the same global grid, i.e., no offset between destination grid points for different source grids that are projected to the same target resolution.

Note that the origin is required to be in the upper left corner. The result is always oriented left to right (west to east) and top to bottom (north to south).

Parameters:
  • source (np.ndarray) – The source is a 2D ndarray containing the values to be reprojected.

  • src_crs (CRS or dict) – Source coordinate reference system, in rasterio dict format.

  • src_transform (rasterio.Affine) – Source affine transformation.

  • dst_crs (CRS, optional) – Target coordinate reference system, in rasterio dict format. Default: src_crs

  • dst_resolution (tuple (x_resolution, y_resolution) or float, optional) – Target resolution (positive pixel sizes) in units of the target CRS. Default: (abs(src_transform[0]), abs(src_transform[4]))

  • dst_bounds (tuple of floats (xmin, ymin, xmax, ymax), optional) – Bounds of the target raster in units of the target CRS. By default, the source’s bounds are reprojected to the target CRS.

  • global_origin (tuple (west, north) of floats, optional) – Coordinates of the reference grid’s upper left corner. Default: (-180, 90). Make sure to change global_origin for non-geographical CRS!

  • resampling (int or str, optional) – Resampling method to use, encoded as an integer value (see rasterio.enums.Resampling). String values like “nearest” or “bilinear” are resolved to attributes of rasterio.enums.Resampling. Default: “nearest”

  • conserve (str, optional) – If provided, conserve the source array’s ‘mean’ or ‘sum’ in the transformed data or normalize the values of the transformed data ndarray (‘norm’). WARNING: Please note that this procedure will not apply any weighting of values according to the geographical cell sizes, which will introduce serious biases for lat/lon grids in case of areas spanning large latitudinal ranges. Default: None (no conservation)

  • kwargs (dict, optional) – Additional arguments passed to rasterio.warp.reproject.

Raises:

ValueError

Returns:

  • destination (np.ndarray with same dtype as source) – The transformed 2D ndarray.

  • dst_transform (rasterio.Affine) – Destination affine transformation.

climada.util.coordinates.mask_raster_with_geometry(raster, transform, shapes, nodata=None, **kwargs)[source]#

Change values in raster that are outside of given shapes to nodata.

This function is a wrapper for rasterio.mask.mask to allow for in-memory processing. This is done by first writing data to memfile and then reading from it before the function call to rasterio.mask.mask(). The MemoryFile will be discarded after exiting the with statement.

Parameters:
  • raster (numpy.ndarray) – raster to be masked with dim: [H, W].

  • transform (affine.Affine) – the transform of the raster.

  • shapes (GeoJSON-like dict, or an object that implements the Python geo interface protocol, such as a Shapely Polygon) – Passed to rasterio.mask.mask

  • nodata (int or float, optional) – Passed to rasterio.mask.mask: Data points outside shapes are set to nodata.

  • kwargs (optional) – Passed to rasterio.mask.mask.

Returns:

masked – raster with dim: [H, W] and points outside shapes set to nodata

Return type:

numpy.ndarray or numpy.ma.MaskedArray
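The effect of the masking can be illustrated without rasterio: cells outside the shapes are replaced by nodata. A minimal numpy-only sketch, using a hypothetical boolean inside-mask in place of real rasterized geometries:

```python
import numpy as np

def mask_with_bool(raster, inside, nodata=0.0):
    """Set raster cells outside the boolean `inside` mask to `nodata`.

    Mimics the effect of rasterio.mask.mask once the shapes have been
    rasterized to a boolean array.
    """
    masked = raster.copy()
    masked[~inside] = nodata
    return masked

raster = np.arange(9, dtype=float).reshape(3, 3)
inside = np.zeros((3, 3), dtype=bool)
inside[1, 1] = True  # only the centre cell lies inside the shapes
print(mask_with_bool(raster, inside, nodata=-1.0))
```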

climada.util.coordinates.set_df_geometry_points(df_val, scheduler=None, crs=None)[source]#

Set the geometry of the dataframe from its latitude and longitude columns, using dask if a scheduler is given.

Parameters:
  • df_val (GeoDataFrame) – contains latitude and longitude columns

  • scheduler (str, optional) – used for dask map_partitions. “threads”, “synchronous” or “processes”

  • crs (object (anything readable by pyproj4.CRS.from_user_input), optional) – Coordinate Reference System, if omitted or None: df_val.geometry.crs

climada.util.coordinates.fao_code_def()[source]#

Generates list of FAO country codes and corresponding ISO numeric-3 codes.

Returns:

  • iso_list (list) – list of ISO numeric-3 codes

  • faocode_list (list) – list of FAO country codes

climada.util.coordinates.country_faocode2iso(input_fao)[source]#

Convert FAO country code to ISO numeric-3 codes.

Parameters:

input_fao (int or array) – FAO country codes of countries (or single code)

Returns:

output_iso – ISO numeric-3 codes of countries (or single code)

Return type:

int or array

climada.util.coordinates.country_iso2faocode(input_iso)[source]#

Convert ISO numeric-3 codes to FAO country code.

Parameters:

input_iso (iterable of int) – ISO numeric-3 code(s) of country/countries

Returns:

output_faocode – FAO country code(s) of country/countries

Return type:

numpy.array
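Both conversion functions are essentially table lookups. A minimal sketch with a placeholder two-entry mapping (the real functions read the full FAO/ISO correspondence table; the pairs below are not real entries):

```python
import numpy as np

# Placeholder excerpt of the FAO-code -> ISO numeric-3 table (hypothetical pairs).
FAO_TO_ISO = {1: 4, 2: 8}

def faocode2iso(input_fao):
    """Vectorized FAO -> ISO lookup; unknown codes map to 0 here."""
    codes = np.atleast_1d(input_fao)
    return np.array([FAO_TO_ISO.get(int(c), 0) for c in codes])

print(faocode2iso([1, 2, 99]))  # unknown code 99 maps to 0
```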

climada.util.dates_times module#

climada.util.dates_times.date_to_str(date)[source]#

Compute date string in ISO format from input datetime ordinal int.

Parameters:

date (int or list or np.array) – input datetime ordinal

Return type:

str or list(str)

climada.util.dates_times.str_to_date(date)[source]#

Compute datetime ordinal int from input date string in ISO format.

Parameters:

date (str or list) – date string in ISO format, e.g. ‘2018-04-06’

Return type:

int
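Both conversions map onto the standard library's proleptic Gregorian ordinals, which is what the two functions above operate on. A minimal stdlib sketch:

```python
from datetime import date

def date_to_str(ordinal):
    """ISO date string from a datetime ordinal int."""
    return date.fromordinal(int(ordinal)).isoformat()

def str_to_date(iso_str):
    """Datetime ordinal int from a date string in ISO format."""
    return date.fromisoformat(iso_str).toordinal()

ordinal = str_to_date("2018-04-06")
assert date_to_str(ordinal) == "2018-04-06"  # round trip
```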

climada.util.dates_times.datetime64_to_ordinal(datetime)[source]#

Converts from a numpy datetime64 object to an ordinal date. See https://stackoverflow.com/a/21916253 for the horrible details.

Parameters:

datetime (np.datetime64, or list or np.array) – date and time

Return type:

int

climada.util.dates_times.last_year(ordinal_vector)[source]#

Extract last year from ordinal date

Parameters:

ordinal_vector (list or np.array) – input datetime ordinal

Return type:

int

climada.util.dates_times.first_year(ordinal_vector)[source]#

Extract first year from ordinal date

Parameters:

ordinal_vector (list or np.array) – input datetime ordinal

Return type:

int
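What first_year and last_year compute can be sketched with the stdlib, taking the year of the smallest and largest ordinal respectively:

```python
from datetime import date

def first_year(ordinal_vector):
    """Year of the earliest ordinal in the vector."""
    return date.fromordinal(int(min(ordinal_vector))).year

def last_year(ordinal_vector):
    """Year of the latest ordinal in the vector."""
    return date.fromordinal(int(max(ordinal_vector))).year

ordinals = [date(2001, 5, 1).toordinal(), date(2015, 2, 3).toordinal()]
print(first_year(ordinals), last_year(ordinals))  # 2001 2015
```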

climada.util.dwd_icon_loader module#

climada.util.dwd_icon_loader.download_icon_grib(run_datetime, model_name='icon-eu-eps', parameter_name='vmax_10m', max_lead_time=None, download_dir=None)[source]#

Download the gribfiles of a weather forecast run for a certain weather parameter from opendata.dwd.de/weather/nwp/.

Parameters:
  • run_datetime (datetime) – The starting timepoint of the forecast run

  • model_name (str) – the name of the forecast model written as it appears in the folder structure in opendata.dwd.de/weather/nwp/ or ‘test’

  • parameter_name (str) – the name of the meteorological parameter written as it appears in the folder structure in opendata.dwd.de/weather/nwp/

  • max_lead_time (int) – number of hours for which files should be downloaded, will default to maximum available data

  • download_dir (str or Path) – directory where the downloaded files should be saved

Returns:

file_names – a list of filenames that link to all just downloaded or available files from the forecast run, defined by the input parameters

Return type:

list

climada.util.dwd_icon_loader.delete_icon_grib(run_datetime, model_name='icon-eu-eps', parameter_name='vmax_10m', max_lead_time=None, download_dir=None)[source]#

Delete the downloaded gribfiles of a weather forecast run for a certain weather parameter from opendata.dwd.de/weather/nwp/.

Parameters:
  • run_datetime (datetime) – The starting timepoint of the forecast run

  • model_name (str) – the name of the forecast model written as it appears in the folder structure in opendata.dwd.de/weather/nwp/

  • parameter_name (str) – the name of the meteorological parameter written as it appears in the folder structure in opendata.dwd.de/weather/nwp/

  • max_lead_time (int) – number of hours for which files should be deleted, will default to maximum available data

  • download_dir (str or Path) – directory where the downloaded files are stored at the moment

climada.util.dwd_icon_loader.download_icon_centroids_file(model_name='icon-eu-eps', download_dir=None)[source]#

Create centroids based on netcdf files provided by DWD; links can be found here: https://www.dwd.de/DE/leistungen/opendata/neuigkeiten/opendata_dez2018_02.html https://www.dwd.de/DE/leistungen/opendata/neuigkeiten/opendata_aug2020_01.html

Parameters:
  • model_name (str) – the name of the forecast model written as it appears in the folder structure in opendata.dwd.de/weather/nwp/

  • download_dir (str or Path) – directory where the downloaded files should be saved in

Returns:

file_name – absolute path and filename of the downloaded and decompressed netcdf file

Return type:

str

climada.util.earth_engine module#

climada.util.files_handler module#

climada.util.files_handler.to_list(num_exp, values, val_name)[source]#

Check size and transform to list if necessary. If size is one, build a list with num_exp repeated values.

Parameters:
  • num_exp (int) – expected number of list elements

  • values (object or list(object)) – values to check and transform

  • val_name (str) – name of the variable values

Return type:

list
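The broadcasting behaviour described above can be sketched as follows (the exact error handling is an assumption; CLIMADA may report the mismatch differently):

```python
def to_list(num_exp, values, val_name):
    """Return `values` as a list of length `num_exp`."""
    if not isinstance(values, list):
        return [values] * num_exp        # single value: repeat
    if len(values) == 1:
        return values * num_exp          # singleton list: repeat
    if len(values) != num_exp:
        raise ValueError(f"Wrong number of {val_name}: {len(values)} != {num_exp}")
    return values

print(to_list(3, 7, "x"))       # [7, 7, 7]
print(to_list(2, [1, 2], "x"))  # [1, 2]
```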

climada.util.files_handler.get_file_names(file_name)[source]#

Return list of files contained. Supports globbing.

Parameters:

file_name (str or list(str)) – Either a single string or a list of strings that are either - a file path - or the path of the folder containing the files - or a globbing pattern.

Return type:

list(str)
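The globbing case can be sketched with pathlib (the real function additionally accepts lists, plain file paths, and folder paths):

```python
from pathlib import Path
import tempfile

def get_file_names(pattern):
    """Expand a globbing pattern into a sorted list of file paths."""
    pattern = Path(pattern)
    return sorted(str(p) for p in pattern.parent.glob(pattern.name))

with tempfile.TemporaryDirectory() as tmp:
    for name in ("a.tif", "b.tif", "notes.txt"):
        (Path(tmp) / name).touch()
    print(get_file_names(f"{tmp}/*.tif"))  # the two .tif files only
```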

climada.util.finance module#

climada.util.finance.net_present_value(years, disc_rates, val_years)[source]#

Compute net present value.

Parameters:
  • years (np.array) – array with the sequence of years to consider.

  • disc_rates (np.array) – discount rate for every year in years.

  • val_years (np.array) – cash flow at each year.

Return type:

float
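The computation can be sketched with one standard NPV convention, where the first year is undiscounted and each later year is divided by the compounded rates. The exact discounting convention of CLIMADA's implementation may differ slightly; this is an illustration of the idea:

```python
def net_present_value(years, disc_rates, val_years):
    """Discount yearly cash flows back to the first year.

    `years` is unused in this sketch; it only fixes the sequence length
    in the real API.
    """
    factor = 1.0
    npv = 0.0
    for rate, val in zip(disc_rates, val_years):
        npv += factor * val
        factor /= 1.0 + rate  # compound the discount for the next year
    return npv

print(net_present_value([2020, 2021], [0.05, 0.05], [0.0, 105.0]))  # ~100.0
```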

climada.util.finance.income_group(cntry_iso, ref_year, shp_file=None)[source]#

Get country’s income group from World Bank’s data at a given year, or the closest year with data. If no data is available, use the Natural Earth approximation.

Parameters:
  • cntry_iso (str) – key = ISO alpha_3 country

  • ref_year (int) – reference year

  • shp_file (cartopy.io.shapereader.Reader, optional) – shape file with INCOME_GRP attribute for every country. Load Natural Earth admin0 if not provided.

climada.util.finance.gdp(cntry_iso, ref_year, shp_file=None, per_capita=False)[source]#

Get country’s (current value) GDP from World Bank’s data at a given year, or the closest year with data. If no data is available, use the Natural Earth approximation.

Parameters:
  • cntry_iso (str) – key = ISO alpha_3 country

  • ref_year (int) – reference year

  • shp_file (cartopy.io.shapereader.Reader, optional) – shape file with INCOME_GRP attribute for every country. Load Natural Earth admin0 if not provided.

  • per_capita (boolean, optional) – If True, GDP is returned per capita

Return type:

float

climada.util.hdf5_handler module#

climada.util.hdf5_handler.read(file_name, with_refs=False)[source]#

Load a hdf5 data structure from a file.

Parameters:
  • file_name – file to load

  • with_refs – enable loading of the references. Default is unset, since it increases the execution time considerably.

Returns:

dictionary structure containing all the variables.

Return type:

contents

Examples

>>> # Contents contains the Matlab data in a dictionary.
>>> contents = read("/pathto/dummy.mat")
>>> # Contents contains the Matlab data and its reference in a dictionary.
>>> contents = read("/pathto/dummy.mat", True)
Raises:

Exception while reading

climada.util.hdf5_handler.get_string(array)[source]#

Form string from input array of unsigned integers.

Parameters:

array – array of integers

Return type:

string
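Matlab-style HDF5 files store strings as arrays of unsigned integer code points, so decoding amounts to a chr-join. A minimal sketch:

```python
import numpy as np

def get_string(array):
    """Build a Python string from an array of unsigned integer code points."""
    return "".join(chr(int(c)) for c in np.asarray(array).ravel())

codes = np.array([72, 105], dtype=np.uint16)
print(get_string(codes))  # Hi
```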

climada.util.hdf5_handler.get_str_from_ref(file_name, var)[source]#

Form string from a reference HDF5 variable of the given file.

Parameters:
  • file_name – matlab file name

  • var – HDF5 reference variable

Return type:

string

climada.util.hdf5_handler.get_list_str_from_ref(file_name, var)[source]#

Form list of strings from a reference HDF5 variable of the given file.

Parameters:
  • file_name – matlab file name

  • var – array of HDF5 reference variable

Return type:

list of str

climada.util.hdf5_handler.get_sparse_csr_mat(mat_dict, shape)[source]#

Form sparse matrix from input hdf5 sparse matrix data type.

Parameters:
  • mat_dict – dictionary containing the sparse matrix information.

  • shape – tuple describing output matrix shape.

Return type:

sparse csr matrix

climada.util.lines_polys_handler module#

class climada.util.lines_polys_handler.DisaggMethod(value)[source]#

Bases: Enum

Disaggregation Method for the … function

DIV : the geometry’s value is distributed in equal parts over all its interpolated points
FIX : the geometry’s value is replicated over all its interpolated points

DIV = 'div'#
FIX = 'fix'#
class climada.util.lines_polys_handler.AggMethod(value)[source]#

Bases: Enum

Aggregation Method for the aggregate_impact_mat function

SUM : the impact is summed over all points in the polygon/line

SUM = 'sum'#
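The two disaggregation modes can be illustrated directly. This is a sketch of the behaviour, not the internal CLIMADA code:

```python
from enum import Enum
import numpy as np

class DisaggMethod(Enum):
    DIV = "div"
    FIX = "fix"

def disaggregate(value, n_points, method):
    """Spread a geometry's value onto its n interpolated points."""
    if method is DisaggMethod.DIV:
        return np.full(n_points, value / n_points)  # split evenly
    return np.full(n_points, value)                 # replicate

print(disaggregate(100.0, 4, DisaggMethod.DIV))  # four points of 25.0
print(disaggregate(100.0, 4, DisaggMethod.FIX))  # four points of 100.0
```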
climada.util.lines_polys_handler.calc_geom_impact(exp, impf_set, haz, res, to_meters=False, disagg_met=DisaggMethod.DIV, disagg_val=None, agg_met=AggMethod.SUM)[source]#

Compute impact for exposure with (multi-)polygons and/or (multi-)lines. Lat/Lon values in exp.gdf are ignored, only exp.gdf.geometry is considered.

The geometries are first disaggregated to points. Polygons: grid with resolution res*res. Lines: points along the line separated by distance res. The impact per point is then re-aggregated for each geometry.

Parameters:
  • exp (Exposures) – The exposure instance with exp.gdf.geometry containing (multi-)polygons and/or (multi-)lines

  • impf_set (ImpactFuncSet) – The set of impact functions.

  • haz (Hazard) – The hazard instance.

  • res (float) – Resolution of the disaggregation grid (polygon) or line (lines).

  • to_meters (bool, optional) – If True, res is interpreted as meters, and geometries are projected to an equal area projection for disaggregation. The exposures are then projected back to the original projections before impact calculation. The default is False.

  • disagg_met (DisaggMethod) – Disaggregation method of the shape’s original value onto its interpolated points. ‘DIV’: Divide the value evenly over all the new points; ‘FIX’: Replicate the value onto all the new points. Default is ‘DIV’. Works in combination with the kwarg ‘disagg_val’.

  • disagg_val (float, optional) – Specifies what number should be taken as the value, which is to be disaggregated according to the method provided in disagg_met. None: The shape’s value is taken from the exp.gdf.value column. float: This given number will be disaggregated according to the method. In case exp.gdf.value column exists, original values in there will be ignored. The default is None.

  • agg_met (AggMethod) – Aggregation method of the point impacts into impact for respective parent-geometry. If ‘SUM’, the impact is summed over all points in each geometry. The default is ‘SUM’.

Returns:

Impact object with the impact per geometry (rows of exp.gdf). Contains two additional attributes ‘geom_exp’ and ‘coord_exp’, the first one being the original line or polygon geometries for which the impact was computed.

Return type:

Impact

See also

exp_geom_to_pnt

disaggregate exposures

climada.util.lines_polys_handler.impact_pnt_agg(impact_pnt, exp_pnt_gdf, agg_met)[source]#

Aggregate the impact per geometry.

The output Impact object contains an extra attribute ‘geom_exp’ containing the geometries.

Parameters:
  • impact_pnt (Impact) – Impact object with impact per exposure point (lines of exp_pnt)

  • exp_pnt_gdf (gpd.GeoDataFrame) – GeoDataFrame of an exposures instance featuring a multi-index: the first level indicates membership of the original geometries, the second level the disaggregated points. Such an exposure is obtained, for instance, with the disaggregation method exp_geom_to_pnt().

  • agg_met (AggMethod) – Aggregation method of the point impacts into impact for respective parent-geometry. If ‘SUM’, the impact is summed over all points in each geometry. The default is ‘SUM’.

Returns:

impact_agg – Impact object with the impact per original geometry. Original geometry additionally stored in attribute ‘geom_exp’; coord_exp contains only representative points (lat/lon) of those geometries.

Return type:

Impact

See also

exp_geom_to_pnt

exposures disaggregation method

climada.util.lines_polys_handler.calc_grid_impact(exp, impf_set, haz, grid, disagg_met=DisaggMethod.DIV, disagg_val=None, agg_met=AggMethod.SUM)[source]#

Compute impact for exposure with (multi-)polygons and/or (multi-)lines. Lat/Lon values in exp.gdf are ignored, only exp.gdf.geometry is considered.

The geometries are first disaggregated to points on the given grid. The impact per point is then re-aggregated for each geometry.

Parameters:
  • exp (Exposures) – The exposure instance with exp.gdf.geometry containing (multi-)polygons and/or (multi-)lines

  • impf_set (ImpactFuncSet) – The set of impact functions.

  • haz (Hazard) – The hazard instance.

  • grid (np.array()) – Grid on which to disaggregate the exposures. Provided as two vectors [x_grid, y_grid].

  • disagg_met (DisaggMethod) – Disaggregation method of the shape’s original value onto its interpolated points. ‘DIV’: Divide the value evenly over all the new points; ‘FIX’: Replicate the value onto all the new points. Default is ‘DIV’. Works in combination with the kwarg ‘disagg_val’.

  • disagg_val (float, optional) – Specifies what number should be taken as the value, which is to be disaggregated according to the method provided in disagg_met. None: The shape’s value is taken from the exp.gdf.value column. float: This given number will be disaggregated according to the method. In case exp.gdf.value column exists, original values in there will be ignored. The default is None.

  • agg_met (AggMethod) – Aggregation method of the point impacts into impact for respective parent-geometry. If ‘SUM’, the impact is summed over all points in each geometry. The default is ‘SUM’.

Returns:

Impact object with the impact per geometry (rows of exp.gdf). Contains two additional attributes ‘geom_exp’ and ‘coord_exp’, the first one being the original line or polygon geometries for which the impact was computed.

Return type:

Impact

See also

exp_geom_to_pnt

disaggregate exposures

climada.util.lines_polys_handler.plot_eai_exp_geom(imp_geom, centered=False, figsize=(9, 13), **kwargs)[source]#

Plot the average impact per exposure polygon.

Parameters:
  • imp_geom (Impact) – Impact instance with imp_geom set (i.e. computed from exposures with polygons)

  • centered (bool, optional) – Center the plot. The default is False.

  • figsize ((float, float), optional) – Figure size. The default is (9, 13).

  • **kwargs (dict) – Keyword arguments for GeoDataFrame.plot()

Returns:

matplotlib axes instance

Return type:

ax

climada.util.lines_polys_handler.exp_geom_to_pnt(exp, res, to_meters, disagg_met, disagg_val)[source]#

Disaggregate exposures with (multi-)polygons and/or (multi-)lines geometries to points based on a given resolution.

Parameters:
  • exp (Exposures) – The exposure instance with exp.gdf.geometry containing lines or polygons

  • res (float) – Resolution of the disaggregation grid / distance. Can also be a tuple of [x_grid, y_grid] numpy arrays. In this case, to_meters is ignored. This is only possible for Polygon-only exposures.

  • to_meters (bool) – If True, res is interpreted as meters, and geometries are projected to an equal area projection for disaggregation. The exposures are then projected back to the original projections before impact calculation. The default is False.

  • disagg_met (DisaggMethod) – Disaggregation method of the shape’s original value onto its interpolated points. ‘DIV’: Divide the value evenly over all the new points; ‘FIX’: Replicate the value onto all the new points. Default is ‘DIV’. Works in combination with the kwarg ‘disagg_val’.

  • disagg_val (float, optional) – Specifies what number should be taken as the value, which is to be disaggregated according to the method provided in disagg_met. None: The shape’s value is taken from the exp.gdf.value column. float: This given number will be disaggregated according to the method. In case exp.gdf.value column exists, original values in there will be ignored. The default is None.

Returns:

exp_pnt – Exposures with a double index geodataframe, first level indicating membership of the original geometries of exp, second for the point disaggregation within each geometry.

Return type:

Exposures

climada.util.lines_polys_handler.exp_geom_to_grid(exp, grid, disagg_met, disagg_val)[source]#

Disaggregate exposures with (multi-)polygon geometries to points based on a pre-defined grid.

Parameters:
  • exp (Exposures) – The exposure instance with exp.gdf.geometry containing polygons

  • grid (np.array()) – Grid on which to disaggregate the exposures. Provided as two vectors [x_grid, y_grid].

  • disagg_met (DisaggMethod) – Disaggregation method of the shape’s original value onto its interpolated points. ‘DIV’: Divide the value evenly over all the new points; ‘FIX’: Replicate the value onto all the new points. Default is ‘DIV’. Works in combination with the kwarg ‘disagg_val’.

  • disagg_val (float, optional) – Specifies what number should be taken as the value, which is to be disaggregated according to the method provided in disagg_met. None: The shape’s value is taken from the exp.gdf.value column. float: This given number will be disaggregated according to the method. In case exp.gdf.value column exists, original values in there will be ignored. The default is None.

Returns:

exp_pnt – Exposures with a double index geodataframe, first level indicating membership of the original geometries of exp, second for the point disaggregation within each geometry.

Return type:

Exposures

Note

Works with polygon geometries only. No points or lines are allowed.

climada.util.lines_polys_handler.gdf_to_pnts(gdf, res, to_meters)[source]#

Disaggregate geodataframe with (multi-)polygons geometries to points.

Parameters:
  • gdf (gpd.GeoDataFrame) – Geodataframe instance with gdf.geometry containing (multi-)lines or (multi-)polygons. Points are ignored.

  • res (float) – Resolution of the disaggregation grid. Can also be a tuple of [x_grid, y_grid] numpy arrays. In this case, to_meters is ignored.

  • to_meters (bool) – If True, the geometries are projected to an equal area projection before the disaggregation. res is then in meters. The exposures are then reprojected into the original projections before the impact calculation.

Returns:

gdf_pnt – with a double index, first for the geometries of exp, second for the point disaggregation of the geometries.

Return type:

gpd.GeoDataFrame

climada.util.lines_polys_handler.gdf_to_grid(gdf, grid)[source]#

Disaggregate geodataframe with (multi-)polygons geometries to points based on a pre-defined grid.

Parameters:
  • gdf (gpd.GeoDataFrame) – Geodataframe instance with gdf.geometry containing (multi-)polygons.

  • grid (np.array()) – Grid on which to disaggregate the exposures. Provided as two vectors [x_grid, y_grid].

Returns:

gdf_pnt – with a double index, first for the geometries of exp, second for the point disaggregation of the geometries.

Return type:

gpd.GeoDataFrame

Note

Works only with polygon geometries. No mixed inputs (with lines or points) are allowed.

Raises:

AttributeError – if geometry types other than polygons are contained in the dataframe

climada.util.lines_polys_handler.reproject_grid(x_grid, y_grid, orig_crs, dest_crs)[source]#

Reproject a grid from one crs to another

Parameters:
  • x_grid – x-coordinates

  • y_grid – y-coordinates

  • orig_crs (pyproj.CRS) – original CRS of the grid

  • dest_crs (pyproj.CRS) – CRS of the grid to be reprojected to

Returns:

Grid coordinates in reprojected crs

Return type:

x_trafo, y_trafo

climada.util.lines_polys_handler.reproject_poly(poly, orig_crs, dest_crs)[source]#

Reproject a polygon from one crs to another

Parameters:
  • poly (shapely Polygon) – Polygon

  • orig_crs (pyproj.CRS) – original CRS of the polygon

  • dest_crs (pyproj.CRS) – CRS of the polygon to be reprojected to

Returns:

poly – Polygon in desired projection

Return type:

shapely Polygon

climada.util.lines_polys_handler.set_imp_mat(impact, imp_mat)[source]#

Set Impact attributes from the impact matrix. Returns a copy. Overwrites eai_exp, at_event, aai_agg, imp_mat.

Parameters:
  • impact (Impact) – Impact instance.

  • imp_mat (sparse.csr_matrix) – matrix num_events x num_exp with impacts.

Returns:

imp – Copy of impact with eai_exp, at_event, aai_agg, imp_mat set.

Return type:

Impact

climada.util.lines_polys_handler.eai_exp_from_mat(imp_mat, freq)[source]#

Compute the expected impact for each exposure from the total impact matrix

Parameters:
  • imp_mat (sparse.csr_matrix) – matrix num_events x num_exp with impacts.

  • freq (np.array) – frequency of events

Returns:

eai_exp – expected impact for each exposure within a period of 1/frequency_unit

Return type:

np.array

climada.util.lines_polys_handler.at_event_from_mat(imp_mat)[source]#

Compute impact for each hazard event from the total impact matrix

Parameters:

imp_mat (sparse.csr_matrix) – matrix num_events x num_exp with impacts.

Returns:

at_event – impact for each hazard event

Return type:

np.array

climada.util.lines_polys_handler.aai_agg_from_at_event(at_event, freq)[source]#

Aggregate impact.at_event

Parameters:
  • at_event (np.array) – impact for each hazard event

  • freq (np.array) – frequency of events

Returns:

average impact within a period of 1/frequency_unit, aggregated

Return type:

float
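The three helpers above are simple reductions of the impact matrix. A dense-numpy sketch with made-up numbers (CLIMADA operates on a sparse matrix, but the arithmetic is the same):

```python
import numpy as np

imp_mat = np.array([[0.0, 2.0],   # event 0: impact at each exposure point
                    [4.0, 6.0]])  # event 1
freq = np.array([0.1, 0.01])      # frequency of each event

at_event = imp_mat.sum(axis=1)    # impact per hazard event
eai_exp = freq @ imp_mat          # expected impact per exposure point
aai_agg = at_event @ freq         # aggregated average impact

print(at_event)  # [ 2. 10.]
print(eai_exp)   # [0.04 0.26]
print(aai_agg)   # approximately 0.3
```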

climada.util.plot module#

climada.util.plot.geo_bin_from_array(array_sub, geo_coord, var_name, title, pop_name=True, buffer=1.0, extend='neither', proj=ccrs.PlateCarree(), shapes=True, axes=None, figsize=(9, 13), adapt_fontsize=True, **kwargs)[source]#

Plot array values binned over input coordinates.

Parameters:
  • array_sub (np.array(1d or 2d) or list(np.array)) – Each array (in a row or in the list) contains the values at each point in the corresponding geo_coord that are binned in one subplot.

  • geo_coord (2d np.array or list(2d np.array)) – (lat, lon) for each point in a row. If one provided, the same grid is used for all subplots. Otherwise provide as many as subplots in array_sub.

  • var_name (str or list(str)) – label to be shown in the colorbar. If one provided, the same is used for all subplots. Otherwise provide as many as subplots in array_sub.

  • title (str or list(str)) – subplot title. If one provided, the same is used for all subplots. Otherwise provide as many as subplots in array_sub.

  • pop_name (bool, optional) – add names of the populated places, by default True.

  • buffer (float, optional) – border to add to coordinates, by default BUFFER

  • extend (str, optional) – extend border colorbar with arrows. [ ‘neither’ | ‘both’ | ‘min’ | ‘max’ ], by default ‘neither’

  • proj (ccrs, optional) – coordinate reference system of the given data, by default ccrs.PlateCarree()

  • shapes (bool, optional) – Overlay Earth’s countries coastlines to matplotlib.pyplot axis. The default is True

  • axes (Axes or ndarray(Axes), optional) – by default None

  • figsize (tuple, optional) – figure size for plt.subplots, by default (9, 13)

  • adapt_fontsize (bool, optional) – If set to true, the size of the fonts will be adapted to the size of the figure. Otherwise the default matplotlib font size is used. Default is True.

  • **kwargs – arbitrary keyword arguments for hexbin matplotlib function

Return type:

cartopy.mpl.geoaxes.GeoAxesSubplot

Raises:

ValueError – Input array size mismatch

climada.util.plot.geo_im_from_array(array_sub, coord, var_name, title, proj=None, smooth=True, shapes=True, axes=None, figsize=(9, 13), adapt_fontsize=True, **kwargs)[source]#

Image(s) plot defined in array(s) over input coordinates.

Parameters:
  • array_sub (np.array(1d or 2d) or list(np.array)) – Each array (in a row or in the list) contains the values at each point in the corresponding coord that are plotted in one subplot.

  • coord (2d np.array) – (lat, lon) for each point in a row. The same grid is used for all subplots.

  • var_name (str or list(str)) – label to be shown in the colorbar. If one provided, the same is used for all subplots. Otherwise provide as many as subplots in array_sub.

  • title (str or list(str)) – subplot title. If one provided, the same is used for all subplots. Otherwise provide as many as subplots in array_sub.

  • proj (ccrs, optional) – coordinate reference system used in coordinates, by default None

  • smooth (bool, optional) – smooth plot to RESOLUTIONxRESOLUTION, by default True

  • shapes (bool, optional) – Overlay Earth’s countries coastlines to matplotlib.pyplot axis. The default is True

  • axes (Axes or ndarray(Axes), optional) – by default None

  • figsize (tuple, optional) – figure size for plt.subplots, by default (9, 13)

  • adapt_fontsize (bool, optional) – If set to true, the size of the fonts will be adapted to the size of the figure. Otherwise the default matplotlib font size is used. Default is True.

  • **kwargs – arbitrary keyword arguments for pcolormesh matplotlib function

Return type:

cartopy.mpl.geoaxes.GeoAxesSubplot

Raises:

ValueError

climada.util.plot.make_map(num_sub=1, figsize=(9, 13), proj=ccrs.PlateCarree(), adapt_fontsize=True)[source]#

Create map figure with cartopy.

Parameters:
  • num_sub (int or tuple) – number of total subplots in figure OR number of subfigures in row and column: (num_row, num_col).

  • figsize (tuple) – figure size for plt.subplots

  • proj (cartopy.crs projection, optional) – Geographical projection. The default is PlateCarree.

  • adapt_fontsize (bool, optional) – If set to true, the size of the fonts will be adapted to the size of the figure. Otherwise the default matplotlib font size is used. Default is True.

Returns:

fig, axis_sub, fontsize

Return type:

matplotlib.figure.Figure, cartopy.mpl.geoaxes.GeoAxesSubplot, int

climada.util.plot.add_shapes(axis)[source]#

Overlay Earth’s countries coastlines to matplotlib.pyplot axis.

Parameters:
  • axis (cartopy.mpl.geoaxes.GeoAxesSubplot) – Cartopy axis

  • projection (cartopy.crs projection, optional) – Geographical projection. The default is PlateCarree.

climada.util.plot.add_populated_places(axis, extent, proj=ccrs.PlateCarree(), fontsize=None)[source]#

Add city names.

Parameters:
  • axis (cartopy.mpl.geoaxes.GeoAxesSubplot) – cartopy axis.

  • extent (list) – geographical limits [min_lon, max_lon, min_lat, max_lat]

  • proj (cartopy.crs projection, optional) – geographical projection, The default is PlateCarree.

  • fontsize (int, optional) – Size of the fonts. If set to None, the default matplotlib settings are used.

climada.util.plot.add_cntry_names(axis, extent, proj=ccrs.PlateCarree(), fontsize=None)[source]#

Add country names.

Parameters:
  • axis (cartopy.mpl.geoaxes.GeoAxesSubplot) – Cartopy axis.

  • extent (list) – geographical limits [min_lon, max_lon, min_lat, max_lat]

  • proj (cartopy.crs projection, optional) – Geographical projection. The default is PlateCarree.

  • fontsize (int, optional) – Size of the fonts. If set to None, the default matplotlib settings are used.

climada.util.save module#

climada.util.save.save(out_file_name, var)[source]#

Save variable with provided file name. Uses configuration save_dir folder if no absolute path provided.

Parameters:
  • out_file_name (str) – file name (absolute path or relative to configured save_dir)

  • var (object) – variable to save in pickle format

climada.util.save.load(in_file_name)[source]#

Load variable contained in file. Uses configuration save_dir folder if no absolute path provided.

Parameters:

in_file_name (str) – file name

Return type:

object

climada.util.scalebar_plot module#

climada.util.scalebar_plot.scale_bar(ax, location, length, metres_per_unit=1000, unit_name='km', tol=0.01, angle=0, color='black', linewidth=3, text_offset=0.005, ha='center', va='bottom', plot_kwargs=None, text_kwargs=None, **kwargs)[source]#

Add a scale bar to CartoPy axes.

For angles between 0 and 90 the text and line may be plotted at slightly different angles for unknown reasons. To work around this, override the ‘rotation’ keyword argument with text_kwargs.

Parameters:
  • ax – CartoPy axes.

  • location – Position of left-side of bar in axes coordinates.

  • length – Geodesic length of the scale bar.

  • metres_per_unit – Number of metres in the given unit. Default: 1000

  • unit_name – Name of the given unit. Default: ‘km’

  • tol – Allowed relative error in length of bar. Default: 0.01

  • angle – Anti-clockwise rotation of the bar.

  • color – Color of the bar and text. Default: ‘black’

  • linewidth – Same argument as for plot.

  • text_offset – Perpendicular offset for text in axes coordinates. Default: 0.005

  • ha – Horizontal alignment. Default: ‘center’

  • va – Vertical alignment. Default: ‘bottom’

  • plot_kwargs – Keyword arguments for plot, overridden by **kwargs.

  • text_kwargs – Keyword arguments for text, overridden by **kwargs.

  • kwargs – Keyword arguments for both plot and text.

climada.util.select module#

climada.util.select.get_attributes_with_matching_dimension(obj, dims)[source]#

Get the attributes of an object that have at least len(dims) dimensions, where every entry of dims appears in the attribute’s shape.

Parameters:
  • obj (object of any class) – The object from which matching attributes are returned

  • dims (list[int]) – List of dimension sizes to match

Returns:

list_of_attrs – List of names of the attributes with matching dimensions

Return type:

list[str]
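The matching rule can be sketched as follows; this standalone version is an approximation of the described behavior, not the library’s exact code:

```python
import numpy as np

def get_attributes_with_matching_dimension(obj, dims):
    """Names of attributes whose shape has at least len(dims) axes and
    contains every requested dimension size."""
    matching = []
    for name, attr in vars(obj).items():
        shape = getattr(attr, "shape", None)
        if shape is None:
            continue  # attribute has no notion of dimensions
        if len(shape) >= len(dims) and all(d in shape for d in dims):
            matching.append(name)
    return matching

class Holder:
    def __init__(self):
        self.grid = np.zeros((3, 5))  # matches dims [3, 5]
        self.row = np.zeros(5)        # too few dimensions
        self.name = "demo"            # no shape at all

attrs = get_attributes_with_matching_dimension(Holder(), [3, 5])
```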

climada.util.value_representation module#

climada.util.value_representation.sig_dig(x, n_sig_dig=16)[source]#

Rounds x to n_sig_dig significant digits. 0, inf and NaN are returned unchanged.

Examples

with n_sig_dig = 5:

1.234567 -> 1.2346, 123456.89 -> 123460.0

Parameters:
  • x (float) – number to be rounded

  • n_sig_dig (int, optional) – Number of significant digits. The default is 16.

Returns:

Rounded number

Return type:

float
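The rounding rule can be reproduced with the standard library; a sketch, not the library’s exact implementation:

```python
import math

def sig_dig(x, n_sig_dig=16):
    # 0, inf and NaN pass through unchanged
    if x == 0 or math.isinf(x) or math.isnan(x):
        return x
    magnitude = math.floor(math.log10(abs(x)))
    return round(x, -magnitude + (n_sig_dig - 1))
```

With n_sig_dig=5 this reproduces the examples above: 1.234567 becomes 1.2346 and 123456.89 becomes 123460.0.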

climada.util.value_representation.sig_dig_list(iterable, n_sig_dig=16)[source]#

Vectorized form of sig_dig. Rounds a list of floats to a given number of significant digits.

Parameters:
  • iterable (iter(float)) – iterable of numbers to be rounded

  • n_sig_dig (int, optional) – Number of significant digits. The default is 16.

Returns:

list of rounded floats

Return type:

list

climada.util.value_representation.convert_monetary_value(values, abbrev, n_sig_dig=None)[source]#
climada.util.value_representation.value_to_monetary_unit(values, n_sig_dig=None, abbreviations=None)[source]#

Converts a list of values to the closest common monetary unit.

0, NaN and inf have no unit.

Parameters:
  • values (int or float, list(int or float) or np.ndarray(int or float)) – Values to be converted

  • n_sig_dig (int, optional) – Number of significant digits to return.

    Examples: n_sig_dig=5: 1.234567 -> 1.2346, 123456.89 -> 123460.0

    Default: all digits are returned.

  • abbreviations (dict, optional) – Names of the abbreviations for the powers-of-1000 factors

    Default: { 1:’’, 1000: ‘K’, 1000000: ‘M’, 1000000000: ‘Bn’, 1000000000000: ‘Tn’ }

Returns:

  • mon_val (np.ndarray) – Array of values in monetary unit

  • name (string) – Monetary unit

Examples

values = [1e6, 2*1e6, 4.5*1e7, 0, NaN, inf] ->

[1, 2, 45, 0, NaN, inf] [‘M’]
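A hedged sketch of the unit selection: the abbreviation dictionary mirrors the documented default, but the selection logic below is an approximation of, not a copy of, the library’s internals.

```python
import numpy as np

ABBREVIATIONS = {1: "", 10**3: "K", 10**6: "M", 10**9: "Bn", 10**12: "Tn"}

def value_to_monetary_unit(values, abbreviations=None):
    abbrevs = abbreviations or ABBREVIATIONS
    vals = np.atleast_1d(np.asarray(values, dtype=float))
    finite = vals[np.isfinite(vals) & (vals != 0)]
    if finite.size == 0:
        return vals, abbrevs[1]  # 0, NaN and inf carry no unit
    # largest power of 1000 not exceeding the largest finite value
    exponent = int(np.floor(np.log10(np.max(np.abs(finite)))) // 3) * 3
    factor = max(k for k in abbrevs if k <= 10 ** max(exponent, 0))
    return vals / factor, abbrevs[factor]

vals, unit = value_to_monetary_unit([1e6, 2e6, 4.5e7])
```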

climada.util.value_representation.safe_divide(numerator, denominator, replace_with=nan)[source]#

Safely divide two arrays or scalars.

This function handles division by zero and NaN values in the numerator or denominator on an element-wise basis. If the division results in infinity, NaN, or division by zero in any element, that particular result is replaced by the specified value.

Parameters:
  • numerator (np.ndarray or scalar) – The numerator for division.

  • denominator (np.ndarray or scalar) – The denominator for division. Division by zero and NaN values are handled safely.

  • replace_with (float, optional) – The value to use in place of division results that are infinity, NaN, or division by zero. By default, it is NaN.

Returns:

The result of the division. If the division results in infinity, NaN, or division by zero in any element, it returns the value specified in replace_with for those elements.

Return type:

np.ndarray or scalar

Notes

The function uses numpy’s true_divide for array-like inputs and handles both scalar and array-like inputs for the numerator and denominator. NaN values or division by zero in any element of the input will result in the replace_with value in the corresponding element of the output.

Examples

>>> safe_divide(1, 0)
nan
>>> safe_divide(1, 0, replace_with=0)
0
>>> safe_divide([1, 0, 3], [0, 0, 3])
array([nan, nan,  1.])
>>> safe_divide([4, 4], [1, 0])
array([4., nan])
>>> safe_divide([4, 4], [1, nan])
array([ 4., nan])
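The Notes describe the mechanism precisely enough to sketch it; this is an equivalent standalone version, not the library’s verbatim code:

```python
import math
import numpy as np

def safe_divide(numerator, denominator, replace_with=np.nan):
    scalar_input = np.isscalar(numerator) and np.isscalar(denominator)
    # silence divide-by-zero / invalid warnings, then patch the results
    with np.errstate(divide="ignore", invalid="ignore"):
        result = np.true_divide(numerator, denominator)
    result = np.where(np.isfinite(result), result, replace_with)
    return result.item() if scalar_input else result
```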

climada.util.yearsets module#

climada.util.yearsets.impact_yearset(imp, sampled_years, lam=None, correction_fac=True, seed=None)[source]#

Create a yearset of impacts (yimp) containing a probabilistic impact for each year in the sampled_years list, by sampling events from the input impact with a Poisson distribution centered around lam events per year (lam = sum(imp.frequency)). In contrast to the expected annual impact (eai), yimp contains impact values that differ among years. When correction_fac is True, the yimp values are scaled such that their average over all years equals the eai.

Parameters:
  • imp (climada.engine.Impact()) – impact object containing impacts per event

  • sampled_years (list) – A list of years that shall be covered by the resulting yimp.

  • seed (Any, optional) – seed for the default bit generator default: None

Optional parameters
  • lam (int) – The applied Poisson distribution is centered around lam events per year. If no lambda value is given, the default lam = sum(imp.frequency) is used.

  • correction_fac (boolean) – If True, a correction factor is applied to the resulting yimp, scaling it such that the expected annual impact (eai) of the yimp equals the eai of the input impact.

Returns:

  • yimp (climada.engine.Impact()) – yearset of impacts containing annual impacts for all sampled_years

  • sampling_vect (2D array) – The sampling vector specifies how to sample the yimp, it consists of one sub-array per sampled_year, which contains the event_ids of the events used to calculate the annual impacts. Can be used to re-create the exact same yimp.

climada.util.yearsets.impact_yearset_from_sampling_vect(imp, sampled_years, sampling_vect, correction_fac=True)[source]#

Create a yearset of impacts (yimp) containing a probabilistic impact for each year in the sampled_years list, by sampling events from the input impact following the provided sampling vector. In contrast to the expected annual impact (eai), yimp contains impact values that differ among years. When correction_fac is True, the yimp values are scaled such that their average over all years equals the eai.

Parameters:
  • imp (climada.engine.Impact()) – impact object containing impacts per event

  • sampled_years (list) – A list of years that shall be covered by the resulting yimp.

  • sampling_vect (2D array) – The sampling vector specifies how to sample the yimp, it consists of one sub-array per sampled_year, which contains the event_ids of the events used to calculate the annual impacts. It needs to be obtained in a first call, i.e. [yimp, sampling_vect] = climada_yearsets.impact_yearset(…) and can then be provided in this function to obtain the exact same sampling (also for a different imp object)

Optional parameter
  • correction_fac (boolean) – If True, a correction factor is applied to the resulting yimp, scaling it such that the expected annual impact (eai) of the yimp equals the eai of the input impact.

Returns:

yimp – yearset of impacts containing annual impacts for all sampled_years

Return type:

climada.engine.Impact()

climada.util.yearsets.sample_from_poisson(n_sampled_years, lam, seed=None)[source]#

Sample the number of events for n_sampled_years

Parameters:
  • n_sampled_years (int) – The target number of years the impact yearset shall contain.

  • lam (float) – the applied Poisson distribution is centered around lambda events per year

  • seed (int, optional) – seed for numpy.random, will be set if not None default: None

Returns:

events_per_year – Number of events per sampled year

Return type:

np.ndarray
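This maps directly onto numpy’s Poisson sampler; a sketch assuming numpy’s default bit generator, consistent with the seed parameter described above:

```python
import numpy as np

def sample_from_poisson(n_sampled_years, lam, seed=None):
    # one Poisson(lam) draw per sampled year
    rng = np.random.default_rng(seed)
    return rng.poisson(lam=lam, size=n_sampled_years)

events_per_year = sample_from_poisson(n_sampled_years=5, lam=2.0, seed=1)
```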

climada.util.yearsets.sample_events(events_per_year, freqs_orig, seed=None)[source]#

Sample events uniformly from an array (indices_orig) without replacement. If sum(events_per_year) > n_input_events, the input events are repeated (tot_n_events/n_input_events) times, while ensuring that the same event does not occur more than once per sampled year.

Parameters:
  • events_per_year (np.ndarray) – Number of events per sampled year

  • freqs_orig (np.ndarray) – Frequency of each input event

  • seed (Any, optional) – seed for the default bit generator. Default: None

Returns:

sampling_vect – The sampling vector specifies how to sample the yimp, it consists of one sub-array per sampled_year, which contains the event_ids of the events used to calculate the annual impacts.

Return type:

2D array
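The scheme above (frequency-weighted draws, no repeats within a year, pool reuse when a year needs more events than exist) can be sketched like this; it is an approximation of the described behavior, not the library’s exact algorithm:

```python
import numpy as np

def sample_events(events_per_year, freqs_orig, seed=None):
    rng = np.random.default_rng(seed)
    freqs = np.asarray(freqs_orig, dtype=float)
    n_input = freqs.size
    probs = freqs / freqs.sum()
    sampling_vect = []
    for n_events in events_per_year:
        ids = []
        remaining = int(n_events)
        while remaining > 0:
            # draw without replacement; reuse the full pool if exhausted
            take = min(remaining, n_input)
            ids.extend(rng.choice(n_input, size=take, replace=False, p=probs))
            remaining -= take
        sampling_vect.append(np.array(ids, dtype=int))
    return sampling_vect
```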

climada.util.yearsets.compute_imp_per_year(imp, sampling_vect)[source]#

Sample annual impacts from the given event impacts according to the sampling vector

Parameters:
  • imp (climada.engine.Impact()) – impact object containing impacts per event

  • sampling_vect (2D array) – The sampling vector specifies how to sample the yimp, it consists of one sub-array per sampled_year, which contains the event_ids of the events used to calculate the annual impacts.

Returns:

imp_per_year – Sampled impact per year (length = sampled_years)

Return type:

np.ndarray
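Summing the drawn events’ impacts per year is the whole computation; a sketch that takes the per-event impact array (imp.at_event in CLIMADA) directly instead of the full Impact object:

```python
import numpy as np

def compute_imp_per_year(at_event, sampling_vect):
    # annual impact = sum of the impacts of that year's sampled events
    at_event = np.asarray(at_event)
    return np.array([at_event[ids].sum() for ids in sampling_vect])

imp_per_year = compute_imp_per_year(
    [10.0, 20.0, 30.0],                 # impact per event
    [np.array([0, 2]), np.array([1])],  # two sampled years
)
```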

climada.util.yearsets.calculate_correction_fac(imp_per_year, imp)[source]#

Calculate a correction factor that can be used to scale the yimp in such a way that the expected annual impact (eai) of the yimp amounts to the eai of the input imp

Parameters:
  • imp_per_year (np.ndarray) – sampled yimp

  • imp (climada.engine.Impact()) – impact object containing impacts per event

Returns:

correction_factor – The correction factor is calculated as imp_eai/yimp_eai

Return type:

float
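The factor is the ratio of the two expected annual impacts; a sketch taking the per-event impacts and frequencies (imp.at_event and imp.frequency in CLIMADA) as plain arrays rather than an Impact object:

```python
import numpy as np

def calculate_correction_fac(imp_per_year, at_event, frequency):
    # eai of the original impact: frequency-weighted sum of event impacts
    imp_eai = np.sum(np.asarray(at_event) * np.asarray(frequency))
    # eai of the yearset: mean of the sampled annual impacts
    yimp_eai = np.mean(imp_per_year)
    return imp_eai / yimp_eai
```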