Notes on Testing#

Any programming code that is meant to be used more than once should have a test, i.e., an additional piece of programming code that is able to check whether the original code is doing what it’s supposed to do.

Writing tests is work. As a matter of fact, it can be a lot of work; depending on the program, often more than writing the original code.
Luckily, it essentially always follows the same basic procedure, and there are many tools and frameworks available to facilitate this work.

In CLIMADA we use the Python test runner pytest for execution of the tests.

Why do we write tests?

  • The code is most certainly buggy if it’s not properly tested.

  • Software without tests is worthless. It won’t be trusted and therefore it won’t be used.

When do we write tests?

  • Before implementation. A very good idea. It is called Test Driven Development.

  • During implementation. Test routines can be used to run code even while it’s not fully implemented. This is better than running it interactively, because the full context is set up by the test.
    From the command line:
    pytest climada/x/test/test_y.py::TestY::test_z

  • Right after implementation. In case the coverage analysis shows that there are missing tests, see Test Coverage.

  • Later, when a bug was encountered. Whenever a bug gets fixed, the tests need to be adapted or amended as well.
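A test added after a bug fix can be sketched like this (the helper monthly_mean and its empty-input bug are purely illustrative, not part of CLIMADA):

```python
import unittest

def monthly_mean(values):
    """Illustrative helper that once crashed on an empty list (the bug)."""
    if not values:  # the fix: guard against empty input
        return 0.0
    return sum(values) / len(values)

class TestMonthlyMean(unittest.TestCase):
    def test_empty_input(self):
        """Regression test: reproduces the exact input that triggered the bug."""
        self.assertEqual(0.0, monthly_mean([]))

    def test_mean(self):
        self.assertAlmostEqual(2.0, monthly_mean([1, 2, 3]))
```

Keeping such a regression test in place ensures the bug cannot silently reappear.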

Basic Test Procedure#

  • Test data setup
    Creating suitable test data is crucial, but not always trivial. It should be extensive enough to cover all functional requirements and yet as small as possible in order to save resources, both in space and time.

  • Code execution
    The main goal of a test is to find bugs before the user encounters them. Ultimately every single line of the program should be subject to test.
    In order to achieve this, it is necessary to run the code with respect to the whole parameter space. In practice that means that even a simple method may require a lot of test code.
    (Bear this in mind when designing methods or functions: the number of required tests increases dramatically with the number of function parameters!)

  • Result validation
    After the code was executed the actual result is compared to the expected result. The expected result depends on test data, state and parametrization.
    Therefore result validation can be very extensive. In most cases it won’t be practical nor required to validate every single byte. Nevertheless attention should be paid to validate a range of results that is wide enough to discover as many thinkable discrepancies as possible.
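The three steps above can be sketched in a single unittest test case; interpolate is a toy function introduced only for this illustration:

```python
import unittest

def interpolate(x0, x1, weight):
    """Toy function under test (illustrative only)."""
    return x0 + weight * (x1 - x0)

class TestInterpolate(unittest.TestCase):
    def setUp(self):
        # 1. test data setup: small, but spanning the relevant parameter space
        self.x0, self.x1 = 10.0, 20.0

    def test_boundaries(self):
        # 2. code execution and 3. result validation at the parameter edges
        self.assertEqual(10.0, interpolate(self.x0, self.x1, 0.0))
        self.assertEqual(20.0, interpolate(self.x0, self.x1, 1.0))

    def test_midpoint(self):
        self.assertAlmostEqual(15.0, interpolate(self.x0, self.x1, 0.5))
```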

Testing types#

Despite the common basic procedure, many different kinds of tests are distinguished (see Wikipedia: Software testing). Very commonly, a distinction is made based on levels:

  • Unit Test: tests only a small part of the code, a single function or method, essentially without interaction between modules

  • Integration Test: tests whether different methods and modules work well with each other

  • System Test: tests the whole software at once, using the exposed interface to execute a program

Unit Tests#

Unit tests are meant to check the correctness of program units, i.e., single methods or functions. They are supposed to be fast, simple, and easy to write.

Developer guidelines:#

  • Each module in CLIMADA has a counterpart containing unit tests.
    Naming suggestion: climada.x.y → climada.x.test.test_y

  • Write a test class for each class of the module, plus a test class for the module itself in case it contains (module) functions.
    Naming suggestion: class X → class TestX(unittest.TestCase), module climada.x.y → class TestY(unittest.TestCase)

  • Ideally, each method or function should have at least one test method.
    Naming suggestion: def xy() → def test_xy(), def test_xy_suffix1(), def test_xy_suffix2()
    Functions that are created for the sole purpose of structuring the code do not necessarily have their own unit test.

  • Aim at having very fast unit tests!
    There will be hundreds of unit tests and in general they are called in corpore and expected to finish after a reasonable amount of time.
    Less than 10 milliseconds is good; 2 seconds is the maximum acceptable duration.

  • A unit test shouldn’t call more than one CLIMADA method or function.
    The motivation to combine more than one method in a test is usually creation of test data. Try to provide test data by other means. Define them on the spot (within the code of the test module) or create a file in a test data directory that can be read during the test. If this is too tedious, at least move the data acquisition part to the constructor of the test class.

  • Do not use external resources in unit tests.
    Methods depending on external resources can be skipped from unit tests.
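Taken together, the naming suggestions above could shape a test module like the following sketch. The module climada.x.y, class X, and function y_util are hypothetical; stand-in definitions are inlined here so the sketch is self-contained, whereas a real test module would import them:

```python
"""Sketch of climada/x/test/test_y.py for a hypothetical module climada.x.y."""
import unittest

# stand-ins for the hypothetical module under test
class X:
    def xy(self, value):
        return 2 * value

def y_util(value):
    return value + 1

class TestX(unittest.TestCase):
    """Unit tests for class X."""

    @classmethod
    def setUpClass(cls):
        # tedious test data acquisition runs once for all tests of this class
        cls.test_data = [1, 2, 3]

    def test_xy(self):
        self.assertEqual([2, 4, 6], [X().xy(v) for v in self.test_data])

class TestY(unittest.TestCase):
    """Unit tests for the module functions of climada.x.y."""

    def test_y_util(self):
        self.assertEqual(5, y_util(4))
```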

Integration Tests#

Integration tests are meant to check the correctness of interaction between units of a module or a package.
As a general rule, more work is required to write integration tests than to write unit tests and they have longer runtime.

Developer guidelines:#

  • Write integration tests for all intended use cases.

  • Do not expect external resources to be immutable.
    If calling on external resources is part of the workflow to be tested, take into account that they may change over time.
    If the according API has means to indicate the precise version of the requested data, make use of it, otherwise, adapt your expectations and leave room for future changes.
    For example: your function ultimately relies on the current GDP retrieved from an online data provider, and you test it for Switzerland, where it is about 700 Bio CHF at the moment. Leave room for future development, try to be on a reasonably safe side, and tolerate a range between 70 Bio CHF and 7000 Bio CHF.

  • Test location.
    Integration tests are written in modules climada.test.test_xy or in climada.x.test.test_y, like the unit tests.
    For the latter it is required that they do not use external resources and that the tests do not have a runtime longer than 2 seconds.
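The GDP example above could be asserted with a deliberately wide tolerance, along these lines (fetch_gdp is a hypothetical stand-in for the external data call, hard-wired here so the sketch runs offline):

```python
import unittest

def fetch_gdp(country):
    """Hypothetical stand-in for an online GDP lookup, in Bio CHF."""
    return 700.0  # imagine this value coming from an external provider

class TestGdp(unittest.TestCase):
    def test_gdp_che(self):
        gdp = fetch_gdp("CHE")
        # wide bounds so future changes of the external source don't break the test
        self.assertGreater(gdp, 70.0)
        self.assertLess(gdp, 7000.0)
```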

System Tests#

System tests are meant to check whether the whole software package is working correctly.

In CLIMADA, the system test that checks the core functionality of the package is executed by calling make install_test from the installation directory.

Error Messages#

When a test fails, make sure the raised exception contains all information that might be helpful to identify the exact problem.
If the error message is ever going to be read by someone other than you, the developer of the test, you had best assume it will be someone who is completely naive about CLIMADA.

Writing extensive failure messages will eventually save more time than it takes to write them.

Putting the failure information into logs is neither required nor sufficient: the automated tests are built around error messages, not logs.
Anything written to stdout by a test method is useful mainly for the developer of the test.
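Most unittest assertions accept an optional msg argument that is appended to the failure output; it is a convenient place for the context a naive reader would need. A minimal sketch (the shapes and the hint text are invented for illustration):

```python
import unittest

class TestWithMessage(unittest.TestCase):
    def test_shape(self):
        expected, actual = (3, 4), (3, 4)
        # the msg is only shown when the assertion fails
        self.assertEqual(
            expected, actual,
            msg=f"intensity matrix has shape {actual}, expected {expected}; "
                "check the centroids assignment in the test data setup"
        )
```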

Test Coverage#

Coverage is a measure of how much of your code is actually checked by the tests. One distinguishes between line coverage and branch or conditionals coverage. The line coverage reports the percentage of all lines of code covered by the tests. The branch coverage reports the percentage of all possible branches covered by the tests. Achieving a high branch coverage is much harder than a high line coverage.

In CLIMADA, we aim for a high line coverage (only). Ideally, any new code should have a line coverage of 100%, meaning every line of code is tested. You can inspect the test coverage of your local code by following the instructions for executing tests below.

See the Continuous Integration Guide for information on how to inspect coverage of the automated test pipeline.

Test files#

For integration tests it can be necessary to read data from a file, in order to set up a test that checks functionality with non-trivial data, beyond the scope of unit tests. Some of these test files can be found in the climada/**/test/data directories or in the climada/data directory. As is mostly the case with large test data, they are not well suited for a Git repository.

The preferable alternative is to post the data to the Climada Data-API with status test_dataset and retrieve the files on the fly from there during tests. To do this one can use the convenience method climada.test.get_test_file:

from climada.test import get_test_file

my_test_file = get_test_file(ds_name='my-test-file', file_format='hdf5')  # returns a pathlib.Path object

Behind the scenes, get_test_file uses the climada.util.api_client.Client to identify the appropriate dataset and downloads the respective file to the local dataset cache (~/climada/data/*).

Dealing with External Resources#

Methods depending on external resources (calling a URL or database) are ideally atomic and do nothing other than provide data. If this is the case, they can be skipped in unit tests on safe grounds, provided they are tested at some point in higher-level tests.

In CLIMADA there are the utility functions climada.util.files_handler.download_file and climada.util.files_handler.download_ftp, which are assigned to exactly this task for the case of external data being available as files.

Any other method that calls such a data-providing method can be made compliant with unit test rules by having an option to replace it with another method. This way, one can write a dummy method in the test module that provides data, e.g., from a file or hard-coded, which can be given as the optional argument.

import climada
from pathlib import Path

def x(download_file=climada.util.files_handler.download_file):
    filepath = download_file('http://real_data.ch')
    return Path(filepath).stat().st_size

import unittest
class TestX(unittest.TestCase):
    def download_file_dummy(self, url):
        # provides the path to a local dummy file of known size
        return "phony_data.ch"

    def test_x(self):
        self.assertEqual(44, x(download_file=self.download_file_dummy))

Developer guideline:#

  • When introducing a new external resource, add a test method in test_data_api.py.

Test Configuration#

Use the configuration file climada.config in the installation directory to define file paths and external resources used during tests (see the Constants and Configuration Guide).

Testing CLIMADA#

Executing the entire test suite requires you to install the additional requirements for testing. See the installation instructions for developer dependencies for further information.

In general, you execute tests with

pytest <path>

where you replace <path> with a Python file containing tests or an entire directory containing multiple test files. Pytest will walk through all subdirectories of <path> and try to discover all tests. For example, to execute all tests within the CLIMADA repository, execute

pytest climada/

from within the climada_python directory.

Installation Test#

From the installation directory run

make install_test

It lasts about 45 seconds. If it succeeds, CLIMADA is properly installed and ready to use.

Unit Tests#

From the installation directory run

make unit_test

It lasts about 5 minutes and runs unit tests for all modules.

Integration Tests#

From the installation directory run

make integ_test

It lasts about 15 minutes and runs extensive integration tests, during which data from external resources is also read. An open internet connection is required for a successful test run.


Executing make unit_test and make integ_test provides local coverage reports as HTML pages at coverage/index.html. You can open this file with your browser.