testtools

This module implements tools for testing HydPy and its models.

Module testtools implements the following members:


class hydpy.core.testtools.StdOutErr(indent: int = 0)[source]

Bases: object

Replaces sys.stdout and sys.stderr temporarily when calling method perform_tests() of class Tester.

indent: int
texts: list[str]
write(text: str) None[source]

Memorise the given text for later writing.

print_(text: str) None[source]

Print the memorised text to the original sys.stdout.

flush() None[source]

Do nothing.
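
The following lines sketch the memorise-then-print workflow in isolation (our own illustration, assuming the texts list is already set up at initialisation; method perform_tests() performs the actual stream replacement internally):

>>> import sys
>>> from hydpy.core.testtools import StdOutErr
>>> stdouterr = StdOutErr(indent=4)
>>> stdout, sys.stdout = sys.stdout, stdouterr
>>> try:
...     print("captured line")  # memorised instead of printed immediately
... finally:
...     sys.stdout = stdout  # always restore the original stream
>>> assert "captured line" in "".join(stdouterr.texts)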

class hydpy.core.testtools.Tester[source]

Bases: object

Tests either a base or an application model.

Usually, a Tester object is initialised at the end of the __init__ file of its base model or at the end of the module of an application model.

>>> from hydpy.models import hland, hland_96
>>> hland.tester.package
'hydpy.models.hland'
>>> hland_96.tester.package
'hydpy.models'
filepath: str
package: str
ispackage: bool
property filenames: list[str]

The filenames which define the considered base or application model.

>>> from hydpy.models import hland, hland_96
>>> from pprint import pprint
>>> pprint(hland.tester.filenames)
['__init__.py',
 'hland_aides.py',
 'hland_constants.py',
 'hland_control.py',
 'hland_derived.py',
 'hland_factors.py',
 'hland_fixed.py',
 'hland_fluxes.py',
 'hland_inputs.py',
 'hland_masks.py',
 'hland_model.py',
 'hland_outlets.py',
 'hland_parameters.py',
 'hland_sequences.py',
 'hland_states.py']
>>> hland_96.tester.filenames
['hland_96.py']
property modulenames: list[str]

The module names to be taken into account for testing.

>>> from hydpy.models import hland, hland_96
>>> from pprint import pprint
>>> pprint(hland.tester.modulenames)
['hland_aides',
 'hland_constants',
 'hland_control',
 'hland_derived',
 'hland_factors',
 'hland_fixed',
 'hland_fluxes',
 'hland_inputs',
 'hland_masks',
 'hland_model',
 'hland_outlets',
 'hland_parameters',
 'hland_sequences',
 'hland_states']
>>> hland_96.tester.modulenames
['hland_96']
perform_tests() None[source]

Perform all doctests either in Python or in Cython mode depending on the state of usecython set in module pub.

Usually, perform_tests() is triggered automatically by a Cythonizer object assigned to the same base or application model as a Tester object. However, you are free to call it at any time when in doubt about the functionality of a particular base or application model. Doing so might change some states of your current configuration, but only temporarily (besides “projectname”). We pick the Timegrids object of module pub as an example, which is changed multiple times during testing but is finally reset to its original value:

>>> from hydpy import pub
>>> pub.projectname = "test"
>>> pub.timegrids = "2000-01-01", "2001-01-01", "1d"
>>> from hydpy.models import hland, hland_96
>>> hland.tester.perform_tests()  
Test package hydpy.models.hland in ...ython mode.
    * hland_aides:
        no failures occurred
    * hland_constants:
        no failures occurred
    * hland_control:
        no failures occurred
    * hland_derived:
        no failures occurred
    * hland_factors:
        no failures occurred
    * hland_fixed:
        no failures occurred
    * hland_fluxes:
        no failures occurred
    * hland_inputs:
        no failures occurred
    * hland_masks:
        no failures occurred
    * hland_model:
        no failures occurred
    * hland_outlets:
        no failures occurred
    * hland_parameters:
        no failures occurred
    * hland_sequences:
        no failures occurred
    * hland_states:
        no failures occurred
>>> hland_96.tester.perform_tests()  
Test module hland_96 in ...ython mode.
    * hland_96:
        no failures occurred
>>> pub.projectname
'test'
>>> pub.timegrids
Timegrids("2000-01-01 00:00:00",
          "2001-01-01 00:00:00",
          "1d")

To show the reporting of possible errors, we change the string representation of parameter ZoneType temporarily. Again, the Timegrids object is reset to its initial state after testing:

>>> from unittest import mock
>>> with mock.patch(
...     "hydpy.models.hland.hland_control.ZoneType.__repr__",
...     return_value="damaged"):
...     hland.tester.perform_tests()  
Test package hydpy.models.hland in ...ython mode.
    * hland_aides:
        no failures occurred
    * hland_constants:
        no failures occurred
    * hland_control:
        ******...hland_control.py", line ..., in hydpy.models.hland.hland_control.ZoneType
        Failed example:
            zonetype
        Expected:
            zonetype(FIELD, FOREST, GLACIER, ILAKE, ILAKE, FIELD)
        Got:
            damaged
        **********************************************************************
        1 items had failures:
           1 of   7 in hydpy.models.hland.hland_control.ZoneType
        ***Test Failed*** 1 failures.
    * hland_derived:
        no failures occurred
    ...
    * hland_states:
        no failures occurred
>>> pub.projectname
'test'
>>> pub.timegrids
Timegrids("2000-01-01 00:00:00",
          "2001-01-01 00:00:00",
          "1d")
class hydpy.core.testtools.Array[source]

Bases: object

Assures that attributes are ndarray objects.

class hydpy.core.testtools.ArrayDescriptor[source]

Bases: object

A descriptor for handling values of Array objects.

class hydpy.core.testtools.Test[source]

Bases: object

Base class for IntegrationTest and UnitTest.

This base class primarily defines how to print the test results. How to prepare and perform the tests is to be defined in its subclasses.

parseqs: Any
HEADER_OF_FIRST_COL: Any
inits = <hydpy.core.testtools.Array object>

Stores arrays for setting the same values of parameters and/or sequences before each new experiment.

abstract property raw_first_col_strings: tuple[str, ...]

To be implemented by the subclasses of Test.

abstract get_output_array(parseqs)[source]

To be implemented by the subclasses of Test.

property nmb_rows: int

The number of rows of the table.

property nmb_cols: int

The number of columns in the table.

property raw_header_strings: list[str]

All raw strings for the table's header.

property raw_body_strings: list[list[str]]

All raw strings for the body of the table.

property raw_strings: list[list[str]]

All raw strings for the complete table.

property col_widths: list[int]

The widths of all columns of the table.

property col_separators: list[str]

The separators for adjacent columns.

property row_nmb_characters: int

The number of characters of a single row of the table.

make_table(idx1: int | None = None, idx2: int | None = None) str[source]

Return the result table between the given indices.

print_table(idx1: int | None = None, idx2: int | None = None) None[source]

Print the result table between the given indices.
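
As a brief, hedged illustration (assuming test is an already performed IntegrationTest or UnitTest instance), both methods accept the same pair of row indices:

>>> table = test.make_table(idx1=0, idx2=3)  # the first three rows as a string
>>> test.print_table(idx1=0, idx2=3)  # print the same rows directly (output omitted)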

class hydpy.core.testtools.PlottingOptions[source]

Bases: object

Plotting options of class IntegrationTest.

width: int
height: int
activated: tuple[IOSequence, ...] | None
axis1: IOSequence | Iterable[IOSequence] | None
axis2: IOSequence | Iterable[IOSequence] | None
class hydpy.core.testtools.IntegrationTest(element: Element | None = None, seqs: tuple[IOSequence, ...] | None = None, inits=None)[source]

Bases: Test

Defines model integration doctests.

The functionality of IntegrationTest is easiest to understand by inspecting doctests like the ones of module arma_rimorido.

Note that all condition sequences (state and logging sequences) are initialised in accordance with the values given as inits values. The values of the simulation sequences of outlet and sender nodes are always set to zero before each test run. All other parameter and sequence values can be changed between different test runs.

HEADER_OF_FIRST_COL: Any = 'date'

The header of the first column containing dates.

plotting_options = <hydpy.core.testtools.PlottingOptions object>
elements: Devices[Element]
nodes: Devices[Node]
element: Element
parseqs: tuple[IOSequence, ...]
property raw_first_col_strings: tuple[str, ...]

The raw date strings of the first column, except the header.

property dateformat: str

Format string for printing dates in the first column of the table.

See the documentation on module datetime for the format strings allowed.

You can query and change property dateformat:

>>> from hydpy import Element, IntegrationTest, prepare_model, pub
>>> pub.timegrids = "2000-01-01", "2001-01-01", "1d"
>>> element = Element("element", outlets="node")
>>> element.model = prepare_model("hland_96")
>>> __package__ = "testpackage"
>>> tester = IntegrationTest(element)
>>> tester.dateformat
'%Y-%m-%d %H:%M:%S'

Passing an ill-defined format string leads to the following error:

>>> tester.dateformat = 999
Traceback (most recent call last):
...
ValueError: The given date format `999` is not a valid format string for `datetime` objects.  Please read the documentation on module datetime of the Python standard library for for further information.
>>> tester.dateformat = "%x"
>>> tester.dateformat
'%x'
get_output_array(parseqs: IOSequence)[source]

Return the array containing the output results of the given sequence.

prepare_node_sequences() None[source]

Prepare the simulation series of all nodes.

This preparation might not be suitable for all types of integration tests. Prepare manually those node sequences for which this method does not result in the desired outcome.

prepare_input_model_sequences() None[source]

Configure the input sequences of the model in a manner that allows for applying their time series data in integration tests.

extract_print_sequences() tuple[IOSequence, ...][source]

Return a tuple of all input, factor, flux, and state sequences of the model and the simulation sequences of all nodes.

prepare_model(update_parameters: bool, use_conditions: dict[str, dict[str, dict[str, float | ndarray[Any, dtype[float64]]]]] | None) None[source]

Derive the secondary parameter values, prepare all required time series and set the initial conditions.

reset_values() None[source]

Set the current values of all factor and flux sequences to nan.

reset_series() None[source]

Initialise all time series with nan values.

reset_outputs() None[source]

Set the values of the simulation sequences of all outlet nodes to zero.

reset_inits() None[source]

Set all initial conditions of all models.

plot(filename: str, axis1: IOSequence | Iterable[IOSequence] | None = None, axis2: IOSequence | Iterable[IOSequence] | None = None) None[source]

Save a plotly HTML file plotting the current test results.

(Optional) arguments:
  • filename: Name of the file. If necessary, the file ending html is added automatically. The file is stored in the html_ folder of subpackage docs.

  • act_sequences: List of the sequences to be shown initially (deprecated).

  • axis1: sequences to be shown initially on the first axis.

  • axis2: sequences to be shown initially on the second axis.
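
A hedged usage sketch (the filename is a placeholder, and we assume test is a prepared IntegrationTest instance whose element's model provides a flux sequence named qt):

>>> test.plot("my_example_results",
...           axis1=element.model.sequences.fluxes.qt)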

class hydpy.core.testtools.UnitTest(model, method, *, first_example=1, last_example=1, parseqs=None)[source]

Bases: Test

Defines unit doctests for a single model method.

HEADER_OF_FIRST_COL: Any = 'ex.'

The header of the first column containing sequential numbers.

nexts = <hydpy.core.testtools.Array object>

Stores arrays for setting different values of parameters and/or sequences before each new experiment.

results = <hydpy.core.testtools.Array object>

Stores arrays with the resulting values of parameters and/or sequences of each new experiment.

property nmb_examples

The number of examples to be calculated.

property idx0

The first index of the examples selected for printing.

property idx1

The last index of the examples selected for printing.

get_output_array(parseqs)[source]

Return the array containing the output results of the given parameter or sequence.

property raw_first_col_strings

The raw integer strings of the first column, except the header.

memorise_inits()[source]

Memorise all initial conditions.

prepare_output_arrays()[source]

Prepare arrays for storing the calculated results for the respective parameters and/or sequences.

reset_inits()[source]

Set all initial conditions.
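
The members above are usually applied indirectly via the pattern known from many method doctests. The following sketch uses placeholder names for the method and the sequence and assumes an already prepared model instance and that calling the UnitTest instance runs and prints the experiments:

>>> from hydpy.core.testtools import UnitTest
>>> test = UnitTest(model, model.calc_example_v1,  # placeholder method
...                 first_example=1, last_example=3,
...                 parseqs=(fluxes.example,))  # placeholder sequence
>>> test.nexts.example = 1.0, 2.0, 3.0  # values applied before each experiment
>>> test()  # run the three experiments and print the result table (omitted)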

class hydpy.core.testtools.Open[source]

Bases: object

Replace the built-in function open in doctests temporarily.

Class Open is intended to make writing to files visible and testable in docstrings. To this end, Python's built-in function open is temporarily replaced by another object that prints the filename and the file content, as shown in the following example:

>>> import os
>>> path = os.path.join("folder", "test.py")
>>> from hydpy import Open
>>> with Open():
...     with open(path, "w") as file_:
...         file_.write("first line\n")
...         file_.writelines(["\n", "third line\n"])
~~~~~~~~~~~~~~
folder/test.py
--------------
first line

third line

~~~~~~~~~~~~~~

Note that, for simplicity, the UNIX style path separator / is used to print the file path on all systems.

Class Open is rather restricted at the moment. Functionalities like reading are not supported so far:

>>> with Open():
...     with open(path, "r") as file_:
...         file_.read()
Traceback (most recent call last):
...
NotImplementedError: Reading is not possible at the moment.  Please see the documentation on class `Open` of module `testtools` for further information.
>>> with Open():
...     with open(path, "r") as file_:
...         file_.readline()
Traceback (most recent call last):
...
NotImplementedError: Reading is not possible at the moment.  Please see the documentation on class `Open` of module `testtools` for further information.
>>> with Open():
...     with open(path, "r") as file_:
...         file_.readlines()
Traceback (most recent call last):
...
NotImplementedError: Reading is not possible at the moment.  Please see the documentation on class `Open` of module `testtools` for further information.
class hydpy.core.testtools.TestIO(clear_own: bool = False, clear_all: bool = False)[source]

Bases: object

Prepare an environment for testing IO functionalities.

Primarily, TestIO changes the current working directory during the execution of with blocks. When inspecting your current working directory, you will likely find no file called testfile.txt:

>>> import os
>>> os.path.exists("testfile.txt")
False

If some tests require writing such a file, this should be done within HydPy’s iotesting folder in subpackage tests, which is achieved by applying the with statement on TestIO:

>>> from hydpy import TestIO
>>> with TestIO():
...     open("testfile.txt", "w").close()
...     print(os.path.exists("testfile.txt"))
True

After the with block, the working directory is reset automatically:

>>> os.path.exists("testfile.txt")
False

Nevertheless, testfile.txt still exists in the folder iotesting:

>>> with TestIO():
...     print(os.path.exists("testfile.txt"))
True

Optionally, files and folders created within the current with block can be removed automatically by setting clear_own to True (modified files and folders are not affected):

>>> with TestIO(clear_own=True):
...     open("testfile.txt", "w").close()
...     os.makedirs("testfolder")
...     print(os.path.exists("testfile.txt"),
...           os.path.exists("testfolder"))
True True
>>> with TestIO(clear_own=True):
...     print(os.path.exists("testfile.txt"),
...           os.path.exists("testfolder"))
True False

Alternatively, all files and folders contained in folder iotesting can be removed after leaving the with block:

>>> with TestIO(clear_all=True):
...     os.makedirs("testfolder")
...     print(os.path.exists("testfile.txt"),
...           os.path.exists("testfolder"))
True True
>>> with TestIO(clear_own=True):
...     print(os.path.exists("testfile.txt"),
...           os.path.exists("testfolder"))
False False

Alternatively, to just clear the iotesting folder, one can call method clear():

>>> with TestIO():
...     open("testfile.txt", "w").close()
...     print(os.path.exists("testfile.txt"))
True
>>> TestIO.clear()
>>> with TestIO():
...     print(os.path.exists("testfile.txt"))
False

Note that class TestIO copies any generated .coverage files into the test subpackage to ensure no covered lines are reported as uncovered.

classmethod clear() None[source]

Remove all files from the iotesting folder.

hydpy.core.testtools.make_abc_testable(abstract: type[T]) type[T][source]

Return a concrete version of the given abstract base class for testing purposes.

Abstract base classes cannot be (and, at least in production code, should not be) instantiated:

>>> from hydpy.core.netcdftools import NetCDFVariable
>>> var = NetCDFVariable()  
Traceback (most recent call last):
...
TypeError: Can't instantiate abstract class NetCDFVariable with...

However, it is convenient to do so for testing (partly) abstract base classes in doctests. The derived class returned by function make_abc_testable() is identical to the original one, except that its protection against initialisation is disabled:

>>> from hydpy import make_abc_testable, classname
>>> var = make_abc_testable(NetCDFVariable)("filepath")

To avoid confusion, make_abc_testable() appends an underscore to the original class name:

>>> classname(var)
'NetCDFVariable_'
hydpy.core.testtools.mock_datetime_now(testdatetime)[source]

Let class method now() of class datetime of module datetime return the given date for testing purposes within a “with-block”.

>>> import datetime
>>> testdate = datetime.datetime(2000, 10, 1, 12, 30, 0, 999)
>>> testdate == datetime.datetime.now()
False
>>> from hydpy import classname
>>> classname(datetime.datetime)
'datetime'
>>> from hydpy.core.testtools import mock_datetime_now
>>> with mock_datetime_now(testdate):
...     testdate == datetime.datetime.now()
...     classname(datetime.datetime)
True
'_DateTime'
>>> testdate == datetime.datetime.now()
False
>>> classname(datetime.datetime)
'datetime'

The following test shows that mocking datetime does not interfere with initialising Date objects and that the relevant exceptions are properly handled:

>>> from hydpy import Date
>>> with mock_datetime_now(testdate):
...     Date(datetime.datetime(2000, 10, 1, 12, 30, 0, 999))
Traceback (most recent call last):
...
ValueError: While trying to initialise a `Date` object based on argument `2000-10-01 12:30:00.000999`, the following error occurred: For `Date` instances, the microsecond must be zero, but for the given `datetime` object it is `999` instead.
>>> classname(datetime.datetime)
'datetime'
class hydpy.core.testtools.NumericalDifferentiator(*, xsequence: ModelSequence, ysequences: Iterable[ModelSequence], methods: Iterable[Method], dx: float = 1e-06, method: Literal['forward', 'central', 'backward'] = 'forward')[source]

Bases: object

Approximate the derivatives of ModelSequence values based on the finite difference approach.

Class NumericalDifferentiator is intended for testing purposes only. See, for example, the documentation on method Calc_RHMDH_V1, which uses a NumericalDifferentiator object to validate that this method calculates the derivative of sequence RHM (ysequence) with respect to sequence H (xsequence) correctly. Therefore, it must know the relationship between RHM and H, which is defined by method Calc_RHM_V1.

See also the documentation on method Calc_AMDH_UMDH_V1, which explains how to apply class NumericalDifferentiator to multiple target sequences (ysequences). Note that, to calculate the correct derivatives of sequences AM and UM, we need to pass not only method Calc_AM_UM_V1 but also methods Calc_RHM_V1 and Calc_RHV_V1, as sequences RHM and RHV, which are required for calculating AM and UM, depend on H themselves.

Numerical approximations of derivatives are of limited precision. NumericalDifferentiator achieves second-order accuracy with the applied finite difference coefficients. If the results are too inaccurate, you might improve them by changing the finite difference method (backward or central instead of forward) or by adjusting the default interval width dx.
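
The accuracy argument can be illustrated independently of HydPy with plain NumPy (our own sketch, not part of the NumericalDifferentiator API): when reducing dx, the central difference error shrinks much faster than the forward difference error, so the central estimate is the more accurate one in both cases:

>>> import numpy
>>> def forward(f, x, dx):
...     return (f(x + dx) - f(x)) / dx
>>> def central(f, x, dx):
...     return (f(x + dx) - f(x - dx)) / (2.0 * dx)
>>> true = numpy.cos(1.0)  # the exact derivative of sin at x = 1
>>> for dx in (1e-2, 1e-3):
...     print(abs(forward(numpy.sin, 1.0, dx) - true) >
...           abs(central(numpy.sin, 1.0, dx) - true))
True
True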

hydpy.core.testtools.update_integrationtests(applicationmodel: ModuleType | str, resultfilepath: str = 'update_integrationtests.txt') None[source]

Write the docstring of the given application model, updated with the current simulation results, to file.

Sometimes, even tiny model-related changes bring a great deal of work concerning HydPy's integration test strategy. For example, if you modify the value of a fixed parameter, the results of possibly dozens of integration tests of your application model might become wrong. In such situations, function update_integrationtests() helps you replace all integration test results at once. To this end, it calculates the new results, updates the old module docstring, and writes it to the given result file. You only need to copy-paste the written result into the affected module. But be aware that function update_integrationtests() cannot guarantee the correctness of the new results. Whenever in doubt whether the new results are really correct under all possible conditions, you should inspect and replace each integration test result manually.

In the following example, we disable method Pass_Outputs_V1 temporarily. Accordingly, application model conv_nn does not pass any output to its outlet nodes, which is why the last four columns of both integration test tables now contain zero values only (we can perform this mocking-based test in Python mode only):

>>> from hydpy import pub, TestIO, update_integrationtests
>>> from unittest import mock
>>> pass_output = "hydpy.models.conv.conv_model.Pass_Outputs_V1.__call__"
>>> with TestIO(), pub.options.usecython(False), mock.patch(pass_output):
...     update_integrationtests("conv_nn", "temp.txt")
...     with open("temp.txt") as resultfile:
...         print(resultfile.read())  
Number of replacements: 2

... test()
|       date |      inputs |                outputs | in1 | in2 | out1 | out2 | out3 | out4 |
---------------------------------------------------------------------------------------------
| 2000-01-01 | 1.0     4.0 | 1.0  4.0  1.0      1.0 | 1.0 | 4.0 |  0.0 |  0.0 |  0.0 |  0.0 |
| 2000-01-02 | 2.0     nan | 2.0  nan  2.0      2.0 | 2.0 | nan |  0.0 |  0.0 |  0.0 |  0.0 |
| 2000-01-03 | nan     nan | nan  nan  nan      nan | nan | nan |  0.0 |  0.0 |  0.0 |  0.0 |

... test()
|       date |      inputs |                outputs | in1 | in2 | out1 | out2 | out3 | out4 |
---------------------------------------------------------------------------------------------
| 2000-01-01 | 1.0     4.0 | 1.0  4.0  1.0      1.0 | 1.0 | 4.0 |  0.0 |  0.0 |  0.0 |  0.0 |
| 2000-01-02 | 2.0     nan | 2.0  2.0  2.0      2.0 | 2.0 | nan |  0.0 |  0.0 |  0.0 |  0.0 |
| 2000-01-03 | nan     nan | nan  nan  nan      nan | nan | nan |  0.0 |  0.0 |  0.0 |  0.0 |
hydpy.core.testtools.check_methodorder(model: Model, indent: int = 0) str[source]

Check that HydPy calls the methods of the given application model in the correct order for each simulation step.

The purpose of this function is to help model developers ensure that each method uses only the values of those sequences that have been calculated by other methods beforehand. HydPy's test routines apply check_methodorder() automatically on each available application model. Alternatively, you can also execute it at the end of the docstring of an individual application model “manually”, which suppresses the automatic execution and allows you to check and discuss exceptional cases where check_methodorder() generates false alarms.

Function check_methodorder() relies on the class constants REQUIREDSEQUENCES, UPDATEDSEQUENCES, and RESULTSEQUENCES of all relevant Method subclasses. Hence, the correctness of its results depends on the correctness of these tuples. However, even if those tuples are well-defined, one cannot expect check_methodorder() to catch all kinds of order-related errors. For example, consider the case where one method calculates only some values of a multi-dimensional sequence and another method the remaining ones. check_methodorder() would not report anything if a third method, which relies on the completeness of the sequence's values, were called after the first but before the second method.

We use the quite complex model lland_knauf as an example. check_methodorder() does not report any problems:

>>> from hydpy.core.testtools import check_methodorder
>>> from hydpy.models.lland_knauf import Model
>>> print(check_methodorder(Model))

To show how check_methodorder() reports errors, we modify the RESULTSEQUENCES tuples of methods Calc_TKor_V1, Calc_TZ_V1, and Calc_QA_V1:

>>> from hydpy.models.lland.lland_model import (
...     Calc_TKor_V1, Calc_TZ_V1, Calc_QA_V1)
>>> results_tkor = Calc_TKor_V1.RESULTSEQUENCES
>>> results_tz = Calc_TZ_V1.RESULTSEQUENCES
>>> results_qa = Calc_QA_V1.RESULTSEQUENCES
>>> Calc_TKor_V1.RESULTSEQUENCES = ()
>>> Calc_TZ_V1.RESULTSEQUENCES = ()
>>> Calc_QA_V1.RESULTSEQUENCES += results_tkor

Now, none of the relevant methods calculates the value of sequence TZ. For TKor, there is still a method (Calc_QA_V1) calculating its values, but too late within the simulation step:

>>> print(check_methodorder(Model))  
Method Calc_SaturationVapourPressure_V1 requires the following sequences, which are not among the result sequences of any of its predecessors: TKor
...
Method Update_ESnow_V1 requires the following sequences, which are not among the result sequences of any of its predecessors: TKor and TZ

To tidy up, we need to revert the above changes:

>>> Calc_TKor_V1.RESULTSEQUENCES = results_tkor
>>> Calc_TZ_V1.RESULTSEQUENCES = results_tz
>>> Calc_QA_V1.RESULTSEQUENCES = results_qa
>>> print(check_methodorder(Model))
hydpy.core.testtools.check_selectedvariables(method: type[Method], indent: int = 0) str[source]

Perform consistency checks regarding the Parameter and Sequence_ subclasses selected by the given Method subclass.

The purpose of this function is to help model developers ensure that the class tuples CONTROLPARAMETERS, DERIVEDPARAMETERS, FIXEDPARAMETERS, SOLVERPARAMETERS, REQUIREDSEQUENCES, UPDATEDSEQUENCES, and RESULTSEQUENCES contain the correct parameter and sequence subclasses. HydPy's test routines apply check_selectedvariables() automatically on each method of each available application model. Alternatively, you can also execute it at the end of the docstring of an individual Method subclass “manually”, which suppresses the automatic execution and allows you to check and discuss exceptional cases where check_selectedvariables() generates false alarms.

Do not expect check_selectedvariables() to catch all possible errors. Also, false positives might occur. However, in our experience, function check_selectedvariables() is of great help to prevent the most common mistakes when defining the parameter and sequence classes relevant for a specific method.

As an example, we select method Calc_WindSpeed10m_V1 of base model evap. check_selectedvariables() does not report any problems:

>>> from hydpy.core.testtools import check_selectedvariables
>>> from hydpy.models.evap.evap_model import Calc_WindSpeed10m_V1
>>> print(check_selectedvariables(Calc_WindSpeed10m_V1))

To show how check_selectedvariables() reports errors, we clear the RESULTSEQUENCES tuple of method Calc_WindSpeed10m_V1. Now check_selectedvariables() detects the usage of the factor sequence object windspeed10m within the source code of method Calc_WindSpeed10m_V1, which is available neither within the REQUIREDSEQUENCES, nor the UPDATEDSEQUENCES, nor the RESULTSEQUENCES tuple:

>>> resultseqs = Calc_WindSpeed10m_V1.RESULTSEQUENCES
>>> Calc_WindSpeed10m_V1.RESULTSEQUENCES = ()
>>> print(check_selectedvariables(Calc_WindSpeed10m_V1))
Definitely missing: windspeed10m

After putting the wrong factor sequence class WindSpeed2m into the tuple, we get an additional warning pointing to our mistake:

>>> from hydpy.models.evap.evap_factors import WindSpeed2m
>>> Calc_WindSpeed10m_V1.RESULTSEQUENCES = WindSpeed2m,
>>> print(check_selectedvariables(Calc_WindSpeed10m_V1))
Definitely missing: windspeed10m
Possibly erroneously selected (RESULTSEQUENCES): WindSpeed2m

Method Calc_WindSpeed10m_V1 uses Return_AdjustedWindSpeed_V1 as a submethod. Hence, Calc_WindSpeed10m_V1 most likely needs to select each variable selected by Return_AdjustedWindSpeed_V1. After adding additional variables to the DERIVEDPARAMETERS tuple of Return_AdjustedWindSpeed_V1, we get another warning message:

>>> from hydpy.models.evap.evap_model import Return_AdjustedWindSpeed_V1
>>> from hydpy.models.evap.evap_derived import Days, Hours, Seconds
>>> derivedpars = Return_AdjustedWindSpeed_V1.DERIVEDPARAMETERS
>>> Return_AdjustedWindSpeed_V1.DERIVEDPARAMETERS = Days, Hours, Seconds
>>> print(check_selectedvariables(Calc_WindSpeed10m_V1))
Definitely missing: windspeed10m
Possibly missing (DERIVEDPARAMETERS):
    Return_AdjustedWindSpeed_V1: Seconds, Hours, and Days
Possibly erroneously selected (RESULTSEQUENCES): WindSpeed2m

Finally, check_selectedvariables() checks for duplicates both within and between the different tuples:

>>> from hydpy.models.evap.evap_inputs import WindSpeed, RelativeHumidity
>>> requiredseqs = Calc_WindSpeed10m_V1.REQUIREDSEQUENCES
>>> Calc_WindSpeed10m_V1.REQUIREDSEQUENCES = WindSpeed, WindSpeed, RelativeHumidity
>>> Calc_WindSpeed10m_V1.UPDATEDSEQUENCES = RelativeHumidity,
>>> print(check_selectedvariables(Calc_WindSpeed10m_V1))
Definitely missing: windspeed10m
Possibly missing (DERIVEDPARAMETERS):
    Return_AdjustedWindSpeed_V1: Seconds, Hours, and Days
Possibly erroneously selected (REQUIREDSEQUENCES): RelativeHumidity
Possibly erroneously selected (UPDATEDSEQUENCES): RelativeHumidity
Possibly erroneously selected (RESULTSEQUENCES): WindSpeed2m
Duplicates: RelativeHumidity and WindSpeed

To tidy up, we need to revert the above changes:

>>> Calc_WindSpeed10m_V1.RESULTSEQUENCES = resultseqs
>>> Return_AdjustedWindSpeed_V1.DERIVEDPARAMETERS = derivedpars
>>> Calc_WindSpeed10m_V1.REQUIREDSEQUENCES = requiredseqs
>>> Calc_WindSpeed10m_V1.UPDATEDSEQUENCES = ()
>>> print(check_selectedvariables(Calc_WindSpeed10m_V1))

Some methods of base model arma, such as Pick_Q_V1, rely on the len attribute of 1-dimensional sequences. Function check_selectedvariables() does not report false alarms in such cases:

>>> from hydpy.models.arma.arma_model import Pick_Q_V1
>>> print(check_selectedvariables(Pick_Q_V1))

Some methods, such as Calc_PotentialEvapotranspiration_V1 of base model evap, rely on the entrymin attribute of KeywordParameter1D instances. Function check_selectedvariables() does not report false alarms in such cases:

>>> from hydpy.models.evap.evap_model import Calc_PotentialEvapotranspiration_V1
>>> from hydpy.models.evap.evap_control import MonthFactor
>>> MonthFactor in Calc_PotentialEvapotranspiration_V1.CONTROLPARAMETERS
True
>>> print(check_selectedvariables(Calc_PotentialEvapotranspiration_V1))

Some methods, such as Calc_PotentialEvapotranspiration_V2 of base model evap, rely on the rowmin or the columnmin attribute of KeywordParameter2D instances. Function check_selectedvariables() does not report false alarms in such cases:

>>> from hydpy.models.evap.evap_model import Calc_PotentialEvapotranspiration_V2
>>> from hydpy.models.evap.evap_control import LandMonthFactor
>>> LandMonthFactor in Calc_PotentialEvapotranspiration_V2.CONTROLPARAMETERS
True
>>> print(check_selectedvariables(Calc_PotentialEvapotranspiration_V2))

Some methods, such as Update_ESnow_V1 of base model lland, update a sequence (meaning, they require its old value and calculate a new one), but their submethods (in this case Return_BackwardEulerError_V1) just require them as input. Function check_selectedvariables() does not report false alarms in such cases:

>>> from hydpy.models.lland.lland_model import Update_ESnow_V1
>>> print(check_selectedvariables(Update_ESnow_V1))

Similarly, methods such as Perform_GARTO_V1 calculate sequence values from scratch but require submethods for updating them:

>>> from hydpy.models.ga.ga_model import Perform_GARTO_V1
>>> print(check_selectedvariables(Perform_GARTO_V1))

If an AutoMethod subclass selects multiple submethods and one requires sequence values that are calculated by another one, check_selectedvariables() does not report this as a problem if they are listed in the correct order, as is the case for method Determine_InterceptionEvaporation_V1:

>>> from hydpy.models.evap.evap_model import Determine_InterceptionEvaporation_V1
>>> print(check_selectedvariables(Determine_InterceptionEvaporation_V1))

However, when reversing the submethod order, check_selectedvariables() complains that Determine_InterceptionEvaporation_V1 does not specify all requirements of the first submethod Calc_InterceptionEvaporation_V1, which would be calculated too late by the second (Calc_InterceptedWater_V1) and the third (Calc_PotentialInterceptionEvaporation_V3) submethod:

>>> submethods = Determine_InterceptionEvaporation_V1.SUBMETHODS
>>> Determine_InterceptionEvaporation_V1.SUBMETHODS = tuple(reversed(submethods))
>>> print(check_selectedvariables(Determine_InterceptionEvaporation_V1))
Possibly missing (REQUIREDSEQUENCES):
    Calc_InterceptionEvaporation_V1: InterceptedWater and PotentialInterceptionEvaporation
>>> Determine_InterceptionEvaporation_V1.SUBMETHODS = submethods
hydpy.core.testtools.perform_consistencychecks(applicationmodel: ModuleType | str, indent: int = 0) str[source]

Perform all available consistency checks for the given application model.

At the moment, function perform_consistencychecks() calls function check_selectedvariables() for each relevant model method and function check_methodorder() for the application model itself. Note that perform_consistencychecks() executes only those checks not already executed in the doctest of the respective method or model. This alternative allows model developers to perform the tests themselves whenever exceptional cases result in misleading error reports and to discuss any related potential pitfalls in the official documentation.

As an example, we apply perform_consistencychecks() on the application model lland_knauf. It does not report any potential problems (not already discussed in the documentation on the individual model methods):

>>> from hydpy.core.testtools import perform_consistencychecks
>>> print(perform_consistencychecks("lland_knauf"))

To show how perform_consistencychecks() reports errors, we modify the RESULTSEQUENCES tuple of method Calc_NKor_V1:

>>> from hydpy.models.lland.lland_model import Calc_NKor_V1
>>> resultsequences = Calc_NKor_V1.RESULTSEQUENCES
>>> Calc_NKor_V1.RESULTSEQUENCES = ()
>>> print(perform_consistencychecks("lland_knauf"))
Potential consistency problems for individual methods:
   Method Calc_NKor_V1:
        Definitely missing: nkor
Potential consistency problems between methods:
    Method Calc_NBes_Inzp_V1 requires the following sequences, which are not among the result sequences of any of its predecessors: NKor
    Method Calc_QBGZ_V1 requires the following sequences, which are not among the result sequences of any of its predecessors: NKor
    Method Calc_QDGZ_V1 requires the following sequences, which are not among the result sequences of any of its predecessors: NKor
    Method Calc_QAH_V1 requires the following sequences, which are not among the result sequences of any of its predecessors: NKor

To tidy up, we need to revert the above changes:

>>> Calc_NKor_V1.RESULTSEQUENCES = resultsequences
>>> print(perform_consistencychecks("lland_knauf"))
hydpy.core.testtools.save_autofig(filename: str, figure: Figure | None = None) None[source]

Save a figure automatically generated during testing in the special autofig subpackage so that Sphinx can include it in the documentation later.

When passing no figure, function save_autofig() takes the currently active one.
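
A hedged sketch of the intended workflow (the filename is a placeholder; we assume matplotlib's pyplot is available in the test environment):

>>> from matplotlib import pyplot
>>> from hydpy.core.testtools import save_autofig
>>> _ = pyplot.plot([1.0, 3.0, 2.0, 4.0])  # create a figure worth documenting
>>> save_autofig("my_example_figure.png")  # store it in the autofig subpackage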

hydpy.core.testtools.warn_later() Iterator[None][source]

Suppress warnings and print them upon exit.

The context manager warn_later() helps demonstrate functionalities in doctests that emit warnings:

>>> import warnings
>>> def get_number():
...     warnings.warn("This is a warning.")
...     return 1
>>> get_number()
Traceback (most recent call last):
...
UserWarning: This is a warning.
>>> from hydpy.core.testtools import warn_later
>>> with warn_later():
...     get_number()
1
UserWarning: This is a warning.
hydpy.core.testtools.print_filestructure(dirpath: str) None[source]

Print the file structure of the given directory path in alphabetical order.

>>> import os
>>> from hydpy import data
>>> dirpath = os.path.join(data.__path__[0], "HydPy-H-Lahn")
>>> from hydpy.core.testtools import print_filestructure
>>> print_filestructure(dirpath)  
* ...hydpy/data/HydPy-H-Lahn
    - conditions
        - init_1996_01_01_00_00_00
            + land_dill_assl.py
            ...
            + stream_lahn_marb_lahn_leun.py
    - control
        - default
            + land.py
            ...
            + stream_lahn_marb_lahn_leun.py
    + multiple_runs.xml
    + multiple_runs_alpha.xml
    - network
        - default
            + headwaters.py
            + nonheadwaters.py
            + streams.py
    - series
        - default
            + dill_assl_obs_q.asc
            ...
            + obs_q.nc
    + single_run.xml
    + single_run.xmlt
hydpy.core.testtools.prepare_io_example_1() tuple[Nodes, Elements][source]

Prepare an IO example configuration for testing purposes.

Function prepare_io_example_1() is intended for testing the functioning of HydPy and thus should be of interest to framework developers only. It uses the main models lland_dd, lland_knauf, and hland_96 and the submodel evap_aet_morsim. Here, we apply prepare_io_example_1() and briefly discuss different aspects of its generated data:

>>> from hydpy.core.testtools import prepare_io_example_1
>>> nodes, elements = prepare_io_example_1()

It defines a short initialisation period of four days:

>>> from hydpy import pub
>>> pub.timegrids
Timegrids("2000-01-01 00:00:00",
          "2000-01-05 00:00:00",
          "1d")

It prepares an empty directory for IO testing:

>>> import os
>>> from hydpy import repr_, TestIO
>>> with TestIO():  
...     repr_(pub.sequencemanager.currentpath)
...     os.listdir("project/series/default")
'...iotesting/project/series/default'
[]

It returns four Element objects handling either application model lland_dd, lland_knauf, or hland_96:

>>> for element in elements:
...     print(element.name, element.model)
element1 lland_dd
element2 lland_dd
element3 lland_knauf
element4 hland_96

The lland_knauf instance has a submodel of type evap_aet_morsim:

>>> print(elements.element3.model.aetmodel.name)
evap_aet_morsim

It also returns two Node objects handling the variables Q and T:

>>> for node in nodes:
...     print(node.name, node.variable)
node1 Q
node2 T

It generates artificial time series data for the input sequence Nied, the flux sequence NKor, and the state sequence BoWa of each lland model instance, the equally named wind speed sequences of lland_knauf and evap_aet_morsim, the state sequence SP of the hland_96 model instance, and the Sim sequence of each node instance. For precise test results, all generated values are unique:

>>> nied1 = elements.element1.model.sequences.inputs.nied
>>> nied1.series
InfoArray([0., 1., 2., 3.])
>>> nkor1 = elements.element1.model.sequences.fluxes.nkor
>>> nkor1.series
InfoArray([[12.],
           [13.],
           [14.],
           [15.]])
>>> bowa3 = elements.element3.model.sequences.states.bowa
>>> bowa3.series
InfoArray([[48., 49., 50.],
           [51., 52., 53.],
           [54., 55., 56.],
           [57., 58., 59.]])
>>> sim2 = nodes.node2.sequences.sim
>>> sim2.series
InfoArray([64., 65., 66., 67.])
>>> sp4 = elements.element4.model.sequences.states.sp
>>> sp4.series
InfoArray([[[68., 69., 70.],
            [71., 72., 73.]],

           [[74., 75., 76.],
            [77., 78., 79.]],

           [[80., 81., 82.],
            [83., 84., 85.]],

           [[86., 87., 88.],
            [89., 90., 91.]]])
>>> v_l = elements.element3.model.sequences.inputs.windspeed
>>> v_l.series
InfoArray([68., 69., 70., 71.])
>>> v_e = elements.element3.model.aetmodel.sequences.inputs.windspeed
>>> v_e.series
InfoArray([68., 69., 70., 71.])

All sequences carry ndarray objects with (deep) copies of the time series data for testing:

>>> import numpy
>>> assert numpy.all(nied1.series == nied1.testarray)
>>> assert numpy.all(nkor1.series == nkor1.testarray)
>>> assert numpy.all(bowa3.series == bowa3.testarray)
>>> assert numpy.all(sim2.series == sim2.testarray)
>>> assert numpy.all(sp4.series == sp4.testarray)
>>> assert numpy.all(v_l.series == v_l.testarray)
>>> assert numpy.all(v_e.series == v_e.testarray)
>>> bowa3.series[1, 2] = -999.0
>>> assert not numpy.all(bowa3.series == bowa3.testarray)
hydpy.core.testtools.prepare_full_example_1(dirpath: str | None = None) None[source]

Prepare the HydPy-H-Lahn example project on disk.

By default, function prepare_full_example_1() copies the original project data into the iotesting directory, intended for performing automated tests on real-world data. The following doctest shows the generated folder structure:

>>> from hydpy.core.testtools import prepare_full_example_1
>>> prepare_full_example_1()
>>> from hydpy import TestIO
>>> import os
>>> with TestIO():
...     print("root:", *sorted(os.listdir(".")))
...     for folder in ("control", "conditions", "series"):
...         print(f"HydPy-H-Lahn/{folder}:",
...               *sorted(os.listdir(f"HydPy-H-Lahn/{folder}")))
root: HydPy-H-Lahn __init__.py
HydPy-H-Lahn/control: default
HydPy-H-Lahn/conditions: init_1996_01_01_00_00_00
HydPy-H-Lahn/series: default

Pass an alternative path if you prefer to work in another directory:

>>> prepare_full_example_1(dirpath=".")
hydpy.core.testtools.prepare_full_example_2(lastdate: timetools.DateConstrArg = '1996-01-05') tuple[hydpytools.HydPy, pubtools.Pub, type[TestIO]][source]

Prepare the HydPy-H-Lahn project on disk and in RAM.

Function prepare_full_example_2() is an extension of function prepare_full_example_1(). Besides preparing the project data of the HydPy-H-Lahn example project, it performs all necessary steps to start a simulation run. Hence, it returns a readily prepared HydPy instance as well as, for convenience, module pub and class TestIO:

>>> from hydpy.core.testtools import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
>>> hp.nodes
Nodes("dill_assl", "lahn_kalk", "lahn_leun", "lahn_marb")
>>> hp.elements
Elements("land_dill_assl", "land_lahn_kalk", "land_lahn_leun",
         "land_lahn_marb", "stream_dill_assl_lahn_leun",
         "stream_lahn_leun_lahn_kalk", "stream_lahn_marb_lahn_leun")
>>> pub.timegrids
Timegrids("1996-01-01 00:00:00",
          "1996-01-05 00:00:00",
          "1d")
>>> from hydpy import classname
>>> classname(TestIO)
'TestIO'

Function prepare_full_example_2() is primarily intended for testing and thus does not allow for many configurations except changing the end date of the initialisation period:

>>> hp, pub, TestIO = prepare_full_example_2(lastdate="1996-01-02")
>>> pub.timegrids
Timegrids("1996-01-01 00:00:00",
          "1996-01-02 00:00:00",
          "1d")