netcdftools¶
This module extends the features of module filetools for loading data from and storing
data to netCDF4 files, consistent with the NetCDF Climate and Forecast (CF) Metadata
Conventions.
Usually, we only indirectly apply the features implemented in this module. Here, we
demonstrate the underlying functionalities, which can be subsumed by the following
three steps:
1. Call either method open_netcdfreader() or method open_netcdfwriter() of the
   SequenceManager object available in module pub to prepare a NetCDFInterfaceReader
   object for reading or a NetCDFInterfaceWriter object for writing.
2. Call either the usual reading or writing methods of other HydPy classes, like
   method load_fluxseries() of class HydPy or method save_stateseries() of class
   Elements. The prepared interface object collects all requests of those sequences
   one wants to read from or write to NetCDF files.
3. Finalise reading or writing by calling either method close_netcdfreader() or
   close_netcdfwriter().
Step 2 is a logging process only, telling the interface object which data needs to be
read or written, while step 3 triggers the actual reading from or writing to NetCDF
files. During step 2, the interface object and its subobjects of type
NetCDFVariableFlatReader, NetCDFVariableFlatWriter, or NetCDFVariableAggregated are
accessible, allowing one to inspect their current state or modify their behaviour.
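In code, the three steps boil down to the following pattern (an illustrative sketch
only, not one of the tested examples below; `elements` stands for an already prepared
Elements object like the one used in the following examples):
>>> pub.sequencemanager.open_netcdfwriter()   # step 1: prepare the interface object
>>> elements.save_stateseries()               # step 2: log the relevant sequences
>>> pub.sequencemanager.close_netcdfwriter()  # step 3: write the logged data to NetCDF files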
The following real code examples show how to perform these three steps both for reading
and writing data, based on the example configuration defined by function
prepare_io_example_1()
:
>>> from hydpy.core.testtools import prepare_io_example_1
>>> nodes, elements = prepare_io_example_1()
We prepare a NetCDFInterfaceWriter
object for writing data via method
open_netcdfwriter()
:
>>> from hydpy import pub
>>> pub.sequencemanager.open_netcdfwriter()
We tell the SequenceManager
object to write all the time series data to NetCDF files:
>>> pub.sequencemanager.filetype = "nc"
We store all the time series handled by the Node
and Element
objects of the example
dataset by calling save_allseries()
of class Nodes
and
save_allseries()
of class Elements
. (In real cases, you would not write the
with TestIO(): line. This code block makes sure we pollute the IO testing directory
instead of our current working directory):
>>> from hydpy import TestIO
>>> with TestIO():
... nodes.save_allseries()
... elements.save_allseries()
We again log all sequences, but after telling the SequenceManager
object to average
each time series spatially:
ToDo: Support spatial averaging for sequences of submodels.
>>> with TestIO(), pub.sequencemanager.aggregation("mean"):
... nodes.save_allseries()
... elements.element3.model.aetmodel.prepare_allseries(allocate_ram=False)
... elements.save_allseries()
... elements.element3.model.aetmodel.prepare_allseries(allocate_ram=True)
We can now navigate into the details of the logged time series data via the
NetCDFInterfaceWriter
object and its NetCDFVariableFlatWriter
and
NetCDFVariableAggregated
subobjects. For example, we can query the logged flux
sequence objects of type NKor
belonging to application model lland_dd
(those of elements element1 and element2; the trailing numbers are the indices of
the relevant hydrological response units):
>>> writer = pub.sequencemanager.netcdfwriter
>>> writer.lland_dd_flux_nkor.subdevicenames
('element1_0', 'element2_0', 'element2_1')
In the example discussed here, all sequences belong to the same folder (default). Storing sequences in separate folders means storing them in separate NetCDF files. In such cases, you must include the folder in the attribute name:
>>> writer.foldernames
('default',)
>>> writer.default_lland_dd_flux_nkor.subdevicenames
('element1_0', 'element2_0', 'element2_1')
We close the NetCDFInterfaceWriter
object, which is the moment when the writing
process happens. After that, the interface object is no longer available:
>>> from hydpy import TestIO
>>> with TestIO():
... pub.sequencemanager.close_netcdfwriter()
>>> pub.sequencemanager.netcdfwriter
Traceback (most recent call last):
...
hydpy.core.exceptiontools.AttributeNotReady: The sequence file manager currently handles no NetCDF writer object. Consider applying the `pub.sequencemanager.netcdfwriting` context manager first (search in the documentation for help).
We set the time series values of two test sequences to zero to demonstrate that reading the data back in actually works:
>>> nodes.node2.sequences.sim.series = 0.0
>>> elements.element2.model.sequences.fluxes.nkor.series = 0.0
We move up a gear and prepare a NetCDFInterfaceReader
object for reading data,
log all NodeSequence
and ModelSequence
objects, and read their time series data
from the created NetCDF file. We temporarily disable the checkseries
option
to prevent raising an exception when reading incomplete data from the files:
>>> with TestIO(), pub.options.checkseries(False):
... pub.sequencemanager.open_netcdfreader()
... nodes.load_simseries()
... elements.load_allseries()
... pub.sequencemanager.close_netcdfreader()
We check if the data is available via the test sequences again:
>>> nodes.node2.sequences.sim.series
InfoArray([64., 65., 66., 67.])
>>> elements.element2.model.sequences.fluxes.nkor.series
InfoArray([[16., 17.],
[18., 19.],
[20., 21.],
[22., 23.]])
>>> pub.sequencemanager.netcdfreader
Traceback (most recent call last):
...
hydpy.core.exceptiontools.AttributeNotReady: The sequence file manager currently handles no NetCDF reader object. Consider applying the `pub.sequencemanager.netcdfreading` context manager first (search in the documentation for help).
We cannot invert spatial aggregation. Hence reading averaged time series is left for
postprocessing tools. To show that writing the averaged series worked, we access both
relevant NetCDF files more directly using the underlying NetCDF4 library (note that
averaging 1-dimensional time series as those of node sequence Sim
is allowed for the
sake of consistency):
>>> from numpy import array
>>> from hydpy import print_matrix
>>> from hydpy.core.netcdftools import netcdf4
>>> filepath = "project/series/default/sim_q_mean.nc"
>>> with TestIO(), netcdf4.Dataset(filepath) as ncfile:
... print_matrix(array(ncfile["sim_q_mean"][:]))
| 60.0 |
| 61.0 |
| 62.0 |
| 63.0 |
>>> from hydpy import print_vector
>>> filepath = "project/series/default/lland_dd_flux_nkor_mean.nc"
>>> with TestIO(), netcdf4.Dataset(filepath) as ncfile:
... print_vector(array(ncfile["lland_dd_flux_nkor_mean"][:])[:, 1])
16.5, 18.5, 20.5, 22.5
The previous examples relied on “model-specific” file names and variable names. The
documentation on class HydPy
introduces the standard “HydPy” convention as an
alternative for naming input time series files and demonstrates it for file types that
store the time series of single sequence instances. Here, we take the input sequence
Nied
as an example to show that one can use its standard name
PRECIPITATION
to read and write more generally named NetCDF files
and variables:
>>> print_vector(elements.element2.model.sequences.inputs.nied.series)
4.0, 5.0, 6.0, 7.0
>>> pub.sequencemanager.convention = "HydPy"
>>> with TestIO():
... elements.save_inputseries()
>>> filepath = "project/series/default/precipitation.nc"
>>> with TestIO(), netcdf4.Dataset(filepath) as ncfile:
... print_vector(array(ncfile["precipitation"][:])[:, 1])
4.0, 5.0, 6.0, 7.0
>>> elements.element2.model.sequences.inputs.nied.series = 0.0
>>> with TestIO(), pub.options.checkseries(False):
... elements.load_inputseries()
>>> print_vector(elements.element2.model.sequences.inputs.nied.series)
4.0, 5.0, 6.0, 7.0
In the last example, the methods load_inputseries()
and
save_inputseries()
of class Elements
opened and closed the required NetCDF
reader and writer objects automatically by using the context managers
netcdfreading()
and netcdfwriting()
. Such comfort is
only available for these and the similar methods of the classes HydPy
, Elements
,
and Nodes
. If you, for example, apply the load_series()
or the
save_series()
method of individual IOSequence
instances, you must activate
netcdfreading()
or netcdfwriting()
manually. This
discomfort is intentional and should help prevent accidentally opening and closing
the same NetCDF file repeatedly, which could result in an immense waste of computation
time. The following example shows how to apply these context managers manually and
that this does not conflict with using methods that could automatically open and close
NetCDF reader and writer objects:
>>> sequences = elements.element2.model.sequences
>>> sequences.inputs.nied.series = 1.0, 3.0, 5.0, 7.0
>>> sequences.fluxes.qah.series = 2.0, 4.0, 6.0, 8.0
>>> sequences.fluxes.qa.series = 3.0, 5.0, 7.0, 9.0
>>> with TestIO(), pub.sequencemanager.netcdfwriting():
... sequences.fluxes.qa.save_series()
... elements.save_inputseries()
... sequences.fluxes.qah.save_series()
>>> sequences.inputs.nied.series = 0.0
>>> sequences.fluxes.qah.series = 0.0
>>> sequences.fluxes.qa.series = 0.0
>>> with TestIO(), pub.options.checkseries(False), pub.sequencemanager.netcdfreading():
... sequences.fluxes.qah.load_series()
... elements.load_inputseries()
... sequences.fluxes.qa.load_series()
>>> print_vector(sequences.inputs.nied.series)
1.0, 3.0, 5.0, 7.0
>>> print_vector(sequences.fluxes.qah.series)
2.0, 4.0, 6.0, 8.0
>>> print_vector(sequences.fluxes.qa.series)
3.0, 5.0, 7.0, 9.0
Besides the testing-related specialities, the described workflow is more or less
standard but allows for different modifications. We illustrate them in the
documentation of the other features implemented in module netcdftools
and the
documentation on class SequenceManager
of module filetools
and class IOSequence
of module sequencetools
.
Using the NetCDF format allows reading or writing data “just in time” during simulation
runs. The documentation of class “HydPy” explains how to select and set the relevant
IOSequence
objects for this option. See the documentation on method
provide_jitaccess()
of class NetCDFInterfaceJIT
for more in-depth
information.
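As a rough sketch, such a just-in-time configuration could look like this (`hp` stands
for a prepared HydPy instance; the parameter names `read_jit` and `write_jit` are
assumptions here and not demonstrated in this section, so please consult the referenced
documentation for the authoritative workflow):
>>> hp.prepare_inputseries(allocate_ram=False, read_jit=True)   # read input data during the run
>>> hp.prepare_fluxseries(allocate_ram=True, write_jit=True)    # write flux data during the run
>>> hp.simulate()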
Module netcdftools
implements the following members:
TypeNetCDFVariable
Type variable.
summarise_ncfile()
Give a summary describing (a HydPy-compatible) NetCDF file.
str2chars()
Return a ndarray object containing the byte characters (second axis) of all given strings (first axis).
chars2str()
Inversion function of str2chars().
create_dimension()
Add a new dimension with the given name and length to the given NetCDF file.
create_variable()
Add a new variable with the given name, datatype, and dimensions to the given NetCDF file.
query_variable()
Return the variable with the given name from the given NetCDF file.
query_timegrid()
Return the Timegrid defined by the given NetCDF file.
query_array()
Return the data of the variable with the given name from the given NetCDF file.
get_filepath()
Return the path of the given NetCDF file.
JITAccessInfo
Helper class for structuring reading from or writing to a NetCDF file “just in time” during a simulation run for a specific NetCDFVariableFlat object.
JITAccessHandler
Handler used by the SequenceManager object available in module pub for reading data from and/or writing data to NetCDF files at each step of a simulation run.
Subdevice2Index
Return type of method query_subdevice2index().
NetCDFVariableInfo
Returned type of NetCDFVariableFlatWriter and NetCDFVariableAggregated when querying logged data via attribute access.
NetCDFVariable
Base class for handling single NetCDF variables of a single NetCDF file.
NetCDFVariableFlat
Base class for handling single “flat” NetCDF variables (which deal with unaggregated time series) of single NetCDF files.
NetCDFVariableFlatReader
Concrete class for reading data of single “flat” NetCDF variables (which deal with unaggregated time series) from single NetCDF files.
MixinVariableWriter
Mixin class for NetCDFVariableFlatWriter and NetCDFVariableAggregated.
NetCDFVariableFlatWriter
Concrete class for writing data from single “flat” NetCDF variables (which deal with unaggregated time series) to single NetCDF files.
NetCDFVariableAggregated
Concrete class for writing data from single “aggregated” NetCDF variables (which deal with aggregated time series) to single NetCDF files.
NetCDFInterfaceBase
Base class for interfaces between SequenceManager and multiple NetCDF files.
NetCDFInterfaceReader
Interface between SequenceManager and multiple NetCDFVariableFlatReader instances for reading data in one step.
NetCDFInterfaceWriter
Interface between SequenceManager and multiple NetCDFVariableFlatWriter or NetCDFVariableAggregated instances for writing data in one step.
NetCDFInterfaceJIT
Interface between SequenceManager and multiple NetCDFVariableFlatWriter instances for reading or writing data just in time.
add_netcdfreading()
Enable a function or method that can read time series from NetCDF files to automatically activate the netcdfreading() mode if not already done.
add_netcdfwriting()
Enable a function or method that can write time series to NetCDF files to automatically activate the netcdfwriting() mode if not already done.
- hydpy.core.netcdftools.dimmapping = {'nmb_characters': 'char_leng_name', 'nmb_subdevices': 'stations', 'nmb_timepoints': 'time'}¶
Dimension-related terms within NetCDF files.
You can change this mapping if it does not suit your requirements. For example, change the value of the keyword “nmb_subdevices” if you prefer to call this dimension “location” instead of “stations” within NetCDF files:
>>> from hydpy.core.netcdftools import dimmapping
>>> dimmapping["nmb_subdevices"] = "location"
- hydpy.core.netcdftools.varmapping = {'subdevices': 'station_id', 'timepoints': 'time'}¶
Variable-related terms within NetCDF files.
You can change this mapping if it does not suit your requirements. For example, change the value of the keyword “timepoints” if you prefer to call this variable “period” instead of “time” within NetCDF files:
>>> from hydpy.core.netcdftools import varmapping
>>> varmapping["timepoints"] = "period"
- hydpy.core.netcdftools.fillvalue = nan¶
Default fill value for writing NetCDF files.
You can set another float value before writing a NetCDF file:
>>> from hydpy.core import netcdftools
>>> netcdftools.fillvalue = -777.0
- hydpy.core.netcdftools.summarise_ncfile(ncfile: Dataset | str, /) str [source]¶
Give a summary describing (a HydPy-compatible) NetCDF file.
You can pass the file path:
>>> import os >>> from hydpy import data, repr_, summarise_ncfile >>> filepath = os.path.join( ... data.__path__[0], "HydPy-H-Lahn", "series", "default", "hland_96_input_p.nc" ... ) >>> print(repr_(summarise_ncfile(filepath))) GENERAL file path = .../hydpy/data/HydPy-H-Lahn/series/default/hland_96_input_p.nc file format = NETCDF4 disk format = HDF5 Attributes hydts_timeRef = begin title = Daily total precipitation sum HydPy-H-HBV96 model river Lahn ... DIMENSIONS stations = 4 time = 11384 str_len = 40 VARIABLES time dimensions = time shape = 11384 data type = float64 Attributes units = days since 1900-01-01 00:00:00 +0100 long_name = time axis = T calendar = standard hland_96_input_p dimensions = time, stations shape = 11384, 4 data type = float64 Attributes units = mm _FillValue = -9999.0 long_name = Daily Precipitation Sum station_id dimensions = stations, str_len shape = 4, 40 data type = |S1 Attributes long_name = station or node identification code station_names dimensions = stations, str_len shape = 4, 40 data type = |S1 Attributes long_name = station or node name river_names dimensions = stations, str_len shape = 4, 40 data type = |S1 Attributes long_name = river name TIME GRID first date = 1989-11-01 00:00:00+01:00 last date = 2021-01-01 00:00:00+01:00 step size = 1d
Alternatively, you can pass a NetCDF4 Dataset object:
>>> from netCDF4 import Dataset >>> with Dataset(filepath, "r") as ncfile: ... print(repr_(summarise_ncfile(ncfile))) GENERAL file path = ...data/HydPy-H-Lahn/series/default/hland_96_input_p.nc ... _FillValue = -9999.0 ...
- hydpy.core.netcdftools.str2chars(strings: Sequence[str]) ndarray[Any, dtype[bytes_]] [source]¶
Return a ndarray object containing the byte characters (second axis) of all given strings (first axis).
>>> from hydpy.core.netcdftools import str2chars
>>> str2chars(['street', 'St.', 'Straße', 'Str.'])
array([[b's', b't', b'r', b'e', b'e', b't', b''],
       [b'S', b't', b'.', b'', b'', b'', b''],
       [b'S', b't', b'r', b'a', b'\xc3', b'\x9f', b'e'],
       [b'S', b't', b'r', b'.', b'', b'', b'']], dtype='|S1')
>>> str2chars([])
array([], shape=(0, 0), dtype='|S1')
- hydpy.core.netcdftools.chars2str(chars: ndarray[Any, dtype[bytes_]]) list[str] [source]¶
Inversion function of str2chars().
>>> from hydpy.core.netcdftools import chars2str
>>> chars2str([[b"s", b"t", b"r", b"e", b"e", b"t", b""], ... [b"S", b"t", b".", b"", b"", b"", b""], ... [b"S", b"t", b"r", b"a", b"\xc3", b"\x9f", b"e"], ... [b"S", b"t", b"r", b".", b"", b"", b""]]) ['street', 'St.', 'Straße', 'Str.']
>>> chars2str([])
[]
>>> chars2str([[b"s", b"t", b"r", b"e", b"e", b"t"], ... [b"S", b"t", b".", b"", b"", b""], ... [b"S", b"t", b"r", b"a", b"\xc3", b"e"], ... [b"S", b"t", b"r", b".", b"", b""]]) Traceback (most recent call last): ... ValueError: Cannot decode `b'Stra\xc3e'` (not UTF-8 compliant).
- hydpy.core.netcdftools.create_dimension(ncfile: Dataset, name: str, length: int) None [source]¶
Add a new dimension with the given name and length to the given NetCDF file.
Essentially,
create_dimension()
only calls the equally named method of the NetCDF library but adds information to possible error messages:>>> from hydpy import TestIO >>> from hydpy.core.netcdftools import netcdf4 >>> with TestIO(): ... ncfile = netcdf4.Dataset("test.nc", "w") >>> from hydpy.core.netcdftools import create_dimension >>> create_dimension(ncfile, "dim1", 5) >>> dim = ncfile.dimensions["dim1"] >>> dim.size if hasattr(dim, "size") else dim 5
>>> try: ... create_dimension(ncfile, "dim1", 5) ... except BaseException as exc: ... print(exc) While trying to add dimension `dim1` with length `5` to the NetCDF file `test.nc`, the following error occurred: ...
>>> ncfile.close()
- hydpy.core.netcdftools.create_variable(ncfile: Dataset, name: str, datatype: str, dimensions: Sequence[str]) None [source]¶
Add a new variable with the given name, datatype, and dimensions to the given NetCDF file.
Essentially,
create_variable()
only calls the equally named method of the NetCDF library but adds information to possible error messages:>>> from hydpy import TestIO >>> from hydpy.core.netcdftools import netcdf4 >>> with TestIO(): ... ncfile = netcdf4.Dataset("test.nc", "w") >>> from hydpy.core.netcdftools import create_variable >>> try: ... create_variable(ncfile, "var1", "f8", ("dim1",)) ... except BaseException as exc: ... print(str(exc).strip('"')) While trying to add variable `var1` with datatype `f8` and dimensions `('dim1',)` to the NetCDF file `test.nc`, the following error occurred: ...
>>> from hydpy.core.netcdftools import create_dimension >>> create_dimension(ncfile, "dim1", 5) >>> create_variable(ncfile, "var1", "f8", ("dim1",)) >>> import numpy >>> from hydpy import print_vector >>> print_vector(numpy.array(ncfile["var1"][:])) nan, nan, nan, nan, nan
>>> ncfile.close()
- hydpy.core.netcdftools.query_variable(ncfile: Dataset, name: str) Variable [source]¶
Return the variable with the given name from the given NetCDF file.
Essentially,
query_variable()
only queries the variable via keyword access using the NetCDF library but adds information to possible error messages:>>> from hydpy.core.netcdftools import query_variable >>> from hydpy import TestIO >>> from hydpy.core.netcdftools import netcdf4 >>> with TestIO(): ... file_ = netcdf4.Dataset("model.nc", "w") >>> query_variable(file_, "values") Traceback (most recent call last): ... RuntimeError: NetCDF file `model.nc` does not contain variable `values`.
>>> from hydpy.core.netcdftools import create_variable
>>> create_variable(file_, "values", "f8", ())
>>> assert isinstance(query_variable(file_, "values"), netcdf4.Variable)
>>> file_.close()
- hydpy.core.netcdftools.query_timegrid(ncfile: Dataset, sequence: IOSequence) Timegrid [source]¶
Return the Timegrid defined by the given NetCDF file.
query_timegrid() relies on the timereference attribute of the given NetCDF file, if available, and falls back to the global timestampleft
option when necessary. The NetCDF files of the HydPy-H-Lahn example project (and all other NetCDF files written by HydPy) include such information:>>> from hydpy.core.testtools import prepare_full_example_2 >>> hp, pub, TestIO = prepare_full_example_2() >>> from netCDF4 import Dataset >>> filepath = "HydPy-H-Lahn/series/default/hland_96_input_p.nc" >>> with TestIO(), Dataset(filepath) as ncfile: ... ncfile.timereference 'left interval boundary'
We start our examples considering the input sequence P, which handles precipitation sums.
query_timegrid() requires an instance of P
to determine that each value of the time series of the NetCDF file references a time interval and not a time point:>>> p = hp.elements.land_dill_assl.model.sequences.inputs.p
If the file-specific setting does not conflict with the current value of timestampleft, query_timegrid()
works silently:>>> from hydpy.core.netcdftools import query_timegrid >>> with TestIO(), Dataset(filepath) as ncfile: ... query_timegrid(ncfile, p) Timegrid("1989-11-01 00:00:00", "2021-01-01 00:00:00", "1d")
If a file-specific setting is missing, query_timegrid() applies the current timestampleft
value:>>> with TestIO(), Dataset(filepath, "r+") as ncfile: ... del ncfile.timereference >>> from hydpy.core.testtools import warn_later >>> with TestIO(), Dataset(filepath) as ncfile: ... query_timegrid(ncfile, p) Timegrid("1989-11-01 00:00:00", "2021-01-01 00:00:00", "1d")
>>> with TestIO(), Dataset(filepath) as ncfile, pub.options.timestampleft(False): ... query_timegrid(ncfile, p) Timegrid("1989-10-31 00:00:00", "2020-12-31 00:00:00", "1d")
If the file-specific setting and timestampleft conflict, query_timegrid()
favours the file attribute and warns about this assumption:>>> with TestIO(), Dataset(filepath, "r+") as ncfile: ... ncfile.timereference = "right interval boundary" >>> with TestIO(), warn_later(), Dataset(filepath) as ncfile: ... query_timegrid(ncfile, p) Timegrid("1989-10-31 00:00:00", "2020-12-31 00:00:00", "1d") UserWarning: The `timereference` attribute (`right interval boundary`) of the NetCDF file `HydPy-H-Lahn/series/default/hland_96_input_p.nc` conflicts with the current value of the global `timestampleft` option (`True`). The file-specific information is prioritised.
State sequences like SM handle data for specific time points instead of time intervals.
Their series vector contains the calculated values for the end of each simulation step.
Hence, without file-specific information, query_timegrid() ignores the timestampleft
option and follows the right interval boundary convention:>>> sm = hp.elements.land_dill_assl.model.sequences.states.sm >>> with TestIO(), Dataset(filepath, "r+") as ncfile: ... del ncfile.timereference >>> with TestIO(), Dataset(filepath) as ncfile: ... query_timegrid(ncfile, sm) Timegrid("1989-10-31 00:00:00", "2020-12-31 00:00:00", "1d")
Add a timereference attribute with the value current time to explicitly include this information in a NetCDF file:
>>> with TestIO(), Dataset(filepath, "r+") as ncfile: ... ncfile.timereference = "current time" >>> with TestIO(), Dataset(filepath) as ncfile: ... query_timegrid(ncfile, sm) Timegrid("1989-10-31 00:00:00", "2020-12-31 00:00:00", "1d")
query_timegrid()
raises special warnings when a NetCDF file’s timereference attribute conflicts with its judgement whether the contained data addresses time intervals or time points:>>> with TestIO(), warn_later(), Dataset(filepath) as ncfile: ... query_timegrid(ncfile, p) Timegrid("1989-10-31 00:00:00", "2020-12-31 00:00:00", "1d") UserWarning: The `timereference` attribute (`current time`) of the NetCDF file `HydPy-H-Lahn/series/default/hland_96_input_p.nc` conflicts with the type of the relevant sequence (`P`). The file-specific information is prioritised.
>>> with TestIO(), Dataset(filepath, "r+") as ncfile: ... ncfile.timereference = "left interval boundary" >>> with TestIO(), warn_later(), Dataset(filepath) as ncfile: ... query_timegrid(ncfile, sm) Timegrid("1989-11-01 00:00:00", "2021-01-01 00:00:00", "1d") UserWarning: The `timereference` attribute (`left interval boundary`) of the NetCDF file `HydPy-H-Lahn/series/default/hland_96_input_p.nc` conflicts with the type of the relevant sequence (`SM`). The file-specific information is prioritised.
query_timegrid()
also raises specific warnings for misstated timereference attributes describing the different fallbacks for data related to time intervals and time points:>>> with TestIO(), Dataset(filepath, "r+") as ncfile: ... ncfile.timereference = "wrong" >>> with TestIO(), warn_later(), Dataset(filepath) as ncfile: ... query_timegrid(ncfile, p) Timegrid("1989-11-01 00:00:00", "2021-01-01 00:00:00", "1d") UserWarning: The value of the `timereference` attribute (`wrong`) of the NetCDF file `HydPy-H-Lahn/series/default/hland_96_input_p.nc` is not among the accepted values (`left...`, `right...`, `current...`). Assuming `left interval boundary` according to the current value of the global `timestampleft` option.
>>> with TestIO(), warn_later(), Dataset(filepath) as ncfile: ... query_timegrid(ncfile, sm) Timegrid("1989-10-31 00:00:00", "2020-12-31 00:00:00", "1d") UserWarning: The value of the `timereference` attribute (`wrong`) of the NetCDF file `...hland_96_input_p.nc` is not among the accepted values (`left...`, `right...`, `current...`). Assuming `current time` according to the type of the relevant sequence (`SM`).
- hydpy.core.netcdftools.query_array(ncfile: Dataset, name: str) ndarray[Any, dtype[float64]] [source]¶
Return the data of the variable with the given name from the given NetCDF file.
The following example shows that query_array() returns nan
entries for representing missing values even when the respective NetCDF variable defines a different fill value:>>> from hydpy import print_matrix, TestIO >>> from hydpy.core import netcdftools >>> from hydpy.core.netcdftools import netcdf4, create_dimension, create_variable >>> import numpy >>> with TestIO(): ... with netcdf4.Dataset("test.nc", "w") as ncfile: ... create_dimension(ncfile, "time", 2) ... create_dimension(ncfile, "stations", 3) ... netcdftools.fillvalue = -999.0 ... create_variable(ncfile, "var", "f8", ("time", "stations")) ... netcdftools.fillvalue = numpy.nan ... ncfile = netcdf4.Dataset("test.nc", "r") >>> from hydpy.core.netcdftools import query_variable, query_array >>> print_matrix(query_variable(ncfile, "var")[:].data) | -999.0, -999.0, -999.0 | | -999.0, -999.0, -999.0 | >>> print_matrix(query_array(ncfile, "var")) | nan, nan, nan | | nan, nan, nan | >>> ncfile.close()
Usually, HydPy expects all data variables in NetCDF files to be 2-dimensional, with time on the first and location on the second axis. However,
query_array()
allows for an exception for compatibility with Delft-FEWS. When working with ensembles, Delft-FEWS defines a third dimension called realization and puts it between the first dimension (time) and the last dimension (stations). In our experience, this additional dimension is always of length one, meaning we can safely ignore it:>>> with TestIO(): ... with netcdf4.Dataset("test.nc", "w") as ncfile: ... create_dimension(ncfile, "time", 2) ... create_dimension(ncfile, "realization", 1) ... create_dimension(ncfile, "stations", 3) ... var = create_variable(ncfile, "var", "f8", ... ("time", "realization", "stations")) ... ncfile["var"][:] = [[[1.1, 1.2, 1.3]], [[2.1, 2.2, 2.3]]] ... ncfile = netcdf4.Dataset("test.nc", "r") >>> var = query_variable(ncfile, "var")[:] >>> var.shape (2, 1, 3) >>> query_array(ncfile, "var").shape (2, 3) >>> print_matrix(query_array(ncfile, "var")) | 1.1, 1.2, 1.3 | | 2.1, 2.2, 2.3 | >>> ncfile.close()
query_array()
raises errors if dimensionality is smaller than two or larger than three or if there are three dimensions and the length of the second dimension is not one:>>> with TestIO(): ... with netcdf4.Dataset("test.nc", "w") as ncfile: ... create_dimension(ncfile, "time", 2) ... var = create_variable(ncfile, "var", "f8", ("time",)) ... with netcdf4.Dataset("test.nc", "r") as ncfile: ... query_array(ncfile, "var") Traceback (most recent call last): ... RuntimeError: Variable `var` of NetCDF file `test.nc` must be 2-dimensional (or 3-dimensional with a length of one on the second axis) but has the shape `(2,)`.
>>> with TestIO(): ... with netcdf4.Dataset("test.nc", "w") as ncfile: ... create_dimension(ncfile, "time", 2) ... create_dimension(ncfile, "realization", 2) ... create_dimension(ncfile, "stations", 3) ... var = create_variable(ncfile, "var", "f8", ... ("time", "realization", "stations")) ... with netcdf4.Dataset("test.nc", "r") as ncfile: ... query_array(ncfile, "var") Traceback (most recent call last): ... RuntimeError: Variable `var` of NetCDF file `test.nc` must be 2-dimensional (or 3-dimensional with a length of one on the second axis) but has the shape `(2, 2, 3)`.
The skipping of the realization axis is very specific to Delft-FEWS. To prevent hiding problems when reading erroneous data from other sources,
query_array()
emits the following warning if the name of the second dimension is not realization:>>> from hydpy.core.testtools import warn_later >>> with TestIO(): ... with netcdf4.Dataset("test.nc", "w") as ncfile: ... create_dimension(ncfile, "time", 2) ... create_dimension(ncfile, "realisation", 1) ... create_dimension(ncfile, "stations", 3) ... var = create_variable(ncfile, "var", "f8", ... ("time", "realisation", "stations")) ... with netcdf4.Dataset("test.nc", "r") as ncfile, warn_later(): ... print_matrix(query_array(ncfile, "var")) | nan, nan, nan | | nan, nan, nan | UserWarning: Variable `var` of NetCDF file `test.nc` is 3-dimensional and the length of the second dimension is one, but its name is `realisation` instead of `realization`.
- hydpy.core.netcdftools.get_filepath(ncfile: Dataset) str [source]¶
Return the path of the given NetCDF file.
>>> from hydpy import TestIO >>> from hydpy.core.netcdftools import netcdf4 >>> from hydpy.core.netcdftools import get_filepath >>> with TestIO(): ... with netcdf4.Dataset("test.nc", "w") as ncfile: ... get_filepath(ncfile) 'test.nc'
- class hydpy.core.netcdftools.JITAccessInfo(ncvariable: netcdf4.Variable, realisation: bool, timedelta: int, columns: tuple[int, ...], data: NDArrayFloat)[source]¶
Bases:
NamedTuple
Helper class for structuring reading from or writing to a NetCDF file “just in time” during a simulation run for a specific
NetCDFVariableFlat
object.- ncvariable: Variable¶
Variable for direct access to the relevant section of the NetCDF file.
- realisation: bool¶
Flag that indicates if the relevant
ncvariable
comes with an additional realization dimension (explained in the documentation on function query_array()).
- timedelta: int¶
Difference between the relevant row of the NetCDF file and the current simulation index (as defined by
Idx_Sim
).
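In other words, the row of the NetCDF variable to access is the current simulation
index shifted by this offset. A minimal sketch of that relationship (illustrative only;
the names used here are not part of the class's API):
>>> timedelta = 1                  # the file starts one step earlier than the simulation
>>> for idx_sim in range(3):       # simulation indices 0, 1, 2
...     row = idx_sim + timedelta  # rows 1, 2, 3 of the NetCDF variable
...     print(idx_sim, row)
0 1
1 2
2 3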
- class hydpy.core.netcdftools.JITAccessHandler(readers: tuple[JITAccessInfo, ...], writers: tuple[JITAccessInfo, ...])[source]¶
Bases:
NamedTuple
Handler used by the
SequenceManager
object available in module pub
for reading data from and/or writing data to NetCDF files at each step of a simulation run.- readers: tuple[JITAccessInfo, ...]¶
All
JITAccessInfo
objects responsible for reading data during the simulation run.
- writers: tuple[JITAccessInfo, ...]¶
All
JITAccessInfo
objects responsible for writing data during the simulation run.
- class hydpy.core.netcdftools.Subdevice2Index(dict_: dict[str, int], name_ncfile: str)[source]¶
Bases:
object
Return type of method
query_subdevice2index()
.
- class hydpy.core.netcdftools.NetCDFVariableInfo(sequence: sequencetools.IOSequence, array: sequencetools.InfoArray | None)[source]¶
Bases:
NamedTuple
Returned type of
NetCDFVariableFlatWriter
and NetCDFVariableAggregated
when querying logged data via attribute access.- sequence: IOSequence¶
Alias for field number 0
- class hydpy.core.netcdftools.NetCDFVariable(filepath: str)[source]¶
Bases:
ABC
Base class for handling single NetCDF variables of a single NetCDF file.
- property name: str¶
Name of the NetCDF file and the NetCDF variable containing the time series data.
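A plausible illustration (an assumption based on the file and variable names used in
the examples above, namely that the name equals the file name without directory and
the .nc suffix):
>>> from hydpy.core.netcdftools import NetCDFVariableFlatReader
>>> var = NetCDFVariableFlatReader("project/series/default/lland_dd_flux_nkor.nc")
>>> var.name  # expected: 'lland_dd_flux_nkor'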
- query_subdevices(ncfile: Dataset) list[str] [source]¶
Query the names of the (sub)devices of the logged sequences from the given NetCDF file.
We apply method
query_subdevices()
on an empty NetCDF file. The error message shows that the method tries to query the (sub)device names:>>> from hydpy import TestIO >>> from hydpy.core.netcdftools import netcdf4, NetCDFVariableFlatWriter >>> class Var(NetCDFVariableFlatWriter): ... pass >>> Var.subdevicenames = "element1", "element_2" >>> var = Var("filename.nc") >>> with TestIO(): ... ncfile = netcdf4.Dataset("filename.nc", "w") >>> var.query_subdevices(ncfile) Traceback (most recent call last): ... RuntimeError: NetCDF file `filename.nc` does neither contain a variable named `values_station_id` nor `station_id` for defining coordinate locations.
After inserting the (sub)device names, they can be queried and returned:
>>> var.insert_subdevices(ncfile)
>>> Var("filename.nc").query_subdevices(ncfile)
['element1', 'element_2']
>>> ncfile.close()
- query_subdevice2index(ncfile: Dataset) Subdevice2Index [source]¶
Return a Subdevice2Index object that maps the (sub)device names to their position within the given NetCDF file.
Method query_subdevice2index() relies on query_subdevices(). The returned Subdevice2Index
object remembers the NetCDF file from which the (sub)device names stem, allowing for clear error messages:>>> from hydpy import TestIO >>> from hydpy.core.netcdftools import ( ... netcdf4, NetCDFVariableFlatWriter, str2chars) >>> with TestIO(): ... ncfile = netcdf4.Dataset("filename.nc", "w") >>> class Var(NetCDFVariableFlatWriter): ... pass >>> Var.subdevicenames = ["element3", "element1", "element1_1", "element2"] >>> var = Var("filename.nc") >>> var.insert_subdevices(ncfile) >>> subdevice2index = var.query_subdevice2index(ncfile) >>> subdevice2index.get_index("element1_1") 2 >>> subdevice2index.get_index("element3") 0 >>> subdevice2index.get_index("element5") Traceback (most recent call last): ... RuntimeError: No data for (sub)device `element5` is available in NetCDF file `filename.nc`.
Additionally,
query_subdevice2index()
checks for duplicates:>>> ncfile["station_id"][:] = str2chars( ... ["element3", "element1", "element1_1", "element1"]) >>> var.query_subdevice2index(ncfile) Traceback (most recent call last): ... RuntimeError: The NetCDF file `filename.nc` contains duplicate (sub)device names (the first found duplicate is `element1`).
>>> ncfile.close()
- class hydpy.core.netcdftools.NetCDFVariableFlat(filepath: str)[source]¶
Bases:
NetCDFVariable
,ABC
Base class for handling single “flat” NetCDF variables (which deal with unaggregated time series) of single NetCDF files.
The following examples describe the functioning of the subclasses
NetCDFVariableFlatReader
and NetCDFVariableFlatWriter, which serve to read and write data, respectively.
We prepare some devices handling some sequences by applying the function
prepare_io_example_1()
. We limit our attention to the returned elements, which handle the more diverse sequences:>>> from hydpy.core.testtools import prepare_io_example_1 >>> nodes, (element1, element2, element3, element4) = prepare_io_example_1()
We define three
NetCDFVariableFlatWriter
instances with different dimensionalities and log the Nied and NKor
instances of two of the elements and the SP
instance of the fourth element:>>> from hydpy.core.netcdftools import NetCDFVariableFlatWriter >>> var_nied = NetCDFVariableFlatWriter("nied.nc") >>> var_nkor = NetCDFVariableFlatWriter("nkor.nc") >>> var_sp = NetCDFVariableFlatWriter("sp.nc") >>> for element in (element1, element3): ... seqs = element.model.sequences ... var_nied.log(seqs.inputs.nied, seqs.inputs.nied.series) ... var_nkor.log(seqs.fluxes.nkor, seqs.fluxes.nkor.series) >>> sp = element4.model.sequences.states.sp >>> var_sp.log(sp, sp.series)
We further try to log the equally named “wind speed” sequences of the main model
lland_knauf
and the submodel evap_aet_morsim.
As both models are handled by the same element, which defines the column name, their time series cannot be stored separately in the same NetCDF file. The log()
method defined by MixinVariableWriter
checks for potential conflicts:>>> var_windspeed = NetCDFVariableFlatWriter("windspeed.nc") >>> windspeed_l = element3.model.sequences.inputs.windspeed >>> var_windspeed.log(windspeed_l, windspeed_l.series) >>> windspeed_e = element3.model.aetmodel.sequences.inputs.windspeed >>> var_windspeed.log(windspeed_e, 2.0 * windspeed_e.series) Traceback (most recent call last): ... RuntimeError: When trying to log the time series of sequence `windspeed` of element `element3` of model `evap_aet_morsim` for writing, the following error occurred: Sequence `windspeed` of element `element3` of model `lland_knauf` is already registered under the same column name(s) but with different time series data.
If the supplied time series are equal, there is no problem. So,
log()
neither accepts the new sequence nor raises an error in such cases:ToDo: Should we implement additional checks for just-in-time writing?
>>> assert var_windspeed.element3.sequence is windspeed_l
>>> var_windspeed.log(windspeed_e, windspeed_e.series)
>>> assert var_windspeed.element3.sequence is windspeed_l
We write the data of all logged sequences to separate NetCDF files:
>>> from hydpy import TestIO >>> with TestIO(): ... var_nied.write() ... var_nkor.write() ... var_sp.write() ... var_windspeed.write()
We set all the values of the selected sequences to -777 and check that they differ from the original values available via the testarray attribute:
>>> nied = element1.model.sequences.inputs.nied >>> nkor = element3.model.sequences.fluxes.nkor >>> sp = element4.model.sequences.states.sp >>> import numpy >>> for seq in (nied, nkor, sp, windspeed_l, windspeed_e): ... seq.series = -777.0 ... assert numpy.any(seq.series != seq.testarray)
Now, we prepare three
NetCDFVariableFlatReader
instances and log the same sequences as above, open the existing NetCDF file for reading, read its data, and confirm that it has been correctly passed to the test sequences:>>> from hydpy.core.netcdftools import NetCDFVariableFlatReader >>> var_nied = NetCDFVariableFlatReader("nied.nc") >>> var_nkor = NetCDFVariableFlatReader("nkor.nc") >>> var_sp = NetCDFVariableFlatReader("sp.nc") >>> var_windspeed = NetCDFVariableFlatReader("windspeed.nc") >>> for element in (element1, element3): ... sequences = element.model.sequences ... var_nied.log(sequences.inputs.nied) ... var_nkor.log(sequences.fluxes.nkor) >>> var_sp.log(sp) >>> var_windspeed.log(windspeed_l) >>> var_windspeed.log(windspeed_e) >>> with TestIO(): ... var_nied.read() ... var_nkor.read() ... var_sp.read() ... var_windspeed.read() >>> for seq in (nied, nkor, sp): ... assert numpy.all(seq.series == seq.testarray) >>> assert numpy.all(windspeed_l.series == windspeed_l.testarray) >>> assert numpy.all(windspeed_e.series == windspeed_l.testarray)
Trying to read data that is not stored properly results in error messages like the following:
>>> for element in (element1, element2, element3): ... element.model.sequences.inputs.nied.series = -777.0 ... var_nied.log(element.model.sequences.inputs.nied) >>> with TestIO(): ... var_nied.read() Traceback (most recent call last): ... RuntimeError: While trying to read data from NetCDF file `nied.nc`, the following error occurred: No data for (sub)device `element2` is available in NetCDF file `nied.nc`.
Note that method
read()
does not abort the reading process when missing a time series. Instead, it sets the entries of the corresponding series
array to nan
, proceeds with the following sequences, and finally re-raises the first encountered exception:>>> element1.model.sequences.inputs.nied.series InfoArray([0., 1., 2., 3.]) >>> element2.model.sequences.inputs.nied.series InfoArray([nan, nan, nan, nan]) >>> element3.model.sequences.inputs.nied.series InfoArray([ 8., 9., 10., 11.])
- property subdevicenames: tuple[str, ...]¶
The names of the (sub)devices.
Property
subdevicenames
clarifies which column of the NetCDF file contains the series of which (sub)device. For 0-dimensional series like Nied
, we require no subdivision. Hence, it returns the original device names:>>> from hydpy.core.testtools import prepare_io_example_1 >>> nodes, elements = prepare_io_example_1() >>> from hydpy.core.netcdftools import NetCDFVariableFlatReader >>> var = NetCDFVariableFlatReader("filename.nc") >>> for element in elements: ... if element.model.name.startswith("lland"): ... var.log(element.model.sequences.inputs.nied) >>> var.subdevicenames ('element1', 'element2', 'element3')
For 1-dimensional sequences like
NKor
, a suffix defines the index of the respective subdevice. For example, the third column of array
contains the series of the first hydrological response unit of the second element:>>> var = NetCDFVariableFlatReader("filename.nc") >>> for element in elements: ... if element.model.name.startswith("lland"): ... var.log( element.model.sequences.fluxes.nkor) >>> var.subdevicenames ('element1_0', 'element2_0', 'element2_1', 'element3_0', 'element3_1', 'element3_2')
2-dimensional sequences like
SP
require an additional suffix:>>> var = NetCDFVariableFlatReader("filename.nc") >>> var.log(elements.element4.model.sequences.states.sp) >>> var.subdevicenames ('element4_0_0', 'element4_0_1', 'element4_0_2', 'element4_1_0', 'element4_1_1', 'element4_1_2')
- property shape: tuple[int, int]¶
Required shape of the NetCDF variable.
For 0-dimensional sequences like
Nied
, the first axis corresponds to the number of timesteps and the second axis to the number of devices:>>> from hydpy.core.testtools import prepare_io_example_1 >>> nodes, elements = prepare_io_example_1() >>> from hydpy.core.netcdftools import NetCDFVariableFlatReader >>> var = NetCDFVariableFlatReader("filename.nc") >>> for element in elements: ... if element.model.name.startswith("lland"): ... var.log(element.model.sequences.inputs.nied) >>> var.shape (4, 3)
For 1-dimensional sequences as
NKor
, the second axis corresponds to “subdevices”. Here, these “subdevices” are hydrological response units of different elements. The model instances of the three elements define one, two, and three response units, respectively, making up a sum of six subdevices:>>> var = NetCDFVariableFlatReader("filename.nc") >>> for element in elements: ... if element.model.name.startswith("lland"): ... var.log(element.model.sequences.fluxes.nkor) >>> var.shape (4, 6)
The above statements also hold for 2-dimensional sequences as
SP
. In this specific case, each “subdevice” corresponds to a single snow class (one element times three zones times two snow classes makes six subdevices):>>> var = NetCDFVariableFlatReader( "filename.nc") >>> var.log(elements.element4.model.sequences.states.sp) >>> var.shape (4, 6)
- class hydpy.core.netcdftools.NetCDFVariableFlatReader(filepath: str)[source]¶
Bases:
NetCDFVariableFlat
Concrete class for reading data of single “flat” NetCDF variables (which deal with unaggregated time series) from single NetCDF files.
For a general introduction to using
NetCDFVariableFlatReader
, see the documentation on base class NetCDFVariableFlat
.- log(sequence: IOSequence) None [source]¶
Log the given
IOSequence
object for reading data.
The logged sequence is available via attribute access:
>>> from hydpy.core.netcdftools import NetCDFVariableFlatReader >>> class Var(NetCDFVariableFlatReader): ... pass >>> var = Var("filepath.nc") >>> from hydpy.core.testtools import prepare_io_example_1 >>> nodes, elements = prepare_io_example_1() >>> nkor = elements.element1.model.sequences.fluxes.nkor >>> var.log(nkor) >>> assert "element1" in dir(var) >>> assert nkor in var.element1 >>> assert "element2" not in dir(var) >>> var.element2 Traceback (most recent call last): ... AttributeError: The selected NetCDFVariableFlatReader object does neither handle a sequence for the (sub)device `element2` nor define a member named `element2`...
- read() None [source]¶
Read the data from the relevant NetCDF file.
See the general documentation on class
NetCDFVariableFlat
for some examples.
- class hydpy.core.netcdftools.MixinVariableWriter(filepath: str)[source]¶
Bases:
NetCDFVariable
,ABC
Mixin class for
NetCDFVariableFlatWriter
and NetCDFVariableAggregated
.- insert_subdevices(ncfile: Dataset) None [source]¶
Insert a variable of the names of the (sub)devices of the logged sequences into the given NetCDF file.
We prepare a
NetCDFVariableFlatWriter
subclass with fixed (sub)device names:>>> from hydpy import TestIO >>> from hydpy.core.netcdftools import NetCDFVariableFlatWriter, chars2str >>> from hydpy.core.netcdftools import netcdf4 >>> class Var(NetCDFVariableFlatWriter): ... pass >>> Var.subdevicenames = "element1", "element_2", "element_ß"
The first dimension of the added variable corresponds to the number of (sub)devices, and the second dimension to the number of characters of the longest (sub)device name:
>>> var = Var("filename.nc") >>> with TestIO(): ... ncfile = netcdf4.Dataset("filename.nc", "w") >>> var.insert_subdevices(ncfile) >>> ncfile["station_id"].dimensions ('stations', 'char_leng_name') >>> ncfile["station_id"].shape (3, 10) >>> chars2str(ncfile["station_id"][:].data) ['element1', 'element_2', 'element_ß'] >>> ncfile.close()
- log(sequence: IOSequence, infoarray: InfoArray | None = None) None [source]¶
Log the given
IOSequence
object for writing data.
When writing data “in one step”, the second argument must be an InfoArray. Pass None when using NetCDFVariableFlatWriter to write data “just in time”. The infoarray argument allows for passing alternative data that replaces the original series of the IOSequence object.
The logged time series data is available via attribute access:
>>> from hydpy.core.netcdftools import NetCDFVariableFlatWriter >>> class Var(NetCDFVariableFlatWriter): ... pass >>> var = Var("filepath.nc") >>> from hydpy.core.testtools import prepare_io_example_1 >>> nodes, elements = prepare_io_example_1() >>> nkor = elements.element1.model.sequences.fluxes.nkor >>> var.log(nkor, nkor.series) >>> assert "element1" in dir(var) >>> assert var.element1.sequence is nkor >>> import numpy >>> assert numpy.all(var.element1.array == nkor.series) >>> assert "element2" not in dir(var) >>> var.element2 Traceback (most recent call last): ... AttributeError: The selected NetCDFVariable object does neither handle time series data for the (sub)device `element2` nor define a member named `element2`...
- abstract property array: ndarray[Any, dtype[float64]]¶
A
ndarray
containing the values of all logged sequences.
- write() None [source]¶
Write the logged data to a new NetCDF file.
See the general documentation on classes
NetCDFVariableFlatWriter
and NetCDFVariableAggregated
for some examples.
- class hydpy.core.netcdftools.NetCDFVariableFlatWriter(filepath: str)[source]¶
Bases:
MixinVariableWriter
,NetCDFVariableFlat
Concrete class for writing data from single “flat” NetCDF variables (which deal with unaggregated time series) to single NetCDF files.
For a general introduction to using
NetCDFVariableFlatWriter
, see the documentation on base class NetCDFVariableFlat
.- property array: ndarray[Any, dtype[float64]]¶
The time series data of all logged
IOSequence
objects in one single ndarray object.
The documentation on shape explains the structure of array
. The first example confirms that the first axis corresponds to time while the second corresponds to the location:>>> from hydpy.core.testtools import prepare_io_example_1 >>> nodes, elements = prepare_io_example_1() >>> from hydpy.core.netcdftools import NetCDFVariableFlatWriter >>> var = NetCDFVariableFlatWriter("filename.nc") >>> for element in elements: ... if element.model.name.startswith("lland"): ... nied1 = element.model.sequences.inputs.nied ... var.log(nied1, nied1.series) >>> from hydpy import print_matrix >>> print_matrix(var.array) | 0.0, 4.0, 8.0 | | 1.0, 5.0, 9.0 | | 2.0, 6.0, 10.0 | | 3.0, 7.0, 11.0 |
The flattening of higher-dimensional sequences spreads the time series of individual “subdevices” over the array’s columns. For the 1-dimensional sequence
NKor
, we find the time series of both zones of the second element in columns two and three:>>> var = NetCDFVariableFlatWriter("filename.nc") >>> for element in elements: ... if element.model.name.startswith("lland"): ... nkor = element.model.sequences.fluxes.nkor ... var.log(nkor, nkor.series) >>> print_matrix(var.array[:, 1:3]) | 16.0, 17.0 | | 18.0, 19.0 | | 20.0, 21.0 | | 22.0, 23.0 |
The above statements also hold for 2-dimensional sequences like
SP
. In this specific case, each column contains the time series of a single snow class:>>> var = NetCDFVariableFlatWriter("filename.nc") >>> sp = elements.element4.model.sequences.states.sp >>> var.log(sp, sp.series) >>> print_matrix(var.array) | 68.0, 69.0, 70.0, 71.0, 72.0, 73.0 | | 74.0, 75.0, 76.0, 77.0, 78.0, 79.0 | | 80.0, 81.0, 82.0, 83.0, 84.0, 85.0 | | 86.0, 87.0, 88.0, 89.0, 90.0, 91.0 |
- class hydpy.core.netcdftools.NetCDFVariableAggregated(filepath: str)[source]¶
Bases:
MixinVariableWriter
,NetCDFVariable
Concrete class for writing data from single “aggregated” NetCDF variables (which deal with aggregated time series) to single NetCDF files.
Class
NetCDFVariableAggregated
works very similarly to class NetCDFVariableFlatWriter.
The following is a selection of the more thoroughly explained examples of the documentation on class NetCDFVariableFlat
:>>> from hydpy.core.testtools import prepare_io_example_1 >>> nodes, (element1, element2, element3, element4) = prepare_io_example_1() >>> from hydpy.core.netcdftools import NetCDFVariableAggregated >>> var_nied = NetCDFVariableAggregated("nied.nc") >>> var_nkor = NetCDFVariableAggregated("nkor.nc") >>> var_sp = NetCDFVariableAggregated("sp.nc") >>> for element in (element1, element2): ... nied = element.model.sequences.inputs.nied ... var_nied.log(nied, nied.average_series()) ... nkor = element.model.sequences.fluxes.nkor ... var_nkor.log(nkor, nkor.average_series()) >>> sp = element4.model.sequences.states.sp >>> var_sp.log(sp, sp.average_series()) >>> from hydpy import pub, TestIO >>> with TestIO(): ... var_nied.write() ... var_nkor.write() ... var_sp.write()
As
NetCDFVariableAggregated
provides no reading functionality, we show that the aggregated values are readily available using the external NetCDF4 library:>>> import numpy >>> from hydpy import print_matrix >>> with TestIO(), netcdf4.Dataset("nied.nc", "r") as ncfile: ... print_matrix(numpy.array(ncfile["nied"][:])) | 0.0, 4.0 | | 1.0, 5.0 | | 2.0, 6.0 | | 3.0, 7.0 |
>>> with TestIO(), netcdf4.Dataset("nkor.nc", "r") as ncfile: ... print_matrix(numpy.array(ncfile["nkor"][:])) | 12.0, 16.5 | | 13.0, 18.5 | | 14.0, 20.5 | | 15.0, 22.5 |
>>> with TestIO(), netcdf4.Dataset("sp.nc", "r") as ncfile: ... print_matrix(numpy.array(ncfile["sp"][:])) | 70.5 | | 76.5 | | 82.5 | | 88.5 |
- property shape: tuple[int, int]¶
Required shape of
array
. The first axis corresponds to the number of timesteps and the second axis to the number of devices. We show this for the 1-dimensional flux sequence
NKor
:>>> from hydpy.core.testtools import prepare_io_example_1 >>> nodes, elements = prepare_io_example_1() >>> from hydpy.core.netcdftools import NetCDFVariableAggregated >>> var = NetCDFVariableAggregated("filename.nc") >>> for element in elements: ... if element.model.name.startswith("lland"): ... var.log(element.model.sequences.fluxes.nkor, None) >>> var.shape (4, 3)
There is no difference for 2-dimensional sequences as aggregating their time series also results in 1-dimensional data:
>>> var = NetCDFVariableAggregated("filename.nc") >>> var.log(elements.element4.model.sequences.states.sp, None) >>> var.shape (4, 1)
- property array: ndarray[Any, dtype[float64]]¶
The aggregated time series data of all logged
IOSequence
objects in a single ndarray object.
The documentation on shape explains the structure of array
. This first example confirms that the first axis corresponds to time while the second corresponds to the location:>>> from hydpy.core.testtools import prepare_io_example_1 >>> nodes, elements = prepare_io_example_1() >>> from hydpy.core.netcdftools import NetCDFVariableAggregated >>> var = NetCDFVariableAggregated("filename.nc") >>> for element in elements: ... if element.model.name.startswith("lland"): ... nkor = element.model.sequences.fluxes.nkor ... var.log(nkor, nkor.average_series()) >>> from hydpy import print_matrix >>> print_matrix(var.array) | 12.0, 16.5, 25.0 | | 13.0, 18.5, 28.0 | | 14.0, 20.5, 31.0 | | 15.0, 22.5, 34.0 |
There is no difference for 2-dimensional sequences as aggregating their time series also results in 1-dimensional data:
>>> var = NetCDFVariableAggregated("filename.nc") >>> sp = elements.element4.model.sequences.states.sp >>> var.log(sp, sp.average_series()) >>> print_matrix(var.array) | 70.5 | | 76.5 | | 82.5 | | 88.5 |
- class hydpy.core.netcdftools.NetCDFInterfaceBase[source]¶
Bases:
Generic
[TypeNetCDFVariable
]Base class for interfaces between
SequenceManager
and multiple NetCDF files.
The core task of all concrete NetCDFInterfaceBase subclasses is to distribute different IOSequence objects on multiple instances of the concrete subclasses of NetCDFVariable. The following examples describe the functioning of the subclasses NetCDFInterfaceReader and NetCDFInterfaceWriter, which serve to read and write data “in one step”, respectively.
We prepare a SequenceManager object and some devices handling different sequences by applying function prepare_io_example_1()
:>>> from hydpy.core.testtools import prepare_io_example_1 >>> nodes, elements = prepare_io_example_1()
We collect all sequences used in the following examples except
NKor
of element element1, which we reserve for special tests:>>> sequences = [] >>> for node in nodes: ... sequences.append(node.sequences.sim) >>> for element in elements: ... if element.model.name == "hland_96": ... sequences.append(element.model.sequences.states.sp) ... else: ... sequences.append(element.model.sequences.inputs.nied) ... if element.name != "element1": ... sequences.append(element.model.sequences.fluxes.nkor)
We prepare a
NetCDFInterfaceWriter
object and log and write all test sequences except NKor of element element1.
NetCDFInterfaceWriter initialises one NetCDFVariableFlatWriter and one NetCDFVariableAggregated object for each IOSequence
subtype:>>> from hydpy.core.netcdftools import NetCDFInterfaceWriter >>> writer = NetCDFInterfaceWriter() >>> len(writer) 0
>>> from hydpy import pub, TestIO >>> pub.sequencemanager.filetype = "nc" >>> with TestIO(): ... for sequence in sequences: ... _ = writer.log(sequence, sequence.series) ... with pub.sequencemanager.aggregation("mean"): ... _ = writer.log(sequence, sequence.average_series()) >>> len(writer) 14
We change the relevant directory before logging the reserved sequence.
NetCDFInterfaceWriter
initialises two new NetCDFVariable
objects, despite other NetCDFVariable
objects related to the same sequence type already being available:>>> nkor = elements.element1.model.sequences.fluxes.nkor >>> with TestIO(): ... pub.sequencemanager.currentdir = "test" ... _ = writer.log(nkor, nkor.series) ... with pub.sequencemanager.aggregation("mean"): ... _ = writer.log(nkor, nkor.average_series()) >>> len(writer) 16
You can query all relevant folder names and filenames via properties
foldernames
and filenames
:>>> from hydpy import print_vector >>> print_vector(writer.foldernames) default, test >>> print_vector(writer.filenames) hland_96_state_sp, hland_96_state_sp_mean, lland_dd_flux_nkor, lland_dd_flux_nkor_mean, lland_dd_input_nied, lland_dd_input_nied_mean, lland_knauf_flux_nkor, lland_knauf_flux_nkor_mean, lland_knauf_input_nied, lland_knauf_input_nied_mean, sim_q, sim_q_mean, sim_t, sim_t_mean
NetCDFInterfaceWriter
provides attribute access to its NetCDFVariable
instances, both via their filenames and the combination of their folder names and filenames:>>> assert writer.sim_q is writer.default_sim_q >>> print_vector(sorted(set(dir(writer)) - set(object.__dir__(writer)))) default_hland_96_state_sp, default_hland_96_state_sp_mean, default_lland_dd_flux_nkor, default_lland_dd_flux_nkor_mean, default_lland_dd_input_nied, default_lland_dd_input_nied_mean, default_lland_knauf_flux_nkor, default_lland_knauf_flux_nkor_mean, default_lland_knauf_input_nied, default_lland_knauf_input_nied_mean, default_sim_q, default_sim_q_mean, default_sim_t, default_sim_t_mean, hland_96_state_sp, hland_96_state_sp_mean, lland_dd_input_nied, lland_dd_input_nied_mean, lland_knauf_flux_nkor, lland_knauf_flux_nkor_mean, lland_knauf_input_nied, lland_knauf_input_nied_mean, sim_q, sim_q_mean, sim_t, sim_t_mean, test_lland_dd_flux_nkor, test_lland_dd_flux_nkor_mean
If multiple NetCDF files have the same name, you must prefix the attribute name with the relevant folder name:

>>> writer.lland_dd_flux_nkor
Traceback (most recent call last):
...
RuntimeError: The current NetCDFInterface object handles multiple NetCDF files named `lland_dd_flux_nkor`. Please be more specific.
>>> assert hasattr(writer, "default_lland_dd_flux_nkor")
NetCDFInterfaceWriter raises the following error for completely wrong attribute names:

>>> writer.lland_dd
Traceback (most recent call last):
...
AttributeError: The current NetCDFInterface object neither handles a NetCDF file named `lland_dd` nor does it define a member named `lland_dd`.
We write all NetCDF files into the default folder of the testing directory, defined by prepare_io_example_1():

>>> from hydpy import TestIO
>>> with TestIO():
...     writer.write()
We define a shorter initialisation period and re-activate the time series of the test sequences:
>>> from hydpy import pub
>>> pub.timegrids = "02.01.2000", "04.01.2000", "1d"
>>> for sequence in sequences:
...     sequence.prepare_series(allocate_ram=False)
...     sequence.prepare_series(allocate_ram=True)
>>> nkor.prepare_series(allocate_ram=False)
>>> nkor.prepare_series(allocate_ram=True)
We now initialise an object of class NetCDFInterfaceReader, log all test sequences, and read the test data of the defined subperiod:

>>> from hydpy.core.netcdftools import NetCDFInterfaceReader
>>> reader = NetCDFInterfaceReader()
>>> with TestIO():
...     _ = reader.log(nkor)
...     pub.sequencemanager.currentdir = "default"
...     for sequence in sequences:
...         _ = reader.log(sequence)
...     reader.read()
>>> nodes.node1.sequences.sim.series
InfoArray([61., 62.])
>>> elements.element2.model.sequences.fluxes.nkor.series
InfoArray([[18., 19.], [20., 21.]])
>>> elements.element4.model.sequences.states.sp.series
InfoArray([[[74., 75., 76.], [77., 78., 79.]], [[80., 81., 82.], [83., 84., 85.]]])
- property foldernames: tuple[str, ...]¶
The names of all folders the sequences shall be read from or written to.
- property filenames: tuple[str, ...]¶
The base file names of all handled NetCDFVariable objects.
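The following sketch is not part of the official examples; it merely illustrates, assuming the writer object from the examples above is still available, how both properties combine into the folder-qualified attribute names demonstrated earlier:

>>> # Hedged sketch: address each handled variable by the "<folder>_<file>"
>>> # pattern (compare `default_sim_q` above); combinations that do not exist
>>> # are simply skipped.
>>> for foldername in writer.foldernames:
...     for filename in writer.filenames:
...         attrname = f"{foldername}_{filename}"
...         if hasattr(writer, attrname):
...             variable = getattr(writer, attrname)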
- class hydpy.core.netcdftools.NetCDFInterfaceReader[source]¶
Bases: NetCDFInterfaceBase[NetCDFVariableFlatReader]

Interface between SequenceManager and multiple NetCDFVariableFlatReader instances for reading data in one step.

For a general introduction to using NetCDFInterfaceReader, see the documentation on base class NetCDFInterfaceBase.

- log(sequence: IOSequence) → NetCDFVariableFlatReader [source]¶
Pass the given IOSequence to the log method of an already existing or, if necessary, freshly created NetCDFVariableFlatReader object.
- read() → None [source]¶
Call method read() of all handled NetCDFVariableFlatReader objects.
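As a quick reminder, reading in one step boils down to logging all relevant sequences and calling read() once. The following lines are only a sketch condensed from the examples above and assume the sequences and NetCDF files prepared there:

>>> # Hedged sketch of the "read in one step" workflow; `sequences` and the
>>> # previously written NetCDF files are assumed to exist as shown above.
>>> from hydpy.core.netcdftools import NetCDFInterfaceReader
>>> reader = NetCDFInterfaceReader()
>>> with TestIO():
...     for sequence in sequences:
...         _ = reader.log(sequence)  # register each sequence for reading
...     reader.read()  # fill the series of all logged sequences in one step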
- class hydpy.core.netcdftools.NetCDFInterfaceWriter[source]¶
Bases: NetCDFInterfaceBase[NetCDFVariableAggregated | NetCDFVariableFlatWriter]

Interface between SequenceManager and multiple NetCDFVariableFlatWriter or NetCDFVariableAggregated instances for writing data in one step.

For a general introduction to using NetCDFInterfaceWriter, see the documentation on base class NetCDFInterfaceBase.

- log(sequence: IOSequence, infoarray: InfoArray | None = None) → NetCDFVariableAggregated | NetCDFVariableFlatWriter [source]¶
Pass the given IOSequence to the log method of an already existing or, if necessary, freshly created NetCDFVariableFlatWriter or NetCDFVariableAggregated object (depending on the currently active aggregation mode).
- write() → None [source]¶
Call method write() of all handled NetCDFVariableFlatWriter and NetCDFVariableAggregated objects.
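Analogously, writing in one step means logging each sequence together with the data to be written and calling write() once. Again, this is only a sketch condensed from the examples above, with the sequences and the sequence manager settings assumed to be prepared as shown there:

>>> # Hedged sketch of the "write in one step" workflow; within an active
>>> # `aggregation("mean")` context, `log` would create aggregated instead of
>>> # flat variables.
>>> from hydpy.core.netcdftools import NetCDFInterfaceWriter
>>> writer = NetCDFInterfaceWriter()
>>> with TestIO():
...     for sequence in sequences:
...         _ = writer.log(sequence, sequence.series)  # register sequence and data
...     writer.write()  # create and fill the NetCDF files in one step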
- class hydpy.core.netcdftools.NetCDFInterfaceJIT[source]¶
Bases: NetCDFInterfaceBase[NetCDFVariableFlatReader | NetCDFVariableFlatWriter]

Interface between SequenceManager and multiple NetCDFVariableFlatReader or NetCDFVariableFlatWriter instances for reading or writing data just in time.

See the documentation on method provide_jitaccess() for further information.

- log(sequence: IOSequence) → NetCDFVariableFlatReader | NetCDFVariableFlatWriter [source]¶
Pass the given IOSequence to the log method of an already existing or, if necessary, freshly created NetCDFVariableFlatReader or NetCDFVariableFlatWriter object.
- provide_jitaccess(deviceorder: Iterable[Node | Element]) → Iterator[JITAccessHandler] [source]¶
Allow method simulate() of class HydPy to read data from or write data to NetCDF files “just in time” during simulation runs.

We consider it unlikely that users will ever need to call method provide_jitaccess() directly. See the documentation on class HydPy on applying it indirectly. However, the following explanations might give some additional insight into the options and limitations of the related functionalities.

You can only either read from or write to each NetCDF file. We think this should rarely be a limitation for the anticipated workflows. One particular situation where one might try to read and write simultaneously is when overwriting some of the available input data. The following example tries to read the input data for all “headwater” catchments from specific NetCDF files but defines zero input values for all “non-headwater” catchments and tries to write them into the same files:
>>> from hydpy.core.testtools import prepare_full_example_1
>>> prepare_full_example_1()
>>> from hydpy import HydPy, print_vector, pub, TestIO
>>> with TestIO():
...     hp = HydPy("HydPy-H-Lahn")
...     pub.timegrids = "1996-01-01", "1996-01-05", "1d"
...     hp.prepare_network()
...     hp.prepare_models()
...     hp.load_conditions()
...     headwaters = pub.selections["headwaters"].elements
...     nonheadwaters = pub.selections["nonheadwaters"].elements
...     headwaters.prepare_inputseries(allocate_ram=False, read_jit=True)
...     nonheadwaters.prepare_inputseries(allocate_ram=True, write_jit=True)
...     for element in nonheadwaters:
...         for sequence in element.model.sequences.inputs:
...             sequence.series = 0.0
...     hp.simulate()
Traceback (most recent call last):
...
RuntimeError: While trying to prepare NetCDF files for reading or writing data "just in time" during the current simulation run, the following error occurred: For a specific NetCDF file, you can either read or write data during a simulation run but for file `...hland_96_input_p.nc` both is requested.
Clearly, each NetCDF file we want to read data from needs to span the current simulation period:
>>> with TestIO():
...     pub.timegrids.init.firstdate = "1987-01-01"
...     pub.timegrids.sim.firstdate = "1988-01-01"
...     hp.prepare_inputseries(allocate_ram=False, read_jit=True)
...     hp.simulate()
Traceback (most recent call last):
...
RuntimeError: While trying to prepare NetCDF files for reading or writing data "just in time" during the current simulation run, the following error occurred: The data of the NetCDF `...hland_96_input_p.nc` (Timegrid("1989-11-01 00:00:00", "2021-01-01 00:00:00", "1d")) does not correctly cover the current simulation period (Timegrid("1988-01-01 00:00:00", "1996-01-05 00:00:00", "1d")).
However, each NetCDF file selected for writing must also cover the complete initialisation period. If there is no adequately named NetCDF file, provide_jitaccess() creates a new one for the current initialisation period. If an adequately named file exists, provide_jitaccess() uses it without any attempt to extend it temporally or spatially. The following example shows the insertion of the output data of two subsequent simulation runs into the same NetCDF files:

>>> with TestIO():
...     pub.timegrids = "1996-01-01", "1996-01-05", "1d"
...     hp.prepare_inputseries(allocate_ram=False, read_jit=True)
...     hp.prepare_factorseries(allocate_ram=True, write_jit=True)
...     pub.timegrids.sim.lastdate = "1996-01-03"
...     hp.simulate()
...     pub.timegrids.sim.firstdate = "1996-01-03"
...     pub.timegrids.sim.lastdate = "1996-01-05"
...     hp.simulate()
>>> print_vector(
...     hp.elements["land_dill_assl"].model.sequences.factors.contriarea.series)
0.497067, 0.496728, 0.496461, 0.496361
>>> from hydpy.core.netcdftools import netcdf4
>>> filepath = "HydPy-H-Lahn/series/default/hland_96_factor_contriarea.nc"
>>> with TestIO(), netcdf4.Dataset(filepath, "r") as ncfile:
...     print_vector(ncfile["hland_96_factor_contriarea"][:, 0])
0.497067, 0.496728, 0.496461, 0.496361
Under particular circumstances, the data variable of a NetCDF file can be 3-dimensional. The documentation on function query_array() explains this in detail. The following example demonstrates that reading and writing such 3-dimensional variables “just in time” works correctly. For this purpose, we add a realization dimension to the input file hland_96_input_t.nc (part of the example project data) and the output file hland_96_factor_contriarea.nc (written in the previous example) and redefine their data variables with this additional dimension. As expected, the results are the same as in the previous example:

>>> with TestIO():
...     for name in ("hland_96_input_t", "hland_96_factor_contriarea"):
...         filepath = f"HydPy-H-Lahn/series/default/{name}.nc"
...         with netcdf4.Dataset(filepath, "r+") as ncfile:
...             ncfile.renameVariable(name, "old")
...             _ = ncfile.createDimension("realization", 1)
...             var = ncfile.createVariable(name, "f8",
...                 dimensions=("time", "realization", "stations"))
...             if name == "hland_96_input_t":
...                 var[:] = ncfile["old"][:]
...             else:
...                 var[:] = -999.0
...     pub.timegrids = "1996-01-01", "1996-01-05", "1d"
...     hp.simulate()
>>> with TestIO(), netcdf4.Dataset(filepath, "r") as ncfile:
...     print_vector(ncfile["hland_96_factor_contriarea"][:, 0, 0])
0.496003, 0.495664, 0.495398, 0.495298
If we try to write the output of a simulation run reaching beyond the original initialisation period into the same files, provide_jitaccess() raises an error like the one above:

>>> with TestIO():
...     pub.timegrids = "1996-01-05", "1996-01-10", "1d"
...     hp.prepare_inputseries(allocate_ram=True, read_jit=False)
...     hp.prepare_factorseries(allocate_ram=True, write_jit=True)
...     hp.simulate()
Traceback (most recent call last):
...
RuntimeError: While trying to prepare NetCDF files for reading or writing data "just in time" during the current simulation run, the following error occurred: The data of the NetCDF `...hland_96_factor_tc.nc` (Timegrid("1996-01-01 00:00:00", "1996-01-05 00:00:00", "1d")) does not correctly cover the current simulation period (Timegrid("1996-01-05 00:00:00", "1996-01-10 00:00:00", "1d")).
>>> hp.prepare_factorseries(allocate_ram=False, write_jit=False)
Regarding the spatial dimension, things are similar. You can write data for different sequences in subsequent simulation runs, but you need to ensure all required data columns are available right from the start. Hence, relying on the automatic file generation of provide_jitaccess() fails in the following example:

>>> with TestIO():
...     pub.timegrids = "1996-01-01", "1996-01-05", "1d"
...     hp.prepare_inputseries(allocate_ram=False, read_jit=True)
...     headwaters.prepare_fluxseries(allocate_ram=True, write_jit=True)
...     hp.simulate()
...     nonheadwaters.prepare_fluxseries(allocate_ram=True, write_jit=True)
...     hp.simulate()
Traceback (most recent call last):
...
RuntimeError: While trying to prepare NetCDF files for reading or writing data "just in time" during the current simulation run, the following error occurred: No data for (sub)device `land_lahn_kalk_0` is available in NetCDF file `...hland_96_flux_pc.nc`.
Of course, one way to prepare complete HydPy-compatible NetCDF files is to let HydPy do it:
>>> with TestIO(), pub.sequencemanager.filetype("nc"):
...     hp.prepare_fluxseries(allocate_ram=False, write_jit=False)
...     hp.prepare_fluxseries(allocate_ram=True, write_jit=False)
...     hp.save_fluxseries()
...     headwaters.prepare_fluxseries(allocate_ram=True, write_jit=True)
...     hp.load_conditions()
...     hp.simulate()
>>> for element in hp.elements.search_keywords("catchment"):
...     print_vector(element.model.sequences.fluxes.qt.series)
11.757526, 8.865079, 7.101815, 5.994195
11.672862, 10.100089, 8.984317, 8.202706
20.588949, 8.644722, 7.265526, 6.385012
9.64767, 8.513649, 7.777628, 7.343314
>>> filepath_qt = "HydPy-H-Lahn/series/default/hland_96_flux_qt.nc"
>>> with TestIO(), netcdf4.Dataset(filepath_qt, "r") as ncfile:
...     for jdx in range(4):
...         print_vector(ncfile["hland_96_flux_qt"][:, jdx])
11.757526, 8.865079, 7.101815, 5.994195
0.0, 0.0, 0.0, 0.0
0.0, 0.0, 0.0, 0.0
9.64767, 8.513649, 7.777628, 7.343314
>>> with TestIO():
...     headwaters.prepare_fluxseries(allocate_ram=True, write_jit=False)
...     nonheadwaters.prepare_fluxseries(allocate_ram=True, write_jit=True)
...     hp.load_conditions()
...     hp.simulate()
>>> with TestIO(), netcdf4.Dataset(filepath_qt, "r") as ncfile:
...     for jdx in range(4):
...         print_vector(ncfile["hland_96_flux_qt"][:, jdx])
11.757526, 8.865079, 7.101815, 5.994195
11.672862, 10.100089, 8.984317, 8.202706
20.588949, 8.644722, 7.265526, 6.385012
9.64767, 8.513649, 7.777628, 7.343314
>>> hp.prepare_fluxseries(allocate_ram=False, write_jit=False)
There should be no limitation when reading data “just in time” under different deploymode options. For demonstration, we first calculate the time series of the Sim sequences of all nodes, assign them to the corresponding Obs sequences afterwards, and then start another simulation to (again) write both the simulated and the observed values to NetCDF files:

>>> with TestIO():
...     hp.prepare_simseries(allocate_ram=True, write_jit=True)
...     hp.prepare_obsseries(allocate_ram=True, write_jit=True)
...     hp.load_conditions()
...     hp.simulate()
...     for idx, node in enumerate(hp.nodes):
...         node.sequences.obs.series = node.sequences.sim.series
...     hp.load_conditions()
...     hp.simulate()
>>> for node in hp.nodes:
...     print_vector(node.sequences.sim.series)
11.757526, 8.865079, 7.101815, 5.994195
54.019337, 37.257561, 31.865308, 28.359542
42.346475, 27.157472, 22.88099, 20.156836
9.64767, 8.513649, 7.777628, 7.343314
>>> for node in hp.nodes:
...     print_vector(node.sequences.obs.series)
11.757526, 8.865079, 7.101815, 5.994195
54.019337, 37.257561, 31.865308, 28.359542
42.346475, 27.157472, 22.88099, 20.156836
9.64767, 8.513649, 7.777628, 7.343314
>>> filepath_sim = "HydPy-H-Lahn/series/default/sim_q.nc"
>>> with TestIO(), netcdf4.Dataset(filepath_sim, "r") as ncfile:
...     for jdx in range(4):
...         print_vector(ncfile["sim_q"][:, jdx])
11.757526, 8.865079, 7.101815, 5.994195
9.64767, 8.513649, 7.777628, 7.343314
42.346475, 27.157472, 22.88099, 20.156836
54.019337, 37.257561, 31.865308, 28.359542
>>> filepath_obs = "HydPy-H-Lahn/series/default/obs_q.nc"
>>> from hydpy.core.netcdftools import query_timegrid
>>> with TestIO(), netcdf4.Dataset(filepath_obs, "r") as ncfile:
...     tg = query_timegrid(ncfile, hp.nodes.dill_assl.sequences.obs)
...     i0 = tg[pub.timegrids.sim.firstdate]
...     i1 = tg[pub.timegrids.sim.lastdate]
...     for jdx in range(4):
...         print_vector(ncfile["obs_q"][i0:i1, jdx])
9.64767, 8.513649, 7.777628, 7.343314
11.757526, 8.865079, 7.101815, 5.994195
42.346475, 27.157472, 22.88099, 20.156836
54.019337, 37.257561, 31.865308, 28.359542
Now we stop all sequences from writing to NetCDF files, remove the two headwater elements from the currently active selection, and start another simulation run. The time series of both headwater nodes are zero due to the missing inflow from their inlet headwater sub-catchments. The non-headwater nodes only receive inflow from the two non-headwater sub-catchments:
>>> with TestIO():
...     hp.prepare_simseries(allocate_ram=True, write_jit=False)
...     hp.prepare_obsseries(allocate_ram=True, write_jit=False)
...     hp.update_devices(nodes=hp.nodes, elements=hp.elements - headwaters)
...     hp.load_conditions()
...     hp.simulate()
>>> for node in hp.nodes:
...     print_vector(node.sequences.sim.series)
0.0, 0.0, 0.0, 0.0
42.261811, 18.744811, 16.249844, 14.587718
30.588949, 8.644722, 7.265526, 6.385012
0.0, 0.0, 0.0, 0.0
Finally, we set the deploymode of the headwater nodes dill_assl and lahn_marb to oldsim and obs, respectively, and read their previously written time series “just in time”. As expected, the values of the two non-headwater nodes are identical to those of our initial example:

>>> with TestIO():
...     hp.nodes["dill_assl"].prepare_simseries(allocate_ram=True,
...                                             read_jit=True)
...     hp.nodes["dill_assl"].deploymode = "oldsim"
...     hp.nodes["lahn_marb"].prepare_obsseries(allocate_ram=True,
...                                             read_jit=True)
...     hp.nodes["lahn_marb"].deploymode = "obs"
...     hp.load_conditions()
...     hp.simulate()
>>> for node in hp.nodes:
...     print_vector(node.sequences.sim.series)
11.757526, 8.865079, 7.101815, 5.994195
54.019337, 37.257561, 31.865308, 28.359542
42.346475, 27.157472, 22.88099, 20.156836
0.0, 0.0, 0.0, 0.0