Karana.KUtils.DataStruct#
Classes and functions related to DataStruct.
- DataStruct is a wrapper around pydantic.BaseModel. It contains extra functionality to:
Easily save/load `DataStruct`s from various file types.
Compare `DataStruct`s with one another either identically, using `__eq__`, or approximately, using `isApprox`.
Populate a Kclick CLI application's options from a `DataStruct`.
Create an instance of a `DataStruct` from a Kclick CLI application's options.
Generate an asciidoc with the data of a `DataStruct`.
The saving/loading and comparison can be done recursively, meaning they accept `DataStruct`s whose fields are other `DataStruct`s. In addition, they support nested Python types, such as a dict of lists of numpy arrays.
Attributes#
SentinalValue

T
Classes#
DataStruct – Wrapper around pydantic.BaseModel that adds functionality useful for modeling and simulation.

SentinalValueClass – Dummy class used as a sentinel value.

DSLinker – Class used to populate a CLI application with DataStruct fields and create a DataStruct instance from CLI options.

IdMixin – Mixin to add ID tracking to a DataStruct.
Functions#
NestedBaseMixin – Create a NestedBaseMixin class.
Module Contents#
- class Karana.KUtils.DataStruct.DataStruct(/, **data: Any)[source]#
Bases: pydantic.BaseModel

Wrapper around pydantic.BaseModel that adds functionality useful for modeling and simulation.
- This class adds functionality to the pydantic.BaseModel, including:
Easily save/load `DataStruct`s from various file types.
Compare `DataStruct`s with one another either identically, using `__eq__`, or approximately, using `isApprox`.
Generate an asciidoc with the data of a `DataStruct`.
The saving/loading and comparison can be done recursively, meaning they accept `DataStruct`s whose fields are other `DataStruct`s. In addition, they support nested Python types, such as a dict of lists of numpy arrays.
- Parameters:
version (tuple[int, int]) – Holds the version of this DataStruct. Users should override the current version using the _version_default class variable. DataStructs that use this should also add a field validator or model validator to handle version mismatches.
- version: tuple[int, int] = None#
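The version-migration pattern described above can be sketched with plain pydantic. This is an illustration only, not the Karana implementation: the ConfigDS class, its fields, and the v1-to-v2 rename are all hypothetical, and pydantic.BaseModel stands in for DataStruct.

```python
from pydantic import BaseModel, model_validator

class ConfigDS(BaseModel):
    # The field default stands in for overriding _version_default.
    version: tuple[int, int] = (2, 0)
    gain: float = 1.0

    @model_validator(mode="before")
    @classmethod
    def _migrate_version(cls, data):
        # Upgrade data saved by an older version of this struct.
        if isinstance(data, dict):
            version = tuple(data.get("version", (2, 0)))
            if version < (2, 0):
                # Hypothetical change: the field was renamed from "k"
                # to "gain" in v2.0.
                if "k" in data:
                    data["gain"] = data.pop("k")
                data["version"] = (2, 0)
        return data

old = ConfigDS.model_validate({"version": (1, 3), "k": 0.5})
assert old.gain == 0.5 and old.version == (2, 0)
```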
- model_config#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- classmethod cli(exclude: list[str] = [], extra_info: dict[str, Any] = {})[source]#
Add this DataStruct's fields to a CLI.
- Parameters:
exclude (list[str]) – List of fields to exclude from the CLI.
extra_info (dict[str, Any]) – Extra information used to generate the CLI. Examples include:
- name – Name of the field.
- help – Description for the field.
- type – Type for the field.
- default – Default for the field.
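The exclude/extra_info mechanics can be sketched in plain Python. This is an illustration of the idea only, not the Karana implementation; in particular, the assumption that extra_info is keyed by field name, and the build_cli_options helper itself, are hypothetical, with simple (type, default) pairs standing in for pydantic field metadata.

```python
def build_cli_options(fields, exclude=(), extra_info=None):
    """Turn field specs into CLI option specs, honoring overrides."""
    extra_info = extra_info or {}
    options = {}
    for name, (ftype, default) in fields.items():
        if name in exclude:
            continue  # excluded fields never become options
        info = extra_info.get(name, {})
        # extra_info entries override the name, type, default, and help.
        options[info.get("name", name)] = {
            "type": info.get("type", ftype),
            "default": info.get("default", default),
            "help": info.get("help", ""),
        }
    return options

fields = {"mass": (float, 1.0), "name": (str, "sat"), "debug": (bool, False)}
opts = build_cli_options(
    fields,
    exclude=["debug"],
    extra_info={"mass": {"help": "Mass in kg", "default": 10.0}},
)
assert "debug" not in opts
assert opts["mass"] == {"type": float, "default": 10.0, "help": "Mass in kg"}
```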
- classmethod fromLinker(vals_orig: dict[str, Any], excluded_vals: dict[str, Any] = {}, ds_linker: DSLinker | None = None) Self[source]#
Create an instance of the DataStruct from a CLI dictionary.
- Parameters:
vals_orig (dict[str, Any]) – CLI dictionary to use to build the DataStruct.
excluded_vals (dict[str, Any]) – Values for any fields that were excluded from the DSLinker.
ds_linker (Optional[DSLinker]) – DSLinker to use to build the DataStruct. This is optional, and is only necessary if there are multiple DSLinkers for a DataStruct.
- toFile(file: pathlib.Path | str | IO[bytes], suffix: Literal['.json', '.yaml', '.yml', '.h5', '.hdf5', '.pickle', '.pck', '.pcl'] | None = None) None[source]#
- toFile(g: h5py.Group) None
Write the DataStruct to a H5 group.
See overloads for details.
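The suffix-based dispatch implied by the signature can be sketched as follows. This is an illustration only, not the Karana implementation: only JSON and pickle are wired up, and the behavior of inferring the format from the path suffix when none is given explicitly is an assumption.

```python
import json
import pathlib
import pickle
import tempfile

def to_file(data, file, suffix=None):
    """Write data to file, choosing the format from the suffix."""
    path = pathlib.Path(file)
    # If no suffix is given explicitly, infer it from the path.
    suffix = suffix or path.suffix
    if suffix == ".json":
        path.write_text(json.dumps(data))
    elif suffix in (".pickle", ".pck", ".pcl"):
        path.write_bytes(pickle.dumps(data))
    else:
        raise ValueError(f"Unsupported suffix: {suffix!r}")

tmp = pathlib.Path(tempfile.mkdtemp()) / "state.json"
to_file({"x": 1}, tmp)
assert json.loads(tmp.read_text()) == {"x": 1}
```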
- classmethod fromFile(file: pathlib.Path | str | IO[bytes], suffix: Literal['.json', '.yaml', '.yml', '.h5', '.hdf5', '.pickle', '.pck', '.pcl'] | None = None) Self[source]#
- classmethod fromFile(g: h5py.Group) Self
Create an instance of this DataStruct from a file.
See overloads for details.
- isApprox(other: Self, prec: float = MATH_EPSILON) bool[source]#
Check if this DataStruct is approximately equal to another DataStruct of the same type.
This recursively moves through the public fields only. Note that Pydantic's __eq__ checks both private and public fields. "Recursively" here means that if a field is an iterator, another DataStruct, an iterator of DataStructs, etc., we descend into the nested structure, calling isApprox where appropriate on the items in the iterator, the fields in the DataStruct, etc. If the field (or iterated value, etc.) does not have an isApprox method, then we fall back to using __eq__. If all calls to isApprox (or __eq__) return True, then this returns True; otherwise, this returns False.
- Parameters:
other (Self) – A DataStruct of the same type.
prec (float) – The precision to use. Karana.Math.MATH_EPSILON is the default.
- Returns:
True if the two DataStructures are approximately equal. False otherwise.
- Return type:
bool
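The recursion-with-fallback behavior described above can be sketched in plain Python. This is an illustration of the mechanism only, not the Karana implementation; a literal tolerance stands in for Karana.Math.MATH_EPSILON.

```python
import math

def is_approx(a, b, prec=1e-9):
    # Prefer an isApprox method if the value provides one.
    if hasattr(a, "isApprox"):
        return a.isApprox(b, prec)
    if isinstance(a, float) and isinstance(b, float):
        return math.isclose(a, b, abs_tol=prec)
    # Recurse into common containers, comparing item by item.
    if isinstance(a, dict) and isinstance(b, dict):
        return a.keys() == b.keys() and all(
            is_approx(a[k], b[k], prec) for k in a
        )
    if isinstance(a, (list, tuple)) and isinstance(b, (list, tuple)):
        return len(a) == len(b) and all(
            is_approx(x, y, prec) for x, y in zip(a, b)
        )
    # No isApprox available: fall back to plain equality.
    return a == b

assert is_approx({"v": [1.0, 2.0 + 1e-12]}, {"v": [1.0, 2.0]})
assert not is_approx({"v": [1.0, 2.1]}, {"v": [1.0, 2.0]})
```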
- __eq__(other: object) bool[source]#
Check if this DataStruct is equal to another DataStruct of the same type.
First a normal == is tried on the fields. If that doesn't work, then we recurse into the fields looking for numpy arrays. "Recursively" here means that if a field is an iterator, another DataStruct, an iterator of DataStructs, etc., we descend into the nested structure, calling the appropriate operator on the items in the iterator, the fields in the DataStruct, etc. This is done mainly for numpy arrays, where we want to call np.array_equal rather than use ==.
- Parameters:
other (object) – The object to compare with.
- Returns:
True if the two DataStructures are equal. False otherwise.
- Return type:
bool
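The numpy special case described above can be sketched as follows. This is an illustration of why the fallback exists, not the Karana implementation: == on numpy arrays yields an elementwise array rather than a bool, so a field containing arrays needs np.array_equal.

```python
import numpy as np

def values_equal(a, b):
    # Arrays must be compared with np.array_equal, since a == b on
    # arrays returns an elementwise array, not a single bool.
    if isinstance(a, np.ndarray) or isinstance(b, np.ndarray):
        return np.array_equal(a, b)
    # Recurse into common containers looking for arrays.
    if isinstance(a, dict) and isinstance(b, dict):
        return a.keys() == b.keys() and all(
            values_equal(a[k], b[k]) for k in a
        )
    if isinstance(a, (list, tuple)) and isinstance(b, (list, tuple)):
        return len(a) == len(b) and all(
            values_equal(x, y) for x, y in zip(a, b)
        )
    return a == b

x = {"pos": np.array([1.0, 2.0]), "name": "sat"}
y = {"pos": np.array([1.0, 2.0]), "name": "sat"}
assert values_equal(x, y)
assert not values_equal(x, {"pos": np.array([1.0, 3.0]), "name": "sat"})
```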
- class Karana.KUtils.DataStruct.SentinalValueClass[source]#
Bases: object

Dummy class used as a sentinel value.
- Karana.KUtils.DataStruct.SentinalValue#
- class Karana.KUtils.DataStruct.DSLinker(data_struct: type[DataStruct], exclude: list[str], extra_info: dict[str, Any])[source]#
Class used to populate a CLI application with DataStruct fields and create a DataStruct instance from CLI options.
- data_struct#
- exclude#
- extra_info#
- name_link: dict[str, str]#
- Karana.KUtils.DataStruct.T#
- class Karana.KUtils.DataStruct.IdMixin(/, **data: Any)[source]#
Bases: pydantic.BaseModel, Generic[T]

Mixin to add ID tracking to a DataStruct.
For bookkeeping, it is common to want to track the ID of the object used to create the DataStruct. This mixin makes it easy to do so. It adds the private _id variable with a default value of None, adds a karanaId property for it, and overrides the appropriate methods so that _id is serialized/deserialized if set. It is the job of the class using the mixin to add objects to _objects_from_id whenever appropriate.
- property karanaId: int | None#
Retrieve the private _id variable.
- static findObjectsCreatedById(val: Any, id: int) list[T] | None[source]#
Find any objects in the val data structure that were created for the ID given by id.
This assumes unique IDs (and a unique DataStruct with that ID). Therefore, the search stops at the first match.
- Parameters:
val (Any) – The data structure to recurse through. This can be a composite type consisting of DataStructs, lists, tuples, sets, and dictionaries.
id (int) – The ID to use in the search.
- Returns:
None if the ID was not found. Otherwise, a list of objects created with the ID.
- Return type:
list[T] | None
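The recursive search described above can be sketched in plain Python. This is an illustration only, not the Karana implementation: the Tagged class and its karana_id attribute are hypothetical stand-ins for objects carrying an IdMixin-style ID.

```python
class Tagged:
    """Hypothetical stand-in for an object tracked by ID."""
    def __init__(self, karana_id, payload):
        self.karana_id = karana_id
        self.payload = payload

def find_objects_by_id(val, id):
    # Stop at the first object whose ID matches (IDs are assumed unique).
    if getattr(val, "karana_id", None) == id:
        return [val]
    # Recurse into the composite types named in the docstring.
    if isinstance(val, dict):
        children = val.values()
    elif isinstance(val, (list, tuple, set)):
        children = val
    else:
        return None
    for child in children:
        found = find_objects_by_id(child, id)
        if found is not None:
            return found
    return None

tree = {"a": [Tagged(1, "x")], "b": (Tagged(2, "y"),)}
assert find_objects_by_id(tree, 2)[0].payload == "y"
assert find_objects_by_id(tree, 99) is None
```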
- Karana.KUtils.DataStruct.NestedBaseMixin(field_name: str)[source]#
Create a NestedBaseMixin class.
This function returns a class that is a mixin. The mixin is used for classes that will be nested in other Pydantic models, and that serve as a base class for other Pydantic classes. An example of this is KModelDS, which is used in StatePropagatorDS. Other types, e.g., PointGravityModelDS, will be derived from this. When we serialize/deserialize, we want to save/load the derived type, not the base type. This should be used in conjunction with SerializeAsAny.
A field will be added to keep track of the type. Its name is given by the field_name parameter.
- Parameters:
field_name (str) – The name of the field that is automatically added to the base class as part of the mixin and all derived classes.
- Returns:
The newly created mixin class.
- Return type:
_NestedBaseMixin
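The type-tag idea behind this mixin can be sketched without pydantic. This is an illustration only, not the Karana implementation: the KModelBase registry, the model_type field name, and the dump/load methods are all hypothetical stand-ins for the field added by field_name and pydantic's SerializeAsAny machinery.

```python
class KModelBase:
    """Base class that records subclasses and stamps a type tag on dump."""
    _registry: dict = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Remember every derived type so load() can find it by name.
        KModelBase._registry[cls.__name__] = cls

    def dump(self) -> dict:
        # "model_type" plays the role of the field added via field_name.
        return {"model_type": type(self).__name__, **vars(self)}

    @classmethod
    def load(cls, data: dict):
        # Restore the derived type named in the tag, not the base type.
        data = dict(data)
        sub = cls._registry[data.pop("model_type")]
        obj = sub.__new__(sub)
        obj.__dict__.update(data)
        return obj

class PointGravityModel(KModelBase):
    def __init__(self, mu: float):
        self.mu = mu

blob = PointGravityModel(mu=398600.4418).dump()
restored = KModelBase.load(blob)
assert type(restored) is PointGravityModel and restored.mu == 398600.4418
```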