anypinn.core
Core PINN building blocks.
This module provides the foundational abstractions for defining and solving physics-informed neural network problems.
Argument and Parameter
An Argument wraps a fixed value (float or callable) that an ODE/PDE
function receives. A Parameter is a learnable Argument — it
inherits from both nn.Module and Argument, so it participates in
gradient computation while exposing the same call interface.
To promote a fixed constant to a learnable parameter, replace `args = {"beta": Argument(0.3)}` with:

```python
# Learnable: beta starts at 0.3, the optimizer adjusts it
args = {}
params = {"beta": Parameter(ScalarConfig(init_value=0.3))}
```

The ODE/PDE function signature stays the same either way: the callable still reads `args["beta"](x)` and receives a tensor.
This works because ResidualsConstraint merges params into args
before calling the ODE function, and Parameter is a subclass of
Argument. For function-valued parameters (e.g. beta(t) that varies over
the domain), use Parameter(MLPConfig(...)) instead of ScalarConfig.
ArgsRegistry
ArgsRegistry (a dict[str, Argument]) is the unified interface that
ODE/PDE callables receive. It maps string keys to Argument instances.
Because Parameter extends Argument, the callable is agnostic to
whether a value is fixed or being learned — it just calls
args["key"](x) and gets a tensor back.
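As a minimal sketch of this agnosticism (the function name `sir_infection_residual` and the lambda stand-ins below are illustrative, not part of the library; only the `args["key"](x)` call convention is taken from the docs):

```python
import torch
from torch import Tensor

# Any mapping of str -> callable(Tensor) -> Tensor satisfies the
# ArgsRegistry contract; Argument and Parameter both expose __call__.
def sir_infection_residual(t: Tensor, S: Tensor, I: Tensor, args: dict) -> Tensor:
    # The callable never knows whether beta/gamma are fixed or learnable.
    beta = args["beta"](t)
    gamma = args["gamma"](t)
    # dI/dt for the SIR infection equation (illustrative)
    return beta * S * I - gamma * I

# Fixed values, mimicking Argument's broadcast-to-input behavior
args = {
    "beta": lambda t: torch.full_like(t, 0.3),
    "gamma": lambda t: torch.full_like(t, 0.1),
}
t = torch.zeros(4, 1)
S, I = torch.full_like(t, 0.99), torch.full_like(t, 0.01)
out = sir_infection_residual(t, S, I, args)
print(out.shape)  # torch.Size([4, 1])
```

Swapping `args["beta"]` for a `Parameter` would leave `sir_infection_residual` unchanged; only the registry entry differs.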
InferredContext
InferredContext is created automatically during data loading. It holds:
- domain: an N-dimensional Domain inferred from the training coordinates (bounds and step sizes).
- validation: resolved ground-truth functions for parameter comparison.
The context is injected into the Problem (and transitively into each
Constraint) before training starts. Constraints can override
inject_context() to capture domain-specific information — for example,
ICConstraint reads domain.x0 to know where to enforce initial
conditions.
Collocation strategies
Collocation points are the unsupervised sample locations where the PDE/ODE residual is minimized. The choice of sampling strategy affects convergence:
- uniform: deterministic Cartesian grid. Predictable, but scales poorly to high dimensions.
- random: uniform random sampling. Simple and dimension-agnostic.
- latin_hypercube: stratified random sampling with better space-filling coverage than pure random. Good default for most problems.
- log_uniform_1d: samples densely near the lower domain bound. Useful for 1-D problems where early dynamics matter most (e.g. epidemic models).
- adaptive: residual-weighted resampling that concentrates points where the current model has the largest residual. Requires a ResidualScorer and the AdaptiveCollocationCallback to refresh points during training.
Select a strategy via TrainingDataConfig(collocation_sampler="...").
Activations: TypeAlias = Literal['tanh', 'relu', 'leaky_relu', 'sigmoid', 'selu', 'softplus', 'identity']
module-attribute
Supported activation functions.
ArgsRegistry: TypeAlias = dict[str, Argument]
module-attribute
CollocationStrategies: TypeAlias = Literal['uniform', 'random', 'latin_hypercube', 'log_uniform_1d', 'adaptive']
module-attribute
Supported collocation sampling strategies.
Criteria: TypeAlias = Literal['mse', 'huber', 'l1']
module-attribute
Supported loss criteria.
DataBatch: TypeAlias = tuple[Tensor, Tensor]
module-attribute
Type alias for data batch: (x, y).
FieldsRegistry: TypeAlias = dict[str, Field]
module-attribute
LOSS_KEY = 'loss'
module-attribute
Key used for logging the total loss.
ParamsRegistry: TypeAlias = dict[str, Parameter]
module-attribute
Predictions: TypeAlias = tuple[DataBatch, dict[str, Tensor], dict[str, Tensor] | None]
module-attribute
Type alias for model predictions: (input_batch, predictions_dictionary, true_values_dictionary). predictions_dictionary maps each field or parameter name to its prediction; true_values_dictionary maps the same names to their true values, and is None if no validation source is configured.
ResolvedValidation: TypeAlias = dict[str, Callable[[Tensor], Tensor]]
module-attribute
Validation registry after ColumnRef entries have been resolved to callables.
TrainingBatch: TypeAlias = tuple[DataBatch, Tensor]
module-attribute
Training batch tuple: ((x_data, y_data), x_coll).
ValidationRegistry: TypeAlias = dict[str, ValidationSource]
module-attribute
Registry mapping parameter names to their validation sources.
Example
```python
>>> validation: ValidationRegistry = {
...     "beta": lambda x: torch.sin(x),           # Pure function
...     "gamma": ColumnRef(column="gamma_true"),  # From data
...     "delta": None,                            # No validation
... }
```
ValidationSource: TypeAlias = Callable[[Tensor], Tensor] | ColumnRef | None
module-attribute
A source for ground truth values. Can be:

- A callable that takes x coordinates and returns true values.
- A ColumnRef that references a column in loaded data.
- None if no validation is needed for this parameter.
__all__ = ['LOSS_KEY', 'Activations', 'AdamConfig', 'AdaptiveSampler', 'ArgsRegistry', 'Argument', 'CollocationSampler', 'CollocationStrategies', 'ColumnRef', 'Constraint', 'CosineAnnealingConfig', 'Criteria', 'DataBatch', 'DataCallback', 'Domain', 'EarlyStoppingConfig', 'Field', 'FieldsRegistry', 'FourierEncoding', 'GenerationConfig', 'InferredContext', 'IngestionConfig', 'LBFGSConfig', 'LatinHypercubeSampler', 'LogFn', 'LogUniform1DSampler', 'MLPConfig', 'PINNDataModule', 'PINNDataset', 'PINNHyperparameters', 'Parameter', 'ParamsRegistry', 'Predictions', 'Problem', 'RandomFourierFeatures', 'RandomSampler', 'ReduceLROnPlateauConfig', 'ResidualScorer', 'ResolvedValidation', 'SMMAStoppingConfig', 'ScalarConfig', 'TrainingBatch', 'TrainingDataConfig', 'UniformSampler', 'ValidationRegistry', 'ValidationSource', 'build_criterion', 'build_sampler', 'get_activation', 'resolve_validation']
module-attribute
AdamConfig
dataclass
Configuration for the Adam optimizer.
Attributes:
| Name | Type | Description |
|---|---|---|
| lr | float | Learning rate (must be positive). |
| betas | tuple[float, float] | Coefficients for computing running averages of the gradient and its square. Both must be in (0, 1). |
| weight_decay | float | L2 penalty coefficient (non-negative). |
Source code in src/anypinn/core/config.py
betas: tuple[float, float] = (0.9, 0.999)
class-attribute
instance-attribute
lr: float = 0.001
class-attribute
instance-attribute
weight_decay: float = 0.0
class-attribute
instance-attribute
__init__(*, lr: float = 0.001, betas: tuple[float, float] = (0.9, 0.999), weight_decay: float = 0.0) -> None
__post_init__() -> None
Source code in src/anypinn/core/config.py
AdaptiveSampler
Residual-weighted adaptive collocation sampler.
Draws an oversample of candidate points, scores them using a
ResidualScorer, and retains the top-scoring subset. A configurable
exploration_ratio ensures a fraction of purely random points to prevent
mode collapse.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| scorer | ResidualScorer | Callable returning per-point residual scores. | required |
| oversample_factor | int | Multiplier on the requested count n when drawing candidate points. | 4 |
| exploration_ratio | float | Fraction of the budget reserved for random points. | 0.2 |
| seed | int \| None | Optional seed for reproducible sampling. | None |
Source code in src/anypinn/core/samplers.py
__init__(scorer: ResidualScorer, oversample_factor: int = 4, exploration_ratio: float = 0.2, seed: int | None = None) -> None
Source code in src/anypinn/core/samplers.py
sample(n: int, domain: Domain) -> Tensor
Return n residual-weighted points within domain.
Source code in src/anypinn/core/samplers.py
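The oversample-score-retain loop can be sketched for a 1-D domain as follows (illustrative only; the library's AdaptiveSampler works on N-dimensional Domain objects and may weight differently):

```python
import torch

def adaptive_sample(n: int, bounds: tuple[float, float], scorer,
                    oversample_factor: int = 4,
                    exploration_ratio: float = 0.2) -> torch.Tensor:
    """Draw an oversample of candidates, keep the top-scoring subset,
    and reserve a slice of the budget for purely random exploration."""
    lo, hi = bounds
    n_explore = int(round(n * exploration_ratio))
    n_exploit = n - n_explore
    # Candidate pool, oversample_factor times larger than the budget
    candidates = lo + (hi - lo) * torch.rand(n_exploit * oversample_factor, 1)
    scores = scorer(candidates).flatten()
    exploit = candidates[torch.topk(scores, k=n_exploit).indices]
    # Random points guard against mode collapse onto a few residual spikes
    explore = lo + (hi - lo) * torch.rand(n_explore, 1)
    return torch.cat([exploit, explore], dim=0)

# Toy scorer: pretend the residual peaks near x = 2
scorer = lambda x: torch.exp(-(x - 2.0) ** 2)
pts = adaptive_sample(100, (0.0, 10.0), scorer)
print(pts.shape)  # torch.Size([100, 1])
```

With exploration_ratio=0.2, 80 of the 100 points cluster near the residual peak while 20 remain uniform over the domain.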
Argument
A fixed (non-learnable) argument passed to an ODE/PDE function.
Wraps a float constant or a callable and provides a uniform
__call__ interface. See also Parameter for the learnable
variant.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| value | float \| Callable[[Tensor], Tensor] | The value (float) or function (callable). | required |
Example
```python
>>> beta = Argument(0.3)
>>> beta(torch.tensor([1.0]))
tensor(0.3000)
>>> beta_fn = Argument(lambda t: 0.3 * torch.exp(-0.1 * t))
>>> beta_fn(torch.tensor([0.0]))
tensor([0.3000])
```
Source code in src/anypinn/core/nn.py
__call__(x: Tensor) -> Tensor
Evaluate the argument.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | Input tensor (context). | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The value of the argument, broadcast if necessary. |
Source code in src/anypinn/core/nn.py
__init__(value: float | Callable[[Tensor], Tensor])
CollocationSampler
Bases: Protocol
Protocol for collocation point samplers.
Implementations must return a tensor of shape (n, domain.ndim) with all
points inside the domain bounds.
Source code in src/anypinn/core/samplers.py
ColumnRef
dataclass
Reference to a column in loaded data for ground truth comparison.
This allows practitioners to specify validation data by column name without writing custom functions. The column is resolved lazily when data is loaded.
Attributes:
| Name | Type | Description |
|---|---|---|
| column | str | Name of the column in the loaded DataFrame. |
| transform | Callable[[Tensor], Tensor] \| None | Optional transformation to apply to the column values. |
Example
```python
>>> validation = {
...     "beta": ColumnRef(column="Rt", transform=lambda rt: rt * delta),
... }
```
Source code in src/anypinn/core/validation.py
column: str
instance-attribute
transform: Callable[[Tensor], Tensor] | None = None
class-attribute
instance-attribute
__init__(column: str, transform: Callable[[Tensor], Tensor] | None = None) -> None
Constraint
Bases: ABC
Abstract base class for a constraint (loss term) in the PINN.
Subclass this and implement loss() to define custom physics or
data-fitting terms. The Problem sums all constraint losses during
training.
Example
```python
>>> class EnergyConstraint(Constraint):
...     def loss(self, batch, criterion, log=None):
...         (x_data, y_data), x_coll = batch
...         energy = compute_energy(x_coll)
...         target = torch.zeros_like(energy)
...         loss = criterion(energy, target)
...         if log is not None:
...             log("loss/energy", loss)
...         return loss
```
Source code in src/anypinn/core/problem.py
inject_context(context: InferredContext) -> None
Inject the context into the constraint. This can be used by the constraint to access the data used to compute the loss.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| context | InferredContext | The context to inject. | required |
Source code in src/anypinn/core/problem.py
loss(batch: TrainingBatch, criterion: nn.Module, log: LogFn | None = None) -> Tensor
abstractmethod
Calculate the loss for this constraint.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| batch | TrainingBatch | The current batch of data/collocation points. | required |
| criterion | Module | The loss function (e.g. MSE). | required |
| log | LogFn \| None | Optional logging function. | None |

Returns:

| Type | Description |
|---|---|
| Tensor | The calculated loss tensor. |
Source code in src/anypinn/core/problem.py
CosineAnnealingConfig
dataclass
Configuration for Cosine Annealing LR Scheduler.
Attributes:
| Name | Type | Description |
|---|---|---|
| T_max | int | Maximum number of iterations (typically set to the total number of training epochs). |
| eta_min | float | Minimum learning rate at the end of the schedule. |
Source code in src/anypinn/core/config.py
T_max: int
instance-attribute
eta_min: float = 0.0
class-attribute
instance-attribute
__init__(*, T_max: int, eta_min: float = 0.0) -> None
DataCallback
Base class for callbacks that transform data during setup.
Subclass this to apply custom preprocessing (e.g. scaling, normalization) to training data and collocation points before the dataset is constructed.
Source code in src/anypinn/core/dataset.py
on_after_setup(dm: PINNDataModule) -> None
Hook called after PINNDataModule.setup() completes.
Use this to perform post-processing that depends on the fully constructed data module (e.g. adjusting validation functions to account for earlier scaling transforms).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dm | PINNDataModule | The fully initialized data module. | required |
Source code in src/anypinn/core/dataset.py
transform_data(data: DataBatch, coll: Tensor) -> tuple[DataBatch, Tensor]
Transform training data and collocation points.
Called during PINNDataModule.setup() after data is loaded
but before the PINNDataset is created. Multiple callbacks
are applied in registration order.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | DataBatch | Tuple of (x, y) training tensors. | required |
| coll | Tensor | Collocation point coordinates. | required |

Returns:

| Type | Description |
|---|---|
| tuple[DataBatch, Tensor] | Transformed (data, coll) tuple. |
Source code in src/anypinn/core/dataset.py
Domain
dataclass
N-dimensional rectangular domain.
Attributes:
| Name | Type | Description |
|---|---|---|
| bounds | list[tuple[float, float]] | Per-dimension (min, max) pairs. |
| dx | list[float] \| None | Per-dimension step size, or None if unknown. |
Source code in src/anypinn/core/nn.py
bounds: list[tuple[float, float]]
instance-attribute
dx: list[float] | None = None
class-attribute
instance-attribute
ndim: int
property
Number of spatial dimensions.
x0: float
property
Lower bound of the first dimension (convenience for 1-D / time-axis access).
x1: float
property
Upper bound of the first dimension.
__init__(bounds: list[tuple[float, float]], dx: list[float] | None = None) -> None
__repr__() -> str
from_x(x: Tensor) -> Domain
classmethod
Infer domain bounds and step sizes from a coordinate tensor of shape (N, d).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | Coordinate tensor of shape (N, d). | required |

Returns:

| Type | Description |
|---|---|
| Domain | Domain with bounds and dx inferred from the data. |
Example
```python
>>> coords = torch.linspace(0, 10, 100).unsqueeze(1)
>>> domain = Domain.from_x(coords)
>>> domain.x0, domain.x1
(0.0, 10.0)
```
Source code in src/anypinn/core/nn.py
EarlyStoppingConfig
dataclass
Configuration for Early Stopping callback.
Attributes:
| Name | Type | Description |
|---|---|---|
| patience | int | Number of epochs with no improvement before stopping. |
| mode | Literal['min', 'max'] | Whether the monitored metric should be minimized ('min') or maximized ('max'). |
Source code in src/anypinn/core/config.py
mode: Literal['min', 'max']
instance-attribute
patience: int
instance-attribute
__init__(*, patience: int, mode: Literal['min', 'max']) -> None
Field
Bases: Module
A neural field mapping coordinates to a vector of state variables.
For an ODE this maps t -> [S, I, R]; for a PDE it maps
(x, t) -> u(x, t).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | MLPConfig | Configuration for the MLP backing this field. | required |
Example
```python
>>> field = Field(MLPConfig(
...     in_dim=1, out_dim=3,
...     hidden_layers=[32, 32],
...     activation="tanh",
... ))
>>> t = torch.rand(10, 1)
>>> field(t).shape
torch.Size([10, 3])
```
Source code in src/anypinn/core/nn.py
encoder: nn.Module | None = encode
instance-attribute
net = nn.Sequential(*layers)
instance-attribute
__init__(config: MLPConfig)
Source code in src/anypinn/core/nn.py
forward(x: Tensor) -> Tensor
Forward pass of the field.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | Input coordinates (e.g. time, space). | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The values of the field at the input coordinates. |
Source code in src/anypinn/core/nn.py
FourierEncoding
Bases: Module
Sinusoidal positional encoding for periodic or high-frequency signals.
For input \(\mathbf{x} \in \mathbb{R}^{n \times d}\) and
num_frequencies \(K\), the encoding concatenates sine and cosine features at each frequency band \(\omega_k\):

\[\gamma(\mathbf{x}) = \left[\mathbf{x},\ \sin(\omega_1 \mathbf{x}),\ \cos(\omega_1 \mathbf{x}),\ \ldots,\ \sin(\omega_K \mathbf{x}),\ \cos(\omega_K \mathbf{x})\right]\]

producing shape \((n,\, d\,(1 + 2K))\) when include_input=True,
or \((n,\, 2dK)\) when include_input=False (the leading \(\mathbf{x}\) term is dropped).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| num_frequencies | int | Number of frequency bands \(K \geq 1\). | 6 |
| include_input | bool | Prepend the original coordinates to the encoded output. | True |
Source code in src/anypinn/lib/encodings.py
include_input = include_input
instance-attribute
num_frequencies = num_frequencies
instance-attribute
__init__(num_frequencies: int = 6, include_input: bool = True) -> None
Source code in src/anypinn/lib/encodings.py
forward(x: Tensor) -> Tensor
Encode input with sin/cos at each frequency.
Source code in src/anypinn/lib/encodings.py
out_dim(in_dim: int) -> int
Compute output dimension given input dimension.
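The shape arithmetic above can be checked with a minimal sketch (the geometric frequency schedule `2**k` is an assumption on my part; only the output shapes d(1 + 2K) and 2dK are taken from the docs):

```python
import torch

def fourier_encode(x: torch.Tensor, num_frequencies: int = 6,
                   include_input: bool = True) -> torch.Tensor:
    """Sin/cos features at geometrically spaced frequencies (sketch)."""
    freqs = 2.0 ** torch.arange(num_frequencies)             # (K,)
    xb = x.unsqueeze(-1) * freqs                             # (n, d, K)
    enc = torch.cat([torch.sin(xb), torch.cos(xb)], dim=-1)  # (n, d, 2K)
    enc = enc.flatten(start_dim=1)                           # (n, 2dK)
    return torch.cat([x, enc], dim=1) if include_input else enc

x = torch.rand(5, 2)
print(fourier_encode(x, 6).shape)                        # (5, 2*(1+12)) = (5, 26)
print(fourier_encode(x, 6, include_input=False).shape)   # (5, 24)
```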
GenerationConfig
dataclass
Bases: TrainingDataConfig
Configuration for generating synthetic training data.
Used in forward problems where the ground-truth ODE/PDE solution is computed from known parameters and optionally corrupted with noise.
Attributes:
| Name | Type | Description |
|---|---|---|
| x | Tensor | Coordinate tensor to evaluate the ODE/PDE at. |
| noise_level | float | Standard deviation of Gaussian noise added to the generated observations (0.0 for clean data). |
| args_to_train | ArgsRegistry | Arguments used by the data-generation ODE/PDE callable to produce the synthetic solution. |
Source code in src/anypinn/core/config.py
args_to_train: ArgsRegistry
instance-attribute
noise_level: float
instance-attribute
x: Tensor
instance-attribute
__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None, x: Tensor, noise_level: float, args_to_train: ArgsRegistry) -> None
InferredContext
dataclass
Runtime context inferred from training data.
This holds the data that is either explicitly provided in props or inferred from training data.
Source code in src/anypinn/core/context.py
domain = Domain.from_x(x)
instance-attribute
validation = validation
instance-attribute
__init__(x: Tensor, y: Tensor, validation: ResolvedValidation)
Infer context from either generated or loaded data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | x coordinates. | required |
| y | Tensor | Observations. | required |
| validation | ResolvedValidation | Resolved validation dictionary. | required |
Source code in src/anypinn/core/context.py
IngestionConfig
dataclass
Bases: TrainingDataConfig
Configuration for loading training data from a CSV file.
Attributes:
| Name | Type | Description |
|---|---|---|
| df_path | Path | Path to the CSV file. |
| x_transform | Callable[[Any], Any] \| None | Optional transform applied to the x column values after loading (e.g. unit conversion). |
| x_column | str \| None | Name of the column to use as x coordinates. If None, a default is used. |
| y_columns | list[str] | List of column names to use as y observations. |
Source code in src/anypinn/core/config.py
df_path: Path
instance-attribute
x_column: str | None = None
class-attribute
instance-attribute
x_transform: Callable[[Any], Any] | None = None
class-attribute
instance-attribute
y_columns: list[str]
instance-attribute
__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None, df_path: Path, x_transform: Callable[[Any], Any] | None = None, x_column: str | None = None, y_columns: list[str]) -> None
LBFGSConfig
dataclass
Configuration for the L-BFGS optimizer.
Attributes:
| Name | Type | Description |
|---|---|---|
| lr | float | Learning rate (must be positive). |
| max_iter | int | Maximum number of iterations per optimization step. |
| max_eval | int \| None | Maximum number of function evaluations per step (defaults to max_iter * 1.25). |
| history_size | int | Number of past updates to store for the approximation of the inverse Hessian. |
| line_search_fn | str \| None | Line search function ('strong_wolfe' or None). |
Source code in src/anypinn/core/config.py
history_size: int = 100
class-attribute
instance-attribute
line_search_fn: str | None = 'strong_wolfe'
class-attribute
instance-attribute
lr: float = 1.0
class-attribute
instance-attribute
max_eval: int | None = None
class-attribute
instance-attribute
max_iter: int = 20
class-attribute
instance-attribute
__init__(*, lr: float = 1.0, max_iter: int = 20, max_eval: int | None = None, history_size: int = 100, line_search_fn: str | None = 'strong_wolfe') -> None
__post_init__() -> None
Source code in src/anypinn/core/config.py
LatinHypercubeSampler
Latin Hypercube sampler (pure-PyTorch, no SciPy dependency).
Stratifies each dimension into n equal intervals and places one sample
per interval, then shuffles columns independently.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| seed | int \| None | Optional seed for reproducible sampling. | None |
Source code in src/anypinn/core/samplers.py
__init__(seed: int | None = None) -> None
sample(n: int, domain: Domain) -> Tensor
Return n Latin Hypercube-sampled points within domain.
Source code in src/anypinn/core/samplers.py
LogFn
Bases: Protocol
A function that logs a value to a dictionary.
Source code in src/anypinn/core/types.py
__call__(name: str, value: Tensor, progress_bar: bool = False) -> None
Log a value.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | The name to log the value under. | required |
| value | Tensor | The value to log. | required |
| progress_bar | bool | Whether the value should be logged to the progress bar. | False |
Source code in src/anypinn/core/types.py
LogUniform1DSampler
Log-uniform sampler for 1-D domains (reproduces SIR collocation behavior).
Samples uniformly in log1p space and maps back via expm1, producing
a distribution that is denser near the lower bound — useful for epidemic
models where early dynamics are most informative.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| seed | int \| None | Optional seed for reproducible sampling. | None |

Raises:

| Type | Description |
|---|---|
| ValueError | If the domain is not 1-D or its bounds are unsuitable for log-space sampling. |
Source code in src/anypinn/core/samplers.py
__init__(seed: int | None = None) -> None
sample(n: int, domain: Domain) -> Tensor
Return n log-uniformly spaced points within domain.
Source code in src/anypinn/core/samplers.py
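The log1p/expm1 mapping described above can be sketched as follows (illustrative; it assumes a non-negative lower bound, matching the raised ValueError for unsuitable domains):

```python
import torch

def log_uniform_1d(n: int, x0: float, x1: float) -> torch.Tensor:
    """Sample uniformly in log1p space and map back with expm1, which
    concentrates points near the lower bound (assumes x0 >= 0)."""
    lo = torch.log1p(torch.tensor(x0))
    hi = torch.log1p(torch.tensor(x1))
    u = lo + (hi - lo) * torch.rand(n)   # uniform in log1p space
    return torch.expm1(u).unsqueeze(1)   # back to the original scale

pts = log_uniform_1d(1000, 0.0, 100.0)
print(pts.shape)  # torch.Size([1000, 1])
```

On [0, 100], uniformity in log1p(x) places log1p(10)/log1p(100), roughly 52%, of the samples below x = 10, which is exactly the early-time density that epidemic models benefit from.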
MLPConfig
dataclass
Configuration for a Multi-Layer Perceptron (MLP).
Attributes:
| Name | Type | Description |
|---|---|---|
| in_dim | int | Dimension of the input layer. |
| out_dim | int | Dimension of the output layer. |
| hidden_layers | list[int] | List of dimensions for the hidden layers. |
| activation | Activations | Activation function to use between layers. |
| output_activation | Activations \| None | Optional activation function for the output layer. |
| encode | Callable[[Tensor], Tensor] \| None | Optional function to encode inputs before passing them to the MLP. |
Source code in src/anypinn/core/config.py
activation: Activations
instance-attribute
encode: Callable[[Tensor], Tensor] | None = None
class-attribute
instance-attribute
hidden_layers: list[int]
instance-attribute
in_dim: int
instance-attribute
out_dim: int
instance-attribute
output_activation: Activations | None = None
class-attribute
instance-attribute
__init__(*, in_dim: int, out_dim: int, hidden_layers: list[int], activation: Activations, output_activation: Activations | None = None, encode: Callable[[Tensor], Tensor] | None = None) -> None
PINNDataModule
Bases: LightningDataModule, ABC
LightningDataModule for PINNs. Manages data and collocation datasets and creates the combined PINNDataset.
Collocation points are generated via a CollocationSampler selected by the
collocation_sampler field in TrainingDataConfig (string literal).
Subclasses only need to implement gen_data(); collocation generation is
handled by the sampler resolved from the hyperparameters.
Attributes:
| Name | Type | Description |
|---|---|---|
| pinn_ds | | Combined PINNDataset for training. |
| callbacks | list[DataCallback] | Sequence of DataCallback callbacks applied after data loading. |
Source code in src/anypinn/core/dataset.py
callbacks: list[DataCallback] = list(callbacks) if callbacks else []
instance-attribute
context: InferredContext
property
hp = hp
instance-attribute
__init__(hp: PINNHyperparameters, validation: ValidationRegistry | None = None, callbacks: Sequence[DataCallback] | None = None, residual_scorer: ResidualScorer | None = None) -> None
Source code in src/anypinn/core/dataset.py
gen_data(config: GenerationConfig) -> DataBatch
abstractmethod
Generate synthetic training data from a known solution.
Subclasses implement this to solve the ODE/PDE with known parameters and return the resulting data (optionally with added noise).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | GenerationConfig | Generation configuration specifying the domain, noise level, and ground-truth arguments. | required |

Returns:

| Type | Description |
|---|---|
| DataBatch | Tuple of (x, y) tensors. |
Source code in src/anypinn/core/dataset.py
load_data(config: IngestionConfig) -> DataBatch
Load training data from a CSV file.
Reads the CSV at config.df_path, extracts x and y columns,
and returns tensors shaped for PINN training.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | IngestionConfig | Ingestion configuration specifying paths and columns. | required |

Returns:

| Type | Description |
|---|---|
| DataBatch | Tuple of (x, y) tensors. |
Source code in src/anypinn/core/dataset.py
predict_dataloader() -> DataLoader[PredictionBatch]
Returns the prediction dataloader using only the data dataset.
Source code in src/anypinn/core/dataset.py
setup(stage: str | None = None) -> None
Load raw data from IngestionConfig, or generate synthetic data from GenerationConfig. Apply registered callbacks, create InferredContext and datasets.
Source code in src/anypinn/core/dataset.py
train_dataloader() -> DataLoader[TrainingBatch]
Returns the training dataloader using PINNDataset.
Source code in src/anypinn/core/dataset.py
PINNDataset
Bases: Dataset[TrainingBatch]
Dataset used for PINN training. Each sample combines labeled data points and collocation points. Given a data_ratio, the number of data points K per batch is determined either as round(data_ratio * batch_size) when the ratio is a float in [0, 1], or as an absolute count when the ratio is an integer. The remaining C = batch_size - K points are collocation points. Data points are sampled without replacement per epoch: the dataset cycles through all data points and, at the last batch, wraps around to the first indices to preserve the batch size. Collocation points are sampled with replacement from the pool.
The dataset produces batches of shape ((x_data[K, d], y_data[K, ...]), x_coll[C, d]).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x_data | Tensor | Data point x coordinates (time values). | required |
| y_data | Tensor | Data point y values (observations). | required |
| x_coll | Tensor | Collocation point x coordinates. | required |
| batch_size | int | Size of the batch. | required |
| data_ratio | float \| int | Number of data points per batch, either as a fraction in [0, 1] or an absolute count in [0, batch_size]. | required |
Source code in src/anypinn/core/dataset.py
C = batch_size - self.K
instance-attribute
K = round(data_ratio * batch_size)
instance-attribute
batch_size = batch_size
instance-attribute
total_coll = x_coll.shape[0]
instance-attribute
total_data = x_data.shape[0]
instance-attribute
x_coll = x_coll
instance-attribute
x_data = x_data
instance-attribute
y_data = y_data
instance-attribute
__getitem__(index: int) -> TrainingBatch
Return one sample containing K data points and C collocation points.
Source code in src/anypinn/core/dataset.py
__init__(x_data: Tensor, y_data: Tensor, x_coll: Tensor, batch_size: int, data_ratio: float | int)
Source code in src/anypinn/core/dataset.py
PINNHyperparameters
dataclass
Aggregated hyperparameters for the PINN model.
Attributes:
| Name | Type | Description |
|---|---|---|
| lr | float | Base learning rate (used as a fallback when no optimizer configuration is provided). |
| training_data | IngestionConfig \| GenerationConfig | Data source configuration: either IngestionConfig (load from file) or GenerationConfig (synthesize from a known solution). |
| fields_config | MLPConfig | MLP architecture for the neural field(s). |
| params_config | MLPConfig \| ScalarConfig | Configuration for learnable parameters (scalar or MLP-backed). |
| max_epochs | int \| None | Maximum number of training epochs. |
| gradient_clip_val | float \| None | Optional gradient clipping value. |
| criterion | Criteria | Loss function name ('mse', 'huber', or 'l1'). |
| optimizer | AdamConfig \| LBFGSConfig \| None | Optimizer configuration. If None, a default optimizer using lr is used. |
| scheduler | ReduceLROnPlateauConfig \| CosineAnnealingConfig \| None | Learning rate scheduler configuration. |
| early_stopping | EarlyStoppingConfig \| None | Optional early stopping configuration (patience-based). |
| smma_stopping | SMMAStoppingConfig \| None | Optional SMMA stopping configuration (improvement-based). |
Source code in src/anypinn/core/config.py
criterion: Criteria = 'mse'
class-attribute
instance-attribute
early_stopping: EarlyStoppingConfig | None = None
class-attribute
instance-attribute
fields_config: MLPConfig
instance-attribute
gradient_clip_val: float | None = None
class-attribute
instance-attribute
lr: float
instance-attribute
max_epochs: int | None = None
class-attribute
instance-attribute
optimizer: AdamConfig | LBFGSConfig | None = None
class-attribute
instance-attribute
params_config: MLPConfig | ScalarConfig
instance-attribute
scheduler: ReduceLROnPlateauConfig | CosineAnnealingConfig | None = None
class-attribute
instance-attribute
smma_stopping: SMMAStoppingConfig | None = None
class-attribute
instance-attribute
training_data: IngestionConfig | GenerationConfig
instance-attribute
__init__(*, lr: float, training_data: IngestionConfig | GenerationConfig, fields_config: MLPConfig, params_config: MLPConfig | ScalarConfig, max_epochs: int | None = None, gradient_clip_val: float | None = None, criterion: Criteria = 'mse', optimizer: AdamConfig | LBFGSConfig | None = None, scheduler: ReduceLROnPlateauConfig | CosineAnnealingConfig | None = None, early_stopping: EarlyStoppingConfig | None = None, smma_stopping: SMMAStoppingConfig | None = None) -> None
Parameter
Bases: Module, Argument
A learnable parameter that participates in gradient optimization.
Supports scalar parameters (a single trainable value) or
function-valued parameters (e.g. beta(t)) backed by a small MLP.
Because Parameter is a subclass of Argument, it can be
used anywhere an Argument is expected.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | ScalarConfig \| MLPConfig | Configuration for the parameter (ScalarConfig or MLPConfig). | required |
Example
Scalar parameter starting at 0.3:

```python
>>> beta = Parameter(ScalarConfig(init_value=0.3))
>>> beta(torch.tensor([1.0]))  # returns ~0.3
```

Function-valued parameter beta(t):

```python
>>> beta_t = Parameter(MLPConfig(
...     in_dim=1, out_dim=1,
...     hidden_layers=[8],
...     activation="tanh",
... ))
```
Source code in src/anypinn/core/nn.py
config = config
instance-attribute
mode: Literal['scalar', 'mlp']
property
Mode of the parameter: 'scalar' or 'mlp'.
net = nn.Sequential(*layers)
instance-attribute
value = nn.Parameter(torch.tensor(float(config.init_value), dtype=torch.float32))
instance-attribute
__init__(config: ScalarConfig | MLPConfig)
Source code in src/anypinn/core/nn.py
forward(x: Tensor | None = None) -> Tensor
Get the value of the parameter.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor \| None` | Input tensor (required for 'mlp' mode). | `None` |

Returns:

| Type | Description |
|---|---|
| `Tensor` | The parameter value. |
Source code in src/anypinn/core/nn.py
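The scalar/MLP duality can be illustrated with a plain-Python sketch (toy stand-ins for `Parameter`, not the library's implementation): a scalar parameter ignores its input and returns its stored value, while a function-valued one evaluates a function of `x`, yet both expose the same call interface, which is what lets ODE/PDE callables stay agnostic.

```python
# Toy sketch of the Parameter call interface (class names are illustrative,
# not the anypinn implementation).
class ToyScalarParam:
    def __init__(self, init_value: float) -> None:
        self.value = init_value  # would be an nn.Parameter in practice

    def __call__(self, x=None) -> float:
        # Scalar mode: the input is ignored, the stored value is returned.
        return self.value


class ToyFunctionParam:
    def __init__(self, fn) -> None:
        self.fn = fn  # would be a small MLP in practice

    def __call__(self, x) -> float:
        # MLP mode: the value depends on the input coordinate.
        return self.fn(x)


# Both satisfy the same ArgsRegistry-style interface:
args = {
    "beta": ToyScalarParam(0.3),
    "gamma": ToyFunctionParam(lambda t: 0.1 * t),
}
print(args["beta"](1.0))   # 0.3, regardless of input
print(args["gamma"](2.0))  # depends on input
```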
Problem
Bases: Module
Aggregates constraints into a total training loss.
Manages fields (neural networks), learnable parameters, and the loss
criterion. Call training_loss() during each training step and
predict() for inference.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `constraints` | `list[Constraint]` | List of constraints to enforce. | required |
| `criterion` | `Module` | Loss function module. | required |
| `fields` | `FieldsRegistry` | Registry of named neural fields. | required |
| `params` | `ParamsRegistry` | Registry of named learnable parameters. | required |
Example

```python
>>> problem = Problem(
...     constraints=[residual_constraint, ic_constraint],
...     criterion=nn.MSELoss(),
...     fields={"u": field},
...     params={"alpha": Parameter(ScalarConfig(init_value=0.01))},
... )
```
Source code in src/anypinn/core/problem.py
constraints = constraints
instance-attribute
criterion = criterion
instance-attribute
fields = fields
instance-attribute
params = params
instance-attribute
__init__(constraints: list[Constraint], criterion: nn.Module, fields: FieldsRegistry, params: ParamsRegistry)
Source code in src/anypinn/core/problem.py
inject_context(context: InferredContext) -> None
Inject the context into the problem.
This should be called after data is loaded but before training starts.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `context` | `InferredContext` | The context to inject. | required |
Source code in src/anypinn/core/problem.py
predict(batch: DataBatch) -> tuple[DataBatch, dict[str, Tensor]]
Generate predictions for a given batch of data. Returns unscaled predictions in original domain.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch` | `DataBatch` | Batch of input coordinates. | required |

Returns:

| Type | Description |
|---|---|
| `tuple[DataBatch, dict[str, Tensor]]` | Tuple of (original_batch, predictions_dict). |
Source code in src/anypinn/core/problem.py
training_loss(batch: TrainingBatch, log: LogFn | None = None) -> Tensor
Calculate the total loss from all constraints.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch` | `TrainingBatch` | Current batch. | required |
| `log` | `LogFn \| None` | Optional logging function. | `None` |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Sum of losses from all constraints. |
Source code in src/anypinn/core/problem.py
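The contract of `training_loss()` (sum each constraint's loss, optionally logging per-constraint values) can be sketched in plain Python. This is a simplified stand-in using floats instead of tensors, with hypothetical `loss()` and `name` members on the constraints:

```python
def training_loss_sketch(constraints, batch, log=None):
    """Sum per-constraint losses; mirrors Problem.training_loss in spirit."""
    total = 0.0
    for c in constraints:
        loss = c.loss(batch)              # each constraint scores the batch
        if log is not None:
            log(f"loss/{c.name}", loss)   # optional per-constraint logging
        total += loss
    return total


class FakeConstraint:
    def __init__(self, name, value):
        self.name, self._value = name, value

    def loss(self, batch):
        return self._value


logged = {}
total = training_loss_sketch(
    [FakeConstraint("residual", 0.5), FakeConstraint("ic", 0.25)],
    batch=None,
    log=lambda k, v: logged.update({k: v}),
)
print(total)   # 0.75
print(logged)  # {'loss/residual': 0.5, 'loss/ic': 0.25}
```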
true_values(x: Tensor) -> dict[str, Tensor] | None
Get the true values for a given x coordinates. Returns None if no validation source is configured.
Source code in src/anypinn/core/problem.py
RandomFourierFeatures
Bases: Module
Random Fourier Features (Rahimi & Recht, 2007) for RBF kernel approximation.
Draws a fixed random matrix \(\mathbf{B} \sim \mathcal{N}(0, \sigma^2)\) of shape \((d_{\text{in}},\, m)\) and maps \(\mathbf{x} \in \mathbb{R}^{n \times d_{\text{in}}}\) to:

\[
\phi(\mathbf{x}) = \big[\cos(\mathbf{x}\mathbf{B}),\; \sin(\mathbf{x}\mathbf{B})\big] \in \mathbb{R}^{n \times 2m}
\]

\(\mathbf{B}\) is registered as a buffer and moves with the module across devices.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `in_dim` | `int` | Spatial dimension \(d_{\text{in}}\) of the input. | required |
| `num_features` | `int` | Number of random features \(m\) (output dimension \(= 2m\)). | `256` |
| `scale` | `float` | Standard deviation \(\sigma\) of the frequency distribution. Higher values capture higher-frequency variation. | `1.0` |
| `seed` | `int \| None` | Optional seed for reproducible frequency sampling. | `None` |
Source code in src/anypinn/lib/encodings.py
num_features = num_features
instance-attribute
out_dim: int
property
Output dimension (always 2 * num_features).
__init__(in_dim: int, num_features: int = 256, scale: float = 1.0, seed: int | None = None) -> None
Source code in src/anypinn/lib/encodings.py
forward(x: Tensor) -> Tensor
Project input through random features and apply cos/sin.
Source code in src/anypinn/lib/encodings.py
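A minimal pure-Python sketch of the mapping (illustrative only; the library works on torch tensors with a registered buffer and a batched matmul). With `in_dim` input coordinates and `m` random frequencies, each point maps to `2m` features: the cosine of each projection followed by its sine.

```python
import math
import random

def make_rff(in_dim, num_features, scale=1.0, seed=None):
    """Sketch of Random Fourier Features encoding (not the library code)."""
    rng = random.Random(seed)
    # B ~ N(0, scale^2), shape (in_dim, num_features); fixed after construction.
    B = [[rng.gauss(0.0, scale) for _ in range(num_features)]
         for _ in range(in_dim)]

    def encode(x):
        # x: list of length in_dim -> feature list of length 2 * num_features
        proj = [sum(x[i] * B[i][j] for i in range(in_dim))
                for j in range(num_features)]
        return [math.cos(p) for p in proj] + [math.sin(p) for p in proj]

    return encode

encode = make_rff(in_dim=2, num_features=4, scale=1.0, seed=0)
features = encode([0.5, -1.0])
print(len(features))  # 8  (out_dim == 2 * num_features)
```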
RandomSampler
Uniform random sampler inside domain bounds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `seed` | `int \| None` | Optional seed for reproducible sampling. | `None` |
Source code in src/anypinn/core/samplers.py
__init__(seed: int | None = None) -> None
sample(n: int, domain: Domain) -> Tensor
Return n uniformly random points within domain.
Source code in src/anypinn/core/samplers.py
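The behavior is straightforward to sketch: draw each coordinate independently and uniformly within its axis bounds. Below, `Domain` is reduced to a list of `(lo, hi)` bounds; this is an illustrative stand-in, not the library's sampler.

```python
import random

def sample_uniform(n, bounds, seed=None):
    """n points drawn uniformly in the box given by bounds = [(lo, hi), ...]."""
    rng = random.Random(seed)
    return [[rng.uniform(lo, hi) for (lo, hi) in bounds] for _ in range(n)]

pts = sample_uniform(100, bounds=[(0.0, 1.0), (-1.0, 1.0)], seed=42)
print(len(pts), len(pts[0]))  # 100 2
```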
ReduceLROnPlateauConfig
dataclass
Configuration for Learning Rate Scheduler (ReduceLROnPlateau).
Attributes:

| Name | Type | Description |
|---|---|---|
| `mode` | `Literal['min', 'max']` | Whether the monitored quantity should be minimized (`'min'`) or maximized (`'max'`). |
| `factor` | `float` | Factor by which the learning rate is reduced (must be in (0, 1)). |
| `patience` | `int` | Number of epochs with no improvement before the LR is reduced. |
| `threshold` | `float` | Minimum change to qualify as an improvement. |
| `min_lr` | `float` | Lower bound on the learning rate. |
Source code in src/anypinn/core/config.py
factor: float
instance-attribute
min_lr: float
instance-attribute
mode: Literal['min', 'max']
instance-attribute
patience: int
instance-attribute
threshold: float
instance-attribute
__init__(*, mode: Literal['min', 'max'], factor: float, patience: int, threshold: float, min_lr: float) -> None
__post_init__() -> None
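The scheduling rule these fields configure is easy to state: if the monitored metric fails to improve by more than `threshold` for more than `patience` consecutive epochs, multiply the LR by `factor`, never going below `min_lr`. A pure-Python sketch (simplified relative to PyTorch's `ReduceLROnPlateau`, which additionally supports relative thresholds and cooldown):

```python
class PlateauSketch:
    """Illustrative plateau scheduler, not PyTorch's implementation."""

    def __init__(self, lr, factor, patience, threshold, min_lr, mode="min"):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.threshold, self.min_lr, self.mode = threshold, min_lr, mode
        self.best = float("inf") if mode == "min" else float("-inf")
        self.bad_epochs = 0

    def step(self, metric):
        if self.mode == "min":
            improved = metric < self.best - self.threshold
        else:
            improved = metric > self.best + self.threshold
        if improved:
            self.best, self.bad_epochs = metric, 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                # Plateau detected: shrink the LR, clamped at min_lr.
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.bad_epochs = 0
        return self.lr

sched = PlateauSketch(lr=1e-3, factor=0.5, patience=2, threshold=1e-4, min_lr=1e-6)
for loss in [1.0, 1.0, 1.0, 1.0]:  # no improvement for four epochs
    lr = sched.step(loss)
print(lr)  # 0.0005: reduced once patience was exceeded
```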
ResidualScorer
Bases: Protocol
Protocol for scoring candidate collocation points by PDE residual magnitude.
Source code in src/anypinn/core/samplers.py
residual_score(x: Tensor) -> Tensor
Return per-point non-negative residual score of shape (n,).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Candidate collocation points. | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Per-point residual scores of shape `(n,)`. |
Source code in src/anypinn/core/samplers.py
SMMAStoppingConfig
dataclass
Configuration for Smoothed Moving Average (SMMA) Stopping callback.
Training stops when the relative improvement of the SMMA over the
lookback window falls below threshold.
Attributes:

| Name | Type | Description |
|---|---|---|
| `window` | `int` | Number of epochs used to compute the smoothed moving average. |
| `threshold` | `float` | Minimum relative improvement required to continue training. |
| `lookback` | `int` | Number of SMMA values to compare for improvement detection. |
Source code in src/anypinn/core/config.py
lookback: int
instance-attribute
threshold: float
instance-attribute
window: int
instance-attribute
__init__(*, window: int, threshold: float, lookback: int) -> None
__post_init__() -> None
Source code in src/anypinn/core/config.py
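The stopping rule can be sketched as follows. The exact recurrence and comparison used by the library are not shown here; this sketch assumes the canonical (Wilder-style) SMMA update and a relative comparison against the SMMA value `lookback` steps back:

```python
def smma_should_stop(losses, window, threshold, lookback):
    """Sketch of SMMA-based stopping; formulas are illustrative assumptions."""
    if len(losses) < window + lookback:
        return False  # not enough history yet
    # SMMA: seeded with a plain mean over the first window, then smoothed.
    smma = [sum(losses[:window]) / window]
    for x in losses[window:]:
        smma.append((smma[-1] * (window - 1) + x) / window)
    if len(smma) <= lookback:
        return False
    old, new = smma[-1 - lookback], smma[-1]
    rel_improvement = (old - new) / abs(old)  # fraction by which the SMMA dropped
    return rel_improvement < threshold

flat = [1.0] * 50                             # loss has plateaued
falling = [1.0 / (i + 1) for i in range(50)]  # loss still improving
print(smma_should_stop(flat, window=10, threshold=1e-3, lookback=5))     # True
print(smma_should_stop(falling, window=10, threshold=1e-3, lookback=5))  # False
```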
ScalarConfig
dataclass
Configuration for a scalar parameter.
Attributes:

| Name | Type | Description |
|---|---|---|
| `init_value` | `float` | Initial value for the parameter. |
Source code in src/anypinn/core/config.py
init_value: float
instance-attribute
__init__(*, init_value: float) -> None
TrainingDataConfig
dataclass
Configuration for data loading and batching.
Attributes:

| Name | Type | Description |
|---|---|---|
| `batch_size` | `int` | Number of points per training batch. |
| `data_ratio` | `int \| float` | Ratio of data to collocation points per batch. |
| `collocations` | `int` | Total number of collocation points to generate. |
| `collocation_sampler` | `CollocationStrategies` | Sampling strategy for collocation points. |
| `collocation_seed` | `int \| None` | Optional seed for reproducible collocation sampling. |
Source code in src/anypinn/core/config.py
batch_size: int
instance-attribute
collocation_sampler: CollocationStrategies = 'random'
class-attribute
instance-attribute
collocation_seed: int | None = None
class-attribute
instance-attribute
collocations: int
instance-attribute
data_ratio: int | float
instance-attribute
__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None) -> None
__post_init__() -> None
Source code in src/anypinn/core/config.py
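As an arithmetic sketch of how `data_ratio` could translate into batch composition (the split formula below is an assumption for illustration, taking `data_ratio` as the number of data points per collocation point):

```python
def batch_split(batch_size, data_ratio):
    """Hypothetical split: data_ratio = (# data points) / (# collocation points)."""
    n_data = round(batch_size * data_ratio / (1 + data_ratio))
    n_colloc = batch_size - n_data
    return n_data, n_colloc

print(batch_split(100, 0.25))  # (20, 80): one data point per four collocation points
print(batch_split(100, 1.0))   # (50, 50): equal mix
```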
UniformSampler
Cartesian grid sampler that distributes points evenly across the domain.
For d-dimensional domains, places ceil(n^(1/d)) points per axis then
takes the first n points of the resulting grid.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `seed` | `int \| None` | Optional seed (unused; the grid is deterministic). | `None` |
Source code in src/anypinn/core/samplers.py
__init__(seed: int | None = None) -> None
sample(n: int, domain: Domain) -> Tensor
Return n points on a uniform Cartesian grid over domain.
Source code in src/anypinn/core/samplers.py
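The documented rule (ceil(n^(1/d)) points per axis, then keep the first n grid points) can be sketched in plain Python. Endpoint-inclusive spacing is an assumption here, and `Domain` is again reduced to a list of `(lo, hi)` bounds:

```python
import itertools
import math

def uniform_grid(n, bounds):
    """Sketch: ceil(n^(1/d)) evenly spaced points per axis, first n kept."""
    d = len(bounds)
    per_axis = math.ceil(n ** (1.0 / d))
    axes = [
        [lo + (hi - lo) * i / (per_axis - 1) if per_axis > 1 else lo
         for i in range(per_axis)]
        for (lo, hi) in bounds
    ]
    grid = itertools.product(*axes)                      # full Cartesian grid
    return [list(p) for p in itertools.islice(grid, n)]  # truncate to n points

# 2D with n=10 -> ceil(sqrt(10)) = 4 points per axis, 16-point grid cut to 10.
pts = uniform_grid(10, bounds=[(0.0, 1.0), (0.0, 2.0)])
print(len(pts))  # 10
```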
build_criterion(name: Criteria) -> nn.Module
Return the loss-criterion module for the given name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `Criteria` | One of the supported `Criteria` names (e.g. `'mse'`). | required |

Returns:

| Type | Description |
|---|---|
| `Module` | The corresponding PyTorch loss module. |
Source code in src/anypinn/core/nn.py
build_sampler(strategy: CollocationStrategies, seed: int | None = None, scorer: ResidualScorer | None = None) -> CollocationSampler
Construct a collocation sampler from a strategy name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `strategy` | `CollocationStrategies` | One of the supported strategy names (e.g. `'random'`, `'uniform'`, `'latin_hypercube'`). | required |
| `seed` | `int \| None` | Optional seed for reproducible sampling. | `None` |
| `scorer` | `ResidualScorer \| None` | Required when the chosen strategy scores candidates by residual magnitude. | `None` |

Returns:

| Type | Description |
|---|---|
| `CollocationSampler` | A sampler instance satisfying the `CollocationSampler` protocol. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the chosen strategy requires a `scorer` but none is provided. |

Example

```python
>>> sampler = build_sampler("latin_hypercube", seed=42)
>>> domain = Domain(bounds=[(0, 1)])
>>> points = sampler.sample(100, domain)
>>> points.shape
torch.Size([100, 1])
```
Source code in src/anypinn/core/samplers.py
get_activation(name: Activations) -> nn.Module
Get the activation function module by name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `Activations` | The name of the activation function. | required |

Returns:

| Type | Description |
|---|---|
| `Module` | The PyTorch activation module. |
Source code in src/anypinn/core/nn.py
resolve_validation(registry: ValidationRegistry, df_path: Path | None = None) -> ResolvedValidation
Resolve a ValidationRegistry by converting ColumnRef entries to callables.
Pure function entries are passed through unchanged. ColumnRef entries are resolved using the provided data file path.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `registry` | `ValidationRegistry` | The validation registry to resolve. | required |
| `df_path` | `Path \| None` | Path to the CSV file for ColumnRef resolution. | `None` |

Returns:

| Type | Description |
|---|---|
| `ResolvedValidation` | A dictionary mapping parameter names to callable validation functions. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If a ColumnRef cannot be resolved (missing column or no df_path). |
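The resolution logic can be sketched as a dispatch over entry types. This is a simplified stand-in: `ToyColumnRef` is an illustrative class (not the library's `ColumnRef`), the data table is a plain dict instead of a CSV file, and real resolution would interpolate the column over the coordinate axis rather than index it:

```python
from dataclasses import dataclass

@dataclass
class ToyColumnRef:
    column: str  # illustrative stand-in for the library's ColumnRef

def resolve_validation_sketch(registry, table=None):
    """Callables pass through unchanged; ColumnRef entries become callables."""
    resolved = {}
    for name, entry in registry.items():
        if callable(entry):
            resolved[name] = entry  # pure function: passed through unchanged
        elif isinstance(entry, ToyColumnRef):
            if table is None or entry.column not in table:
                raise ValueError(f"cannot resolve column {entry.column!r}")
            col = table[entry.column]
            # Toy lookup by index; real code would interpolate over coordinates.
            resolved[name] = lambda i, col=col: col[i]
        else:
            raise ValueError(f"unsupported entry for {name!r}")
    return resolved

table = {"beta_true": [0.3, 0.31, 0.29]}
resolved = resolve_validation_sketch(
    {"alpha": lambda t: 0.01, "beta": ToyColumnRef("beta_true")},
    table,
)
print(resolved["alpha"](0))  # 0.01
print(resolved["beta"](1))   # 0.31
```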