Core API
anypinn.core provides the foundational building blocks for defining and
solving physics-informed neural network problems.
Neural Network Primitives
Domain
N-dimensional rectangular domain.
Attributes:

| Name | Type | Description |
|---|---|---|
| bounds | list[tuple[float, float]] | Per-dimension (min, max) pairs. |
| dx | list[float] \| None | Per-dimension step size. |
bounds: list[tuple[float, float]]
instance-attribute
dx: list[float] | None = None
class-attribute
instance-attribute
ndim: int
property
Number of spatial dimensions.
x0: float
property
Lower bound of the first dimension (convenience for 1-D / time-axis access).
x1: float
property
Upper bound of the first dimension.
__init__(bounds: list[tuple[float, float]], dx: list[float] | None = None) -> None
__repr__() -> str
from_x(x: Tensor) -> Domain
classmethod
Infer domain bounds and step sizes from a coordinate tensor of shape (N, d).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | Coordinate tensor of shape (N, d). | required |

Returns:

| Type | Description |
|---|---|
| Domain | Domain with bounds and dx inferred from the data. |
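The inference performed by from_x can be sketched in plain Python (function name and list-based input are illustrative, not the library's API; a uniformly spaced grid is assumed):

```python
# Sketch of Domain.from_x-style inference for coordinates of shape
# (N, d), here as plain Python lists (one inner list per point).
def infer_domain(x):
    ndim = len(x[0])
    bounds, dx = [], []
    for k in range(ndim):
        col = sorted(p[k] for p in x)
        bounds.append((col[0], col[-1]))          # per-dimension (min, max)
        # Step size from the smallest positive gap (uniform grids only).
        gaps = [b - a for a, b in zip(col, col[1:]) if b > a]
        dx.append(min(gaps) if gaps else None)
    return bounds, dx

bounds, dx = infer_domain([[0.0], [0.5], [1.0]])
print(bounds, dx)  # [(0.0, 1.0)] [0.5]
```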
Field
Bases: Module
A neural field mapping coordinates to a vector of state variables.
For an ODE this maps t -> [S, I, R]; for a PDE it maps
(x, t) -> u(x, t).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config | MLPConfig | Configuration for the MLP backing this field. | required |
Example
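A minimal standalone sketch of what a Field computes: the plain PyTorch MLP below stands in for a Field backed by an MLPConfig (the construction shown is illustrative, not the library's API):

```python
import torch
from torch import nn

# Illustrative stand-in for Field(MLPConfig(...)): an MLP that maps a
# time coordinate t of shape (N, 1) to three state variables [S, I, R].
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 3))

t = torch.linspace(0.0, 1.0, 50).unsqueeze(-1)  # shape (50, 1)
sir = net(t)                                    # shape (50, 3)
print(sir.shape)
```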
encoder: nn.Module | None = encode
instance-attribute
net = nn.Sequential(*layers)
instance-attribute
__init__(config: MLPConfig)
forward(x: Tensor) -> Tensor
Forward pass of the field.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | Input coordinates (e.g. time, space). | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The values of the field at the input coordinates. |
Argument
A fixed (non-learnable) argument passed to an ODE/PDE function.
Wraps a float constant or a callable and provides a uniform
__call__ interface. See also Parameter for the learnable
variant.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| value | float \| Callable[[Tensor], Tensor] | The value (float) or function (callable). | required |
Example
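The uniform `__call__` interface can be sketched with a minimal stand-in class (plain floats in place of tensors; the class and names below are illustrative, not the library's API):

```python
# Sketch of the Argument idea: wrap either a constant or a callable
# behind a single __call__ interface.
class Arg:
    def __init__(self, value):
        self.value = value

    def __call__(self, x):
        # Callables are evaluated at x; constants are returned as-is.
        return self.value(x) if callable(self.value) else self.value

gamma = Arg(0.1)                 # fixed constant
beta = Arg(lambda t: 0.3 * t)    # time-varying coefficient

print(gamma(2.0), beta(2.0))  # 0.1 0.6
```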
__call__(x: Tensor) -> Tensor
Evaluate the argument.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | Input tensor (context). | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The value of the argument, broadcast if necessary. |
__init__(value: float | Callable[[Tensor], Tensor])
__repr__() -> str
Parameter
Bases: Module, Argument
A learnable parameter that participates in gradient optimization.
Supports scalar parameters (a single trainable value) or
function-valued parameters (e.g. beta(t)) backed by a small MLP.
Because Parameter is a subclass of Argument, it can be
used anywhere an Argument is expected.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config | ScalarConfig \| MLPConfig | Configuration for the parameter (ScalarConfig or MLPConfig). | required |
Example
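The scalar mode reduces to a single trainable tensor, as the `value` attribute below shows. A minimal sketch of the idea using raw torch.nn.Parameter (not the library's Parameter class):

```python
import torch

# Illustrative stand-in for Parameter(ScalarConfig(init_value=0.5)):
# one trainable value that participates in autograd.
beta = torch.nn.Parameter(torch.tensor(0.5))

loss = (beta - 1.0) ** 2   # toy objective pulling beta toward 1.0
loss.backward()
print(beta.grad)           # d/d_beta (beta - 1)^2 = 2 * (0.5 - 1) = -1.0
```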
config = config
instance-attribute
mode: Literal['scalar', 'mlp']
property
Mode of the parameter: 'scalar' or 'mlp'.
net = nn.Sequential(*layers)
instance-attribute
value = nn.Parameter(torch.tensor(float(config.init_value), dtype=(torch.float32)))
instance-attribute
__init__(config: ScalarConfig | MLPConfig)
forward(x: Tensor | None = None) -> Tensor
Get the value of the parameter.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor \| None | Input tensor (required for 'mlp' mode). | None |

Returns:

| Type | Description |
|---|---|
| Tensor | The parameter value. |
Problem Abstractions
Constraint
Bases: ABC
Abstract base class for a constraint (loss term) in the PINN.
Subclass this and implement loss() to define custom physics or
data-fitting terms. The Problem sums all constraint losses during
training.
Example
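The subclassing pattern can be sketched as follows, with plain floats standing in for tensors and an MSE-style criterion (all names below are illustrative, not the library's API):

```python
from abc import ABC, abstractmethod

# Sketch of the Constraint pattern: subclass and implement loss().
class Constraint(ABC):
    @abstractmethod
    def loss(self, batch, criterion):
        ...

class DataConstraint(Constraint):
    """Penalize mismatch between predictions and observations."""
    def __init__(self, predict):
        self.predict = predict

    def loss(self, batch, criterion):
        xs, ys = batch
        return criterion([self.predict(x) for x in xs], ys)

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

c = DataConstraint(predict=lambda x: 2 * x)
print(c.loss(([1.0, 2.0], [2.0, 5.0]), mse))  # ((2-2)^2 + (4-5)^2) / 2 = 0.5
```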
inject_context(context: InferredContext) -> None
Inject the context into the constraint. This can be used by the constraint to access the data used to compute the loss.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| context | InferredContext | The context to inject. | required |
loss(batch: TrainingBatch, criterion: nn.Module, log: LogFn | None = None) -> Tensor
abstractmethod
Calculate the loss for this constraint.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| batch | TrainingBatch | The current batch of data/collocation points. | required |
| criterion | Module | The loss function (e.g. MSE). | required |
| log | LogFn \| None | Optional logging function. | None |

Returns:

| Type | Description |
|---|---|
| Tensor | The calculated loss tensor. |
Problem
Bases: Module
Aggregates constraints into a total training loss.
Manages fields (neural networks), learnable parameters, and the loss
criterion. Call training_loss() during each training step and
predict() for inference.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| constraints | list[Constraint] | List of constraints to enforce. | required |
| criterion | Module | Loss function module. | required |
| fields | FieldsRegistry | Registry of named neural fields. | required |
| params | ParamsRegistry | Registry of named learnable parameters. | required |
Example
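The aggregation performed by training_loss is a plain sum over constraint losses, which can be sketched as (stand-in callables in place of Constraint objects):

```python
# Sketch of Problem.training_loss: the total loss is the sum of all
# constraint losses. Constraints here are stand-in callables on a batch.
constraints = [lambda batch: 0.25, lambda batch: 1.5]

def training_loss(batch):
    return sum(c(batch) for c in constraints)

print(training_loss(None))  # 1.75
```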
constraints = constraints
instance-attribute
criterion = criterion
instance-attribute
fields = fields
instance-attribute
params = params
instance-attribute
__init__(constraints: list[Constraint], criterion: nn.Module, fields: FieldsRegistry, params: ParamsRegistry)
inject_context(context: InferredContext) -> None
Inject the context into the problem.
This should be called after data is loaded but before training starts.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| context | InferredContext | The context to inject. | required |
predict(batch: DataBatch) -> tuple[DataBatch, dict[str, Tensor]]
Generate predictions for a given batch of data. Returns unscaled predictions in original domain.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| batch | DataBatch | Batch of input coordinates. | required |

Returns:

| Type | Description |
|---|---|
| tuple[DataBatch, dict[str, Tensor]] | Tuple of (original_batch, predictions_dict). |
training_loss(batch: TrainingBatch, log: LogFn | None = None) -> Tensor
Calculate the total loss from all constraints.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| batch | TrainingBatch | Current batch. | required |
| log | LogFn \| None | Optional logging function. | None |

Returns:

| Type | Description |
|---|---|
| Tensor | Sum of losses from all constraints. |
true_values(x: Tensor) -> dict[str, Tensor] | None
Get the true values for given x coordinates. Returns None if no validation source is configured.
Configuration
MLPConfig
Configuration for a Multi-Layer Perceptron (MLP).
Attributes:

| Name | Type | Description |
|---|---|---|
| in_dim | int | Dimension of input layer. |
| out_dim | int | Dimension of output layer. |
| hidden_layers | list[int] | List of dimensions for hidden layers. |
| activation | Activations | Activation function to use between layers. |
| output_activation | Activations \| None | Optional activation function for the output layer. |
| encode | Callable[[Tensor], Tensor] \| None | Optional function to encode inputs before passing to the MLP. |
activation: Activations
instance-attribute
encode: Callable[[Tensor], Tensor] | None = None
class-attribute
instance-attribute
hidden_layers: list[int]
instance-attribute
in_dim: int
instance-attribute
out_dim: int
instance-attribute
output_activation: Activations | None = None
class-attribute
instance-attribute
__init__(*, in_dim: int, out_dim: int, hidden_layers: list[int], activation: Activations, output_activation: Activations | None = None, encode: Callable[[Tensor], Tensor] | None = None) -> None
ScalarConfig
Configuration for a scalar parameter.
Attributes:

| Name | Type | Description |
|---|---|---|
| init_value | float | Initial value for the parameter. |
init_value: float
instance-attribute
__init__(*, init_value: float) -> None
PINNHyperparameters
Aggregated hyperparameters for the PINN model.
Attributes:

| Name | Type | Description |
|---|---|---|
| lr | float | Base learning rate (used as a fallback when the optimizer config does not supply one). |
| training_data | IngestionConfig \| GenerationConfig | Data source configuration: either IngestionConfig (load from file) or GenerationConfig (synthesize). |
| fields_config | MLPConfig | MLP architecture for the neural field(s). |
| params_config | MLPConfig \| ScalarConfig | Configuration for learnable parameters (scalar or MLP-backed). |
| max_epochs | int \| None | Maximum number of training epochs. |
| gradient_clip_val | float \| None | Optional gradient clipping value. |
| criterion | Criteria | Loss function name (defaults to 'mse'). |
| optimizer | AdamConfig \| LBFGSConfig \| None | Optimizer configuration. If None, a default is used. |
| scheduler | ReduceLROnPlateauConfig \| CosineAnnealingConfig \| None | Learning rate scheduler configuration. |
| early_stopping | EarlyStoppingConfig \| None | Optional early stopping configuration (patience-based). |
| smma_stopping | SMMAStoppingConfig \| None | Optional SMMA stopping configuration (improvement-based). |
criterion: Criteria = 'mse'
class-attribute
instance-attribute
early_stopping: EarlyStoppingConfig | None = None
class-attribute
instance-attribute
fields_config: MLPConfig
instance-attribute
gradient_clip_val: float | None = None
class-attribute
instance-attribute
lr: float
instance-attribute
max_epochs: int | None = None
class-attribute
instance-attribute
optimizer: AdamConfig | LBFGSConfig | None = None
class-attribute
instance-attribute
params_config: MLPConfig | ScalarConfig
instance-attribute
scheduler: ReduceLROnPlateauConfig | CosineAnnealingConfig | None = None
class-attribute
instance-attribute
smma_stopping: SMMAStoppingConfig | None = None
class-attribute
instance-attribute
training_data: IngestionConfig | GenerationConfig
instance-attribute
__init__(*, lr: float, training_data: IngestionConfig | GenerationConfig, fields_config: MLPConfig, params_config: MLPConfig | ScalarConfig, max_epochs: int | None = None, gradient_clip_val: float | None = None, criterion: Criteria = 'mse', optimizer: AdamConfig | LBFGSConfig | None = None, scheduler: ReduceLROnPlateauConfig | CosineAnnealingConfig | None = None, early_stopping: EarlyStoppingConfig | None = None, smma_stopping: SMMAStoppingConfig | None = None) -> None
__post_init__() -> None
TrainingDataConfig
Configuration for data loading and batching.
Attributes:

| Name | Type | Description |
|---|---|---|
| batch_size | int | Number of points per training batch. |
| data_ratio | int \| float | Ratio of data to collocation points per batch. |
| collocations | int | Total number of collocation points to generate. |
| collocation_sampler | CollocationStrategies | Sampling strategy for collocation points. |
| collocation_seed | int \| None | Optional seed for reproducible collocation sampling. |
batch_size: int
instance-attribute
collocation_sampler: CollocationStrategies = 'random'
class-attribute
instance-attribute
collocation_seed: int | None = None
class-attribute
instance-attribute
collocations: int
instance-attribute
data_ratio: int | float
instance-attribute
__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None) -> None
__post_init__() -> None
GenerationConfig
Bases: TrainingDataConfig
Configuration for generating synthetic training data.
Used in forward problems where the ground-truth ODE/PDE solution is computed from known parameters and optionally corrupted with noise.
Attributes:

| Name | Type | Description |
|---|---|---|
| x | Tensor | Coordinate tensor to evaluate the ODE/PDE at. |
| noise_level | float | Standard deviation of Gaussian noise added to the generated observations (0.0 for clean data). |
| args_to_train | ArgsRegistry | Arguments used by the data-generation ODE/PDE callable to produce the synthetic solution. |
args_to_train: ArgsRegistry
instance-attribute
noise_level: float
instance-attribute
x: Tensor
instance-attribute
__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None, x: Tensor, noise_level: float, args_to_train: ArgsRegistry) -> None
IngestionConfig
Bases: TrainingDataConfig
Configuration for loading training data from a CSV file.
Attributes:

| Name | Type | Description |
|---|---|---|
| df_path | Path | Path to the CSV file. |
| x_transform | Callable[[Any], Any] \| None | Optional transform applied to the x column values after loading (e.g. unit conversion). |
| x_column | str \| None | Name of the column to use as x coordinates (optional). |
| y_columns | list[str] | List of column names to use as y observations. |
df_path: Path
instance-attribute
x_column: str | None = None
class-attribute
instance-attribute
x_transform: Callable[[Any], Any] | None = None
class-attribute
instance-attribute
y_columns: list[str]
instance-attribute
__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None, df_path: Path, x_transform: Callable[[Any], Any] | None = None, x_column: str | None = None, y_columns: list[str]) -> None
Optimizer Configs
AdamConfig
Configuration for the Adam optimizer.
Attributes:

| Name | Type | Description |
|---|---|---|
| lr | float | Learning rate (must be positive). |
| betas | tuple[float, float] | Coefficients for computing running averages of the gradient and its square. Both must be in (0, 1). |
| weight_decay | float | L2 penalty coefficient (non-negative). |
betas: tuple[float, float] = (0.9, 0.999)
class-attribute
instance-attribute
lr: float = 0.001
class-attribute
instance-attribute
weight_decay: float = 0.0
class-attribute
instance-attribute
__init__(*, lr: float = 0.001, betas: tuple[float, float] = (0.9, 0.999), weight_decay: float = 0.0) -> None
__post_init__() -> None
LBFGSConfig
Configuration for the L-BFGS optimizer.
Attributes:

| Name | Type | Description |
|---|---|---|
| lr | float | Learning rate (must be positive). |
| max_iter | int | Maximum number of iterations per optimization step. |
| max_eval | int \| None | Maximum number of function evaluations per step (defaults to max_iter * 1.25, as in PyTorch). |
| history_size | int | Number of past updates stored for the approximation of the inverse Hessian. |
| line_search_fn | str \| None | Line search function ('strong_wolfe' or None). |
history_size: int = 100
class-attribute
instance-attribute
line_search_fn: str | None = 'strong_wolfe'
class-attribute
instance-attribute
lr: float = 1.0
class-attribute
instance-attribute
max_eval: int | None = None
class-attribute
instance-attribute
max_iter: int = 20
class-attribute
instance-attribute
__init__(*, lr: float = 1.0, max_iter: int = 20, max_eval: int | None = None, history_size: int = 100, line_search_fn: str | None = 'strong_wolfe') -> None
__post_init__() -> None
Scheduler Configs
ReduceLROnPlateauConfig
Configuration for the ReduceLROnPlateau learning rate scheduler.
Attributes:

| Name | Type | Description |
|---|---|---|
| mode | Literal['min', 'max'] | Whether the monitored metric should be minimized or maximized. |
| factor | float | Factor by which the learning rate is reduced (must be in (0, 1)). |
| patience | int | Number of epochs with no improvement before the LR is reduced. |
| threshold | float | Minimum change to qualify as an improvement. |
| min_lr | float | Lower bound on the learning rate. |
factor: float
instance-attribute
min_lr: float
instance-attribute
mode: Literal['min', 'max']
instance-attribute
patience: int
instance-attribute
threshold: float
instance-attribute
__init__(*, mode: Literal['min', 'max'], factor: float, patience: int, threshold: float, min_lr: float) -> None
__post_init__() -> None
CosineAnnealingConfig
Configuration for the Cosine Annealing LR scheduler.
Attributes:

| Name | Type | Description |
|---|---|---|
| T_max | int | Maximum number of iterations (typically set to the total number of training epochs). |
| eta_min | float | Minimum learning rate at the end of the schedule. |
T_max: int
instance-attribute
eta_min: float = 0.0
class-attribute
instance-attribute
__init__(*, T_max: int, eta_min: float = 0.0) -> None
__post_init__() -> None
Stopping Configs
EarlyStoppingConfig
Configuration for the Early Stopping callback.
Attributes:

| Name | Type | Description |
|---|---|---|
| patience | int | Number of epochs with no improvement before stopping. |
| mode | Literal['min', 'max'] | Whether the monitored metric should be minimized or maximized. |
mode: Literal['min', 'max']
instance-attribute
patience: int
instance-attribute
__init__(*, patience: int, mode: Literal['min', 'max']) -> None
__post_init__() -> None
SMMAStoppingConfig
Configuration for the Smoothed Moving Average (SMMA) Stopping callback.
Training stops when the relative improvement of the SMMA over the
lookback window falls below threshold.
Attributes:

| Name | Type | Description |
|---|---|---|
| window | int | Number of epochs used to compute the smoothed moving average. |
| threshold | float | Minimum relative improvement required to continue training. |
| lookback | int | Number of SMMA values to compare for improvement detection. |
lookback: int
instance-attribute
threshold: float
instance-attribute
window: int
instance-attribute
__init__(*, window: int, threshold: float, lookback: int) -> None
__post_init__() -> None
Collocation Samplers
CollocationSampler
Bases: Protocol
Protocol for collocation point samplers.
Implementations must return a tensor of shape (n, domain.ndim) with all
points inside the domain bounds.
sample(n: int, domain: Domain) -> Tensor
Return n collocation points within domain.
Cartesian grid sampler that distributes points evenly across the domain.
For d-dimensional domains, places ceil(n^(1/d)) points per axis then
takes the first n points of the resulting grid.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| seed | int \| None | Optional seed (unused; the grid is deterministic). | None |
__init__(seed: int | None = None) -> None
sample(n: int, domain: Domain) -> Tensor
Return n points on a uniform Cartesian grid over domain.
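The grid construction described above can be sketched in plain Python (function name and list-based output are illustrative):

```python
import itertools
import math

# Sketch of the Cartesian grid sampler: ceil(n**(1/d)) points per axis,
# then keep the first n points of the full grid.
def grid_sample(n, bounds):
    d = len(bounds)
    per_axis = math.ceil(n ** (1.0 / d))
    axes = []
    for lo, hi in bounds:
        step = (hi - lo) / max(per_axis - 1, 1)
        axes.append([lo + i * step for i in range(per_axis)])
    return list(itertools.product(*axes))[:n]

pts = grid_sample(5, [(0.0, 1.0), (0.0, 2.0)])
print(len(pts))  # 5 points taken from a 3x3 grid
```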
Uniform random sampler inside domain bounds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| seed | int \| None | Optional seed for reproducible sampling. | None |
__init__(seed: int | None = None) -> None
sample(n: int, domain: Domain) -> Tensor
Return n uniformly random points within domain.
Latin Hypercube sampler (pure-PyTorch, no SciPy dependency).
Stratifies each dimension into n equal intervals and places one sample
per interval, then shuffles columns independently.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| seed | int \| None | Optional seed for reproducible sampling. | None |
__init__(seed: int | None = None) -> None
sample(n: int, domain: Domain) -> Tensor
Return n Latin Hypercube-sampled points within domain.
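The stratify-then-shuffle scheme can be sketched with the standard library (function name and list-based output are illustrative, not the library's implementation):

```python
import random

# Sketch of Latin Hypercube sampling: one point per stratum in each
# dimension, with an independent shuffle per dimension.
def lhs_sample(n, bounds, seed=None):
    rng = random.Random(seed)
    points = [[0.0] * len(bounds) for _ in range(n)]
    for k, (lo, hi) in enumerate(bounds):
        strata = list(range(n))
        rng.shuffle(strata)                 # independent column permutation
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n      # jitter within stratum s
            points[i][k] = lo + u * (hi - lo)
    return points

pts = lhs_sample(4, [(0.0, 1.0), (-1.0, 1.0)], seed=0)
print(pts)
```

Each dimension ends up with exactly one sample per stratum, which is the defining Latin Hypercube property.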
Log-uniform sampler for 1-D domains (reproduces SIR collocation behavior).
Samples uniformly in log1p space and maps back via expm1, producing
a distribution that is denser near the lower bound — useful for epidemic
models where early dynamics are most informative.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| seed | int \| None | Optional seed for reproducible sampling. | None |

Raises:

| Type | Description |
|---|---|
| ValueError | If the domain is not 1-D or its bounds are incompatible with log-space sampling. |
__init__(seed: int | None = None) -> None
sample(n: int, domain: Domain) -> Tensor
Return n log-uniformly spaced points within domain.
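The log1p/expm1 mapping can be sketched as follows (function name is illustrative); note how the mass concentrates near the lower bound:

```python
import math
import random

# Sketch of the log-uniform sampler: sample uniformly in log1p space,
# map back with expm1, so points cluster near the lower bound.
def log_uniform_sample(n, lo, hi, seed=None):
    rng = random.Random(seed)
    a, b = math.log1p(lo), math.log1p(hi)
    return [math.expm1(a + rng.random() * (b - a)) for _ in range(n)]

pts = log_uniform_sample(1000, 0.0, 100.0, seed=0)
# Roughly half the points land in the first tenth of the domain.
print(sum(p < 10.0 for p in pts))
```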
Residual-weighted adaptive collocation sampler.
Draws an oversample of candidate points, scores them using a
ResidualScorer, and retains the top-scoring subset. A configurable
exploration_ratio ensures a fraction of purely random points to prevent
mode collapse.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| scorer | ResidualScorer | Callable returning per-point residual scores. | required |
| oversample_factor | int | Multiplier on n for the size of the candidate pool. | 4 |
| exploration_ratio | float | Fraction of the budget reserved for random points. | 0.2 |
| seed | int \| None | Optional seed for reproducible sampling. | None |
__init__(scorer: ResidualScorer, oversample_factor: int = 4, exploration_ratio: float = 0.2, seed: int | None = None) -> None
sample(n: int, domain: Domain) -> Tensor
Return n residual-weighted points within domain.
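The oversample/score/select loop can be sketched in 1-D plain Python (names and the toy scorer are illustrative, not the library's implementation):

```python
import random

# Sketch of residual-weighted sampling: oversample candidates, score
# them, keep the top scorers plus a random exploration fraction.
def adaptive_sample(n, lo, hi, scorer, oversample=4, explore=0.2, seed=0):
    rng = random.Random(seed)
    n_explore = int(n * explore)
    n_top = n - n_explore
    candidates = [lo + rng.random() * (hi - lo) for _ in range(n * oversample)]
    ranked = sorted(candidates, key=scorer, reverse=True)   # best first
    explore_pts = [lo + rng.random() * (hi - lo) for _ in range(n_explore)]
    return ranked[:n_top] + explore_pts

# Toy scorer: pretend the residual is largest near x = 0.5.
pts = adaptive_sample(10, 0.0, 1.0, scorer=lambda x: -abs(x - 0.5))
print(len(pts))  # 10
```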
ResidualScorer
Bases: Protocol
Protocol for scoring candidate collocation points by PDE residual magnitude.
residual_score(x: Tensor) -> Tensor
Return per-point non-negative residual score of shape (n,).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | Candidate collocation points of shape (n, d). | required |

Returns:

| Type | Description |
|---|---|
| Tensor | Non-negative scores of shape (n,). |
Construct a collocation sampler from a strategy name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| strategy | CollocationStrategies | One of the supported strategy names. | required |
| seed | int \| None | Optional seed for reproducible sampling. | None |
| scorer | ResidualScorer \| None | Required when the adaptive (residual-weighted) strategy is selected. | None |

Returns:

| Type | Description |
|---|---|
| CollocationSampler | A sampler instance satisfying the CollocationSampler protocol. |

Raises:

| Type | Description |
|---|---|
| ValueError | If the strategy is unknown, or if the adaptive strategy is selected without a scorer. |
Data Handling
PINNDataset
Bases: Dataset[TrainingBatch]
Dataset used for PINN training; each sample combines labeled data and
collocation points. Given a data_ratio, the number of data points K is
either round(data_ratio * batch_size) when the ratio is a float in
[0, 1], or an absolute count when it is an integer. The remaining
C = batch_size - K points are collocation points. Data points are
sampled without replacement per epoch: the dataset cycles through all
data points and, on the last batch, wraps around to the first indices to
preserve the batch size. Collocation points are sampled with replacement
from the pool.
The dataset produces a batch of shape ((x_data[K,d], y_data[K,...]), x_coll[C,d]).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x_data | Tensor | Data point x coordinates (time values). | required |
| y_data | Tensor | Data point y values (observations). | required |
| x_coll | Tensor | Collocation point x coordinates. | required |
| batch_size | int | Size of the batch. | required |
| data_ratio | float \| int | Ratio of data points to collocation points, either as a fraction in [0, 1] or an absolute count in [0, batch_size]. | required |
C = batch_size - self.K
instance-attribute
K = round(data_ratio * batch_size)
instance-attribute
batch_size = batch_size
instance-attribute
total_coll = x_coll.shape[0]
instance-attribute
total_data = x_data.shape[0]
instance-attribute
x_coll = x_coll
instance-attribute
x_data = x_data
instance-attribute
y_data = y_data
instance-attribute
__getitem__(index: int) -> TrainingBatch
Return one sample containing K data points and C collocation points.
__init__(x_data: Tensor, y_data: Tensor, x_coll: Tensor, batch_size: int, data_ratio: float | int)
__len__() -> int
Number of steps per epoch to see all data points once. Ceiling division.
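The K/C split described above can be sketched directly from the attribute definitions (K = round(data_ratio * batch_size), C = batch_size - K); the function name is illustrative:

```python
# Sketch of the K/C split used by PINNDataset: data_ratio is either a
# fraction of the batch (float in [0, 1]) or an absolute count (int).
def split_batch(batch_size, data_ratio):
    if isinstance(data_ratio, float) and 0.0 <= data_ratio <= 1.0:
        k = round(data_ratio * batch_size)   # fraction of the batch
    else:
        k = int(data_ratio)                  # absolute count
    return k, batch_size - k                 # (K data, C collocation)

print(split_batch(64, 0.25))  # (16, 48)
print(split_batch(64, 10))    # (10, 54)
```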
PINNDataModule
Bases: LightningDataModule, ABC
LightningDataModule for PINNs. Manages data and collocation datasets and creates the combined PINNDataset.
Collocation points are generated via a CollocationSampler selected by the
collocation_sampler field in TrainingDataConfig (string literal).
Subclasses only need to implement gen_data(); collocation generation is
handled by the sampler resolved from the hyperparameters.
Attributes:

| Name | Type | Description |
|---|---|---|
| pinn_ds | PINNDataset | Combined PINNDataset for training. |
| callbacks | list[DataCallback] | Sequence of DataCallback callbacks applied after data loading. |
callbacks: list[DataCallback] = list(callbacks) if callbacks else []
instance-attribute
context: InferredContext
property
hp = hp
instance-attribute
__init__(hp: PINNHyperparameters, validation: ValidationRegistry | None = None, callbacks: Sequence[DataCallback] | None = None, residual_scorer: ResidualScorer | None = None) -> None
gen_data(config: GenerationConfig) -> DataBatch
abstractmethod
Generate synthetic training data from a known solution.
Subclasses implement this to solve the ODE/PDE with known parameters and return the resulting data (optionally with added noise).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config | GenerationConfig | Generation configuration specifying the domain, noise level, and ground-truth arguments. | required |

Returns:

| Type | Description |
|---|---|
| DataBatch | Tuple of (x, y) tensors. |
load_data(config: IngestionConfig) -> DataBatch
Load training data from a CSV file.
Reads the CSV at config.df_path, extracts x and y columns,
and returns tensors shaped for PINN training.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config | IngestionConfig | Ingestion configuration specifying paths and columns. | required |

Returns:

| Type | Description |
|---|---|
| DataBatch | Tuple of (x, y) tensors. |
predict_dataloader() -> DataLoader[PredictionBatch]
Returns the prediction dataloader using only the data dataset.
setup(stage: str | None = None) -> None
Load raw data from IngestionConfig, or generate synthetic data from GenerationConfig. Apply registered callbacks, create InferredContext and datasets.
train_dataloader() -> DataLoader[TrainingBatch]
Returns the training dataloader using PINNDataset.
DataCallback
Base class for callbacks that transform data during setup.
Subclass this to apply custom preprocessing (e.g. scaling, normalization) to training data and collocation points before the dataset is constructed.
on_after_setup(dm: PINNDataModule) -> None
Hook called after PINNDataModule.setup() completes.
Use this to perform post-processing that depends on the fully constructed data module (e.g. adjusting validation functions to account for earlier scaling transforms).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dm | PINNDataModule | The fully initialized data module. | required |
transform_data(data: DataBatch, coll: Tensor) -> tuple[DataBatch, Tensor]
Transform training data and collocation points.
Called during PINNDataModule.setup() after data is loaded
but before the PINNDataset is created. Multiple callbacks
are applied in registration order.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| data | DataBatch | Tuple of (x, y) training tensors. | required |
| coll | Tensor | Collocation point coordinates. | required |

Returns:

| Type | Description |
|---|---|
| tuple[DataBatch, Tensor] | Transformed (data, coll) tuple. |
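A typical transform_data implementation rescales both the labeled data and the collocation points consistently. A minimal sketch with plain lists in place of tensors (the function is illustrative, not a library callback):

```python
# Sketch of a DataCallback-style transform: min-max scale the x
# coordinates of both the labeled data and the collocation points.
def scale_transform(data, coll):
    x, y = data
    lo, hi = min(x + coll), max(x + coll)
    scale = lambda v: (v - lo) / (hi - lo)
    return ([scale(v) for v in x], y), [scale(v) for v in coll]

data, coll = scale_transform(([0.0, 5.0], [1.0, 2.0]), [10.0])
print(data[0], coll)  # [0.0, 0.5] [1.0]
```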
InferredContext
Runtime context inferred from training data.
This holds the data that is either explicitly provided in props or inferred from training data.
domain = Domain.from_x(x)
instance-attribute
validation = validation
instance-attribute
__init__(x: Tensor, y: Tensor, validation: ResolvedValidation)
Infer context from either generated or loaded data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | x coordinates. | required |
| y | Tensor | Observations. | required |
| validation | ResolvedValidation | Resolved validation dictionary. | required |
Validation
Reference to a column in loaded data for ground truth comparison.
This allows practitioners to specify validation data by column name without writing custom functions. The column is resolved lazily when data is loaded.
Attributes:

| Name | Type | Description |
|---|---|---|
| column | str | Name of the column in the loaded DataFrame. |
| transform | Callable[[Tensor], Tensor] \| None | Optional transformation to apply to the column values. |
column: str
instance-attribute
transform: Callable[[Tensor], Tensor] | None = None
class-attribute
instance-attribute
__init__(column: str, transform: Callable[[Tensor], Tensor] | None = None) -> None
Resolve a ValidationRegistry by converting ColumnRef entries to callables.
Pure function entries are passed through unchanged. ColumnRef entries are resolved using the provided data file path.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| registry | ValidationRegistry | The validation registry to resolve. | required |
| df_path | Path \| None | Path to the CSV file for ColumnRef resolution. | None |

Returns:

| Type | Description |
|---|---|
| ResolvedValidation | A dictionary mapping parameter names to callable validation functions. |

Raises:

| Type | Description |
|---|---|
| ValueError | If a ColumnRef cannot be resolved (missing column or no df_path). |
Encodings
Bases: Module
Sinusoidal positional encoding for periodic or high-frequency signals.
For input \(\mathbf{x} \in \mathbb{R}^{n \times d}\) and
num_frequencies \(K\), the encoding concatenates \(\sin\) and \(\cos\)
terms at \(K\) frequency bands per input dimension,
producing shape \((n,\, d\,(1 + 2K))\) when include_input=True,
or \((n,\, 2dK)\) when include_input=False.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| num_frequencies | int | Number of frequency bands \(K \geq 1\). | 6 |
| include_input | bool | Prepend the original coordinates to the encoded output. | True |
include_input = include_input
instance-attribute
num_frequencies = num_frequencies
instance-attribute
__init__(num_frequencies: int = 6, include_input: bool = True) -> None
forward(x: Tensor) -> Tensor
Encode input with sin/cos at each frequency.
out_dim(in_dim: int) -> int
Compute output dimension given input dimension.
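The shape arithmetic can be sketched for a single scalar coordinate; the 2**k frequency schedule below is an assumption for illustration (the document does not state the exact schedule), but the output dimension d(1 + 2K) matches the class description:

```python
import math

# Sketch of sinusoidal encoding for one scalar coordinate with K
# frequency bands. The 2**k frequency schedule is an assumption.
def encode(x, num_frequencies=6, include_input=True):
    out = [x] if include_input else []
    for k in range(num_frequencies):
        f = 2.0 ** k
        out += [math.sin(f * x), math.cos(f * x)]
    return out

def out_dim(in_dim, num_frequencies=6, include_input=True):
    # d * (1 + 2K) with the input, d * 2K without it.
    return in_dim * ((1 if include_input else 0) + 2 * num_frequencies)

print(len(encode(0.3)), out_dim(1))  # 13 13
```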
Bases: Module
Random Fourier Features (Rahimi & Recht, 2007) for RBF kernel approximation.
Draws a fixed random matrix \(\mathbf{B} \sim \mathcal{N}(0, \sigma^2)\) of shape \((d_{\text{in}},\, m)\) and maps \(\mathbf{x} \in \mathbb{R}^{n \times d_{\text{in}}}\) to the feature vector \(\big[\cos(\mathbf{x}\mathbf{B}),\ \sin(\mathbf{x}\mathbf{B})\big]\) of shape \((n,\, 2m)\).
\(\mathbf{B}\) is registered as a buffer and moves with the module across devices.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| in_dim | int | Spatial dimension \(d_{\text{in}}\) of the input. | required |
| num_features | int | Number of random features \(m\) (output dimension \(= 2m\)). | 256 |
| scale | float | Standard deviation \(\sigma\) of the frequency distribution. Higher values capture higher-frequency variation. | 1.0 |
| seed | int \| None | Optional seed for reproducible frequency sampling. | None |
num_features = num_features
instance-attribute
out_dim: int
property
Output dimension (always 2 * num_features).
__init__(in_dim: int, num_features: int = 256, scale: float = 1.0, seed: int | None = None) -> None
forward(x: Tensor) -> Tensor
Project input through random features and apply cos/sin.
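The projection can be sketched with the standard library (the real module uses torch tensors and a registered buffer; names here are illustrative):

```python
import math
import random

# Sketch of Random Fourier Features: a fixed Gaussian matrix B of shape
# (in_dim, m); the output is [cos(xB), sin(xB)] with dimension 2 * m.
def make_rff(in_dim, num_features=4, scale=1.0, seed=0):
    rng = random.Random(seed)
    B = [[rng.gauss(0.0, scale) for _ in range(num_features)]
         for _ in range(in_dim)]

    def features(x):  # x: list of length in_dim
        proj = [sum(x[i] * B[i][j] for i in range(in_dim))
                for j in range(num_features)]
        return [math.cos(p) for p in proj] + [math.sin(p) for p in proj]

    return features

phi = make_rff(in_dim=2, num_features=4)
print(len(phi([0.5, -0.5])))  # 8 = 2 * num_features
```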
Utility Functions
Return the loss-criterion module for the given name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | Criteria | One of the supported loss names. | required |

Returns:

| Type | Description |
|---|---|
| Module | The corresponding PyTorch loss module. |
Get the activation function module by name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | Activations | The name of the activation function. | required |

Returns:

| Type | Description |
|---|---|
| Module | The PyTorch activation module. |
Type Aliases
| Alias | Definition | Purpose |
|---|---|---|
| ArgsRegistry | dict[str, Argument] | Named arguments passed to ODE/PDE callables |
| ParamsRegistry | dict[str, Parameter] | Named learnable parameters |
| FieldsRegistry | dict[str, Field] | Named neural fields |
| TrainingBatch | tuple[DataBatch, Tensor] | (data, collocation_points) |
| DataBatch | tuple[Tensor, Tensor] | (x_data, y_data) |
| Predictions | tuple[DataBatch, dict, dict \| None] | (batch, preds, trues) |
| LogFn | Protocol | Logging callback (key, value, progress_bar?) |
| ValidationRegistry | dict[str, ValidationSource] | Ground-truth values for parameter comparison |
| Activations | Literal[...] | Supported activation function names |
| Criteria | Literal[...] | Supported loss function names |
| CollocationStrategies | Literal[...] | Supported collocation sampling strategies |