anypinn.core
Core PINN building blocks.
Activations: TypeAlias = Literal['tanh', 'relu', 'leaky_relu', 'sigmoid', 'selu', 'softplus', 'identity']
module-attribute
Supported activation functions.
ArgsRegistry: TypeAlias = dict[str, Argument]
module-attribute
CollocationStrategies: TypeAlias = Literal['uniform', 'random', 'latin_hypercube', 'log_uniform_1d', 'adaptive']
module-attribute
Supported collocation sampling strategies.
Criteria: TypeAlias = Literal['mse', 'huber', 'l1']
module-attribute
Supported loss criteria.
DataBatch: TypeAlias = tuple[Tensor, Tensor]
module-attribute
Type alias for data batch: (x, y).
FieldsRegistry: TypeAlias = dict[str, Field]
module-attribute
LOSS_KEY = 'loss'
module-attribute
Key used for logging the total loss.
ParamsRegistry: TypeAlias = dict[str, Parameter]
module-attribute
Predictions: TypeAlias = tuple[DataBatch, dict[str, Tensor], dict[str, Tensor] | None]
module-attribute
Type alias for model predictions: (input_batch, predictions_dictionary, true_values_dictionary), where predictions_dictionary maps each field or parameter name to its prediction, and true_values_dictionary maps each field or parameter name to its true value. If no validation source is configured, true_values_dictionary is None.
ResolvedValidation: TypeAlias = dict[str, Callable[[Tensor], Tensor]]
module-attribute
Validation registry after ColumnRef entries have been resolved to callables.
TrainingBatch: TypeAlias = tuple[DataBatch, Tensor]
module-attribute
Training batch tuple: ((x_data, y_data), x_coll).
ValidationRegistry: TypeAlias = dict[str, ValidationSource]
module-attribute
Registry mapping parameter names to their validation sources.
Example

```python
validation: ValidationRegistry = {
    "beta": lambda x: torch.sin(x),           # Pure function
    "gamma": ColumnRef(column="gamma_true"),  # From data
    "delta": None,                            # No validation
}
```
ValidationSource: TypeAlias = Callable[[Tensor], Tensor] | ColumnRef | None
module-attribute
A source for ground truth values. Can be:

- A callable that takes x coordinates and returns true values
- A ColumnRef that references a column in loaded data
- None if no validation is needed for this parameter
__all__ = ['LOSS_KEY', 'Activations', 'AdamConfig', 'AdaptiveSampler', 'ArgsRegistry', 'Argument', 'CollocationSampler', 'CollocationStrategies', 'ColumnRef', 'Constraint', 'CosineAnnealingConfig', 'Criteria', 'DataBatch', 'DataCallback', 'Domain', 'EarlyStoppingConfig', 'Field', 'FieldsRegistry', 'FourierEncoding', 'GenerationConfig', 'InferredContext', 'IngestionConfig', 'LBFGSConfig', 'LatinHypercubeSampler', 'LogFn', 'LogUniform1DSampler', 'MLPConfig', 'PINNDataModule', 'PINNDataset', 'PINNHyperparameters', 'Parameter', 'ParamsRegistry', 'Predictions', 'Problem', 'RandomFourierFeatures', 'RandomSampler', 'ReduceLROnPlateauConfig', 'ResidualScorer', 'ResolvedValidation', 'SMMAStoppingConfig', 'ScalarConfig', 'TrainingBatch', 'TrainingDataConfig', 'UniformSampler', 'ValidationRegistry', 'ValidationSource', 'build_criterion', 'build_sampler', 'get_activation', 'resolve_validation']
module-attribute
AdamConfig
dataclass
Configuration for the Adam optimizer.
Source code in src/anypinn/core/config.py
betas: tuple[float, float] = (0.9, 0.999)
class-attribute
instance-attribute
lr: float = 0.001
class-attribute
instance-attribute
weight_decay: float = 0.0
class-attribute
instance-attribute
__init__(*, lr: float = 0.001, betas: tuple[float, float] = (0.9, 0.999), weight_decay: float = 0.0) -> None
__post_init__() -> None
Source code in src/anypinn/core/config.py
AdaptiveSampler
Residual-weighted adaptive collocation sampler.
Draws an oversample of candidate points, scores them using a
ResidualScorer, and retains the top-scoring subset. A configurable
exploration_ratio ensures a fraction of purely random points to prevent
mode collapse.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `scorer` | `ResidualScorer` | Callable returning per-point residual scores. | *required* |
| `oversample_factor` | `int` | Multiplier on the requested sample count `n`, sizing the candidate pool. | `4` |
| `exploration_ratio` | `float` | Fraction of the budget reserved for random points. | `0.2` |
| `seed` | `int \| None` | Optional seed for reproducible sampling. | `None` |
Source code in src/anypinn/core/samplers.py
__init__(scorer: ResidualScorer, oversample_factor: int = 4, exploration_ratio: float = 0.2, seed: int | None = None) -> None
Source code in src/anypinn/core/samplers.py
sample(n: int, domain: Domain) -> Tensor
Source code in src/anypinn/core/samplers.py
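The selection logic can be sketched in plain Python. This is a standalone illustration, not the library's implementation: `adaptive_select`, its list-based representation, and its tie-breaking are all hypothetical.

```python
import random

def adaptive_select(candidates, scores, n, exploration_ratio=0.2, rng=None):
    """Keep the top-scoring candidates; reserve a random fraction to avoid mode collapse."""
    rng = rng or random.Random(0)
    n_random = int(n * exploration_ratio)   # exploration budget
    n_top = n - n_random                    # exploitation budget
    # rank candidate indices by residual score, highest first
    ranked = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
    top = [candidates[i] for i in ranked[:n_top]]
    rest = [candidates[i] for i in ranked[n_top:]]
    return top + rng.sample(rest, min(n_random, len(rest)))
```

With `exploration_ratio=0.2` and a budget of 5, four points are the highest-scoring candidates and one is drawn at random from the remainder.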
Argument
Represents an argument that can be passed to an ODE/PDE function. Can be a fixed float value or a callable function.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `value` | `float \| Callable[[Tensor], Tensor]` | The value (float) or function (callable). | *required* |
Source code in src/anypinn/core/nn.py
__call__(x: Tensor) -> Tensor
Evaluate the argument.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input tensor (context). | *required* |

Returns:

| Type | Description |
|---|---|
| `Tensor` | The value of the argument, broadcasted if necessary. |
Source code in src/anypinn/core/nn.py
__init__(value: float | Callable[[Tensor], Tensor])
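The float-or-callable pattern can be sketched with a plain function (hypothetical `make_argument` helper; the real `Argument` class additionally broadcasts fixed values to the input tensor's shape):

```python
import math

def make_argument(value):
    """Wrap a fixed float or a callable into a uniform call interface (sketch)."""
    if callable(value):
        return value
    # fixed value: return a constant function of the input
    return lambda x: value

beta = make_argument(0.3)                   # fixed coefficient
gamma = make_argument(lambda t: math.sin(t))  # time-varying coefficient
```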
CollocationSampler
Bases: Protocol
Protocol for collocation point samplers.
Implementations must return a tensor of shape (n, domain.ndim) with all
points inside the domain bounds.
Source code in src/anypinn/core/samplers.py
sample(n: int, domain: Domain) -> Tensor
Source code in src/anypinn/core/samplers.py
ColumnRef
dataclass
Reference to a column in loaded data for ground truth comparison.
This allows practitioners to specify validation data by column name without writing custom functions. The column is resolved lazily when data is loaded.
Attributes:

| Name | Type | Description |
|---|---|---|
| `column` | `str` | Name of the column in the loaded DataFrame. |
| `transform` | `Callable[[Tensor], Tensor] \| None` | Optional transformation to apply to the column values. |
Example

```python
validation = {
    "beta": ColumnRef(column="Rt", transform=lambda rt: rt * delta),
}
```
Source code in src/anypinn/core/validation.py
column: str
instance-attribute
transform: Callable[[Tensor], Tensor] | None = None
class-attribute
instance-attribute
__init__(column: str, transform: Callable[[Tensor], Tensor] | None = None) -> None
Constraint
Bases: ABC
Abstract base class for a constraint (loss term) in the PINN. Returns a loss value for the given batch.
Source code in src/anypinn/core/problem.py
inject_context(context: InferredContext) -> None
Inject the context into the constraint. This can be used by the constraint to access the data used to compute the loss.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `context` | `InferredContext` | The context to inject. | *required* |
Source code in src/anypinn/core/problem.py
loss(batch: TrainingBatch, criterion: nn.Module, log: LogFn | None = None) -> Tensor
abstractmethod
Calculate the loss for this constraint.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch` | `TrainingBatch` | The current batch of data/collocation points. | *required* |
| `criterion` | `Module` | The loss function (e.g. MSE). | *required* |
| `log` | `LogFn \| None` | Optional logging function. | `None` |

Returns:

| Type | Description |
|---|---|
| `Tensor` | The calculated loss tensor. |
Source code in src/anypinn/core/problem.py
CosineAnnealingConfig
dataclass
Configuration for Cosine Annealing LR Scheduler.
Source code in src/anypinn/core/config.py
T_max: int
instance-attribute
eta_min: float = 0.0
class-attribute
instance-attribute
__init__(*, T_max: int, eta_min: float = 0.0) -> None
DataCallback
Abstract base class for building new data callbacks.
Source code in src/anypinn/core/dataset.py
on_after_setup(dm: PINNDataModule) -> None
transform_data(data: DataBatch, coll: Tensor) -> tuple[DataBatch, Tensor]
Domain
dataclass
N-dimensional rectangular domain.
Attributes:

| Name | Type | Description |
|---|---|---|
| `bounds` | `list[tuple[float, float]]` | Per-dimension (min, max) pairs. |
| `dx` | `list[float] \| None` | Per-dimension step size (`None` if not provided). |
Source code in src/anypinn/core/nn.py
bounds: list[tuple[float, float]]
instance-attribute
dx: list[float] | None = None
class-attribute
instance-attribute
ndim: int
property
Number of spatial dimensions.
x0: float
property
Lower bound of the first dimension (convenience for 1-D / time-axis access).
x1: float
property
Upper bound of the first dimension.
__init__(bounds: list[tuple[float, float]], dx: list[float] | None = None) -> None
__repr__() -> str
from_x(x: Tensor) -> Domain
classmethod
Infer domain bounds and step sizes from a coordinate tensor of shape (N, d).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Coordinate tensor of shape `(N, d)`. | *required* |

Returns:

| Type | Description |
|---|---|
| `Domain` | Domain with bounds and dx inferred from the data. |
Source code in src/anypinn/core/nn.py
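The inference can be sketched without torch. This hypothetical `infer_domain` works over plain coordinate rows; the step-size heuristic (median of consecutive gaps) is an assumption, not necessarily what `from_x` does:

```python
def infer_domain(points):
    """Per-dimension (min, max) bounds and a step size from coordinate rows."""
    dims = list(zip(*points))                       # transpose rows -> columns
    bounds = [(min(c), max(c)) for c in dims]
    dx = []
    for c in dims:
        s = sorted(set(c))
        steps = [b - a for a, b in zip(s, s[1:])]   # consecutive gaps
        dx.append(steps[len(steps) // 2] if steps else None)  # median-ish gap
    return bounds, dx
```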
EarlyStoppingConfig
dataclass
Configuration for Early Stopping callback.
Source code in src/anypinn/core/config.py
mode: Literal['min', 'max']
instance-attribute
patience: int
instance-attribute
__init__(*, patience: int, mode: Literal['min', 'max']) -> None
Field
Bases: Module
A neural field mapping coordinates -> vector of state variables. Example (ODE): t -> [S, I, R].
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `config` | `MLPConfig` | Configuration for the MLP backing this field. | *required* |
Source code in src/anypinn/core/nn.py
encoder: nn.Module | None = encode
instance-attribute
net = nn.Sequential(*layers)
instance-attribute
__init__(config: MLPConfig)
Source code in src/anypinn/core/nn.py
forward(x: Tensor) -> Tensor
Forward pass of the field.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input coordinates (e.g. time, space). | *required* |

Returns:

| Type | Description |
|---|---|
| `Tensor` | The values of the field at the input coordinates. |
Source code in src/anypinn/core/nn.py
FourierEncoding
Bases: Module
Sinusoidal positional encoding for periodic or high-frequency signals.
For input \(\mathbf{x} \in \mathbb{R}^{n \times d}\) and num_frequencies \(K\), the encoding concatenates \(\sin\) and \(\cos\) features over \(K\) frequency bands per input dimension, producing shape \((n,\, d\,(1 + 2K))\) when include_input=True, or \((n,\, 2dK)\) when include_input=False.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `num_frequencies` | `int` | Number of frequency bands \(K \geq 1\). | `6` |
| `include_input` | `bool` | Prepend original coordinates to the encoded output. | `True` |
Source code in src/anypinn/lib/encodings.py
include_input = include_input
instance-attribute
num_frequencies = num_frequencies
instance-attribute
__init__(num_frequencies: int = 6, include_input: bool = True) -> None
Source code in src/anypinn/lib/encodings.py
forward(x: Tensor) -> Tensor
out_dim(in_dim: int) -> int
Compute output dimension given input dimension.
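The output-dimension arithmetic follows directly from the shapes above (standalone sketch of the `out_dim` rule):

```python
def fourier_out_dim(in_dim, num_frequencies=6, include_input=True):
    # each input dimension contributes 2*K sin/cos channels,
    # plus the raw coordinate itself when include_input=True
    base = 1 if include_input else 0
    return in_dim * (base + 2 * num_frequencies)
```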
GenerationConfig
dataclass
Bases: TrainingDataConfig
Configuration for data generation.
Source code in src/anypinn/core/config.py
args_to_train: ArgsRegistry
instance-attribute
noise_level: float
instance-attribute
x: Tensor
instance-attribute
__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None, x: Tensor, noise_level: float, args_to_train: ArgsRegistry) -> None
InferredContext
dataclass
Runtime context inferred from training data.
This holds the data that is either explicitly provided in props or inferred from training data.
Source code in src/anypinn/core/context.py
domain = Domain.from_x(x)
instance-attribute
validation = validation
instance-attribute
__init__(x: Tensor, y: Tensor, validation: ResolvedValidation)
Infer context from either generated or loaded data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | x coordinates. | *required* |
| `y` | `Tensor` | Observations. | *required* |
| `validation` | `ResolvedValidation` | Resolved validation dictionary. | *required* |
Source code in src/anypinn/core/context.py
IngestionConfig
dataclass
Bases: TrainingDataConfig
Configuration for data ingestion from files. If x_column is None, the data is assumed to be evenly spaced.
Source code in src/anypinn/core/config.py
df_path: Path
instance-attribute
x_column: str | None = None
class-attribute
instance-attribute
x_transform: Callable[[Any], Any] | None = None
class-attribute
instance-attribute
y_columns: list[str]
instance-attribute
__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None, df_path: Path, x_transform: Callable[[Any], Any] | None = None, x_column: str | None = None, y_columns: list[str]) -> None
LBFGSConfig
dataclass
Configuration for the L-BFGS optimizer.
Source code in src/anypinn/core/config.py
history_size: int = 100
class-attribute
instance-attribute
line_search_fn: str | None = 'strong_wolfe'
class-attribute
instance-attribute
lr: float = 1.0
class-attribute
instance-attribute
max_eval: int | None = None
class-attribute
instance-attribute
max_iter: int = 20
class-attribute
instance-attribute
__init__(*, lr: float = 1.0, max_iter: int = 20, max_eval: int | None = None, history_size: int = 100, line_search_fn: str | None = 'strong_wolfe') -> None
__post_init__() -> None
Source code in src/anypinn/core/config.py
LatinHypercubeSampler
Latin Hypercube sampler (pure-PyTorch, no SciPy dependency).
Stratifies each dimension into n equal intervals and places one sample
per interval, then shuffles columns independently.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `seed` | `int \| None` | Optional seed for reproducible sampling. | `None` |
Source code in src/anypinn/core/samplers.py
__init__(seed: int | None = None) -> None
sample(n: int, domain: Domain) -> Tensor
Source code in src/anypinn/core/samplers.py
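The stratify-then-shuffle idea can be sketched with the standard library (illustrative only; the library version is pure PyTorch and returns a tensor):

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """One sample per stratum along each axis; axes shuffled independently."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        # one point inside each of the n equal intervals of [0, 1)
        strata = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)                      # decorrelate the axes
        cols.append([lo + u * (hi - lo) for u in strata])
    return [tuple(row) for row in zip(*cols)]    # n points of dimension d
```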
LogFn
Bases: Protocol
A function that logs a value to a dictionary.
Source code in src/anypinn/core/types.py
__call__(name: str, value: Tensor, progress_bar: bool = False) -> None
Log a value.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | The name to log the value under. | *required* |
| `value` | `Tensor` | The value to log. | *required* |
| `progress_bar` | `bool` | Whether the value should be logged to the progress bar. | `False` |
Source code in src/anypinn/core/types.py
LogUniform1DSampler
Log-uniform sampler for 1-D domains (reproduces SIR collocation behavior).
Samples uniformly in log1p space and maps back via expm1, producing
a distribution that is denser near the lower bound — useful for epidemic
models where early dynamics are most informative.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `seed` | `int \| None` | Optional seed for reproducible sampling. | `None` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the domain is not 1-D or its bounds are invalid. |
Source code in src/anypinn/core/samplers.py
__init__(seed: int | None = None) -> None
sample(n: int, domain: Domain) -> Tensor
Source code in src/anypinn/core/samplers.py
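The log1p/expm1 mapping can be sketched with the standard library (standalone illustration of the sampling rule, not the library code):

```python
import math
import random

def log_uniform_1d(n, lo, hi, seed=0):
    """Sample uniformly in log1p space, map back with expm1 -> denser near lo."""
    rng = random.Random(seed)
    a, b = math.log1p(lo), math.log1p(hi)
    return [math.expm1(rng.uniform(a, b)) for _ in range(n)]
```

On a domain like [0, 100] this concentrates points near 0, matching the "early dynamics" emphasis described above.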
MLPConfig
dataclass
Configuration for a Multi-Layer Perceptron (MLP).
Attributes:

| Name | Type | Description |
|---|---|---|
| `in_dim` | `int` | Dimension of input layer. |
| `out_dim` | `int` | Dimension of output layer. |
| `hidden_layers` | `list[int]` | List of dimensions for hidden layers. |
| `activation` | `Activations` | Activation function to use between layers. |
| `output_activation` | `Activations \| None` | Optional activation function for the output layer. |
| `encode` | `Callable[[Tensor], Tensor] \| None` | Optional function to encode inputs before passing to the MLP. |
Source code in src/anypinn/core/config.py
activation: Activations
instance-attribute
encode: Callable[[Tensor], Tensor] | None = None
class-attribute
instance-attribute
hidden_layers: list[int]
instance-attribute
in_dim: int
instance-attribute
out_dim: int
instance-attribute
output_activation: Activations | None = None
class-attribute
instance-attribute
__init__(*, in_dim: int, out_dim: int, hidden_layers: list[int], activation: Activations, output_activation: Activations | None = None, encode: Callable[[Tensor], Tensor] | None = None) -> None
PINNDataModule
Bases: LightningDataModule, ABC
LightningDataModule for PINNs. Manages data and collocation datasets and creates the combined PINNDataset.
Collocation points are generated via a CollocationSampler selected by the
collocation_sampler field in TrainingDataConfig (string literal).
Subclasses only need to implement gen_data(); collocation generation is
handled by the sampler resolved from the hyperparameters.
Attributes:

| Name | Type | Description |
|---|---|---|
| `pinn_ds` | `PINNDataset` | Combined PINNDataset for training. |
| `callbacks` | `list[DataCallback]` | Sequence of DataCallback callbacks applied after data loading. |
Source code in src/anypinn/core/dataset.py
callbacks: list[DataCallback] = list(callbacks) if callbacks else []
instance-attribute
context: InferredContext
property
hp = hp
instance-attribute
__init__(hp: PINNHyperparameters, validation: ValidationRegistry | None = None, callbacks: Sequence[DataCallback] | None = None, residual_scorer: ResidualScorer | None = None) -> None
Source code in src/anypinn/core/dataset.py
gen_data(config: GenerationConfig) -> DataBatch
abstractmethod
load_data(config: IngestionConfig) -> DataBatch
Load raw data from IngestionConfig.
Source code in src/anypinn/core/dataset.py
predict_dataloader() -> DataLoader[PredictionBatch]
Returns the prediction dataloader using only the data dataset.
Source code in src/anypinn/core/dataset.py
setup(stage: str | None = None) -> None
Load raw data from IngestionConfig, or generate synthetic data from GenerationConfig. Apply registered callbacks, create InferredContext and datasets.
Source code in src/anypinn/core/dataset.py
train_dataloader() -> DataLoader[TrainingBatch]
Returns the training dataloader using PINNDataset.
Source code in src/anypinn/core/dataset.py
PINNDataset
Bases: Dataset[TrainingBatch]
Dataset used for PINN training. Combines labeled data and collocation points per sample. Given a data_ratio, the number of data points K per batch is determined either as data_ratio * batch_size (if the ratio is a float between 0 and 1) or as an absolute count (if the ratio is an integer). The remaining C points are used for collocation. Data points are sampled without replacement per epoch, i.e. the dataset cycles through all data points and, at the last batch, wraps around to the first indices to preserve the batch size. Collocation points are sampled with replacement from the pool.
The dataset produces a batch of shape ((x_data[K,d], y_data[K,...]), x_coll[C,d]).
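The K/C split can be sketched as a small helper (hypothetical `resolve_split`, mirroring the documented rule; the library's exact handling may differ):

```python
def resolve_split(batch_size, data_ratio):
    """Return (K data points, C collocation points) per batch."""
    if isinstance(data_ratio, int):
        k = data_ratio                      # absolute count in [0, batch_size]
    else:
        k = round(data_ratio * batch_size)  # fraction of the batch in [0, 1]
    if not 0 <= k <= batch_size:
        raise ValueError("data_ratio out of range")
    return k, batch_size - k
```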
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x_data` | `Tensor` | Data point x coordinates (time values). | *required* |
| `y_data` | `Tensor` | Data point y values (observations). | *required* |
| `x_coll` | `Tensor` | Collocation point x coordinates. | *required* |
| `batch_size` | `int` | Size of the batch. | *required* |
| `data_ratio` | `float \| int` | Ratio of data points to collocation points, either as a ratio in [0, 1] or an absolute count in [0, batch_size]. | *required* |
Source code in src/anypinn/core/dataset.py
C = batch_size - self.K
instance-attribute
K = round(data_ratio * batch_size)
instance-attribute
batch_size = batch_size
instance-attribute
total_coll = x_coll.shape[0]
instance-attribute
total_data = x_data.shape[0]
instance-attribute
x_coll = x_coll
instance-attribute
x_data = x_data
instance-attribute
y_data = y_data
instance-attribute
__getitem__(index: int) -> TrainingBatch
Return one sample containing K data points and C collocation points.
Source code in src/anypinn/core/dataset.py
__init__(x_data: Tensor, y_data: Tensor, x_coll: Tensor, batch_size: int, data_ratio: float | int)
Source code in src/anypinn/core/dataset.py
PINNHyperparameters
dataclass
Aggregated hyperparameters for the PINN model.
Source code in src/anypinn/core/config.py
criterion: Criteria = 'mse'
class-attribute
instance-attribute
early_stopping: EarlyStoppingConfig | None = None
class-attribute
instance-attribute
fields_config: MLPConfig
instance-attribute
gradient_clip_val: float | None = None
class-attribute
instance-attribute
lr: float
instance-attribute
max_epochs: int | None = None
class-attribute
instance-attribute
optimizer: AdamConfig | LBFGSConfig | None = None
class-attribute
instance-attribute
params_config: MLPConfig | ScalarConfig
instance-attribute
scheduler: ReduceLROnPlateauConfig | CosineAnnealingConfig | None = None
class-attribute
instance-attribute
smma_stopping: SMMAStoppingConfig | None = None
class-attribute
instance-attribute
training_data: IngestionConfig | GenerationConfig
instance-attribute
__init__(*, lr: float, training_data: IngestionConfig | GenerationConfig, fields_config: MLPConfig, params_config: MLPConfig | ScalarConfig, max_epochs: int | None = None, gradient_clip_val: float | None = None, criterion: Criteria = 'mse', optimizer: AdamConfig | LBFGSConfig | None = None, scheduler: ReduceLROnPlateauConfig | CosineAnnealingConfig | None = None, early_stopping: EarlyStoppingConfig | None = None, smma_stopping: SMMAStoppingConfig | None = None) -> None
Parameter
Bases: Module, Argument
Learnable parameter. Supports scalar or function-valued parameter. For function-valued parameters (e.g. β(t)), uses a small MLP.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
config
|
ScalarConfig | MLPConfig
|
Configuration for the parameter (ScalarConfig or MLPConfig). |
required |
Source code in src/anypinn/core/nn.py
config = config
instance-attribute
mode: Literal['scalar', 'mlp']
property
Mode of the parameter: 'scalar' or 'mlp'.
net = nn.Sequential(*layers)
instance-attribute
value = nn.Parameter(torch.tensor(float(config.init_value), dtype=(torch.float32)))
instance-attribute
__init__(config: ScalarConfig | MLPConfig)
Source code in src/anypinn/core/nn.py
forward(x: Tensor | None = None) -> Tensor
Get the value of the parameter.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
x
|
Tensor | None
|
Input tensor (required for 'mlp' mode). |
None
|
Returns:
| Type | Description |
|---|---|
Tensor
|
The parameter value. |
Source code in src/anypinn/core/nn.py
Problem
Bases: Module
Aggregates operator residuals and constraints into total loss. Manages fields, parameters, constraints, and validation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `constraints` | `list[Constraint]` | List of constraints to enforce. | *required* |
| `criterion` | `Module` | Loss function module. | *required* |
| `fields` | `FieldsRegistry` | Registry of fields (neural networks) to solve for. | *required* |
| `params` | `ParamsRegistry` | Registry of learnable parameters. | *required* |
Source code in src/anypinn/core/problem.py
constraints = constraints
instance-attribute
criterion = criterion
instance-attribute
fields = fields
instance-attribute
params = params
instance-attribute
__init__(constraints: list[Constraint], criterion: nn.Module, fields: FieldsRegistry, params: ParamsRegistry)
Source code in src/anypinn/core/problem.py
inject_context(context: InferredContext) -> None
Inject the context into the problem.
This should be called after data is loaded but before training starts. Pure function entries are passed through unchanged.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `context` | `InferredContext` | The context to inject. | *required* |
Source code in src/anypinn/core/problem.py
predict(batch: DataBatch) -> tuple[DataBatch, dict[str, Tensor]]
Generate predictions for a given batch of data. Returns unscaled predictions in original domain.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch` | `DataBatch` | Batch of input coordinates. | *required* |

Returns:

| Type | Description |
|---|---|
| `tuple[DataBatch, dict[str, Tensor]]` | Tuple of (original_batch, predictions_dict). |
Source code in src/anypinn/core/problem.py
training_loss(batch: TrainingBatch, log: LogFn | None = None) -> Tensor
Calculate the total loss from all constraints.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch` | `TrainingBatch` | Current batch. | *required* |
| `log` | `LogFn \| None` | Optional logging function. | `None` |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Sum of losses from all constraints. |
Source code in src/anypinn/core/problem.py
true_values(x: Tensor) -> dict[str, Tensor] | None
Get the true values for a given x coordinates. Returns None if no validation source is configured.
Source code in src/anypinn/core/problem.py
RandomFourierFeatures
Bases: Module
Random Fourier Features (Rahimi & Recht, 2007) for RBF kernel approximation.
Draws a fixed random matrix \(\mathbf{B} \sim \mathcal{N}(0, \sigma^2)\) of shape \((d_{\text{in}},\, m)\) and maps \(\mathbf{x} \in \mathbb{R}^{n \times d_{\text{in}}}\) to the concatenated feature map \([\cos(\mathbf{x}\mathbf{B}),\, \sin(\mathbf{x}\mathbf{B})] \in \mathbb{R}^{n \times 2m}\).
\(\mathbf{B}\) is registered as a buffer and moves with the module across devices.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `in_dim` | `int` | Spatial dimension \(d_{\text{in}}\) of the input. | *required* |
| `num_features` | `int` | Number of random features \(m\) (output dimension \(= 2m\)). | `256` |
| `scale` | `float` | Standard deviation \(\sigma\) of the frequency distribution; higher values capture higher-frequency variation. | `1.0` |
| `seed` | `int \| None` | Optional seed for reproducible frequency sampling. | `None` |
Source code in src/anypinn/lib/encodings.py
num_features = num_features
instance-attribute
out_dim: int
property
Output dimension (always 2 * num_features).
__init__(in_dim: int, num_features: int = 256, scale: float = 1.0, seed: int | None = None) -> None
Source code in src/anypinn/lib/encodings.py
RandomSampler
Uniform random sampler inside domain bounds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `seed` | `int \| None` | Optional seed for reproducible sampling. | `None` |
Source code in src/anypinn/core/samplers.py
__init__(seed: int | None = None) -> None
sample(n: int, domain: Domain) -> Tensor
ReduceLROnPlateauConfig
dataclass
Configuration for Learning Rate Scheduler (ReduceLROnPlateau).
Source code in src/anypinn/core/config.py
factor: float
instance-attribute
min_lr: float
instance-attribute
mode: Literal['min', 'max']
instance-attribute
patience: int
instance-attribute
threshold: float
instance-attribute
__init__(*, mode: Literal['min', 'max'], factor: float, patience: int, threshold: float, min_lr: float) -> None
__post_init__() -> None
ResidualScorer
Bases: Protocol
Protocol for scoring candidate collocation points by PDE residual magnitude.
Source code in src/anypinn/core/samplers.py
residual_score(x: Tensor) -> Tensor
Return per-point non-negative residual score of shape (n,).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Candidate collocation points of shape `(n, d)`. | *required* |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Non-negative scores of shape `(n,)`. |
Source code in src/anypinn/core/samplers.py
SMMAStoppingConfig
dataclass
Configuration for Simple Moving Average Stopping callback.
Source code in src/anypinn/core/config.py
lookback: int
instance-attribute
threshold: float
instance-attribute
window: int
instance-attribute
__init__(*, window: int, threshold: float, lookback: int) -> None
__post_init__() -> None
Source code in src/anypinn/core/config.py
ScalarConfig
dataclass
Configuration for a scalar parameter.
Attributes:

| Name | Type | Description |
|---|---|---|
| `init_value` | `float` | Initial value for the parameter. |
Source code in src/anypinn/core/config.py
init_value: float
instance-attribute
__init__(*, init_value: float) -> None
TrainingDataConfig
dataclass
Configuration for data loading and batching.
Attributes:

| Name | Type | Description |
|---|---|---|
| `batch_size` | `int` | Number of points per training batch. |
| `data_ratio` | `int \| float` | Ratio of data to collocation points per batch. |
| `collocations` | `int` | Total number of collocation points to generate. |
| `collocation_sampler` | `CollocationStrategies` | Sampling strategy for collocation points. |
| `collocation_seed` | `int \| None` | Optional seed for reproducible collocation sampling. |
Source code in src/anypinn/core/config.py
batch_size: int
instance-attribute
collocation_sampler: CollocationStrategies = 'random'
class-attribute
instance-attribute
collocation_seed: int | None = None
class-attribute
instance-attribute
collocations: int
instance-attribute
data_ratio: int | float
instance-attribute
__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None) -> None
__post_init__() -> None
Source code in src/anypinn/core/config.py
UniformSampler
Cartesian grid sampler that distributes points evenly across the domain.
For d-dimensional domains, places ceil(n^(1/d)) points per axis then
takes the first n points of the resulting grid.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `seed` | `int \| None` | Optional seed (unused; the grid is deterministic). | `None` |
Source code in src/anypinn/core/samplers.py
__init__(seed: int | None = None) -> None
sample(n: int, domain: Domain) -> Tensor
Source code in src/anypinn/core/samplers.py
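The grid construction reduces to simple arithmetic (stdlib sketch; `itertools.product` stands in for a tensor meshgrid, and the FP-noise guard is an assumption):

```python
import itertools
import math

def uniform_grid(n, bounds):
    """ceil(n^(1/d)) evenly spaced points per axis, truncated to the first n."""
    d = len(bounds)
    # round(..., 12) guards floating-point noise such as 27**(1/3) -> 3.0000000000000004
    per_axis = math.ceil(round(n ** (1.0 / d), 12))
    axes = [
        [lo + i * (hi - lo) / (per_axis - 1) if per_axis > 1 else lo
         for i in range(per_axis)]
        for lo, hi in bounds
    ]
    return list(itertools.product(*axes))[:n]
```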
build_criterion(name: Criteria) -> nn.Module
Return the loss-criterion module for the given name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `Criteria` | One of `'mse'`, `'huber'`, `'l1'`. | *required* |

Returns:

| Type | Description |
|---|---|
| `Module` | The corresponding PyTorch loss module. |
Source code in src/anypinn/core/nn.py
build_sampler(strategy: CollocationStrategies, seed: int | None = None, scorer: ResidualScorer | None = None) -> CollocationSampler
Construct a collocation sampler from a strategy name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `strategy` | `CollocationStrategies` | One of the `CollocationStrategies` literals. | *required* |
| `seed` | `int \| None` | Optional seed for reproducible sampling. | `None` |
| `scorer` | `ResidualScorer \| None` | Required when `strategy='adaptive'`. | `None` |

Returns:

| Type | Description |
|---|---|
| `CollocationSampler` | A sampler instance satisfying the `CollocationSampler` protocol. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If `strategy='adaptive'` and no `scorer` is provided. |
Source code in src/anypinn/core/samplers.py
get_activation(name: Activations) -> nn.Module
Get the activation function module by name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `Activations` | The name of the activation function. | *required* |

Returns:

| Type | Description |
|---|---|
| `Module` | The PyTorch activation module. |
Source code in src/anypinn/core/nn.py
resolve_validation(registry: ValidationRegistry, df_path: Path | None = None) -> ResolvedValidation
Resolve a ValidationRegistry by converting ColumnRef entries to callables.
Pure function entries are passed through unchanged. ColumnRef entries are resolved using the provided data file path.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `registry` | `ValidationRegistry` | The validation registry to resolve. | *required* |
| `df_path` | `Path \| None` | Path to the CSV file for ColumnRef resolution. | `None` |

Returns:

| Type | Description |
|---|---|
| `ResolvedValidation` | A dictionary mapping parameter names to callable validation functions. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If a ColumnRef cannot be resolved (missing column or no df_path). |