anypinn.core.config
Configuration dataclasses for PINN models.
AdamConfig
dataclass
Configuration for the Adam optimizer.
Attributes:

| Name | Type | Description |
|---|---|---|
| `lr` | `float` | Learning rate (must be positive). |
| `betas` | `tuple[float, float]` | Coefficients for computing running averages of the gradient and its square. Both must be in (0, 1). |
| `weight_decay` | `float` | L2 penalty coefficient (non-negative). |
Source code in src/anypinn/core/config.py
betas: tuple[float, float] = (0.9, 0.999)
class-attribute
instance-attribute
lr: float = 0.001
class-attribute
instance-attribute
weight_decay: float = 0.0
class-attribute
instance-attribute
__init__(*, lr: float = 0.001, betas: tuple[float, float] = (0.9, 0.999), weight_decay: float = 0.0) -> None
__post_init__() -> None
Source code in src/anypinn/core/config.py
CosineAnnealingConfig
dataclass
Configuration for Cosine Annealing LR Scheduler.
Attributes:

| Name | Type | Description |
|---|---|---|
| `T_max` | `int` | Maximum number of iterations (typically set to the maximum number of training epochs). |
| `eta_min` | `float` | Minimum learning rate at the end of the schedule. |
Source code in src/anypinn/core/config.py
T_max: int
instance-attribute
eta_min: float = 0.0
class-attribute
instance-attribute
__init__(*, T_max: int, eta_min: float = 0.0) -> None
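Cosine annealing interpolates the learning rate from its initial value down to `eta_min` over `T_max` steps. The standard schedule underlying these two fields can be sketched as:

```python
import math

def cosine_lr(base_lr: float, t: int, T_max: int, eta_min: float = 0.0) -> float:
    """Learning rate at step t under a cosine annealing schedule:
    starts at base_lr (t=0) and decays to eta_min (t=T_max)."""
    return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * t / T_max)) / 2
```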
EarlyStoppingConfig
dataclass
Configuration for Early Stopping callback.
Attributes:

| Name | Type | Description |
|---|---|---|
| `patience` | `int` | Number of epochs with no improvement before stopping. |
| `mode` | `Literal['min', 'max']` | Whether an improvement means the monitored metric decreasing (`'min'`) or increasing (`'max'`). |
Source code in src/anypinn/core/config.py
mode: Literal['min', 'max']
instance-attribute
patience: int
instance-attribute
__init__(*, patience: int, mode: Literal['min', 'max']) -> None
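The patience-based stopping rule these two fields configure can be sketched as follows (a hypothetical illustration, not the library's implementation):

```python
class EarlyStopper:
    """Illustrative patience-based stopping: track the best metric seen and
    stop after `patience` consecutive epochs without improvement."""

    def __init__(self, patience: int, mode: str = "min") -> None:
        self.patience = patience
        self.mode = mode
        self.best: float | None = None
        self.bad_epochs = 0

    def step(self, metric: float) -> bool:
        """Record one epoch's metric; return True when training should stop."""
        improved = (
            self.best is None
            or (self.mode == "min" and metric < self.best)
            or (self.mode == "max" and metric > self.best)
        )
        if improved:
            self.best = metric
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```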
GenerationConfig
dataclass
Bases: TrainingDataConfig
Configuration for generating synthetic training data.
Used in forward problems where the ground-truth ODE/PDE solution is computed from known parameters and optionally corrupted with noise.
Attributes:

| Name | Type | Description |
|---|---|---|
| `x` | `Tensor` | Coordinate tensor to evaluate the ODE/PDE at. |
| `noise_level` | `float` | Standard deviation of Gaussian noise added to the generated observations (0.0 for clean data). |
| `args_to_train` | `ArgsRegistry` | Arguments used by the data-generation ODE/PDE callable to produce the synthetic solution. |
Source code in src/anypinn/core/config.py
args_to_train: ArgsRegistry
instance-attribute
noise_level: float
instance-attribute
x: Tensor
instance-attribute
__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None, x: Tensor, noise_level: float, args_to_train: ArgsRegistry) -> None
IngestionConfig
dataclass
Bases: TrainingDataConfig
Configuration for loading training data from a CSV file.
Attributes:

| Name | Type | Description |
|---|---|---|
| `df_path` | `Path` | Path to the CSV file. |
| `x_transform` | `Callable[[Any], Any] \| None` | Optional transform applied to the x column values after loading (e.g. unit conversion). |
| `x_column` | `str \| None` | Name of the column to use as x coordinates (optional). |
| `y_columns` | `list[str]` | List of column names to use as y observations. |
Source code in src/anypinn/core/config.py
df_path: Path
instance-attribute
x_column: str | None = None
class-attribute
instance-attribute
x_transform: Callable[[Any], Any] | None = None
class-attribute
instance-attribute
y_columns: list[str]
instance-attribute
__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None, df_path: Path, x_transform: Callable[[Any], Any] | None = None, x_column: str | None = None, y_columns: list[str]) -> None
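The ingestion semantics (named x column, one or more y columns, optional post-load transform) can be sketched with the standard library; the function and its signature are illustrative, not part of anypinn:

```python
import csv
import io

def load_xy(text: str, x_column: str, y_columns: list[str],
            x_transform=None) -> tuple[list[float], list[list[float]]]:
    """Read x and y columns from CSV text, applying the optional
    x_transform to each x value after loading (e.g. unit conversion)."""
    reader = csv.DictReader(io.StringIO(text))
    xs, ys = [], []
    for row in reader:
        x = float(row[x_column])
        xs.append(x_transform(x) if x_transform is not None else x)
        ys.append([float(row[c]) for c in y_columns])
    return xs, ys
```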
LBFGSConfig
dataclass
Configuration for the L-BFGS optimizer.
Attributes:

| Name | Type | Description |
|---|---|---|
| `lr` | `float` | Learning rate (must be positive). |
| `max_iter` | `int` | Maximum number of iterations per optimization step. |
| `max_eval` | `int \| None` | Maximum number of function evaluations per step (defaults to `max_iter * 1.25`). |
| `history_size` | `int` | Number of past updates to store for the approximation of the inverse Hessian. |
| `line_search_fn` | `str \| None` | Line search function (`'strong_wolfe'` or `None`). |
Source code in src/anypinn/core/config.py
history_size: int = 100
class-attribute
instance-attribute
line_search_fn: str | None = 'strong_wolfe'
class-attribute
instance-attribute
lr: float = 1.0
class-attribute
instance-attribute
max_eval: int | None = None
class-attribute
instance-attribute
max_iter: int = 20
class-attribute
instance-attribute
__init__(*, lr: float = 1.0, max_iter: int = 20, max_eval: int | None = None, history_size: int = 100, line_search_fn: str | None = 'strong_wolfe') -> None
__post_init__() -> None
Source code in src/anypinn/core/config.py
MLPConfig
dataclass
Configuration for a Multi-Layer Perceptron (MLP).
Attributes:

| Name | Type | Description |
|---|---|---|
| `in_dim` | `int` | Dimension of the input layer. |
| `out_dim` | `int` | Dimension of the output layer. |
| `hidden_layers` | `list[int]` | List of dimensions for the hidden layers. |
| `activation` | `Activations` | Activation function to use between layers. |
| `output_activation` | `Activations \| None` | Optional activation function for the output layer. |
| `encode` | `Callable[[Tensor], Tensor] \| None` | Optional function to encode inputs before passing them to the MLP. |
Source code in src/anypinn/core/config.py
activation: Activations
instance-attribute
encode: Callable[[Tensor], Tensor] | None = None
class-attribute
instance-attribute
hidden_layers: list[int]
instance-attribute
in_dim: int
instance-attribute
out_dim: int
instance-attribute
output_activation: Activations | None = None
class-attribute
instance-attribute
__init__(*, in_dim: int, out_dim: int, hidden_layers: list[int], activation: Activations, output_activation: Activations | None = None, encode: Callable[[Tensor], Tensor] | None = None) -> None
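The layer shapes implied by `in_dim`, `hidden_layers`, and `out_dim` follow the usual MLP convention of chaining consecutive dimensions; a small sketch of that arithmetic (the actual model construction lives elsewhere in anypinn):

```python
def layer_dims(in_dim: int, hidden_layers: list[int], out_dim: int) -> list[tuple[int, int]]:
    """(in_features, out_features) of each linear layer implied by an MLPConfig:
    consecutive pairs of [in_dim, *hidden_layers, out_dim]."""
    dims = [in_dim, *hidden_layers, out_dim]
    return list(zip(dims[:-1], dims[1:]))
```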
PINNHyperparameters
dataclass
Aggregated hyperparameters for the PINN model.
Attributes:

| Name | Type | Description |
|---|---|---|
| `lr` | `float` | Base learning rate (used as a fallback when no optimizer configuration is provided). |
| `training_data` | `IngestionConfig \| GenerationConfig` | Data source configuration: either `IngestionConfig` or `GenerationConfig`. |
| `fields_config` | `MLPConfig` | MLP architecture for the neural field(s). |
| `params_config` | `MLPConfig \| ScalarConfig` | Configuration for learnable parameters (scalar or MLP-backed). |
| `max_epochs` | `int \| None` | Maximum number of training epochs. |
| `gradient_clip_val` | `float \| None` | Optional gradient clipping value. |
| `criterion` | `Criteria` | Loss function name (e.g. `'mse'`). |
| `optimizer` | `AdamConfig \| LBFGSConfig \| None` | Optimizer configuration. If `None`, a default optimizer built from the base `lr` is used. |
| `scheduler` | `ReduceLROnPlateauConfig \| CosineAnnealingConfig \| None` | Learning rate scheduler configuration. |
| `early_stopping` | `EarlyStoppingConfig \| None` | Optional early stopping configuration (patience-based). |
| `smma_stopping` | `SMMAStoppingConfig \| None` | Optional SMMA stopping configuration (improvement-based). |
Source code in src/anypinn/core/config.py
criterion: Criteria = 'mse'
class-attribute
instance-attribute
early_stopping: EarlyStoppingConfig | None = None
class-attribute
instance-attribute
fields_config: MLPConfig
instance-attribute
gradient_clip_val: float | None = None
class-attribute
instance-attribute
lr: float
instance-attribute
max_epochs: int | None = None
class-attribute
instance-attribute
optimizer: AdamConfig | LBFGSConfig | None = None
class-attribute
instance-attribute
params_config: MLPConfig | ScalarConfig
instance-attribute
scheduler: ReduceLROnPlateauConfig | CosineAnnealingConfig | None = None
class-attribute
instance-attribute
smma_stopping: SMMAStoppingConfig | None = None
class-attribute
instance-attribute
training_data: IngestionConfig | GenerationConfig
instance-attribute
__init__(*, lr: float, training_data: IngestionConfig | GenerationConfig, fields_config: MLPConfig, params_config: MLPConfig | ScalarConfig, max_epochs: int | None = None, gradient_clip_val: float | None = None, criterion: Criteria = 'mse', optimizer: AdamConfig | LBFGSConfig | None = None, scheduler: ReduceLROnPlateauConfig | CosineAnnealingConfig | None = None, early_stopping: EarlyStoppingConfig | None = None, smma_stopping: SMMAStoppingConfig | None = None) -> None
ReduceLROnPlateauConfig
dataclass
Configuration for Learning Rate Scheduler (ReduceLROnPlateau).
Attributes:

| Name | Type | Description |
|---|---|---|
| `mode` | `Literal['min', 'max']` | Whether an improvement means the monitored metric decreasing (`'min'`) or increasing (`'max'`). |
| `factor` | `float` | Factor by which the learning rate is reduced (must be in (0, 1)). |
| `patience` | `int` | Number of epochs with no improvement before the LR is reduced. |
| `threshold` | `float` | Minimum change to qualify as an improvement. |
| `min_lr` | `float` | Lower bound on the learning rate. |
Source code in src/anypinn/core/config.py
factor: float
instance-attribute
min_lr: float
instance-attribute
mode: Literal['min', 'max']
instance-attribute
patience: int
instance-attribute
threshold: float
instance-attribute
__init__(*, mode: Literal['min', 'max'], factor: float, patience: int, threshold: float, min_lr: float) -> None
__post_init__() -> None
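The interaction of `factor` and `min_lr` on a plateau reduces to one line of arithmetic:

```python
def reduced_lr(lr: float, factor: float, min_lr: float) -> float:
    """New learning rate after a plateau: scaled by `factor`, floored at `min_lr`."""
    return max(lr * factor, min_lr)
```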
SMMAStoppingConfig
dataclass
Configuration for Smoothed Moving Average (SMMA) Stopping callback.
Training stops when the relative improvement of the SMMA over the
lookback window falls below threshold.
Attributes:

| Name | Type | Description |
|---|---|---|
| `window` | `int` | Number of epochs used to compute the smoothed moving average. |
| `threshold` | `float` | Minimum relative improvement required to continue training. |
| `lookback` | `int` | Number of SMMA values to compare for improvement detection. |
Source code in src/anypinn/core/config.py
lookback: int
instance-attribute
threshold: float
instance-attribute
window: int
instance-attribute
__init__(*, window: int, threshold: float, lookback: int) -> None
__post_init__() -> None
Source code in src/anypinn/core/config.py
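The stopping rule can be sketched as follows; the exact SMMA recurrence (Wilder-style smoothing here) and the comparison details are assumptions, not the library's source:

```python
def smma(values: list[float], window: int) -> list[float]:
    """Smoothed moving average, seeded with the first value:
    smma_t = (smma_{t-1} * (window - 1) + x_t) / window."""
    out = [values[0]]
    for v in values[1:]:
        out.append((out[-1] * (window - 1) + v) / window)
    return out

def should_stop(losses: list[float], window: int, threshold: float, lookback: int) -> bool:
    """Stop when the relative SMMA improvement over `lookback` steps
    falls below `threshold`."""
    s = smma(losses, window)
    if len(s) <= lookback:
        return False
    prev, cur = s[-1 - lookback], s[-1]
    rel_improvement = (prev - cur) / abs(prev)
    return rel_improvement < threshold
```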
ScalarConfig
dataclass
Configuration for a scalar parameter.
Attributes:

| Name | Type | Description |
|---|---|---|
| `init_value` | `float` | Initial value for the parameter. |
Source code in src/anypinn/core/config.py
init_value: float
instance-attribute
__init__(*, init_value: float) -> None
TrainingDataConfig
dataclass
Configuration for data loading and batching.
Attributes:

| Name | Type | Description |
|---|---|---|
| `batch_size` | `int` | Number of points per training batch. |
| `data_ratio` | `int \| float` | Ratio of data to collocation points per batch. |
| `collocations` | `int` | Total number of collocation points to generate. |
| `collocation_sampler` | `CollocationStrategies` | Sampling strategy for collocation points. |
| `collocation_seed` | `int \| None` | Optional seed for reproducible collocation sampling. |
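One way to read `data_ratio` (data : collocation points within each batch) is the split below; this interpretation is an assumption, not taken from the source:

```python
def batch_split(batch_size: int, data_ratio: float) -> tuple[int, int]:
    """Split a batch of `batch_size` points into (data, collocation) counts
    so that data / collocation approximates `data_ratio`."""
    n_data = round(batch_size * data_ratio / (1 + data_ratio))
    return n_data, batch_size - n_data
```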