anypinn.core

Core PINN building blocks.

This module provides the foundational abstractions for defining and solving physics-informed neural network problems.

Argument and Parameter

An Argument wraps a fixed value (float or callable) that an ODE/PDE function receives. A Parameter is a learnable Argument — it inherits from both nn.Module and Argument, so it participates in gradient computation while exposing the same call interface.

To promote a fixed constant to a learnable parameter, replace:

# Fixed: beta = 0.3 throughout training
args = {"beta": Argument(0.3)}
params = {}

with:

# Learnable: beta starts at 0.3, the optimizer adjusts it
args = {}
params = {"beta": Parameter(ScalarConfig(init_value=0.3))}

The ODE/PDE function signature stays the same either way:

def my_ode(x, y, args):
    beta = args["beta"](x)  # works for both Argument and Parameter
    ...

This works because ResidualsConstraint merges params into args before calling the ODE function, and Parameter is a subclass of Argument. For function-valued parameters (e.g. beta(t) that varies over the domain), use Parameter(MLPConfig(...)) instead of ScalarConfig.
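The merge step can be sketched in plain Python. The classes below are illustrative stand-ins that only mimic the call interface of Argument and Parameter; they are not the real anypinn types:

```python
# Minimal sketch of the args/params merge described above (stand-in classes,
# not anypinn's real Argument/Parameter).

class FixedArg:
    """Stand-in for Argument: wraps a constant, evaluated via __call__."""
    def __init__(self, value):
        self._value = value

    def __call__(self, x):
        return self._value

class LearnableArg(FixedArg):
    """Stand-in for Parameter: same call interface, nominally learnable."""
    pass

def my_ode(x, y, args):
    beta = args["beta"](x)    # agnostic to fixed vs. learnable
    gamma = args["gamma"](x)
    return -beta * y + gamma

args = {"gamma": FixedArg(0.05)}
params = {"beta": LearnableArg(0.3)}
merged = {**args, **params}   # params take precedence on key collisions

residual_input = my_ode(0.0, 2.0, merged)
```

Because the ODE function only ever calls `args["key"](x)`, swapping a FixedArg for a LearnableArg (or, in anypinn, an Argument for a Parameter) requires no change to its body.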

ArgsRegistry

ArgsRegistry (a dict[str, Argument]) is the unified interface that ODE/PDE callables receive. It maps string keys to Argument instances. Because Parameter extends Argument, the callable is agnostic to whether a value is fixed or being learned — it just calls args["key"](x) and gets a tensor back.

InferredContext

InferredContext is created automatically during data loading. It holds:

  • domain: an N-dimensional Domain inferred from the training coordinates (bounds and step sizes).
  • validation: resolved ground-truth functions for parameter comparison.

The context is injected into the Problem (and transitively into each Constraint) before training starts. Constraints can override inject_context() to capture domain-specific information — for example, ICConstraint reads domain.x0 to know where to enforce initial conditions.
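The injection pattern can be sketched with simplified stand-ins (the stub classes below are illustrative, not anypinn's InferredContext, Domain, or ICConstraint):

```python
# Hypothetical sketch of the context-injection pattern: a constraint
# captures domain information it needs before training starts.

class StubDomain:
    def __init__(self, bounds):
        self.bounds = bounds  # per-dimension (min, max) pairs

    @property
    def x0(self):
        return self.bounds[0][0]  # lower bound of the first dimension

class StubContext:
    def __init__(self, domain):
        self.domain = domain

class StubICConstraint:
    def inject_context(self, context):
        # Capture the coordinate at which initial conditions are enforced.
        self.t0 = context.domain.x0

ic = StubICConstraint()
ic.inject_context(StubContext(StubDomain([(0.0, 10.0)])))
```

After injection, the constraint can use the captured value (here `ic.t0`) when computing its loss, without the training loop having to pass domain details on every step.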

Collocation strategies

Collocation points are the unsupervised sample locations where the PDE/ODE residual is minimized. The choice of sampling strategy affects convergence:

  • uniform: deterministic Cartesian grid. Predictable, but scales poorly to high dimensions.
  • random: uniform random sampling. Simple and dimension-agnostic.
  • latin_hypercube: stratified random sampling with better space-filling coverage than pure random. Good default for most problems.
  • log_uniform_1d: samples densely near the lower domain bound. Useful for 1-D problems where early dynamics matter most (e.g. epidemic models).
  • adaptive: residual-weighted resampling that concentrates points where the current model has the largest residual. Requires a ResidualScorer and the AdaptiveCollocationCallback to refresh points during training.

Select a strategy via TrainingDataConfig(collocation_sampler="...").
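The space-filling property of Latin hypercube sampling can be illustrated with a 1-D sketch (this is a didactic stand-in, not anypinn's LatinHypercubeSampler):

```python
import random

def latin_hypercube_1d(n, lo, hi, seed=None):
    """Illustrative 1-D Latin hypercube sketch: split the domain into n
    equal-width strata, draw exactly one uniform point inside each, then
    shuffle so strata are visited in random order."""
    rng = random.Random(seed)
    width = (hi - lo) / n
    pts = [lo + (i + rng.random()) * width for i in range(n)]
    rng.shuffle(pts)
    return pts

pts = latin_hypercube_1d(10, 0.0, 10.0, seed=0)
```

Unlike pure random sampling, every stratum is guaranteed exactly one point, so no sub-interval of the domain is left uncovered; this is the better coverage the list above refers to.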

Activations: TypeAlias = Literal['tanh', 'relu', 'leaky_relu', 'sigmoid', 'selu', 'softplus', 'identity'] module-attribute

Supported activation functions.

ArgsRegistry: TypeAlias = dict[str, Argument] module-attribute

CollocationStrategies: TypeAlias = Literal['uniform', 'random', 'latin_hypercube', 'log_uniform_1d', 'adaptive'] module-attribute

Supported collocation sampling strategies.

Criteria: TypeAlias = Literal['mse', 'huber', 'l1'] module-attribute

Supported loss criteria.

DataBatch: TypeAlias = tuple[Tensor, Tensor] module-attribute

Type alias for data batch: (x, y).

FieldsRegistry: TypeAlias = dict[str, Field] module-attribute

LOSS_KEY = 'loss' module-attribute

Key used for logging the total loss.

ParamsRegistry: TypeAlias = dict[str, Parameter] module-attribute

Predictions: TypeAlias = tuple[DataBatch, dict[str, Tensor], dict[str, Tensor] | None] module-attribute

Type alias for model predictions: (input_batch, predictions_dictionary, true_values_dictionary), where predictions_dictionary maps each field or parameter name to its prediction and true_values_dictionary maps each name to its ground-truth value. If no validation source is configured, true_values_dictionary is None.

ResolvedValidation: TypeAlias = dict[str, Callable[[Tensor], Tensor]] module-attribute

Validation registry after ColumnRef entries have been resolved to callables.

TrainingBatch: TypeAlias = tuple[DataBatch, Tensor] module-attribute

Training batch tuple: ((x_data, y_data), x_coll).

ValidationRegistry: TypeAlias = dict[str, ValidationSource] module-attribute

Registry mapping parameter names to their validation sources.

Example

>>> validation: ValidationRegistry = {
...     "beta": lambda x: torch.sin(x),           # Pure function
...     "gamma": ColumnRef(column="gamma_true"),  # From data
...     "delta": None,                            # No validation
... }

ValidationSource: TypeAlias = Callable[[Tensor], Tensor] | ColumnRef | None module-attribute

A source for ground truth values. Can be:

  • A callable that takes x coordinates and returns true values.
  • A ColumnRef that references a column in loaded data.
  • None if no validation is needed for this parameter.
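The resolution of these three source kinds can be sketched in plain Python. StubColumnRef and resolve_source below are simplified stand-ins (the real resolve_validation operates on tensors and DataFrames, not lists):

```python
# Plain-Python sketch of resolving ValidationSource-style entries into
# callables (or None). Illustrative only.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class StubColumnRef:
    column: str
    transform: Optional[Callable] = None

def resolve_source(source, table):
    """Return a callable (or None) for a callable / column-ref / None source."""
    if source is None:
        return None                      # no validation for this parameter
    if isinstance(source, StubColumnRef):
        values = table[source.column]    # resolved lazily from loaded data
        if source.transform is not None:
            values = [source.transform(v) for v in values]
        return lambda i: values[i]
    return source                        # already a callable

table = {"gamma_true": [0.1, 0.2, 0.3]}
gamma = resolve_source(StubColumnRef("gamma_true"), table)
beta = resolve_source(lambda x: 2 * x, table)
```

After resolution, every non-None entry exposes the same callable interface, matching the ResolvedValidation alias above.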

__all__ = ['LOSS_KEY', 'Activations', 'AdamConfig', 'AdaptiveSampler', 'ArgsRegistry', 'Argument', 'CollocationSampler', 'CollocationStrategies', 'ColumnRef', 'Constraint', 'CosineAnnealingConfig', 'Criteria', 'DataBatch', 'DataCallback', 'Domain', 'EarlyStoppingConfig', 'Field', 'FieldsRegistry', 'FourierEncoding', 'GenerationConfig', 'InferredContext', 'IngestionConfig', 'LBFGSConfig', 'LatinHypercubeSampler', 'LogFn', 'LogUniform1DSampler', 'MLPConfig', 'PINNDataModule', 'PINNDataset', 'PINNHyperparameters', 'Parameter', 'ParamsRegistry', 'Predictions', 'Problem', 'RandomFourierFeatures', 'RandomSampler', 'ReduceLROnPlateauConfig', 'ResidualScorer', 'ResolvedValidation', 'SMMAStoppingConfig', 'ScalarConfig', 'TrainingBatch', 'TrainingDataConfig', 'UniformSampler', 'ValidationRegistry', 'ValidationSource', 'build_criterion', 'build_sampler', 'get_activation', 'resolve_validation'] module-attribute

AdamConfig dataclass

Configuration for the Adam optimizer.

Attributes:

Name Type Description
lr float

Learning rate (must be positive).

betas tuple[float, float]

Coefficients for computing running averages of gradient and its square. Both must be in (0, 1).

weight_decay float

L2 penalty coefficient (non-negative).

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class AdamConfig:
    """
    Configuration for the Adam optimizer.

    Attributes:
        lr: Learning rate (must be positive).
        betas: Coefficients for computing running averages of gradient
            and its square. Both must be in (0, 1).
        weight_decay: L2 penalty coefficient (non-negative).
    """

    lr: float = 1e-3
    betas: tuple[float, float] = (0.9, 0.999)
    weight_decay: float = 0.0

    def __post_init__(self) -> None:
        if self.lr <= 0:
            raise ValueError(f"lr must be positive, got {self.lr}.")
        if self.weight_decay < 0:
            raise ValueError(f"weight_decay must be non-negative, got {self.weight_decay}.")
        if not (0 < self.betas[0] < 1):
            raise ValueError(f"betas[0] must be in (0, 1), got {self.betas[0]}.")
        if not (0 < self.betas[1] < 1):
            raise ValueError(f"betas[1] must be in (0, 1), got {self.betas[1]}.")

betas: tuple[float, float] = (0.9, 0.999) class-attribute instance-attribute

lr: float = 0.001 class-attribute instance-attribute

weight_decay: float = 0.0 class-attribute instance-attribute

__init__(*, lr: float = 0.001, betas: tuple[float, float] = (0.9, 0.999), weight_decay: float = 0.0) -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if self.lr <= 0:
        raise ValueError(f"lr must be positive, got {self.lr}.")
    if self.weight_decay < 0:
        raise ValueError(f"weight_decay must be non-negative, got {self.weight_decay}.")
    if not (0 < self.betas[0] < 1):
        raise ValueError(f"betas[0] must be in (0, 1), got {self.betas[0]}.")
    if not (0 < self.betas[1] < 1):
        raise ValueError(f"betas[1] must be in (0, 1), got {self.betas[1]}.")

AdaptiveSampler

Residual-weighted adaptive collocation sampler.

Draws an oversample of candidate points, scores them using a ResidualScorer, and retains the top-scoring subset. A configurable exploration_ratio ensures a fraction of purely random points to prevent mode collapse.

Parameters:

Name Type Description Default
scorer ResidualScorer

Callable returning per-point residual scores (n,).

required
oversample_factor int

Multiplier on n for candidate generation.

4
exploration_ratio float

Fraction of the budget reserved for random points.

0.2
seed int | None

Optional seed for reproducible sampling.

None
Source code in src/anypinn/core/samplers.py
class AdaptiveSampler:
    """Residual-weighted adaptive collocation sampler.

    Draws an oversample of candidate points, scores them using a
    ``ResidualScorer``, and retains the top-scoring subset. A configurable
    ``exploration_ratio`` ensures a fraction of purely random points to prevent
    mode collapse.

    Args:
        scorer: Callable returning per-point residual scores ``(n,)``.
        oversample_factor: Multiplier on ``n`` for candidate generation.
        exploration_ratio: Fraction of the budget reserved for random points.
        seed: Optional seed for reproducible sampling.
    """

    def __init__(
        self,
        scorer: ResidualScorer,
        oversample_factor: int = 4,
        exploration_ratio: float = 0.2,
        seed: int | None = None,
    ) -> None:
        if oversample_factor < 1:
            raise ValueError(f"oversample_factor must be >= 1, got {oversample_factor}.")
        if not (0.0 <= exploration_ratio <= 1.0):
            raise ValueError(f"exploration_ratio must be in [0, 1], got {exploration_ratio}.")
        self._scorer = scorer
        self._oversample = oversample_factor
        self._explore = exploration_ratio
        self._random = RandomSampler(seed=seed)

    def sample(self, n: int, domain: Domain) -> Tensor:
        """Return ``n`` residual-weighted points within ``domain``."""
        n_explore = max(1, int(n * self._explore))
        n_exploit = n - n_explore

        explore_pts = self._random.sample(n_explore, domain)

        if n_exploit <= 0:
            return explore_pts

        n_candidates = n_exploit * self._oversample
        candidates = self._random.sample(n_candidates, domain)

        with torch.no_grad():
            scores = self._scorer.residual_score(candidates)

        _, top_idx = scores.topk(min(n_exploit, len(scores)))
        exploit_pts = candidates[top_idx]

        return torch.cat([explore_pts, exploit_pts], dim=0)

__init__(scorer: ResidualScorer, oversample_factor: int = 4, exploration_ratio: float = 0.2, seed: int | None = None) -> None

Source code in src/anypinn/core/samplers.py
def __init__(
    self,
    scorer: ResidualScorer,
    oversample_factor: int = 4,
    exploration_ratio: float = 0.2,
    seed: int | None = None,
) -> None:
    if oversample_factor < 1:
        raise ValueError(f"oversample_factor must be >= 1, got {oversample_factor}.")
    if not (0.0 <= exploration_ratio <= 1.0):
        raise ValueError(f"exploration_ratio must be in [0, 1], got {exploration_ratio}.")
    self._scorer = scorer
    self._oversample = oversample_factor
    self._explore = exploration_ratio
    self._random = RandomSampler(seed=seed)

sample(n: int, domain: Domain) -> Tensor

Return n residual-weighted points within domain.

Source code in src/anypinn/core/samplers.py
def sample(self, n: int, domain: Domain) -> Tensor:
    """Return ``n`` residual-weighted points within ``domain``."""
    n_explore = max(1, int(n * self._explore))
    n_exploit = n - n_explore

    explore_pts = self._random.sample(n_explore, domain)

    if n_exploit <= 0:
        return explore_pts

    n_candidates = n_exploit * self._oversample
    candidates = self._random.sample(n_candidates, domain)

    with torch.no_grad():
        scores = self._scorer.residual_score(candidates)

    _, top_idx = scores.topk(min(n_exploit, len(scores)))
    exploit_pts = candidates[top_idx]

    return torch.cat([explore_pts, exploit_pts], dim=0)

Argument

A fixed (non-learnable) argument passed to an ODE/PDE function.

Wraps a float constant or a callable and provides a uniform __call__ interface. See also Parameter for the learnable variant.

Parameters:

Name Type Description Default
value float | Callable[[Tensor], Tensor]

The value (float) or function (callable).

required
Example

>>> beta = Argument(0.3)
>>> beta(torch.tensor([1.0]))
tensor(0.3000)
>>> beta_fn = Argument(lambda t: 0.3 * torch.exp(-0.1 * t))
>>> beta_fn(torch.tensor([0.0]))
tensor([0.3000])

Source code in src/anypinn/core/nn.py
class Argument:
    """
    A fixed (non-learnable) argument passed to an ODE/PDE function.

    Wraps a float constant or a callable and provides a uniform
    ``__call__`` interface. See also ``Parameter`` for the learnable
    variant.

    Args:
        value: The value (float) or function (callable).

    Example:
        >>> beta = Argument(0.3)
        >>> beta(torch.tensor([1.0]))
        tensor(0.3000)
        >>> beta_fn = Argument(lambda t: 0.3 * torch.exp(-0.1 * t))
        >>> beta_fn(torch.tensor([0.0]))
        tensor([0.3000])
    """

    def __init__(self, value: float | Callable[[Tensor], Tensor]):
        self._value: float | Callable[[Tensor], Tensor] = value
        self._callable = callable(value) and not isinstance(value, (int, float))
        self._tensor_cache: dict[torch.device, Tensor] = {}

    def __call__(self, x: Tensor) -> Tensor:
        """
        Evaluate the argument.

        Args:
            x: Input tensor (context).

        Returns:
            The value of the argument, broadcasted if necessary.
        """
        if self._callable:
            fn = cast(Callable[[Tensor], Tensor], self._value)
            return fn(x)
        device = x.device
        if device not in self._tensor_cache:
            self._tensor_cache[device] = torch.tensor(self._value, device=device)
        return self._tensor_cache[device]

    @override
    def __repr__(self) -> str:
        return f"Argument(value={self._value})"

__call__(x: Tensor) -> Tensor

Evaluate the argument.

Parameters:

Name Type Description Default
x Tensor

Input tensor (context).

required

Returns:

Type Description
Tensor

The value of the argument, broadcasted if necessary.

Source code in src/anypinn/core/nn.py
def __call__(self, x: Tensor) -> Tensor:
    """
    Evaluate the argument.

    Args:
        x: Input tensor (context).

    Returns:
        The value of the argument, broadcasted if necessary.
    """
    if self._callable:
        fn = cast(Callable[[Tensor], Tensor], self._value)
        return fn(x)
    device = x.device
    if device not in self._tensor_cache:
        self._tensor_cache[device] = torch.tensor(self._value, device=device)
    return self._tensor_cache[device]

__init__(value: float | Callable[[Tensor], Tensor])

Source code in src/anypinn/core/nn.py
def __init__(self, value: float | Callable[[Tensor], Tensor]):
    self._value: float | Callable[[Tensor], Tensor] = value
    self._callable = callable(value) and not isinstance(value, (int, float))
    self._tensor_cache: dict[torch.device, Tensor] = {}

__repr__() -> str

Source code in src/anypinn/core/nn.py
@override
def __repr__(self) -> str:
    return f"Argument(value={self._value})"

CollocationSampler

Bases: Protocol

Protocol for collocation point samplers.

Implementations must return a tensor of shape (n, domain.ndim) with all points inside the domain bounds.

Source code in src/anypinn/core/samplers.py
class CollocationSampler(Protocol):
    """Protocol for collocation point samplers.

    Implementations must return a tensor of shape ``(n, domain.ndim)`` with all
    points inside the domain bounds.
    """

    def sample(self, n: int, domain: Domain) -> Tensor:
        """Return ``n`` collocation points within ``domain``."""
        ...

sample(n: int, domain: Domain) -> Tensor

Return n collocation points within domain.

Source code in src/anypinn/core/samplers.py
def sample(self, n: int, domain: Domain) -> Tensor:
    """Return ``n`` collocation points within ``domain``."""
    ...

ColumnRef dataclass

Reference to a column in loaded data for ground truth comparison.

This allows practitioners to specify validation data by column name without writing custom functions. The column is resolved lazily when data is loaded.

Attributes:

Name Type Description
column str

Name of the column in the loaded DataFrame.

transform Callable[[Tensor], Tensor] | None

Optional transformation to apply to the column values.

Example

>>> validation = {
...     "beta": ColumnRef(column="Rt", transform=lambda rt: rt * delta),
... }

Source code in src/anypinn/core/validation.py
@dataclass
class ColumnRef:
    """
    Reference to a column in loaded data for ground truth comparison.

    This allows practitioners to specify validation data by column name
    without writing custom functions. The column is resolved lazily when
    data is loaded.

    Attributes:
        column: Name of the column in the loaded DataFrame.
        transform: Optional transformation to apply to the column values.

    Example:
        >>> validation = {
        ...     "beta": ColumnRef(column="Rt", transform=lambda rt: rt * delta),
        ... }
    """

    column: str
    transform: Callable[[Tensor], Tensor] | None = None

column: str instance-attribute

transform: Callable[[Tensor], Tensor] | None = None class-attribute instance-attribute

__init__(column: str, transform: Callable[[Tensor], Tensor] | None = None) -> None

Constraint

Bases: ABC

Abstract base class for a constraint (loss term) in the PINN.

Subclass this and implement loss() to define custom physics or data-fitting terms. The Problem sums all constraint losses during training.

Example

>>> class EnergyConstraint(Constraint):
...     def loss(self, batch, criterion, log=None):
...         (x_data, y_data), x_coll = batch
...         energy = compute_energy(x_coll)
...         target = torch.zeros_like(energy)
...         loss = criterion(energy, target)
...         if log is not None:
...             log("loss/energy", loss)
...         return loss

Source code in src/anypinn/core/problem.py
class Constraint(ABC):
    """
    Abstract base class for a constraint (loss term) in the PINN.

    Subclass this and implement ``loss()`` to define custom physics or
    data-fitting terms. The ``Problem`` sums all constraint losses during
    training.

    Example:
        >>> class EnergyConstraint(Constraint):
        ...     def loss(self, batch, criterion, log=None):
        ...         (x_data, y_data), x_coll = batch
        ...         energy = compute_energy(x_coll)
        ...         target = torch.zeros_like(energy)
        ...         loss = criterion(energy, target)
        ...         if log is not None:
        ...             log("loss/energy", loss)
        ...         return loss
    """

    def inject_context(self, context: InferredContext) -> None:
        """
        Inject the context into the constraint. This can be used by the constraint to access the
        data used to compute the loss.

        Args:
            context: The context to inject.
        """
        return None

    @abstractmethod
    def loss(
        self,
        batch: TrainingBatch,
        criterion: nn.Module,
        log: LogFn | None = None,
    ) -> Tensor:
        """
        Calculate the loss for this constraint.

        Args:
            batch: The current batch of data/collocation points.
            criterion: The loss function (e.g. MSE).
            log: Optional logging function.

        Returns:
            The calculated loss tensor.
        """

inject_context(context: InferredContext) -> None

Inject the context into the constraint. This can be used by the constraint to access the data used to compute the loss.

Parameters:

Name Type Description Default
context InferredContext

The context to inject.

required
Source code in src/anypinn/core/problem.py
def inject_context(self, context: InferredContext) -> None:
    """
    Inject the context into the constraint. This can be used by the constraint to access the
    data used to compute the loss.

    Args:
        context: The context to inject.
    """
    return None

loss(batch: TrainingBatch, criterion: nn.Module, log: LogFn | None = None) -> Tensor abstractmethod

Calculate the loss for this constraint.

Parameters:

Name Type Description Default
batch TrainingBatch

The current batch of data/collocation points.

required
criterion Module

The loss function (e.g. MSE).

required
log LogFn | None

Optional logging function.

None

Returns:

Type Description
Tensor

The calculated loss tensor.

Source code in src/anypinn/core/problem.py
@abstractmethod
def loss(
    self,
    batch: TrainingBatch,
    criterion: nn.Module,
    log: LogFn | None = None,
) -> Tensor:
    """
    Calculate the loss for this constraint.

    Args:
        batch: The current batch of data/collocation points.
        criterion: The loss function (e.g. MSE).
        log: Optional logging function.

    Returns:
        The calculated loss tensor.
    """

CosineAnnealingConfig dataclass

Configuration for Cosine Annealing LR Scheduler.

Attributes:

Name Type Description
T_max int

Maximum number of iterations (typically set to max_epochs).

eta_min float

Minimum learning rate at the end of the schedule.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class CosineAnnealingConfig:
    """
    Configuration for Cosine Annealing LR Scheduler.

    Attributes:
        T_max: Maximum number of iterations (typically set to
            ``max_epochs``).
        eta_min: Minimum learning rate at the end of the schedule.
    """

    T_max: int
    eta_min: float = 0.0

    def __post_init__(self) -> None:
        if self.T_max <= 0:
            raise ValueError(f"T_max must be positive, got {self.T_max}.")

T_max: int instance-attribute

eta_min: float = 0.0 class-attribute instance-attribute

__init__(*, T_max: int, eta_min: float = 0.0) -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if self.T_max <= 0:
        raise ValueError(f"T_max must be positive, got {self.T_max}.")

DataCallback

Base class for callbacks that transform data during setup.

Subclass this to apply custom preprocessing (e.g. scaling, normalization) to training data and collocation points before the dataset is constructed.

Source code in src/anypinn/core/dataset.py
class DataCallback:
    """Base class for callbacks that transform data during setup.

    Subclass this to apply custom preprocessing (e.g. scaling,
    normalization) to training data and collocation points before the
    dataset is constructed.
    """

    def transform_data(self, data: DataBatch, coll: Tensor) -> tuple[DataBatch, Tensor]:
        """Transform training data and collocation points.

        Called during ``PINNDataModule.setup()`` after data is loaded
        but before the ``PINNDataset`` is created. Multiple callbacks
        are applied in registration order.

        Args:
            data: Tuple of (x, y) training tensors.
            coll: Collocation point coordinates.

        Returns:
            Transformed (data, coll) tuple.
        """
        return data, coll

    def on_after_setup(self, dm: "PINNDataModule") -> None:
        """Hook called after ``PINNDataModule.setup()`` completes.

        Use this to perform post-processing that depends on the fully
        constructed data module (e.g. adjusting validation functions
        to account for earlier scaling transforms).

        Args:
            dm: The fully initialized data module.
        """
        return None

on_after_setup(dm: PINNDataModule) -> None

Hook called after PINNDataModule.setup() completes.

Use this to perform post-processing that depends on the fully constructed data module (e.g. adjusting validation functions to account for earlier scaling transforms).

Parameters:

Name Type Description Default
dm PINNDataModule

The fully initialized data module.

required
Source code in src/anypinn/core/dataset.py
def on_after_setup(self, dm: "PINNDataModule") -> None:
    """Hook called after ``PINNDataModule.setup()`` completes.

    Use this to perform post-processing that depends on the fully
    constructed data module (e.g. adjusting validation functions
    to account for earlier scaling transforms).

    Args:
        dm: The fully initialized data module.
    """
    return None

transform_data(data: DataBatch, coll: Tensor) -> tuple[DataBatch, Tensor]

Transform training data and collocation points.

Called during PINNDataModule.setup() after data is loaded but before the PINNDataset is created. Multiple callbacks are applied in registration order.

Parameters:

Name Type Description Default
data DataBatch

Tuple of (x, y) training tensors.

required
coll Tensor

Collocation point coordinates.

required

Returns:

Type Description
tuple[DataBatch, Tensor]

Transformed (data, coll) tuple.

Source code in src/anypinn/core/dataset.py
def transform_data(self, data: DataBatch, coll: Tensor) -> tuple[DataBatch, Tensor]:
    """Transform training data and collocation points.

    Called during ``PINNDataModule.setup()`` after data is loaded
    but before the ``PINNDataset`` is created. Multiple callbacks
    are applied in registration order.

    Args:
        data: Tuple of (x, y) training tensors.
        coll: Collocation point coordinates.

    Returns:
        Transformed (data, coll) tuple.
    """
    return data, coll

Domain dataclass

N-dimensional rectangular domain.

Attributes:

Name Type Description
bounds list[tuple[float, float]]

Per-dimension (min, max) pairs. bounds[i] covers dimension i.

dx list[float] | None

Per-dimension step size (None when not applicable).

Source code in src/anypinn/core/nn.py
@dataclass
class Domain:
    """
    N-dimensional rectangular domain.

    Attributes:
        bounds: Per-dimension (min, max) pairs. ``bounds[i]`` covers dimension i.
        dx: Per-dimension step size (``None`` when not applicable).
    """

    bounds: list[tuple[float, float]]
    dx: list[float] | None = None

    @property
    def ndim(self) -> int:
        """Number of spatial dimensions."""
        return len(self.bounds)

    @property
    def x0(self) -> float:
        """Lower bound of the first dimension (convenience for 1-D / time-axis access)."""
        return self.bounds[0][0]

    @property
    def x1(self) -> float:
        """Upper bound of the first dimension."""
        return self.bounds[0][1]

    @classmethod
    def from_x(cls, x: Tensor) -> Domain:
        """
        Infer domain bounds and step sizes from a coordinate tensor of shape (N, d).

        Args:
            x: Coordinate tensor of shape ``(N, d)``.

        Returns:
            Domain with bounds and dx inferred from the data.

        Example:
            >>> coords = torch.linspace(0, 10, 100).unsqueeze(1)
            >>> domain = Domain.from_x(coords)
            >>> domain.x0, domain.x1
            (0.0, 10.0)
        """
        if x.ndim != 2:
            raise ValueError(f"Expected 2-D coordinate tensor (N, d), got shape {tuple(x.shape)}.")
        if x.shape[0] < 2:
            raise ValueError(
                f"At least two points are required to infer the domain, got {x.shape[0]}."
            )

        d = x.shape[1]
        bounds = [(x[:, i].min().item(), x[:, i].max().item()) for i in range(d)]
        dx = [(x[1, i] - x[0, i]).item() for i in range(d)]
        return cls(bounds=bounds, dx=dx)

    @override
    def __repr__(self) -> str:
        return f"Domain(ndim={self.ndim}, bounds={self.bounds}, dx={self.dx})"

bounds: list[tuple[float, float]] instance-attribute

dx: list[float] | None = None class-attribute instance-attribute

ndim: int property

Number of spatial dimensions.

x0: float property

Lower bound of the first dimension (convenience for 1-D / time-axis access).

x1: float property

Upper bound of the first dimension.

__init__(bounds: list[tuple[float, float]], dx: list[float] | None = None) -> None

__repr__() -> str

Source code in src/anypinn/core/nn.py
@override
def __repr__(self) -> str:
    return f"Domain(ndim={self.ndim}, bounds={self.bounds}, dx={self.dx})"

from_x(x: Tensor) -> Domain classmethod

Infer domain bounds and step sizes from a coordinate tensor of shape (N, d).

Parameters:

Name Type Description Default
x Tensor

Coordinate tensor of shape (N, d).

required

Returns:

Type Description
Domain

Domain with bounds and dx inferred from the data.

Example

>>> coords = torch.linspace(0, 10, 100).unsqueeze(1)
>>> domain = Domain.from_x(coords)
>>> domain.x0, domain.x1
(0.0, 10.0)

Source code in src/anypinn/core/nn.py
@classmethod
def from_x(cls, x: Tensor) -> Domain:
    """
    Infer domain bounds and step sizes from a coordinate tensor of shape (N, d).

    Args:
        x: Coordinate tensor of shape ``(N, d)``.

    Returns:
        Domain with bounds and dx inferred from the data.

    Example:
        >>> coords = torch.linspace(0, 10, 100).unsqueeze(1)
        >>> domain = Domain.from_x(coords)
        >>> domain.x0, domain.x1
        (0.0, 10.0)
    """
    if x.ndim != 2:
        raise ValueError(f"Expected 2-D coordinate tensor (N, d), got shape {tuple(x.shape)}.")
    if x.shape[0] < 2:
        raise ValueError(
            f"At least two points are required to infer the domain, got {x.shape[0]}."
        )

    d = x.shape[1]
    bounds = [(x[:, i].min().item(), x[:, i].max().item()) for i in range(d)]
    dx = [(x[1, i] - x[0, i]).item() for i in range(d)]
    return cls(bounds=bounds, dx=dx)

EarlyStoppingConfig dataclass

Configuration for Early Stopping callback.

Attributes:

Name Type Description
patience int

Number of epochs with no improvement before stopping.

mode Literal['min', 'max']

"min" to stop when the metric stops decreasing, "max" when it stops increasing.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class EarlyStoppingConfig:
    """
    Configuration for Early Stopping callback.

    Attributes:
        patience: Number of epochs with no improvement before stopping.
        mode: ``"min"`` to stop when the metric stops decreasing,
            ``"max"`` when it stops increasing.
    """

    patience: int
    mode: Literal["min", "max"]

    def __post_init__(self) -> None:
        if self.patience <= 0:
            raise ValueError(f"patience must be positive, got {self.patience}.")

mode: Literal['min', 'max'] instance-attribute

patience: int instance-attribute

__init__(*, patience: int, mode: Literal['min', 'max']) -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if self.patience <= 0:
        raise ValueError(f"patience must be positive, got {self.patience}.")

Field

Bases: Module

A neural field mapping coordinates to a vector of state variables.

For an ODE this maps t -> [S, I, R]; for a PDE it maps (x, t) -> u(x, t).

Parameters:

Name Type Description Default
config MLPConfig

Configuration for the MLP backing this field.

required
Example

>>> field = Field(MLPConfig(
...     in_dim=1, out_dim=3,
...     hidden_layers=[32, 32],
...     activation="tanh",
... ))
>>> t = torch.rand(10, 1)
>>> field(t).shape
torch.Size([10, 3])

Source code in src/anypinn/core/nn.py
class Field(nn.Module):
    """
    A neural field mapping coordinates to a vector of state variables.

    For an ODE this maps ``t -> [S, I, R]``; for a PDE it maps
    ``(x, t) -> u(x, t)``.

    Args:
        config: Configuration for the MLP backing this field.

    Example:
        >>> field = Field(MLPConfig(
        ...     in_dim=1, out_dim=3,
        ...     hidden_layers=[32, 32],
        ...     activation="tanh",
        ... ))
        >>> t = torch.rand(10, 1)
        >>> field(t).shape
        torch.Size([10, 3])
    """

    def __init__(
        self,
        config: MLPConfig,
    ):
        super().__init__()
        encode = config.encode
        if isinstance(encode, nn.Module):
            # registers → participates in .to(), .state_dict()
            self.encoder: nn.Module | None = encode
        else:
            self.encoder = None
        self._encode_fn = encode  # callable reference (module or plain fn)
        dims = [config.in_dim] + config.hidden_layers + [config.out_dim]
        act = get_activation(config.activation)

        layers: list[nn.Module] = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                layers.append(act)

        if config.output_activation is not None:
            out_act = get_activation(config.output_activation)
            layers.append(out_act)

        self.net = nn.Sequential(*layers)
        self.apply(self._init)

    @staticmethod
    def _init(m: nn.Module) -> None:
        if isinstance(m, nn.Linear):
            nn.init.xavier_normal_(m.weight)
            nn.init.zeros_(m.bias)

    @override
    def forward(self, x: Tensor) -> Tensor:
        """
        Forward pass of the field.

        Args:
            x: Input coordinates (e.g. time, space).

        Returns:
            The values of the field at input coordinates.
        """
        if self._encode_fn is not None:
            x = self._encode_fn(x)
        return cast(Tensor, self.net(x))

encoder: nn.Module | None = encode instance-attribute

net = nn.Sequential(*layers) instance-attribute

__init__(config: MLPConfig)

Source code in src/anypinn/core/nn.py
def __init__(
    self,
    config: MLPConfig,
):
    super().__init__()
    encode = config.encode
    if isinstance(encode, nn.Module):
        # registers → participates in .to(), .state_dict()
        self.encoder: nn.Module | None = encode
    else:
        self.encoder = None
    self._encode_fn = encode  # callable reference (module or plain fn)
    dims = [config.in_dim] + config.hidden_layers + [config.out_dim]
    act = get_activation(config.activation)

    layers: list[nn.Module] = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:
            layers.append(act)

    if config.output_activation is not None:
        out_act = get_activation(config.output_activation)
        layers.append(out_act)

    self.net = nn.Sequential(*layers)
    self.apply(self._init)

forward(x: Tensor) -> Tensor

Forward pass of the field.

Parameters:

Name Type Description Default
x Tensor

Input coordinates (e.g. time, space).

required

Returns:

Type Description
Tensor

The values of the field at input coordinates.

Source code in src/anypinn/core/nn.py
@override
def forward(self, x: Tensor) -> Tensor:
    """
    Forward pass of the field.

    Args:
        x: Input coordinates (e.g. time, space).

    Returns:
        The values of the field at input coordinates.
    """
    if self._encode_fn is not None:
        x = self._encode_fn(x)
    return cast(Tensor, self.net(x))

FourierEncoding

Bases: Module

Sinusoidal positional encoding for periodic or high-frequency signals.

For input \(\mathbf{x} \in \mathbb{R}^{n \times d}\) and num_frequencies \(K\), the encoding is:

\[ \gamma(\mathbf{x}) = [\mathbf{x},\, \sin(\mathbf{x}),\, \cos(\mathbf{x}),\, \sin(2\mathbf{x}),\, \cos(2\mathbf{x}),\, \ldots,\, \sin(K\mathbf{x}),\, \cos(K\mathbf{x})] \]

producing shape \((n,\, d\,(1 + 2K))\) when include_input=True, or \((n,\, 2dK)\) when include_input=False.

Parameters:

Name Type Description Default
num_frequencies int

Number of frequency bands \(K \geq 1\).

6
include_input bool

Prepend original coordinates to the encoded output.

True
Source code in src/anypinn/lib/encodings.py
class FourierEncoding(nn.Module):
    """Sinusoidal positional encoding for periodic or high-frequency signals.

    For input $\\mathbf{x} \\in \\mathbb{R}^{n \\times d}$ and
    `num_frequencies` $K$, the encoding is:

    $$
    \\gamma(\\mathbf{x}) = [\\mathbf{x},\\,
        \\sin(\\mathbf{x}),\\, \\cos(\\mathbf{x}),\\,
        \\sin(2\\mathbf{x}),\\, \\cos(2\\mathbf{x}),\\,
        \\ldots,\\,
        \\sin(K\\mathbf{x}),\\, \\cos(K\\mathbf{x})]
    $$

    producing shape $(n,\\, d\\,(1 + 2K))$ when `include_input=True`,
    or $(n,\\, 2dK)$ when `include_input=False`.

    Args:
        num_frequencies: Number of frequency bands $K \\geq 1$.
        include_input:   Prepend original coordinates to the encoded output.
    """

    def __init__(self, num_frequencies: int = 6, include_input: bool = True) -> None:
        if num_frequencies < 1:
            raise ValueError(f"num_frequencies must be >= 1, got {num_frequencies}.")
        super().__init__()
        self.num_frequencies = num_frequencies
        self.include_input = include_input

    def out_dim(self, in_dim: int) -> int:
        """Compute output dimension given input dimension."""
        factor = 1 + 2 * self.num_frequencies if self.include_input else 2 * self.num_frequencies
        return in_dim * factor

    def forward(self, x: Tensor) -> Tensor:
        """Encode input with sin/cos at each frequency."""
        parts = [x] if self.include_input else []
        for k in range(1, self.num_frequencies + 1):
            parts.append(torch.sin(k * x))
            parts.append(torch.cos(k * x))
        return torch.cat(parts, dim=-1)

include_input = include_input instance-attribute

num_frequencies = num_frequencies instance-attribute

__init__(num_frequencies: int = 6, include_input: bool = True) -> None

Source code in src/anypinn/lib/encodings.py
def __init__(self, num_frequencies: int = 6, include_input: bool = True) -> None:
    if num_frequencies < 1:
        raise ValueError(f"num_frequencies must be >= 1, got {num_frequencies}.")
    super().__init__()
    self.num_frequencies = num_frequencies
    self.include_input = include_input

forward(x: Tensor) -> Tensor

Encode input with sin/cos at each frequency.

Source code in src/anypinn/lib/encodings.py
def forward(self, x: Tensor) -> Tensor:
    """Encode input with sin/cos at each frequency."""
    parts = [x] if self.include_input else []
    for k in range(1, self.num_frequencies + 1):
        parts.append(torch.sin(k * x))
        parts.append(torch.cos(k * x))
    return torch.cat(parts, dim=-1)

out_dim(in_dim: int) -> int

Compute output dimension given input dimension.

Source code in src/anypinn/lib/encodings.py
def out_dim(self, in_dim: int) -> int:
    """Compute output dimension given input dimension."""
    factor = 1 + 2 * self.num_frequencies if self.include_input else 2 * self.num_frequencies
    return in_dim * factor

GenerationConfig dataclass

Bases: TrainingDataConfig

Configuration for generating synthetic training data.

Used in forward problems where the ground-truth ODE/PDE solution is computed from known parameters and optionally corrupted with noise.

Attributes:

Name Type Description
x Tensor

Coordinate tensor to evaluate the ODE/PDE at.

noise_level float

Standard deviation of Gaussian noise added to the generated observations (0.0 for clean data).

args_to_train ArgsRegistry

Arguments used by the data-generation ODE/PDE callable to produce the synthetic solution.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class GenerationConfig(TrainingDataConfig):
    """
    Configuration for generating synthetic training data.

    Used in forward problems where the ground-truth ODE/PDE solution is
    computed from known parameters and optionally corrupted with noise.

    Attributes:
        x: Coordinate tensor to evaluate the ODE/PDE at.
        noise_level: Standard deviation of Gaussian noise added to the
            generated observations (0.0 for clean data).
        args_to_train: Arguments used by the data-generation ODE/PDE
            callable to produce the synthetic solution.
    """

    x: Tensor
    noise_level: float
    args_to_train: ArgsRegistry

args_to_train: ArgsRegistry instance-attribute

noise_level: float instance-attribute

x: Tensor instance-attribute

__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None, x: Tensor, noise_level: float, args_to_train: ArgsRegistry) -> None
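A sketch of a forward-problem configuration; the `Argument` import path below is an assumption (the class is described in the module overview, but its module is not shown here):

```python
import torch
from anypinn.core.config import GenerationConfig
from anypinn.core import Argument  # assumed import path for Argument

config = GenerationConfig(
    batch_size=128,
    data_ratio=0.5,
    collocations=2_000,
    x=torch.linspace(0.0, 100.0, 200).unsqueeze(-1),  # (N, 1) coordinates
    noise_level=0.05,  # sigma of added Gaussian noise; 0.0 for clean data
    args_to_train={"beta": Argument(0.3), "gamma": Argument(0.1)},
)
```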

InferredContext dataclass

Runtime context inferred from training data.

This holds the data that is either explicitly provided in props or inferred from training data.

Source code in src/anypinn/core/context.py
@dataclass
class InferredContext:
    """
    Runtime context inferred from training data.

    This holds the data that is either explicitly provided in props or inferred from training data.
    """

    def __init__(
        self,
        x: Tensor,
        y: Tensor,
        validation: ResolvedValidation,
    ):
        """
        Infer context from either generated or loaded data.

        Args:
            x: x coordinates.
            y: observations.
            validation: Resolved validation dictionary.
        """

        self.domain = Domain.from_x(x)
        self.validation = validation

domain = Domain.from_x(x) instance-attribute

validation = validation instance-attribute

__init__(x: Tensor, y: Tensor, validation: ResolvedValidation)

Infer context from either generated or loaded data.

Parameters:

Name Type Description Default
x Tensor

x coordinates.

required
y Tensor

observations.

required
validation ResolvedValidation

Resolved validation dictionary.

required
Source code in src/anypinn/core/context.py
def __init__(
    self,
    x: Tensor,
    y: Tensor,
    validation: ResolvedValidation,
):
    """
    Infer context from either generated or loaded data.

    Args:
        x: x coordinates.
        y: observations.
        validation: Resolved validation dictionary.
    """

    self.domain = Domain.from_x(x)
    self.validation = validation

IngestionConfig dataclass

Bases: TrainingDataConfig

Configuration for loading training data from a CSV file.

Attributes:

Name Type Description
df_path Path

Path to the CSV file.

x_transform Callable[[Any], Any] | None

Optional transform applied to the x column values after loading (e.g. unit conversion).

x_column str | None

Name of the column to use as x coordinates. If None, rows are assumed to be evenly spaced and an integer index is used.

y_columns list[str]

List of column names to use as y observations.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class IngestionConfig(TrainingDataConfig):
    """
    Configuration for loading training data from a CSV file.

    Attributes:
        df_path: Path to the CSV file.
        x_transform: Optional transform applied to the x column values
            after loading (e.g. unit conversion).
        x_column: Name of the column to use as x coordinates. If
            ``None``, rows are assumed to be evenly spaced and an
            integer index is used.
        y_columns: List of column names to use as y observations.
    """

    df_path: Path
    x_transform: Callable[[Any], Any] | None = None
    x_column: str | None = None
    y_columns: list[str]

df_path: Path instance-attribute

x_column: str | None = None class-attribute instance-attribute

x_transform: Callable[[Any], Any] | None = None class-attribute instance-attribute

y_columns: list[str] instance-attribute

__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None, df_path: Path, x_transform: Callable[[Any], Any] | None = None, x_column: str | None = None, y_columns: list[str]) -> None
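A sketch for loading observations from CSV (file name and column names are illustrative):

```python
from pathlib import Path
from anypinn.core.config import IngestionConfig

config = IngestionConfig(
    batch_size=64,
    data_ratio=1.0,
    collocations=1_000,
    df_path=Path("data/observations.csv"),  # hypothetical file
    x_column="day",                         # None -> evenly spaced integer index
    x_transform=lambda d: d / 7.0,          # e.g. convert days to weeks
    y_columns=["infected", "recovered"],
)
```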

LBFGSConfig dataclass

Configuration for the L-BFGS optimizer.

Attributes:

Name Type Description
lr float

Learning rate (must be positive).

max_iter int

Maximum number of iterations per optimization step.

max_eval int | None

Maximum number of function evaluations per step (defaults to max_iter * 1.25).

history_size int

Number of past updates to store for the approximation of the inverse Hessian.

line_search_fn str | None

Line search function ("strong_wolfe" or None).

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class LBFGSConfig:
    """
    Configuration for the L-BFGS optimizer.

    Attributes:
        lr: Learning rate (must be positive).
        max_iter: Maximum number of iterations per optimization step.
        max_eval: Maximum number of function evaluations per step
            (defaults to ``max_iter * 1.25``).
        history_size: Number of past updates to store for the
            approximation of the inverse Hessian.
        line_search_fn: Line search function (``"strong_wolfe"`` or ``None``).
    """

    lr: float = 1.0
    max_iter: int = 20
    max_eval: int | None = None
    history_size: int = 100
    line_search_fn: str | None = "strong_wolfe"

    def __post_init__(self) -> None:
        if self.lr <= 0:
            raise ValueError(f"lr must be positive, got {self.lr}.")
        if self.max_iter <= 0:
            raise ValueError(f"max_iter must be positive, got {self.max_iter}.")
        if self.history_size <= 0:
            raise ValueError(f"history_size must be positive, got {self.history_size}.")

history_size: int = 100 class-attribute instance-attribute

line_search_fn: str | None = 'strong_wolfe' class-attribute instance-attribute

lr: float = 1.0 class-attribute instance-attribute

max_eval: int | None = None class-attribute instance-attribute

max_iter: int = 20 class-attribute instance-attribute

__init__(*, lr: float = 1.0, max_iter: int = 20, max_eval: int | None = None, history_size: int = 100, line_search_fn: str | None = 'strong_wolfe') -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if self.lr <= 0:
        raise ValueError(f"lr must be positive, got {self.lr}.")
    if self.max_iter <= 0:
        raise ValueError(f"max_iter must be positive, got {self.max_iter}.")
    if self.history_size <= 0:
        raise ValueError(f"history_size must be positive, got {self.history_size}.")
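The defaults mirror `torch.optim.LBFGS`. A minimal sketch:

```python
from anypinn.core.config import LBFGSConfig

# Strong Wolfe line search is usually the safer choice for PINN fine-tuning;
# pass line_search_fn=None to use a fixed step of size lr instead.
lbfgs = LBFGSConfig(
    lr=1.0,
    max_iter=50,
    history_size=100,
    line_search_fn="strong_wolfe",
)
```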

LatinHypercubeSampler

Latin Hypercube sampler (pure-PyTorch, no SciPy dependency).

Stratifies each dimension into n equal intervals and places one sample per interval, then shuffles columns independently.

Parameters:

Name Type Description Default
seed int | None

Optional seed for reproducible sampling.

None
Source code in src/anypinn/core/samplers.py
class LatinHypercubeSampler:
    """Latin Hypercube sampler (pure-PyTorch, no SciPy dependency).

    Stratifies each dimension into ``n`` equal intervals and places one sample
    per interval, then shuffles columns independently.

    Args:
        seed: Optional seed for reproducible sampling.
    """

    def __init__(self, seed: int | None = None) -> None:
        self._gen = torch.Generator()
        if seed is not None:
            self._gen.manual_seed(seed)

    def sample(self, n: int, domain: Domain) -> Tensor:
        """Return ``n`` Latin Hypercube-sampled points within ``domain``."""
        d = domain.ndim
        result = torch.empty(n, d)

        for i, (lo, hi) in enumerate(domain.bounds):
            perm = torch.randperm(n, generator=self._gen)
            base = (perm.float() + torch.rand(n, generator=self._gen)) / n
            result[:, i] = base * (hi - lo) + lo

        return result

__init__(seed: int | None = None) -> None

Source code in src/anypinn/core/samplers.py
def __init__(self, seed: int | None = None) -> None:
    self._gen = torch.Generator()
    if seed is not None:
        self._gen.manual_seed(seed)

sample(n: int, domain: Domain) -> Tensor

Return n Latin Hypercube-sampled points within domain.

Source code in src/anypinn/core/samplers.py
def sample(self, n: int, domain: Domain) -> Tensor:
    """Return ``n`` Latin Hypercube-sampled points within ``domain``."""
    d = domain.ndim
    result = torch.empty(n, d)

    for i, (lo, hi) in enumerate(domain.bounds):
        perm = torch.randperm(n, generator=self._gen)
        base = (perm.float() + torch.rand(n, generator=self._gen)) / n
        result[:, i] = base * (hi - lo) + lo

    return result

LogFn

Bases: Protocol

A function that logs a value to a dictionary.

Source code in src/anypinn/core/types.py
class LogFn(Protocol):
    """
    A function that logs a value to a dictionary.
    """

    def __call__(self, name: str, value: Tensor, progress_bar: bool = False) -> None:
        """
        Log a value.

        Args:
            name: The name to log the value under.
            value: The value to log.
            progress_bar: Whether the value should be logged to the progress bar.
        """
        ...

__call__(name: str, value: Tensor, progress_bar: bool = False) -> None

Log a value.

Parameters:

Name Type Description Default
name str

The name to log the value under.

required
value Tensor

The value to log.

required
progress_bar bool

Whether the value should be logged to the progress bar.

False
Source code in src/anypinn/core/types.py
def __call__(self, name: str, value: Tensor, progress_bar: bool = False) -> None:
    """
    Log a value.

    Args:
        name: The name to log the value under.
        value: The value to log.
        progress_bar: Whether the value should be logged to the progress bar.
    """
    ...

LogUniform1DSampler

Log-uniform sampler for 1-D domains (reproduces SIR collocation behavior).

Samples uniformly in log1p space and maps back via expm1, producing a distribution that is denser near the lower bound — useful for epidemic models where early dynamics are most informative.

Parameters:

Name Type Description Default
seed int | None

Optional seed for reproducible sampling.

None

Raises:

Type Description
ValueError

If the domain is not 1-D or x0 <= -1.

Source code in src/anypinn/core/samplers.py
class LogUniform1DSampler:
    """Log-uniform sampler for 1-D domains (reproduces SIR collocation behavior).

    Samples uniformly in ``log1p`` space and maps back via ``expm1``, producing
    a distribution that is denser near the lower bound — useful for epidemic
    models where early dynamics are most informative.

    Args:
        seed: Optional seed for reproducible sampling.

    Raises:
        ValueError: If the domain is not 1-D or ``x0 <= -1``.
    """

    def __init__(self, seed: int | None = None) -> None:
        self._gen = torch.Generator()
        if seed is not None:
            self._gen.manual_seed(seed)

    def sample(self, n: int, domain: Domain) -> Tensor:
        """Return ``n`` log-uniformly spaced points within ``domain``."""
        if domain.ndim != 1:
            raise ValueError(
                f"log_uniform_1d sampler supports only 1-D domains, got ndim={domain.ndim}."
            )
        x0, x1 = domain.x0, domain.x1
        if x0 <= -1.0:
            raise ValueError(f"log_uniform_1d requires x0 > -1 for log1p, got x0={x0}.")
        log_lo = torch.tensor(x0, dtype=torch.float32).log1p()
        log_hi = torch.tensor(x1, dtype=torch.float32).log1p()
        u = torch.rand((n, 1), generator=self._gen)
        return torch.expm1(u * (log_hi - log_lo) + log_lo)

__init__(seed: int | None = None) -> None

Source code in src/anypinn/core/samplers.py
def __init__(self, seed: int | None = None) -> None:
    self._gen = torch.Generator()
    if seed is not None:
        self._gen.manual_seed(seed)

sample(n: int, domain: Domain) -> Tensor

Return n log-uniformly spaced points within domain.

Source code in src/anypinn/core/samplers.py
def sample(self, n: int, domain: Domain) -> Tensor:
    """Return ``n`` log-uniformly spaced points within ``domain``."""
    if domain.ndim != 1:
        raise ValueError(
            f"log_uniform_1d sampler supports only 1-D domains, got ndim={domain.ndim}."
        )
    x0, x1 = domain.x0, domain.x1
    if x0 <= -1.0:
        raise ValueError(f"log_uniform_1d requires x0 > -1 for log1p, got x0={x0}.")
    log_lo = torch.tensor(x0, dtype=torch.float32).log1p()
    log_hi = torch.tensor(x1, dtype=torch.float32).log1p()
    u = torch.rand((n, 1), generator=self._gen)
    return torch.expm1(u * (log_hi - log_lo) + log_lo)

MLPConfig dataclass

Configuration for a Multi-Layer Perceptron (MLP).

Attributes:

Name Type Description
in_dim int

Dimension of input layer.

out_dim int

Dimension of output layer.

hidden_layers list[int]

List of dimensions for hidden layers.

activation Activations

Activation function to use between layers.

output_activation Activations | None

Optional activation function for the output layer.

encode Callable[[Tensor], Tensor] | None

Optional function to encode inputs before passing to MLP.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class MLPConfig:
    """
    Configuration for a Multi-Layer Perceptron (MLP).

    Attributes:
        in_dim: Dimension of input layer.
        out_dim: Dimension of output layer.
        hidden_layers: List of dimensions for hidden layers.
        activation: Activation function to use between layers.
        output_activation: Optional activation function for the output layer.
        encode: Optional function to encode inputs before passing to MLP.
    """

    in_dim: int
    out_dim: int
    hidden_layers: list[int]
    activation: Activations
    output_activation: Activations | None = None
    encode: Callable[[Tensor], Tensor] | None = None

activation: Activations instance-attribute

encode: Callable[[Tensor], Tensor] | None = None class-attribute instance-attribute

hidden_layers: list[int] instance-attribute

in_dim: int instance-attribute

out_dim: int instance-attribute

output_activation: Activations | None = None class-attribute instance-attribute

__init__(*, in_dim: int, out_dim: int, hidden_layers: list[int], activation: Activations, output_activation: Activations | None = None, encode: Callable[[Tensor], Tensor] | None = None) -> None

PINNDataModule

Bases: LightningDataModule, ABC

LightningDataModule for PINNs. Manages data and collocation datasets and creates the combined PINNDataset.

Collocation points are generated via a CollocationSampler selected by the collocation_sampler field in TrainingDataConfig (string literal). Subclasses only need to implement gen_data(); collocation generation is handled by the sampler resolved from the hyperparameters.

Attributes:

Name Type Description
pinn_ds

Combined PINNDataset for training.

callbacks list[DataCallback]

Sequence of DataCallback callbacks applied after data loading.

Source code in src/anypinn/core/dataset.py
class PINNDataModule(pl.LightningDataModule, ABC):
    """
    LightningDataModule for PINNs.
    Manages data and collocation datasets and creates the combined PINNDataset.

    Collocation points are generated via a ``CollocationSampler`` selected by the
    ``collocation_sampler`` field in ``TrainingDataConfig`` (string literal).
    Subclasses only need to implement ``gen_data()``; collocation generation is
    handled by the sampler resolved from the hyperparameters.

    Attributes:
        pinn_ds: Combined PINNDataset for training.
        callbacks: Sequence of DataCallback callbacks applied after data loading.
    """

    def __init__(
        self,
        hp: PINNHyperparameters,
        validation: ValidationRegistry | None = None,
        callbacks: Sequence[DataCallback] | None = None,
        residual_scorer: ResidualScorer | None = None,
    ) -> None:
        super().__init__()
        self.hp = hp
        self.callbacks: list[DataCallback] = list(callbacks) if callbacks else []
        self._residual_scorer = residual_scorer

        self._unresolved_validation = validation or {}
        self._context: InferredContext | None = None

    def _build_sampler(self, strategy: CollocationStrategies) -> CollocationSampler:
        """Resolve a collocation sampler from a strategy name."""
        return build_sampler(
            strategy=strategy,
            seed=self.hp.training_data.collocation_seed,
            scorer=self._residual_scorer,
        )

    def load_data(self, config: IngestionConfig) -> DataBatch:
        """Load training data from a CSV file.

        Reads the CSV at ``config.df_path``, extracts x and y columns,
        and returns tensors shaped for PINN training.

        Args:
            config: Ingestion configuration specifying paths and columns.

        Returns:
            Tuple of ``(x, y)`` tensors with shapes ``(N, 1)`` and
            ``(N, k, 1)`` respectively.
        """
        df = pd.read_csv(config.df_path)

        if config.x_column is not None:
            x_values = df[config.x_column].values

            if config.x_transform is not None:
                x_values = config.x_transform(x_values)

            x = torch.tensor(x_values, dtype=torch.float32)
        else:
            x = torch.arange(len(df), dtype=torch.float32)

        y = torch.tensor(df[config.y_columns].values, dtype=torch.float32)

        if y.ndim == 1:
            y = y.unsqueeze(-1)  # (N,) → (N, 1)
        y = y.unsqueeze(-1)  # (N, k) → (N, k, 1) always

        return x.unsqueeze(-1), y

    @abstractmethod
    def gen_data(self, config: GenerationConfig) -> DataBatch:
        """Generate synthetic training data from a known solution.

        Subclasses implement this to solve the ODE/PDE with known
        parameters and return the resulting data (optionally with added
        noise).

        Args:
            config: Generation configuration specifying the domain,
                noise level, and ground-truth arguments.

        Returns:
            Tuple of ``(x, y)`` tensors with shapes ``(N, d)`` and
            ``(N, k, 1)`` respectively.
        """

    @override
    def setup(self, stage: str | None = None) -> None:
        """
        Load raw data from IngestionConfig, or generate synthetic data from GenerationConfig.
        Apply registered callbacks, create InferredContext and datasets.
        """
        config = self.hp.training_data

        self.validation = resolve_validation(
            self._unresolved_validation,
            config.df_path if isinstance(config, IngestionConfig) else None,
        )

        self.data = (
            self.load_data(config)
            if isinstance(config, IngestionConfig)
            else self.gen_data(config)
        )

        domain = Domain.from_x(self.data[0])
        self._domain = domain
        self._sampler = self._build_sampler(config.collocation_sampler)
        self.coll = self._sampler.sample(config.collocations, domain)

        for callback in self.callbacks:
            self.data, self.coll = callback.transform_data(self.data, self.coll)

        x_data, y_data = self.data

        if x_data.shape[0] != y_data.shape[0]:
            raise ValueError(
                f"Size mismatch: x has {x_data.shape[0]} rows, y has {y_data.shape[0]} rows."
            )
        if x_data.ndim != 2 or x_data.shape[1] < 1:
            raise ValueError(f"Expected x shape (n, d) with d >= 1, got {tuple(x_data.shape)}.")
        if y_data.ndim < 2 or y_data.shape[-1] != 1:
            raise ValueError(f"Expected y shape (n, ..., 1), got {tuple(y_data.shape)}.")
        if self.coll.ndim != 2 or self.coll.shape[1] < 1:
            raise ValueError(
                f"Expected coll shape (m, d) with d >= 1, got {tuple(self.coll.shape)}."
            )
        if x_data.shape[1] != self.coll.shape[1]:
            raise ValueError(
                f"Spatial dimension mismatch: x_data has d={x_data.shape[1]}, "
                f"coll has d={self.coll.shape[1]}. Both must share the same number of dimensions."
            )

        self._data_size = x_data.shape[0]

        self._context = InferredContext(
            x_data,
            y_data,
            self.validation,
        )

        self.pinn_ds = PINNDataset(
            x_data,
            y_data,
            self.coll,
            config.batch_size,
            config.data_ratio,
        )

        self.predict_ds = TensorDataset(
            x_data,
            y_data,
        )

        for callback in self.callbacks:
            callback.on_after_setup(self)

    @override
    def train_dataloader(self) -> DataLoader[TrainingBatch]:
        """
        Returns the training dataloader using PINNDataset.
        """
        return DataLoader[TrainingBatch](
            self.pinn_ds,
            batch_size=None,  # handled internally
            num_workers=cpu_count() or 1,
            persistent_workers=True,
            pin_memory=True,
        )

    @override
    def predict_dataloader(self) -> DataLoader[PredictionBatch]:
        """
        Returns the prediction dataloader using only the data dataset.
        """
        return DataLoader[PredictionBatch](
            cast(Dataset[PredictionBatch], self.predict_ds),
            batch_size=self._data_size,
            num_workers=cpu_count() or 1,
            persistent_workers=True,
            pin_memory=True,
        )

    @property
    def context(self) -> InferredContext:
        if self._context is None:
            raise RuntimeError("Context does not exist. Call setup() before accessing context.")
        return self._context

callbacks: list[DataCallback] = list(callbacks) if callbacks else [] instance-attribute

context: InferredContext property

hp = hp instance-attribute

__init__(hp: PINNHyperparameters, validation: ValidationRegistry | None = None, callbacks: Sequence[DataCallback] | None = None, residual_scorer: ResidualScorer | None = None) -> None

Source code in src/anypinn/core/dataset.py
def __init__(
    self,
    hp: PINNHyperparameters,
    validation: ValidationRegistry | None = None,
    callbacks: Sequence[DataCallback] | None = None,
    residual_scorer: ResidualScorer | None = None,
) -> None:
    super().__init__()
    self.hp = hp
    self.callbacks: list[DataCallback] = list(callbacks) if callbacks else []
    self._residual_scorer = residual_scorer

    self._unresolved_validation = validation or {}
    self._context: InferredContext | None = None

gen_data(config: GenerationConfig) -> DataBatch abstractmethod

Generate synthetic training data from a known solution.

Subclasses implement this to solve the ODE/PDE with known parameters and return the resulting data (optionally with added noise).

Parameters:

Name Type Description Default
config GenerationConfig

Generation configuration specifying the domain, noise level, and ground-truth arguments.

required

Returns:

Type Description
DataBatch

Tuple of (x, y) tensors with shapes (N, d) and (N, k, 1) respectively.

Source code in src/anypinn/core/dataset.py
@abstractmethod
def gen_data(self, config: GenerationConfig) -> DataBatch:
    """Generate synthetic training data from a known solution.

    Subclasses implement this to solve the ODE/PDE with known
    parameters and return the resulting data (optionally with added
    noise).

    Args:
        config: Generation configuration specifying the domain,
            noise level, and ground-truth arguments.

    Returns:
        Tuple of ``(x, y)`` tensors with shapes ``(N, d)`` and
        ``(N, k, 1)`` respectively.
    """

load_data(config: IngestionConfig) -> DataBatch

Load training data from a CSV file.

Reads the CSV at config.df_path, extracts x and y columns, and returns tensors shaped for PINN training.

Parameters:

Name Type Description Default
config IngestionConfig

Ingestion configuration specifying paths and columns.

required

Returns:

Type Description
DataBatch

Tuple of (x, y) tensors with shapes (N, 1) and (N, k, 1) respectively.

Source code in src/anypinn/core/dataset.py
def load_data(self, config: IngestionConfig) -> DataBatch:
    """Load training data from a CSV file.

    Reads the CSV at ``config.df_path``, extracts x and y columns,
    and returns tensors shaped for PINN training.

    Args:
        config: Ingestion configuration specifying paths and columns.

    Returns:
        Tuple of ``(x, y)`` tensors with shapes ``(N, 1)`` and
        ``(N, k, 1)`` respectively.
    """
    df = pd.read_csv(config.df_path)

    if config.x_column is not None:
        x_values = df[config.x_column].values

        if config.x_transform is not None:
            x_values = config.x_transform(x_values)

        x = torch.tensor(x_values, dtype=torch.float32)
    else:
        x = torch.arange(len(df), dtype=torch.float32)

    y = torch.tensor(df[config.y_columns].values, dtype=torch.float32)

    if y.ndim == 1:
        y = y.unsqueeze(-1)  # (N,) → (N, 1)
    y = y.unsqueeze(-1)  # (N, k) → (N, k, 1) always

    return x.unsqueeze(-1), y
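The reshaping in the final lines above can be sketched with NumPy standing in for torch (the `data` array and its column count are made up for illustration); it produces the documented (N, 1) and (N, k, 1) shapes:

```python
import numpy as np

# Hypothetical observations: N = 4 rows, k = 2 observed series.
data = np.array([[0.9, 0.1],
                 [0.8, 0.2],
                 [0.7, 0.3],
                 [0.6, 0.4]], dtype=np.float32)

x = np.arange(len(data), dtype=np.float32)  # fallback when no x_column is given

y = data
if y.ndim == 1:
    y = y[:, None]        # (N,)   -> (N, 1)
y = y[..., None]          # (N, k) -> (N, k, 1) always

x = x[:, None]            # (N,)   -> (N, 1)

print(x.shape, y.shape)   # (4, 1) (4, 2, 1)
```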

predict_dataloader() -> DataLoader[PredictionBatch]

Returns the prediction dataloader using only the data dataset.

Source code in src/anypinn/core/dataset.py
@override
def predict_dataloader(self) -> DataLoader[PredictionBatch]:
    """
    Returns the prediction dataloader using only the data dataset.
    """
    return DataLoader[PredictionBatch](
        cast(Dataset[PredictionBatch], self.predict_ds),
        batch_size=self._data_size,
        num_workers=cpu_count() or 1,
        persistent_workers=True,
        pin_memory=True,
    )

setup(stage: str | None = None) -> None

Load raw data from IngestionConfig, or generate synthetic data from GenerationConfig. Apply registered callbacks, create InferredContext and datasets.

Source code in src/anypinn/core/dataset.py
@override
def setup(self, stage: str | None = None) -> None:
    """
    Load raw data from IngestionConfig, or generate synthetic data from GenerationConfig.
    Apply registered callbacks, create InferredContext and datasets.
    """
    config = self.hp.training_data

    self.validation = resolve_validation(
        self._unresolved_validation,
        config.df_path if isinstance(config, IngestionConfig) else None,
    )

    self.data = (
        self.load_data(config)
        if isinstance(config, IngestionConfig)
        else self.gen_data(config)
    )

    domain = Domain.from_x(self.data[0])
    self._domain = domain
    self._sampler = self._build_sampler(config.collocation_sampler)
    self.coll = self._sampler.sample(config.collocations, domain)

    for callback in self.callbacks:
        self.data, self.coll = callback.transform_data(self.data, self.coll)

    x_data, y_data = self.data

    if x_data.shape[0] != y_data.shape[0]:
        raise ValueError(
            f"Size mismatch: x has {x_data.shape[0]} rows, y has {y_data.shape[0]} rows."
        )
    if x_data.ndim != 2 or x_data.shape[1] < 1:
        raise ValueError(f"Expected x shape (n, d) with d >= 1, got {tuple(x_data.shape)}.")
    if y_data.ndim < 2 or y_data.shape[-1] != 1:
        raise ValueError(f"Expected y shape (n, ..., 1), got {tuple(y_data.shape)}.")
    if self.coll.ndim != 2 or self.coll.shape[1] < 1:
        raise ValueError(
            f"Expected coll shape (m, d) with d >= 1, got {tuple(self.coll.shape)}."
        )
    if x_data.shape[1] != self.coll.shape[1]:
        raise ValueError(
            f"Spatial dimension mismatch: x_data has d={x_data.shape[1]}, "
            f"coll has d={self.coll.shape[1]}. Both must share the same number of dimensions."
        )

    self._data_size = x_data.shape[0]

    self._context = InferredContext(
        x_data,
        y_data,
        self.validation,
    )

    self.pinn_ds = PINNDataset(
        x_data,
        y_data,
        self.coll,
        config.batch_size,
        config.data_ratio,
    )

    self.predict_ds = TensorDataset(
        x_data,
        y_data,
    )

    for callback in self.callbacks:
        callback.on_after_setup(self)

train_dataloader() -> DataLoader[TrainingBatch]

Returns the training dataloader using PINNDataset.

Source code in src/anypinn/core/dataset.py
@override
def train_dataloader(self) -> DataLoader[TrainingBatch]:
    """
    Returns the training dataloader using PINNDataset.
    """
    return DataLoader[TrainingBatch](
        self.pinn_ds,
        batch_size=None,  # handled internally
        num_workers=cpu_count() or 1,
        persistent_workers=True,
        pin_memory=True,
    )

PINNDataset

Bases: Dataset[TrainingBatch]

Dataset used for PINN training. Combines labeled data and collocation points per sample. Given a data_ratio, the number of data points K is determined either as data_ratio * batch_size if the ratio is a float in [0, 1], or as an absolute count if it is an integer. The remaining C = batch_size - K points are used for collocation. Data points are sampled without replacement per epoch: the dataset cycles through all data points and, at the last batch, wraps around to the first indices to preserve the batch size. Collocation points are sampled with replacement from the pool. Each batch has the shape ((x_data[K,d], y_data[K,...]), x_coll[C,d]).

Parameters:

Name Type Description Default
x_data Tensor

Data point x coordinates (time values).

required
y_data Tensor

Data point y values (observations).

required
x_coll Tensor

Collocation point x coordinates.

required
batch_size int

Size of the batch.

required
data_ratio float | int

Share of the batch devoted to data points: a fraction in [0, 1] if a float, or an absolute count in [0, batch_size] if an integer.

required
Source code in src/anypinn/core/dataset.py
class PINNDataset(Dataset[TrainingBatch]):
    """
    Dataset used for PINN training. Combines labeled data and collocation points
    per sample.  Given a data_ratio, the amount of data points `K` is determined
    either by applying `data_ratio * batch_size` if ratio is a float between 0
    and 1 or by an absolute count if ratio is an integer. The remaining `C`
    points are used for collocation.  The data points are sampled without
    replacement per epoch i.e. cycles through all data points and at the last
    batch, wraps around to the first indices to ensure batch size. The collocation
    points are sampled with replacement from the pool.
    The dataset produces a batch of shape ((x_data[K,d], y_data[K,...]), x_coll[C,d]).

    Args:
        x_data: Data point x coordinates (time values).
        y_data: Data point y values (observations).
        x_coll: Collocation point x coordinates.
        batch_size: Size of the batch.
        data_ratio: Ratio of data points to collocation points, either as a ratio [0,1] or absolute
            count [0,batch_size].
    """

    def __init__(
        self,
        x_data: Tensor,
        y_data: Tensor,
        x_coll: Tensor,
        batch_size: int,
        data_ratio: float | int,
    ):
        super().__init__()
        if batch_size <= 0:
            raise ValueError(f"batch_size must be positive, got {batch_size}.")

        if isinstance(data_ratio, float):
            if not (0.0 <= data_ratio <= 1.0):
                raise ValueError(f"Float data_ratio must be in [0.0, 1.0], got {data_ratio}.")
            self.K = round(data_ratio * batch_size)
        else:
            if not (0 <= data_ratio <= batch_size):
                raise ValueError(
                    f"Integer data_ratio must be in [0, {batch_size}], got {data_ratio}."
                )
            self.K = data_ratio

        self.x_data = x_data
        self.y_data = y_data
        self.x_coll = x_coll

        self.batch_size = batch_size
        self.C = batch_size - self.K

        self.total_data = x_data.shape[0]
        self.total_coll = x_coll.shape[0]

        self._coll_gen = torch.Generator()

    def __len__(self) -> int:
        """Number of steps per epoch to see all data points once. Ceiling division."""
        return (self.total_data + self.K - 1) // self.K

    @override
    def __getitem__(self, index: int) -> TrainingBatch:
        """Return one sample containing K data points and C collocation points."""
        data_idx = self._get_data_indices(index)
        coll_idx = self._get_coll_indices(index)

        x_data = self.x_data[data_idx]
        y_data = self.y_data[data_idx]
        x_coll = self.x_coll[coll_idx]

        return ((x_data, y_data), x_coll)

    def _get_data_indices(self, idx: int) -> Tensor:
        """Get data indices for this step without replacement.
        When getting the last batch, wrap around to the first indices to ensure batch size.
        """
        if self.total_data == 0:
            return torch.empty(0, 1)

        start = idx * self.K
        indices = [(start + i) % self.total_data for i in range(self.K)]
        return torch.tensor(indices)

    def _get_coll_indices(self, idx: int) -> Tensor:
        """Get collocation indices for this step with replacement."""
        if self.total_coll == 0:
            return torch.empty(0, 1)

        self._coll_gen.manual_seed(idx)
        return torch.randint(0, self.total_coll, (self.C,), generator=self._coll_gen)

C = batch_size - self.K instance-attribute

K = round(data_ratio * batch_size) instance-attribute

batch_size = batch_size instance-attribute

total_coll = x_coll.shape[0] instance-attribute

total_data = x_data.shape[0] instance-attribute

x_coll = x_coll instance-attribute

x_data = x_data instance-attribute

y_data = y_data instance-attribute

__getitem__(index: int) -> TrainingBatch

Return one sample containing K data points and C collocation points.

Source code in src/anypinn/core/dataset.py
@override
def __getitem__(self, index: int) -> TrainingBatch:
    """Return one sample containing K data points and C collocation points."""
    data_idx = self._get_data_indices(index)
    coll_idx = self._get_coll_indices(index)

    x_data = self.x_data[data_idx]
    y_data = self.y_data[data_idx]
    x_coll = self.x_coll[coll_idx]

    return ((x_data, y_data), x_coll)

__init__(x_data: Tensor, y_data: Tensor, x_coll: Tensor, batch_size: int, data_ratio: float | int)

Source code in src/anypinn/core/dataset.py
def __init__(
    self,
    x_data: Tensor,
    y_data: Tensor,
    x_coll: Tensor,
    batch_size: int,
    data_ratio: float | int,
):
    super().__init__()
    if batch_size <= 0:
        raise ValueError(f"batch_size must be positive, got {batch_size}.")

    if isinstance(data_ratio, float):
        if not (0.0 <= data_ratio <= 1.0):
            raise ValueError(f"Float data_ratio must be in [0.0, 1.0], got {data_ratio}.")
        self.K = round(data_ratio * batch_size)
    else:
        if not (0 <= data_ratio <= batch_size):
            raise ValueError(
                f"Integer data_ratio must be in [0, {batch_size}], got {data_ratio}."
            )
        self.K = data_ratio

    self.x_data = x_data
    self.y_data = y_data
    self.x_coll = x_coll

    self.batch_size = batch_size
    self.C = batch_size - self.K

    self.total_data = x_data.shape[0]
    self.total_coll = x_coll.shape[0]

    self._coll_gen = torch.Generator()

__len__() -> int

Number of steps per epoch to see all data points once. Ceiling division.

Source code in src/anypinn/core/dataset.py
def __len__(self) -> int:
    """Number of steps per epoch to see all data points once. Ceiling division."""
    return (self.total_data + self.K - 1) // self.K

PINNHyperparameters dataclass

Aggregated hyperparameters for the PINN model.

Attributes:

Name Type Description
lr float

Base learning rate (used as fallback when no optimizer config is provided).

training_data IngestionConfig | GenerationConfig

Data source configuration — either IngestionConfig (CSV) or GenerationConfig (synthetic).

fields_config MLPConfig

MLP architecture for the neural field(s).

params_config MLPConfig | ScalarConfig

Configuration for learnable parameters (scalar or MLP-backed).

max_epochs int | None

Maximum number of training epochs.

gradient_clip_val float | None

Optional gradient clipping value.

criterion Criteria

Loss function name ("mse", "huber", or "l1").

optimizer AdamConfig | LBFGSConfig | None

Optimizer configuration. If None, Adam with lr is used.

scheduler ReduceLROnPlateauConfig | CosineAnnealingConfig | None

Learning rate scheduler configuration.

early_stopping EarlyStoppingConfig | None

Optional early stopping configuration (patience-based).

smma_stopping SMMAStoppingConfig | None

Optional SMMA stopping configuration (improvement-based).

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class PINNHyperparameters:
    """
    Aggregated hyperparameters for the PINN model.

    Attributes:
        lr: Base learning rate (used as fallback when no ``optimizer``
            config is provided).
        training_data: Data source configuration — either
            ``IngestionConfig`` (CSV) or ``GenerationConfig`` (synthetic).
        fields_config: MLP architecture for the neural field(s).
        params_config: Configuration for learnable parameters (scalar
            or MLP-backed).
        max_epochs: Maximum number of training epochs.
        gradient_clip_val: Optional gradient clipping value.
        criterion: Loss function name (``"mse"``, ``"huber"``, or
            ``"l1"``).
        optimizer: Optimizer configuration. If ``None``, Adam with
            ``lr`` is used.
        scheduler: Learning rate scheduler configuration.
        early_stopping: Optional early stopping configuration
            (patience-based).
        smma_stopping: Optional SMMA stopping configuration
            (improvement-based).
    """

    lr: float
    training_data: IngestionConfig | GenerationConfig
    fields_config: MLPConfig
    params_config: MLPConfig | ScalarConfig
    max_epochs: int | None = None
    gradient_clip_val: float | None = None
    criterion: Criteria = "mse"
    optimizer: AdamConfig | LBFGSConfig | None = None
    scheduler: ReduceLROnPlateauConfig | CosineAnnealingConfig | None = None
    early_stopping: EarlyStoppingConfig | None = None
    smma_stopping: SMMAStoppingConfig | None = None

    def __post_init__(self) -> None:
        if self.lr <= 0:
            raise ValueError(f"lr must be positive, got {self.lr}.")

criterion: Criteria = 'mse' class-attribute instance-attribute

early_stopping: EarlyStoppingConfig | None = None class-attribute instance-attribute

fields_config: MLPConfig instance-attribute

gradient_clip_val: float | None = None class-attribute instance-attribute

lr: float instance-attribute

max_epochs: int | None = None class-attribute instance-attribute

optimizer: AdamConfig | LBFGSConfig | None = None class-attribute instance-attribute

params_config: MLPConfig | ScalarConfig instance-attribute

scheduler: ReduceLROnPlateauConfig | CosineAnnealingConfig | None = None class-attribute instance-attribute

smma_stopping: SMMAStoppingConfig | None = None class-attribute instance-attribute

training_data: IngestionConfig | GenerationConfig instance-attribute

__init__(*, lr: float, training_data: IngestionConfig | GenerationConfig, fields_config: MLPConfig, params_config: MLPConfig | ScalarConfig, max_epochs: int | None = None, gradient_clip_val: float | None = None, criterion: Criteria = 'mse', optimizer: AdamConfig | LBFGSConfig | None = None, scheduler: ReduceLROnPlateauConfig | CosineAnnealingConfig | None = None, early_stopping: EarlyStoppingConfig | None = None, smma_stopping: SMMAStoppingConfig | None = None) -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if self.lr <= 0:
        raise ValueError(f"lr must be positive, got {self.lr}.")

Parameter

Bases: Module, Argument

A learnable parameter that participates in gradient optimization.

Supports scalar parameters (a single trainable value) or function-valued parameters (e.g. beta(t)) backed by a small MLP. Because Parameter is a subclass of Argument, it can be used anywhere an Argument is expected.

Parameters:

Name Type Description Default
config ScalarConfig | MLPConfig

Configuration for the parameter (ScalarConfig or MLPConfig).

required
Example

```python
# Scalar parameter starting at 0.3
beta = Parameter(ScalarConfig(init_value=0.3))
beta(torch.tensor([1.0]))  # returns ~0.3

# Function-valued parameter beta(t)
beta_t = Parameter(MLPConfig(
    in_dim=1, out_dim=1,
    hidden_layers=[8],
    activation="tanh",
))
```

Source code in src/anypinn/core/nn.py
class Parameter(nn.Module, Argument):
    """
    A learnable parameter that participates in gradient optimization.

    Supports scalar parameters (a single trainable value) or
    function-valued parameters (e.g. beta(t)) backed by a small MLP.
    Because ``Parameter`` is a subclass of ``Argument``, it can be
    used anywhere an ``Argument`` is expected.

    Args:
        config: Configuration for the parameter (ScalarConfig or MLPConfig).

    Example:
        >>> # Scalar parameter starting at 0.3
        >>> beta = Parameter(ScalarConfig(init_value=0.3))
        >>> beta(torch.tensor([1.0]))  # returns ~0.3
        >>> # Function-valued parameter beta(t)
        >>> beta_t = Parameter(MLPConfig(
        ...     in_dim=1, out_dim=1,
        ...     hidden_layers=[8],
        ...     activation="tanh",
        ... ))
    """

    def __init__(
        self,
        config: ScalarConfig | MLPConfig,
    ):
        super().__init__()
        self.config = config
        self._mode: Literal["scalar", "mlp"]

        if isinstance(config, ScalarConfig):
            self._mode = "scalar"
            self.value = nn.Parameter(torch.tensor(float(config.init_value), dtype=torch.float32))

        else:  # isinstance(config, MLPConfig)
            self._mode = "mlp"
            dims = [config.in_dim] + config.hidden_layers + [config.out_dim]
            act = get_activation(config.activation)

            layers: list[nn.Module] = []
            for i in range(len(dims) - 1):
                layers.append(nn.Linear(dims[i], dims[i + 1]))
                if i < len(dims) - 2:
                    layers.append(act)

            if config.output_activation is not None:
                out_act = get_activation(config.output_activation)
                layers.append(out_act)

            self.net = nn.Sequential(*layers)
            self.apply(self._init)

    @property
    def mode(self) -> Literal["scalar", "mlp"]:
        """Mode of the parameter: 'scalar' or 'mlp'."""
        return self._mode

    @staticmethod
    def _init(m: nn.Module) -> None:
        if isinstance(m, nn.Linear):
            nn.init.xavier_normal_(m.weight)
            nn.init.zeros_(m.bias)

    @override
    def forward(self, x: Tensor | None = None) -> Tensor:
        """
        Get the value of the parameter.

        Args:
            x: Input tensor (required for 'mlp' mode).

        Returns:
            The parameter value.
        """
        if self.mode == "scalar":
            return self.value if x is None else self.value.expand_as(x)
        else:
            if x is None:
                raise TypeError("Function-valued parameter requires input.")
            return cast(Tensor, self.net(x))
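In the scalar branch above, `expand_as` simply broadcasts the single trainable value to the input's shape; a NumPy sketch of the same semantics:

```python
import numpy as np

value = np.float32(0.3)                # the lone trainable scalar
x = np.zeros((4, 1), dtype=np.float32)
out = np.broadcast_to(value, x.shape)  # analogue of value.expand_as(x)
print(out.shape)                       # (4, 1), every entry equal to 0.3
```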

config = config instance-attribute

mode: Literal['scalar', 'mlp'] property

Mode of the parameter: 'scalar' or 'mlp'.

net = nn.Sequential(*layers) instance-attribute

value = nn.Parameter(torch.tensor(float(config.init_value), dtype=torch.float32)) instance-attribute

__init__(config: ScalarConfig | MLPConfig)

Source code in src/anypinn/core/nn.py
def __init__(
    self,
    config: ScalarConfig | MLPConfig,
):
    super().__init__()
    self.config = config
    self._mode: Literal["scalar", "mlp"]

    if isinstance(config, ScalarConfig):
        self._mode = "scalar"
        self.value = nn.Parameter(torch.tensor(float(config.init_value), dtype=torch.float32))

    else:  # isinstance(config, MLPConfig)
        self._mode = "mlp"
        dims = [config.in_dim] + config.hidden_layers + [config.out_dim]
        act = get_activation(config.activation)

        layers: list[nn.Module] = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                layers.append(act)

        if config.output_activation is not None:
            out_act = get_activation(config.output_activation)
            layers.append(out_act)

        self.net = nn.Sequential(*layers)
        self.apply(self._init)

forward(x: Tensor | None = None) -> Tensor

Get the value of the parameter.

Parameters:

Name Type Description Default
x Tensor | None

Input tensor (required for 'mlp' mode).

None

Returns:

Type Description
Tensor

The parameter value.

Source code in src/anypinn/core/nn.py
@override
def forward(self, x: Tensor | None = None) -> Tensor:
    """
    Get the value of the parameter.

    Args:
        x: Input tensor (required for 'mlp' mode).

    Returns:
        The parameter value.
    """
    if self.mode == "scalar":
        return self.value if x is None else self.value.expand_as(x)
    else:
        if x is None:
            raise TypeError("Function-valued parameter requires input.")
        return cast(Tensor, self.net(x))

Problem

Bases: Module

Aggregates constraints into a total training loss.

Manages fields (neural networks), learnable parameters, and the loss criterion. Call training_loss() during each training step and predict() for inference.

Parameters:

Name Type Description Default
constraints list[Constraint]

List of constraints to enforce.

required
criterion Module

Loss function module.

required
fields FieldsRegistry

Registry of named neural fields.

required
params ParamsRegistry

Registry of named learnable parameters.

required
Example

```python
problem = Problem(
    constraints=[residual_constraint, ic_constraint],
    criterion=nn.MSELoss(),
    fields={"u": field},
    params={"alpha": Parameter(ScalarConfig(init_value=0.01))},
)
```

Source code in src/anypinn/core/problem.py
class Problem(nn.Module):
    """
    Aggregates constraints into a total training loss.

    Manages fields (neural networks), learnable parameters, and the loss
    criterion. Call ``training_loss()`` during each training step and
    ``predict()`` for inference.

    Args:
        constraints: List of constraints to enforce.
        criterion: Loss function module.
        fields: Registry of named neural fields.
        params: Registry of named learnable parameters.

    Example:
        >>> problem = Problem(
        ...     constraints=[residual_constraint, ic_constraint],
        ...     criterion=nn.MSELoss(),
        ...     fields={"u": field},
        ...     params={"alpha": Parameter(ScalarConfig(init_value=0.01))},
        ... )
    """

    def __init__(
        self,
        constraints: list[Constraint],
        criterion: nn.Module,
        fields: FieldsRegistry,
        params: ParamsRegistry,
    ):
        super().__init__()
        self.constraints = constraints
        self.criterion = criterion
        self.fields = fields
        self.params = params

        self._fields = nn.ModuleList(fields.values())
        self._params = nn.ModuleList(params.values())

    def inject_context(self, context: InferredContext) -> None:
        """
        Inject the context into the problem.

        This should be called after data is loaded but before training starts.
        Pure function entries are passed through unchanged.

        Args:
            context: The context to inject.
        """
        self.context = context
        for c in self.constraints:
            c.inject_context(context)

    def training_loss(self, batch: TrainingBatch, log: LogFn | None = None) -> Tensor:
        """
        Calculate the total loss from all constraints.

        Args:
            batch: Current batch.
            log: Optional logging function.

        Returns:
            Sum of losses from all constraints.
        """
        _, x_coll = batch

        if not self.constraints:
            total = torch.tensor(0.0, device=x_coll.device)
        else:
            losses = iter(self.constraints)
            total = next(losses).loss(batch, self.criterion, log)
            for c in losses:
                total = total + c.loss(batch, self.criterion, log)

        if log is not None:
            for name, param in self.params.items():
                param_loss = self._param_validation_loss(name, param, x_coll)
                if param_loss is not None:
                    log(f"loss/{name}", param_loss, progress_bar=True)

            log(LOSS_KEY, total, progress_bar=True)

        return total

    def predict(self, batch: DataBatch) -> tuple[DataBatch, dict[str, Tensor]]:
        """
        Generate predictions for a given batch of data.
        Returns unscaled predictions in original domain.

        Args:
            batch: Batch of input coordinates.

        Returns:
            Tuple of (original_batch, predictions_dict).
        """

        x, y = batch

        n = x.shape[0]
        preds = {name: f(x).reshape(n, -1).squeeze(-1) for name, f in self.fields.items()}
        preds |= {name: p(x).reshape(n, -1).squeeze(-1) for name, p in self.params.items()}

        return (x.squeeze(-1), y.squeeze(-1)), preds

    def true_values(self, x: Tensor) -> dict[str, Tensor] | None:
        """
        Get the true values for a given x coordinates.
        Returns None if no validation source is configured.
        """

        return {
            name: p_true.reshape(x.shape[0], -1).squeeze(-1)
            for name, p in self.params.items()
            if (p_true := self._get_true_param(name, x)) is not None
        } or None

    def _get_true_param(self, param_name: str, x: Tensor) -> Tensor | None:
        """
        Get the ground truth values for a parameter at given coordinates.

        Args:
            param_name: Name of the parameter.
            x: Input coordinates.

        Returns:
            Ground truth values, or None if no validation source is configured.
        """
        if param_name not in self.context.validation:
            return None

        fn = self.context.validation[param_name]

        if isinstance(fn, _ColumnLookup):
            domain = self.context.domain
            if domain.dx is None:
                raise ValueError(
                    f"Cannot perform ColumnRef lookup for '{param_name}': "
                    "domain step size (dx) is unknown. Ensure the domain was inferred from "
                    "a uniformly-spaced coordinate tensor, or use a callable validation source."
                )
            x_idx = ((x.squeeze(-1) - domain.x0) / domain.dx[0]).round().unsqueeze(-1)
            return fn(x_idx)

        return fn(x)

    @torch.no_grad()
    def _param_validation_loss(
        self, param_name: str, param: Parameter, x_coll: Tensor
    ) -> Tensor | None:
        """
        Compute validation loss for a parameter against ground truth.

        Args:
            param: The parameter to compute validation loss for.
            x_coll: The input coordinates.

        Returns:
            Loss value, or None if no validation source is configured.
        """
        true = self._get_true_param(param_name, x_coll)
        if true is None:
            return None

        pred = param(x_coll)

        return torch.mean((true - pred) ** 2)
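The `_ColumnLookup` branch in `_get_true_param` recovers row indices by inverting the uniform grid; a minimal sketch (scalar `x0`/`dx` assumed here, while the real domain may be N-dimensional):

```python
def coord_to_row(x: float, x0: float, dx: float) -> int:
    """Map a coordinate on a uniform grid back to its row index."""
    return round((x - x0) / dx)

# Grid starting at t = 0.0 with step 0.5: t = 2.5 lives in row 5.
print(coord_to_row(2.5, 0.0, 0.5))  # 5
```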

constraints = constraints instance-attribute

criterion = criterion instance-attribute

fields = fields instance-attribute

params = params instance-attribute

__init__(constraints: list[Constraint], criterion: nn.Module, fields: FieldsRegistry, params: ParamsRegistry)

Source code in src/anypinn/core/problem.py
def __init__(
    self,
    constraints: list[Constraint],
    criterion: nn.Module,
    fields: FieldsRegistry,
    params: ParamsRegistry,
):
    super().__init__()
    self.constraints = constraints
    self.criterion = criterion
    self.fields = fields
    self.params = params

    self._fields = nn.ModuleList(fields.values())
    self._params = nn.ModuleList(params.values())

inject_context(context: InferredContext) -> None

Inject the context into the problem.

This should be called after data is loaded but before training starts. Pure function entries are passed through unchanged.

Parameters:

Name Type Description Default
context InferredContext

The context to inject.

required
Source code in src/anypinn/core/problem.py
def inject_context(self, context: InferredContext) -> None:
    """
    Inject the context into the problem.

    This should be called after data is loaded but before training starts.
    Pure function entries are passed through unchanged.

    Args:
        context: The context to inject.
    """
    self.context = context
    for c in self.constraints:
        c.inject_context(context)

predict(batch: DataBatch) -> tuple[DataBatch, dict[str, Tensor]]

Generate predictions for a given batch of data. Returns unscaled predictions in the original domain.

Parameters:

Name Type Description Default
batch DataBatch

Batch of input coordinates.

required

Returns:

Type Description
tuple[DataBatch, dict[str, Tensor]]

Tuple of (original_batch, predictions_dict).

Source code in src/anypinn/core/problem.py
def predict(self, batch: DataBatch) -> tuple[DataBatch, dict[str, Tensor]]:
    """
    Generate predictions for a given batch of data.
    Returns unscaled predictions in original domain.

    Args:
        batch: Batch of input coordinates.

    Returns:
        Tuple of (original_batch, predictions_dict).
    """

    x, y = batch

    n = x.shape[0]
    preds = {name: f(x).reshape(n, -1).squeeze(-1) for name, f in self.fields.items()}
    preds |= {name: p(x).reshape(n, -1).squeeze(-1) for name, p in self.params.items()}

    return (x.squeeze(-1), y.squeeze(-1)), preds

training_loss(batch: TrainingBatch, log: LogFn | None = None) -> Tensor

Calculate the total loss from all constraints.

Parameters:

Name Type Description Default
batch TrainingBatch

Current batch.

required
log LogFn | None

Optional logging function.

None

Returns:

Type Description
Tensor

Sum of losses from all constraints.

Source code in src/anypinn/core/problem.py
def training_loss(self, batch: TrainingBatch, log: LogFn | None = None) -> Tensor:
    """
    Calculate the total loss from all constraints.

    Args:
        batch: Current batch.
        log: Optional logging function.

    Returns:
        Sum of losses from all constraints.
    """
    _, x_coll = batch

    if not self.constraints:
        total = torch.tensor(0.0, device=x_coll.device)
    else:
        losses = iter(self.constraints)
        total = next(losses).loss(batch, self.criterion, log)
        for c in losses:
            total = total + c.loss(batch, self.criterion, log)

    if log is not None:
        for name, param in self.params.items():
            param_loss = self._param_validation_loss(name, param, x_coll)
            if param_loss is not None:
                log(f"loss/{name}", param_loss, progress_bar=True)

        log(LOSS_KEY, total, progress_bar=True)

    return total

true_values(x: Tensor) -> dict[str, Tensor] | None

Get the true values for given x coordinates. Returns None if no validation source is configured.

Source code in src/anypinn/core/problem.py
def true_values(self, x: Tensor) -> dict[str, Tensor] | None:
    """
    Get the true values for a given x coordinates.
    Returns None if no validation source is configured.
    """

    return {
        name: p_true.reshape(x.shape[0], -1).squeeze(-1)
        for name, p in self.params.items()
        if (p_true := self._get_true_param(name, x)) is not None
    } or None

RandomFourierFeatures

Bases: Module

Random Fourier Features (Rahimi & Recht, 2007) for RBF kernel approximation.

Draws a fixed random matrix \(\mathbf{B} \sim \mathcal{N}(0, \sigma^2)\) of shape \((d_{\text{in}},\, m)\) and maps \(\mathbf{x} \in \mathbb{R}^{n \times d_{\text{in}}}\) to:

\[ \phi(\mathbf{x}) = \frac{1}{\sqrt{m}} [\cos(\mathbf{x}\mathbf{B}),\; \sin(\mathbf{x}\mathbf{B})] \in \mathbb{R}^{n \times 2m} \]

\(\mathbf{B}\) is registered as a buffer and moves with the module across devices.

Parameters:

- `in_dim` (`int`, required): Spatial dimension \(d_{\text{in}}\) of the input.
- `num_features` (`int`, default `256`): Number of random features \(m\) (output dimension \(= 2m\)).
- `scale` (`float`, default `1.0`): Standard deviation \(\sigma\) of the frequency distribution. Higher values capture higher-frequency variation.
- `seed` (`int | None`, default `None`): Optional seed for reproducible frequency sampling.
Source code in src/anypinn/lib/encodings.py
class RandomFourierFeatures(nn.Module):
    """Random Fourier Features (Rahimi & Recht, 2007) for RBF kernel approximation.

    Draws a fixed random matrix $\\mathbf{B} \\sim \\mathcal{N}(0, \\sigma^2)$
    of shape $(d_{\\text{in}},\\, m)$ and maps
    $\\mathbf{x} \\in \\mathbb{R}^{n \\times d_{\\text{in}}}$ to:

    $$
    \\phi(\\mathbf{x}) = \\frac{1}{\\sqrt{m}}
        [\\cos(\\mathbf{x}\\mathbf{B}),\\; \\sin(\\mathbf{x}\\mathbf{B})]
        \\in \\mathbb{R}^{n \\times 2m}
    $$

    $\\mathbf{B}$ is registered as a buffer and moves with the module across devices.

    Args:
        in_dim:       Spatial dimension $d_{\\text{in}}$ of the input.
        num_features: Number of random features $m$
                      (output dimension $= 2m$).
        scale:        Standard deviation $\\sigma$ of the frequency distribution.
                      Higher values capture higher-frequency variation. Default: 1.0.
        seed:         Optional seed for reproducible frequency sampling.
    """

    def __init__(
        self,
        in_dim: int,
        num_features: int = 256,
        scale: float = 1.0,
        seed: int | None = None,
    ) -> None:
        if in_dim < 1:
            raise ValueError(f"in_dim must be >= 1, got {in_dim}.")
        if num_features < 1:
            raise ValueError(f"num_features must be >= 1, got {num_features}.")
        if scale <= 0.0:
            raise ValueError(f"scale must be > 0, got {scale}.")
        super().__init__()
        gen = torch.Generator()
        if seed is not None:
            gen.manual_seed(seed)
        B = torch.randn(in_dim, num_features, generator=gen) * scale
        self.register_buffer("B", B)
        self.num_features = num_features

    @property
    def out_dim(self) -> int:
        """Output dimension (always 2 * num_features)."""
        return 2 * self.num_features

    def forward(self, x: Tensor) -> Tensor:
        """Project input through random features and apply cos/sin."""
        proj = x @ self.B  # type: ignore[operator]  # ty: ignore[unsupported-operator]  # (n, num_features)
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1) / (self.num_features**0.5)

num_features = num_features instance-attribute

out_dim: int property

Output dimension (always 2 * num_features).

__init__(in_dim: int, num_features: int = 256, scale: float = 1.0, seed: int | None = None) -> None

Source code in src/anypinn/lib/encodings.py
def __init__(
    self,
    in_dim: int,
    num_features: int = 256,
    scale: float = 1.0,
    seed: int | None = None,
) -> None:
    if in_dim < 1:
        raise ValueError(f"in_dim must be >= 1, got {in_dim}.")
    if num_features < 1:
        raise ValueError(f"num_features must be >= 1, got {num_features}.")
    if scale <= 0.0:
        raise ValueError(f"scale must be > 0, got {scale}.")
    super().__init__()
    gen = torch.Generator()
    if seed is not None:
        gen.manual_seed(seed)
    B = torch.randn(in_dim, num_features, generator=gen) * scale
    self.register_buffer("B", B)
    self.num_features = num_features

forward(x: Tensor) -> Tensor

Project input through random features and apply cos/sin.

Source code in src/anypinn/lib/encodings.py
def forward(self, x: Tensor) -> Tensor:
    """Project input through random features and apply cos/sin."""
    proj = x @ self.B  # type: ignore[operator]  # ty: ignore[unsupported-operator]  # (n, num_features)
    return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1) / (self.num_features**0.5)
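For shape intuition, the feature map can be reproduced standalone with plain torch (a sketch of the formula above, not the library class itself):

```python
import torch

def rff(x: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    # phi(x) = [cos(xB), sin(xB)] / sqrt(m), so the output width is 2m.
    proj = x @ B  # (n, m)
    return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1) / (B.shape[1] ** 0.5)

gen = torch.Generator().manual_seed(0)
in_dim, m, scale = 2, 128, 1.0
B = torch.randn(in_dim, m, generator=gen) * scale  # fixed frequencies

x = torch.rand(64, in_dim, generator=gen)
phi = rff(x, B)
print(phi.shape)  # torch.Size([64, 256])
```

Inner products \(\phi(\mathbf{x})^\top \phi(\mathbf{y})\) then approximate the RBF kernel \(\exp(-\sigma^2 \lVert \mathbf{x} - \mathbf{y} \rVert^2 / 2)\) as \(m\) grows, which is what makes the encoding useful for injecting high-frequency content into an MLP.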

RandomSampler

Uniform random sampler inside domain bounds.

Parameters:

- `seed` (`int | None`, default `None`): Optional seed for reproducible sampling.
Source code in src/anypinn/core/samplers.py
class RandomSampler:
    """Uniform random sampler inside domain bounds.

    Args:
        seed: Optional seed for reproducible sampling.
    """

    def __init__(self, seed: int | None = None) -> None:
        self._gen = torch.Generator()
        if seed is not None:
            self._gen.manual_seed(seed)

    def sample(self, n: int, domain: Domain) -> Tensor:
        """Return ``n`` uniformly random points within ``domain``."""
        d = domain.ndim
        u = torch.rand((n, d), generator=self._gen)
        for i, (lo, hi) in enumerate(domain.bounds):
            u[:, i] = u[:, i] * (hi - lo) + lo
        return u

__init__(seed: int | None = None) -> None

Source code in src/anypinn/core/samplers.py
def __init__(self, seed: int | None = None) -> None:
    self._gen = torch.Generator()
    if seed is not None:
        self._gen.manual_seed(seed)

sample(n: int, domain: Domain) -> Tensor

Return n uniformly random points within domain.

Source code in src/anypinn/core/samplers.py
def sample(self, n: int, domain: Domain) -> Tensor:
    """Return ``n`` uniformly random points within ``domain``."""
    d = domain.ndim
    u = torch.rand((n, d), generator=self._gen)
    for i, (lo, hi) in enumerate(domain.bounds):
        u[:, i] = u[:, i] * (hi - lo) + lo
    return u
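The rescaling step can be checked in isolation. The `Domain` stand-in below is hypothetical, reduced to the `ndim` and `bounds` members that `sample` actually reads:

```python
import torch
from dataclasses import dataclass

@dataclass
class Domain:  # hypothetical stand-in, not anypinn's Domain
    bounds: list[tuple[float, float]]

    @property
    def ndim(self) -> int:
        return len(self.bounds)

gen = torch.Generator().manual_seed(42)
domain = Domain(bounds=[(0.0, 1.0), (-2.0, 2.0)])

# u ~ U[0, 1)^d, then each axis is affinely mapped into [lo, hi).
u = torch.rand((1000, domain.ndim), generator=gen)
for i, (lo, hi) in enumerate(domain.bounds):
    u[:, i] = u[:, i] * (hi - lo) + lo

print(u.shape)  # torch.Size([1000, 2])
```

Every point lands inside the box: the second axis stays within \([-2, 2)\) while the first stays within \([0, 1)\).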

ReduceLROnPlateauConfig dataclass

Configuration for Learning Rate Scheduler (ReduceLROnPlateau).

Attributes:

- `mode` (`Literal['min', 'max']`): `"min"` to reduce LR when the metric stops decreasing, `"max"` when it stops increasing.
- `factor` (`float`): Factor by which the learning rate is reduced (must be in (0, 1)).
- `patience` (`int`): Number of epochs with no improvement before the LR is reduced.
- `threshold` (`float`): Minimum change to qualify as an improvement.
- `min_lr` (`float`): Lower bound on the learning rate.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class ReduceLROnPlateauConfig:
    """
    Configuration for Learning Rate Scheduler (ReduceLROnPlateau).

    Attributes:
        mode: ``"min"`` to reduce LR when the metric stops decreasing,
            ``"max"`` when it stops increasing.
        factor: Factor by which the learning rate is reduced (must be
            in (0, 1)).
        patience: Number of epochs with no improvement before the LR
            is reduced.
        threshold: Minimum change to qualify as an improvement.
        min_lr: Lower bound on the learning rate.
    """

    mode: Literal["min", "max"]
    factor: float
    patience: int
    threshold: float
    min_lr: float

    def __post_init__(self) -> None:
        if not (0 < self.factor < 1):
            raise ValueError(f"factor must be in (0, 1), got {self.factor}.")
        if self.patience <= 0:
            raise ValueError(f"patience must be positive, got {self.patience}.")

factor: float instance-attribute

min_lr: float instance-attribute

mode: Literal['min', 'max'] instance-attribute

patience: int instance-attribute

threshold: float instance-attribute

__init__(*, mode: Literal['min', 'max'], factor: float, patience: int, threshold: float, min_lr: float) -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if not (0 < self.factor < 1):
        raise ValueError(f"factor must be in (0, 1), got {self.factor}.")
    if self.patience <= 0:
        raise ValueError(f"patience must be positive, got {self.patience}.")
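These fields mirror the arguments of PyTorch's `torch.optim.lr_scheduler.ReduceLROnPlateau` one-to-one, so a config presumably translates as follows (a sketch of the mapping; how anypinn wires the scheduler internally is not shown on this page):

```python
import torch

model = torch.nn.Linear(1, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# The config fields map directly onto the PyTorch scheduler's arguments.
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
    opt, mode="min", factor=0.5, patience=10, threshold=1e-4, min_lr=1e-6
)

# step() takes the monitored metric; after more than `patience` epochs
# without sufficient improvement the LR is multiplied by `factor`
# (never dropping below min_lr).
for _ in range(12):
    sched.step(1.0)  # flat metric -> no improvement
print(opt.param_groups[0]["lr"])  # 0.0005
```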

ResidualScorer

Bases: Protocol

Protocol for scoring candidate collocation points by PDE residual magnitude.

Source code in src/anypinn/core/samplers.py
class ResidualScorer(Protocol):
    """Protocol for scoring candidate collocation points by PDE residual magnitude."""

    def residual_score(self, x: Tensor) -> Tensor:
        """Return per-point non-negative residual score of shape ``(n,)``.

        Args:
            x: Candidate collocation points ``(n, d)``.

        Returns:
            Scores ``(n,)`` — higher means larger residual.
        """
        ...

residual_score(x: Tensor) -> Tensor

Return per-point non-negative residual score of shape (n,).

Parameters:

- `x` (`Tensor`, required): Candidate collocation points `(n, d)`.

Returns:

- `Tensor`: Scores `(n,)` — higher means larger residual.

Source code in src/anypinn/core/samplers.py
def residual_score(self, x: Tensor) -> Tensor:
    """Return per-point non-negative residual score of shape ``(n,)``.

    Args:
        x: Candidate collocation points ``(n, d)``.

    Returns:
        Scores ``(n,)`` — higher means larger residual.
    """
    ...
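Since the protocol is structural, any object with a matching `residual_score` method satisfies it without inheriting from anything. A hypothetical scorer for the ODE \(y' = -y\), where the score is the absolute residual \(|y' + y|\) at each candidate point:

```python
import torch

class ExpDecayScorer:
    """Hypothetical scorer for dy/dx = -y: score = |y' + y| per point."""

    def __init__(self, model: torch.nn.Module) -> None:
        self.model = model

    def residual_score(self, x: torch.Tensor) -> torch.Tensor:
        x = x.clone().requires_grad_(True)
        y = self.model(x)
        # dy/dx via autograd; y.sum() gives per-point gradients for a
        # pointwise model.
        (dy,) = torch.autograd.grad(y.sum(), x)
        return (dy + y).abs().squeeze(-1)  # non-negative scores, shape (n,)

scorer = ExpDecayScorer(torch.nn.Linear(1, 1))
scores = scorer.residual_score(torch.rand(32, 1))
print(scores.shape)  # torch.Size([32])
```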

SMMAStoppingConfig dataclass

Configuration for Smoothed Moving Average (SMMA) Stopping callback.

Training stops when the relative improvement of the SMMA over the `lookback` window falls below `threshold`.

Attributes:

- `window` (`int`): Number of epochs used to compute the smoothed moving average.
- `threshold` (`float`): Minimum relative improvement required to continue training.
- `lookback` (`int`): Number of SMMA values to compare for improvement detection.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class SMMAStoppingConfig:
    """
    Configuration for Smoothed Moving Average (SMMA) Stopping callback.

    Training stops when the relative improvement of the SMMA over the
    ``lookback`` window falls below ``threshold``.

    Attributes:
        window: Number of epochs used to compute the smoothed moving
            average.
        threshold: Minimum relative improvement required to continue
            training.
        lookback: Number of SMMA values to compare for improvement
            detection.
    """

    window: int
    threshold: float
    lookback: int

    def __post_init__(self) -> None:
        if self.window <= 0:
            raise ValueError(f"window must be positive, got {self.window}.")
        if self.lookback <= 0:
            raise ValueError(f"lookback must be positive, got {self.lookback}.")
        if self.threshold <= 0:
            raise ValueError(f"threshold must be positive, got {self.threshold}.")

lookback: int instance-attribute

threshold: float instance-attribute

window: int instance-attribute

__init__(*, window: int, threshold: float, lookback: int) -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if self.window <= 0:
        raise ValueError(f"window must be positive, got {self.window}.")
    if self.lookback <= 0:
        raise ValueError(f"lookback must be positive, got {self.lookback}.")
    if self.threshold <= 0:
        raise ValueError(f"threshold must be positive, got {self.threshold}.")
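The stopping rule itself lives in the callback, not in this config, so the following is only one plausible reading of it, sketched in plain Python: the SMMA is the usual smoothed moving average \( \text{SMMA}_t = (\text{SMMA}_{t-1} \cdot (w - 1) + \text{loss}_t) / w \), and training stops when the relative improvement across the last `lookback` SMMA values drops below `threshold`. All names here are illustrative.

```python
def should_stop(losses, window=5, lookback=3, threshold=1e-3):
    # Smoothed moving average over the loss history.
    smma = [losses[0]]
    for loss in losses[1:]:
        smma.append((smma[-1] * (window - 1) + loss) / window)
    if len(smma) < lookback + 1:
        return False  # not enough history yet
    # Relative improvement across the lookback window.
    old, new = smma[-lookback - 1], smma[-1]
    return (old - new) / old < threshold

print(should_stop([1.0, 0.9, 0.8, 0.7, 0.6]))  # still improving -> False
print(should_stop([1.0] + [0.5] * 40))         # long plateau -> True
```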

ScalarConfig dataclass

Configuration for a scalar parameter.

Attributes:

- `init_value` (`float`): Initial value for the parameter.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class ScalarConfig:
    """
    Configuration for a scalar parameter.

    Attributes:
        init_value: Initial value for the parameter.
    """

    init_value: float

init_value: float instance-attribute

__init__(*, init_value: float) -> None

TrainingDataConfig dataclass

Configuration for data loading and batching.

Attributes:

- `batch_size` (`int`): Number of points per training batch.
- `data_ratio` (`int | float`): Ratio of data to collocation points per batch.
- `collocations` (`int`): Total number of collocation points to generate.
- `collocation_sampler` (`CollocationStrategies`): Sampling strategy for collocation points.
- `collocation_seed` (`int | None`): Optional seed for reproducible collocation sampling.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class TrainingDataConfig:
    """
    Configuration for data loading and batching.

    Attributes:
        batch_size: Number of points per training batch.
        data_ratio: Ratio of data to collocation points per batch.
        collocations: Total number of collocation points to generate.
        collocation_sampler: Sampling strategy for collocation points.
        collocation_seed: Optional seed for reproducible collocation sampling.
    """

    batch_size: int
    data_ratio: int | float
    collocations: int
    collocation_sampler: CollocationStrategies = "random"
    collocation_seed: int | None = None

    def __post_init__(self) -> None:
        if self.batch_size <= 0:
            raise ValueError(f"batch_size must be positive, got {self.batch_size}.")
        if self.collocations < 0:
            raise ValueError(f"collocations must be non-negative, got {self.collocations}.")
        if isinstance(self.data_ratio, float):
            if not (0.0 <= self.data_ratio <= 1.0):
                raise ValueError(f"Float data_ratio must be in [0.0, 1.0], got {self.data_ratio}.")
        else:
            if not (0 <= self.data_ratio <= self.batch_size):
                raise ValueError(
                    f"Integer data_ratio must be in [0, {self.batch_size}], got {self.data_ratio}."
                )

batch_size: int instance-attribute

collocation_sampler: CollocationStrategies = 'random' class-attribute instance-attribute

collocation_seed: int | None = None class-attribute instance-attribute

collocations: int instance-attribute

data_ratio: int | float instance-attribute

__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None) -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if self.batch_size <= 0:
        raise ValueError(f"batch_size must be positive, got {self.batch_size}.")
    if self.collocations < 0:
        raise ValueError(f"collocations must be non-negative, got {self.collocations}.")
    if isinstance(self.data_ratio, float):
        if not (0.0 <= self.data_ratio <= 1.0):
            raise ValueError(f"Float data_ratio must be in [0.0, 1.0], got {self.data_ratio}.")
    else:
        if not (0 <= self.data_ratio <= self.batch_size):
            raise ValueError(
                f"Integer data_ratio must be in [0, {self.batch_size}], got {self.data_ratio}."
            )
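The validation above suggests the two readings of `data_ratio`: a float is a fraction of `batch_size`, an int is an absolute count of data points. A hypothetical sketch of the resulting split (the library's actual batching code is not shown on this page):

```python
def split_batch(batch_size: int, data_ratio):
    # Float ratio -> fraction of the batch; int ratio -> absolute count.
    if isinstance(data_ratio, float):
        n_data = round(batch_size * data_ratio)
    else:
        n_data = data_ratio
    return n_data, batch_size - n_data  # (data points, collocation points)

print(split_batch(128, 0.25))  # (32, 96)
print(split_batch(128, 10))    # (10, 118)
```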

UniformSampler

Cartesian grid sampler that distributes points evenly across the domain.

For d-dimensional domains, places `ceil(n^(1/d))` points per axis, then takes the first `n` points of the resulting grid.

Parameters:

- `seed` (`int | None`, default `None`): Optional seed (unused — grid is deterministic).
Source code in src/anypinn/core/samplers.py
class UniformSampler:
    """Cartesian grid sampler that distributes points evenly across the domain.

    For d-dimensional domains, places ``ceil(n^(1/d))`` points per axis then
    takes the first ``n`` points of the resulting grid.

    Args:
        seed: Optional seed (unused — grid is deterministic).
    """

    def __init__(self, seed: int | None = None) -> None:
        pass

    def sample(self, n: int, domain: Domain) -> Tensor:
        """Return ``n`` points on a uniform Cartesian grid over ``domain``."""
        d = domain.ndim
        pts_per_dim = math.ceil(n ** (1.0 / d))

        linspaces = [torch.linspace(lo, hi, pts_per_dim) for lo, hi in domain.bounds]
        grids = torch.meshgrid(*linspaces, indexing="ij")
        flat = torch.stack([g.reshape(-1) for g in grids], dim=-1)
        return flat[:n]

__init__(seed: int | None = None) -> None

Source code in src/anypinn/core/samplers.py
def __init__(self, seed: int | None = None) -> None:
    pass

sample(n: int, domain: Domain) -> Tensor

Return n points on a uniform Cartesian grid over domain.

Source code in src/anypinn/core/samplers.py
def sample(self, n: int, domain: Domain) -> Tensor:
    """Return ``n`` points on a uniform Cartesian grid over ``domain``."""
    d = domain.ndim
    pts_per_dim = math.ceil(n ** (1.0 / d))

    linspaces = [torch.linspace(lo, hi, pts_per_dim) for lo, hi in domain.bounds]
    grids = torch.meshgrid(*linspaces, indexing="ij")
    flat = torch.stack([g.reshape(-1) for g in grids], dim=-1)
    return flat[:n]
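Inlining the grid construction for a concrete case shows the truncation behaviour: with n = 50 points in 2-D, `ceil(50 ** 0.5) = 8` points per axis build an 8 × 8 = 64-point grid, and only the first 50 rows are kept.

```python
import math
import torch

n, bounds = 50, [(0.0, 1.0), (-1.0, 1.0)]
pts_per_dim = math.ceil(n ** (1.0 / len(bounds)))  # 8

linspaces = [torch.linspace(lo, hi, pts_per_dim) for lo, hi in bounds]
grids = torch.meshgrid(*linspaces, indexing="ij")
flat = torch.stack([g.reshape(-1) for g in grids], dim=-1)

print(flat.shape)      # torch.Size([64, 2])
print(flat[:n].shape)  # torch.Size([50, 2])
```

With `indexing="ij"` the first coordinate varies slowest, so the truncation drops the rows with the largest values of the first axis; the grid covers the box exactly only when n is a perfect d-th power.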

build_criterion(name: Criteria) -> nn.Module

Return the loss-criterion module for the given name.

Parameters:

- `name` (`Criteria`, required): One of `"mse"`, `"huber"`, `"l1"`.

Returns:

- `Module`: The corresponding PyTorch loss module.

Source code in src/anypinn/core/nn.py
def build_criterion(name: Criteria) -> nn.Module:
    """
    Return the loss-criterion module for the given name.

    Args:
        name: One of ``"mse"``, ``"huber"``, ``"l1"``.

    Returns:
        The corresponding PyTorch loss module.
    """
    return {
        "mse": nn.MSELoss(),
        "huber": nn.HuberLoss(),
        "l1": nn.L1Loss(),
    }[name]
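For instance, the `"huber"` entry returns `nn.HuberLoss`, which is quadratic for residuals below its default delta of 1.0 and linear above it — a common choice when a few data points are noisy outliers:

```python
import torch
import torch.nn as nn

criterion = nn.HuberLoss()  # the module build_criterion("huber") returns

pred = torch.tensor([0.0, 10.0])
target = torch.tensor([0.5, 0.0])

# |error| = 0.5 -> quadratic branch: 0.5 * 0.5**2     = 0.125
# |error| = 10  -> linear branch:    1.0 * (10 - 0.5) = 9.5
# mean over both elements:           (0.125 + 9.5)/2  = 4.8125
print(criterion(pred, target).item())  # 4.8125
```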

build_sampler(strategy: CollocationStrategies, seed: int | None = None, scorer: ResidualScorer | None = None) -> CollocationSampler

Construct a collocation sampler from a strategy name.

Parameters:

- `strategy` (`CollocationStrategies`, required): One of the `CollocationStrategies` literals.
- `seed` (`int | None`, default `None`): Optional seed for reproducible sampling.
- `scorer` (`ResidualScorer | None`, default `None`): Required when `strategy="adaptive"`.

Returns:

- `CollocationSampler`: A sampler instance satisfying the `CollocationSampler` protocol.

Raises:

- `ValueError`: If `strategy="adaptive"` but no scorer is provided.

Example:

>>> sampler = build_sampler("latin_hypercube", seed=42)
>>> domain = Domain(bounds=[(0, 1)])
>>> points = sampler.sample(100, domain)
>>> points.shape
torch.Size([100, 1])

Source code in src/anypinn/core/samplers.py
def build_sampler(
    strategy: CollocationStrategies,
    seed: int | None = None,
    scorer: ResidualScorer | None = None,
) -> CollocationSampler:
    """Construct a collocation sampler from a strategy name.

    Args:
        strategy: One of the ``CollocationStrategies`` literals.
        seed: Optional seed for reproducible sampling.
        scorer: Required when ``strategy="adaptive"``.

    Returns:
        A sampler instance satisfying the ``CollocationSampler`` protocol.

    Raises:
        ValueError: If ``strategy="adaptive"`` but no scorer is provided.

    Example:
        >>> sampler = build_sampler("latin_hypercube", seed=42)
        >>> domain = Domain(bounds=[(0, 1)])
        >>> points = sampler.sample(100, domain)
        >>> points.shape
        torch.Size([100, 1])
    """
    if strategy == "adaptive":
        if scorer is None:
            raise ValueError(
                "AdaptiveSampler requires a ResidualScorer. "
                "Pass a scorer via PINNDataModule or use a different strategy."
            )
        return AdaptiveSampler(scorer=scorer, seed=seed)

    cls = _SAMPLER_REGISTRY.get(strategy)
    if cls is None:
        raise ValueError(
            f"Unknown collocation strategy '{strategy}'. "
            f"Choose from: {', '.join(_SAMPLER_REGISTRY)} or 'adaptive'."
        )
    return cls(seed=seed)

get_activation(name: Activations) -> nn.Module

Get the activation function module by name.

Parameters:

- `name` (`Activations`, required): The name of the activation function.

Returns:

- `Module`: The PyTorch activation module.

Source code in src/anypinn/core/nn.py
def get_activation(name: Activations) -> nn.Module:
    """
    Get the activation function module by name.

    Args:
        name: The name of the activation function.

    Returns:
        The PyTorch activation module.
    """
    return {
        "tanh": nn.Tanh(),
        "relu": nn.ReLU(),
        "leaky_relu": nn.LeakyReLU(),
        "sigmoid": nn.Sigmoid(),
        "selu": nn.SELU(),
        "softplus": nn.Softplus(),
        "identity": nn.Identity(),
    }[name]

resolve_validation(registry: ValidationRegistry, df_path: Path | None = None) -> ResolvedValidation

Resolve a ValidationRegistry by converting ColumnRef entries to callables.

Pure function entries are passed through unchanged. ColumnRef entries are resolved using the provided data file path.

Parameters:

- `registry` (`ValidationRegistry`, required): The validation registry to resolve.
- `df_path` (`Path | None`, default `None`): Path to the CSV file for ColumnRef resolution.

Returns:

- `ResolvedValidation`: A dictionary mapping parameter names to callable validation functions.

Raises:

- `ValueError`: If a ColumnRef cannot be resolved (missing column or no `df_path`).

Source code in src/anypinn/core/validation.py
def resolve_validation(
    registry: ValidationRegistry,
    df_path: Path | None = None,
) -> ResolvedValidation:
    """
    Resolve a ValidationRegistry by converting ColumnRef entries to callables.

    Pure function entries are passed through unchanged. ColumnRef entries
    are resolved using the provided data file path.

    Args:
        registry: The validation registry to resolve.
        df_path: Path to the CSV file for ColumnRef resolution.

    Returns:
        A dictionary mapping parameter names to callable validation functions.

    Raises:
        ValueError: If a ColumnRef cannot be resolved (missing column or no df_path).
    """

    resolved: ResolvedValidation = {}
    df: pd.DataFrame | None = None

    for name, source in registry.items():
        if source is None:
            continue

        if callable(source) and not isinstance(source, ColumnRef):
            resolved[name] = source

        elif isinstance(source, ColumnRef):
            if df_path is None:
                raise ValueError(
                    f"Cannot resolve ColumnRef for '{name}': no df_path provided. "
                    "Either pass a df_path or use a callable instead of ColumnRef."
                )

            if df is None:
                df = pd.read_csv(df_path)

            if source.column not in df.columns:
                raise ValueError(
                    f"Cannot resolve ColumnRef for '{name}': "
                    f"column '{source.column}' not found in data. "
                    f"Available columns: {list(df.columns)}"
                )

            column_values = torch.tensor(df[source.column].values, dtype=torch.float32)

            if source.transform is not None:
                column_values = source.transform(column_values)

            def make_lookup_fn(values: Tensor) -> Callable[[Tensor], Tensor]:
                cache: dict[torch.device, Tensor] = {}

                def lookup(x: Tensor) -> Tensor:
                    device = x.device
                    if device not in cache:
                        cache[device] = values.to(device)
                    idx = x.squeeze(-1).round().to(torch.int32)
                    return cache[device][idx]

                return lookup

            resolved[name] = _ColumnLookup(make_lookup_fn(column_values))

    return resolved
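The closure built for a ColumnRef treats each x value as an approximate row index into the CSV column (it rounds to the nearest integer), so it only makes sense when the training coordinates are integer-spaced. A standalone sketch of the lookup:

```python
import torch

# A column as it would be read from the CSV (already transformed, if a
# transform was configured on the ColumnRef).
column_values = torch.tensor([0.30, 0.28, 0.25, 0.20])

def lookup(x: torch.Tensor) -> torch.Tensor:
    # Round each coordinate to the nearest integer row index, as the
    # resolved closure above does.
    idx = x.squeeze(-1).round().to(torch.int32)
    return column_values[idx]

x = torch.tensor([[0.2], [1.9], [3.0]])
print(lookup(x))  # tensor([0.3000, 0.2500, 0.2000])
```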