
anypinn.core

Core PINN building blocks.

Activations: TypeAlias = Literal['tanh', 'relu', 'leaky_relu', 'sigmoid', 'selu', 'softplus', 'identity'] module-attribute

Supported activation functions.

ArgsRegistry: TypeAlias = dict[str, Argument] module-attribute

CollocationStrategies: TypeAlias = Literal['uniform', 'random', 'latin_hypercube', 'log_uniform_1d', 'adaptive'] module-attribute

Supported collocation sampling strategies.

Criteria: TypeAlias = Literal['mse', 'huber', 'l1'] module-attribute

Supported loss criteria.

DataBatch: TypeAlias = tuple[Tensor, Tensor] module-attribute

Type alias for data batch: (x, y).

FieldsRegistry: TypeAlias = dict[str, Field] module-attribute

LOSS_KEY = 'loss' module-attribute

Key used for logging the total loss.

ParamsRegistry: TypeAlias = dict[str, Parameter] module-attribute

Predictions: TypeAlias = tuple[DataBatch, dict[str, Tensor], dict[str, Tensor] | None] module-attribute

Type alias for model predictions: (input_batch, predictions_dictionary, true_values_dictionary). predictions_dictionary maps each field or parameter name to its prediction; true_values_dictionary maps each field or parameter name to its true value, or is None if no validation source is configured.

ResolvedValidation: TypeAlias = dict[str, Callable[[Tensor], Tensor]] module-attribute

Validation registry after ColumnRef entries have been resolved to callables.

TrainingBatch: TypeAlias = tuple[DataBatch, Tensor] module-attribute

Training batch tuple: ((x_data, y_data), x_coll).

ValidationRegistry: TypeAlias = dict[str, ValidationSource] module-attribute

Registry mapping parameter names to their validation sources.

Example

>>> validation: ValidationRegistry = {
...     "beta": lambda x: torch.sin(x),           # Pure function
...     "gamma": ColumnRef(column="gamma_true"),  # From data
...     "delta": None,                            # No validation
... }

ValidationSource: TypeAlias = Callable[[Tensor], Tensor] | ColumnRef | None module-attribute

A source for ground truth values. Can be:

- A callable that takes x coordinates and returns true values
- A ColumnRef that references a column in loaded data
- None if no validation is needed for this parameter

__all__ = ['LOSS_KEY', 'Activations', 'AdamConfig', 'AdaptiveSampler', 'ArgsRegistry', 'Argument', 'CollocationSampler', 'CollocationStrategies', 'ColumnRef', 'Constraint', 'CosineAnnealingConfig', 'Criteria', 'DataBatch', 'DataCallback', 'Domain', 'EarlyStoppingConfig', 'Field', 'FieldsRegistry', 'FourierEncoding', 'GenerationConfig', 'InferredContext', 'IngestionConfig', 'LBFGSConfig', 'LatinHypercubeSampler', 'LogFn', 'LogUniform1DSampler', 'MLPConfig', 'PINNDataModule', 'PINNDataset', 'PINNHyperparameters', 'Parameter', 'ParamsRegistry', 'Predictions', 'Problem', 'RandomFourierFeatures', 'RandomSampler', 'ReduceLROnPlateauConfig', 'ResidualScorer', 'ResolvedValidation', 'SMMAStoppingConfig', 'ScalarConfig', 'TrainingBatch', 'TrainingDataConfig', 'UniformSampler', 'ValidationRegistry', 'ValidationSource', 'build_criterion', 'build_sampler', 'get_activation', 'resolve_validation'] module-attribute

AdamConfig dataclass

Configuration for the Adam optimizer.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class AdamConfig:
    """
    Configuration for the Adam optimizer.
    """

    lr: float = 1e-3
    betas: tuple[float, float] = (0.9, 0.999)
    weight_decay: float = 0.0

    def __post_init__(self) -> None:
        if self.lr <= 0:
            raise ValueError(f"lr must be positive, got {self.lr}.")
        if self.weight_decay < 0:
            raise ValueError(f"weight_decay must be non-negative, got {self.weight_decay}.")
        if not (0 < self.betas[0] < 1):
            raise ValueError(f"betas[0] must be in (0, 1), got {self.betas[0]}.")
        if not (0 < self.betas[1] < 1):
            raise ValueError(f"betas[1] must be in (0, 1), got {self.betas[1]}.")

betas: tuple[float, float] = (0.9, 0.999) class-attribute instance-attribute

lr: float = 0.001 class-attribute instance-attribute

weight_decay: float = 0.0 class-attribute instance-attribute

__init__(*, lr: float = 0.001, betas: tuple[float, float] = (0.9, 0.999), weight_decay: float = 0.0) -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if self.lr <= 0:
        raise ValueError(f"lr must be positive, got {self.lr}.")
    if self.weight_decay < 0:
        raise ValueError(f"weight_decay must be non-negative, got {self.weight_decay}.")
    if not (0 < self.betas[0] < 1):
        raise ValueError(f"betas[0] must be in (0, 1), got {self.betas[0]}.")
    if not (0 < self.betas[1] < 1):
        raise ValueError(f"betas[1] must be in (0, 1), got {self.betas[1]}.")

AdaptiveSampler

Residual-weighted adaptive collocation sampler.

Draws an oversample of candidate points, scores them using a ResidualScorer, and retains the top-scoring subset. A configurable exploration_ratio ensures a fraction of purely random points to prevent mode collapse.

Parameters:

Name Type Description Default
scorer ResidualScorer

Callable returning per-point residual scores (n,).

required
oversample_factor int

Multiplier on n for candidate generation.

4
exploration_ratio float

Fraction of the budget reserved for random points.

0.2
seed int | None

Optional seed for reproducible sampling.

None
Source code in src/anypinn/core/samplers.py
class AdaptiveSampler:
    """Residual-weighted adaptive collocation sampler.

    Draws an oversample of candidate points, scores them using a
    ``ResidualScorer``, and retains the top-scoring subset. A configurable
    ``exploration_ratio`` ensures a fraction of purely random points to prevent
    mode collapse.

    Args:
        scorer: Callable returning per-point residual scores ``(n,)``.
        oversample_factor: Multiplier on ``n`` for candidate generation.
        exploration_ratio: Fraction of the budget reserved for random points.
        seed: Optional seed for reproducible sampling.
    """

    def __init__(
        self,
        scorer: ResidualScorer,
        oversample_factor: int = 4,
        exploration_ratio: float = 0.2,
        seed: int | None = None,
    ) -> None:
        if oversample_factor < 1:
            raise ValueError(f"oversample_factor must be >= 1, got {oversample_factor}.")
        if not (0.0 <= exploration_ratio <= 1.0):
            raise ValueError(f"exploration_ratio must be in [0, 1], got {exploration_ratio}.")
        self._scorer = scorer
        self._oversample = oversample_factor
        self._explore = exploration_ratio
        self._random = RandomSampler(seed=seed)

    def sample(self, n: int, domain: Domain) -> Tensor:
        n_explore = max(1, int(n * self._explore))
        n_exploit = n - n_explore

        explore_pts = self._random.sample(n_explore, domain)

        if n_exploit <= 0:
            return explore_pts

        n_candidates = n_exploit * self._oversample
        candidates = self._random.sample(n_candidates, domain)

        with torch.no_grad():
            scores = self._scorer.residual_score(candidates)

        _, top_idx = scores.topk(min(n_exploit, len(scores)))
        exploit_pts = candidates[top_idx]

        return torch.cat([explore_pts, exploit_pts], dim=0)

__init__(scorer: ResidualScorer, oversample_factor: int = 4, exploration_ratio: float = 0.2, seed: int | None = None) -> None

Source code in src/anypinn/core/samplers.py
def __init__(
    self,
    scorer: ResidualScorer,
    oversample_factor: int = 4,
    exploration_ratio: float = 0.2,
    seed: int | None = None,
) -> None:
    if oversample_factor < 1:
        raise ValueError(f"oversample_factor must be >= 1, got {oversample_factor}.")
    if not (0.0 <= exploration_ratio <= 1.0):
        raise ValueError(f"exploration_ratio must be in [0, 1], got {exploration_ratio}.")
    self._scorer = scorer
    self._oversample = oversample_factor
    self._explore = exploration_ratio
    self._random = RandomSampler(seed=seed)

sample(n: int, domain: Domain) -> Tensor

Source code in src/anypinn/core/samplers.py
def sample(self, n: int, domain: Domain) -> Tensor:
    n_explore = max(1, int(n * self._explore))
    n_exploit = n - n_explore

    explore_pts = self._random.sample(n_explore, domain)

    if n_exploit <= 0:
        return explore_pts

    n_candidates = n_exploit * self._oversample
    candidates = self._random.sample(n_candidates, domain)

    with torch.no_grad():
        scores = self._scorer.residual_score(candidates)

    _, top_idx = scores.topk(min(n_exploit, len(scores)))
    exploit_pts = candidates[top_idx]

    return torch.cat([explore_pts, exploit_pts], dim=0)
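The explore/exploit budget split in sample can be traced with plain arithmetic. A stdlib-only sketch of that bookkeeping (the tensor sampling and residual scoring are elided):

```python
def split_budget(n: int, exploration_ratio: float, oversample_factor: int) -> tuple[int, int, int]:
    """Mirror AdaptiveSampler.sample's bookkeeping: how many points are drawn
    purely at random, how many are kept by residual score, and how many
    candidates get scored."""
    n_explore = max(1, int(n * exploration_ratio))  # always at least one random point
    n_exploit = n - n_explore
    n_candidates = n_exploit * oversample_factor    # oversampled pool to score
    return n_explore, n_exploit, n_candidates


# Defaults: 20% exploration, 4x oversampling.
print(split_budget(100, 0.2, 4))  # (20, 80, 320)
print(split_budget(1, 0.2, 4))    # (1, 0, 0): a tiny budget is all-explore
```

The `max(1, ...)` floor is why sample returns early with only random points when the budget is exhausted by exploration (`n_exploit <= 0`).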

Argument

Represents an argument that can be passed to an ODE/PDE function. Can be a fixed float value or a callable function.

Parameters:

Name Type Description Default
value float | Callable[[Tensor], Tensor]

The value (float) or function (callable).

required
Source code in src/anypinn/core/nn.py
class Argument:
    """
    Represents an argument that can be passed to an ODE/PDE function.
    Can be a fixed float value or a callable function.

    Args:
        value: The value (float) or function (callable).
    """

    def __init__(self, value: float | Callable[[Tensor], Tensor]):
        self._value = value
        self._tensor_cache: dict[torch.device, Tensor] = {}

    def __call__(self, x: Tensor) -> Tensor:
        """
        Evaluate the argument.

        Args:
            x: Input tensor (context).

        Returns:
            The value of the argument, broadcasted if necessary.
        """
        if callable(self._value):
            return self._value(x)
        device = x.device
        if device not in self._tensor_cache:
            self._tensor_cache[device] = torch.tensor(self._value, device=device)
        return self._tensor_cache[device]

    @override
    def __repr__(self) -> str:
        return f"Argument(value={self._value})"

__call__(x: Tensor) -> Tensor

Evaluate the argument.

Parameters:

Name Type Description Default
x Tensor

Input tensor (context).

required

Returns:

Type Description
Tensor

The value of the argument, broadcasted if necessary.

Source code in src/anypinn/core/nn.py
def __call__(self, x: Tensor) -> Tensor:
    """
    Evaluate the argument.

    Args:
        x: Input tensor (context).

    Returns:
        The value of the argument, broadcasted if necessary.
    """
    if callable(self._value):
        return self._value(x)
    device = x.device
    if device not in self._tensor_cache:
        self._tensor_cache[device] = torch.tensor(self._value, device=device)
    return self._tensor_cache[device]

__init__(value: float | Callable[[Tensor], Tensor])

Source code in src/anypinn/core/nn.py
def __init__(self, value: float | Callable[[Tensor], Tensor]):
    self._value = value
    self._tensor_cache: dict[torch.device, Tensor] = {}

__repr__() -> str

Source code in src/anypinn/core/nn.py
@override
def __repr__(self) -> str:
    return f"Argument(value={self._value})"

CollocationSampler

Bases: Protocol

Protocol for collocation point samplers.

Implementations must return a tensor of shape (n, domain.ndim) with all points inside the domain bounds.

Source code in src/anypinn/core/samplers.py
class CollocationSampler(Protocol):
    """Protocol for collocation point samplers.

    Implementations must return a tensor of shape ``(n, domain.ndim)`` with all
    points inside the domain bounds.
    """

    def sample(self, n: int, domain: Domain) -> Tensor: ...

sample(n: int, domain: Domain) -> Tensor

Source code in src/anypinn/core/samplers.py
def sample(self, n: int, domain: Domain) -> Tensor: ...

ColumnRef dataclass

Reference to a column in loaded data for ground truth comparison.

This allows practitioners to specify validation data by column name without writing custom functions. The column is resolved lazily when data is loaded.

Attributes:

Name Type Description
column str

Name of the column in the loaded DataFrame.

transform Callable[[Tensor], Tensor] | None

Optional transformation to apply to the column values.

Example

>>> validation = {
...     "beta": ColumnRef(column="Rt", transform=lambda rt: rt * delta),
... }

Source code in src/anypinn/core/validation.py
@dataclass
class ColumnRef:
    """
    Reference to a column in loaded data for ground truth comparison.

    This allows practitioners to specify validation data by column name
    without writing custom functions. The column is resolved lazily when
    data is loaded.

    Attributes:
        column: Name of the column in the loaded DataFrame.
        transform: Optional transformation to apply to the column values.

    Example:
        >>> validation = {
        ...     "beta": ColumnRef(column="Rt", transform=lambda rt: rt * delta),
        ... }
    """

    column: str
    transform: Callable[[Tensor], Tensor] | None = None

column: str instance-attribute

transform: Callable[[Tensor], Tensor] | None = None class-attribute instance-attribute

__init__(column: str, transform: Callable[[Tensor], Tensor] | None = None) -> None

Constraint

Bases: ABC

Abstract base class for a constraint (loss term) in the PINN. Returns a loss value for the given batch.

Source code in src/anypinn/core/problem.py
class Constraint(ABC):
    """
    Abstract base class for a constraint (loss term) in the PINN.
    Returns a loss value for the given batch.
    """

    def inject_context(self, context: InferredContext) -> None:
        """
        Inject the context into the constraint. This can be used by the constraint to access the
        data used to compute the loss.

        Args:
            context: The context to inject.
        """
        return None

    @abstractmethod
    def loss(
        self,
        batch: TrainingBatch,
        criterion: nn.Module,
        log: LogFn | None = None,
    ) -> Tensor:
        """
        Calculate the loss for this constraint.

        Args:
            batch: The current batch of data/collocation points.
            criterion: The loss function (e.g. MSE).
            log: Optional logging function.

        Returns:
            The calculated loss tensor.
        """

inject_context(context: InferredContext) -> None

Inject the context into the constraint. This can be used by the constraint to access the data used to compute the loss.

Parameters:

Name Type Description Default
context InferredContext

The context to inject.

required
Source code in src/anypinn/core/problem.py
def inject_context(self, context: InferredContext) -> None:
    """
    Inject the context into the constraint. This can be used by the constraint to access the
    data used to compute the loss.

    Args:
        context: The context to inject.
    """
    return None

loss(batch: TrainingBatch, criterion: nn.Module, log: LogFn | None = None) -> Tensor abstractmethod

Calculate the loss for this constraint.

Parameters:

Name Type Description Default
batch TrainingBatch

The current batch of data/collocation points.

required
criterion Module

The loss function (e.g. MSE).

required
log LogFn | None

Optional logging function.

None

Returns:

Type Description
Tensor

The calculated loss tensor.

Source code in src/anypinn/core/problem.py
@abstractmethod
def loss(
    self,
    batch: TrainingBatch,
    criterion: nn.Module,
    log: LogFn | None = None,
) -> Tensor:
    """
    Calculate the loss for this constraint.

    Args:
        batch: The current batch of data/collocation points.
        criterion: The loss function (e.g. MSE).
        log: Optional logging function.

    Returns:
        The calculated loss tensor.
    """
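A concrete constraint only has to implement loss; inject_context stays a no-op unless the constraint needs data from the context. A heavily hedged stdlib sketch of a data-fit constraint (a plain callable replaces the nn.Module criterion, and lists of floats replace tensors):

```python
from abc import ABC, abstractmethod


class MiniConstraint(ABC):
    """Stdlib stand-in for Constraint."""

    @abstractmethod
    def loss(self, batch, criterion) -> float: ...


class DataFitConstraint(MiniConstraint):
    """Penalize mismatch between model(x) and observed y on the data batch."""

    def __init__(self, model):
        self.model = model

    def loss(self, batch, criterion) -> float:
        (x_data, y_data), _x_coll = batch  # TrainingBatch layout: ((x, y), x_coll)
        preds = [self.model(x) for x in x_data]
        return criterion(preds, y_data)


def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)


c = DataFitConstraint(model=lambda x: 2 * x)
# ((2 - 2)^2 + (4 - 5)^2) / 2 = 0.5
print(c.loss((([1.0, 2.0], [2.0, 5.0]), []), mse))  # 0.5
```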

CosineAnnealingConfig dataclass

Configuration for Cosine Annealing LR Scheduler.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class CosineAnnealingConfig:
    """
    Configuration for Cosine Annealing LR Scheduler.
    """

    T_max: int
    eta_min: float = 0.0

    def __post_init__(self) -> None:
        if self.T_max <= 0:
            raise ValueError(f"T_max must be positive, got {self.T_max}.")

T_max: int instance-attribute

eta_min: float = 0.0 class-attribute instance-attribute

__init__(*, T_max: int, eta_min: float = 0.0) -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if self.T_max <= 0:
        raise ValueError(f"T_max must be positive, got {self.T_max}.")

DataCallback

Abstract base class for building new data callbacks.

Source code in src/anypinn/core/dataset.py
class DataCallback:
    """Abstract base class for building new data callbacks."""

    def transform_data(self, data: DataBatch, coll: Tensor) -> tuple[DataBatch, Tensor]:
        """Transform the data and collocation points."""
        return data, coll

    def on_after_setup(self, dm: "PINNDataModule") -> None:
        """Called after setup is complete."""
        return None

on_after_setup(dm: PINNDataModule) -> None

Called after setup is complete.

Source code in src/anypinn/core/dataset.py
def on_after_setup(self, dm: "PINNDataModule") -> None:
    """Called after setup is complete."""
    return None

transform_data(data: DataBatch, coll: Tensor) -> tuple[DataBatch, Tensor]

Transform the data and collocation points.

Source code in src/anypinn/core/dataset.py
def transform_data(self, data: DataBatch, coll: Tensor) -> tuple[DataBatch, Tensor]:
    """Transform the data and collocation points."""
    return data, coll
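A callback only needs to override the hooks it cares about; the base class implementations are pass-throughs. A hedged stdlib sketch of a transform_data override (lists stand in for tensors; the real hook receives an (x, y) DataBatch and a collocation tensor):

```python
class ScaleCallback:
    """Stdlib sketch of a DataCallback that rescales the x axis.

    Mirrors transform_data's contract: take (data, coll), return (data, coll),
    keeping the collocation points consistent with the rescaled data.
    """

    def __init__(self, scale: float):
        self.scale = scale

    def transform_data(self, data, coll):
        x, y = data
        x = [xi * self.scale for xi in x]        # rescale observed coordinates
        coll = [ci * self.scale for ci in coll]  # rescale collocations identically
        return (x, y), coll


cb = ScaleCallback(scale=2.0)
data, coll = cb.transform_data(([1.0, 2.0], [3.0, 4.0]), [0.5])
print(data, coll)  # ([2.0, 4.0], [3.0, 4.0]) [1.0]
```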

Domain dataclass

N-dimensional rectangular domain.

Attributes:

Name Type Description
bounds list[tuple[float, float]]

Per-dimension (min, max) pairs. bounds[i] covers dimension i.

dx list[float] | None

Per-dimension step size (None when not applicable).

Source code in src/anypinn/core/nn.py
@dataclass
class Domain:
    """
    N-dimensional rectangular domain.

    Attributes:
        bounds: Per-dimension (min, max) pairs. ``bounds[i]`` covers dimension i.
        dx: Per-dimension step size (``None`` when not applicable).
    """

    bounds: list[tuple[float, float]]
    dx: list[float] | None = None

    @property
    def ndim(self) -> int:
        """Number of spatial dimensions."""
        return len(self.bounds)

    @property
    def x0(self) -> float:
        """Lower bound of the first dimension (convenience for 1-D / time-axis access)."""
        return self.bounds[0][0]

    @property
    def x1(self) -> float:
        """Upper bound of the first dimension."""
        return self.bounds[0][1]

    @classmethod
    def from_x(cls, x: Tensor) -> Domain:
        """
        Infer domain bounds and step sizes from a coordinate tensor of shape (N, d).

        Args:
            x: Coordinate tensor of shape ``(N, d)``.

        Returns:
            Domain with bounds and dx inferred from the data.
        """
        if x.ndim != 2:
            raise ValueError(f"Expected 2-D coordinate tensor (N, d), got shape {tuple(x.shape)}.")
        if x.shape[0] < 2:
            raise ValueError(
                f"At least two points are required to infer the domain, got {x.shape[0]}."
            )

        d = x.shape[1]
        bounds = [(x[:, i].min().item(), x[:, i].max().item()) for i in range(d)]
        dx = [(x[1, i] - x[0, i]).item() for i in range(d)]
        return cls(bounds=bounds, dx=dx)

    @override
    def __repr__(self) -> str:
        return f"Domain(ndim={self.ndim}, bounds={self.bounds}, dx={self.dx})"

bounds: list[tuple[float, float]] instance-attribute

dx: list[float] | None = None class-attribute instance-attribute

ndim: int property

Number of spatial dimensions.

x0: float property

Lower bound of the first dimension (convenience for 1-D / time-axis access).

x1: float property

Upper bound of the first dimension.

__init__(bounds: list[tuple[float, float]], dx: list[float] | None = None) -> None

__repr__() -> str

Source code in src/anypinn/core/nn.py
@override
def __repr__(self) -> str:
    return f"Domain(ndim={self.ndim}, bounds={self.bounds}, dx={self.dx})"

from_x(x: Tensor) -> Domain classmethod

Infer domain bounds and step sizes from a coordinate tensor of shape (N, d).

Parameters:

Name Type Description Default
x Tensor

Coordinate tensor of shape (N, d).

required

Returns:

Type Description
Domain

Domain with bounds and dx inferred from the data.

Source code in src/anypinn/core/nn.py
@classmethod
def from_x(cls, x: Tensor) -> Domain:
    """
    Infer domain bounds and step sizes from a coordinate tensor of shape (N, d).

    Args:
        x: Coordinate tensor of shape ``(N, d)``.

    Returns:
        Domain with bounds and dx inferred from the data.
    """
    if x.ndim != 2:
        raise ValueError(f"Expected 2-D coordinate tensor (N, d), got shape {tuple(x.shape)}.")
    if x.shape[0] < 2:
        raise ValueError(
            f"At least two points are required to infer the domain, got {x.shape[0]}."
        )

    d = x.shape[1]
    bounds = [(x[:, i].min().item(), x[:, i].max().item()) for i in range(d)]
    dx = [(x[1, i] - x[0, i]).item() for i in range(d)]
    return cls(bounds=bounds, dx=dx)
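The inference in from_x reduces to per-column min/max for bounds plus a first difference for dx. A stdlib-only sketch of the same logic on nested lists (it shares from_x's assumption that rows are ordered, so the first difference is meaningful):

```python
def infer_domain(x: list[list[float]]) -> tuple[list[tuple[float, float]], list[float]]:
    """Mirror Domain.from_x on an (N, d) list of coordinate rows."""
    if len(x) < 2:
        raise ValueError("At least two points are required to infer the domain.")
    d = len(x[0])
    cols = [[row[i] for row in x] for i in range(d)]
    bounds = [(min(c), max(c)) for c in cols]       # per-dimension (min, max)
    dx = [x[1][i] - x[0][i] for i in range(d)]      # first difference, as in from_x
    return bounds, dx


# Evenly spaced 1-D grid: t = 0.0, 0.5, 1.0
bounds, dx = infer_domain([[0.0], [0.5], [1.0]])
print(bounds, dx)  # [(0.0, 1.0)] [0.5]
```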

EarlyStoppingConfig dataclass

Configuration for Early Stopping callback.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class EarlyStoppingConfig:
    """
    Configuration for Early Stopping callback.
    """

    patience: int
    mode: Literal["min", "max"]

    def __post_init__(self) -> None:
        if self.patience <= 0:
            raise ValueError(f"patience must be positive, got {self.patience}.")

mode: Literal['min', 'max'] instance-attribute

patience: int instance-attribute

__init__(*, patience: int, mode: Literal['min', 'max']) -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if self.patience <= 0:
        raise ValueError(f"patience must be positive, got {self.patience}.")

Field

Bases: Module

A neural field mapping coordinates -> vector of state variables. Example (ODE): t -> [S, I, R].

Parameters:

Name Type Description Default
config MLPConfig

Configuration for the MLP backing this field.

required
Source code in src/anypinn/core/nn.py
class Field(nn.Module):
    """
    A neural field mapping coordinates -> vector of state variables.
    Example (ODE): t -> [S, I, R].

    Args:
        config: Configuration for the MLP backing this field.
    """

    def __init__(
        self,
        config: MLPConfig,
    ):
        super().__init__()
        encode = config.encode
        if isinstance(encode, nn.Module):
            # registers → participates in .to(), .state_dict()
            self.encoder: nn.Module | None = encode
        else:
            self.encoder = None
        self._encode_fn = encode  # callable reference (module or plain fn)
        dims = [config.in_dim] + config.hidden_layers + [config.out_dim]
        act = get_activation(config.activation)

        layers: list[nn.Module] = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                layers.append(act)

        if config.output_activation is not None:
            out_act = get_activation(config.output_activation)
            layers.append(out_act)

        self.net = nn.Sequential(*layers)
        self.apply(self._init)

    @staticmethod
    def _init(m: nn.Module) -> None:
        if isinstance(m, nn.Linear):
            nn.init.xavier_normal_(m.weight)
            nn.init.zeros_(m.bias)

    @override
    def forward(self, x: Tensor) -> Tensor:
        """
        Forward pass of the field.

        Args:
            x: Input coordinates (e.g. time, space).

        Returns:
            The values of the field at input coordinates.
        """
        if self._encode_fn is not None:
            x = self._encode_fn(x)
        return cast(Tensor, self.net(x))

encoder: nn.Module | None = encode instance-attribute

net = nn.Sequential(*layers) instance-attribute

__init__(config: MLPConfig)

Source code in src/anypinn/core/nn.py
def __init__(
    self,
    config: MLPConfig,
):
    super().__init__()
    encode = config.encode
    if isinstance(encode, nn.Module):
        # registers → participates in .to(), .state_dict()
        self.encoder: nn.Module | None = encode
    else:
        self.encoder = None
    self._encode_fn = encode  # callable reference (module or plain fn)
    dims = [config.in_dim] + config.hidden_layers + [config.out_dim]
    act = get_activation(config.activation)

    layers: list[nn.Module] = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:
            layers.append(act)

    if config.output_activation is not None:
        out_act = get_activation(config.output_activation)
        layers.append(out_act)

    self.net = nn.Sequential(*layers)
    self.apply(self._init)

forward(x: Tensor) -> Tensor

Forward pass of the field.

Parameters:

Name Type Description Default
x Tensor

Input coordinates (e.g. time, space).

required

Returns:

Type Description
Tensor

The values of the field at input coordinates.

Source code in src/anypinn/core/nn.py
@override
def forward(self, x: Tensor) -> Tensor:
    """
    Forward pass of the field.

    Args:
        x: Input coordinates (e.g. time, space).

    Returns:
        The values of the field at input coordinates.
    """
    if self._encode_fn is not None:
        x = self._encode_fn(x)
    return cast(Tensor, self.net(x))

FourierEncoding

Bases: Module

Sinusoidal positional encoding for periodic or high-frequency signals.

For input \(\mathbf{x} \in \mathbb{R}^{n \times d}\) and num_frequencies \(K\), the encoding is:

\[ \gamma(\mathbf{x}) = [\mathbf{x},\, \sin(\mathbf{x}),\, \cos(\mathbf{x}),\, \sin(2\mathbf{x}),\, \cos(2\mathbf{x}),\, \ldots,\, \sin(K\mathbf{x}),\, \cos(K\mathbf{x})] \]

producing shape \((n,\, d\,(1 + 2K))\) when include_input=True, or \((n,\, 2dK)\) when include_input=False.

Parameters:

Name Type Description Default
num_frequencies int

Number of frequency bands \(K \geq 1\).

6
include_input bool

Prepend original coordinates to the encoded output.

True
Source code in src/anypinn/lib/encodings.py
class FourierEncoding(nn.Module):
    """Sinusoidal positional encoding for periodic or high-frequency signals.

    For input $\\mathbf{x} \\in \\mathbb{R}^{n \\times d}$ and
    `num_frequencies` $K$, the encoding is:

    $$
    \\gamma(\\mathbf{x}) = [\\mathbf{x},\\,
        \\sin(\\mathbf{x}),\\, \\cos(\\mathbf{x}),\\,
        \\sin(2\\mathbf{x}),\\, \\cos(2\\mathbf{x}),\\,
        \\ldots,\\,
        \\sin(K\\mathbf{x}),\\, \\cos(K\\mathbf{x})]
    $$

    producing shape $(n,\\, d\\,(1 + 2K))$ when `include_input=True`,
    or $(n,\\, 2dK)$ when `include_input=False`.

    Args:
        num_frequencies: Number of frequency bands $K \\geq 1$.
        include_input:   Prepend original coordinates to the encoded output.
    """

    def __init__(self, num_frequencies: int = 6, include_input: bool = True) -> None:
        if num_frequencies < 1:
            raise ValueError(f"num_frequencies must be >= 1, got {num_frequencies}.")
        super().__init__()
        self.num_frequencies = num_frequencies
        self.include_input = include_input

    def out_dim(self, in_dim: int) -> int:
        """Compute output dimension given input dimension."""
        factor = 1 + 2 * self.num_frequencies if self.include_input else 2 * self.num_frequencies
        return in_dim * factor

    def forward(self, x: Tensor) -> Tensor:
        parts = [x] if self.include_input else []
        for k in range(1, self.num_frequencies + 1):
            parts.append(torch.sin(k * x))
            parts.append(torch.cos(k * x))
        return torch.cat(parts, dim=-1)

include_input = include_input instance-attribute

num_frequencies = num_frequencies instance-attribute

__init__(num_frequencies: int = 6, include_input: bool = True) -> None

Source code in src/anypinn/lib/encodings.py
def __init__(self, num_frequencies: int = 6, include_input: bool = True) -> None:
    if num_frequencies < 1:
        raise ValueError(f"num_frequencies must be >= 1, got {num_frequencies}.")
    super().__init__()
    self.num_frequencies = num_frequencies
    self.include_input = include_input

forward(x: Tensor) -> Tensor

Source code in src/anypinn/lib/encodings.py
def forward(self, x: Tensor) -> Tensor:
    parts = [x] if self.include_input else []
    for k in range(1, self.num_frequencies + 1):
        parts.append(torch.sin(k * x))
        parts.append(torch.cos(k * x))
    return torch.cat(parts, dim=-1)

out_dim(in_dim: int) -> int

Compute output dimension given input dimension.

Source code in src/anypinn/lib/encodings.py
def out_dim(self, in_dim: int) -> int:
    """Compute output dimension given input dimension."""
    factor = 1 + 2 * self.num_frequencies if self.include_input else 2 * self.num_frequencies
    return in_dim * factor
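The encoding and its width formula can be checked with stdlib math. A sketch for a single scalar coordinate (a list replaces the tensor, but the loop and the out_dim formula match the source above):

```python
import math


def fourier_encode(x: float, num_frequencies: int, include_input: bool = True) -> list[float]:
    """Per-scalar version of FourierEncoding.forward."""
    parts = [x] if include_input else []
    for k in range(1, num_frequencies + 1):
        parts.append(math.sin(k * x))
        parts.append(math.cos(k * x))
    return parts


def out_dim(in_dim: int, num_frequencies: int, include_input: bool = True) -> int:
    """Same formula as FourierEncoding.out_dim: d * (1 + 2K) or d * 2K."""
    factor = 1 + 2 * num_frequencies if include_input else 2 * num_frequencies
    return in_dim * factor


enc = fourier_encode(0.0, num_frequencies=2)
print(enc)                        # [0.0, 0.0, 1.0, 0.0, 1.0]: sin(k*0)=0, cos(k*0)=1
print(len(enc) == out_dim(1, 2))  # True: 1 * (1 + 2*2) = 5
```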

GenerationConfig dataclass

Bases: TrainingDataConfig

Configuration for data generation.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class GenerationConfig(TrainingDataConfig):
    """
    Configuration for data generation.
    """

    x: Tensor
    noise_level: float
    args_to_train: ArgsRegistry

args_to_train: ArgsRegistry instance-attribute

noise_level: float instance-attribute

x: Tensor instance-attribute

__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None, x: Tensor, noise_level: float, args_to_train: ArgsRegistry) -> None

InferredContext dataclass

Runtime context inferred from training data.

This holds the data that is either explicitly provided in props or inferred from training data.

Source code in src/anypinn/core/context.py
@dataclass
class InferredContext:
    """
    Runtime context inferred from training data.

    This holds the data that is either explicitly provided in props or inferred from training data.
    """

    def __init__(
        self,
        x: Tensor,
        y: Tensor,
        validation: ResolvedValidation,
    ):
        """
        Infer context from either generated or loaded data.

        Args:
            x: x coordinates.
            y: observations.
            validation: Resolved validation dictionary.
        """

        self.domain = Domain.from_x(x)
        self.validation = validation

domain = Domain.from_x(x) instance-attribute

validation = validation instance-attribute

__init__(x: Tensor, y: Tensor, validation: ResolvedValidation)

Infer context from either generated or loaded data.

Parameters:

Name Type Description Default
x Tensor

x coordinates.

required
y Tensor

observations.

required
validation ResolvedValidation

Resolved validation dictionary.

required
Source code in src/anypinn/core/context.py
def __init__(
    self,
    x: Tensor,
    y: Tensor,
    validation: ResolvedValidation,
):
    """
    Infer context from either generated or loaded data.

    Args:
        x: x coordinates.
        y: observations.
        validation: Resolved validation dictionary.
    """

    self.domain = Domain.from_x(x)
    self.validation = validation

IngestionConfig dataclass

Bases: TrainingDataConfig

Configuration for data ingestion from files. If x_column is None, the data is assumed to be evenly spaced.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class IngestionConfig(TrainingDataConfig):
    """
    Configuration for data ingestion from files.
    If x_column is None, the data is assumed to be evenly spaced.
    """

    df_path: Path
    x_transform: Callable[[Any], Any] | None = None
    x_column: str | None = None
    y_columns: list[str]

df_path: Path instance-attribute

x_column: str | None = None class-attribute instance-attribute

x_transform: Callable[[Any], Any] | None = None class-attribute instance-attribute

y_columns: list[str] instance-attribute

__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None, df_path: Path, x_transform: Callable[[Any], Any] | None = None, x_column: str | None = None, y_columns: list[str]) -> None

LBFGSConfig dataclass

Configuration for the L-BFGS optimizer.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class LBFGSConfig:
    """
    Configuration for the L-BFGS optimizer.
    """

    lr: float = 1.0
    max_iter: int = 20
    max_eval: int | None = None
    history_size: int = 100
    line_search_fn: str | None = "strong_wolfe"

    def __post_init__(self) -> None:
        if self.lr <= 0:
            raise ValueError(f"lr must be positive, got {self.lr}.")
        if self.max_iter <= 0:
            raise ValueError(f"max_iter must be positive, got {self.max_iter}.")
        if self.history_size <= 0:
            raise ValueError(f"history_size must be positive, got {self.history_size}.")

history_size: int = 100 class-attribute instance-attribute

line_search_fn: str | None = 'strong_wolfe' class-attribute instance-attribute

lr: float = 1.0 class-attribute instance-attribute

max_eval: int | None = None class-attribute instance-attribute

max_iter: int = 20 class-attribute instance-attribute

__init__(*, lr: float = 1.0, max_iter: int = 20, max_eval: int | None = None, history_size: int = 100, line_search_fn: str | None = 'strong_wolfe') -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if self.lr <= 0:
        raise ValueError(f"lr must be positive, got {self.lr}.")
    if self.max_iter <= 0:
        raise ValueError(f"max_iter must be positive, got {self.max_iter}.")
    if self.history_size <= 0:
        raise ValueError(f"history_size must be positive, got {self.history_size}.")
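These settings mirror the keyword arguments of `torch.optim.LBFGS`. A minimal sketch of wiring such a config into the optimizer (the `cfg` dict and the single dummy parameter here are illustrative, not part of anypinn):

```python
import torch

# Illustrative settings matching LBFGSConfig's defaults.
cfg = dict(lr=1.0, max_iter=20, max_eval=None, history_size=100, line_search_fn="strong_wolfe")

# L-BFGS needs at least one parameter to optimize; a dummy one suffices here.
p = torch.nn.Parameter(torch.zeros(1))
opt = torch.optim.LBFGS([p], **cfg)
```

The optimizer records these values in its `defaults` mapping, so a config object of this shape can be unpacked directly into the constructor.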

LatinHypercubeSampler

Latin Hypercube sampler (pure-PyTorch, no SciPy dependency).

Stratifies each dimension into n equal intervals and places one sample per interval, then shuffles columns independently.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `seed` | `int \| None` | Optional seed for reproducible sampling. | `None` |
Source code in src/anypinn/core/samplers.py
class LatinHypercubeSampler:
    """Latin Hypercube sampler (pure-PyTorch, no SciPy dependency).

    Stratifies each dimension into ``n`` equal intervals and places one sample
    per interval, then shuffles columns independently.

    Args:
        seed: Optional seed for reproducible sampling.
    """

    def __init__(self, seed: int | None = None) -> None:
        self._gen = torch.Generator()
        if seed is not None:
            self._gen.manual_seed(seed)

    def sample(self, n: int, domain: Domain) -> Tensor:
        d = domain.ndim
        result = torch.empty(n, d)

        for i, (lo, hi) in enumerate(domain.bounds):
            perm = torch.randperm(n, generator=self._gen)
            base = (perm.float() + torch.rand(n, generator=self._gen)) / n
            result[:, i] = base * (hi - lo) + lo

        return result
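The stratification step can be checked in isolation: for one dimension, `perm + rand` divided by `n` places exactly one sample in each of the `n` equal intervals. A stand-alone sketch of that property (the variable names below are illustrative):

```python
import torch

# One dimension of Latin Hypercube sampling: n strata, one sample per stratum.
n, lo, hi = 8, 0.0, 2.0
gen = torch.Generator().manual_seed(0)

perm = torch.randperm(n, generator=gen)                    # which stratum each sample gets
base = (perm.float() + torch.rand(n, generator=gen)) / n   # jittered position within the stratum
samples = base * (hi - lo) + lo                            # scaled to [lo, hi)

# Recover each sample's stratum index; every stratum appears exactly once.
strata = ((samples - lo) / (hi - lo) * n).floor().long()
assert sorted(strata.tolist()) == list(range(n))
```

Because each column is permuted independently, the same guarantee holds per dimension while the pairing between dimensions stays random.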

__init__(seed: int | None = None) -> None

Source code in src/anypinn/core/samplers.py
def __init__(self, seed: int | None = None) -> None:
    self._gen = torch.Generator()
    if seed is not None:
        self._gen.manual_seed(seed)

sample(n: int, domain: Domain) -> Tensor

Source code in src/anypinn/core/samplers.py
def sample(self, n: int, domain: Domain) -> Tensor:
    d = domain.ndim
    result = torch.empty(n, d)

    for i, (lo, hi) in enumerate(domain.bounds):
        perm = torch.randperm(n, generator=self._gen)
        base = (perm.float() + torch.rand(n, generator=self._gen)) / n
        result[:, i] = base * (hi - lo) + lo

    return result

LogFn

Bases: Protocol

A function that logs a value to a dictionary.

Source code in src/anypinn/core/types.py
class LogFn(Protocol):
    """
    A function that logs a value to a dictionary.
    """

    def __call__(self, name: str, value: Tensor, progress_bar: bool = False) -> None:
        """
        Log a value.

        Args:
            name: The name to log the value under.
            value: The value to log.
            progress_bar: Whether the value should be logged to the progress bar.
        """
        ...

__call__(name: str, value: Tensor, progress_bar: bool = False) -> None

Log a value.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `name` | `str` | The name to log the value under. | *required* |
| `value` | `Tensor` | The value to log. | *required* |
| `progress_bar` | `bool` | Whether the value should be logged to the progress bar. | `False` |
Source code in src/anypinn/core/types.py
def __call__(self, name: str, value: Tensor, progress_bar: bool = False) -> None:
    """
    Log a value.

    Args:
        name: The name to log the value under.
        value: The value to log.
        progress_bar: Whether the value should be logged to the progress bar.
    """
    ...

LogUniform1DSampler

Log-uniform sampler for 1-D domains (reproduces SIR collocation behavior).

Samples uniformly in log1p space and maps back via expm1, producing a distribution that is denser near the lower bound — useful for epidemic models where early dynamics are most informative.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `seed` | `int \| None` | Optional seed for reproducible sampling. | `None` |

Raises:

| Type | Description |
|------|-------------|
| `ValueError` | If the domain is not 1-D or `x0 <= -1`. |

Source code in src/anypinn/core/samplers.py
class LogUniform1DSampler:
    """Log-uniform sampler for 1-D domains (reproduces SIR collocation behavior).

    Samples uniformly in ``log1p`` space and maps back via ``expm1``, producing
    a distribution that is denser near the lower bound — useful for epidemic
    models where early dynamics are most informative.

    Args:
        seed: Optional seed for reproducible sampling.

    Raises:
        ValueError: If the domain is not 1-D or ``x0 <= -1``.
    """

    def __init__(self, seed: int | None = None) -> None:
        self._gen = torch.Generator()
        if seed is not None:
            self._gen.manual_seed(seed)

    def sample(self, n: int, domain: Domain) -> Tensor:
        if domain.ndim != 1:
            raise ValueError(
                f"log_uniform_1d sampler supports only 1-D domains, got ndim={domain.ndim}."
            )
        x0, x1 = domain.x0, domain.x1
        if x0 <= -1.0:
            raise ValueError(f"log_uniform_1d requires x0 > -1 for log1p, got x0={x0}.")
        log_lo = torch.tensor(x0, dtype=torch.float32).log1p()
        log_hi = torch.tensor(x1, dtype=torch.float32).log1p()
        u = torch.rand((n, 1), generator=self._gen)
        return torch.expm1(u * (log_hi - log_lo) + log_lo)
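The `log1p`/`expm1` mapping can be reproduced with the standard library to see the density effect. In this sketch (the bounds `x0`, `x1` are illustrative), the endpoints of `u`-space map exactly to the domain bounds, while the midpoint lands far below the arithmetic midpoint:

```python
import math

# Log-uniform mapping: u ~ Uniform(0, 1) interpolated in log1p space.
x0, x1 = 0.0, 100.0
log_lo, log_hi = math.log1p(x0), math.log1p(x1)

def log_uniform(u: float) -> float:
    return math.expm1(u * (log_hi - log_lo) + log_lo)

assert abs(log_uniform(0.0) - x0) < 1e-9   # lower bound preserved
assert abs(log_uniform(1.0) - x1) < 1e-6   # upper bound preserved
assert log_uniform(0.5) < (x0 + x1) / 2    # midpoint pulled toward x0
```

Here `log_uniform(0.5)` is roughly `sqrt(101) - 1 ≈ 9.05`, so half of all samples fall below that point: the density is concentrated near the lower bound, where early dynamics live.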

__init__(seed: int | None = None) -> None

Source code in src/anypinn/core/samplers.py
def __init__(self, seed: int | None = None) -> None:
    self._gen = torch.Generator()
    if seed is not None:
        self._gen.manual_seed(seed)

sample(n: int, domain: Domain) -> Tensor

Source code in src/anypinn/core/samplers.py
def sample(self, n: int, domain: Domain) -> Tensor:
    if domain.ndim != 1:
        raise ValueError(
            f"log_uniform_1d sampler supports only 1-D domains, got ndim={domain.ndim}."
        )
    x0, x1 = domain.x0, domain.x1
    if x0 <= -1.0:
        raise ValueError(f"log_uniform_1d requires x0 > -1 for log1p, got x0={x0}.")
    log_lo = torch.tensor(x0, dtype=torch.float32).log1p()
    log_hi = torch.tensor(x1, dtype=torch.float32).log1p()
    u = torch.rand((n, 1), generator=self._gen)
    return torch.expm1(u * (log_hi - log_lo) + log_lo)

MLPConfig dataclass

Configuration for a Multi-Layer Perceptron (MLP).

Attributes:

| Name | Type | Description |
|------|------|-------------|
| `in_dim` | `int` | Dimension of input layer. |
| `out_dim` | `int` | Dimension of output layer. |
| `hidden_layers` | `list[int]` | List of dimensions for hidden layers. |
| `activation` | `Activations` | Activation function to use between layers. |
| `output_activation` | `Activations \| None` | Optional activation function for the output layer. |
| `encode` | `Callable[[Tensor], Tensor] \| None` | Optional function to encode inputs before passing to MLP. |

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class MLPConfig:
    """
    Configuration for a Multi-Layer Perceptron (MLP).

    Attributes:
        in_dim: Dimension of input layer.
        out_dim: Dimension of output layer.
        hidden_layers: List of dimensions for hidden layers.
        activation: Activation function to use between layers.
        output_activation: Optional activation function for the output layer.
        encode: Optional function to encode inputs before passing to MLP.
    """

    in_dim: int
    out_dim: int
    hidden_layers: list[int]
    activation: Activations
    output_activation: Activations | None = None
    encode: Callable[[Tensor], Tensor] | None = None
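A quick sketch of how these fields translate into layer sizes when a network is built from the config (see the `Parameter` construction below): the dimension list is `[in_dim] + hidden_layers + [out_dim]`, with one linear layer per consecutive pair and the activation after every linear layer except the last. The example values here are illustrative:

```python
# Hypothetical MLPConfig-style values: 1 input, three hidden layers of 64, 3 outputs.
in_dim, hidden_layers, out_dim = 1, [64, 64, 64], 3

dims = [in_dim] + hidden_layers + [out_dim]   # full layer-size sequence
n_linear = len(dims) - 1                      # one Linear per consecutive pair of dims
n_hidden_acts = len(dims) - 2                 # activation after all Linears but the last

assert dims == [1, 64, 64, 64, 3]
assert n_linear == 4
assert n_hidden_acts == 3
```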

activation: Activations instance-attribute

encode: Callable[[Tensor], Tensor] | None = None class-attribute instance-attribute

hidden_layers: list[int] instance-attribute

in_dim: int instance-attribute

out_dim: int instance-attribute

output_activation: Activations | None = None class-attribute instance-attribute

__init__(*, in_dim: int, out_dim: int, hidden_layers: list[int], activation: Activations, output_activation: Activations | None = None, encode: Callable[[Tensor], Tensor] | None = None) -> None

PINNDataModule

Bases: LightningDataModule, ABC

LightningDataModule for PINNs. Manages data and collocation datasets and creates the combined PINNDataset.

Collocation points are generated via a CollocationSampler selected by the collocation_sampler field in TrainingDataConfig (string literal). Subclasses only need to implement gen_data(); collocation generation is handled by the sampler resolved from the hyperparameters.

Attributes:

| Name | Type | Description |
|------|------|-------------|
| `pinn_ds` | | Combined PINNDataset for training. |
| `callbacks` | `list[DataCallback]` | Sequence of DataCallback callbacks applied after data loading. |

Source code in src/anypinn/core/dataset.py
class PINNDataModule(pl.LightningDataModule, ABC):
    """
    LightningDataModule for PINNs.
    Manages data and collocation datasets and creates the combined PINNDataset.

    Collocation points are generated via a ``CollocationSampler`` selected by the
    ``collocation_sampler`` field in ``TrainingDataConfig`` (string literal).
    Subclasses only need to implement ``gen_data()``; collocation generation is
    handled by the sampler resolved from the hyperparameters.

    Attributes:
        pinn_ds: Combined PINNDataset for training.
        callbacks: Sequence of DataCallback callbacks applied after data loading.
    """

    def __init__(
        self,
        hp: PINNHyperparameters,
        validation: ValidationRegistry | None = None,
        callbacks: Sequence[DataCallback] | None = None,
        residual_scorer: ResidualScorer | None = None,
    ) -> None:
        super().__init__()
        self.hp = hp
        self.callbacks: list[DataCallback] = list(callbacks) if callbacks else []
        self._residual_scorer = residual_scorer

        self._unresolved_validation = validation or {}
        self._context: InferredContext | None = None

    def _build_sampler(self, strategy: CollocationStrategies) -> CollocationSampler:
        """Resolve a collocation sampler from a strategy name."""
        return build_sampler(
            strategy=strategy,
            seed=self.hp.training_data.collocation_seed,
            scorer=self._residual_scorer,
        )

    def load_data(self, config: IngestionConfig) -> DataBatch:
        """Load raw data from IngestionConfig."""
        df = pd.read_csv(config.df_path)

        if config.x_column is not None:
            x_values = df[config.x_column].values

            if config.x_transform is not None:
                x_values = config.x_transform(x_values)

            x = torch.tensor(x_values, dtype=torch.float32)
        else:
            x = torch.arange(len(df), dtype=torch.float32)

        y = torch.tensor(df[config.y_columns].values, dtype=torch.float32)

        if y.ndim == 1:
            y = y.unsqueeze(-1)  # (N,) → (N, 1)
        y = y.unsqueeze(-1)  # (N, k) → (N, k, 1) always

        return x.unsqueeze(-1), y

    @abstractmethod
    def gen_data(self, config: GenerationConfig) -> DataBatch:
        """Generate synthetic data from GenerationConfig."""

    @override
    def setup(self, stage: str | None = None) -> None:
        """
        Load raw data from IngestionConfig, or generate synthetic data from GenerationConfig.
        Apply registered callbacks, create InferredContext and datasets.
        """
        config = self.hp.training_data

        self.validation = resolve_validation(
            self._unresolved_validation,
            config.df_path if isinstance(config, IngestionConfig) else None,
        )

        self.data = (
            self.load_data(config)
            if isinstance(config, IngestionConfig)
            else self.gen_data(config)
        )

        domain = Domain.from_x(self.data[0])
        self._domain = domain
        self._sampler = self._build_sampler(config.collocation_sampler)
        self.coll = self._sampler.sample(config.collocations, domain)

        for callback in self.callbacks:
            self.data, self.coll = callback.transform_data(self.data, self.coll)

        x_data, y_data = self.data

        if x_data.shape[0] != y_data.shape[0]:
            raise ValueError(
                f"Size mismatch: x has {x_data.shape[0]} rows, y has {y_data.shape[0]} rows."
            )
        if x_data.ndim != 2 or x_data.shape[1] < 1:
            raise ValueError(f"Expected x shape (n, d) with d >= 1, got {tuple(x_data.shape)}.")
        if y_data.ndim < 2 or y_data.shape[-1] != 1:
            raise ValueError(f"Expected y shape (n, ..., 1), got {tuple(y_data.shape)}.")
        if self.coll.ndim != 2 or self.coll.shape[1] < 1:
            raise ValueError(
                f"Expected coll shape (m, d) with d >= 1, got {tuple(self.coll.shape)}."
            )
        if x_data.shape[1] != self.coll.shape[1]:
            raise ValueError(
                f"Spatial dimension mismatch: x_data has d={x_data.shape[1]}, "
                f"coll has d={self.coll.shape[1]}. Both must share the same number of dimensions."
            )

        self._data_size = x_data.shape[0]

        self._context = InferredContext(
            x_data,
            y_data,
            self.validation,
        )

        self.pinn_ds = PINNDataset(
            x_data,
            y_data,
            self.coll,
            config.batch_size,
            config.data_ratio,
        )

        self.predict_ds = TensorDataset(
            x_data,
            y_data,
        )

        for callback in self.callbacks:
            callback.on_after_setup(self)

    @override
    def train_dataloader(self) -> DataLoader[TrainingBatch]:
        """
        Returns the training dataloader using PINNDataset.
        """
        return DataLoader[TrainingBatch](
            self.pinn_ds,
            batch_size=None,  # handled internally
            num_workers=cpu_count() or 1,
            persistent_workers=True,
            pin_memory=True,
        )

    @override
    def predict_dataloader(self) -> DataLoader[PredictionBatch]:
        """
        Returns the prediction dataloader using only the data dataset.
        """
        return DataLoader[PredictionBatch](
            cast(Dataset[PredictionBatch], self.predict_ds),
            batch_size=self._data_size,
            num_workers=cpu_count() or 1,
            persistent_workers=True,
            pin_memory=True,
        )

    @property
    def context(self) -> InferredContext:
        if self._context is None:
            raise RuntimeError("Context does not exist. Call setup() before accessing context.")
        return self._context

callbacks: list[DataCallback] = list(callbacks) if callbacks else [] instance-attribute

context: InferredContext property

hp = hp instance-attribute

__init__(hp: PINNHyperparameters, validation: ValidationRegistry | None = None, callbacks: Sequence[DataCallback] | None = None, residual_scorer: ResidualScorer | None = None) -> None

Source code in src/anypinn/core/dataset.py
def __init__(
    self,
    hp: PINNHyperparameters,
    validation: ValidationRegistry | None = None,
    callbacks: Sequence[DataCallback] | None = None,
    residual_scorer: ResidualScorer | None = None,
) -> None:
    super().__init__()
    self.hp = hp
    self.callbacks: list[DataCallback] = list(callbacks) if callbacks else []
    self._residual_scorer = residual_scorer

    self._unresolved_validation = validation or {}
    self._context: InferredContext | None = None

gen_data(config: GenerationConfig) -> DataBatch abstractmethod

Generate synthetic data from GenerationConfig.

Source code in src/anypinn/core/dataset.py
@abstractmethod
def gen_data(self, config: GenerationConfig) -> DataBatch:
    """Generate synthetic data from GenerationConfig."""

load_data(config: IngestionConfig) -> DataBatch

Load raw data from IngestionConfig.

Source code in src/anypinn/core/dataset.py
def load_data(self, config: IngestionConfig) -> DataBatch:
    """Load raw data from IngestionConfig."""
    df = pd.read_csv(config.df_path)

    if config.x_column is not None:
        x_values = df[config.x_column].values

        if config.x_transform is not None:
            x_values = config.x_transform(x_values)

        x = torch.tensor(x_values, dtype=torch.float32)
    else:
        x = torch.arange(len(df), dtype=torch.float32)

    y = torch.tensor(df[config.y_columns].values, dtype=torch.float32)

    if y.ndim == 1:
        y = y.unsqueeze(-1)  # (N,) → (N, 1)
    y = y.unsqueeze(-1)  # (N, k) → (N, k, 1) always

    return x.unsqueeze(-1), y
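The unsqueeze chain at the end of `load_data` normalizes shapes so downstream code always sees `x` as `(N, 1)` and `y` as `(N, k, 1)`. A sketch of the shape pipeline, assuming a CSV with `N` rows and `k` observed columns (the values below are illustrative):

```python
import torch

N, k = 5, 2
x = torch.arange(N, dtype=torch.float32)   # (N,)   evenly spaced fallback when x_column is None
y = torch.zeros(N, k)                      # (N, k) as read from df[y_columns]

y = y.unsqueeze(-1)                        # (N, k) -> (N, k, 1), always applied
x = x.unsqueeze(-1)                        # (N,)   -> (N, 1)

assert x.shape == (N, 1)
assert y.shape == (N, k, 1)
```

For a single observed column that starts as `(N,)`, the extra branch first lifts it to `(N, 1)`, so the final shape is `(N, 1, 1)`: the trailing singleton dimension is guaranteed either way.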

predict_dataloader() -> DataLoader[PredictionBatch]

Returns the prediction dataloader using only the data dataset.

Source code in src/anypinn/core/dataset.py
@override
def predict_dataloader(self) -> DataLoader[PredictionBatch]:
    """
    Returns the prediction dataloader using only the data dataset.
    """
    return DataLoader[PredictionBatch](
        cast(Dataset[PredictionBatch], self.predict_ds),
        batch_size=self._data_size,
        num_workers=cpu_count() or 1,
        persistent_workers=True,
        pin_memory=True,
    )

setup(stage: str | None = None) -> None

Load raw data from IngestionConfig, or generate synthetic data from GenerationConfig. Apply registered callbacks, create InferredContext and datasets.

Source code in src/anypinn/core/dataset.py
@override
def setup(self, stage: str | None = None) -> None:
    """
    Load raw data from IngestionConfig, or generate synthetic data from GenerationConfig.
    Apply registered callbacks, create InferredContext and datasets.
    """
    config = self.hp.training_data

    self.validation = resolve_validation(
        self._unresolved_validation,
        config.df_path if isinstance(config, IngestionConfig) else None,
    )

    self.data = (
        self.load_data(config)
        if isinstance(config, IngestionConfig)
        else self.gen_data(config)
    )

    domain = Domain.from_x(self.data[0])
    self._domain = domain
    self._sampler = self._build_sampler(config.collocation_sampler)
    self.coll = self._sampler.sample(config.collocations, domain)

    for callback in self.callbacks:
        self.data, self.coll = callback.transform_data(self.data, self.coll)

    x_data, y_data = self.data

    if x_data.shape[0] != y_data.shape[0]:
        raise ValueError(
            f"Size mismatch: x has {x_data.shape[0]} rows, y has {y_data.shape[0]} rows."
        )
    if x_data.ndim != 2 or x_data.shape[1] < 1:
        raise ValueError(f"Expected x shape (n, d) with d >= 1, got {tuple(x_data.shape)}.")
    if y_data.ndim < 2 or y_data.shape[-1] != 1:
        raise ValueError(f"Expected y shape (n, ..., 1), got {tuple(y_data.shape)}.")
    if self.coll.ndim != 2 or self.coll.shape[1] < 1:
        raise ValueError(
            f"Expected coll shape (m, d) with d >= 1, got {tuple(self.coll.shape)}."
        )
    if x_data.shape[1] != self.coll.shape[1]:
        raise ValueError(
            f"Spatial dimension mismatch: x_data has d={x_data.shape[1]}, "
            f"coll has d={self.coll.shape[1]}. Both must share the same number of dimensions."
        )

    self._data_size = x_data.shape[0]

    self._context = InferredContext(
        x_data,
        y_data,
        self.validation,
    )

    self.pinn_ds = PINNDataset(
        x_data,
        y_data,
        self.coll,
        config.batch_size,
        config.data_ratio,
    )

    self.predict_ds = TensorDataset(
        x_data,
        y_data,
    )

    for callback in self.callbacks:
        callback.on_after_setup(self)

train_dataloader() -> DataLoader[TrainingBatch]

Returns the training dataloader using PINNDataset.

Source code in src/anypinn/core/dataset.py
@override
def train_dataloader(self) -> DataLoader[TrainingBatch]:
    """
    Returns the training dataloader using PINNDataset.
    """
    return DataLoader[TrainingBatch](
        self.pinn_ds,
        batch_size=None,  # handled internally
        num_workers=cpu_count() or 1,
        persistent_workers=True,
        pin_memory=True,
    )

PINNDataset

Bases: Dataset[TrainingBatch]

Dataset used for PINN training. Combines labeled data and collocation points per sample. Given a data_ratio, the number of data points K is determined either as data_ratio * batch_size (when the ratio is a float in [0, 1]) or as an absolute count (when it is an integer). The remaining C = batch_size - K points are collocation points. Data points are sampled without replacement per epoch: the dataset cycles through all data points and, in the last batch, wraps around to the first indices to preserve the batch size. Collocation points are sampled with replacement from the pool. The dataset produces a batch of shape ((x_data[K,d], y_data[K,...]), x_coll[C,d]).

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `x_data` | `Tensor` | Data point x coordinates (time values). | *required* |
| `y_data` | `Tensor` | Data point y values (observations). | *required* |
| `x_coll` | `Tensor` | Collocation point x coordinates. | *required* |
| `batch_size` | `int` | Size of the batch. | *required* |
| `data_ratio` | `float \| int` | Ratio of data points to collocation points, either as a ratio [0,1] or absolute count [0,batch_size]. | *required* |
Source code in src/anypinn/core/dataset.py
class PINNDataset(Dataset[TrainingBatch]):
    """
    Dataset used for PINN training. Combines labeled data and collocation points
    per sample.  Given a data_ratio, the amount of data points `K` is determined
    either by applying `data_ratio * batch_size` if ratio is a float between 0
    and 1 or by an absolute count if ratio is an integer. The remaining `C`
    points are used for collocation.  The data points are sampled without
    replacement per epoch i.e. cycles through all data points and at the last
    batch, wraps around to the first indices to ensure batch size. The collocation
    points are sampled with replacement from the pool.
    The dataset produces a batch of shape ((x_data[K,d], y_data[K,...]), x_coll[C,d]).

    Args:
        x_data: Data point x coordinates (time values).
        y_data: Data point y values (observations).
        x_coll: Collocation point x coordinates.
        batch_size: Size of the batch.
        data_ratio: Ratio of data points to collocation points, either as a ratio [0,1] or absolute
            count [0,batch_size].
    """

    def __init__(
        self,
        x_data: Tensor,
        y_data: Tensor,
        x_coll: Tensor,
        batch_size: int,
        data_ratio: float | int,
    ):
        super().__init__()
        if batch_size <= 0:
            raise ValueError(f"batch_size must be positive, got {batch_size}.")

        if isinstance(data_ratio, float):
            if not (0.0 <= data_ratio <= 1.0):
                raise ValueError(f"Float data_ratio must be in [0.0, 1.0], got {data_ratio}.")
            self.K = round(data_ratio * batch_size)
        else:
            if not (0 <= data_ratio <= batch_size):
                raise ValueError(
                    f"Integer data_ratio must be in [0, {batch_size}], got {data_ratio}."
                )
            self.K = data_ratio

        self.x_data = x_data
        self.y_data = y_data
        self.x_coll = x_coll

        self.batch_size = batch_size
        self.C = batch_size - self.K

        self.total_data = x_data.shape[0]
        self.total_coll = x_coll.shape[0]

        self._coll_gen = torch.Generator()

    def __len__(self) -> int:
        """Number of steps per epoch to see all data points once. Ceiling division."""
        return (self.total_data + self.K - 1) // self.K

    @override
    def __getitem__(self, index: int) -> TrainingBatch:
        """Return one sample containing K data points and C collocation points."""
        data_idx = self._get_data_indices(index)
        coll_idx = self._get_coll_indices(index)

        x_data = self.x_data[data_idx]
        y_data = self.y_data[data_idx]
        x_coll = self.x_coll[coll_idx]

        return ((x_data, y_data), x_coll)

    def _get_data_indices(self, idx: int) -> Tensor:
        """Get data indices for this step without replacement.
        When getting the last batch, wrap around to the first indices to ensure batch size.
        """
        if self.total_data == 0:
            return torch.empty(0, 1)

        start = idx * self.K
        indices = [(start + i) % self.total_data for i in range(self.K)]
        return torch.tensor(indices)

    def _get_coll_indices(self, idx: int) -> Tensor:
        """Get collocation indices for this step with replacement."""
        if self.total_coll == 0:
            return torch.empty(0, 1)

        self._coll_gen.manual_seed(idx)
        return torch.randint(0, self.total_coll, (self.C,), generator=self._coll_gen)

C = batch_size - self.K instance-attribute

K = round(data_ratio * batch_size) instance-attribute

batch_size = batch_size instance-attribute

total_coll = x_coll.shape[0] instance-attribute

total_data = x_data.shape[0] instance-attribute

x_coll = x_coll instance-attribute

x_data = x_data instance-attribute

y_data = y_data instance-attribute

__getitem__(index: int) -> TrainingBatch

Return one sample containing K data points and C collocation points.

Source code in src/anypinn/core/dataset.py
@override
def __getitem__(self, index: int) -> TrainingBatch:
    """Return one sample containing K data points and C collocation points."""
    data_idx = self._get_data_indices(index)
    coll_idx = self._get_coll_indices(index)

    x_data = self.x_data[data_idx]
    y_data = self.y_data[data_idx]
    x_coll = self.x_coll[coll_idx]

    return ((x_data, y_data), x_coll)

__init__(x_data: Tensor, y_data: Tensor, x_coll: Tensor, batch_size: int, data_ratio: float | int)

Source code in src/anypinn/core/dataset.py
def __init__(
    self,
    x_data: Tensor,
    y_data: Tensor,
    x_coll: Tensor,
    batch_size: int,
    data_ratio: float | int,
):
    super().__init__()
    if batch_size <= 0:
        raise ValueError(f"batch_size must be positive, got {batch_size}.")

    if isinstance(data_ratio, float):
        if not (0.0 <= data_ratio <= 1.0):
            raise ValueError(f"Float data_ratio must be in [0.0, 1.0], got {data_ratio}.")
        self.K = round(data_ratio * batch_size)
    else:
        if not (0 <= data_ratio <= batch_size):
            raise ValueError(
                f"Integer data_ratio must be in [0, {batch_size}], got {data_ratio}."
            )
        self.K = data_ratio

    self.x_data = x_data
    self.y_data = y_data
    self.x_coll = x_coll

    self.batch_size = batch_size
    self.C = batch_size - self.K

    self.total_data = x_data.shape[0]
    self.total_coll = x_coll.shape[0]

    self._coll_gen = torch.Generator()

__len__() -> int

Number of steps per epoch to see all data points once. Ceiling division.

Source code in src/anypinn/core/dataset.py
def __len__(self) -> int:
    """Number of steps per epoch to see all data points once. Ceiling division."""
    return (self.total_data + self.K - 1) // self.K

PINNHyperparameters dataclass

Aggregated hyperparameters for the PINN model.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class PINNHyperparameters:
    """
    Aggregated hyperparameters for the PINN model.
    """

    lr: float
    training_data: IngestionConfig | GenerationConfig
    fields_config: MLPConfig
    params_config: MLPConfig | ScalarConfig
    max_epochs: int | None = None
    gradient_clip_val: float | None = None
    criterion: Criteria = "mse"
    optimizer: AdamConfig | LBFGSConfig | None = None
    scheduler: ReduceLROnPlateauConfig | CosineAnnealingConfig | None = None
    early_stopping: EarlyStoppingConfig | None = None
    smma_stopping: SMMAStoppingConfig | None = None

    def __post_init__(self) -> None:
        if self.lr <= 0:
            raise ValueError(f"lr must be positive, got {self.lr}.")

criterion: Criteria = 'mse' class-attribute instance-attribute

early_stopping: EarlyStoppingConfig | None = None class-attribute instance-attribute

fields_config: MLPConfig instance-attribute

gradient_clip_val: float | None = None class-attribute instance-attribute

lr: float instance-attribute

max_epochs: int | None = None class-attribute instance-attribute

optimizer: AdamConfig | LBFGSConfig | None = None class-attribute instance-attribute

params_config: MLPConfig | ScalarConfig instance-attribute

scheduler: ReduceLROnPlateauConfig | CosineAnnealingConfig | None = None class-attribute instance-attribute

smma_stopping: SMMAStoppingConfig | None = None class-attribute instance-attribute

training_data: IngestionConfig | GenerationConfig instance-attribute

__init__(*, lr: float, training_data: IngestionConfig | GenerationConfig, fields_config: MLPConfig, params_config: MLPConfig | ScalarConfig, max_epochs: int | None = None, gradient_clip_val: float | None = None, criterion: Criteria = 'mse', optimizer: AdamConfig | LBFGSConfig | None = None, scheduler: ReduceLROnPlateauConfig | CosineAnnealingConfig | None = None, early_stopping: EarlyStoppingConfig | None = None, smma_stopping: SMMAStoppingConfig | None = None) -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if self.lr <= 0:
        raise ValueError(f"lr must be positive, got {self.lr}.")

Parameter

Bases: Module, Argument

Learnable parameter. Supports scalar or function-valued parameter. For function-valued parameters (e.g. β(t)), uses a small MLP.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `config` | `ScalarConfig \| MLPConfig` | Configuration for the parameter (ScalarConfig or MLPConfig). | *required* |
Source code in src/anypinn/core/nn.py
class Parameter(nn.Module, Argument):
    """
    Learnable parameter. Supports scalar or function-valued parameter.
    For function-valued parameters (e.g. β(t)), uses a small MLP.

    Args:
        config: Configuration for the parameter (ScalarConfig or MLPConfig).
    """

    def __init__(
        self,
        config: ScalarConfig | MLPConfig,
    ):
        super().__init__()
        self.config = config
        self._mode: Literal["scalar", "mlp"]

        if isinstance(config, ScalarConfig):
            self._mode = "scalar"
            self.value = nn.Parameter(torch.tensor(float(config.init_value), dtype=torch.float32))

        else:  # isinstance(config, MLPConfig)
            self._mode = "mlp"
            dims = [config.in_dim] + config.hidden_layers + [config.out_dim]
            act = get_activation(config.activation)

            layers: list[nn.Module] = []
            for i in range(len(dims) - 1):
                layers.append(nn.Linear(dims[i], dims[i + 1]))
                if i < len(dims) - 2:
                    layers.append(act)

            if config.output_activation is not None:
                out_act = get_activation(config.output_activation)
                layers.append(out_act)

            self.net = nn.Sequential(*layers)
            self.apply(self._init)

    @property
    def mode(self) -> Literal["scalar", "mlp"]:
        """Mode of the parameter: 'scalar' or 'mlp'."""
        return self._mode

    @staticmethod
    def _init(m: nn.Module) -> None:
        if isinstance(m, nn.Linear):
            nn.init.xavier_normal_(m.weight)
            nn.init.zeros_(m.bias)

    @override
    def forward(self, x: Tensor | None = None) -> Tensor:
        """
        Get the value of the parameter.

        Args:
            x: Input tensor (required for 'mlp' mode).

        Returns:
            The parameter value.
        """
        if self.mode == "scalar":
            return self.value if x is None else self.value.expand_as(x)
        else:
            if x is None:
                raise TypeError("Function-valued parameter requires input.")
            return cast(Tensor, self.net(x))

config = config instance-attribute

mode: Literal['scalar', 'mlp'] property

Mode of the parameter: 'scalar' or 'mlp'.

net = nn.Sequential(*layers) instance-attribute

value = nn.Parameter(torch.tensor(float(config.init_value), dtype=torch.float32)) instance-attribute

__init__(config: ScalarConfig | MLPConfig)

Source code in src/anypinn/core/nn.py
def __init__(
    self,
    config: ScalarConfig | MLPConfig,
):
    super().__init__()
    self.config = config
    self._mode: Literal["scalar", "mlp"]

    if isinstance(config, ScalarConfig):
        self._mode = "scalar"
        self.value = nn.Parameter(torch.tensor(float(config.init_value), dtype=torch.float32))

    else:  # isinstance(config, MLPConfig)
        self._mode = "mlp"
        dims = [config.in_dim] + config.hidden_layers + [config.out_dim]
        act = get_activation(config.activation)

        layers: list[nn.Module] = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                layers.append(act)

        if config.output_activation is not None:
            out_act = get_activation(config.output_activation)
            layers.append(out_act)

        self.net = nn.Sequential(*layers)
        self.apply(self._init)
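
The constructor above interleaves `nn.Linear` layers with the hidden activation and skips the activation after the final layer, optionally appending an output activation. A dependency-free sketch of that layer plan (`mlp_layer_plan` is an illustrative helper, not part of the API):

```python
def mlp_layer_plan(in_dim, hidden_layers, out_dim, activation="tanh", output_activation=None):
    """Mirror the Linear/activation interleaving used when building the MLP."""
    dims = [in_dim] + list(hidden_layers) + [out_dim]
    plan = []
    for i in range(len(dims) - 1):
        plan.append(("linear", dims[i], dims[i + 1]))
        if i < len(dims) - 2:  # no hidden activation after the final Linear
            plan.append(("activation", activation))
    if output_activation is not None:
        plan.append(("activation", output_activation))
    return plan

# Two hidden layers of width 16: three Linear layers, activations between the first two.
plan = mlp_layer_plan(1, [16, 16], 1)
```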

forward(x: Tensor | None = None) -> Tensor

Get the value of the parameter.

Parameters:

Name Type Description Default
x Tensor | None

Input tensor (required for 'mlp' mode).

None

Returns:

Type Description
Tensor

The parameter value.

Source code in src/anypinn/core/nn.py
@override
def forward(self, x: Tensor | None = None) -> Tensor:
    """
    Get the value of the parameter.

    Args:
        x: Input tensor (required for 'mlp' mode).

    Returns:
        The parameter value.
    """
    if self.mode == "scalar":
        return self.value if x is None else self.value.expand_as(x)
    else:
        if x is None:
            raise TypeError("Function-valued parameter requires input.")
        return cast(Tensor, self.net(x))

Problem

Bases: Module

Aggregates operator residuals and constraints into total loss. Manages fields, parameters, constraints, and validation.

Parameters:

Name Type Description Default
constraints list[Constraint]

List of constraints to enforce.

required
criterion Module

Loss function module.

required
fields FieldsRegistry

Registry mapping names to fields (neural networks) to solve for.

required
params ParamsRegistry

Registry mapping names to learnable parameters.

required
Source code in src/anypinn/core/problem.py
class Problem(nn.Module):
    """
    Aggregates operator residuals and constraints into total loss.
    Manages fields, parameters, constraints, and validation.

    Args:
        constraints: List of constraints to enforce.
        criterion: Loss function module.
        fields: Registry mapping names to fields (neural networks) to solve for.
        params: Registry mapping names to learnable parameters.
    """

    def __init__(
        self,
        constraints: list[Constraint],
        criterion: nn.Module,
        fields: FieldsRegistry,
        params: ParamsRegistry,
    ):
        super().__init__()
        self.constraints = constraints
        self.criterion = criterion
        self.fields = fields
        self.params = params

        self._fields = nn.ModuleList(fields.values())
        self._params = nn.ModuleList(params.values())

    def inject_context(self, context: InferredContext) -> None:
        """
        Inject the context into the problem.

        This should be called after data is loaded but before training starts.
        Pure function entries are passed through unchanged.

        Args:
            context: The context to inject.
        """
        self.context = context
        for c in self.constraints:
            c.inject_context(context)

    def training_loss(self, batch: TrainingBatch, log: LogFn | None = None) -> Tensor:
        """
        Calculate the total loss from all constraints.

        Args:
            batch: Current batch.
            log: Optional logging function.

        Returns:
            Sum of losses from all constraints.
        """
        _, x_coll = batch

        if not self.constraints:
            total = torch.tensor(0.0, device=x_coll.device)
        else:
            losses = iter(self.constraints)
            total = next(losses).loss(batch, self.criterion, log)
            for c in losses:
                total = total + c.loss(batch, self.criterion, log)

        if log is not None:
            for name, param in self.params.items():
                param_loss = self._param_validation_loss(name, param, x_coll)
                if param_loss is not None:
                    log(f"loss/{name}", param_loss, progress_bar=True)

            log(LOSS_KEY, total, progress_bar=True)

        return total

    def predict(self, batch: DataBatch) -> tuple[DataBatch, dict[str, Tensor]]:
        """
        Generate predictions for a given batch of data.
        Returns unscaled predictions in original domain.

        Args:
            batch: Batch of input coordinates.

        Returns:
            Tuple of (original_batch, predictions_dict).
        """

        x, y = batch

        n = x.shape[0]
        preds = {name: f(x).reshape(n, -1).squeeze(-1) for name, f in self.fields.items()}
        preds |= {name: p(x).reshape(n, -1).squeeze(-1) for name, p in self.params.items()}

        return (x.squeeze(-1), y.squeeze(-1)), preds

    def true_values(self, x: Tensor) -> dict[str, Tensor] | None:
        """
        Get the true values for given x coordinates.
        Returns None if no validation source is configured.
        """

        return {
            name: p_true.reshape(x.shape[0], -1).squeeze(-1)
            for name, p in self.params.items()
            if (p_true := self._get_true_param(name, x)) is not None
        } or None

    def _get_true_param(self, param_name: str, x: Tensor) -> Tensor | None:
        """
        Get the ground truth values for a parameter at given coordinates.

        Args:
            param_name: Name of the parameter.
            x: Input coordinates.

        Returns:
            Ground truth values, or None if no validation source is configured.
        """
        if param_name not in self.context.validation:
            return None

        fn = self.context.validation[param_name]

        if isinstance(fn, _ColumnLookup):
            domain = self.context.domain
            if domain.dx is None:
                raise ValueError(
                    f"Cannot perform ColumnRef lookup for '{param_name}': "
                    "domain step size (dx) is unknown. Ensure the domain was inferred from "
                    "a uniformly-spaced coordinate tensor, or use a callable validation source."
                )
            x_idx = ((x.squeeze(-1) - domain.x0) / domain.dx[0]).round().unsqueeze(-1)
            return fn(x_idx)

        return fn(x)

    @torch.no_grad()
    def _param_validation_loss(
        self, param_name: str, param: Parameter, x_coll: Tensor
    ) -> Tensor | None:
        """
        Compute validation loss for a parameter against ground truth.

        Args:
            param_name: Name of the parameter, used to look up its validation source.
            param: The parameter to compute validation loss for.
            x_coll: The input coordinates.

        Returns:
            Loss value, or None if no validation source is configured.
        """
        true = self._get_true_param(param_name, x_coll)
        if true is None:
            return None

        pred = param(x_coll)

        return torch.mean((true - pred) ** 2)

constraints = constraints instance-attribute

criterion = criterion instance-attribute

fields = fields instance-attribute

params = params instance-attribute

__init__(constraints: list[Constraint], criterion: nn.Module, fields: FieldsRegistry, params: ParamsRegistry)

Source code in src/anypinn/core/problem.py
def __init__(
    self,
    constraints: list[Constraint],
    criterion: nn.Module,
    fields: FieldsRegistry,
    params: ParamsRegistry,
):
    super().__init__()
    self.constraints = constraints
    self.criterion = criterion
    self.fields = fields
    self.params = params

    self._fields = nn.ModuleList(fields.values())
    self._params = nn.ModuleList(params.values())

inject_context(context: InferredContext) -> None

Inject the context into the problem.

This should be called after data is loaded but before training starts. Pure function entries are passed through unchanged.

Parameters:

Name Type Description Default
context InferredContext

The context to inject.

required
Source code in src/anypinn/core/problem.py
def inject_context(self, context: InferredContext) -> None:
    """
    Inject the context into the problem.

    This should be called after data is loaded but before training starts.
    Pure function entries are passed through unchanged.

    Args:
        context: The context to inject.
    """
    self.context = context
    for c in self.constraints:
        c.inject_context(context)

predict(batch: DataBatch) -> tuple[DataBatch, dict[str, Tensor]]

Generate predictions for a given batch of data. Returns unscaled predictions in original domain.

Parameters:

Name Type Description Default
batch DataBatch

Batch of input coordinates.

required

Returns:

Type Description
tuple[DataBatch, dict[str, Tensor]]

Tuple of (original_batch, predictions_dict).

Source code in src/anypinn/core/problem.py
def predict(self, batch: DataBatch) -> tuple[DataBatch, dict[str, Tensor]]:
    """
    Generate predictions for a given batch of data.
    Returns unscaled predictions in original domain.

    Args:
        batch: Batch of input coordinates.

    Returns:
        Tuple of (original_batch, predictions_dict).
    """

    x, y = batch

    n = x.shape[0]
    preds = {name: f(x).reshape(n, -1).squeeze(-1) for name, f in self.fields.items()}
    preds |= {name: p(x).reshape(n, -1).squeeze(-1) for name, p in self.params.items()}

    return (x.squeeze(-1), y.squeeze(-1)), preds

training_loss(batch: TrainingBatch, log: LogFn | None = None) -> Tensor

Calculate the total loss from all constraints.

Parameters:

Name Type Description Default
batch TrainingBatch

Current batch.

required
log LogFn | None

Optional logging function.

None

Returns:

Type Description
Tensor

Sum of losses from all constraints.

Source code in src/anypinn/core/problem.py
def training_loss(self, batch: TrainingBatch, log: LogFn | None = None) -> Tensor:
    """
    Calculate the total loss from all constraints.

    Args:
        batch: Current batch.
        log: Optional logging function.

    Returns:
        Sum of losses from all constraints.
    """
    _, x_coll = batch

    if not self.constraints:
        total = torch.tensor(0.0, device=x_coll.device)
    else:
        losses = iter(self.constraints)
        total = next(losses).loss(batch, self.criterion, log)
        for c in losses:
            total = total + c.loss(batch, self.criterion, log)

    if log is not None:
        for name, param in self.params.items():
            param_loss = self._param_validation_loss(name, param, x_coll)
            if param_loss is not None:
                log(f"loss/{name}", param_loss, progress_bar=True)

        log(LOSS_KEY, total, progress_bar=True)

    return total
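
Note that training_loss seeds the sum with the first constraint's loss rather than a literal zero, so the total stays a tensor on the right device and inside the autograd graph. The same accumulation pattern, sketched with toy plain-Python constraints (`SquaredResidual` and `total_loss` are illustrative, not part of the API):

```python
class SquaredResidual:
    """Toy constraint: mean squared deviation of f(x) from a target."""
    def __init__(self, target):
        self.target = target

    def loss(self, xs, f):
        return sum((f(x) - self.target) ** 2 for x in xs) / len(xs)

def total_loss(constraints, xs, f):
    if not constraints:
        return 0.0  # the real code builds a zero tensor on the collocation device
    it = iter(constraints)
    total = next(it).loss(xs, f)      # seed the sum with the first constraint
    for c in it:
        total = total + c.loss(xs, f)
    return total

loss = total_loss([SquaredResidual(0.0), SquaredResidual(1.0)], [0.0, 1.0], lambda x: x)
```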

true_values(x: Tensor) -> dict[str, Tensor] | None

Get the true values for given x coordinates. Returns None if no validation source is configured.

Source code in src/anypinn/core/problem.py
def true_values(self, x: Tensor) -> dict[str, Tensor] | None:
    """
    Get the true values for given x coordinates.
    Returns None if no validation source is configured.
    """

    return {
        name: p_true.reshape(x.shape[0], -1).squeeze(-1)
        for name, p in self.params.items()
        if (p_true := self._get_true_param(name, x)) is not None
    } or None

RandomFourierFeatures

Bases: Module

Random Fourier Features (Rahimi & Recht, 2007) for RBF kernel approximation.

Draws a fixed random matrix \(\mathbf{B} \sim \mathcal{N}(0, \sigma^2)\) of shape \((d_{\text{in}},\, m)\) and maps \(\mathbf{x} \in \mathbb{R}^{n \times d_{\text{in}}}\) to:

\[ \phi(\mathbf{x}) = \frac{1}{\sqrt{m}} [\cos(\mathbf{x}\mathbf{B}),\; \sin(\mathbf{x}\mathbf{B})] \in \mathbb{R}^{n \times 2m} \]

\(\mathbf{B}\) is registered as a buffer and moves with the module across devices.

Parameters:

Name Type Description Default
in_dim int

Spatial dimension \(d_{\text{in}}\) of the input.

required
num_features int

Number of random features \(m\) (output dimension \(= 2m\)).

256
scale float

Standard deviation \(\sigma\) of the frequency distribution. Higher values capture higher-frequency variation. Default: 1.0.

1.0
seed int | None

Optional seed for reproducible frequency sampling.

None
Source code in src/anypinn/lib/encodings.py
class RandomFourierFeatures(nn.Module):
    """Random Fourier Features (Rahimi & Recht, 2007) for RBF kernel approximation.

    Draws a fixed random matrix $\\mathbf{B} \\sim \\mathcal{N}(0, \\sigma^2)$
    of shape $(d_{\\text{in}},\\, m)$ and maps
    $\\mathbf{x} \\in \\mathbb{R}^{n \\times d_{\\text{in}}}$ to:

    $$
    \\phi(\\mathbf{x}) = \\frac{1}{\\sqrt{m}}
        [\\cos(\\mathbf{x}\\mathbf{B}),\\; \\sin(\\mathbf{x}\\mathbf{B})]
        \\in \\mathbb{R}^{n \\times 2m}
    $$

    $\\mathbf{B}$ is registered as a buffer and moves with the module across devices.

    Args:
        in_dim:       Spatial dimension $d_{\\text{in}}$ of the input.
        num_features: Number of random features $m$
                      (output dimension $= 2m$).
        scale:        Standard deviation $\\sigma$ of the frequency distribution.
                      Higher values capture higher-frequency variation. Default: 1.0.
        seed:         Optional seed for reproducible frequency sampling.
    """

    def __init__(
        self,
        in_dim: int,
        num_features: int = 256,
        scale: float = 1.0,
        seed: int | None = None,
    ) -> None:
        if in_dim < 1:
            raise ValueError(f"in_dim must be >= 1, got {in_dim}.")
        if num_features < 1:
            raise ValueError(f"num_features must be >= 1, got {num_features}.")
        if scale <= 0.0:
            raise ValueError(f"scale must be > 0, got {scale}.")
        super().__init__()
        gen = torch.Generator()
        if seed is not None:
            gen.manual_seed(seed)
        B = torch.randn(in_dim, num_features, generator=gen) * scale
        self.register_buffer("B", B)
        self.num_features = num_features

    @property
    def out_dim(self) -> int:
        """Output dimension (always 2 * num_features)."""
        return 2 * self.num_features

    def forward(self, x: Tensor) -> Tensor:
        proj = x @ self.B  # type: ignore[operator]  # (n, num_features)
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1) / (self.num_features**0.5)

num_features = num_features instance-attribute

out_dim: int property

Output dimension (always 2 * num_features).

__init__(in_dim: int, num_features: int = 256, scale: float = 1.0, seed: int | None = None) -> None

Source code in src/anypinn/lib/encodings.py
def __init__(
    self,
    in_dim: int,
    num_features: int = 256,
    scale: float = 1.0,
    seed: int | None = None,
) -> None:
    if in_dim < 1:
        raise ValueError(f"in_dim must be >= 1, got {in_dim}.")
    if num_features < 1:
        raise ValueError(f"num_features must be >= 1, got {num_features}.")
    if scale <= 0.0:
        raise ValueError(f"scale must be > 0, got {scale}.")
    super().__init__()
    gen = torch.Generator()
    if seed is not None:
        gen.manual_seed(seed)
    B = torch.randn(in_dim, num_features, generator=gen) * scale
    self.register_buffer("B", B)
    self.num_features = num_features

forward(x: Tensor) -> Tensor

Source code in src/anypinn/lib/encodings.py
def forward(self, x: Tensor) -> Tensor:
    proj = x @ self.B  # type: ignore[operator]  # (n, num_features)
    return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1) / (self.num_features**0.5)
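
The feature map can be sanity-checked without torch: since \(\cos^2 + \sin^2 = 1\) per frequency, the \(1/\sqrt{m}\) scaling makes \(\lVert\phi(\mathbf{x})\rVert^2\) exactly 1 for every input. A stdlib-only sketch (`rff_features` is illustrative, not part of the API):

```python
import math
import random

def rff_features(x, B):
    """phi(x) = (1/sqrt(m)) * [cos(x @ B), sin(x @ B)] for frequency rows B of shape (m, d)."""
    m = len(B)
    proj = [sum(xi * bji for xi, bji in zip(x, bj)) for bj in B]  # x @ B
    scale = 1.0 / math.sqrt(m)
    return [math.cos(p) * scale for p in proj] + [math.sin(p) * scale for p in proj]

rng = random.Random(0)
d, m, sigma = 2, 64, 1.0
B = [[rng.gauss(0.0, sigma) for _ in range(d)] for _ in range(m)]  # N(0, sigma^2) frequencies

phi = rff_features([0.3, -1.2], B)
```

With larger `m`, the dot product of two feature vectors approximates the RBF kernel \(\exp(-\sigma^2 \lVert \mathbf{x} - \mathbf{y} \rVert^2 / 2)\), which is the point of the construction.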

RandomSampler

Uniform random sampler inside domain bounds.

Parameters:

Name Type Description Default
seed int | None

Optional seed for reproducible sampling.

None
Source code in src/anypinn/core/samplers.py
class RandomSampler:
    """Uniform random sampler inside domain bounds.

    Args:
        seed: Optional seed for reproducible sampling.
    """

    def __init__(self, seed: int | None = None) -> None:
        self._gen = torch.Generator()
        if seed is not None:
            self._gen.manual_seed(seed)

    def sample(self, n: int, domain: Domain) -> Tensor:
        d = domain.ndim
        u = torch.rand((n, d), generator=self._gen)
        for i, (lo, hi) in enumerate(domain.bounds):
            u[:, i] = u[:, i] * (hi - lo) + lo
        return u

__init__(seed: int | None = None) -> None

Source code in src/anypinn/core/samplers.py
def __init__(self, seed: int | None = None) -> None:
    self._gen = torch.Generator()
    if seed is not None:
        self._gen.manual_seed(seed)

sample(n: int, domain: Domain) -> Tensor

Source code in src/anypinn/core/samplers.py
def sample(self, n: int, domain: Domain) -> Tensor:
    d = domain.ndim
    u = torch.rand((n, d), generator=self._gen)
    for i, (lo, hi) in enumerate(domain.bounds):
        u[:, i] = u[:, i] * (hi - lo) + lo
    return u
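
The per-axis rescaling maps unit-interval draws onto the domain via `lo + u * (hi - lo)`. A stdlib-only sketch of the same idea (`sample_in_bounds` is illustrative, not part of the API):

```python
import random

def sample_in_bounds(n, bounds, seed=None):
    """Draw n points uniformly inside axis-aligned bounds [(lo, hi), ...]."""
    gen = random.Random(seed)  # dedicated generator, like torch.Generator with a seed
    return [[lo + gen.random() * (hi - lo) for lo, hi in bounds] for _ in range(n)]

pts = sample_in_bounds(5, [(0.0, 1.0), (-2.0, 2.0)], seed=42)
```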

ReduceLROnPlateauConfig dataclass

Configuration for Learning Rate Scheduler (ReduceLROnPlateau).

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class ReduceLROnPlateauConfig:
    """
    Configuration for Learning Rate Scheduler (ReduceLROnPlateau).
    """

    mode: Literal["min", "max"]
    factor: float
    patience: int
    threshold: float
    min_lr: float

    def __post_init__(self) -> None:
        if not (0 < self.factor < 1):
            raise ValueError(f"factor must be in (0, 1), got {self.factor}.")
        if self.patience <= 0:
            raise ValueError(f"patience must be positive, got {self.patience}.")

factor: float instance-attribute

min_lr: float instance-attribute

mode: Literal['min', 'max'] instance-attribute

patience: int instance-attribute

threshold: float instance-attribute

__init__(*, mode: Literal['min', 'max'], factor: float, patience: int, threshold: float, min_lr: float) -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if not (0 < self.factor < 1):
        raise ValueError(f"factor must be in (0, 1), got {self.factor}.")
    if self.patience <= 0:
        raise ValueError(f"patience must be positive, got {self.patience}.")

ResidualScorer

Bases: Protocol

Protocol for scoring candidate collocation points by PDE residual magnitude.

Source code in src/anypinn/core/samplers.py
class ResidualScorer(Protocol):
    """Protocol for scoring candidate collocation points by PDE residual magnitude."""

    def residual_score(self, x: Tensor) -> Tensor:
        """Return per-point non-negative residual score of shape ``(n,)``.

        Args:
            x: Candidate collocation points ``(n, d)``.

        Returns:
            Scores ``(n,)`` — higher means larger residual.
        """
        ...

residual_score(x: Tensor) -> Tensor

Return per-point non-negative residual score of shape (n,).

Parameters:

Name Type Description Default
x Tensor

Candidate collocation points (n, d).

required

Returns:

Type Description
Tensor

Scores (n,) — higher means larger residual.

Source code in src/anypinn/core/samplers.py
def residual_score(self, x: Tensor) -> Tensor:
    """Return per-point non-negative residual score of shape ``(n,)``.

    Args:
        x: Candidate collocation points ``(n, d)``.

    Returns:
        Scores ``(n,)`` — higher means larger residual.
    """
    ...

SMMAStoppingConfig dataclass

Configuration for Simple Moving Average Stopping callback.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class SMMAStoppingConfig:
    """
    Configuration for Simple Moving Average Stopping callback.
    """

    window: int
    threshold: float
    lookback: int

    def __post_init__(self) -> None:
        if self.window <= 0:
            raise ValueError(f"window must be positive, got {self.window}.")
        if self.lookback <= 0:
            raise ValueError(f"lookback must be positive, got {self.lookback}.")
        if self.threshold <= 0:
            raise ValueError(f"threshold must be positive, got {self.threshold}.")

lookback: int instance-attribute

threshold: float instance-attribute

window: int instance-attribute

__init__(*, window: int, threshold: float, lookback: int) -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if self.window <= 0:
        raise ValueError(f"window must be positive, got {self.window}.")
    if self.lookback <= 0:
        raise ValueError(f"lookback must be positive, got {self.lookback}.")
    if self.threshold <= 0:
        raise ValueError(f"threshold must be positive, got {self.threshold}.")

ScalarConfig dataclass

Configuration for a scalar parameter.

Attributes:

Name Type Description
init_value float

Initial value for the parameter.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class ScalarConfig:
    """
    Configuration for a scalar parameter.

    Attributes:
        init_value: Initial value for the parameter.
    """

    init_value: float

init_value: float instance-attribute

__init__(*, init_value: float) -> None

TrainingDataConfig dataclass

Configuration for data loading and batching.

Attributes:

Name Type Description
batch_size int

Number of points per training batch.

data_ratio int | float

Ratio of data to collocation points per batch.

collocations int

Total number of collocation points to generate.

collocation_sampler CollocationStrategies

Sampling strategy for collocation points.

collocation_seed int | None

Optional seed for reproducible collocation sampling.

Source code in src/anypinn/core/config.py
@dataclass(kw_only=True)
class TrainingDataConfig:
    """
    Configuration for data loading and batching.

    Attributes:
        batch_size: Number of points per training batch.
        data_ratio: Ratio of data to collocation points per batch.
        collocations: Total number of collocation points to generate.
        collocation_sampler: Sampling strategy for collocation points.
        collocation_seed: Optional seed for reproducible collocation sampling.
    """

    batch_size: int
    data_ratio: int | float
    collocations: int
    collocation_sampler: CollocationStrategies = "random"
    collocation_seed: int | None = None

    def __post_init__(self) -> None:
        if self.batch_size <= 0:
            raise ValueError(f"batch_size must be positive, got {self.batch_size}.")
        if self.collocations < 0:
            raise ValueError(f"collocations must be non-negative, got {self.collocations}.")
        if isinstance(self.data_ratio, float):
            if not (0.0 <= self.data_ratio <= 1.0):
                raise ValueError(f"Float data_ratio must be in [0.0, 1.0], got {self.data_ratio}.")
        else:
            if not (0 <= self.data_ratio <= self.batch_size):
                raise ValueError(
                    f"Integer data_ratio must be in [0, {self.batch_size}], got {self.data_ratio}."
                )

batch_size: int instance-attribute

collocation_sampler: CollocationStrategies = 'random' class-attribute instance-attribute

collocation_seed: int | None = None class-attribute instance-attribute

collocations: int instance-attribute

data_ratio: int | float instance-attribute

__init__(*, batch_size: int, data_ratio: int | float, collocations: int, collocation_sampler: CollocationStrategies = 'random', collocation_seed: int | None = None) -> None

__post_init__() -> None

Source code in src/anypinn/core/config.py
def __post_init__(self) -> None:
    if self.batch_size <= 0:
        raise ValueError(f"batch_size must be positive, got {self.batch_size}.")
    if self.collocations < 0:
        raise ValueError(f"collocations must be non-negative, got {self.collocations}.")
    if isinstance(self.data_ratio, float):
        if not (0.0 <= self.data_ratio <= 1.0):
            raise ValueError(f"Float data_ratio must be in [0.0, 1.0], got {self.data_ratio}.")
    else:
        if not (0 <= self.data_ratio <= self.batch_size):
            raise ValueError(
                f"Integer data_ratio must be in [0, {self.batch_size}], got {self.data_ratio}."
            )
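
The validation above suggests data_ratio is read either as a fraction of the batch (float in [0.0, 1.0]) or as an absolute count of data points (int in [0, batch_size]). Assuming that reading, a batch would be split roughly like this (`split_batch` is a hypothetical helper sketching the interpretation, not part of the API):

```python
def split_batch(batch_size, data_ratio):
    """Split a batch into (data points, collocation points).

    Assumption: a float data_ratio is the fraction of the batch drawn from data,
    an int data_ratio is the absolute number of data points.
    """
    if isinstance(data_ratio, float):
        n_data = round(batch_size * data_ratio)
    else:
        n_data = data_ratio
    return n_data, batch_size - n_data

n_data, n_coll = split_batch(100, 0.25)
```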

UniformSampler

Cartesian grid sampler that distributes points evenly across the domain.

For d-dimensional domains, places ceil(n^(1/d)) points per axis then takes the first n points of the resulting grid.

Parameters:

Name Type Description Default
seed int | None

Optional seed (unused — grid is deterministic).

None
Source code in src/anypinn/core/samplers.py
class UniformSampler:
    """Cartesian grid sampler that distributes points evenly across the domain.

    For d-dimensional domains, places ``ceil(n^(1/d))`` points per axis then
    takes the first ``n`` points of the resulting grid.

    Args:
        seed: Optional seed (unused — grid is deterministic).
    """

    def __init__(self, seed: int | None = None) -> None:
        pass

    def sample(self, n: int, domain: Domain) -> Tensor:
        d = domain.ndim
        pts_per_dim = math.ceil(n ** (1.0 / d))

        linspaces = [torch.linspace(lo, hi, pts_per_dim) for lo, hi in domain.bounds]
        grids = torch.meshgrid(*linspaces, indexing="ij")
        flat = torch.stack([g.reshape(-1) for g in grids], dim=-1)
        return flat[:n]

__init__(seed: int | None = None) -> None

Source code in src/anypinn/core/samplers.py
def __init__(self, seed: int | None = None) -> None:
    pass

sample(n: int, domain: Domain) -> Tensor

Source code in src/anypinn/core/samplers.py
def sample(self, n: int, domain: Domain) -> Tensor:
    d = domain.ndim
    pts_per_dim = math.ceil(n ** (1.0 / d))

    linspaces = [torch.linspace(lo, hi, pts_per_dim) for lo, hi in domain.bounds]
    grids = torch.meshgrid(*linspaces, indexing="ij")
    flat = torch.stack([g.reshape(-1) for g in grids], dim=-1)
    return flat[:n]
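
The meshgrid construction with `indexing="ij"` enumerates grid points with the first axis varying slowest, which is exactly `itertools.product` over per-axis linspaces. A stdlib-only sketch (`uniform_grid` is illustrative, not part of the API):

```python
import math
from itertools import product

def uniform_grid(n, bounds):
    """First n points of a ceil(n^(1/d))-per-axis Cartesian grid over bounds."""
    d = len(bounds)
    k = math.ceil(n ** (1.0 / d))  # points per axis
    axes = [
        [lo + (hi - lo) * i / (k - 1) for i in range(k)] if k > 1 else [lo]
        for lo, hi in bounds
    ]
    return [list(p) for p in product(*axes)][:n]

# n=10 in 2-D: ceil(sqrt(10)) = 4 points per axis, keep the first 10 of 16.
pts = uniform_grid(10, [(0.0, 1.0), (-1.0, 1.0)])
```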

build_criterion(name: Criteria) -> nn.Module

Return the loss-criterion module for the given name.

Parameters:

Name Type Description Default
name Criteria

One of "mse", "huber", "l1".

required

Returns:

Type Description
Module

The corresponding PyTorch loss module.

Source code in src/anypinn/core/nn.py
def build_criterion(name: Criteria) -> nn.Module:
    """
    Return the loss-criterion module for the given name.

    Args:
        name: One of ``"mse"``, ``"huber"``, ``"l1"``.

    Returns:
        The corresponding PyTorch loss module.
    """
    return {
        "mse": nn.MSELoss(),
        "huber": nn.HuberLoss(),
        "l1": nn.L1Loss(),
    }[name]

build_sampler(strategy: CollocationStrategies, seed: int | None = None, scorer: ResidualScorer | None = None) -> CollocationSampler

Construct a collocation sampler from a strategy name.

Parameters:

Name Type Description Default
strategy CollocationStrategies

One of the CollocationStrategies literals.

required
seed int | None

Optional seed for reproducible sampling.

None
scorer ResidualScorer | None

Required when strategy="adaptive".

None

Returns:

Type Description
CollocationSampler

A sampler instance satisfying the CollocationSampler protocol.

Raises:

Type Description
ValueError

If strategy="adaptive" but no scorer is provided.

Source code in src/anypinn/core/samplers.py
def build_sampler(
    strategy: CollocationStrategies,
    seed: int | None = None,
    scorer: ResidualScorer | None = None,
) -> CollocationSampler:
    """Construct a collocation sampler from a strategy name.

    Args:
        strategy: One of the ``CollocationStrategies`` literals.
        seed: Optional seed for reproducible sampling.
        scorer: Required when ``strategy="adaptive"``.

    Returns:
        A sampler instance satisfying the ``CollocationSampler`` protocol.

    Raises:
        ValueError: If ``strategy="adaptive"`` but no scorer is provided.
    """
    if strategy == "adaptive":
        if scorer is None:
            raise ValueError(
                "AdaptiveSampler requires a ResidualScorer. "
                "Pass a scorer via PINNDataModule or use a different strategy."
            )
        return AdaptiveSampler(scorer=scorer, seed=seed)

    cls = _SAMPLER_REGISTRY.get(strategy)
    if cls is None:
        raise ValueError(
            f"Unknown collocation strategy '{strategy}'. "
            f"Choose from: {', '.join(_SAMPLER_REGISTRY)} or 'adaptive'."
        )
    return cls(seed=seed)

get_activation(name: Activations) -> nn.Module

Get the activation function module by name.

Parameters:

Name Type Description Default
name Activations

The name of the activation function.

required

Returns:

Type Description
Module

The PyTorch activation module.

Source code in src/anypinn/core/nn.py
def get_activation(name: Activations) -> nn.Module:
    """
    Get the activation function module by name.

    Args:
        name: The name of the activation function.

    Returns:
        The PyTorch activation module.
    """
    return {
        "tanh": nn.Tanh(),
        "relu": nn.ReLU(),
        "leaky_relu": nn.LeakyReLU(),
        "sigmoid": nn.Sigmoid(),
        "selu": nn.SELU(),
        "softplus": nn.Softplus(),
        "identity": nn.Identity(),
    }[name]

resolve_validation(registry: ValidationRegistry, df_path: Path | None = None) -> ResolvedValidation

Resolve a ValidationRegistry by converting ColumnRef entries to callables.

Pure function entries are passed through unchanged. ColumnRef entries are resolved using the provided data file path.

Parameters:

Name Type Description Default
registry ValidationRegistry

The validation registry to resolve.

required
df_path Path | None

Path to the CSV file for ColumnRef resolution.

None

Returns:

Type Description
ResolvedValidation

A dictionary mapping parameter names to callable validation functions.

Raises:

Type Description
ValueError

If a ColumnRef cannot be resolved (missing column or no df_path).

Source code in src/anypinn/core/validation.py
def resolve_validation(
    registry: ValidationRegistry,
    df_path: Path | None = None,
) -> ResolvedValidation:
    """
    Resolve a ValidationRegistry by converting ColumnRef entries to callables.

    Pure function entries are passed through unchanged. ColumnRef entries
    are resolved using the provided data file path.

    Args:
        registry: The validation registry to resolve.
        df_path: Path to the CSV file for ColumnRef resolution.

    Returns:
        A dictionary mapping parameter names to callable validation functions.

    Raises:
        ValueError: If a ColumnRef cannot be resolved (missing column or no df_path).
    """

    resolved: ResolvedValidation = {}
    df: pd.DataFrame | None = None

    for name, source in registry.items():
        if source is None:
            continue

        if callable(source) and not isinstance(source, ColumnRef):
            resolved[name] = source

        elif isinstance(source, ColumnRef):
            if df_path is None:
                raise ValueError(
                    f"Cannot resolve ColumnRef for '{name}': no df_path provided. "
                    "Either pass a df_path or use a callable instead of ColumnRef."
                )

            if df is None:
                df = pd.read_csv(df_path)

            if source.column not in df.columns:
                raise ValueError(
                    f"Cannot resolve ColumnRef for '{name}': "
                    f"column '{source.column}' not found in data. "
                    f"Available columns: {list(df.columns)}"
                )

            column_values = torch.tensor(df[source.column].values, dtype=torch.float32)

            if source.transform is not None:
                column_values = source.transform(column_values)

            def make_lookup_fn(values: Tensor) -> Callable[[Tensor], Tensor]:
                cache: dict[torch.device, Tensor] = {}

                def lookup(x: Tensor) -> Tensor:
                    device = x.device
                    if device not in cache:
                        cache[device] = values.to(device)
                    idx = x.squeeze(-1).round().to(torch.int32)
                    return cache[device][idx]

                return lookup

            resolved[name] = _ColumnLookup(make_lookup_fn(column_values))

    return resolved
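
A resolved ColumnRef behaves as a row lookup: _get_true_param converts coordinates to row indices via round((x - x0) / dx) before calling the closure built by make_lookup_fn. A stdlib-only sketch of that round trip (`make_lookup` and `coords_to_rows` are illustrative names, not part of the API):

```python
def make_lookup(values):
    """Close over a column of ground-truth values; index by (rounded) row number."""
    def lookup(indices):
        return [values[round(i)] for i in indices]
    return lookup

def coords_to_rows(xs, x0, dx):
    """Map uniformly spaced coordinates back to row indices."""
    return [round((x - x0) / dx) for x in xs]

beta_true = make_lookup([0.30, 0.28, 0.25, 0.21])  # e.g. values from a "beta_true" column
rows = coords_to_rows([0.0, 0.1, 0.3], x0=0.0, dx=0.1)
vals = beta_true(rows)
```

Rounding absorbs small floating-point error in the division, which is why the real code requires a uniformly spaced domain with a known dx.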