
Public API reference

All symbols are exported from tldecpy and available as tl.<name> after import tldecpy as tl.

Fitting

tldecpy.fit.multi.fit_multi

fit_multi(x, y, peaks, bg=None, *, beta=1.0, robust=None, options=None, strategy='local')

Fit a TL glow curve with multiple kinetic components and optional background.

Parameters:

  • x (ndarray, required) — Temperature grid T in kelvin. Must be 1-D and monotonically increasing. Minimum 10 points.
  • y (ndarray, required) — Observed thermoluminescence intensity I(T). Same length as x. Negative values are accepted but may degrade convergence.
  • peaks (list[PeakSpec], required) — One PeakSpec per component, each defining a model key (e.g. "fo_rq", "otor_lw"), initial parameter values in init, optional bounds, and optional fixed parameters.
  • bg (BackgroundSpec | None, default=None) — Background model specification. Pass None or omit for no background. Supported types: "linear", "exponential", "none".
  • beta (float, default=1.0) — Heating rate β in K/s. Must be positive. Affects the derived frequency factor s reported in peak results.
  • robust (RobustOptions | None, default=None) — Robust-loss configuration for residual minimisation. Defaults to linear (ordinary least squares) with no Poisson weighting.
  • options (FitOptions | None, default=None) — Solver options (local optimizer, tolerances) and optional UncertaintyOptions.
  • strategy ({"local", "global_hybrid", "global_hybrid_pso"}, default="local") — Optimisation strategy. "local" uses the configured optimizer directly. "global_hybrid" seeds with Differential Evolution; "global_hybrid_pso" seeds with Particle Swarm Optimization. Both global strategies perform a local TRF refinement on the best seed.

Returns:

MultiFitResult

Structured fit output. Key fields:

  • converged — True if the optimizer reached tolerance.
  • metrics — Metrics with FOM, R², AIC, BIC and optional uncertainty fields.
  • peaks — list of PeakResult with fitted parameters, y_hat, area and uncertainties.
  • hit_bounds — dict of {param_name: bool}; True means the parameter touched an optimization bound.
  • jac_cond — Jacobian condition number estimate.

Raises:

  • ValueError — if local_optimizer and loss are incompatible ("lm" supports only "linear" loss).
  • ValueError — if strategy is not one of the three accepted strings.

Notes

The Levenberg-Marquardt optimizer ("lm") does not support box bounds natively. Bounds are enforced via soft penalty terms appended to the residual vector.
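The penalty scheme can be sketched as follows; penalized_residuals, the linear penalty shape and the scale factor are illustrative, not the library's exact implementation:

```python
import numpy as np

def penalized_residuals(residuals, params, bounds, scale=1e3):
    # Append one soft-penalty term per parameter; each term is zero inside the
    # box and grows linearly with the size of the violation outside it.
    penalties = []
    for value, (lo, hi) in zip(params, bounds):
        violation = max(lo - value, 0.0) + max(value - hi, 0.0)
        penalties.append(scale * violation)
    return np.concatenate([np.asarray(residuals, dtype=float), np.asarray(penalties)])

# The second parameter (1.5) violates its (0, 1) box by 0.5.
r = penalized_residuals([0.1, -0.2], params=[0.5, 1.5],
                        bounds=[(0.0, 1.0), (0.0, 1.0)], scale=10.0)
```

Because "lm" minimises the sum of squared residuals, the appended terms pull out-of-bounds parameters back toward the box without requiring native bound support.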

Examples:

>>> import tldecpy as tl
>>> T, I = tl.load_refglow("x001")
>>> peaks, bg = tl.autoinit_multi(T, I, max_peaks=2)
>>> result = tl.fit_multi(T, I, peaks=peaks, bg=bg, beta=1.0)
>>> print(result.converged, f"{result.metrics.FOM:.2f}%")
References

.. [1] Kitis, G., et al. (1998). Thermoluminescence glow-curve deconvolution functions for first, second and general orders of kinetics. J. Phys. D 31, 2636.
.. [2] Virtanen, P., et al. (2020). SciPy 1.0: Fundamental Algorithms. Nature Methods 17, 261.

Source code in tldecpy/fit/multi.py
def fit_multi(
    x: np.ndarray,
    y: np.ndarray,
    peaks: list[PeakSpec],
    bg: BackgroundSpec | None = None,
    *,
    beta: float = 1.0,
    robust: RobustOptions | None = None,
    options: FitOptions | None = None,
    strategy: Literal["local", "global_hybrid", "global_hybrid_pso"] = "local",
) -> MultiFitResult:
    r"""
    Fit a TL glow curve with multiple kinetic components and optional background.

    Parameters
    ----------
    x : numpy.ndarray
        Temperature grid :math:`T` in kelvin. Must be 1-D and monotonically
        increasing. Minimum 10 points.
    y : numpy.ndarray
        Observed thermoluminescence intensity :math:`I(T)`. Same length as
        ``x``. Negative values are accepted but may degrade convergence.
    peaks : list[PeakSpec]
        One :class:`~tldecpy.schemas.PeakSpec` per component, each defining
        a model key (e.g. ``"fo_rq"``, ``"otor_lw"``), initial parameter
        values in ``init``, optional ``bounds``, and optional ``fixed``
        parameters.
    bg : BackgroundSpec | None, optional
        Background model specification.  Pass ``None`` or omit for no
        background.  Supported types: ``"linear"``, ``"exponential"``,
        ``"none"``.
    beta : float, default=1.0
        Heating rate :math:`\beta` in K/s. Must be positive. Affects the
        derived frequency factor ``s`` reported in peak results.
    robust : RobustOptions | None, optional
        Robust-loss configuration for residual minimisation.  Defaults to
        linear (ordinary least squares) with no Poisson weighting.
    options : FitOptions | None, optional
        Solver options (local optimizer, tolerances) and optional
        :class:`~tldecpy.schemas.UncertaintyOptions`.
    strategy : {"local", "global_hybrid", "global_hybrid_pso"}, default="local"
        Optimisation strategy.  ``"local"`` uses the configured optimizer
        directly.  ``"global_hybrid"`` seeds with Differential Evolution;
        ``"global_hybrid_pso"`` seeds with Particle Swarm Optimization.
        Both global strategies perform a local TRF refinement on the best
        seed.

    Returns
    -------
    MultiFitResult
        Structured fit output.  Key fields:

        - ``converged`` — ``True`` if the optimizer reached tolerance.
        - ``metrics`` — :class:`~tldecpy.schemas.Metrics` with FOM, R²,
          AIC, BIC and optional uncertainty fields.
        - ``peaks`` — list of :class:`~tldecpy.schemas.PeakResult` with
          fitted parameters, ``y_hat``, ``area`` and ``uncertainties``.
        - ``hit_bounds`` — dict of ``{param_name: bool}``; ``True`` means
          the parameter touched an optimization bound.
        - ``jac_cond`` — Jacobian condition number estimate.

    Raises
    ------
    ValueError
        If ``local_optimizer`` and ``loss`` are incompatible (``"lm"``
        supports only ``"linear"`` loss).
    ValueError
        If ``strategy`` is not one of the three accepted strings.

    Notes
    -----
    The Levenberg-Marquardt optimizer (``"lm"``) does not support box bounds
    natively.  Bounds are enforced via soft penalty terms appended to the
    residual vector.

    Examples
    --------
    >>> import tldecpy as tl
    >>> T, I = tl.load_refglow("x001")
    >>> peaks, bg = tl.autoinit_multi(T, I, max_peaks=2)
    >>> result = tl.fit_multi(T, I, peaks=peaks, bg=bg, beta=1.0)
    >>> print(result.converged, f"{result.metrics.FOM:.2f}%")

    References
    ----------
    .. [1] Kitis, G., et al. (1998). Thermoluminescence glow-curve
           deconvolution functions for first, second and general orders
           of kinetics. J. Phys. D 31, 2636.
    .. [2] Virtanen, P., et al. (2020). SciPy 1.0: Fundamental Algorithms.
           Nature Methods 17, 261.
    """
    fitter = MultiFitter(x, y, beta=beta, robust=robust, options=options)
    for peak in peaks:
        fitter.add_peak(peak)
    if bg is not None:
        fitter.set_background(bg)
    else:
        fitter.set_background(BackgroundSpec.model_validate({"type": "none"}))
    return fitter.solve(strategy=strategy)

tldecpy.fit.solvers.fit_single_peak

fit_single_peak(x, y, model='fo', init=None, bounds=None, beta=1.0, robust=None)

Fit a single TL component using the multi-component engine.

Parameters:

  • x (ndarray, required) — Temperature grid T in kelvin.
  • y (ndarray, required) — Measured intensity I(T).
  • model (str, default="fo") — Model key/family accepted by tldecpy.models.registry.
  • init (dict[str, float] | None, default=None) — Initial parameter guesses (for example Tm, Im, E).
  • bounds (dict[str, tuple[float, float]] | None, default=None) — Optional parameter bounds as (min, max).
  • beta (float, default=1.0) — Heating rate β in K/s.
  • robust (RobustOptions | None, default=None) — Robust fitting configuration.

Returns:

FitResult

Backward-compatible single-peak result schema.
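For orientation, the first-order glow-peak shape of Kitis et al. (1998), which a "fo" model key conventionally denotes, can be written standalone. The function below is an illustrative sketch; the actual model implementation lives in tldecpy.models.registry:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def fo_kitis(T, Tm, Im, E):
    # First-order glow-peak shape of Kitis et al. (1998):
    # peaks at T = Tm with intensity Im; E is the activation energy in eV.
    d = (E / (K_B * T)) * (T - Tm) / Tm
    return Im * np.exp(
        1.0 + d
        - (T / Tm) ** 2 * np.exp(d) * (1.0 - 2.0 * K_B * T / E)
        - 2.0 * K_B * Tm / E
    )

T = np.linspace(300.0, 500.0, 2001)
y = fo_kitis(T, Tm=400.0, Im=1000.0, E=1.0)
```

By construction the expression evaluates to Im exactly at T = Tm, which is what makes Tm and Im convenient fit parameters compared with the raw frequency-factor parameterisation.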

Source code in tldecpy/fit/solvers.py
def fit_single_peak(
    x: np.ndarray,
    y: np.ndarray,
    model: str = "fo",
    init: Optional[Dict[str, float]] = None,
    bounds: Optional[Dict[str, Tuple[float, float]]] = None,
    beta: float = 1.0,
    robust: Optional[RobustOptions] = None,
) -> FitResult:
    r"""
    Fit a single TL component using the multi-component engine.

    Parameters
    ----------
    x : numpy.ndarray
        Temperature grid :math:`T` in kelvin.
    y : numpy.ndarray
        Measured intensity :math:`I(T)`.
    model : str, default="fo"
        Model key/family accepted by :mod:`tldecpy.models.registry`.
    init : dict[str, float] | None, optional
        Initial parameter guesses (for example ``Tm``, ``Im``, ``E``).
    bounds : dict[str, tuple[float, float]] | None, optional
        Optional parameter bounds as ``(min, max)``.
    beta : float, default=1.0
        Heating rate :math:`\beta` in K/s.
    robust : RobustOptions | None, optional
        Robust fitting configuration.

    Returns
    -------
    FitResult
        Backward-compatible single-peak result schema.
    """
    order_type = get_order_from_key(model)
    init_data = dict(init) if init else {}
    bounds_data = dict(bounds) if bounds else None
    fixed_data: Optional[Dict[str, float]] = None

    if order_type == "continuous":
        # Accept legacy single-peak aliases for compatibility.
        if "In" not in init_data and "Im" in init_data:
            init_data["In"] = init_data["Im"]
        if "E0" not in init_data and "E" in init_data:
            init_data["E0"] = init_data["E"]
        if "Tn" not in init_data and "Tm" in init_data:
            init_data["Tn"] = init_data["Tm"]

        if bounds_data is not None:
            if "In" not in bounds_data and "Im" in bounds_data:
                bounds_data["In"] = bounds_data["Im"]
            if "E0" not in bounds_data and "E" in bounds_data:
                bounds_data["E0"] = bounds_data["E"]
            if "Tn" not in bounds_data and "Tm" in bounds_data:
                bounds_data["Tn"] = bounds_data["Tm"]

        # Legacy continuous calls (Im/E/Tm) do not specify sigma.
        # Keep the historical synthetic-variant behavior with sigma=0.05.
        has_sigma_bound = bounds_data is not None and "sigma" in bounds_data
        if "sigma" not in init_data and not has_sigma_bound:
            fixed_data = {"sigma": 0.05}

    spec = PeakSpec(
        model=model,
        init=init_data,
        bounds=bounds_data,
        fixed=fixed_data,
        name="Single Peak",
    )

    result_multi = fit_multi(x, y, peaks=[spec], bg=None, beta=beta, robust=robust)
    peak = result_multi.peaks[0]
    params = dict(peak.params)
    if order_type == "continuous":
        # Legacy aliases expected by existing single-peak tests/consumers.
        params.setdefault("Im", params.get("In", np.nan))
        params.setdefault("E", params.get("E0", np.nan))
        params.setdefault("Tm", params.get("Tn", np.nan))

    return FitResult(
        params=params,
        cov=None,
        metrics=result_multi.metrics.model_dump(),
        y_hat=result_multi.y_hat_total,
        residuals=result_multi.residuals,
        converged=result_multi.converged,
        message=result_multi.message,
        model_type=model,
    )

tldecpy.fit.automator.iterative_deconvolution

iterative_deconvolution(x, y, *, max_peaks=6, allow_models=('fo', 'go', 'otor_lw'), bg_mode='auto', sensitivity=1.0, min_snr=1.0, widths=None, residual_sigma_threshold=3.0, beta=1.0, robust=None, options=None, strategy='local')

Run automatic TL deconvolution with heuristic seeding and one-shot fitting.

Parameters:

  • x (ndarray, required) — Temperature grid T in kelvin.
  • y (ndarray, required) — Measured glow-curve intensity I(T).
  • max_peaks (int, default=6) — Maximum number of components used in the automatic model.
  • allow_models (tuple[str, ...], default=("fo", "go", "otor_lw")) — Allowed model families/aliases for automatic seed-to-model mapping.
  • bg_mode ({"linear", "exponential", "none", "auto"}, default="auto") — Background inference mode.
  • sensitivity (float, default=1.0) — Sensitivity used by the fallback local-peak detector.
  • min_snr (float, default=1.0) — Minimum SNR used by the CWT-based detector.
  • widths (ndarray | None, default=None) — Optional CWT width grid in sample units.
  • residual_sigma_threshold (float, default=3.0) — Kept for backward compatibility with the historical iterative API.
  • beta (float, default=1.0) — Heating rate β in K/s.
  • robust (RobustOptions | None, default=None) — Robust loss/weighting settings.
  • options (FitOptions | None, default=None) — Local optimizer and uncertainty options.
  • strategy ({"local", "global_hybrid", "global_hybrid_pso"}, default="local") — Optimization strategy used by the underlying fitter.

Returns:

MultiFitResult

Full multi-peak fit result including diagnostics and uncertainty payloads.
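The seed thinning applied before fitting keeps only candidates that are well separated in temperature. It can be sketched with a hypothetical helper; select_distinct_seeds below is illustrative, and the library's internal selection may differ in detail:

```python
def select_distinct_seeds(positions, max_peaks, min_sep):
    # Greedy selection: walk candidates in detection-strength order and keep
    # each one only if it sits at least min_sep away from every kept seed.
    chosen: list[float] = []
    for pos in positions:
        if all(abs(pos - kept) >= min_sep for kept in chosen):
            chosen.append(pos)
        if len(chosen) == int(max_peaks):
            break
    return sorted(chosen)

# 402 K is dropped: it falls within 10 K of the stronger 400 K candidate.
seeds = select_distinct_seeds([400.0, 402.0, 480.0, 355.0], max_peaks=6, min_sep=10.0)
```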

Source code in tldecpy/fit/automator.py
def iterative_deconvolution(
    x: np.ndarray,
    y: np.ndarray,
    *,
    max_peaks: int = 6,
    allow_models: tuple[str, ...] = ("fo", "go", "otor_lw"),
    bg_mode: Literal["linear", "exponential", "none", "auto"] = "auto",
    sensitivity: float = 1.0,
    min_snr: float = 1.0,
    widths: np.ndarray | None = None,
    residual_sigma_threshold: float = 3.0,
    beta: float = 1.0,
    robust: RobustOptions | None = None,
    options: FitOptions | None = None,
    strategy: Literal["local", "global_hybrid", "global_hybrid_pso"] = "local",
) -> MultiFitResult:
    r"""
    Run automatic TL deconvolution with heuristic seeding and one-shot fitting.

    Parameters
    ----------
    x : numpy.ndarray
        Temperature grid :math:`T` in kelvin.
    y : numpy.ndarray
        Measured glow-curve intensity :math:`I(T)`.
    max_peaks : int, default=6
        Maximum number of components used in the automatic model.
    allow_models : tuple[str, ...], default=("fo", "go", "otor_lw")
        Allowed model families/aliases for automatic seed-to-model mapping.
    bg_mode : {"linear", "exponential", "none", "auto"}, default="auto"
        Background inference mode.
    sensitivity : float, default=1.0
        Sensitivity used by fallback local-peak detector.
    min_snr : float, default=1.0
        Minimum SNR used by CWT-based detector.
    widths : numpy.ndarray | None, optional
        Optional CWT width grid in sample units.
    residual_sigma_threshold : float, default=3.0
        Kept for backward compatibility with historical iterative API.
    beta : float, default=1.0
        Heating rate :math:`\beta` in K/s.
    robust : RobustOptions | None, optional
        Robust loss/weighting settings.
    options : FitOptions | None, optional
        Local optimizer and uncertainty options.
    strategy : {"local", "global_hybrid", "global_hybrid_pso"}, default="local"
        Optimization strategy used by the underlying fitter.

    Returns
    -------
    MultiFitResult
        Full multi-peak fit result including diagnostics and uncertainty payloads.
    """
    _ = residual_sigma_threshold
    x_arr = _as_1d(x, "x")
    y_arr = _as_1d(y, "y")
    if x_arr.size != y_arr.size:
        raise ValueError("x and y must have the same length.")
    if max_peaks < 1:
        raise ValueError("max_peaks must be >= 1.")

    y_clean = preprocess(x_arr, y_arr)
    allow_set = {model.lower() for model in allow_models}
    sample_spacing = max(float(np.median(np.diff(x_arr))), np.finfo(float).eps)

    seeds = detect_peaks_cwt(x_arr, y_clean, min_snr=min_snr, widths=widths)
    if not seeds:
        seeds = pick_peaks(x_arr, y_clean, sensitivity=sensitivity)
    if not seeds:
        seeds = [_seed_from_index(x_arr, y_clean, int(np.argmax(y_clean)))]

    x_span = float(np.max(x_arr) - np.min(x_arr))
    min_sep = max(float(np.median(np.diff(x_arr))) * 5.0, x_span / 25.0)
    selected = _select_distinct_seeds(seeds, int(max_peaks), min_sep=min_sep)
    specs = [_seed_to_spec(seed, i, allow_set, sample_spacing) for i, seed in enumerate(selected)]
    background = _infer_background(x_arr, y_clean, bg_mode)

    return fit_multi(
        x_arr,
        y_arr,
        peaks=specs,
        bg=background,
        beta=beta,
        robust=robust,
        options=options,
        strategy=strategy,
    )

Initialization

tldecpy.fit.init.autoinit_multi

autoinit_multi(x, y, max_peaks=6, allow_models=('fo', 'go', 'otor_lw'), bg_mode='auto', sensitivity=1.0)

Build heuristic peak/background initialisation for multi-peak deconvolution.

The function executes three stages:

  1. Preprocess — Savitzky-Golay smoothing + robust outlier removal.
  2. Detect — CWT-based peak finder returns candidate positions, FWHM and asymmetry coefficient μ_g.
  3. Seed — assigns a kinetic model per peak using μ_g, estimates activation energy E with Chen-style FWHM heuristics, and constructs PeakSpec objects with automatic bounds.

Parameters:

  • x (ndarray, required) — Temperature grid T in kelvin. Must be 1-D with at least 10 points.
  • y (ndarray, required) — Measured TL intensity I(T). Same length as x. Values should be non-negative; negative values are clipped to zero during preprocessing.
  • max_peaks (int, default=6) — Maximum number of peaks to include in the returned list. If more candidates are detected, the max_peaks highest-intensity ones are kept and sorted by temperature.
  • allow_models (tuple[str, ...], default=("fo", "go", "otor_lw")) — Allowed model families or canonical keys. The seeding algorithm selects the best match from this set based on peak asymmetry. Continuous models ("cont_gauss", "cont_exp") are only selected when explicitly listed here.
  • bg_mode ({"linear", "exponential", "none", "auto"}, default="auto") — Background initialisation mode. "auto" adds an exponential background when the signal at the curve boundaries is significantly above the noise floor; "none" suppresses background entirely.
  • sensitivity (float, default=1.0) — Peak detection sensitivity multiplier passed to pick_peaks. Values > 1 lower prominence thresholds and recover weaker peaks; values < 1 raise thresholds and suppress weak candidates. Range: (0, ∞).

Returns:

tuple[list[PeakSpec], BackgroundSpec | None]

  • peaks — list of PeakSpec, one per detected candidate, with init, bounds and name filled in.
  • bg — BackgroundSpec if a background was inferred, or None.

Raises:

  • ValueError — if x or y are not 1-D arrays with at least 10 points.

Notes

Activation-energy initialisation uses Chen-style peak-shape heuristics:

    E ≈ c_w · k·Tm² / ω - b_w · (2·k·Tm)

where ω is the FWHM and the constants c_w, b_w depend on the kinetic order.
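A minimal numeric sketch of this heuristic, assuming Chen's standard full-width constants c_w = 2.52 + 10.2·(μ_g - 0.42) and b_w = 1 (the first-order case corresponds to μ_g = 0.42):

```python
K_B = 8.617e-5  # Boltzmann constant in eV/K

def estimate_energy_chen_fwhm(Tm, fwhm, mu_g=0.42):
    # Chen full-width estimate: E = c_w * k * Tm**2 / omega - b_w * (2 * k * Tm),
    # with c_w = 2.52 + 10.2 * (mu_g - 0.42) and b_w = 1.
    c_w = 2.52 + 10.2 * (mu_g - 0.42)
    b_w = 1.0
    return c_w * K_B * Tm**2 / fwhm - b_w * 2.0 * K_B * Tm

# A first-order peak at 400 K with a 35 K FWHM gives roughly 0.92 eV.
E = estimate_energy_chen_fwhm(Tm=400.0, fwhm=35.0)
```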

Examples:

>>> import tldecpy as tl
>>> T, I = tl.load_refglow("x002")
>>> peaks, bg = tl.autoinit_multi(T, I, max_peaks=4, allow_models=("fo_rq",))
>>> print(len(peaks), "peaks seeded")
References

.. [1] Chen, R., and McKeever, S. W. S. (1997). Theory of thermoluminescence and related phenomena. World Scientific.

Source code in tldecpy/fit/init.py
def autoinit_multi(
    x: np.ndarray,
    y: np.ndarray,
    max_peaks: int = 6,
    allow_models: tuple[str, ...] = ("fo", "go", "otor_lw"),
    bg_mode: Literal["linear", "exponential", "none", "auto"] = "auto",
    sensitivity: float = 1.0,
) -> Tuple[List[PeakSpec], Optional[BackgroundSpec]]:
    r"""
    Build heuristic peak/background initialisation for multi-peak deconvolution.

    The function executes three stages:

    1. **Preprocess** — Savitzky-Golay smoothing + robust outlier removal.
    2. **Detect** — CWT-based peak finder returns candidate positions, FWHM
       and asymmetry coefficient :math:`\mu_g`.
    3. **Seed** — assigns a kinetic model per peak using :math:`\mu_g`, estimates
       activation energy :math:`E` with Chen-style FWHM heuristics, and
       constructs :class:`~tldecpy.schemas.PeakSpec` objects with automatic bounds.

    Parameters
    ----------
    x : numpy.ndarray
        Temperature grid :math:`T` in kelvin. Must be 1-D with at least
        10 points.
    y : numpy.ndarray
        Measured TL intensity :math:`I(T)`. Same length as ``x``. Values
        should be non-negative; negative values are clipped to zero during
        preprocessing.
    max_peaks : int, default=6
        Maximum number of peaks to include in the returned list.  If more
        candidates are detected, the ``max_peaks`` highest-intensity ones
        are kept and sorted by temperature.
    allow_models : tuple[str, ...], default=("fo", "go", "otor_lw")
        Allowed model families or canonical keys.  The seeding algorithm
        selects the best match from this set based on peak asymmetry.
        Continuous models (``"cont_gauss"``, ``"cont_exp"``) are only
        selected when explicitly listed here.
    bg_mode : {"linear", "exponential", "none", "auto"}, default="auto"
        Background initialisation mode.  ``"auto"`` adds an exponential
        background when the signal at the curve boundaries is significantly
        above the noise floor; ``"none"`` suppresses background entirely.
    sensitivity : float, default=1.0
        Peak detection sensitivity multiplier passed to :func:`pick_peaks`.
        Values > 1 lower prominence thresholds and recover weaker peaks;
        values < 1 raise thresholds and suppress weak candidates.
        Range: (0, ∞).

    Returns
    -------
    tuple[list[PeakSpec], BackgroundSpec | None]
        ``peaks`` — list of :class:`~tldecpy.schemas.PeakSpec`, one per
        detected candidate, with ``init``, ``bounds`` and ``name`` filled in.
        ``bg`` — :class:`~tldecpy.schemas.BackgroundSpec` if a background was
        inferred, or ``None``.

    Raises
    ------
    ValueError
        If ``x`` or ``y`` are not 1-D arrays with at least 10 points.

    Notes
    -----
    Activation-energy initialisation uses Chen-style peak-shape heuristics:

    .. math::

        E \approx c_w \frac{k T_m^2}{\omega} - b_w (2 k T_m)

    where :math:`\omega` is the FWHM and the constants :math:`c_w`, :math:`b_w`
    depend on the kinetic order.

    Examples
    --------
    >>> import tldecpy as tl
    >>> T, I = tl.load_refglow("x002")
    >>> peaks, bg = tl.autoinit_multi(T, I, max_peaks=4, allow_models=("fo_rq",))
    >>> print(len(peaks), "peaks seeded")

    References
    ----------
    .. [1] Chen, R., and McKeever, S. W. S. (1997). *Theory of
           thermoluminescence and related phenomena.* World Scientific.
    """
    y_clean = preprocess(x, y)
    seeds = pick_peaks(x, y_clean, sensitivity=sensitivity)
    allow_set = {m.lower() for m in allow_models}

    if len(seeds) > max_peaks:
        seeds = sorted(seeds, key=lambda seed: seed.Im, reverse=True)[:max_peaks]
        seeds = sorted(seeds, key=lambda seed: seed.Tm)

    specs: List[PeakSpec] = []
    for i, seed in enumerate(seeds):
        model, shape_hint = _select_model(seed.symmetry, allow_set)
        if shape_hint == "continuous":
            # Continuous-distribution heuristic used in Benavente (2019): TN ~ TM and IN ~ IM.
            init_params = {"Tn": seed.Tm, "In": seed.Im, "E0": 1.0, "sigma": 0.05}
        else:
            energy_est = estimate_energy_chen(seed.Tm, seed.fwhm, shape_hint)
            init_params = {"Tm": seed.Tm, "Im": seed.Im, "E": energy_est}
            if shape_hint == "go":
                init_params["b"] = 1.5
            if shape_hint == "otor":
                init_params["R"] = 1e-3
            if shape_hint == "mix":
                init_params["alpha"] = 0.5

        bounds = make_bounds_from_init(init_params, model)
        specs.append(
            PeakSpec.model_validate(
                {
                    "name": f"P{i + 1}",
                    "model": model,
                    "init": init_params,
                    "bounds": bounds,
                }
            )
        )

    background_spec: Optional[BackgroundSpec] = None
    if bg_mode == "auto":
        noise_floor = float(np.percentile(y_clean, 5))
        if y_clean[0] > noise_floor * 2.0 or y_clean[-1] > noise_floor * 2.0:
            background_spec = BackgroundSpec.model_validate(
                {"type": "exponential", "init": {"a": 0.0, "b": 1.0, "c": 100.0}}
            )
    elif bg_mode != "none":
        background_spec = BackgroundSpec.model_validate({"type": bg_mode})

    return specs, background_spec

tldecpy.fit.init.pick_peaks

pick_peaks(x, y, sensitivity=1.0)

Detect candidate TL peaks and return initialization seeds.

Parameters:

  • x (ndarray, required) — Temperature grid in kelvin.
  • y (ndarray, required) — Preprocessed intensity signal.
  • sensitivity (float, default=1.0) — Detection sensitivity multiplier. Higher values lower effective prominence/height thresholds and can recover weaker peaks.

Returns:

list[PeakSeed]

Ordered list of peak seeds including T_m, I_m, full-width-at-half-maximum and symmetry descriptors.
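The detection stage builds on SciPy's find_peaks and peak_widths. A standalone sketch on synthetic data (the threshold fractions mirror the base percentages in the source below, at sensitivity=1):

```python
import numpy as np
from scipy.signal import find_peaks, peak_widths

# Two synthetic glow peaks on a kelvin grid.
T = np.linspace(300.0, 600.0, 601)
y = 100.0 * np.exp(-((T - 380.0) / 15.0) ** 2) + 60.0 * np.exp(-((T - 480.0) / 20.0) ** 2)

prominence = float(np.max(y)) * 0.01  # base prominence fraction
height = float(np.max(y)) * 0.005     # base height fraction
idx, _ = find_peaks(y, height=height, prominence=prominence, distance=7)
widths = peak_widths(y, idx, rel_height=0.5)  # widths[0] holds FWHM in samples
```

Converting widths[0] from samples to kelvin (multiply by the grid spacing) yields the fwhm field carried by each PeakSeed.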

Source code in tldecpy/fit/init.py
def pick_peaks(x: np.ndarray, y: np.ndarray, sensitivity: float = 1.0) -> List[PeakSeed]:
    r"""
    Detect candidate TL peaks and return initialization seeds.

    Parameters
    ----------
    x : numpy.ndarray
        Temperature grid in kelvin.
    y : numpy.ndarray
        Preprocessed intensity signal.
    sensitivity : float, default=1.0
        Detection sensitivity multiplier. Higher values lower effective
        prominence/height thresholds and can recover weaker peaks.

    Returns
    -------
    list[PeakSeed]
        Ordered list of peak seeds including :math:`T_m`, :math:`I_m`,
        full-width-at-half-maximum and symmetry descriptors.
    """
    base_prominence_pct = 0.01
    base_height_pct = 0.005

    prominence = float(np.max(y) * (base_prominence_pct / max(sensitivity, 1e-6)))
    height = float(np.max(y) * (base_height_pct / max(sensitivity, 1e-6)))
    distance = max(3, int(7 / max(sensitivity, 1e-6)))

    peaks, _ = find_peaks(y, height=height, prominence=prominence, distance=distance)
    if len(peaks) == 0 and sensitivity <= 1.0:
        peaks, _ = find_peaks(y, height=height, distance=distance)

    try:
        widths = peak_widths(y, peaks, rel_height=0.5)
    except ValueError:
        return []

    seeds: List[PeakSeed] = []
    for i, idx in enumerate(peaks):
        width_data = (
            widths[0][i : i + 1],
            widths[1][i : i + 1],
            widths[2][i : i + 1],
            widths[3][i : i + 1],
        )
        shape = analyze_peak_shape(x, y, int(idx), width_data)
        seeds.append(
            PeakSeed(
                index=int(idx),
                Tm=float(x[idx]),
                Im=float(y[idx]),
                fwhm=float(shape["omega"]),
                symmetry=float(shape["mu_g"]),
            )
        )

    return seeds

tldecpy.fit.init.preprocess

preprocess(x, y, sg_window=7, sg_poly=3, remove_outliers=True)

Preprocess a TL glow curve before peak detection.

Parameters:

  • x (ndarray, required) — Temperature grid in kelvin.
  • y (ndarray, required) — Measured intensity values for each temperature sample.
  • sg_window (int, default=7) — Savitzky-Golay window length (odd integer in samples).
  • sg_poly (int, default=3) — Savitzky-Golay polynomial order.
  • remove_outliers (bool, default=True) — If True, apply a robust outlier cleaning pass before smoothing.

Returns:

ndarray

Smoothed, non-negative intensity signal used by automatic initialization.
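The same smoothing-and-clipping pipeline can be reproduced with SciPy directly; a minimal sketch with the default window and polynomial order:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
T = np.linspace(300.0, 600.0, 301)
y = 100.0 * np.exp(-((T - 450.0) / 25.0) ** 2) + rng.normal(0.0, 2.0, T.size)

# Smooth with a Savitzky-Golay filter, then clip negatives, as preprocess() does.
y_smooth = np.maximum(savgol_filter(y, window_length=7, polyorder=3), 0.0)
```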

Source code in tldecpy/fit/init.py
def preprocess(
    x: np.ndarray,
    y: np.ndarray,
    sg_window: int = 7,
    sg_poly: int = 3,
    remove_outliers: bool = True,
) -> np.ndarray:
    r"""
    Preprocess a TL glow curve before peak detection.

    Parameters
    ----------
    x : numpy.ndarray
        Temperature grid in kelvin.
    y : numpy.ndarray
        Measured intensity values for each temperature sample.
    sg_window : int, default=7
        Savitzky-Golay window length (odd integer in samples).
    sg_poly : int, default=3
        Savitzky-Golay polynomial order.
    remove_outliers : bool, default=True
        If ``True``, apply a robust outlier cleaning pass before smoothing.

    Returns
    -------
    numpy.ndarray
        Smoothed, non-negative intensity signal used by automatic initialization.
    """
    _ = x
    y_proc = y.copy()
    if remove_outliers:
        y_proc = clean_outliers(y_proc)
    y_proc = safe_savgol(y_proc, window_length=sg_window, polyorder=sg_poly)
    return np.maximum(y_proc, 0.0)

tldecpy.fit.detection.detect_peaks_cwt

detect_peaks_cwt(x, y, min_snr=1.0, widths=None)

Detect TL peak candidates using CWT ridge responses (Ricker/Mexican-hat).

Parameters:

  • x (ndarray, required) — Temperature grid T in kelvin.
  • y (ndarray, required) — Intensity signal I(T) (typically preprocessed and non-negative).
  • min_snr (float, default=1.0) — Minimum signal-to-noise threshold used to retain ridge-consistent candidates.
  • widths (ndarray | Iterable[float] | None, default=None) — Wavelet widths in sample units. When omitted, an adaptive range based on data length is used.

Returns:

list[PeakSeed]

Peak candidates with position, intensity and shape descriptors compatible with the auto-initialization pipeline.

Notes

The implementation uses find_peaks_cwt when available and falls back to a local ridge-threshold detector otherwise.
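When find_peaks_cwt is unavailable, a ridge map is built from Ricker responses directly. A self-contained sketch of that idea (the wavelet sampling and width grid here are illustrative, not the library's exact internals):

```python
import numpy as np

def ricker(npts, a):
    # Ricker ("Mexican hat") wavelet sampled on npts points at scale a.
    t = np.arange(npts) - (npts - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi**0.25)
    return amp * (1.0 - (t / a) ** 2) * np.exp(-(t**2) / (2.0 * a**2))

def cwt_ricker(y, widths):
    # One convolution row per width; the max over scales is the ridge response.
    rows = []
    for w in widths:
        npts = min(10 * int(w) + 1, y.size)  # odd length keeps the kernel centered
        rows.append(np.convolve(y, ricker(npts, w), mode="same"))
    return np.vstack(rows)

T = np.linspace(300.0, 600.0, 600)
y = np.exp(-((T - 430.0) / 18.0) ** 2)
ridge = np.max(np.abs(cwt_ricker(y, [5, 10, 20, 40])), axis=0)
```

Responses that stay strong across neighbouring widths mark stable ridges, which is why the maximum over scales localises the peak even when any single scale is noisy.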

Source code in tldecpy/fit/detection.py
def detect_peaks_cwt(
    x: np.ndarray,
    y: np.ndarray,
    min_snr: float = 1.0,
    widths: np.ndarray | Iterable[float] | None = None,
) -> list[PeakSeed]:
    r"""
    Detect TL peak candidates using CWT ridge responses (Ricker/Mexican-hat).

    Parameters
    ----------
    x : numpy.ndarray
        Temperature grid :math:`T` in kelvin.
    y : numpy.ndarray
        Intensity signal :math:`I(T)` (typically preprocessed and non-negative).
    min_snr : float, default=1.0
        Minimum signal-to-noise threshold used to retain ridge-consistent candidates.
    widths : numpy.ndarray | Iterable[float] | None, optional
        Wavelet widths in sample units. When omitted, an adaptive range based on
        data length is used.

    Returns
    -------
    list[PeakSeed]
        Peak candidates with position, intensity and shape descriptors compatible
        with the auto-initialization pipeline.

    Notes
    -----
    The implementation uses ``find_peaks_cwt`` when available and falls back to a
    local ridge-threshold detector otherwise.
    """
    x_arr = _as_1d_array(x, "x")
    y_arr = _as_1d_array(y, "y")
    if x_arr.size != y_arr.size:
        raise ValueError("x and y must have the same length.")

    y_pos = np.maximum(y_arr, 0.0)
    if float(np.max(y_pos)) <= 0.0:
        return []

    widths_arr = _resolve_widths(y_pos.size, widths)
    y_centered = y_pos - float(np.median(y_pos))
    y_scaled = y_centered / max(float(np.max(np.abs(y_centered))), 1.0)

    # Ridge map across scales: strong responses remain stable for nearby widths.
    coeffs = _cwt_ricker(y_scaled, widths_arr)
    ridge_response = np.max(np.abs(coeffs), axis=0)
    ridge_noise = _mad_sigma(ridge_response)
    ridge_floor = float(np.median(ridge_response))
    ridge_threshold = ridge_floor + max(min_snr, 0.0) * ridge_noise

    if _find_peaks_cwt is not None:
        cwt_candidates = _find_peaks_cwt(
            y_scaled,
            widths_arr,
            min_snr=max(min_snr, 0.5),
            noise_perc=20,
        )
        if cwt_candidates is None:
            cwt_candidates = []
        candidate_idx = np.asarray(cwt_candidates, dtype=int)
    else:
        candidate_idx = np.asarray([], dtype=int)

    if candidate_idx.size == 0:
        fallback, _ = find_peaks(
            ridge_response,
            height=ridge_threshold,
            distance=max(1, int(np.min(widths_arr))),
        )
        candidate_idx = np.asarray(fallback, dtype=int)

    candidate_idx = candidate_idx[(candidate_idx >= 0) & (candidate_idx < y_pos.size)]
    candidate_idx = np.unique(candidate_idx)
    if candidate_idx.size == 0:
        return []

    # Keep only ridge-consistent candidates and order by ridge prominence.
    keep = candidate_idx[ridge_response[candidate_idx] >= ridge_threshold]
    if keep.size == 0:
        return []

    keep = np.asarray(
        sorted(keep.tolist(), key=lambda idx: float(ridge_response[idx]), reverse=True),
        dtype=int,
    )

    # Snap CWT ridge candidates to true local maxima to avoid degenerate width/prominence
    # estimates (common source of PeakPropertyWarning in noisy signals).
    keep_original = keep.copy()
    local_peaks, local_props = find_peaks(y_pos, prominence=0.0)
    if local_peaks.size > 0:
        prominence_by_idx = {
            int(local_peaks[i]): float(local_props["prominences"][i])
            for i in range(local_peaks.size)
        }
        search_radius = max(1, int(np.ceil(np.min(widths_arr))))
        snapped: list[int] = []
        seen: set[int] = set()
        for idx in keep:
            lo = max(1, int(idx) - search_radius)
            hi = min(y_pos.size - 2, int(idx) + search_radius)
            in_window = (local_peaks >= lo) & (local_peaks <= hi)
            if np.any(in_window):
                candidates = local_peaks[in_window]
            else:
                # Fallback to the nearest detected local maximum when no candidate
                # falls inside the target window.
                nearest = int(np.argmin(np.abs(local_peaks - int(idx))))
                candidates = local_peaks[nearest : nearest + 1]
            best = int(candidates[np.argmax(ridge_response[candidates])])
            if best in seen:
                continue
            if prominence_by_idx.get(best, 0.0) <= np.finfo(float).eps:
                continue
            seen.add(best)
            snapped.append(best)

        keep = np.asarray(snapped, dtype=int) if snapped else keep_original
    else:
        keep = keep_original

    keep = keep[(keep > 0) & (keep < y_pos.size - 1)]
    keep = np.unique(keep)
    if keep.size == 0:
        return []

    width_pack = peak_widths(y_pos, keep, rel_height=0.5)
    seeds: list[PeakSeed] = []
    for i, idx in enumerate(keep):
        shape_data = (
            width_pack[0][i : i + 1],
            width_pack[1][i : i + 1],
            width_pack[2][i : i + 1],
            width_pack[3][i : i + 1],
        )
        shape = analyze_peak_shape(x_arr, y_pos, int(idx), shape_data)
        omega = float(shape["omega"])
        if (not np.isfinite(omega)) or omega <= 0.0:
            dx = float(np.median(np.diff(x_arr)))
            omega = max(dx * max(2.0, float(np.min(widths_arr))), np.finfo(float).eps)
        seeds.append(
            PeakSeed(
                index=int(idx),
                Tm=float(x_arr[idx]),
                Im=float(y_pos[idx]),
                fwhm=omega,
                symmetry=float(shape["mu_g"]),
            )
        )

    return sorted(seeds, key=lambda seed: seed.index)
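The ridge threshold used above (median response plus `min_snr` robust noise units) is easy to sketch with NumPy. The `mad_sigma` helper below is an assumed stand-in for the private `_mad_sigma`, using the standard 1.4826 Gaussian-consistency factor:

```python
import numpy as np

def mad_sigma(values: np.ndarray) -> float:
    # Robust noise scale: median absolute deviation, rescaled so it
    # matches the standard deviation for Gaussian-distributed data.
    med = float(np.median(values))
    return 1.4826 * float(np.median(np.abs(values - med)))

rng = np.random.default_rng(42)
ridge_response = np.abs(rng.normal(size=2048))

min_snr = 2.0
threshold = float(np.median(ridge_response)) + max(min_snr, 0.0) * mad_sigma(ridge_response)
keep = np.flatnonzero(ridge_response >= threshold)  # ridge-consistent indices
```

Candidates whose maximum wavelet response falls below `threshold` are discarded before the snapping and width-analysis steps.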

Simulation

tldecpy.simulate.core.simulate

simulate(rhs_func, params, *, T0, T_end, beta, y0, state_keys, method='LSODA', points=1000, noise_config=None)

Integrate TL kinetic ODEs under linear heating and return a typed result.

The time variable is related to temperature by the linear heating law :math:T(t) = T_0 + \beta t. The ODE is integrated in time and the output is re-expressed on a temperature grid.

TL intensity is computed as minus the time-derivative of the first state variable: :math:I(T) = -\dot{y}_0(t).

Parameters:

Name Type Description Default
rhs_func Callable

ODE right-hand side callable. Must accept positional arguments (t, y, *params) followed by keyword arguments beta and T0: rhs_func(t, y, *params, beta=beta, T0=T0) -> array-like. The first element of the return value is treated as the trap population :math:n(t); TL intensity is its negative derivative.

required
params tuple[float, ...]

Model-specific kinetic parameters passed as positional args to rhs_func after t and y. Order must match the function signature.

required
T0 float

Initial temperature in kelvin. Must be < T_end.

required
T_end float

Final temperature in kelvin. Must be > T0.

required
beta float

Linear heating rate :math:\beta in K/s. Must be > 0.

required
y0 list[float]

Initial state vector passed to scipy.integrate.solve_ivp. Length must match the number of ODE states in rhs_func.

required
state_keys list[str]

Human-readable names for each state variable. Used as keys in SimulationResult.states. Length must match len(y0).

required
method str

Integration algorithm accepted by scipy.integrate.solve_ivp (e.g. "LSODA", "RK45", "Radau"). "LSODA" is recommended for stiff TL kinetics.

"LSODA"
points int

Number of evenly-spaced output temperature samples.

1000
noise_config dict | None

If provided, additive noise is applied to the intensity array. Recognised keys:

  • "mode" — "gaussian" (default) or "poisson"

  • "sigma" — float, standard deviation for Gaussian noise
  • "seed" — int or None, RNG seed for reproducibility
None

Returns:

Type Description
SimulationResult

Typed result with fields:

  • T — temperature grid in kelvin (float64, 1-D, length points)
  • I — simulated TL intensity (float64, 1-D)
  • states — dict mapping each state_key to its trajectory
  • time — integration time vector in seconds

Notes

A RuntimeWarning is emitted if the ODE integrator does not converge. The result is still returned with whatever data was produced up to the failure point.
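The linear-heating pattern described above can be reproduced standalone with SciPy; the first-order rate law and all parameter values below are illustrative assumptions, not tldecpy internals:

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order TL kinetics: dn/dt = -n * s * exp(-E / (k_B * T(t))),
# with the linear heating law T(t) = T0 + beta * t.
K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def rhs_fo(t, y, s, E, *, beta, T0):
    T = T0 + beta * t
    return [-y[0] * s * np.exp(-E / (K_B * T))]

T0, T_end, beta = 300.0, 600.0, 1.0
s, E, n0 = 1e12, 1.2, 1.0          # illustrative kinetic parameters
total_time = (T_end - T0) / beta
t_eval = np.linspace(0.0, total_time, 500)

sol = solve_ivp(lambda t, y: rhs_fo(t, y, s, E, beta=beta, T0=T0),
                (0.0, total_time), [n0], method="LSODA", t_eval=t_eval)

# Re-express on the temperature grid; intensity is minus the
# time-derivative of the trap population (the first state).
T = T0 + beta * sol.t
I = np.array([-rhs_fo(t, sol.y[:, i], s, E, beta=beta, T0=T0)[0]
              for i, t in enumerate(sol.t)])
```

The resulting `I` traces a single glow peak between `T0` and `T_end`, which is exactly the shape `simulate` returns for a one-state first-order model.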

Examples:

>>> import numpy as np
>>> import tldecpy as tl
>>> from tldecpy.simulate.fo import ode_fo, infer_n0_s
>>> T0, T_end, beta = 300.0, 600.0, 1.0
>>> E, Tm = 1.2, 450.0
>>> n0, s = infer_n0_s(Im=1.0, Tm=Tm, E=E, beta=beta)
>>> result = tl.simulate(
...     ode_fo, params=(s, E),
...     T0=T0, T_end=T_end, beta=beta,
...     y0=[n0], state_keys=["n"],
... )
>>> print(result.T.shape, result.I.max())
Source code in tldecpy/simulate/core.py
def simulate(
    rhs_func: Callable[..., Sequence[float] | np.ndarray],
    params: tuple[float, ...],
    *,
    T0: float,
    T_end: float,
    beta: float,
    y0: list[float],
    state_keys: list[str],
    method: str = "LSODA",
    points: int = 1000,
    noise_config: dict[str, Any] | None = None,
) -> SimulationResult:
    r"""
    Integrate TL kinetic ODEs under linear heating and return a typed result.

    The time variable is related to temperature by the linear heating law
    :math:`T(t) = T_0 + \beta t`.  The ODE is integrated in time and the
    output is re-expressed on a temperature grid.

    TL intensity is computed as minus the time-derivative of the first state
    variable:  :math:`I(T) = -\dot{y}_0(t)`.

    Parameters
    ----------
    rhs_func : Callable
        ODE right-hand side callable.  Must accept positional arguments
        ``(t, y, *params)`` followed by keyword arguments ``beta`` and ``T0``:
        ``rhs_func(t, y, *params, beta=beta, T0=T0) -> array-like``.
        The first element of the return value is treated as the trap
        population :math:`n(t)`; TL intensity is its negative derivative.
    params : tuple[float, ...]
        Model-specific kinetic parameters passed as positional args to
        ``rhs_func`` after ``t`` and ``y``.  Order must match the function
        signature.
    T0 : float
        Initial temperature in kelvin.  Must be < ``T_end``.
    T_end : float
        Final temperature in kelvin.  Must be > ``T0``.
    beta : float
        Linear heating rate :math:`\beta` in K/s.  Must be > 0.
    y0 : list[float]
        Initial state vector passed to ``scipy.integrate.solve_ivp``.
        Length must match the number of ODE states in ``rhs_func``.
    state_keys : list[str]
        Human-readable names for each state variable.  Used as keys in
        ``SimulationResult.states``.  Length must match ``len(y0)``.
    method : str, default="LSODA"
        Integration algorithm accepted by ``scipy.integrate.solve_ivp``
        (e.g. ``"LSODA"``, ``"RK45"``, ``"Radau"``).  ``"LSODA"`` is
        recommended for stiff TL kinetics.
    points : int, default=1000
        Number of evenly-spaced output temperature samples.
    noise_config : dict | None, optional
        If provided, additive noise is applied to the intensity array.
        Recognised keys:

        - ``"mode"`` — ``"gaussian"`` (default) or ``"poisson"``
        - ``"sigma"`` — float, standard deviation for Gaussian noise
        - ``"seed"`` — int or ``None``, RNG seed for reproducibility

    Returns
    -------
    SimulationResult
        Typed result with fields:

        - ``T`` — temperature grid in kelvin (float64, 1-D, length ``points``)
        - ``I`` — simulated TL intensity (float64, 1-D)
        - ``states`` — dict mapping each ``state_key`` to its trajectory
        - ``time`` — integration time vector in seconds

    Notes
    -----
    A ``RuntimeWarning`` is emitted if the ODE integrator does not converge.
    The result is still returned with whatever data was produced up to the
    failure point.

    Examples
    --------
    >>> import numpy as np
    >>> import tldecpy as tl
    >>> from tldecpy.simulate.fo import ode_fo, infer_n0_s
    >>> T0, T_end, beta = 300.0, 600.0, 1.0
    >>> E, Tm = 1.2, 450.0
    >>> n0, s = infer_n0_s(Im=1.0, Tm=Tm, E=E, beta=beta)
    >>> result = tl.simulate(
    ...     ode_fo, params=(s, E),
    ...     T0=T0, T_end=T_end, beta=beta,
    ...     y0=[n0], state_keys=["n"],
    ... )
    >>> print(result.T.shape, result.I.max())
    """
    total_time = (T_end - T0) / beta
    t_span = (0.0, total_time)
    t_eval = np.linspace(0.0, total_time, points)

    solution = solve_ivp(
        lambda t, y: rhs_func(t, y, *params, beta=beta, T0=T0),
        t_span,
        y0,
        method=method,
        t_eval=t_eval,
        rtol=1e-6,
        atol=1e-9,
    )

    if not solution.success:
        warnings.warn(
            f"Simulation did not converge: {solution.message}",
            RuntimeWarning,
            stacklevel=2,
        )

    t_out = T0 + beta * solution.t

    derivs: list[np.ndarray] = []
    for idx in range(len(solution.t)):
        deriv = rhs_func(solution.t[idx], solution.y[:, idx], *params, beta=beta, T0=T0)
        derivs.append(np.asarray(deriv, dtype=float))
    deriv_array = np.asarray(derivs, dtype=float).T

    intensity = -deriv_array[0]
    if noise_config:
        mode_str = str(noise_config.get("mode", "gaussian"))
        mode: Literal["gaussian", "poisson"] = "poisson" if mode_str == "poisson" else "gaussian"
        sigma = float(noise_config.get("sigma", 0.0))
        seed_value = noise_config.get("seed")
        seed = int(seed_value) if seed_value is not None else None
        intensity = add_noise(intensity, mode=mode, sigma=sigma, seed=seed)

    states = {key: solution.y[i] for i, key in enumerate(state_keys)}
    return SimulationResult(T=t_out, I=intensity, states=states, time=solution.t)
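The ``noise_config`` handling at the end of ``simulate`` delegates to ``add_noise``; the stand-in below is an assumption about that helper's semantics (Gaussian: additive with std ``sigma``; Poisson: counts redrawn from the non-negative intensity), shown only to illustrate the recognised keys:

```python
import numpy as np

def add_noise_sketch(intensity, *, mode="gaussian", sigma=0.0, seed=None):
    # Assumed semantics: Gaussian noise is additive with std `sigma`;
    # Poisson noise redraws each sample as a count with that mean.
    rng = np.random.default_rng(seed)
    if mode == "poisson":
        return rng.poisson(np.clip(intensity, 0.0, None)).astype(float)
    return intensity + rng.normal(0.0, sigma, size=intensity.shape)

# Synthetic glow-like curve, then seeded Gaussian noise for reproducibility.
clean = 1e4 * np.exp(-0.5 * ((np.linspace(300, 600, 301) - 450.0) / 20.0) ** 2)
noisy = add_noise_sketch(clean, mode="gaussian", sigma=50.0, seed=0)
```

Passing the same ``seed`` reproduces the identical noise realisation, which is the point of the ``"seed"`` key.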

Data

tldecpy.data.refglow.load_refglow

load_refglow(key)

Load one Refglow/GLOCANIN benchmark curve as temperature and intensity arrays.

Parameters:

Name Type Description Default
key str

Dataset identifier. Accepted forms:

  • "xNNN" — e.g. "x001", "x010"
  • "RefglowNNN" — e.g. "Refglow003" (mapped to "x003")

Available datasets (NNN from 001 to 010):

    Key          Description
    x001         Synthetic, 1 peak (FO)
    x002         Synthetic, 4 peaks (FO), :math:\beta = 8.4 K/s
    x003–x008    TLD-100, 5 peaks
    x009         TLD-700, 9 peaks (complex)
    x010         TLD-100, low dose
required

Returns:

Type Description
tuple[ndarray, ndarray]

T — temperature array in kelvin (float64, 1-D). I — TL intensity array in arbitrary detector counts (float64, 1-D). Both arrays have the same length and contain only finite values.

Raises:

Type Description
ValueError

If key is not one of the ten recognised identifiers, or if the CSV file does not contain at least two columns of valid numeric data.

FileNotFoundError

If the dataset file cannot be located inside the package data directory.

Examples:

>>> import tldecpy as tl
>>> T, I = tl.load_refglow("x001")
>>> print(T.shape, I.max())
>>> # Alternative key form
>>> T2, I2 = tl.load_refglow("Refglow002")
References

.. [1] Bos, A. J. J., et al. (1993). Intercomparison of glow curve analysis computer programs: the GLOCANIN project. Radiat. Prot. Dosim. 47, 473.
.. [2] Bos, A. J. J., et al. (1994). GLOCANIN: A data set for the intercomparison of glow-curve analysis programs. Radiat. Meas. 23, 393.

Source code in tldecpy/data/refglow.py
def load_refglow(key: str) -> tuple[np.ndarray, np.ndarray]:
    r"""
    Load one Refglow/GLOCANIN benchmark curve as temperature and intensity arrays.

    Parameters
    ----------
    key : str
        Dataset identifier.  Accepted forms:

        - ``"xNNN"`` — e.g. ``"x001"``, ``"x010"``
        - ``"RefglowNNN"`` — e.g. ``"Refglow003"`` (mapped to ``"x003"``)

        Available datasets (``NNN`` from ``001`` to ``010``):

        .. list-table::
           :header-rows: 1

           * - Key
             - Description
           * - ``x001``
             - Synthetic, 1 peak (FO)
           * - ``x002``
             - Synthetic, 4 peaks (FO), :math:`\beta` = 8.4 K/s
           * - ``x003``–``x008``
             - TLD-100, 5 peaks
           * - ``x009``
             - TLD-700, 9 peaks (complex)
           * - ``x010``
             - TLD-100, low dose

    Returns
    -------
    tuple[numpy.ndarray, numpy.ndarray]
        ``T`` — temperature array in kelvin (float64, 1-D).
        ``I`` — TL intensity array in arbitrary detector counts (float64, 1-D).
        Both arrays have the same length and contain only finite values.

    Raises
    ------
    ValueError
        If ``key`` is not one of the ten recognised identifiers, or if the
        CSV file does not contain at least two columns of valid numeric data.
    FileNotFoundError
        If the dataset file cannot be located inside the package data
        directory.

    Examples
    --------
    >>> import tldecpy as tl
    >>> T, I = tl.load_refglow("x001")
    >>> print(T.shape, I.max())

    >>> # Alternative key form
    >>> T2, I2 = tl.load_refglow("Refglow002")

    References
    ----------
    .. [1] Bos, A. J. J., et al. (1993). *Intercomparison of glow curve
           analysis computer programs: the GLOCANIN project.*
           Radiat. Prot. Dosim. 47, 473.
    .. [2] Bos, A. J. J., et al. (1994). *GLOCANIN: A data set for the
           intercomparison of glow-curve analysis programs.*
           Radiat. Meas. 23, 393.
    """
    clean_key = str(key).strip().lower().replace("refglow", "x")
    csv_path = resolve_refglow_path(clean_key)
    filename = csv_path.name

    frame = pd.read_csv(csv_path)
    if frame.shape[1] < 2:
        raise ValueError(f"Dataset {filename} must contain at least two columns.")

    t_values = pd.to_numeric(frame.iloc[:, 0], errors="coerce").to_numpy(dtype=float)
    i_values = pd.to_numeric(frame.iloc[:, 1], errors="coerce").to_numpy(dtype=float)
    valid = np.isfinite(t_values) & np.isfinite(i_values)

    if int(valid.sum()) < 2:
        raise ValueError(f"Dataset {filename} does not contain enough valid numeric samples.")

    return t_values[valid], i_values[valid]
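The key normalisation at the top of the function can be mirrored in a one-liner; a minimal sketch:

```python
def normalize_refglow_key(key: str) -> str:
    # Case-insensitive; "RefglowNNN" folds to the canonical "xNNN" form.
    return str(key).strip().lower().replace("refglow", "x")
```

Both ``"Refglow003"`` and ``"x003"`` therefore resolve to the same dataset file.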

tldecpy.data.refglow.list_refglow

list_refglow()

List available Refglow dataset identifiers.

Returns:

Type Description
list[str]

Available IDs from x001 to x010.

Source code in tldecpy/data/refglow.py
def list_refglow() -> list[str]:
    """
    List available Refglow dataset identifiers.

    Returns
    -------
    list[str]
        Available IDs from ``x001`` to ``x010``.
    """
    return list(REFGLOW_META.keys())

Model registry

tldecpy.models.registry.list_models

list_models(order=None, include_aliases=False)

List available model keys for TL kinetic families.

Parameters:

Name Type Description Default
order str | None

Optional family filter (fo, so, go, mo, otor, cont). Aliases (mix, continuous, dist) are also accepted.

None
include_aliases bool

When True, include legacy alias keys in the output.

False

Returns:

Type Description
list[str]

Canonical model keys (and optional aliases).

Source code in tldecpy/models/registry.py
def list_models(order: str | None = None, include_aliases: bool = False) -> List[str]:
    """
    List available model keys for TL kinetic families.

    Parameters
    ----------
    order : str | None, optional
        Optional family filter (``fo``, ``so``, ``go``, ``mo``, ``otor``, ``cont``).
        Aliases (``mix``, ``continuous``, ``dist``) are also accepted.
    include_aliases : bool, default=False
        When ``True``, include legacy alias keys in the output.

    Returns
    -------
    list[str]
        Canonical model keys (and optional aliases).
    """
    if order is None:
        keys = list(CANONICAL_MODELS.keys())
        if include_aliases:
            keys.extend(sorted(_ALIAS_TO_CANONICAL.keys()))
        return keys

    family_key = order
    if family_key == "mix":
        family_key = "mo"
    elif family_key in {"continuous", "dist"}:
        family_key = "cont"

    if family_key not in _FAMILY_MEMBERS:
        raise ValueError(f"Unknown order: {order}. Try: {list(_FAMILY_MEMBERS.keys())}")

    keys = list(_FAMILY_MEMBERS[family_key])
    if include_aliases:
        alias_keys = [
            alias
            for alias, canonical in _ALIAS_TO_CANONICAL.items()
            if CANONICAL_MODELS[canonical].family == family_key
        ]
        keys.extend(sorted(alias_keys))
    return keys
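The alias-folding pattern in ``list_models`` can be sketched with a miniature registry; aside from ``fo_rq`` (mentioned in the ``fit_multi`` reference) the member keys here are hypothetical placeholders:

```python
# Miniature registry: family aliases fold to canonical family keys
# before membership lookup, mirroring the structure of list_models.
FAMILY_MEMBERS = {
    "fo": ["fo_rq"],
    "mo": ["mo_mix"],        # hypothetical key
    "cont": ["cont_gauss"],  # hypothetical key
}
ALIASES = {"mix": "mo", "continuous": "cont", "dist": "cont"}

def list_family(order: str) -> list[str]:
    family = ALIASES.get(order, order)
    if family not in FAMILY_MEMBERS:
        raise ValueError(f"Unknown order: {order}. Try: {list(FAMILY_MEMBERS)}")
    return list(FAMILY_MEMBERS[family])
```

Filtering with an alias (e.g. ``"mix"``) returns the same members as its canonical family key (``"mo"``).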

tldecpy.models.registry.get_model

get_model(key)

Resolve and return a callable model implementation.

Parameters:

Name Type Description Default
key str

Canonical key, alias, or family key.

required

Returns:

Type Description
Callable

Numerical model function that maps TL parameters to intensity.

Source code in tldecpy/models/registry.py
def get_model(key: str) -> ModelFunc:
    """
    Resolve and return a callable model implementation.

    Parameters
    ----------
    key : str
        Canonical key, alias, or family key.

    Returns
    -------
    collections.abc.Callable
        Numerical model function that maps TL parameters to intensity.
    """
    return get_info(key).func

tldecpy.version.get_version_info

get_version_info()

Return package and API version metadata.

Returns:

Type Description
VersionInfo

Structured version payload containing package and API versions.

Source code in tldecpy/version.py
def get_version_info() -> VersionInfo:
    """
    Return package and API version metadata.

    Returns
    -------
    VersionInfo
        Structured version payload containing package and API versions.
    """
    return VersionInfo(
        version=__version__,
        api_version=__version__,
        build="ga",
    )