
The Smooth Latent Operator: Parameter-Free Distributional Representations via Kernel Moment Recovery

Tamás Nagy, Ph.D. · Updated 2026-03-18 · Short Draft · Core Theory
Unreviewed draft. This paper has not been human-reviewed. Mathematical claims may be unverified. Use with appropriate caution.

Abstract

The Padé resummation of a moment generating function — as used in distributional Latent representations (Nagy, 2026h) and orbital Latent representations (Nagy, 2026g) — depends on a discrete Padé order \(N_P\). For fixed \(N_P\), the construction is algebraic and closed-form; as \(N_P\) varies, the result changes discontinuously (the linear system changes dimension, coefficients can jump, and the condition number spikes). For heavy-tailed distributions or high-volatility regimes, no single \(N_P\) works well.
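The order-dependence described above can be seen numerically. The sketch below is illustrative only: the paper's mixture MGFs are not given here, so it uses the Gamma(1/2) MGF \((1-t)^{-1/2}\) as a stand-in, building its scaled-moment coefficients and comparing \([N_P/N_P]\) Padé approximants for several orders via `scipy.interpolate.pade`. Each order solves a different-dimensional Toeplitz system, so the approximant value jumps as \(N_P\) changes.

```python
import numpy as np
from scipy.interpolate import pade

# Scaled-moment coefficients c_k = m_k/k! of a Gamma(1/2) MGF, (1-t)^{-1/2}.
# (Illustrative stand-in; the paper's mixture MGFs are not reproduced here.)
a = 0.5
c = [1.0]
for k in range(1, 7):
    c.append(c[-1] * (a + k - 1) / k)     # c_k = c_{k-1} * (a+k-1)/k

t = 0.5
exact = (1 - t) ** (-a)                    # sqrt(2) ~ 1.41421

# The [N_P/N_P] Pade approximant changes discontinuously with N_P:
# each order solves a linear system of a different dimension.
for n_p in (1, 2, 3):
    p, q = pade(c, n_p, n_p)
    print(n_p, p(t) / q(t), abs(p(t) / q(t) - exact))
```

The printed errors shrink with `n_p`, but the approximant itself moves in discrete jumps, which is exactly the discontinuity the paper sets out to remove.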

We resolve this by embedding the Padé-COS chain in a reproducing kernel Hilbert space (RKHS) of moment sequences. The scaled moments \(\{c_k = m_k/k!\}\) define a Latent element \(\Lambda\) in a Gaussian-weighted \(\ell^2\) space. The characteristic function is recovered by a kernel evaluation — a smooth inner product that replaces the Toeplitz matrix inverse. A continuous resolution parameter \(\alpha \in \mathbb{R}_{>0}\) replaces the discrete \(N_P\), making the entire chain smooth.
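One plausible instantiation of such a kernel evaluation, sketched under the assumption of a simple Gaussian weight on the moment index (the paper's exact RKHS kernel is not given here), damps the scaled-moment series with \(e^{-k^2/(2\alpha^2)}\) instead of truncating it at a hard Padé order. The result varies smoothly in \(\alpha\):

```python
import numpy as np
from math import factorial

def cf_kernel(t, c, alpha):
    """Hypothetical kernel-smoothed characteristic function: a Gaussian
    weight e^{-k^2/(2 alpha^2)} damps the scaled-moment series, replacing
    the hard truncation at a discrete Pade order N_P."""
    k = np.arange(len(c))
    w = np.exp(-k**2 / (2 * alpha**2))     # smooth in the resolution alpha
    return np.sum(np.asarray(c) * w * (1j * t) ** k)

# Scaled moments c_k = m_k/k! of N(0,1), whose cf is exp(-t^2/2).
m = [1, 0, 1, 0, 3, 0, 15, 0, 105, 0, 945]   # raw normal moments
c = [mk / factorial(k) for k, mk in enumerate(m)]

for alpha in (2.0, 4.0, 8.0):
    print(alpha, cf_kernel(0.5, c, alpha))   # converges toward exp(-0.125)
```

As \(\alpha \to \infty\) the weights approach 1 and the raw series is recovered, mirroring the Dirac kernel limit mentioned below; small \(\alpha\) regularizes heavy-tailed moment growth at the cost of resolution.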

The resolution \(\alpha\) can be set adaptively: the Latent Theorem (Nagy, 2026e) determines the natural representation size from the analyticity parameter \(\rho\) of the generating function, which is itself a smooth function of the model parameters. This yields a parameter-free smooth operator:

\[\mathcal{L}: (w, \mu, \Sigma) \to F_S(x)\]

that is smooth in all arguments, requires no tuning, and automatically adapts to the problem's difficulty. The classical Padé-COS formula is recovered as a special case (the Dirac kernel limit).
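The final COS step of the chain, recovering a density from a characteristic function, can be sketched with the standard Fang-Oosterlee cosine expansion (shown here for the density; the CDF follows by integrating the cosines). The truncation interval and term count are illustrative choices, not values from the paper:

```python
import numpy as np

def cos_density(phi, x, a=-8.0, b=8.0, n_terms=64):
    """Standard COS reconstruction of a density from a characteristic
    function phi on a truncation interval [a, b]."""
    k = np.arange(n_terms)
    u = k * np.pi / (b - a)
    F = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))
    F[0] *= 0.5                              # first term carries weight 1/2
    return np.sum(F * np.cos(u * (x - a)))

phi_normal = lambda u: np.exp(-0.5 * u**2)   # cf of N(0,1)
print(cos_density(phi_normal, 0.0))          # ~ 1/sqrt(2*pi) ~ 0.39894
```

Composing this with a smooth, \(\alpha\)-regularized characteristic function is what makes the whole map from model parameters to \(F_S(x)\) differentiable end to end.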

We introduce the concept of a grade-2 Latent: the Latent of the extraction chain itself. The distribution has a grade-1 Latent \(\Lambda\) (what the distribution is); the extraction process has a grade-2 Latent \(\alpha^*\) (how to optimally represent it). Together, \((\Lambda, \alpha^*)\) provides a complete, smooth, parameter-free characterization.

Length: 3,167 words
Claims: 7 theorems
Status: Draft
Target: Annals of Statistics / Journal of Machine Learning Research

Novelty

Replacing the discrete Padé order with a continuous RKHS resolution parameter yields a smooth operator from model parameters to CDF with automatic regularization; the 'grade-2 Latent' concept (a meta-representation of extraction difficulty) is the genuinely new idea.

Connects To

Universal Foundations: A Verified Library of Core Mathematic...
