The Universal Spectral Representation Theorem: Breaking the Curse of Dimensionality
Abstract
How many parameters does it take to represent a smooth high-dimensional probability law to accuracy \(\varepsilon\)? We prove that the answer is \(N = \Theta(\log(1/\varepsilon)/\log\rho)\), where \(\rho > 1\) is the analyticity radius, and that this representation size is independent of the ambient dimension. This many Fourier coefficients suffice (upper bound, constructive, via the Eigen-COS method) and are necessary (lower bound, information-theoretic, via Fourier mode counting); the bounds match, establishing the exact rate. The theorem is not tied to one application domain: the same law governs smooth distributional objects arising in portfolio aggregation, diffusion dynamics, celestial mechanics, collision-probability estimation, and knowledge representations in machine learning. The curse of dimensionality in representing smooth systems is therefore not fundamental: it is an artifact of simulation- and sampling-based descriptions in settings where a reusable spectral representation exists. The structural core is formally verified in Lean 4 (27 files across 3 proof libraries, with the capstone theorems machine-checked).
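The Eigen-COS construction itself is not reproduced in this abstract; as an illustration of why roughly \(\log(1/\varepsilon)\) Fourier coefficients suffice for an analytic density, the following minimal one-dimensional sketch recovers the standard normal density from its characteristic function via a COS-style cosine expansion. The function name `cos_density` and the truncation interval \([-10, 10]\) are choices for this example, not part of the paper's method.

```python
import numpy as np

def cos_density(x, phi, a, b, N):
    """Reconstruct a density on [a, b] from its characteristic function phi
    using the first N terms of a COS-style cosine expansion:
        f(x) ~ sum'_{k=0}^{N-1} F_k cos(k*pi*(x-a)/(b-a)),
    where the prime means the k=0 term is halved."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    # Cosine coefficients obtained directly from the characteristic function.
    F = (2.0 / (b - a)) * np.real(phi(u) * np.exp(-1j * u * a))
    F[0] *= 0.5  # first term is halved in the expansion
    return F @ np.cos(np.outer(u, x - a))

phi_normal = lambda u: np.exp(-0.5 * u**2)  # standard normal c.f.
a, b = -10.0, 10.0
true_f = 1.0 / np.sqrt(2.0 * np.pi)  # N(0,1) density at x = 0

# Error at x = 0 drops rapidly as N grows, so N ~ log(1/eps) terms
# reach accuracy eps for this analytic density.
for N in (8, 16, 32, 64):
    err = abs(cos_density(np.array([0.0]), phi_normal, a, b, N)[0] - true_f)
    print(N, err)
```

For a Gaussian the coefficient decay is even faster than geometric, so the error hits machine precision well before N = 64; the geometric rate \(\rho^{-N}\) in the theorem is the general guarantee for analyticity radius \(\rho\).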
Novelty
Connecting Kolmogorov \(\varepsilon\)-entropy bounds to spectral risk-measure computation: the representation complexity is \(\Theta(\log(1/\varepsilon)/\log\rho)\), independent of portfolio dimension. The information-theoretic lower bound for financial risk computation appears to be new.
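The claimed rate translates directly into a coefficient budget. A minimal sketch, assuming the constant in \(\Theta(\cdot)\) is one (the paper's exact constant is not stated in this excerpt), so `coeff_count` below is illustrative:

```python
import math

def coeff_count(eps, rho):
    """Illustrative coefficient budget N = ceil(log(1/eps) / log(rho))
    for target accuracy eps and analyticity radius rho > 1.
    The Theta(.) constant is taken to be 1 for this sketch."""
    if not (0.0 < eps < 1.0 and rho > 1.0):
        raise ValueError("need 0 < eps < 1 and rho > 1")
    return math.ceil(math.log(1.0 / eps) / math.log(rho))

print(coeff_count(1e-8, 2.0))  # → 27
```

Note that the budget depends only on \(\varepsilon\) and \(\rho\), never on the ambient dimension, which is the content of the dimension-independence claim.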