Shadow Mining: Inferring Higher-Grade Structure from Lower-Grade Data
Abstract
Every finite-dimensional approximation of a dynamical system loses information. The Shadow Principle (Nagy 2026a) establishes that this loss is structurally detectable but not exactly measurable: the projected system can know that something is missing, and approximately how much, but not what. This paper develops the practical methodology of shadow mining — the systematic extraction of maximum information about grade-\((k+1)\) structure from grade-\(k\) data alone.
We prove three theorems that characterize the information boundary: the Derivability Boundary Theorem (magnitude is bounded; direction is not), the Consistency Narrowing Theorem (conservation laws shrink the feasible set), and the Anti-Shadow Characterization (lossless projection iff the system lies in the subspace). These theorems divide shadow information into five extractable levels: detection, magnitude, directional hint, structural constraint, and reconstruction.
We present a four-step computational pipeline (shadow landscape mapping, targeted probing, consistency filtering, ansatz verification) and validate it numerically on the equal-mass planar three-body problem. The grade-2 generator (the Jacobian of the equations of motion) is computed at 100 configurations spanning five structural families. Without integrating any trajectory, the spectral entropy of the generator's eigenvalues predicts the finite-time Lyapunov exponent (Spearman \(r = -0.51\), \(p = 6.7 \times 10^{-8}\)), correctly identifying chaotic configurations from static data alone.
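The shadow indicator used here can be sketched in a few lines. This is a minimal illustration, not the paper's code: it assumes the spectral entropy is the Shannon entropy of the normalized eigenvalue-magnitude distribution (the paper's exact normalization is not stated), and it uses random matrices as stand-in "configurations" purely to show the computation and the rank-correlation step.

```python
import numpy as np
from scipy.stats import spearmanr

def spectral_entropy(J):
    """Shannon entropy of the normalized eigenvalue-magnitude
    distribution of a generator (Jacobian) matrix J.
    Assumed normalization: p_i = |lambda_i| / sum_j |lambda_j|."""
    mags = np.abs(np.linalg.eigvals(J))
    p = mags / mags.sum()
    p = p[p > 0]                         # drop zero modes before log
    return -np.sum(p * np.log(p))

# Toy stand-in for the 100-configuration study: random 12x12 generators
# (12 = phase-space dimension of the planar three-body problem), with the
# largest real eigenvalue part as a crude proxy for a chaos indicator.
rng = np.random.default_rng(0)
entropies, indicators = [], []
for _ in range(100):
    J = rng.normal(size=(12, 12))
    entropies.append(spectral_entropy(J))
    indicators.append(np.max(np.real(np.linalg.eigvals(J))))
r, pval = spearmanr(entropies, indicators)   # rank correlation, as in the paper
```

The actual study correlates against finite-time Lyapunov exponents computed separately; only the entropy-and-rank-correlation machinery is shown here.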
A key finding for Hamiltonian systems: because the eigenvalues of a Hamiltonian generator occur in \(\pm\lambda\) pairs, the two leading eigenvalue magnitudes coincide and the analyticity parameter \(\rho = \lambda_1/\lambda_2\) is identically 1, making spectral entropy the primary shadow indicator instead. This is the first systematic demonstration that grade-2 spectral structure predicts grade-3 (chaotic) behavior without simulation, validating the Shadow Principle as a practical research methodology.
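The eigenvalue-pairing claim can be checked directly. The sketch below, under the standard assumption that the grade-2 generator of a Hamiltonian system is a Hamiltonian matrix \(A = J\,\nabla^2 H\) (with \(J\) the symplectic form and \(\nabla^2 H\) symmetric), verifies numerically that the spectrum is symmetric under \(\lambda \mapsto -\lambda\) and hence \(\rho = \lambda_1/\lambda_2 = 1\); the matrix entries are random and stand in for a real Hessian.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3                                    # degrees of freedom
S = rng.normal(size=(2 * n, 2 * n))
H = S + S.T                              # symmetric stand-in Hessian of the Hamiltonian
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n),       np.zeros((n, n))]])   # symplectic form
A = J @ H                                # Hamiltonian (infinitesimally symplectic) matrix

# Eigenvalues of a Hamiltonian matrix come in +/- lambda pairs, so the two
# largest magnitudes coincide and rho = lambda_1 / lambda_2 equals 1.
mags = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
rho = mags[0] / mags[1]
```

This is why ρ carries no shadow information in the Hamiltonian case: the pairing fixes it at 1 regardless of the dynamics, pushing the discriminating signal into the entropy of the full spectrum.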