
ML Knowledge Artifacts Algebra

Dr. Tamás Nagy · Draft · Machine Learning
DOI: 10.5281/zenodo.18910387
Unreviewed draft. This paper has not been human-reviewed. Mathematical claims may be unverified. Use with appropriate caution.

Abstract

A 200-tree Random Forest has 126,074 parameters. Its knowledge? Three numbers and a basis.

We introduce the Knowledge Artifact — a portable representation of what any ML model has learned — and the Knowledge Algebra — provably exact arithmetic on these artifacts. Any model with a predict method is decomposed into spectral coefficients via kernel eigendecomposition. The resulting vector supports addition (combine models: 5x error reduction), subtraction (remove bias: 10x correlation reduction; remove dangerous capabilities: 38x), averaging (federated learning: 21% improvement, zero data sharing), distance (model comparison), differencing (structural audit, \(R^2 = 0.95\)), and continual extension (zero catastrophic forgetting). Function-space arithmetic is 19.2x more accurate than weight-space arithmetic (Task Arithmetic) — provably, by Eckart-Young. All artifacts are portable, composable, and predictable via one matrix multiplication in any language.
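The operations the abstract lists (addition, subtraction, averaging, distance, prediction by one matrix multiplication) can be illustrated with a minimal sketch. This is not the paper's actual API: the names `fit_artifact` and `predict`, the anchor-point construction, and the RBF kernel choice are all illustrative assumptions; the sketch only shows why arithmetic on spectral coefficient vectors corresponds to arithmetic on the underlying functions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared distances -> RBF (Gaussian) kernel matrix.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Anchor points define a shared function-space basis (illustrative choice).
rng = np.random.default_rng(0)
Z = rng.uniform(-1, 1, size=(50, 1))

# Kernel eigendecomposition gives an orthonormal basis: top-k eigenvectors.
K = rbf_kernel(Z, Z)
eigvals, eigvecs = np.linalg.eigh(K)
k = 10
U = eigvecs[:, -k:]               # shape (50, k); columns are orthonormal

def fit_artifact(model):
    """Project a model's predictions at the anchors onto the basis
    (hypothetical helper, not the paper's API)."""
    f = model(Z).ravel()          # function values at anchor points
    return U.T @ f                # k spectral coefficients

def predict(coeffs):
    """Reconstruct predictions at the anchors: one matrix multiplication."""
    return U @ coeffs

# Two toy 'models' -- anything exposing a predict-like callable works.
model_a = lambda X: np.sin(3 * X)
model_b = lambda X: 0.5 * X

c_a = fit_artifact(model_a)
c_b = fit_artifact(model_b)

avg = (c_a + c_b) / 2             # averaging (federated-style merge)
diff = c_a - c_b                  # subtraction (remove a component)
dist = np.linalg.norm(c_a - c_b)  # distance (model comparison)

# Because predict() is linear, artifact arithmetic is exactly function
# arithmetic in the span of the basis:
print(np.allclose(predict(avg), (predict(c_a) + predict(c_b)) / 2))  # True
```

The truncated eigenbasis is where the Eckart-Young argument enters: the top-k eigenvectors give the best rank-k approximation of the kernel, so projections onto them lose the least function-space information for a given artifact size.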

Length: 6,457 words
Claims: 10 theorems
Status: Unknown

Connects To

Formal Foundations of Stochastic Gradient Descent Universal Foundations: A Verified Library of Core Mathematic...

Referenced By

The Latent Algebra: A Universal Representational Language fo...
