Researchers have introduced Structural Correspondence, a new framework for neural networks built on parameter-efficient low-rank structures. The framework shows that augmenting low-rank layers with a minimal sparse diagonal component, yielding a Diagonal plus Low-Rank (DLoR) structure, is sufficient for universal approximation. The study proves that DLoR components can reconstruct any full-rank transformation and restore the Universal Approximation Theorem for general activation functions, challenging the assumption that dense matrices are necessary for universal expressivity.
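To make the DLoR structure concrete, here is a minimal sketch of such a layer in PyTorch: a weight of the form W = diag(d) + U Vᵀ, where diag(d) is the sparse diagonal component and U Vᵀ the rank-r factorization. The class name DLoRLinear and its parameterization are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class DLoRLinear(nn.Module):
    """Illustrative Diagonal plus Low-Rank (DLoR) layer: W = diag(d) + U @ V.T.

    The diagonal term adds only O(n) parameters on top of the rank-r
    factors, yet lets the layer represent full-rank maps that a pure
    low-rank layer (rank r < n) cannot.
    """

    def __init__(self, dim: int, rank: int):
        super().__init__()
        self.d = nn.Parameter(torch.ones(dim))                    # sparse diagonal component
        self.U = nn.Parameter(torch.randn(dim, rank) * dim**-0.5) # low-rank factor U
        self.V = nn.Parameter(torch.randn(dim, rank) * dim**-0.5) # low-rank factor V

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Diagonal path plus low-rank path; the full dim x dim matrix
        # is never materialized.
        return x * self.d + (x @ self.V) @ self.U.T

# Usage: a rank-4 DLoR layer on 256-dimensional inputs.
layer = DLoRLinear(dim=256, rank=4)
y = layer(torch.randn(8, 256))
```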
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Introduces a theoretical framework that could lead to more parameter-efficient neural network architectures.
RANK_REASON This is a theoretical computer science paper published on arXiv.