ENTITY
MLP
PulseAugur coverage of MLP — every cluster mentioning MLP across labs, papers, and developer communities, ranked by signal.
Total · 30d: 48 (48 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 46 (46 over 90d)
TIER MIX · 90D
RECENT · PAGE 1/1 · 2 TOTAL
- New theory suggests transformers use geometric memorization
  Researchers have proposed a new theory of how transformer language models memorize factual information, suggesting a 'geometric' form of memorization rather than traditional associative memory. This model posits that le…
- New DLR-Lock method secures open-weight language models
  Researchers have developed a new method called DLR-Lock to prevent unauthorized modifications of open-weight language models. This technique replaces standard MLPs with deep low-rank residual networks, which increase me…
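The summary above only names the general construction. As a rough illustration of what "replacing a standard MLP with a deep low-rank residual network" could look like, here is a minimal numpy sketch; the actual DLR-Lock method is not specified in this snippet, so the layer shapes, depth, and nonlinearity below are illustrative assumptions, not the paper's design:

```python
import numpy as np

def standard_mlp(x, W_up, W_down):
    # Conventional transformer MLP block: up-project, ReLU, down-project.
    return np.maximum(x @ W_up, 0.0) @ W_down

def deep_low_rank_residual(x, factors):
    # Hypothetical stand-in for the replacement: a stack of residual
    # layers, each adding a rank-r update computed via two thin
    # matrices A (d x r) and B (r x d).
    h = x
    for A, B in factors:
        h = h + np.maximum(h @ A, 0.0) @ B  # low-rank residual step
    return h

rng = np.random.default_rng(0)
d, r, depth = 64, 4, 8                      # illustrative sizes
x = rng.normal(size=(2, d))                 # batch of 2 token vectors
factors = [
    (rng.normal(size=(d, r)) * 0.1, rng.normal(size=(r, d)) * 0.1)
    for _ in range(depth)
]
y = deep_low_rank_residual(x, factors)
print(y.shape)  # (2, 64)
```

The intuition, per the summary, is that spreading the MLP's capacity across many low-rank residual layers makes the weights harder to modify in isolation; how DLR-Lock enforces that is cut off in the snippet above.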