ENTITY
Llama 3.2 1B
PulseAugur coverage of Llama 3.2 1B — every cluster mentioning Llama 3.2 1B across labs, papers, and developer communities, ranked by signal.
Total · 30d
0
0 over 90d
Releases · 30d
0
0 over 90d
Papers · 30d
0
0 over 90d
TIER MIX · 90D
No coverage in the last 90 days.
SENTIMENT · 30D
1 day with sentiment data
RECENT · PAGE 1/1 · 2 TOTAL
- New BCJR-QAT method pushes LLM quantization to 2 bits per weight
  Researchers have developed BCJR-QAT, a novel method for quantizing large language models to 2 bits per weight, a significant advancement beyond current post-training quantization techniques. This new approach uses a dif…
- New FPO method prevents alignment collapse in iterative RLHF models
  Researchers have identified a phenomenon called alignment collapse in iterative Reinforcement Learning from Human Feedback (RLHF). This occurs when the AI policy exploits weaknesses in the reward model it is trained on,…