PulseAugur

Glue

PulseAugur coverage of Glue — every cluster mentioning Glue across labs, papers, and developer communities, ranked by signal.

Total · 30d: 11 (11 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 10 (10 over 90d)
TIER MIX · 90D (chart)
SENTIMENT · 30D (chart) · 1 day(s) with sentiment data

RECENT · PAGE 1/1 · 7 TOTAL
  1. TOOL · CL_25657

    New SWAP-Score metric evaluates neural networks without training

    Researchers have introduced SWAP-Score, a novel zero-shot metric designed to evaluate neural networks without requiring training. This method measures a network's expressivity using sample-wise activation patterns and d…
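The general idea behind activation-pattern-based zero-shot metrics can be shown with a toy sketch. This is not the SWAP-Score formula itself, and the network and minibatch below are made up: it counts how many distinct sample-wise ReLU on/off patterns a randomly initialized network produces on a batch, a rough proxy for expressivity.

```python
import numpy as np

rng = np.random.default_rng(0)

def activation_pattern_score(weights, X):
    # Count distinct sample-wise ReLU on/off patterns the random-weight
    # network produces on minibatch X. More distinct patterns loosely
    # indicates a more expressive network. (Toy proxy in the spirit of
    # zero-cost metrics; the actual SWAP-Score formulation differs.)
    h = X
    patterns = []
    for W in weights:
        h = h @ W
        mask = h > 0              # binary activation pattern per sample
        h = h * mask              # ReLU
        patterns.append(mask)
    codes = np.concatenate(patterns, axis=1)   # (batch, total_units) booleans
    return len({row.tobytes() for row in codes})

X = rng.standard_normal((32, 16))                 # hypothetical minibatch
net = [rng.standard_normal((16, 32)),             # hypothetical 2-layer MLP
       rng.standard_normal((32, 32))]
score = activation_pattern_score(net, X)
assert 1 <= score <= 32   # at most one distinct pattern per sample
```

No training step is involved: the score is computed from a single forward pass over random weights, which is what makes metrics of this family cheap to evaluate.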

  2. TOOL · CL_21937

    New AS-LoRA method improves privacy in federated learning

    Researchers have developed AS-LoRA, a novel framework for adaptive selection of LoRA components in privacy-preserving federated learning. This method addresses aggregation errors common in such setups by allowing each l…

  3. TOOL · CL_21302

    LoRA fine-tuning explained: Why low rank adapts LLMs effectively

    This article explains the intrinsic-low-rank hypothesis of fine-tuning large language models, detailing how techniques like LoRA adapt models without altering original weights. It clarifies that LoRA's expressive update…
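A minimal sketch of the adaptation scheme the article describes, assuming standard LoRA conventions (frozen weight W, trainable low-rank factors B and A, scaling alpha/r); all dimensions and values below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 512, 512, 8               # layer dims; adapter rank r << min(d, k)
W = rng.standard_normal((d, k))     # pretrained weight, kept frozen

# LoRA learns a low-rank update W + (alpha/r) * B @ A without touching W.
A = rng.standard_normal((r, k)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                     # trainable, zero init: update starts at 0
alpha = 16.0

def lora_forward(x):
    # x: (batch, k). Frozen path plus the low-rank adapted path.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((4, k))
y = lora_forward(x)
assert y.shape == (4, d)
# With B initialized to zero, output matches the frozen model exactly.
assert np.allclose(y, x @ W.T)
```

Only A and B (2 * r * 512 parameters each side here) would receive gradients, versus the 512 * 512 frozen W, which is the source of LoRA's memory savings.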

  4. TOOL · CL_20347

    AWS MCP service controls bypassed by Lambda and other downstream services

    AWS has introduced new IAM context keys, aws:ViaAWSMCPService and aws:CalledViaAWSMCP, to track traffic flowing through its managed MCP service. While these keys enhance security by preventing direct deletion of S3 obje…
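A guard of the kind described might look like the policy fragment below. This is a hypothetical sketch: the bucket name is made up, and the assumption that aws:ViaAWSMCPService is a boolean-valued key (checked with BoolIfExists so requests lacking the key are also denied) is not confirmed by the summary.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDeleteUnlessViaMCP",
      "Effect": "Deny",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "BoolIfExists": { "aws:ViaAWSMCPService": "false" }
      }
    }
  ]
}
```

The bypass the headline describes would arise when a downstream service such as Lambda makes the S3 call itself, so the request no longer carries the MCP context key.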

  5. RESEARCH · CL_10117

    AdaFRUGAL paper introduces dynamic controls for memory-efficient LLM training

    Researchers have developed AdaFRUGAL, a new framework designed to make training Large Language Models (LLMs) more memory-efficient. Unlike previous methods that required manual tuning of hyperparameters, AdaFRUGAL autom…

  6. RESEARCH · CL_06833

    New hardware design offers efficient Softmax and LayerNorm for edge AI

    Researchers have developed new hardware-efficient approximations for Softmax and Layer Normalization operations, crucial for Transformer models on edge devices. These methods ensure guaranteed normalization, which is vi…
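The "guaranteed normalization" property can be sketched as follows: approximate the exponential with a cheap base-2 piecewise-linear function (an assumption for illustration; the paper's exact approximation differs), then divide by the sum so the outputs always total exactly 1 regardless of approximation error.

```python
import numpy as np

def approx_exp2(x):
    # Piecewise-linear approximation of 2**x: 2**floor(x) is a shift in
    # hardware, and (1 + frac) is a single linear term. Hypothetical
    # stand-in for the paper's approximation.
    n = np.floor(x)
    f = x - n                       # fractional part in [0, 1)
    return (1.0 + f) * np.exp2(n)

def approx_softmax(logits):
    # Max-subtraction keeps exponents non-positive; the final division
    # guarantees the outputs sum to exactly 1 even though approx_exp2
    # is only approximate.
    z = (logits - logits.max(axis=-1, keepdims=True)) * np.log2(np.e)
    e = approx_exp2(z)
    return e / e.sum(axis=-1, keepdims=True)

p = approx_softmax(np.array([2.0, 1.0, 0.1]))
assert np.isclose(p.sum(), 1.0)   # normalization holds by construction
```

Because the approximation is monotonic, the ordering of the outputs matches exact softmax even where the individual values drift, which is often what edge inference actually needs.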

  7. RESEARCH · CL_05149

    LoRA fine-tuning research suggests rank 1 is sufficient, proposes data-aware initialization

    Three new research papers explore methods to optimize LoRA fine-tuning for large language models. One paper proposes reducing the LoRA rank threshold to 1 for binary classification tasks, showing competitive performance…