PulseAugur

few-shot learning

PulseAugur coverage of few-shot learning — every cluster mentioning few-shot learning across labs, papers, and developer communities, ranked by signal.

Total · 30d: 0 (0 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 0 (0 over 90d)
TIER MIX · 90D

No coverage in the last 90 days.

SENTIMENT · 30D

2 days with sentiment data

RECENT · PAGE 1/1 · 10 TOTAL
  1. TOOL · CL_30752

    Many-shot CoT-ICL shows unstable scaling for reasoning tasks

    Researchers have investigated the effectiveness of many-shot chain-of-thought in-context learning (CoT-ICL) for reasoning tasks, finding that standard many-shot approaches do not directly translate. Their study revealed…

  2. RESEARCH · CL_20487

    New research explains how transformers perform in-context learning via gradient descent

    Two new arXiv papers explore the theoretical underpinnings of in-context learning (ICL) in transformers. One paper demonstrates how transformers can perform in-context logistic regression by implicitly executing normali… (An illustrative sketch of the gradient-descent view of ICL appears after this list.)

  3. TOOL · CL_20554

    LoRA emerges as a viable parametric knowledge memory for LLMs, complementing RAG and ICL

    A new paper explores the use of Low-Rank Adaptation (LoRA) as a method for continuously updating knowledge in large language models. The research empirically analyzes LoRA's capacity, composability, and optimization for… (A minimal sketch of the LoRA mechanism appears after this list.)

  4. TOOL · CL_26968

    Researchers compare in-context and agentic learning under constraints

    Researchers explored the differences between in-context learning and agentic learning, focusing on how adaptive queries impact performance under realizability constraints. They found that adaptivity generally does not h…

  5. RESEARCH · CL_15913

    Researchers explore weight decay, in-context learning, and acceleration for Transformer models

    Researchers have developed several new methods to improve the efficiency and theoretical understanding of Transformer models. One paper provides a functional-analytic characterization of weight decay, demonstrating its …

  6. RESEARCH · CL_14481

    Researchers analyze attention heads to understand in-context learning in LLMs

    Researchers have developed a new framework called Task Subspace Logit Attribution (TSLA) to analyze how large language models perform in-context learning. This framework identifies specific attention heads responsible f…

  7. RESEARCH · CL_10095

    Prompt engineering guide distills 58 techniques for life sciences

    A new guide distills 58 prompt engineering techniques into six core methods for life sciences researchers. It focuses on zero-shot, few-shot, thought generation, ensembling, self-criticism, and decomposition, providing … (A generic few-shot prompt sketch appears after this list.)

  8. RESEARCH · CL_06663

    LLMs show promise in scientific text categorization with prompt chaining

    Researchers have explored the use of Large Language Models (LLMs) for automatically categorizing scientific texts using prompt engineering techniques. Their study evaluated In-Context Learning (ICL) and Prompt Chaining … (A prompt-chaining sketch appears after this list.)

  9. RESEARCH · CL_06884

    Tabular foundation models enable real-time knowledge tracing with 53x speedup

    Researchers have introduced a new approach to knowledge tracing called "live knowledge tracing," which utilizes tabular foundation models (TFMs) for real-time adaptation. This method bypasses traditional offline trainin…

  10. RESEARCH · CL_06772

    Transformer research probes security flaws, training dynamics, and in-context learning limits

    Researchers have identified vulnerabilities in the shuffling defense mechanism used to secure Transformer models during inference, demonstrating an attack that can extract model weights by aligning permuted activations.…
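
ILLUSTRATIVE SKETCHES

The CL_20487 cluster above points to theory papers arguing that transformers perform in-context learning by implicitly running gradient descent. The snippet below is a minimal sketch of the best-known version of that correspondence, stated for linear regression rather than the logistic-regression setting those papers study: one gradient-descent step from a zero initialization gives the same prediction as an unnormalized linear-attention readout over the context examples. Variable names, dimensions, and the step size are illustrative.

import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 8                      # feature dimension, number of in-context examples
eta = 0.1                        # step size of the implicit gradient-descent step

w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))      # in-context inputs x_1..x_n
y = X @ w_true                   # in-context targets y_i = <w_true, x_i>
x_query = rng.normal(size=d)     # the query the model must answer

# (1) One explicit gradient-descent step on squared loss, starting from w0 = 0:
#     the gradient at 0 is -sum_i y_i x_i, so w1 = eta * sum_i y_i x_i.
w1 = eta * (y @ X)
pred_gd = w1 @ x_query

# (2) Unnormalized linear attention over the context: keys x_i, values y_i,
#     query x_query, output scaled by eta.
scores = X @ x_query             # <x_i, x_query> for every context example
pred_attention = eta * (scores @ y)

print(pred_gd, pred_attention)   # the two predictions coincide
assert np.allclose(pred_gd, pred_attention)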
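For the CL_20554 cluster on LoRA as a parametric knowledge store, the sketch below shows only the LoRA mechanism itself, assuming a PyTorch setup: a frozen base weight plus a trainable low-rank update B·A, so new knowledge can be written into a small number of parameters. The class name, rank, and scaling are illustrative and are not the paper's configuration.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank update (the LoRA mechanism)."""

    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)       # pretrained weight stays frozen
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))  # zero init: no change at start
        self.scaling = alpha / rank

    def forward(self, x):
        # y = x W^T + scaling * x A^T B^T; only A and B receive gradients,
        # so any newly written knowledge lives entirely in the low-rank factors.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(64, 64)
x = torch.randn(2, 64)
print(layer(x).shape)            # torch.Size([2, 64])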
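The CL_10095 guide lists zero-shot and few-shot prompting among its six core methods. Since few-shot learning is the entity this page tracks, here is a generic sketch of the few-shot pattern: a handful of labeled examples is prepended to the query so the model infers the task in context. The task, examples, and helper name are invented for illustration and are not taken from the guide.

from dataclasses import dataclass

@dataclass
class Example:
    text: str
    label: str

def build_few_shot_prompt(instruction, examples, query):
    # Prepend labeled demonstrations so the model infers the task in context.
    parts = [instruction.strip(), ""]
    for ex in examples:
        parts.append(f"Text: {ex.text}\nLabel: {ex.label}\n")
    parts.append(f"Text: {query}\nLabel:")
    return "\n".join(parts)

examples = [
    Example("The protein localizes to the mitochondria.", "cell biology"),
    Example("We report a phase II trial of the combination therapy.", "clinical research"),
    Example("Allele frequencies differed sharply across populations.", "genetics"),
]

prompt = build_few_shot_prompt(
    "Classify each sentence from a life-sciences abstract into a subfield.",
    examples,
    "Single-cell sequencing revealed a rare progenitor population.",
)
print(prompt)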
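For the CL_06663 cluster comparing in-context learning with prompt chaining, the sketch below shows the chaining pattern in its simplest form: the model's answer to one prompt becomes input to the next. call_llm is a placeholder rather than a real client API, and the prompts and category set are invented for illustration.

def call_llm(prompt):
    # Placeholder: swap in whatever LLM client or local model call you actually use.
    raise NotImplementedError("plug in your LLM client here")

def categorize_with_chain(abstract):
    # Step 1: pull out the key topics of the abstract.
    topics = call_llm(
        "List the three main topics of this scientific abstract, comma-separated:\n\n"
        + abstract
    )
    # Step 2: feed the step-1 output into a second prompt that picks one category.
    category = call_llm(
        "Topics: " + topics + "\n"
        "Assign exactly one category from: biology, chemistry, physics, computer science."
    )
    return category.strip()

# A single-prompt ICL baseline would collapse both steps into one prompt that
# contains a few labeled abstracts followed by the new abstract to classify.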