PulseAugur

LoRA

PulseAugur coverage of LoRA: every cluster mentioning LoRA across labs, papers, and developer communities, ranked by signal.

Total · 30d: 159 (159 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 118 (118 over 90d)
TIMELINE
  1. 2026-05-12 · research_milestone · A paper is published detailing findings on parameter placement in LoRA fine-tuning.
SENTIMENT · 30D

4 days with sentiment data

RECENT · PAGE 1/6 · 102 TOTAL
  1. TOOL · CL_29395 ·

    LoRA parameter placement impacts GRPO fine-tuning, not SFT

    Researchers have investigated the parameter placement problem within Low-Rank Adaptation (LoRA) for fine-tuning large language models. Their study reveals that for Supervised Fine-Tuning (SFT), the specific placement of…

  2. TOOL · CL_28343 ·

    New AdaPaD method improves PEFT efficiency for large language models

    Researchers have introduced AdaPaD, a novel method for efficiently fine-tuning large language models using Parameter-Efficient Fine-Tuning (PEFT). AdaPaD trains all rank-1 components simultaneously, with each component …

  3. TOOL · CL_28266 ·

    Fashion Florence model extracts structured clothing attributes

    Researchers have developed Fashion Florence, a vision-language model based on Florence-2, specifically fine-tuned for extracting structured fashion attributes from images. This model can generate a JSON object detailing…

  4. TOOL · CL_27578 ·

    EvoPref algorithm enhances LLM alignment with evolutionary optimization

    Researchers have developed EvoPref, a novel multi-objective evolutionary algorithm designed to improve the alignment of large language models (LLMs). Unlike traditional gradient-based methods that can lead to preference…

  5. TOOL · CL_25332 ·

    LoRA fine-tuning reduces LLM parameter updates

    Low-Rank Adaptation (LoRA) is a technique for efficiently fine-tuning large language models. Instead of modifying all model weights, LoRA freezes the original weights and introduces small, trainable matrices to learn ad…

  6. TOOL · CL_24634 ·

    LoRA fine-tuning explained with matrix-level detail

    This article provides a detailed, number-by-number explanation of how LoRA (Low-Rank Adaptation) works for fine-tuning large language models. It aims to go beyond simply stating what LoRA achieves and instead illustrate…

  7. TOOL · CL_24209 ·

    LoRA Explained: Mathematical Intuition Behind Low-Rank Adaptation

    This article delves into the mathematical underpinnings of Low-Rank Adaptation (LoRA), a technique used for efficient fine-tuning of large language models. It explains how LoRA leverages the concept of low intrinsic dim…

  8. RESEARCH · CL_23623 ·

    AI artist masters Gérôme's fini surface technique with advanced LoRA training

    An AI artist has developed a LoRA model capable of replicating Jean-Léon Gérôme's signature "fini surface" technique. This involved three iterative training rounds to blend academic painting precision with machine learn…

  9. RESEARCH · CL_23570 ·

    Paired bootstrapping is key for AI model evaluation, article explains

    A technical analysis explains the statistical necessity of paired bootstrapping in evaluating AI model performance, particularly when comparing a baseline system against a trained LoRA model. The author demonstrates tha…

  10. TOOL · CL_25549 ·

    MatryoshkaLoRA enhances LLM fine-tuning with hierarchical low-rank representations

    Researchers have introduced MatryoshkaLoRA, a novel framework for fine-tuning large language models that improves efficiency and performance. This method uses a hierarchical approach to low-rank representations, inserti…

  11. TOOL · CL_25611 ·

    New Bayesian fine-tuning method enhances model uncertainty quantification

    Researchers have developed a new framework for parameter-efficient Bayesian fine-tuning of large models. This method quantifies uncertainty effectively within very low-dimensional parameter spaces, addressing limitation…

  12. TOOL · CL_22630 ·

    Clinical AI fine-tuned on AMD hardware, bypassing CUDA dependency

    A project has successfully fine-tuned a clinical AI model, MedQA, using AMD hardware and ROCm, demonstrating that advanced AI development is possible without NVIDIA's CUDA. The fine-tuning process utilized the Qwen3-1.7…

  13. TOOL · CL_25604 ·

    LoRA rank allocation fails in RL fine-tuning, study finds

    A new study on the Qwen 2.5 1.5B model reveals that adaptive rank allocation techniques, effective in supervised fine-tuning, do not translate to reinforcement learning with Group Relative Policy Optimization (GRPO). Re…

  14. TOOL · CL_25606 ·

    New Diff-SAE method excels at detecting language model backdoors

    Researchers have developed a new method using Sparse Autoencoders (SAEs) to detect backdoor attacks in language models. Their Differential SAE (Diff-SAE) architecture proved significantly more effective than Crosscoders…

  15. TOOL · CL_25609 ·

    New defense framework tackles multilingual prompt injection attacks

    Researchers have developed MIPIAD, a defense framework to combat indirect prompt injection attacks in multilingual large language model systems. The framework combines a Qwen2.5-1.5B model fine-tuned with LoRA, TF-IDF l…

  16. RESEARCH · CL_22001 ·

    PACZero enables PAC-private fine-tuning of language models with usable utility

    Researchers have developed PACZero, a novel method for fine-tuning large language models that offers strong privacy guarantees. This approach utilizes sign quantization of gradients to achieve a privacy regime where mem…

  17. RESEARCH · CL_22549 ·

    Fine-tuned small language models outperform LLMs in Windows event log analysis

    A new paper explores the use of small language models (SLMs) for analyzing Windows event logs, offering a more resource-efficient alternative to large language models (LLMs). Researchers developed a synthetic dataset wi…

  18. TOOL · CL_22462 ·

    Transformer memory geometry explains confident hallucinations in LLMs

    Researchers have developed a new geometric framework to understand two failure modes in language models: conflict and hallucination. They propose that learned facts form attractor basins in the model's hidden-state spac…

  19. RESEARCH · CL_22113 ·

    New research links optimizer choice to reduced forgetting in LLM finetuning

    Researchers have explored the impact of optimizer consistency during the fine-tuning of large language models. One study suggests that using the same optimizer for both pre-training and fine-tuning leads to less knowled…

  20. TOOL · CL_21959 ·

    New adapter TFM-Retouche improves tabular foundation models without fine-tuning

    Researchers have developed TFM-Retouche, a novel adapter designed to enhance tabular foundation models (TFMs) without requiring computationally expensive full fine-tuning. This lightweight, architecture-agnostic adapter…
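Entry 5 above describes the core LoRA mechanism: the pretrained weights stay frozen while two small trainable matrices learn the update. A minimal sketch of that idea, using NumPy and illustrative dimensions (768-dim layers, rank 8 — not taken from any of the papers listed):

```python
import numpy as np

# Minimal sketch of the LoRA idea from entry 5: the frozen weight W stays
# fixed, and the learned update is the product of two small matrices B @ A.
# Dimensions and rank here are illustrative, not from any listed paper.
d_out, d_in, rank = 768, 768, 8

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, zero-init

def forward(x):
    # Frozen path plus the low-rank adaptation path.
    return W @ x + B @ (A @ x)

x = rng.normal(size=(d_in,))
# With B initialized to zero, the adapted layer starts out identical
# to the frozen one, so fine-tuning begins from the pretrained behavior.
assert np.allclose(forward(x), W @ x)

# Trainable parameters shrink from d_out * d_in to rank * (d_out + d_in).
full, lora = d_out * d_in, rank * (d_out + d_in)
print(full, lora)  # 589824 vs 12288
```

The zero initialization of `B` is what makes the adapted model exactly match the base model at step zero; only `A` and `B` receive gradients during fine-tuning.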
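Entry 9 above concerns paired bootstrapping for comparing a baseline against a LoRA model. A hedged sketch of the standard technique: resample evaluation-set indices jointly for both systems so that per-example correlation is preserved, then read a confidence interval off the resampled score differences. The score arrays below are synthetic illustrations, not data from the article.

```python
import numpy as np

# Sketch of paired bootstrapping (entry 9): per-example scores for two
# systems are resampled with the SAME indices, so the confidence interval
# reflects the score difference rather than each system's variance alone.
rng = np.random.default_rng(42)
n = 200
# Synthetic per-example accuracies: a baseline, and a correlated
# "LoRA" system that gains on a fraction of examples.
baseline = rng.binomial(1, 0.70, size=n).astype(float)
lora = np.clip(baseline + rng.binomial(1, 0.10, size=n), 0, 1)

def paired_bootstrap_ci(a, b, n_resamples=5000, alpha=0.05):
    # One row of indices per resample, shared by both systems.
    idx = rng.integers(0, len(a), size=(n_resamples, len(a)))
    diffs = (b[idx] - a[idx]).mean(axis=1)  # per-resample score delta
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return lo, hi

lo, hi = paired_bootstrap_ci(baseline, lora)
print(f"95% CI for (LoRA - baseline) accuracy delta: [{lo:.3f}, {hi:.3f}]")
```

Resampling the two systems independently would inflate the interval with variance the pairing cancels out, which is the statistical point the article makes.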