PulseAugur
ENTITY Lora

Lora

PulseAugur coverage of Lora — every cluster mentioning Lora across labs, papers, and developer communities, ranked by signal.

Total · 30d: 159 (159 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 118 (118 over 90d)
TIER MIX · 90D
RELATIONSHIPS
TIMELINE
  1. 2026-05-12 · research_milestone · A paper is published detailing findings on parameter placement in LoRA for fine-tuning.
SENTIMENT · 30D

4 days with sentiment data

RECENT · PAGE 2/6 · 102 TOTAL
  1. TOOL · CL_21942 ·

    HCInfer system enables LLMs on resource-constrained devices with error compensation

    Researchers have developed HCInfer, a novel inference system designed to enable large language models (LLMs) to run efficiently on devices with limited memory. This system offloads parts of the model's compensation mech…

  2. TOOL · CL_21937 ·

    New AS-LoRA method improves privacy in federated learning

    Researchers have developed AS-LoRA, a novel framework for adaptive selection of LoRA components in privacy-preserving federated learning. This method addresses aggregation errors common in such setups by allowing each l…
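
    The "aggregation errors" the summary alludes to are a known issue when LoRA factors are averaged separately across clients; the relation below is a general observation about federated LoRA, not a claim about AS-LoRA's specific mechanism.

```latex
% Averaging the per-client low-rank factors A_k, B_k is not the same as
% averaging the per-client updates B_k A_k, so naive FedAvg on the factors
% introduces an aggregation error:
\frac{1}{K}\sum_{k=1}^{K} B_k A_k \;\neq\;
\Big(\frac{1}{K}\sum_{k=1}^{K} B_k\Big)\Big(\frac{1}{K}\sum_{k=1}^{K} A_k\Big)
```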

  3. TOOL · CL_21930 ·

    New DLoR framework proves universal approximation with sparse diagonal components

    Researchers have introduced a new framework called Structural Correspondence for neural networks that use parameter-efficient low-rank structures. This framework demonstrates that augmenting low-rank layers with a minim…
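
    As a rough illustration of the structure named in the title, the sketch below augments a rank-r update with a trainable diagonal term; the class name, shapes, and initialisations are assumptions, not details from the paper.

```python
# Illustrative sketch only: a square low-rank update augmented with a trainable
# diagonal, mirroring the "low-rank + sparse diagonal" structure in the title.
# Names, shapes, and initialisations are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class LowRankPlusDiagonal(nn.Module):
    def __init__(self, dim: int, r: int = 4):
        super().__init__()
        self.A = nn.Parameter(torch.randn(r, dim) * 0.01)  # low-rank factor A
        self.B = nn.Parameter(torch.zeros(dim, r))          # low-rank factor B (zero init)
        self.d = nn.Parameter(torch.zeros(dim))             # diagonal component

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # delta(x) = B A x + diag(d) x: the diagonal adds per-coordinate scaling
        # at only O(dim) extra parameters on top of the rank-r term.
        return x @ self.A.T @ self.B.T + x * self.d
```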

  4. TOOL · CL_21435 ·

    DPO vs SimPO: Preference tuning methods compared for LLM training

    A recent analysis highlights a critical discrepancy in preference tuning methodologies for large language models, specifically comparing Direct Preference Optimization (DPO) and Simplified Preference Optimization (SimPO…
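
    For reference, the standard published forms of the two objectives being compared are shown below (y_w and y_l denote the preferred and rejected responses); the specific variants the analysis examines may differ.

```latex
% DPO: the implicit reward is a log-ratio against a frozen reference policy
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\!\left(
  \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
  - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right)

% SimPO: reference-free, length-normalised log-likelihood with target margin gamma
\mathcal{L}_{\mathrm{SimPO}} = -\log \sigma\!\left(
  \frac{\beta}{|y_w|} \log \pi_\theta(y_w \mid x)
  - \frac{\beta}{|y_l|} \log \pi_\theta(y_l \mid x) - \gamma \right)
```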

  5. TOOL · CL_21301 ·

    LoRA fine-tuning: Style learning or pattern memorization?

    A recent analysis explores whether fine-tuning a LoRA adapter on a specific writing style, like "Tenacious-style" sales emails, results in genuine style imitation or mere memorization of augmented patterns. The study fo…

  6. TOOL · CL_21302 ·

    LoRA fine-tuning explained: Why low rank adapts LLMs effectively

    This article explains the intrinsic low-rank hypothesis behind fine-tuning large language models, detailing how techniques like LoRA adapt models without altering the original weights. It clarifies that LoRA's expressive update…
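
    A minimal sketch of the update mechanism the article describes: the pretrained weight stays frozen and a rank-r product is added to its output. The rank, scaling, and initialisation below are common defaults, not values from the article.

```python
# Minimal LoRA-adapted linear layer: W0 is never modified; a rank-r product
# B A is added on top. Hyperparameters here are common defaults (assumptions).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                        # original weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at step 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # h = W0 x + (alpha / r) * B A x
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

    Because lora_B starts at zero, the adapted model initially behaves exactly like the base model, and only the r·(d_in + d_out) adapter parameters receive gradients.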

  7. RESEARCH · CL_22053 ·

    DomLoRA method places single adapter at dominant module for efficient fine-tuning

    Researchers have developed a new method called DomLoRA for parameter-efficient fine-tuning of large language models. This technique identifies a single "dominant adaptation module" within a model where placing a low-ran…
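
    The truncated summary does not say how DomLoRA identifies the dominant module; the sketch below uses gradient-norm scoring on a calibration loss purely as a hypothetical stand-in for that selection step.

```python
# Hypothetical sketch of single-module adapter placement. The scoring rule
# below (weight-gradient norm on a calibration loss) is a stand-in assumption,
# not DomLoRA's actual criterion.
import torch
import torch.nn as nn

def pick_dominant_linear(model: nn.Module, calibration_loss: torch.Tensor) -> str:
    """Return the name of the nn.Linear whose weight gradient is largest."""
    calibration_loss.backward()
    scores = {
        name: module.weight.grad.norm().item()
        for name, module in model.named_modules()
        if isinstance(module, nn.Linear) and module.weight.grad is not None
    }
    return max(scores, key=scores.get)
```

    A single low-rank adapter would then be attached only to the returned module (for example via PEFT with target_modules set to that name).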

  8. TOOL · CL_21128 ·

    LoRA fine-tuning unexpectedly alters model behavior, not just specific word avoidance

    Researchers explored how LoRA adapters influence large language models, discovering that while they can alter specific behaviors like text length, they struggle to enforce negative constraints such as avoiding certain w…

  9. TOOL · CL_20554 ·

    LoRA emerges as a viable parametric knowledge memory for LLMs, complementing RAG and ICL

    A new paper explores the use of Low-Rank Adaptation (LoRA) as a method for continuously updating knowledge in large language models. The research empirically analyzes LoRA's capacity, composability, and optimization for…

  10. TOOL · CL_20767 ·

    LEGO framework uses LoRA to detect synthetic images with greater accuracy

    Researchers have developed LEGO, a novel framework designed to detect synthetic images by focusing on generator-specific artifacts. This approach utilizes Low-Rank Adaptation (LoRA) modules, each trained to identify uni…

  11. RESEARCH · CL_20602 ·

    New benchmark study explores neural network performance on Tajik POS tagging

    This paper introduces the first benchmark for part-of-speech tagging in the Tajik language, evaluating various neural network architectures. The study utilized the TajPersParallel corpus, focusing on context-independent…

  12. TOOL · CL_20563 ·

    Sub-token routing enhances transformer efficiency and KV compression in new research

    Researchers have introduced sub-token routing as a novel method for enhancing transformer efficiency, offering a more granular compression approach than existing techniques. This method focuses on routing within a token…

  13. RESEARCH · CL_20414 ·

    Budgeted LoRA framework optimizes LLM inference efficiency via structured compute allocation

    Researchers have introduced Budgeted LoRA, a novel distillation framework designed to create more efficient large language models for inference. This method frames model compression as a structured compute allocation pr…

  14. RESEARCH · CL_20291 ·

    LoRA efficiently adapts geospatial models for wildfire mapping with Sentinel-2 data

    Researchers have evaluated three Geospatial Foundation Models (GFMs) – Terramind, DINOv3, and Prithvi-v2 – for wildfire mapping using Sentinel-2 satellite data. The study found that Low-Rank Adaptation (LoRA) was the mo…

  15. RESEARCH · CL_20296 ·

    LLMs accelerate neural architecture search with novel delta-based code generation

    Researchers are exploring novel methods for Neural Architecture Search (NAS) using Large Language Models (LLMs). One approach, SPARK, aims to improve LLM knowledge integration by explicitly selecting functional factors …

  16. TOOL · CL_19445 ·

    AI agents secure payments with new crypto-signing protocol over radio

    Raza Sharif, CEO/Founder of Agentsign.dev, has developed MCPS (Model Context Protocol Security) to address critical security vulnerabilities in the widely used MCP standard for AI agents. MCPS introduces cryptographic s…

  17. TOOL · CL_18589 ·

    CellxPert integrates multi-omics data for advanced single-cell analysis and perturbation prediction

    Researchers have developed CellxPert, a novel multimodal foundation model designed to unify and analyze single-cell and spatial multi-omics data. This model integrates various data types including transcriptomic, chroma…

  18. RESEARCH · CL_18667 ·

    RD-ViT cuts data needs for segmentation, outperforming standard ViT with fewer parameters

    Researchers have developed RD-ViT, a novel Recurrent-Depth Vision Transformer designed for semantic segmentation tasks. This architecture significantly reduces data dependence by using a single, shared transformer block…
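
    A minimal sketch of the weight sharing the summary describes, one transformer block applied repeatedly instead of a stack of distinct blocks; the dimensions and step count are placeholder assumptions.

```python
# Minimal sketch of recurrent depth via a single shared transformer block.
# Patch embedding, heads, and all hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn

class RecurrentDepthEncoder(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8, steps: int = 12):
        super().__init__()
        # One block, applied `steps` times, instead of `steps` distinct blocks.
        self.block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.steps = steps

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        for _ in range(self.steps):
            tokens = self.block(tokens)  # same weights reused at every depth step
        return tokens
```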

  19. RESEARCH · CL_18344 ·

    LLMs fine-tuned to predict neural network performance from code

    Researchers have developed a method to fine-tune Large Language Models (LLMs) for predicting neural network performance on image classification tasks. By analyzing neural network architecture code, an LLM can determine …

  20. TOOL · CL_16554 ·

    Top Open-Source Libraries Enable Local LLM Fine-Tuning in 2026

    A recent analysis highlights the top open-source libraries for locally fine-tuning large language models in 2026. These tools, including LoRA, QLoRA, Hugging Face Transformers, and Unsloth, aim to reduce hardware requir…
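
    A minimal sketch of what a local LoRA fine-tuning setup with Hugging Face Transformers and PEFT typically looks like; the model name and hyperparameters below are placeholder assumptions, not recommendations from the analysis.

```python
# Minimal local LoRA fine-tuning setup with Transformers + PEFT.
# Model name, rank, and target modules are placeholders (assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "your-org/your-base-model"          # placeholder; any causal LM works
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

config = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()         # only the low-rank adapters are trainable
```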