PulseAugur
LoRA

PulseAugur coverage of LoRA — every cluster mentioning LoRA across labs, papers, and developer communities, ranked by signal.

Total · 30d: 161 (161 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 120 (120 over 90d)
TIMELINE
  1. 2026-05-12 · Research milestone: A paper is published detailing findings on parameter placement in LoRA for fine-tuning.
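The placement question — which of a model's weight matrices receive adapters — can be illustrated with a toy sketch. This is not the paper's experiment; the module names, shapes, and helper below are invented for illustration, following the standard LoRA recipe of zero-initializing one factor so outputs are unchanged at the start of training.

```python
import numpy as np

def add_lora(layers, targets, r=4, alpha=8.0, seed=0):
    """Attach LoRA factors only to the named weight matrices.
    `layers` maps module names to frozen (d_out, d_in) weights;
    `targets` picks the placement (e.g. attention projections only)."""
    rng = np.random.default_rng(seed)
    adapters = {}
    for name in targets:
        d_out, d_in = layers[name].shape
        A = rng.standard_normal((r, d_in)) * 0.01
        B = np.zeros((d_out, r))  # zero-init: the update B @ A starts at 0
        adapters[name] = (A, B, alpha, r)
    return adapters

rng = np.random.default_rng(2)
layers = {"attn.q": rng.standard_normal((8, 8)),
          "attn.v": rng.standard_normal((8, 8)),
          "mlp.up": rng.standard_normal((16, 8))}

# Place adapters on the attention projections only; the MLP stays frozen.
adapters = add_lora(layers, targets=["attn.q", "attn.v"])
trainable = sum(a.size + b.size for a, b, _, _ in adapters.values())
```

Placement directly controls the trainable budget: here only the two 8×8 attention projections gain rank-4 factors, for 128 trainable parameters in total.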
SENTIMENT · 30D

4 days with sentiment data

RECENT · PAGE 3/6 · 102 TOTAL
  1. RESEARCH · CL_18689 ·

    MILE framework offers parameter-efficient continual semantic segmentation

    Researchers have introduced MILE, a novel framework for continual semantic segmentation that efficiently adapts to new domains and modalities without forgetting previous tasks. MILE utilizes Low-Rank Adaptation (LoRA) t…

  2. RESEARCH · CL_15928 ·

    Flexi-LoRA adapts fine-tuning ranks for speech and reasoning tasks

    Researchers have introduced Flexi-LoRA, a new framework designed to enhance parameter-efficient fine-tuning for large language models. This method dynamically adjusts the LoRA ranks based on the complexity of the input …
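As a rough illustration of the idea (not Flexi-LoRA's actual method), a LoRA update replaces a frozen weight W with W + (α/r)·B·A; an input-adaptive variant can truncate the factors to a smaller effective rank per input. The function name and the way `r_eff` is chosen below are assumptions made for this sketch.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r_eff):
    """Forward pass through a frozen weight W plus a LoRA update,
    truncated to an input-dependent effective rank r_eff.
    Shapes: W (d_out, d_in), A (r_max, d_in), B (d_out, r_max)."""
    # Keep only the first r_eff rank-1 components of the adapter.
    A_r, B_r = A[:r_eff], B[:, :r_eff]
    delta = (alpha / max(r_eff, 1)) * (B_r @ A_r)  # low-rank weight update
    return x @ (W + delta).T

rng = np.random.default_rng(0)
d_in, d_out, r_max = 8, 4, 4
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r_max, d_in)) * 0.01
B = np.zeros((d_out, r_max))  # standard LoRA init: B = 0, so delta starts at 0
x = rng.standard_normal((2, d_in))

# With B still zero-initialized, any effective rank reproduces the frozen output.
y_frozen = x @ W.T
y_r2 = lora_forward(x, W, A, B, alpha=8.0, r_eff=2)
```

A smaller `r_eff` means fewer active adapter parameters for that input, which is the efficiency lever a dynamic-rank scheme would tune.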

  3. TOOL · CL_15761 ·

    LinMU achieves linear complexity for multimodal understanding models

    Researchers have developed LinMU, a novel Vision-Language Model (VLM) architecture that achieves linear complexity, overcoming the quadratic complexity limitations of current models. This new design utilizes an M-MATE b…

  4. RESEARCH · CL_15809 ·

    New methods tackle continual learning in LLMs by separating task-specific and shared knowledge

    Two new research papers propose novel methods for continual learning in large language models, addressing the challenge of acquiring new knowledge without forgetting previous information. The first paper, "Split-on-Shar…

  5. RESEARCH · CL_15908 ·

    Teams leverage LLMs and ensemble methods for multilingual online polarization detection at SemEval-2026

    Researchers have developed systems for SemEval-2026 Task 9, a multilingual polarization detection challenge across 22 languages. One approach fine-tuned Gemma 3 models using Low-Rank Adaptation (LoRA) and augmented data…

  6. RESEARCH · CL_15929 ·

    New methods like SMF and SAM reduce catastrophic forgetting in LLMs

    Two new research papers explore methods to mitigate catastrophic forgetting in language models during fine-tuning. One paper introduces Sparse Memory Finetuning (SMF), which adds memory layers and updates only heavily a…

  7. TOOL · CL_15985 ·

    Researchers explore growing Transformers with modular composition and layer-wise expansion

    Researchers have explored a method for training Transformer models by incrementally adding new layers to a frozen base, maintaining a constant budget for trainable parameters. This approach, termed 'Growing Transformers…

  8. TOOL · CL_16166 ·

    SCALE-LoRA framework audits and composes Low-Rank Adaptation adapters for reliable AI outputs

    Researchers have developed SCALE-LoRA, a framework designed to improve the reuse of Low-Rank Adaptation (LoRA) adapters from open pools for new tasks. This system addresses challenges in adapter compatibility and output…
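The summary does not detail SCALE-LoRA's audit or composition procedure; as a generic sketch of the underlying operation, composing adapters from a pool amounts to summing their scaled low-rank deltas into the base weight. The helper name and the fixed mixing weights below are illustrative assumptions, not the framework's API.

```python
import numpy as np

def merge_adapters(W, adapters, weights):
    """Merge a pool of LoRA adapters into a frozen base weight W.
    Each adapter is (A, B, alpha, r) with A (r, d_in) and B (d_out, r);
    `weights` are per-adapter mixing coefficients (assumed fixed here --
    a real composition framework would choose them per task)."""
    merged = W.copy()
    for (A, B, alpha, r), w in zip(adapters, weights):
        merged += w * (alpha / r) * (B @ A)  # add the scaled low-rank delta
    return merged

rng = np.random.default_rng(1)
d_in, d_out, r = 6, 3, 2
W = rng.standard_normal((d_out, d_in))
pool = [(rng.standard_normal((r, d_in)),
         rng.standard_normal((d_out, r)), 4.0, r) for _ in range(3)]

# Zero weights recover the base model exactly; nonzero weights blend adapters.
W_base = merge_adapters(W, pool, [0.0, 0.0, 0.0])
W_mix = merge_adapters(W, pool, [0.5, 0.3, 0.2])
```

Because each delta is rank-r, the merged weight never grows: composition is a cheap in-place sum, which is what makes reusing adapters from open pools attractive in the first place.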

  9. RESEARCH · CL_16203 ·

    Researchers distill DeepSeek-R1 reasoning into compact models for code clone detection

    Researchers have developed a knowledge distillation framework to improve the reliability and practicality of compact open-source models for cross-language code clone detection. This method transfers reasoning capabiliti…

  10. RESEARCH · CL_18705 ·

    Ortho-Hydra paper introduces new method to improve LoRA fine-tuning for diffusion transformers

    Researchers have introduced Ortho-Hydra, a novel re-parameterization technique designed to improve LoRA fine-tuning for diffusion transformers (DiT) on multi-style data. This method addresses the issue of 'style bleed' …

  11. RESEARCH · CL_18270 ·

    New OCRR benchmark measures AI model recovery from distribution shift via corrections

    Researchers have introduced OCRR, a new benchmark designed to evaluate how well machine learning systems can recover from distribution shifts using online corrections. Unlike static benchmarks, OCRR simulates real-world…

  12. RESEARCH · CL_18277 ·

    AI flywheel boosts Indic ASR accuracy by 17x for niche entities

    Researchers have developed a novel Text-to-Speech (TTS) and Speech-to-Text (STT) system, dubbed the "TTS-STT Flywheel," to improve Automatic Speech Recognition (ASR) for niche domains in Indic languages. This system syn…

  13. RESEARCH · CL_14473 ·

    Audio-language models struggle with dysarthric speech context, but fine-tuning shows promise

    Researchers have developed a benchmark to test if current audio-language models can effectively use additional clinical context to improve automatic speech recognition for dysarthric speech. Initial findings indicate th…

  14. TOOL · CL_24183 ·

    Flexi-LoRA adapts LLM parameters dynamically for efficient fine-tuning

    Researchers have developed Flexi-LoRA, a new method for fine-tuning large language models that dynamically adjusts the model's parameters based on input complexity. This approach allows for more efficient adaptation, pa…

  15. RESEARCH · CL_13427 ·

    DeepSeek's V4 model omits Engram memory module, sparking debate and new research

    DeepSeek's latest model, V4, notably omits Engram, a novel memory and efficiency module co-developed with Peking University. Engram, designed to augment Transformers by enabling direct knowledge lookups instead of recal…

  16. RESEARCH · CL_13350 ·

    Flux.2 Klein LoRA models enable new 'scribbly doodle' AI art styles

    New LoRA models, Flux.2 Klein 9B and 4B, are enabling artists to generate "scribbly doodle" AI art with fine-tuned control over sketch aesthetics. Trained on numerous hand-drawn examples, these models are poised to tran…

  17. RESEARCH · CL_13015 ·

    Phosphene AI video tool adds LoRA support, runs on Macs with 16GB RAM

    The open-source AI video generation tool Phosphene has rapidly updated with LoRA support and CivitAI integration, allowing users to apply custom LoRA models like Retro anime LoRA. Additionally, tips have emerged for run…

  18. TOOL · CL_12331 ·

    FLUX.2 LoRA generates 2,880 daily AI animals using Soviet matchbox art

    A creative AI project utilizes a FLUX.2 LoRA model, trained on scans of Soviet matchbox labels, to generate approximately 2,880 unique animal images daily. This system operates continuously on vast.ai, exploring the int…

  19. RESEARCH · CL_14059 ·

    New research explores 3D consistency, LoRA transferability, and unified frameworks for video diffusion models

    Researchers have developed new methods to improve video generation using diffusion models. One approach, Geometry Forcing, integrates 3D representations with video diffusion models to enhance geometric consistency and v…

  20. RESEARCH · CL_14127 ·

    RadLite fine-tunes small LLMs for CPU-deployable radiology AI

    Researchers have developed RadLite, a method for fine-tuning small language models (SLMs) with 3-4 billion parameters for radiology tasks. This approach, utilizing LoRA fine-tuning on models like Qwen2.5-3B-Instruct and…