PulseAugur
Edge AI research uses knowledge distillation for robust automotive VRU detection

Researchers have developed a knowledge distillation framework to improve object detection on edge hardware for automotive safety. The method trains a smaller YOLOv8-S student model to replicate the behavior of a larger YOLOv8-L teacher, achieving 3.9x compression. The distilled model is markedly robust to INT8 quantization, outperforming the original larger model under the same constraints and reducing false alarms by 44%. This matters for deploying accurate, safety-critical detection systems on resource-limited edge devices.
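The summary does not give the paper's exact training objective, but classification-head knowledge distillation is conventionally done in the style of Hinton et al.: the student is trained to match the teacher's temperature-softened output distribution. A minimal sketch (the temperature value and logits here are illustrative, not from the paper):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-softened softmax: higher T exposes the teacher's
    # "dark knowledge" (relative probabilities of non-target classes).
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradient magnitudes stay comparable across T.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# A student that matches the teacher exactly incurs zero loss.
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))  # → 0.0
```

In practice this term is combined with the standard detection loss on ground-truth labels; mimicking the teacher's smoothed distribution rather than hard labels is also a plausible reason the student degrades less under INT8 quantization.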

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Enables more accurate and robust AI safety systems on resource-constrained automotive edge devices.

RANK_REASON Academic paper detailing a new method for model compression and quantization robustness.

Read on arXiv cs.CV →

COVERAGE [2]

  1. arXiv cs.CV TIER_1 · Akshay Karjol, Darrin M. Hanna

    Edge AI for Automotive Vulnerable Road User Safety: Deployable Detection via Knowledge Distillation

    arXiv:2604.26857v1 Announce Type: new Abstract: Deploying accurate object detection for Vulnerable Road User (VRU) safety on edge hardware requires balancing model capacity against computational constraints. Large models achieve high accuracy but fail under INT8 quantization requ…

  2. arXiv cs.CV TIER_1 · Darrin M. Hanna

    Edge AI for Automotive Vulnerable Road User Safety: Deployable Detection via Knowledge Distillation

    Deploying accurate object detection for Vulnerable Road User (VRU) safety on edge hardware requires balancing model capacity against computational constraints. Large models achieve high accuracy but fail under INT8 quantization required for edge deployment, while small models sac…