PulseAugur

nuscenes-devkit

PulseAugur coverage of nuscenes-devkit — every cluster mentioning nuscenes-devkit across labs, papers, and developer communities, ranked by signal.

Total · 30d: 0 · 0 over 90d
Releases · 30d: 0 · 0 over 90d
Papers · 30d: 0 · 0 over 90d
TIER MIX · 90D

No coverage in the last 90 days.

RELATIONSHIPS
SENTIMENT · 30D

2 days with sentiment data

RECENT · PAGE 1/1 · 19 TOTAL
  1. TOOL · CL_29265

    New method improves HD map construction with cross-view supervision

    Researchers have developed a new method called Cross-View Supervision (CVS) to improve the construction of high-definition maps using bird's-eye-view (BEV) representations from multiple cameras. Traditional methods stru…

  2. RESEARCH · CL_29310

    Random-Set GNNs enhance uncertainty quantification in graph learning

    Researchers have introduced Random-Set Graph Neural Networks (RS-GNNs) to address uncertainty quantification in graph learning. This new framework models node-level epistemic uncertainty using a belief function formalis…

  3. TOOL · CL_28029

    Driving models' performance hinges on temporal sampling frequency

    Researchers have investigated the impact of temporal sampling frequency on end-to-end autonomous driving trajectory prediction models. They found that while dense frame sampling is often assumed to improve performance, …

  4. RESEARCH · CL_27517

    Autonomous driving research tackles adaptive perception and novel adversarial attacks

    Researchers have developed an adaptive perception system for autonomous driving that dynamically adjusts its computational resources based on scene complexity, significantly reducing latency without sacrificing accuracy…

  5. TOOL · CL_20771

    New neuro-symbolic architecture improves autonomous driving scene understanding

    Researchers have developed InfoCoordiBridge, a novel neuro-symbolic architecture designed to enhance the reliability of scene understanding in autonomous driving systems. This architecture addresses issues where languag…

  6. TOOL · CL_18725

    BEVCALIB model uses bird's-eye view features for LiDAR-camera calibration

    Researchers have developed BEVCALIB, a novel method for calibrating LiDAR and camera sensors, crucial for autonomous driving systems. This approach utilizes bird's-eye view (BEV) features extracted from both sensor type…

  7. TOOL · CL_15627

    LiDAR-only HD map construction method enhances semantic cues via knowledge distillation

    Researchers have developed LIE, a novel method for constructing High-Definition (HD) maps for autonomous driving using only LiDAR data. This approach overcomes the limitations of camera-based methods by leveraging knowl…

  8. TOOL · CL_15677

    SimPB++ model unifies 2D and 3D object detection for autonomous driving

    Researchers have developed SimPB++, an end-to-end model designed to simultaneously detect both 2D objects in perspective views and 3D objects in a bird's-eye view for multi-camera autonomous driving systems. The model e…

  9. TOOL · CL_15751

    MapRF uses NeRF-guided self-training for weakly supervised HD map construction

    Researchers have developed MapRF, a novel framework for constructing high-definition (HD) maps for autonomous driving systems using only 2D image labels. This weakly supervised approach leverages Neural Radiance Fields …

  10. TOOL · CL_15778

    DynFlowDrive model enhances autonomous driving with flow-based dynamic world modeling

    Researchers have introduced DynFlowDrive, a novel latent world model designed to enhance the reliability of autonomous driving systems. This model utilizes flow-based dynamics to predict future scene evolutions under va…

  11. RESEARCH · CL_15496

    Unified Map Prior Encoder enhances autonomous driving mapping and planning

    Researchers have developed a Unified Map Prior Encoder (UMPE) designed to integrate diverse map data, such as HD/SD vector maps, rasterized maps, and satellite imagery, into autonomous driving systems. This encoder addr…

  12. RESEARCH · CL_14065

    Researchers develop noise-aware training for robust 3D object detection using V2X data

    Researchers have developed a new method for integrating vehicle-to-everything (V2X) communication data into 3D object detection systems for autonomous driving. This approach aims to overcome the limitations of onboard s…

  13. RESEARCH · CL_08569

    BEV segmentation models for autonomous driving lack generalizability across datasets

    A new study published on arXiv evaluates the performance of Bird's-Eye View (BEV) segmentation models used in autonomous driving. Researchers found that models trained on single datasets, like nuScenes, tend to overfit …

  14. RESEARCH · CL_08191

    ConFusion detector achieves state-of-the-art camera-radar fusion for autonomous driving

    Researchers have introduced ConFusion, a novel camera-radar fusion method for 3D object detection in autonomous driving. This approach utilizes heterogeneous query interaction, combining image, radar, and world queries …

  15. RESEARCH · CL_08200

    New framework uses prior map data to improve camera-based 3D object detection

    Researchers have developed a novel framework called DualViewMapDet for camera-only 3D object detection and tracking, particularly beneficial for autonomous driving systems that lack LiDAR sensors. This method leverages …

  16. RESEARCH · CL_06573

    OpenVO framework enhances visual odometry with temporal awareness and geometric priors

    Researchers have developed OpenVO, a new framework for open-world visual odometry that accounts for temporal dynamics and works with uncalibrated cameras. Unlike previous methods that assume fixed observation frequencie…

  17. RESEARCH · CL_06178

    ARETE paper details new method for HD map generation using vehicle fleet data

    Researchers have developed ARETE, a new method for generating High-Definition (HD) maps for autonomous driving using crowdsourced vehicle data. The approach employs a Detection Transformer (DETR) model to predict vector…

  18. RESEARCH · CL_06207

    CLLAP framework enhances radar-camera fusion for autonomous driving with LiDAR pretraining

    Researchers have developed CLLAP, a new pretraining framework that uses contrastive learning to improve radar-camera fusion for 3D object detection in autonomous driving. The method generates pseudo-radar data from abun…

  19. RESEARCH · CL_05117

    DVGT-2 model advances autonomous driving with real-time geometry and planning

    Researchers have introduced DVGT-2, a novel Vision-Geometry-Action (VGA) model designed for autonomous driving. Unlike previous vision-language-action models, DVGT-2 prioritizes dense 3D geometry for decision-making. Th…