Pretraining Strategies and Scaling for ECG Foundation Models: A Systematic Study
Researchers have conducted a systematic study of pretraining strategies and scaling for electrocardiography (ECG) foundation models. They evaluated five self-supervised learning objectives and found that contrastive predictive coding and JEPA yielded the most transferable representations. The study also showed that increasing pretraining data up to 11 million samples consistently improved performance for most objectives. Furthermore, structured state space models outperformed transformers and CNNs, suggesting that their inductive biases are key for effective ECG representation learning.
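To illustrate the contrastive objective family discussed above, here is a minimal sketch of an InfoNCE-style loss, the standard formulation underlying contrastive predictive coding. This is not the paper's implementation; the embedding dimensions, temperature, and toy "ECG segment" data are assumptions for illustration only.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss: each anchor embedding should score
    highest against its own positive among all positives in the batch."""
    # L2-normalize embeddings so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    # Pairwise similarity matrix, scaled by temperature
    logits = (a @ p.T) / temperature
    # Softmax cross-entropy with the diagonal as the correct class
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy example: 4 hypothetical "ECG segment" embeddings and augmented views
rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 16))
positives = anchors + 0.01 * rng.normal(size=(4, 16))  # near-identical views
loss_matched = info_nce_loss(anchors, positives)
loss_shuffled = info_nce_loss(anchors, positives[::-1])  # wrong pairings
```

With correctly paired views the loss is low; shuffling the positives breaks the pairing and drives the loss up, which is the signal that trains the encoder to produce transferable representations.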
IMPACT: The findings suggest that structured state space models and contrastive learning are key to effective ECG representation learning, potentially guiding future medical AI development.