This research paper explores the feasibility of using dataset-poisoning techniques to watermark contrastive learning datasets. The study finds that existing data-poisoning attacks suffer from limited adaptability and low success rates when applied to contrastive learning models. The paper therefore proposes repurposing these attacks as watermarks for intellectual property protection, employing statistical verification methods and a novel multi-level watermarking scheme.
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Introduces a novel approach to dataset intellectual property protection in contrastive learning settings.
RANK_REASON This is a research paper published on arXiv detailing novel methods for dataset watermarking.