A new paper reveals that the LAION-Aesthetics Predictor (LAP), a model widely used to curate training datasets for image generators such as Stable Diffusion, exhibits significant biases. The LAP disproportionately filters in images whose captions mention women while filtering out those mentioning men or LGBTQ+ individuals. It also favors realistic Western and Japanese art, reflecting its training data, which came primarily from English-speaking photographers and Western AI enthusiasts. The authors call for a move toward more pluralistic evaluation methods instead of prescriptive aesthetic measures.
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Highlights potential representational harms in AI image generation stemming from biased aesthetic evaluation models used for dataset curation.
RANK_REASON Academic paper analyzing biases in an AI model used for dataset curation.