PulseAugur

New framework uses K-Shapley values for meritocratic fairness in bandits

Researchers have introduced a novel framework for achieving meritocratic fairness in budgeted combinatorial multi-armed bandits with full-bandit feedback. The approach extends the Shapley value concept to a K-Shapley value, which quantifies an agent's marginal contribution within coalitions of limited size. The proposed K-SVFair-FBF algorithm adaptively estimates this K-Shapley value, demonstrating improved fairness and performance on datasets from federated learning and social influence maximization.
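The summary's exact K-Shapley formula is not given here, but the idea of "marginal contribution within a limited set size" can be sketched with a Monte Carlo estimator: average an agent's marginal gain over random coalitions of at most K-1 other agents. The function name, sampling scheme, and coalition-size weighting below are illustrative assumptions, not the paper's definition.

```python
import random

def k_shapley(value_fn, agents, i, K, n_samples=2000, seed=0):
    """Monte Carlo sketch of a K-restricted Shapley value for agent i.

    Averages i's marginal contribution, value(S + {i}) - value(S), over
    random coalitions S of at most K-1 other agents. Hypothetical
    definition: the paper's K-Shapley value may weight coalition sizes
    differently.
    """
    rng = random.Random(seed)
    others = [a for a in agents if a != i]
    total = 0.0
    for _ in range(n_samples):
        # Draw a coalition size in [0, K-1], then a random coalition.
        size = rng.randint(0, min(K - 1, len(others)))
        coalition = rng.sample(others, size)
        total += value_fn(coalition + [i]) - value_fn(coalition)
    return total / n_samples

# Toy usage: with an additive value function, each agent's marginal
# contribution is just its own weight, so the estimate is exact.
weights = {0: 1.0, 1: 2.0, 2: 3.0}
value_fn = lambda S: sum(weights[a] for a in S)
print(k_shapley(value_fn, [0, 1, 2], i=1, K=2))
```

In the bandit setting the true value function is unknown, which is presumably why the K-SVFair-FBF algorithm must estimate these values adaptively from full-bandit feedback rather than query them directly as above.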

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Introduces a new fairness metric and algorithm for bandit problems, potentially improving resource allocation in complex systems.

RANK_REASON Academic paper introducing a new algorithmic framework and theoretical results.

Read on arXiv cs.AI →

COVERAGE [2]

  1. arXiv cs.LG TIER_1 · Shradha Sharma, Swapnil Dhamal, Shweta Jain

    Meritocratic Fairness in Budgeted Combinatorial Multi-armed Bandits via Shapley Values

    arXiv:2605.00762v1 Announce Type: new Abstract: We propose a new framework for meritocratic fairness in budgeted combinatorial multi-armed bandits with full-bandit feedback (BCMAB-FBF). Unlike semi-bandit feedback, the contribution of individual arms is not received in full-bandi…

  2. arXiv cs.AI TIER_1 · Shweta Jain

    Meritocratic Fairness in Budgeted Combinatorial Multi-armed Bandits via Shapley Values

    We propose a new framework for meritocratic fairness in budgeted combinatorial multi-armed bandits with full-bandit feedback (BCMAB-FBF). Unlike semi-bandit feedback, the contribution of individual arms is not received in full-bandit feedback, making the setting significantly mor…