2023-04-11-cousins23a.md

---
title: "Revisiting Fair-PAC Learning and the Axioms of Cardinal Welfare"
software:
abstract: "Cardinal objectives serve as intuitive targets in fair machine learning by summarizing utility (welfare) or disutility (malfare) $u$ over $g$ groups. Under standard axioms, all welfare and malfare functions are $w$-weighted $p$-power-means, i.e., $M_p(u; w) = \sqrt[p]{\sum_{i=1}^g w_i u_i^p}$, with $p \leq 1$ for welfare, or $p \geq 1$ for malfare. We show the same under weaker axioms, and also identify stronger axioms that naturally restrict $p$. It is known that power-mean malfare functions are Lipschitz continuous, and thus statistically easy to estimate or learn. We show that all power means are locally Hölder continuous, i.e., $|M(u; w) - M(u'; w)| \leq \lambda \parallel u - u' \parallel^\alpha$ for some $\lambda$, $\alpha$, and norm $\parallel \cdot \parallel$. In particular, $\lambda$ and $1/\alpha$ are bounded except as $p \rightarrow 0$ or $\min_i w_i \rightarrow 0$, and via this analysis we bound the sample complexity of optimizing welfare. This yields a novel concept of fair-PAC learning, wherein welfare functions are only polynomially harder to optimize than malfare functions, except when $p \approx 0$ or $\min_i w_i \approx 0$, in which case they are exponentially harder."
section: Regular Papers
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: cousins23a
month: 0
tex_title: "Revisiting Fair-PAC Learning and the Axioms of Cardinal Welfare"
firstpage: 6422
lastpage: 6442
page: 6422-6442
order: 6422
cycles: false
bibtex_author: Cousins, Cyrus
author:
- given: Cyrus
  family: Cousins
date: 2023-04-11
address:
container-title: Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
volume: '206'
genre: inproceedings
issued:
  date-parts:
  - 2023
  - 4
  - 11
pdf:
extras:
---
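The abstract's central object is the $w$-weighted $p$-power mean $M_p(u; w) = \sqrt[p]{\sum_{i=1}^g w_i u_i^p}$. As an illustration only (not code from the paper), the following sketch computes it with NumPy, including the $p \to 0$ limiting case (the weighted geometric mean, i.e., Nash welfare); the function name and example values are hypothetical.

```python
import numpy as np

def power_mean(u, w, p):
    """Weighted p-power mean M_p(u; w) = (sum_i w_i * u_i**p) ** (1/p).

    Special cases: p = 1 is the weighted arithmetic mean (utilitarian
    welfare); p = 0 is taken as the limit, the weighted geometric mean
    (Nash welfare). Assumes u > 0 and w a probability vector.
    """
    u = np.asarray(u, dtype=float)
    w = np.asarray(w, dtype=float)
    if p == 0:
        # Limiting case as p -> 0: weighted geometric mean.
        return float(np.exp(np.sum(w * np.log(u))))
    return float(np.sum(w * u**p) ** (1.0 / p))

# Hypothetical per-group utilities with uniform group weights.
u = [0.9, 0.5, 0.1]
w = [1/3, 1/3, 1/3]
for p in (-2, 0, 1, 2):
    print(f"p = {p:>2}: M_p = {power_mean(u, w, p):.4f}")
```

Power means are monotone in $p$, so smaller $p$ (more fairness-sensitive welfare) weights the worst-off groups more heavily, which the printed values reflect.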