Lecture | October 31 | 1:10-2:30 p.m. | 202 South Hall
We take a social welfare approach to the problem of designing equitable algorithms. In our framework, the social planner cares about both the efficiency and the equity of outcomes. These preferences induce a preference over algorithms; as a result, in our model, the notion of an algorithm's fairness is derived from these more primitive utilitarian preferences rather than defined ex ante. Our characterization of these implied preferences allows us to address several questions. First, we describe how algorithms ought to be "procured"; e.g., how a city that cares about acquiring fair algorithms for bail decisions would run a Netflix-style competition. Second, we derive optimal regulatory policy for governments that seek to regulate the algorithm choices of private actors (e.g., companies) that care only narrowly about efficiency. Third, we illustrate these ideas using empirical data from criminal justice, education, and health care. Finally, we use the framework to discuss the equity consequences of simplicity in the design of algorithms.