AI, Professionals, and Professional Work: The Practice of Law with Automated Decision Support Technologies

Seminar | March 22 | 3:10-5 p.m. | 107 South Hall

Daniel Kluttz

School of Information

A report on work being done jointly with Deirdre Mulligan. Technical systems employing algorithms are shaping and displacing human decision making in a variety of fields. As technology reconfigures work practices, researchers have documented potential loss of human agency and skill, confusion about responsibility, diminished accountability, and both over- and under-reliance on decision-support systems. The introduction of predictive algorithmic systems into professional decision making compounds general concerns about bureaucratic inscrutability and opaque technical systems with specific concerns about encroachments on expert knowledge and (mis-)alignment with professional liability frameworks and ethics. To date, however, we have little empirical data regarding how automated decision-support tools are being debated, deployed, used, and governed in professional practice.

The objective of our ongoing empirical study is to analyze the organizational structures, professional rules and norms, and technical system properties that shape professionals' understanding of and engagement with such systems in practice. As a case study, we examine decision-support systems marketed to legal professionals, focusing primarily on technologies marketed for "e-discovery" purposes. Commonly referred to as "technology-assisted review" (TAR) or "predictive coding," these systems increasingly rely on machine-learning techniques to classify and predict which of the voluminous electronic documents subject to litigation should be withheld or produced to the opposing side. We are pursuing our objective through in-depth, semi-structured interviews with experts in this space: the technology company representatives who develop and sell such systems to law firms, and the legal professionals who decide whether and how to use them in practice. We argue that governance approaches should seek to put lawyers and decision-support systems in deeper conversation, not position lawyers as relatively passive recipients of system wisdom who must rely on out-of-system legal mechanisms to understand or challenge these tools. This requires attention both to the information demands of legal professionals and to the processes of interaction that elicit human expertise and allow humans to obtain information about machine decision making.