Reading Assignments

To keep this course flexible in response to the pace and dynamics of our in-class discussions, reading assignments will be scheduled on a rolling basis. To give you enough time to read the assigned material, I plan to post reading assignments at least one week in advance of class. Once I have posted a reading assignment for a particular day, I will not add more reading to that day’s assignment.

Do not forget to submit your reading response the evening before class. Details about your weekly reading responses can be found on the assignments and grading page.

Lesson 01 - Jan. 18

Our first class will address some foundational questions.

  • Why does this class exist and why are you taking it?
  • How are algorithms, artificial intelligence, and machine learning already affecting legal decision-making? What are the benefits? What concerns does this development raise?

In prior versions of this course, I’ve set aside time on the first day for students to introduce themselves and share what topics they’re interested in learning about in the class. I’d still like to take the time for you to introduce yourselves. But I have an additional assignment I’d like you to do before class.

Assignment:

Part One: Instead of sharing aloud with the class what you’d like to get out of the class, I’d like you to ask GPT-3 — a language processing model that uses deep learning to produce human-like text — to write your introduction to this class based upon your background and interests. On the first day, each student will share with the rest of the class the personal introduction that GPT-3 has crafted for them — along with any commentary you may have about GPT-3’s statements. In turn, I will share with the class GPT-3’s vision of a class entitled “Law, Justice, and Algorithms” and share how our class differs from the AI model’s prediction.

Part Two: Use some of the language from the introduction that GPT-3 has created for you as a prompt for DALL-E to create an image.

Please email me both the GPT-3 text and the DALL-E image as part of your reading response. You are welcome to create multiple versions and send me your preferred text and image.

Readings:

DOWNLOAD ALL READINGS FOR LESSON 01

AI in the Criminal Justice System
EPIC.org
Read all.

How the Police Use Facial Recognition, and Where It Falls Short
Jennifer Valentino-DeVries, New York Times (Jan. 22, 2020)
Read all.

Chicago’s “Race-Neutral” Traffic Cameras Ticket Black and Latino Drivers the Most
Emily Hopkins & Melissa Sanchez, ProPublica (Jan. 11, 2022)
Read all.

Uptrust raises $2m to fight the billions of dollars wasted on useless mass incarceration
Danny Crichton, TechCrunch (May 18, 2021) 
Read all.

The Coming Collision Between Autonomous Vehicles and the Liability System
Gary Marchant and Rachel Lindor, 52 Santa Clara L. Rev. 1321 (2012)
Read pages 1321-1330.

The Promise and Perils of Algorithmic Lenders’ Use of Big Data
Matthew Adam Bruckner, 93 Chi.-Kent L. Rev. 3 (2018)
Read pages 31-38.

DeepFakes: A Looming Challenge for Privacy, Democracy, and National Security
Danielle K. Citron & Robert Chesney, Calif. L. Rev. (2019)
Read pages 1768-86.

AI is mastering language; should we trust what it says?
Steven Johnson, New York Times Magazine (2022)
Read all.

Online Dispute Resolution Moves from E-Commerce to the Courts: Technology Executive Discusses Use of Internet to Settle Civil Cases
Erika Rickard, Pew Charitable Trusts (June 4, 2019)
Read all.

Lesson 02 - Jan. 25

An introduction to machine learning

In this class, we will discuss how the field of machine learning has developed, what it looks like now, and how it may look in the future. With a firmer understanding of what machine learning is, we can address the question of whether we need distinct legal or regulatory frameworks for governing algorithmic decision-making systems or whether the “law of the algorithm” is an unnecessarily specific instance of more general principles.

Readings:

DOWNLOAD ALL READINGS FOR LESSON 02

Machine Learning: A Primer (an introduction for both technical and non-technical readers)
Lizzie Turner, Medium: Artificial Intelligence (May 26, 2018)
Read all.

An Introduction to Statistical Learning with Applications in R
Gareth James, Daniela Witten, Trevor Hastie, & Robert Tibshirani (2021)
Read Introduction pages 1-9 (stop at “Who Should Read This Book?”), 12-13, 15-24.

Anatomy of an A.I. System
Kate Crawford & Vladan Joler (2018)
Read all.

Do Artifacts Have Politics?
Langdon Winner, Daedalus Volume 109(1) (1980)
Read all.

Cyberspace and the Law of the Horse
Frank Easterbrook, University of Chicago Legal Forum 207 (1996)
Read all.

Does Technology Drive Law? The Dilemma of Technological Exceptionalism in Cyberlaw
Meg Leta Jones, U. Ill. J.L. Tech. & Pol’y 249 (2018)
Read pages 249-260, 268-272.

The Co-Evolution of Autonomous Machines and Legal Responsibility
Mark Chinen, 20 Va. J.L. & Tech. 338 (2016)
Read pages 345-353.

Lesson 03 - Feb. 1

What concepts and tools do computational resources offer for realizing legal values and policies? What cautions and objections should lawyers and communities sharpen in the face of increasing use of computational and algorithmic tools in public and private settings?

Readings:

DOWNLOAD ALL READINGS FOR LESSON 03

Prediction Policy Problems
Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, and Ziad Obermeyer, American Economic Review (2015) 
Read pages 491-95.

A Guide to Solving Social Problems with Machine Learning
Jon Kleinberg, Jens Ludwig, and Sendhil Mullainathan, Harvard Business Review (Dec. 8, 2016)
Read all.

Of prediction and policy
The Economist (Aug. 20, 2016)
Read all.

Biased Algorithms Are Easier to Fix Than Biased People
Sendhil Mullainathan, N.Y. Times (Dec. 6, 2019)
Read all.

Want Less-Biased Decisions? Use Algorithms.
Alex P. Miller, Harvard Business Review (2018)
Read all.

Artificial Intelligence: The Revolution Hasn’t Happened Yet
Michael I. Jordan, Harvard Data Science Review (July 1, 2019)
Read all.

AI Is Doing Legal Work. But It Won’t Replace Lawyers, Yet
Steve Lohr, N.Y. Times (Mar. 19, 2019).
Read all.

Lesson 04 - Feb. 8

Fairness and discrimination

Recent developments in A.I. and machine learning raise questions about how fairness, equality, and nondiscrimination should be understood, defined, assessed, and advanced. As you make your way through this week’s readings, keep the following questions in mind:

  • What are the contrasting conceptions of fairness at work in these different sources?
  • How should we reconcile competing concerns of accuracy and equity?
  • How should an understanding of historic and systemic inequality influence the approach to incorporating machine learning into legal decision-making?
  • Do risk scores pose the same or different problems depending on the decision-making context (e.g., access to credit, eligibility for pretrial release without bail, parole eligibility, policing, child welfare, and so on)?
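
The second question above can be made concrete with simple arithmetic. The sketch below is my own toy illustration with made-up counts (nothing here comes from the assigned readings): a risk score can be equally precise for two groups yet still flag harmless members of the higher-base-rate group more often, one version of the tension you will see debated in the fairness literature.

```python
# Toy confusion-matrix arithmetic (hypothetical counts, not real data).
# "Precision" asks: of the people the tool flags, how many reoffend?
# "False positive rate" asks: of the people who would NOT reoffend,
# how many does the tool flag anyway?

def rates(tp, fp, tn, fn):
    """Return (precision, false_positive_rate) from confusion-matrix counts."""
    precision = tp / (tp + fp)
    false_positive_rate = fp / (fp + tn)
    return precision, false_positive_rate

# Group A has a higher base rate of the predicted outcome than Group B.
group_a = rates(tp=60, fp=40, tn=160, fn=40)   # base rate: 100 of 300
group_b = rates(tp=30, fp=20, tn=230, fn=20)   # base rate: 50 of 300

print(group_a)  # (0.6, 0.2)  -> same precision, FPR of 0.2
print(group_b)  # (0.6, 0.08) -> same precision, FPR of 0.08
```

When base rates differ, equal precision across groups and equal false positive rates across groups cannot generally both hold, which is at the crux of the disputes over risk-assessment tools in these readings.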

Readings:

DOWNLOAD ALL READINGS FOR LESSON 04

Criminal Tendency Detection From Facial Images and the Gender Bias Effect
Mahdi Hashemi and Margeret Hall, Journal of Big Data (2020).
Read the abstract and introduction.

Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
Joy Buolamwini and Timnit Gebru, ACM Conference on Fairness, Accountability and Transparency (2018).
Read all.

Two Conceptions of Procedural Fairness
Cass R. Sunstein, Social Research, Vol. 73, No. 2 (Summer 2006).
Read all.

Discrimination in the Age of Algorithms
Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan and Cass R. Sunstein, Journal of Legal Analysis (2018).
Read pages 113-146.

Why Is My Classifier Discriminatory?
Irene Chen, Fredrik D. Johansson, and David Sontag, arXiv (Dec. 10, 2018).
Read pages 1-9. Don’t worry about understanding the equations.

Fairness and Abstraction in Sociotechnical Systems
Andrew D. Selbst, danah boyd, Sorelle Friedler, Suresh Venkatasubramanian, and Janet Vertesi, ACM Conference on Fairness, Accountability, and Transparency (2019)
Read all.

Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse
Anna Lauren Hoffmann, Information, Communications, and Society (2019)
Read all.

Lesson 05 - Feb. 15

Prediction and probability

Many legal concepts and practices are rooted in the language and logic of prediction and probability. Before issuing a preliminary injunction, a judge must predict whether the plaintiffs will win their case on the merits. Police must have probable cause for many arrests, searches, and seizures to be constitutionally permissible. Child welfare agencies triage investigations of suspected neglect based on predictions of which claims will be substantiated, while public housing authorities manage waitlists for housing based on predictions of who will use public housing for the shortest length of time before living independently.

Across legal systems nationwide, algorithmic predictions are replacing or informing predictions traditionally made by humans. Today, algorithms can deny a person government food benefits, send a social worker to investigate a home, or ban a person from flying on commercial airlines. In many places, criminal procedure is now algorithmic from start to finish. Based on predictions of wrongdoing, algorithms encourage police to investigate, judges to incarcerate, probation to surveil, and parole boards to deny release.

As you go through this week’s readings, ask yourself about the compatibility of traditional legal concepts and emerging algorithmic systems. How much is a legal idea like “probable cause” governed by our understanding of probability? As we develop or encounter systems that consider probability much more rigorously than judges or police traditionally would, how much should statistical thinking govern our decision-making? Are there some legal concepts that can be reduced to numerical probability and some that should not be understood in purely probabilistic terms? Why? When is it fair to make a legal judgment that depends upon a prediction about someone based on that person’s similarity to a broader group? Is it ever possible to make a prediction about someone that doesn’t rely upon their similarity to a broader group?
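
On the last pair of questions, a minimal sketch (my own illustration, not drawn from the readings) shows why statistical prediction resists a purely individual reading: a nearest-neighbor model “scores” a person by averaging the outcomes of the most similar past cases, so its individual risk estimate is a group base rate by construction.

```python
# A toy nearest-neighbor predictor (hypothetical data, one feature).
# The "individual" score for a new person is just the outcome rate
# among the k most similar past cases.

def predict(person, past_cases, k=3):
    """Return the fraction of the k nearest past cases with outcome 1."""
    nearest = sorted(past_cases, key=lambda c: abs(c["age"] - person["age"]))[:k]
    return sum(c["outcome"] for c in nearest) / k

past_cases = [
    {"age": 19, "outcome": 1},
    {"age": 21, "outcome": 1},
    {"age": 24, "outcome": 0},
    {"age": 40, "outcome": 0},
    {"age": 45, "outcome": 0},
]

# The score for a 20-year-old is the base rate among the three closest
# cases (ages 19, 21, 24): two of the three had outcome 1.
print(predict({"age": 20}, past_cases))
```

Everything about this “individual” prediction turns on which past people count as similar, which is exactly the question several of this week’s readings press on.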

Readings:

DOWNLOAD ALL READINGS FOR LESSON 05

Law and the Crystal Ball
Barbara Underwood, 88 Yale L.J. 1408 (1979).
Read the first part, pages 1408-1420.

Naked Statistical Evidence of Liability: Is Subjective Probability Enough?
Gary L. Wells, J. Personality & Social Psych. 62:3 (1992).
Read the whole thing.

On Individual Risk
Philip Dawid, arXiv (2017).
Read the whole thing.

The Prediction of Violent Behavior: Toward a Second Generation of Theory and Policy
John Monahan, 141 American Journal of Psychiatry 10 (1984).
Read the whole thing.

Predicting Proportionality: The Case for Algorithmic Sentencing
Vincent Chiao, 37 Criminal Justice Ethics 238 (2018).
Read introduction, pages 238-40.

Situating methods in the magic of Big Data and AI
M. C. Elish & danah boyd, 85 Communication Monographs 57 (2018).
Read the introduction pages 57-58 and pages 67-72, starting with “Faith in Prediction.”

Lesson 06 - Feb. 22

Case study on risk assessments

What was your attitude toward risk assessments before doing these readings? What changed and why?

If you had to align yourself with one of the authors or between multiple authors, who would they be? How would your perspective differ from theirs?

What do you think of the “perfect is the enemy of good” argument from the “Open Letter”? Does your answer depend on a conception of risk assessments as either a positive incremental change or a distraction from other interventions?

What do you make of Mayson’s argument to use risk assessments to predict needs and intervene in positive ways? Mayson’s article leaves out what to do for pretrial incarceration in the absence of risk assessments. How do you expect that the open letter authors would respond? Would you buy their response? How would you approach the challenge that Mayson leaves unanswered?

DOWNLOAD ALL READINGS FOR LESSON 06

Understanding risk assessment instruments in criminal justice
Alex Chohlas-Wood, Brookings Institution (June 19, 2020)
Read the whole thing.

Machine Bias
Julia Angwin, et al., ProPublica (May 22, 2016).
Read the whole thing.

False Positives, False Negatives, and False Analyses: A Rejoinder to ‘Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks’
Anthony W. Flores et al., 80:2 Federal Probation (Sept. 2016).
Read the whole thing.

More than 100 Civil Rights, Digital Justice, and Community-Based Organizations Raise Concerns About Pretrial Risk Assessment
The Leadership Conference on Civil and Human Rights (2018).
Read the whole thing.

Updated Position on Pretrial Risk Assessment Tools
The Pretrial Justice Institute (2020).
Read the whole thing.

Open Letter to the Pretrial Justice Institute
James Austin, Sarah L. Desmarais & John Monahan (2020).
Read the whole thing.

The Accuracy, Equity, and Jurisprudence of Criminal Risk Assessment
Sharad Goel et al. (2018).
Read pages 1-4, 7-12.

Bias In, Bias Out
Sandra G. Mayson, 128 Yale L.J. 2218 (2019).
Read the introduction, pages 2221-27.

Algorithmic Risk Assessments and The Double-edged Sword of Youth
Megan T. Stevenson & Christopher Slobogin (2018).
Read the introduction, pages 1-3.

Lesson 07 - Mar. 15 (rescheduled)

Transparency, interpretability, and explainability

In this session we will discuss different approaches to achieving explainability, from both a legal and a technical perspective. We will learn about the difference between interpretable and non-interpretable algorithms, and about the difference between ex-ante and ex-post transparency, including different auditing methods. We will also discuss the tradeoff between transparency and accuracy, and how a balance between the two can be achieved.

Questions to consider:

  • Does a requirement of transparency or explanation in the use of algorithms in decision-making promote fairness?
  • How would it work, and what would its limitations be?
  • How should different legal contexts require different transparency practices?
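
To make the interpretable/non-interpretable distinction concrete, here is a minimal sketch of my own (the factors, weights, and numbers are all hypothetical): a point system whose every factor is visible and contestable, next to a stand-in for a model that returns only a number.

```python
# An "interpretable" model: a point system that can state its reasons.
def interpretable_score(applicant):
    reasons = []
    score = 0
    if applicant["prior_defaults"] > 0:
        score -= 2
        reasons.append("prior defaults: -2")
    if applicant["income"] >= 50_000:
        score += 1
        reasons.append("income >= 50k: +1")
    return score, reasons

# A stand-in for a black-box model: a number with no stated reasons.
# (A real black box would be an opaque learned function, not a formula.)
def black_box_score(applicant):
    return 0.37 * applicant["income"] / 100_000 - 1.9 * min(applicant["prior_defaults"], 3)

applicant = {"income": 60_000, "prior_defaults": 1}
print(interpretable_score(applicant))  # (-1, ['prior defaults: -2', 'income >= 50k: +1'])
print(black_box_score(applicant))      # a score, but no explanation to contest
```

An ex-ante transparency rule might require publishing the point system itself; an ex-post rule might instead require producing something like the reasons list above for each adverse decision.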

DOWNLOAD ALL READINGS FOR LESSON 07

The Mythos of Model Interpretability
Zachary C. Lipton, 2016 ICML Workshop on Human Interpretability in Machine Learning
Read the whole thing.

Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead
Cynthia Rudin, Nature Machine Intelligence (2019)
Read the whole thing.

The Intuitive Appeal of Explainable Machines
Andrew D. Selbst and Solon Barocas, 87 Fordham L. Rev. 1085 (2018).
Read 1087-1099.

The Hidden Costs of Automated Thinking
Jonathan Zittrain, New Yorker (2019).
Read the whole thing.

New York City’s Bold, Flawed Attempt to Make Algorithms Accountable
Julia Powles, New Yorker (2017).
Read the whole thing.

Lesson 08 - Mar. 22

Regulation Part One

This class and the next will examine legal regimes that could be used to govern or regulate the use of algorithms. Our first class will focus on the “Blueprint for an A.I. Bill of Rights” that the White House put out this past fall, along with the challenge of regulating deepfakes.

As A.I. and machine learning proliferate, what regulations are required? Are our institutions up to the task, or is technology’s disruptive power overblown? Where might governmental oversight succeed, and where might it fail? What would underregulation and overregulation look like in this space? How could we measure it? What existing rights should the government protect? What new rights ought to be protected in an age of automation?

DOWNLOAD ALL READINGS FOR LESSON 08

Please note that the download link above does not include the assigned podcast episode in the .zip file.

Podcast: Suresh Venkatasubramanian: An AI Bill of Rights
The Gradient (2023).
Listen from 43:50 to the end, but feel free to listen to the whole thing if you want; there’s lots of good stuff.

Blueprint for an A.I. Bill of Rights
Read pages 5-7, 15-29, 40-52.

DeepFakes: A Looming Challenge for Privacy, Democracy, and National Security
Danielle K. Citron & Robert Chesney, Calif. L. Rev. (2019)
Read 1768-1786 (Section II: Costs & Benefits).

Responding to Deepfakes and Disinformation
Soojin Jeong, Margaret Sturtevant, & Karis Stephen, The Regulatory Review (2021).
Read the whole thing.

Congress Should Not Rush to Regulate Deepfakes
Hayley Tsukayama, India McKinney, & Jamie Williams, Electronic Frontier Foundation (2019)
Read the whole thing.

Deepfakes and American Law
Abigail Loomis, Davis Political Review (2022).
Read the whole thing.

As Deepfakes Flourish, Countries Struggle With Response
Tiffany Hsu, N.Y. Times (2023)
Read the whole thing.

Lesson 09 - Mar. 29

Large Language Models

The main assignment for this week is to experiment with ChatGPT. You will need to create an account with OpenAI, but you can access ChatGPT in your browser at: https://chat.openai.com/chat. As you go through your week, just keep it open as a tab in your browser, and experiment with using the tool to answer different questions. In particular, explore how it might be used for law school and legal tasks. Watch this short video to learn how to provide ChatGPT with stronger prompts.

What are the possible benefits and drawbacks of large language models like GPT-4 for legal education and the legal profession? How can law schools best integrate these technologies into legal education? How can lawyers best integrate these technologies into legal work?

How might the widespread adoption of large language models in the legal profession affect the job market for lawyers and other legal professionals? Will these technologies complement or replace traditional legal roles?

To what extent should large language models be regulated within the legal profession? Can large language models be held accountable for providing incorrect or misleading legal advice? How can we ensure that AI technologies maintain the ethical standards of the legal profession?

Videos:

How ChatGPT Works Technically For Beginners

A Massive Upgrade To ChatGPT! (This is Crazy)

Microsoft’s AI Future of Work Event: Everything Revealed in 8 Minutes

We tried to compete with AI… [AI vs. ARCHITECT]
You can skim through parts of this video, but it is an interesting test case in an adjacent field.

Readings:

DOWNLOAD ALL READINGS FOR LESSON 09

You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills
Yuval Harari, Tristan Harris and Aza Raskin, N.Y. Times (2023).
Read the whole thing.

The False Promise of ChatGPT
Noam Chomsky, Ian Roberts and Jeffrey Watumull, N.Y. Times (2023).
Read the whole thing.

The Implications of ChatGPT for Legal Services and Society
Andrew Perlman, Harvard Law School Center on the Legal Profession (2022)
Read the whole thing.

New GPT-Based Chat App from LawDroid Is a Lawyer’s ‘Copilot’ for Research, Drafting, Brainstorming and More
Bob Ambrogi, LawSites (2023).
Read the whole thing.

Can AI replace patent attorneys?
Mark Sellick, HGF (2022)
Read the whole thing.

Will ChatGPT Bring AI to Law Firms? Not Anytime Soon.
Thomas Bacas, Bloomberg Law (2022).
Read the whole thing.

Evaluating The Legal Ethics Of A ChatGPT-Authored Motion
Aimee Furness and Sam Mallick, Law360 (2023).
Read the whole thing.

Lesson 10 - Apr. 5

Critical Approaches

In this class we’ll be examining critical approaches to algorithmic fairness that challenge the assumptions of the literature we’ve read so far and offer sometimes radical reconceptions of the role that algorithms can play in law and society.

As you work through the readings this week, take note of what critiques and perspectives resonate with you — even if you disagree with the broader argument of an article. In our class discussion, we’ll work on synthesizing these critiques and continuing the work of imagining algorithmic alternatives.

Readings:

DOWNLOAD ALL READINGS FOR LESSON 10

Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness
Ben Green, Philosophy & Technology (2022).
Read the whole thing.

On Missing Data Sets
Mimi Onuoha, GitHub repository (2018).
Read the whole thing.

Algorithmic Reparation
Jenny Davis, Apryl Williams, & Michael W. Yang, Big Data & Society (2021).
Read the whole thing.

White Collar Risk Zones
Sam Lavigne, et al., The New Inquiry (2017).
Visit the website and read the white paper.

Lesson 11 - Apr. 12

The Future

Readings to be determined

Lesson 12 - Apr. 19

Student presentations

1 - Kevin

2 - Amara

3 - Soundarya

Lesson 13 - Apr. 26

Student presentations

1 - Jake

2 - Monica

3 - Prince