Document Type

Article

Publication Date

2022

Publication Title

Duke Law Journal

Volume

71

First Page

1207

Abstract

In the twentieth century, the Food and Drug Administration ("FDA") rose to prominence as a respected scientific agency. By the middle of the century, it transformed the U.S. medical marketplace from an unregulated haven for dangerous products and false claims to a respected exemplar of public health. More recently, the FDA's objectivity has increasingly been questioned. Critics argue the agency has become overly political and too accommodating to industry while lowering its standards for safety and efficacy. The FDA's accelerated pathways for product testing and approval are partly to blame. They require lower-quality evidence, such as surrogate endpoints, and shift the FDA's focus from premarket clinical trials toward postmarket surveillance, requiring less evidence up front while promising enhanced scrutiny on the back end. To further streamline product testing and approval, the FDA is adopting outputs from computer models, enhanced by artificial intelligence ("AI"), as surrogates for direct evidence of safety and efficacy. This Article analyzes how the FDA uses computer models and simulations to save resources, reduce costs, infer product safety and efficacy, and make regulatory decisions. To test medical products, the FDA assembles cohorts of virtual humans and conducts digital clinical trials. Using molecular modeling, it simulates how substances interact with cellular targets to predict adverse effects and determine how drugs should be regulated. Though legal scholars have commented on the role of AI as a medical product that is regulated by the FDA, they have largely overlooked the role of AI as a medical product regulator. Modeling and simulation could eventually reduce the exposure of volunteers to risks and help protect the public. However, these technologies lower safety and efficacy standards and may erode public trust in the FDA while undermining its transparency, accountability, objectivity, and legitimacy. Bias in computer models and simulations may prioritize efficiency and speed over other values such as maximizing safety, equity, and public health. By analyzing FDA guidance documents and industry and agency simulation standards, this Article offers recommendations for safer and more equitable automation of FDA regulation.
