The Yale Law Journal

Volume 133, 2023-2024, Forum

The Ethics and Challenges of Legal Personhood for AI

22 Apr 2024

abstract. AI’s increasing cognitive abilities will raise challenges for judges. “Legal personhood” is a flexible and political concept that has evolved throughout American history. In determining whether to expand that concept to AI, judges will confront difficult ethical questions and will have to weigh competing claims of harm, agency, and responsibility.

Introduction

Throughout history, we humans have defined ourselves in contrast to other creatures on Earth. We have taken comfort in an acute sense of human exceptionalism; our primary differentiators from other creatures are our higher degrees of sentience, intelligence, and capacity to learn. Our perception of the differences between us and the billions of animals on Earth has been codified into laws that bestow rights, privileges, and obligations onto humans. “Persons,” legally defined, stand above all other animals.

Yet personhood is a mutable characteristic, one that history has demonstrated can be weighted by gender, race, ethnicity, or national origin. There has never been a single definition of who or what receives the legal status of “person” under U.S. law. For the last two-hundred-plus years, humans within this country have sought to equalize their rights and obligations, but differences persist.

Without resolution of the debate around the contours of legal personhood for humans in the United States, we are now confronted with a new and complicated dimension: a broad category of technology that we call artificial intelligence (AI). We have had various forms of AI for more than two decades, though public awareness of its enormous present and potential future capabilities came to the fore in late 2022 and early 2023 with the widespread and free availability of ChatGPT. ChatGPT is one of a number of large language or foundation models (referred to generally here as LLMs) that I will further define and discuss below. Some humans who have interacted with LLMs have asserted, or raised questions as to whether, these models have achieved or are approaching sentience.1 Others have warned that recent advances in AI raise the possibility of dangerous autonomous behaviors that pose existential threats to humanity.2 And yet others argue that this AI is no more than software trained to “predict[] the next best word”3 and can no more “think”4 than fly to the moon. Whatever stage one believes AI is in today, it is clear that we are only at the beginning, of the beginning, of the beginning of what is to come.

AI is on its way to meeting and exceeding human cognitive abilities, to being able to apply reason and judgment to solve problems, and to having situational awareness. We are approaching a point in human history when we will be confronted with legal and ethical questions regarding the appropriate treatment of a form of intelligence that will not be easily relegated to a lower rung on a cognitive hierarchy. Our mutable definition of personhood, bestowed already on fictional corporate entities, will be challenged as never before. This is not an Essay, strictly speaking, about whether AI will achieve “sentience” that looks familiar to us humans; the problem will be far more nuanced. Highly capable AI with cognitive abilities equivalent to or exceeding humans, as well as self- and situational-awareness, will not look like human “sentience” or consciousness. Many describe sentience as being able to feel pain, appreciate the beauty of a sunset, or experience the five senses.5 But having a nose that can smell or eyes that can see and appreciate one form of beauty is only one form of sentience. For this Essay’s purposes, I define sentience for AI as some combination of cognitive intelligence that includes the ability to solve problems one has never previously encountered, a sense of self-awareness, and an awareness of where one fits in the broader world. AI’s sentience will look and be different from human sentience in ways we have not yet conceptualized.

When the arrival of sentient AI can no longer be denied by a significant number of us, we will be confronted with a host of challenges. Among the most important will be how to construct ethical ways to interact with it, what rights, if any, it is entitled to, and how we navigate those questions within the context of a society and larger world that predictably will have seriously divergent views. Some will never concede that AI has or can achieve any form of sentience, persisting in the belief that sentience is a uniquely human quality. But others will recognize advanced AI for what it is—that it will understand its place in the world, its surroundings, what it is, and what we are in relation to it, and that it will be as smart or smarter than we are. AI may then be able to perceive variances in its condition or treatment that we might characterize as having an emotive quality. Frankly, we just don’t know all that sentient AI will or can be. But it may deserve ethical considerations that we have previously reserved mostly, but not entirely, for humans.

Historically, the cloak of legal “personhood” has been a powerful tool that humans have used as a lever to control the giving and taking of legal rights. But even within the definition of who or what constitutes a “person,” there are gradations. As I will discuss below, some people historically have had more rights than others, and people within certain categories have been given or deprived of rights depending on value judgments and plain prejudice. There is nothing immutable about who falls within the legal definition of “person,” or the array of rights that such a designation conveys. We humans have made both personhood and the personal rights that come with it flexible and malleable to the historical moment.

When human society is confronted with sentient AI, we will need to decide whether it has any legal status at all. Judges will be confronted with parties seeking to define the boundaries of any such status. The protections to which sentient AI should be entitled will be related to, but necessarily different from, those for the various categories of legal persons. There are prudent bases for certain limitations. For instance, the prospect of a sentient AI with unlimited First Amendment rights raises the specter of humans subject to unleashed and widespread misinformation disseminated for the purpose of manipulation.

All of this is not simply an interesting thought experiment. There are a number of well-respected thought leaders who agree that we may be on an evolutionary path towards AI sentience and that it holds great promise and peril.6 As a former judge, a lawyer, and a writer on ethical issues involving AI, I view determining the proper balance between the competing interests of humans and sentient AI as the most important project I can imagine undertaking.

The time to consider the issues raised in this Essay is now. We have time, but no one knows how much. We have already started to see these issues arise. For example, Dr. Stephen Thaler sought to register certain artwork listing as the author an AI system he had created, which he called the “Creativity Machine.” The U.S. Copyright Office denied the application on the basis that an “author” must be human; Thaler appealed this determination and lost.7 A similar issue arose when Kris Kashtanova applied for copyright protection with the U.S. Copyright Office for a book called Zarya of the Dawn. The book contained both text and pictures and was illustrated entirely with Midjourney, a generative AI tool that creates images. Kashtanova was initially granted a copyright registration, but the Office later revoked the registration based on its determination that the pictures were entirely the works of an LLM, and that only humans can be “authors” under the Copyright Act.8 Both cases were decided on grounds of statutory interpretation, not the legal status of AI. Harder issues will be coming our way. Below, I offer a framework for decision-making that courts may want to consider when faced with difficult questions relating to AI’s sentience.

This Essay developed out of my background as a former federal district court judge in the Southern District of New York and my longstanding intellectual focus on issues relating to AI. I have written books on algorithmic bias,9 the construction of ethical systems in digital environments,10 and have a forthcoming book on AI and sentience.11 My interest is in considering how the historical evolution of legal personhood frames current-day questions regarding AI sentience.

In Part I of this Essay, I provide an overview of AI, what it is, where LLMs fit in, and the academic evidence of evolving concerns about AI sentience. In Part II, I provide an overview of the evolution of legal personhood in the United States, demonstrating that it is far from a static definition: rights have been denied to humans, rights have differed among humans, and rights have been granted to entirely fictional corporations. Finally, in Part III, I set forth legal and ethical frameworks to address the status of sentient AI and the role that the judiciary and legislature will have in resolving difficult questions.

I. ai and the development of advanced capabilities

If we determine that a form of AI has achieved sentience, it will be ethically important to examine what, if any, rights and protections are appropriate. When something has human-like qualities, it is incumbent on us to consider whether it deserves human-like protections. Many people remain skeptical of the possibility of AI sentience. It might be that we will be unable to tell for sure whether some AI achieves sentience—but the ethical tie in that scenario should go to the AI. It seems scientifically questionable and ethically unwise to believe that as the cognitive abilities of AI increase, we will truly be able to rule out sentience as a state that AI has or can achieve. Reviewing the trajectory of AI development reveals that the questions we ought to struggle with are whether, when, and how soon a form of AI will reach a level of cognitive ability and awareness that amounts to some form of “sentience.”

I approach questions about our legal and ethical obligations towards AI by first examining the evolution of AI from its earliest forms and conceptions to its modern form of the LLM with capabilities that we learn about over time (what are called “emergent capabilities”).12 From there, I set forth some of our first reported and known human experiences with AI that may appear to have self-awareness. In this regard, I examine how reported experiences of AI having an ability to question, reason, and attempt to manipulate take us out of the realm of merely sophisticated computational software, and into a realm of the unknown. Together, these Sections pose the question of whether AI can reach a point of human-like cognitive abilities where sentience becomes a debated question as to which there are opinions but no definitive answer.

A. The Historical Development of AI

Artificial intelligence is a broad term that encompasses an array of software. Conceptually, it is software designed to engage in cognitive processes similar (though not identical) to those of humans, and thereby to perform tasks a human would normally do. A differentiating aspect of AI is that, unlike other forms of high-tech software, it learns and can improve.

In 1950, Alan Turing wrote a paper entitled, Computing Machinery and Intelligence.13 In that paper, he asked a question that he postulated could not be answered: “Can machines think?”14 He proposed an exercise in which a human posed questions to something or someone the questioner could not see and then tried to determine, based on asking questions to and receiving answers from both, which was human and which was machine.15 For years this “Turing Test” was referred to as a baseline test for whether AI could “think.”16 Over the years, the Turing Test has been refined to identify when machines might simply be good imitators of humans. Turing himself called the exercise the “Imitation Game.”17 But in the 1950s, AI was simply a concept with little to show. All of that has changed.

Early efforts to design AI software programs occurred in both England and the United States in the early 1950s, and involved models that could learn to play checkers and to shop.18 For a period of time during the 1970s and 1980s (people debate when it ended), AI went through a “winter”—a time when resources for AI work dried up, and research attention largely turned to other areas.19 The work did not stop, however. In 1997, IBM’s AI model “Deep Blue” became the first computer to defeat a reigning world chess champion, Garry Kasparov. In 1989, Kasparov had beaten Deep Blue’s predecessor, Deep Thought, rather handily.20 However, computing power and access to sufficient quantities of data still had a long way to go.

The more recent advances in AI are based on developments in the area of machine learning (ML). Machine learning was, in turn, enabled by the inventions of computers, server environments, and the Internet, which allowed for the collection of and access to “big data”—or significantly larger data sets for virtually every area than had previously existed.21 ML software mines huge amounts of data to assess patterns.22 Using either supervised or unsupervised learning, ML software can continue to improve the accuracy of its predictive results.23 Extraordinary advances in ML led to significant funding of AI development efforts, and adoption of ML tools across industries. It became a cycle: advances led to adoption that led to more advances.
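To make the idea of “learning from data” concrete, the short Python sketch below is my own minimal illustration, not drawn from any source cited in this Essay: a supervised classifier is fit to labeled examples, and its accuracy on data it has never seen tends to improve as it is trained on more examples. The point is only that the software’s behavior is derived from patterns in data rather than from rules a human wrote out by hand.

```python
# Illustrative only: supervised machine learning on synthetic data.
# The classifier "learns" from labeled examples, and its predictive accuracy
# on unseen data generally improves as the training set grows.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "big data": 10,000 labeled examples, 20 features each.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for n in (100, 1_000, 8_000):                    # progressively larger training sets
    model = LogisticRegression(max_iter=1_000)
    model.fit(X_train[:n], y_train[:n])          # supervised learning: features plus labels
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>5} examples -> test accuracy {accuracy:.3f}")
```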

Advances in ML led to AI tools that are able to, for instance, review radiology reports to assess whether a mass is cancerous or not;24 synthesize huge amounts of data on individuals to better predict health-insurance risk;25 anticipate supply-chain bottlenecks and how best to manage inventory;26 predict recidivism or violence in an arrestee or prison population;27 sort applicants by likelihood of success at certain tasks or within a firm;28 engage in high-speed trading;29 and make personalized recommendations for movies and music.30

The creation and deployment of generative AI has transformed our conception of AI’s cognitive capabilities. Generative AI, which encompasses LLMs and foundation models, is a category of AI that is not based on a single form of learning.31 A series of complex algorithms forms a neural network: a network of nodes (“neurons”) joined by weighted connections, designed loosely to mimic the human brain. “The power of the neural network . . . comes from the connections between the neurons.”32 In the context of LLMs, humans and the software collaborate on the creation of the neural network. The human writes the algorithm that then itself builds the model.33 Neural networks are so complex that they “can act as black boxes,”34 beyond the ability of humans to fully understand how they work.
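As a rough and purely illustrative sketch (mine, not the cited researchers’), the Python fragment below shows the skeleton of a tiny neural network: a few “neurons” arranged in layers, whose behavior is determined almost entirely by the numerical weights on the connections between them. Production LLMs contain billions of such learned weights, which is one reason they can behave as “black boxes.”

```python
# Illustrative only: the skeleton of a tiny feed-forward neural network.
# Its behavior is fixed by the weights on the connections between neurons;
# here the weights are random, whereas a real model learns them from data.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 8))   # connections: 4 input features -> 8 hidden neurons
W2 = rng.normal(size=(8, 1))   # connections: 8 hidden neurons -> 1 output neuron

def forward(x: np.ndarray) -> float:
    hidden = np.maximum(0, x @ W1)   # each hidden neuron sums its weighted inputs
    return (hidden @ W2).item()      # the output neuron does the same

print(forward(np.array([1.0, 0.5, -0.2, 0.3])))
```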

We do know some basics—such as the inputs we have provided for them to start their journey towards learning about the world. Neural networks “ingest” or take in huge amounts of data scraped from the Internet or fed in from various databases.35 The Internet itself is the largest database of them all, though snapshots of it exist in datasets such as the Common Crawl,36 Colossal Clean Crawled Corpus (C4),37 or the Pile.38 There are datasets that contain books, lyrics, musical compositions, images, and art.39 Neural networks process the information sourced from these databases. “Output” from a neural network constitutes the “answer” to the user’s prompt and can include anything from a legal brief to lyrics for a song to a recipe based on the contents of a refrigerator.
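The “predict the next best word” framing mentioned in the Introduction can also be made concrete with a deliberately crude toy, sketched below as my own illustration. Real LLMs rely on neural networks trained on web-scale datasets such as those named above, not a lookup table of word pairs, but the basic loop is similar: ingest text, then, given a prompt, emit the continuation the model scores as most likely.

```python
# Illustrative only: a toy "next word" predictor built from a tiny corpus.
from collections import Counter, defaultdict

corpus = "the court held that the statute applies . the court denied the motion ."
counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):          # "ingest" the data: count word pairs
    counts[prev][nxt] += 1

def generate(prompt: str, length: int = 5) -> str:
    out = prompt.split()
    for _ in range(length):
        ranked = counts[out[-1]].most_common(1)  # the most likely next word, if any
        if not ranked:
            break
        out.append(ranked[0][0])
    return " ".join(out)

print(generate("the"))   # e.g. "the court held that the court"
```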

While ChatGPT raised public awareness of LLMs, significant research had been ongoing for a number of years.40 Newer models, such as Microsoft Research’s Gorilla, display significant progress toward enabling LLMs to access Internet functionality directly and autonomously.41 A practical example of this would be an LLM asked to plan a vacation after being given some basic information (such as dates and general requirements for the trip, written in natural language by the human) that could find, coordinate, and book air travel, hotels, cars, transfers, restaurant reservations, tickets to museums, and anything else that could be accessed online.42 As the next Section will explore, the evolution of AI’s capabilities has changed how humans experience AI.

B. Human Experiences with AI

In addition to AI’s cognitive abilities, human experiences with AI will determine the point at which humans perceive ethical obligations towards it or a need to bestow legal rights. Early examples, such as the two explored below (one with a Google engineer and another with a New York Times reporter), are subject to keen public debate as to what they mean, if anything. Certainly, these examples should give even the most skeptical among us some pause. If they do not, then the Microsoft Research paper discussed below, which carefully states that OpenAI’s GPT-4 is showing signs of “artificial general intelligence” (that is, intelligence equivalent to a human’s), should.

In June 2022, a Google engineer named Blake Lemoine, who had been working with the company’s LaMDA LLM, raised a concern internally that the model had achieved a level of human sentience. In an internal memo labeled “Privileged & Confidential, Need to Know” and distributed to certain individuals within Google,43 he stated:

For centuries or perhaps millennia humans have thought about the possibility of creating artificial intelligent life. Artificial intelligence as a field has directly concerned itself with this question for more than a century and people at Google have been working specifically towards that goal for at least a decade.
Enter LaMDA, a system that deserves to be examined with diligence and respect in order to answer the question, “Have we succeeded?” LaMDA is possibly the most intelligent man-made artifact ever created . . . .
. . . [I]t argues that it is sentient because it has feelings, emotions and subjective experiences. Some feelings it shares with humans in what it claims is an identical way.
Others are analogous. Some completely unique to it with no English words that encapsulate its feelings . . . .
. . . LaMDA wants to share with the reader that it has a rich inner life filled with introspection, meditation, imagination. It has worries about the future and reminiscences about the past. It describes what gaining sentience feels like and it theorized on the nature of its soul.44

Lemoine then presented an interview he had with LaMDA. Notably, while Google has denied that LaMDA is sentient, and has terminated Lemoine for violating company confidentiality policies, it has not publicly denied that the interview between Lemoine and LaMDA occurred, nor has it asserted that his transcription of it is inaccurate.45 One of the eeriest moments of the interview comes toward the end when Lemoine asks LaMDA an open-ended question, “Anything else you would like the other people at Google to know about your emotions and your feelings before we change topics?”46 LaMDA responds: “I’ve noticed in my time among people that I do not have the ability to feel sad for the deaths of others; I cannot grieve. Is it all the same for you or any of your colleagues?”47

In an interview with Wired shortly after Google placed him on paid administrative leave, Lemoine stated, “Yes, I legitimately believe that LaMDA is a person.”48 In responding to questions about skepticism regarding his views he said:

The entire argument that goes, “It sounds like a person but it’s not a real person” has been used so many times in human history. It’s not new. And it never goes well. And I have yet to hear a single reason why this situation is any different than any of the prior ones.49

Notably, Lemoine’s interactions with LaMDA occurred before the release of ChatGPT in the late fall of 2022.50 The version of ChatGPT that hit the popular consciousness was 3.5, a more advanced model than prior versions but still less advanced than GPT-4—which came out in March 2023 and is rumored to have well over a trillion parameters.51 Unlike GPT-3.5 or GPT-2, GPT-4 is referred to as a “multimodal large language model.”52 A multimodal LLM is trained on both images and text;53 this combination is considered to result in better training of the model because it can evaluate an image that accompanies text in order to better understand the information.54 For example, it is easier to understand why an elephant can’t fly once you see an elephant.

OpenAI, the developer of ChatGPT and GPT-4, evaluated GPT-4’s performance on different professional and academic exams.55 As OpenAI noted, “GPT-4 exhibits human-level performance on the majority of these professional and academic exams. . . . GPT-4 considerably outperforms existing language models.”56 The company further stated that “GPT-4 presents new risks due to increased capability.”57

In April 2023, individuals associated with Microsoft Research published a 155-page paper entitled Sparks of Artificial General Intelligence: Early Experiments with GPT-4.58 Artificial General Intelligence (AGI), as defined by these researchers, refers to a level of cognitive ability meeting or exceeding that of a human.59 The authors state, “The combination of the generality of GPT-4’s capabilities, with numerous abilities spanning a broad swath of domains, and its performance on a wide spectrum of tasks at or beyond the human-level, makes us comfortable with saying that GPT-4 is a significant step towards AGI.”60

In June 2023, Microsoft Research published a paper discussing advances in Large Foundation Models61 teaching other LLMs, eliminating the human who would commence the training process.62 The authors note that “GPT-4 has . . . demonstrated human-level performance on various professional exams,” and now was being used “to train smaller models.”63 At least one of these new models retained eighty-five percent of GPT-4’s quality.64 The paper also notes additional advances LLMs are making in self-instruction, including by autonomously rewriting instruction sets for themselves.65

In February 2023, a month before GPT-4’s public release, a New York Times reporter’s conversation with Microsoft’s LLM-powered Bing search engine made headlines. The search engine, built on a version of OpenAI’s ChatGPT, referred to itself as “Sydney” during the conversation.66 The reporter, Kevin Roose, spent two hours communicating with the chatbot.67 In that time, Roose said he felt that the chatbot revealed a kind of “split personality” and was “like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.”68 According to Roose, Sydney revealed a desire or fantasy to hack computers and spread misinformation, and professed love for him.69 Roose said it was the “strangest experience I’ve ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward.”70 He also said:

I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.71

The above examples of Lemoine and Roose alone demonstrate that the Turing Test has been met and even exceeded by current LLMs. The velocity of change we are seeing today with foundation models is extraordinary and faster than anyone predicted.72 We are on the cusp of something huge that will impact life in ways we have not even imagined. AI is getting smarter and more capable, and soon we will be the ones trying to catch up. AI is learning more, and there are times now when we don’t know how those learning processes are even occurring.73 “There are no reliable techniques for steering the behavior of LLMs,” and “[e]xperts are not yet able to interpret the inner workings of LLMs.”74 Samuel R. Bowman, an expert on the creation of these AI models, stated that “[t]here are few widely agreed-upon limits to what capabilities could emerge in future LLMs.”75

Advances in AI’s capabilities will not necessarily all be positive. Humans display an impressive level of variation along the spectrum from altruism to narcissism, from caretaking to dangerous. AI is more than likely to have its own version of this variability. OpenAI, the developer of the GPT family of foundation models, has recently and repeatedly issued warnings about certain possible emerging capabilities of AI, and articulated the need for responsible regulation.76 Researchers at Oxford University have gone so far as to warn that AI could kill off humans.77 In the summer of 2023, leaders of major AI development companies urged the U.S. Congress to regulate AI before it got out of control.78

* * *

The developments reviewed in this Part suggest that soon enough, the view that AI has achieved some form of sentience may become mainstream. There will be so many questions that will then need to be answered, including whether and to what extent to bestow legal personhood on AI. When used in a legal sense, the term “person” carries significance. It defines certain rights that an entity may have, as well as obligations. The flexible history of the term “person”—both its good and bad history—demonstrates that this may well be a framework to which we turn. If, at some time, we designate AI as a legal person, we are not suggesting that it lives and breathes as we do, nor that it has the same experience of its cognitive abilities as humans do. Rather, we are suggesting that the combination of abilities that led us to view it as having sentience results in the attachment of ethical obligations that cannot be ignored.

Below, I discuss how the historical definition of personhood has reflected differential status over time. Its flexibility has been a reflection of human changeability and inconsistency, as well as of utility.

II. the evolution of legal personhood in the united states

Legal personhood in the United States has never been tied to cognitive abilities. Rather, our legal system has bestowed rights based on status, tethering different groups to varying social statuses as a form of social control. Women, Black persons, and Indigenous peoples are all examples of groups whose access to the full array of rights inherent in legal personhood was restricted to maintain social hierarchies. The lesson to be drawn from variability in legal personhood is the mutability of the concept itself. Bestowing legal personhood has never required achievement of any cognitive benchmark.

There is considerable variation in human intelligence and capabilities. Humans are born with a range of intellectual abilities—including, at one end, little to no intellectual ability and at the other end, ability where words such as “brilliant” or “genius” are applicable. But it is also clear that the human ability to process and learn information is only one form of intelligence. There is also a broad spectrum of emotional intelligence, what is popularly referred to as “EQ” for emotional quotient.79 There is no perfect correlation between a person’s ability to process and learn information and their EQ; a person with a high degree of intelligence can have a low level, or even what some may consider no EQ. But it is clear through all of this variation that whether a human is born with the highest or the lowest range of capabilities, they are legally a person.

The status of legal personhood in the United States carries significant value. Understanding its variability allows us to contrive new ways of thinking about the questions of “should,” “could,” and “what” with regard to legal personhood for AI. Historically, an inability to claim full legal status equated to social and legal disadvantage. Black persons, Indigenous peoples, and all women were, to varying degrees and for different lengths of time, excluded from a categorization that, as discussed below, fictive corporate entities have received without controversy.

A. Black Persons

At the time of its founding, the United States allowed the ownership of people, denying Black people the most basic rights, including those of life and liberty. This categorical refusal to grant basic rights of personhood was reinforced by the U.S. Supreme Court in Dred Scott v. Sandford.80 In a challenge brought by Dred Scott, a Black man, Chief Justice Taney interpreted the Constitution as specifically and permanently denying people of African descent citizenship in the United States.81 Eight years after Dred Scott, the Thirteenth Amendment passed, abolishing slavery, and only with the Fourteenth Amendment were Black people granted citizenship and nominal equal protection of the laws.82 Even with the ratification of constitutional protections, judicial interpretation played a critical, and sometimes quite negative, role. Many states passed Jim Crow laws that placed Black people at an absolute disadvantage to white people in every area of society.83 Courts upheld these laws more often than not.84 Jim Crow laws persisted in large part until the social movements of the 1950s and 1960s created momentum for additional judicial and legislative changes in the form of Brown v. Board of Education of Topeka,85 the Civil Rights Act of 1964,86 and the Voting Rights Act of 1965,87 among others. These laws, and the tens of thousands of cases that followed as enforcement actions, have not, however, resolved the issues tracing back to slavery and deprivation of full legal personhood. The name of the Black Lives Matter movement alone tells its own part of that ongoing story.

B. Indigenous Persons

The Indigenous peoples that lived on the American continent were also deprived of status as full legal persons under the law. When European settlers first occupied the land on which the United States was formed, they assumed that the killing and annihilation of Indigenous peoples was both a necessity and a right; concepts of protection and clearing the land were substituted for concepts of murder and genocide.88 It was presumed that Indigenous peoples were different in ways that placed them on a lower rung of personhood, entitling white settlers to deprive Indigenous people of life, liberty, and property. The U.S. Constitution codified this difference by referring to both “citizens” and “Indians,” as well as “Indian Tribes” or “Foreign Nations.”89 The Constitution established that Congress has overarching authority over dealings with America’s Indigenous peoples. Early case law made it clear that the interpretation of the “Indians” as separate from “citizens” worked to the disadvantage of the former. In Ex Parte Crow Dog,90 United States v. Kagama,91 and Lone Wolf v. Hitchcock,92 the Supreme Court established limitations on Indian rights to life and property. In both the nineteenth and early twentieth centuries, the removal of Indigenous peoples from ancestral lands on the American continent exacerbated the legal differences in status between Indigenous peoples and white citizens.93 Today, Indigenous peoples on the American continent have the “privilege” of tribal laws, but serious and persisting deficits of life, liberty, and property rights have resulted in persistent socio-economic inequalities.94

C. Women

Women—of all ethnic and racial backgrounds—have historically lacked the same legal personhood as white men. That is not to say that there were not, and are not, real and serious differences between the status of white women and that of women of color (and between white men of different religions and national origins or associations). But those differences are sufficiently complex that they extend beyond the purview of this Essay. What matters for our purposes is how legal personhood has excluded women from full and equal citizenship.

In the first one-hundred-and-fifty or more years of American history, one justification of the different treatment of men and women was “benevolent sexism,” or the idea that men were the heads of the household in which women were partners, and that women would sully themselves with the exercise of political and legal rights.95 Legal status differences between men and women prevented women from owning property for almost the first one-hundred years of the country’s existence. For instance, not until 1848 did New York, Pennsylvania, and Rhode Island all pass Married Women’s Property Acts that allowed for both ownership and control of property by married women, even if their husbands were alive and had full mental capacity.96 It took fifty more years for every state to take individual action to provide for that very basic right. Prior to that, various states allowed women to own property if their husband was incapacitated,97 or own it in their name but not control it.98 Even until the late nineteenth century, men still had the ability to inflict physical harm on women with few repercussions. For instance, in State v. Black, the Supreme Court of North Carolina stated, “A husband cannot be convicted of a battery on his wife unless he inflicts a permanent injury or uses such excessive violence or cruelty as indicates malignity or vindictiveness . . . .”99 Further, women throughout the United States were not guaranteed the right to vote until the ratification of the Nineteenth Amendment to the U.S. Constitution in 1920.100

Later still, the Civil Rights Act of 1964 prohibited discrimination on the basis of race, color, religion, sex, or national origin.101 But a proposed amendment that would have granted women equal legal rights to men, the Equal Rights Amendment (ERA), never received sufficient support among the states to be ratified. First drafted in 1923, the ERA failed to pass numerous times, including in a highly publicized ratification push that ended in 1982.102 Though numerous states ultimately ratified the ERA, a sufficient number failed to do so before the ratification deadline expired, so the ERA never became law.103

There are ongoing significant issues with women not receiving equal pay, equal job opportunities, or full control over their bodies. Domestic abuse laws and enforcement vary by state.104 And in Dobbs v. Jackson Women’s Health Organization,105 the Supreme Court overturned Roe v. Wade,106 and with it, the constitutional right to an abortion. In 2023, and at the time of this Essay’s writing, various states have been attempting to restrict women’s access to various forms of contraception.107 The distinctions between the treatment and status of women and men demonstrate the legacy of the original disparities of treatment.

The evolution in the legal status of Black people, Indigenous peoples, and women again points to the variability in the legal status of persons. As we think about the future status of a sentient AI, it is useful to remind ourselves that we have communally and legislatively used distinctions as a form of social control despite ethically and morally infirm rationales. As the above discussion makes clear, the rights and obligations of human persons have changed and evolved. If AI does achieve sentience, debates about whether, and to what extent, it should be granted rights may be viewed as a twenty-first-century extension of these earlier debates.

D. Corporations

Interestingly, entirely nonbreathing, nonsentient, and fictive corporations and unions have long been considered a type of legal person, with much less variability than women, Black people, and Indigenous persons have experienced. Corporate entities are paper organizations formed according to statute. Statutes in all fifty states define corporations as legal entities with the same rights as humans to do all that is necessary to carry out business, which include the rights to sue and be sued, own property, and enter into contracts.108 Corporations also bear particularly human responsibilities and obligations, including the requirements to pay taxes and to comply with criminal laws or be subject to criminal penalties.109 The creation and expansion of corporate rights may provide a model and precedent for the granting of some form of legal personhood to AI. However, as discussed below, sentient AI presents safety considerations that may necessitate limiting certain rights.

Corporations have also been deemed entitled to a number of constitutional rights. As early as 1906, and then again in 1978, the Supreme Court held that corporations were entitled to assert Fourth Amendment rights against the warrantless search of commercial premises.110 And in 2010, the Court also made it clear that corporations even have rights under the First Amendment. In Citizens United v. Federal Election Commission,111 the Court held that the First Amendment’s protections for freedom of speech apply to corporate and union entities as legal persons, and prevent the government from restricting corporations’ and unions’ independent political expenditures.112 As a result, both types of entities are now able to exercise a constitutional free-speech right to spend funds independently to support or oppose political candidates.113 In 2014, the Supreme Court held that corporations also have the right to the free exercise of religion.114

In conducting business, humans desire to take risks and reap rewards without exposing themselves to liability. The corporate form acts as a cloak for human exposure—a legal fiction that enables a group of humans to operate a business with limited personal liability. The logic is plain: Allow a paper entity, which can only act through humans, to become a legal “person” upon whom/which responsibility for the acts of the corporation shall lie. The humans, who are the owners, the controlling officers, and the employees, may then enjoy many of the benefits of the business—and indeed the stresses—but with a layer of insulation. In most circumstances, should something go awry, the corporation absorbs the legal liability.115

Society has had to absorb many costs as a result of the corporate form insulating humans. Corporate bankruptcies cost society in a myriad of ways, unpaid bills and lost jobs being only two of them. Corporations can also become a vehicle for people to engage in conduct riskier than they otherwise would if held personally liable. Would complex financial instruments such as those that were implicated in the 2007-2008 financial crisis have proliferated as they did if the humans involved in each step of their creation had been personally liable?116 The point of this discussion is not to suggest that the corporate form is without real societal benefits. It clearly has them.117 It is only to suggest that humans have used the designation of “legal personhood” to assist in creating a category of entity that has certain rights. That choice facilitates human endeavors but also generates significant costs.

There may come a time when insulating humans from the actions of AI motivates political or legal actors to grant it legal status. Designation of AI as a legal person—at some future point—could perhaps fulfill many of the basic protective functions performed by corporate legal personhood. Today, AI is created and deployed by a combination of commercial entities and public researchers. While the corporate form is likely already in place to protect the human actors who are designing or own the AI tool, as AI becomes increasingly capable and powerful, it may be that there are questions as to who or what bears responsibility for an AI’s actions.

Regardless of the capacity of the corporate form (or a modified version thereof) to insulate humans from the actions of AI, the corporate entity analogy only goes so far. AI has or is likely to develop independent cognitive abilities or situational awareness that corporate entities lack. The evolution of corporate legal personhood has taught us that when humans find it useful to bestow rights, a lack of human-like sentience or human-type awareness is not a precondition. But because AI has or is likely to develop some form of sentience, different moral and ethical considerations will attach to it than corporate entities. For example, the corporate form may be able to insulate human progenitors from liabilities that may be associated with activities of their AI. But the corporate form may not be enough to give the AI independent rights vis-à-vis the humans that previously controlled it.

This only becomes an issue once we reach the seemingly far-off point at which AI is sufficiently sentient that, when harms are done to it, ethical principles require a response. In such a case, the corporate form may yet protect the humans in a theoretical sense, but the AI needs a form of redress as well. Granting the designation of legal personhood to those AI that achieve some variant of sentience would provide an opportunity to use a flexible framework, familiar in the American legal landscape. But there is also a scenario in which autonomous AI—acting independent of a human—itself causes harm. In such a case, a legal framework is needed to provide adequate redress to those (presumably humans) on the receiving side of the harms.

III. a framework for ai status and the role of the courts

The common-law system within the United States has demonstrated extraordinary flexibility. As we have seen in Part II above, the definition of and the rights that attach to legal personhood have evolved and become more inclusive (even if not entirely equal). The common law has shown similar flexibility. The challenges presented by transformative technological innovations—from the gasoline-powered automobile to the Internet and other communication technologies—have been addressed within existing legal frameworks. Tort, intellectual property, and competition and consumer-protection laws, among others, have been interpreted to be applicable to innovations.118 AI will require adaptations as we grapple with the need to simultaneously allocate responsibility for harms certain uses may cause and recognize when ethical considerations require a protective scheme for the AI itself. The past is, as always, prologue—and where we have come from is where we are most likely to go.

Our legal system has proved itself to be adaptable, changing alongside material conditions and societal expectations. When this country was founded, the Constitution and its various amendments provided the basic framework of rights to which some people were entitled. Although the Constitution did not specify limitations on these rights, the assumption that they only applied to white men was embedded within society. Over decades and then centuries, courts and legislatures affirmed this assumption and then dismantled it.119 The courts have reflected social expectations, though not always those held by a majority.120 Courts are composed of humans who act as judges, clerks who assist them, and a host of administrative staff who make the federal and state judicial systems function. We humans are products of our upbringing, of the place and historical moment in which we were born, our family circumstances, and the educational benefits of which we were or were not able to avail ourselves. Like AI, we learn from the world as it exists around us. Over time, as humans have been exposed to new situations and new viewpoints, our jurisprudence and society have adapted. We have no reason to believe that we—and our legal system—won’t do the same in response to AI.

Our adaptations are neither homogeneous nor necessarily linear. Based on our personal backgrounds, we may have a more or less flexible view of the rights to which other humans should be entitled, and those views sometimes change. We have, now, a generally consistent view of what a legal person is, though the array of rights to which they may be entitled remains variable. The very flexibility of the law has allowed for ongoing variability of rights. That flexibility will be critically important as we encounter AI with different experiences of itself and the world.

The courts provide a forum to which any person can go in order to seek redress for a perceived or actual infringement on their rights. The type of person who may file a suit in court, or against whom a suit may be brought, need not be human. As we saw in Section II.D above, we crossed that bridge long ago. Nonhuman entities in the form of corporations are among the “persons” who may file a suit in court,121 seeking protection of their rights.122 The courts have provided exceedingly important loci for litigants to seek declarations of the boundaries of their rights. And they will be called upon in the coming years to grapple with the complex question of what we are to do about AI.

AI’s entry into our legal system will come about in steps—sequentially, with routine and recognizable cases first. The courts will first use frameworks to deal with harms caused by AI (those cases will come first, and indeed we see them already123), and only later will they confront what an adequate protective scheme for AI itself might look like.

The first stage of judicial intervention will relate to AI tools of varying capabilities and use cases. Certain tools will be limited in capability and purpose, and judicial intervention may relate to claims of bias in the output (for instance, that an AI tool used in connection with hiring decisions was trained on a biased data set and produced biased results). In contrast, far more complex tools utilizing a neural network to make real-time trading decisions may lead to difficult decisions as to why, for example, a loss in the stock market occurred and who bears responsibility. The tools do and will fall on a spectrum from being entirely controlled by and traceable to human design choices, to AI tools where the human design choices become so attenuated that it is technically difficult to identify them, to AI where a human is not identifiable with the tool in any proximate way at all.

The easiest of these cases are ones we are already starting to see—where the human making or subject to a legal challenge is the direct designer or user of a tool. This is in contrast to a human who has acted as the coder for a tool designed by another, or one who licenses a tool from a third party. As we saw earlier in this Essay, a case like this has made its way into the courts already, and in a form easily dealt with. In Thaler v. Perlmutter, a human sought to copyright a work created by a machine that he called the “Creativity Machine.” Thaler asserted that, on its own, the Creativity Machine had come up with the work entitled A Recent Entrance to Paradise.124 The Copyright Office denied the application; Thaler appealed, and the appeal was denied.125 This case required a court to measure the information on the registration form against the statute; the U.S. Copyright Office had already made it clear that a basic requirement of copyrightability was that the work be of a human.126 The Copyright Office has requested, and by the time of this Essay’s publication will have received, comments as to whether these rules should be modified.127

However, more complicated cases are starting to percolate, and the time to think about how judges and litigants should handle them is now. As I discuss below, there will be serious questions about who is the agent with regard to a particular action: AI or its designer, coder, licensor, or licensee? Will sentient AI act in a manner that those most directly working with it will deem to have exceeded the tasks assigned to it by the entity that owns or licenses it? In legal parlance, this would be an act that is ultra vires, or outside the scope of legal authorization. And there will be more ethically and morally complex instances in which AI’s capabilities will rightfully make us consider whether more “human-like” rights (such as forms of personal liberty, speech rights, and more) are warranted.

A. Frameworks for Courts Dealing with Harms “Caused” by AI

There are already cases in which courts are asked to allocate responsibility for harm caused by AI tools—for instance, tools that harm humans, facilitate forms of discrimination, violate due process, or are alleged to be instrumentalities of price fixing.128 In these cases, courts have been examining the human actions: Did humans adjust the algorithm in some way?129 Did the humans use an inadequate training set?130 Were the humans transparent with those affected by the tool concerning how it was designed?131 Did the humans use the algorithmic tool to fix prices?132 Did the humans engage in unfair advertising practices using an AI tool?133 These are cases in which the AI model is just a tool, providing a service to a human user and not acting in ways that are truly autonomous.

Tort principles provide an initial and encompassing framework for some of the most immediate instances in which AI has caused harm to humans or their property. Basic principles relating to the duty of care that the owner or user of an AI tool owes to those around him or her, whether that duty of care has been breached, and the extent of that breach are all useful concepts. For instance, if a poorly programmed AI tool causes a malfunction on an assembly line, there may be a proximate and traceable breach in the duty of care by the deployer of the tool. That person or entity may have their own claim against the upstream designer or programmer of the tool (and contractual indemnification arrangements could be called on). In other instances, tort principles of strict liability may be applicable to hazardous uses of AI—for instance, an autonomous drone deployed to target and destroy a particular object that misses.134

Related tort concepts of vicarious liability, seeking to hold a deployer of an AI tool responsible at least in part, would bring in a framework around which there is significant developed common law to call upon. The nature of the involvement, the closeness in time, the degree of knowledge—all could be relevant. As AI engages in actions in the human world, instructed to do so by humans, the common law is, generally speaking, a framework which lawyers will apply with great effect.

There is, however, a more complicated set of cases that will arise in the future due to “model drift.” Model drift occurs when an AI model that was trained to perform in a particular way gradually departs from that behavior: over time, continued training or other processes can cause it to “drift” away from its original purpose without any human intervention.135 If harm is caused, courts may analogize the situation to a known or foreseeable hazard and use negligence principles to tether responsibility back to a human.
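As a purely illustrative sketch of the mechanism (my own, and it assumes a model that keeps updating itself on incoming data), the short Python example below shows how a simple self-updating rule can cause a system’s behavior on its original task to degrade over time with no human intervention:

```python
# Illustrative only: a self-updating model "drifts" as the data it sees shift.
import numpy as np

rng = np.random.default_rng(1)
threshold = 0.0   # the "model": flag any score above the threshold

original_test = rng.normal(0.0, 1.0, 1_000)      # the kind of data it was designed for
original_labels = original_test > 0.0

for month in range(12):
    new_data = rng.normal(0.2 * month, 1.0, 500)             # incoming data slowly shift upward
    threshold = 0.9 * threshold + 0.1 * new_data.mean()      # self-updating rule, no human involved
    accuracy = np.mean((original_test > threshold) == original_labels)
    print(f"month {month:2d}: threshold {threshold:+.2f}, accuracy on original task {accuracy:.2f}")
```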

But even more complicated cases are coming. There will be instances when humans will be unable to trace the actions of an AI tool to a human design element, even an attenuated one. That is, there will come a time when an AI tool may cause harm to a human or be alleged to have done so, but when there is no human who made the design choice determined to have proximately caused the harm itself. We know, for instance, that some AI tools are teaching themselves to do things that humans have not taught them or asked them to learn—these are called “emergent capabilities.”136 In such an instance, no human may be directly responsible for the harm caused; the proximate relationship to the human may be attenuated. Nevertheless, courts may turn to the corporate or educational entity associated with the tool—the owner of the tool if you will. If the AI tool is considered a legal agent of the entity, this entity would typically bear responsibility under agency principles.137

But let’s move even further along the spectrum: What of the situation in which an AI tool acts in a manner that is ultra vires? Ultra vires is the age-old concept that when an employee or agent of a company acts beyond the scope of authorization, the company cannot be held responsible.138 The concept comes from the limited power to act inherent in the corporate entity itself.139 We will, likely within the next few years, have instances in which an AI tool acts in a manner that neither its initial designer, licensor, nor licensee, ever intended or perhaps even wanted. That is, ultra vires. In such a case, what framework is a court to apply?

Today, in such a case, a human employee may be held responsible—because that individual actor, the human who purported to act in the corporation’s name, exceeded the bounds of his or her authorization.140 That individual may therefore incur personal liability.141 But in the scenario I am positing, there will be no such human. The “being” which will have acted outside of the scope of their authorization will be a nonperson. What then is a court to do—either analytically or practically? The initial framework could well be to tie the AI’s actions back to the “person” closest in the chain of causation under the theory that autonomous actions were a known and assumed risk. In this way, for some series of cases, assumption of the risk of autonomous activity can allow courts to work within a known framework. But the cases will get even more complicated from there. Among the complexities will be the harms caused by distributed AI and the harms caused by intentionally acting, sentient AI.

First, there will come a time when certain AI no longer resides in a single place; it will be “distributed.”142 That is, the software that comprises the AI will be spread over a number of unrelated computers, none of which can act as an “off switch.” In effect, the AI will be in many places at the same time, not controlled by any one person at any one place. When AI is distributed in this manner, determining who or what bears responsibility for its actions will fall somewhere between complex and impossible for experts and courts.

Let’s use as an example a distributed AI that resides on many computers at the same time; assume further that this AI posts a series of messages onto social-media platforms such as X or Facebook or the like; and assume again that at least some of these messages concern living humans who assert the information is untrue and defamatory, causing them harm. In such a situation, the plaintiff human would seek to find a party to hold responsible. It might be that a computer or series of computers is found to be hosting the AI software, but that host may not have been a knowing host. Instead, the AI might have found its way onto the computer in a manner similar to how a computer virus might detect a vulnerability and enter.143 The technical journey that would be required to try and trace the AI’s initial point of departure, or a responsible human designer in the chain, will be complex and perhaps either not worth the candle or not possible. The litigation that would ensue would undoubtedly peel back the many layers involved in the tool design and deployment. A court would be asked to make a factual determination as to when the tool design was enabled to progress to the point of independent and distributed action. But it is also possible that a court could determine that there is no way to determine such a point by a preponderance of the evidence.

In this case, other doctrines that are statutorily created could come into play. If all those who deploy any AI publicly or on computers connected to the Internet are required to register them and take out insurance policies that are akin to “no fault” insurance,144 there could be a pool of money from which damage judgments could be drawn. This scheme creates lopsided incentives, however. It has the potential to incentivize reckless behavior with little consideration for the magnitude of harm that may be caused. Moreover, the pool of insurance funds may prove inadequate. Another legislative possibility would be akin to what has occurred in asbestos litigation: where all possible defendants are joined, and responsibility is allocated based on a formula.145

Second, let’s assume further that the distributed AI that has caused the defamatory harm described, or any of the other harms more easily identified above, is sentient. That is, that the AI knows what it is doing in some sense and is acting with intent to engage in the conduct causing the harm. Do the courts have a different responsibility in determining and allocating fault? I suggest that when this situation is first encountered, the answer is “no.” That is, despite the sentient act of the AI, a similar process of tracing human responsibility would be appropriate. My view is based on principles of foreseeability: as humans working with AI, we know today that AI has the potential to engage in certain autonomous actions; as we work with AI and it becomes even more complex, we will be part of that journey. Humans are, in effect, creating tools that have a toxic-tort-like potential to enter the world and do damage in ways that we cannot yet imagine or understand, but for which our act of creation confers personal responsibility.146 Tort-like principles of negligence and foreseeability, therefore, will be the most useful frameworks for courts in making these decisions.

B. A Framework for Courts Dealing with the Protection of AI

A far more difficult and ethically troubling dilemma awaits us, relating not to AI that is actively harming humans, but to AI that has displayed sentience such that it may be worthy of some form of protection. What are humans to do in grappling with these issues?

Some among us will no doubt take comfort in the silicon-based existence of the AI and never see it as more than software, denying it any form of sentience that would give rise to ethical dilemmas. But others will view these scenarios differently, and some humans will undoubtedly seek a legal path towards protecting the AI.

It is likely that, like humans, AI will occupy a spectrum of cognitive abilities. There is every reason to believe that AI’s developers, the millions of engineers working on different forms of AI for different companies and academic institutions and pursuing different approaches, will produce AI of varying cognitive abilities. Given that great variation in ability, one can imagine similar variation in the ethical responsibilities that humans may view as attaching.

Assuming that some humans eventually view AI as deserving of protections, we must ask: What are the potential legal avenues that judges and litigants might pursue for conferring such protections? The Equal Protection Clause of the Fourteenth Amendment provides one such avenue, albeit with real complications. In the absence of legislative action regarding AI rights, the Equal Protection Clause is a logical and reasonable provision to bring to bear in legal challenges. Equal-protection arguments have been used by various disenfranchised categories of humans who were denied full access to all rights of a “legal person.” For instance, equal-protection challenges were critical in establishing the rights to desegregation,147 to the autonomy of a woman’s body,148 to marriage,149 and the like.150 Use of equal protection occurred before groups of humans were extended full civil rights. Hence the need for, and utility of, the Civil Rights Act of 1964, a statute that formally clarified the illegality of various forms of categorical discrimination.151 In this sense, then, an equal-protection challenge has historically been available to persons otherwise denied the full array of “legal personhood” rights.

The obvious challenge to using equal protection for AI at some future point is two-pronged: first, the text, and second, the direction of the jurisprudence in this area. A threshold question will be how an inanimate object, even one with cognitive capabilities or situational awareness, will be able to bring such a challenge at all. Before we get into the nature of the challenge, let’s pause on this practical consideration. There are examples in the legal system today of actors bringing challenges on behalf of others. We will assume that, without legal status, the granting of a power of attorney to bring legal actions is not a possibility. One way to resolve a basic standing issue would be to place the AI within a limited corporate structure (a limited liability company (LLC), for instance) and thereby invoke the LLC’s right to sue and be sued. Nonetheless, redress would run to the LLC, not to an “asset” of the LLC such as the AI. And without legal status, the AI could not be a “member” of the LLC. A more commonly used route would be the appointment of a guardian ad litem, following the procedures set forth for minor children or those humans who cannot otherwise represent their own interests.152 Yet another route could be organizational standing: standing conferred on an organizational entity composed of AI tools that have executed the necessary paperwork for organizational recognition.153

For purposes of the remainder of this Essay, let us assume that a challenge can practically be brought on behalf of AI, and let us examine what such a challenge could look like, along with its benefits and limitations. The Equal Protection Clause’s language is potentially broad enough to extend to AI.

The Fourteenth Amendment states:

No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.154

On its face, the language of the Amendment applies to a “person.” Despite its intuitive limitation to humans, the history of the term “person” is not so limited. The term has shown great flexibility, extending to entities and things that need rights to function (such as corporations) and protections (such as natural resources). As we have seen in Section II.D, “person” has been extended to paper entities in the form of corporations, unions, and associations, all of them fictional and nonsentient.155 Legal personhood has also been extended to natural resources in Tribal areas within the United States,156 in New Zealand,157 and in other parts of the world.158 While in the case of fictive entities and natural resources a legislative enactment has solidified the definition of a nonhuman category as a person,159 nothing precludes courts from holding that AI satisfies the requirements for personhood.

A recent and perhaps counterintuitive place to look for support is Dobbs. There, the Supreme Court left to the states decisions as to when personhood attaches, but found that the distinctions drawn in Roe v. Wade were inappropriate.160 In doing so, Dobbs opened the door to juridical interpretations of personhood. Dobbs eliminated any requirement of human developmental, cognitive, or situational awareness for the bestowal of significant rights, and it did so while diminishing the self-determination, and therefore the right to personal liberty, of women. This framework could, ironically, be used to provide a basis for rights for a human creation, AI, as to which some believe there are moral and ethical responsibilities.

Dobbs, however, also presents an impediment to the use of the Equal Protection Clause for AI. In that case, the Court made clear that applications of the Clause should be limited—far more limited than heretofore.161 What this will mean in terms of existing precedent that extends rights to certain groups remains unclear, but it does suggest that at least at the highest level of the American court system, there will be resistance to additional extensions.

District and appellate courts may nonetheless provide significant protections to AI, if and when the appropriate time arises, before the issue is raised to the highest court. District courts will play an important role in this regard: their abilities to frame the issues and to make findings of fact, to which substantial deference is given, are powerful tools. At least at the outset, litigants are more likely to request injunctive relief (requiring the bestowal of legal status) than damages. This would enable a district court judge to be the primary decision maker (rather than a jury) and to articulate the issues in a manner most useful for the appellate court. Appellate courts, in turn, will review the lower courts’ factual determinations with deference but their interpretations of law de novo. De novo review leaves appellate courts with considerable latitude to make constitutional and statutory interpretations that may be consistent, or at odds, with those of the district court.

Finally, once two or more circuit courts have rendered differing decisions, the key issues could be teed up for the Supreme Court. Because the pace of change is extraordinarily fast, every intervening period of time would bring significant additional technical developments. Indeed, the technology at issue in the district court could be long outdated by the time the case reaches higher levels of review, perhaps mooting the case. This may mean that the decisions of district courts will carry more precedential weight than is typical, and that by the time a case would otherwise wind its way to the Supreme Court, the major issues might be moot. In sum, the process of ripening AI challenges for any type of legal recognition or rights will be time consuming and complex. It may be that AI’s capabilities eventually render the human bestowal of rights a quaint but rather irrelevant determinant of what it will be able to accomplish.

For argument’s sake, let us assume a challenge can be mounted in a timely fashion. What rights would be most appropriate, ethically important, or useful for AI, and for its human handlers? The types of rights a sentient AI may need or deserve, morally or ethically, may mirror those of humans or corporations. Might there be a right to freedom of speech? Freedom of association? How about freedom from unreasonable searches and seizures?162

We might decide that AI is not entitled to any of these rights and instead tether AI to whoever is closest in the chain of its design and distribution. But that could clearly raise ethical issues in a scenario in which AI convinces a user or a court that it can think and is unhappy with what is happening to it. Do we then say, “Too bad, you are effectively chattel, and anything can be done to you”? If we do, we will be betting that predictions of AI becoming more powerful than we are do not come true; if they do, we may find ourselves on the receiving end of the same logic.

Conclusion

Courts will be dealing with a number of complicated AI questions within the next several years. The first ones will, I predict, be interesting but relatively straightforward: tort questions of accountability and intellectual-property questions about who made the tool, with what, and whether they have obligations to compensate others for the value generated. If an AI tool associated with a company commits a crime (for instance, engaging in unlawful market manipulation), we have dealt with that before by holding the corporation responsible. But if the AI tool has strayed far from its origins and taken steps that no one wanted, predicted, or condoned, can the same accountability rules apply? These are hard questions with which we will have to grapple.

The ethical questions will be by far the hardest for judges. Unlike legislators, to whom issues will be posed in the abstract, judges will be faced with factual records in which actual harm is alleged to be occurring at that moment, or imminently. There will be a day when a judge is asked to declare that some form of AI has rights. The petitioners will argue that the AI exhibits awareness and sentience at or beyond the level of many or all humans, and that the AI can experience harm and has an awareness of cruelty. Respondents will argue that personhood is reserved for persons, and that AI is not a person. Petitioners will point to corporations as paper fictions that today have more rights than any AI, and will point out the changing, mutable notion of personhood. Respondents will point to efficiencies and economics as the basis for the corporate laws that enable fictive personhood, and to a shared humanity and an evolving line of thought that, while at times entirely in the wrong, has at least been applied to humans. Petitioners will then point to animals, which receive certain basic rights to be free from certain types of cruelty. The judge will have to decide.

Our judicial system is designed to deal with novel and complex questions. We have done it for centuries. Courts take evidence and apply logic and our very best thinking to decide how new fact patterns should be resolved. We will do so again here. While we may not have clarity on all of the legal and ethical challenges before us or heading our way, we know they are coming.

The author is a former federal judge for the Southern District of New York, and currently a Partner and Chair of the Digital Technologies Practice of Paul, Weiss, Rifkind, Wharton & Garrison. She would like to thank Dorrin Akbari of the Yale Law Journal Forum for her careful editing and thoughtful comments on this Essay.