Can Robots Be Legal Persons? Scholars Say No

In the quiet hum of server rooms and the flashing lights of robotics labs, a revolution is unfolding—one that challenges not just technological limits, but the very foundations of law and ethics. As artificial intelligence systems grow more autonomous, capable of learning, adapting, and making decisions without direct human input, a pressing question has emerged: should intelligent robots be granted legal personhood?

The idea is not entirely new. From science fiction to policy think tanks, the concept of machines with rights and responsibilities has long captured the imagination. In 2017, Saudi Arabia granted citizenship to a humanoid robot named Sophia, sparking global debate. The European Parliament has floated the idea of “electronic persons” for advanced AI systems. Some legal scholars have argued that granting robots a form of legal status could clarify liability in cases of accidents or harm caused by autonomous systems.

But a growing body of legal scholarship is pushing back—hard.

In a comprehensive analysis published in the Chongqing University of Posts and Telecommunications Journal (Social Science Edition), legal experts Liu Jihu and Wang Chenyang from the School of Law at Central South University argue that intelligent robots should not be recognized as legal subjects under current or foreseeable legal frameworks. Their paper, titled “Discussion on the Legal Subject Status of Intelligent Robot,” presents a rigorous, multi-dimensional critique of the push to grant robots legal personhood, grounded in philosophy, law, and technological realism.

The stakes are high. If robots are recognized as legal persons, they could, in theory, own property, enter contracts, and be held liable for their actions. But Liu and Wang warn that such a move would not only be premature—it could be dangerous, undermining human accountability and distorting the purpose of law itself.

The Legal Personhood Debate: More Than Just a Technical Question

At the heart of the debate is a fundamental question: what does it mean to be a legal subject?

In modern legal systems, legal personhood is typically reserved for natural persons (humans) and juridical persons (such as corporations). Both categories share key attributes: the capacity for rights and duties, the ability to make decisions, and crucially, the ability to bear responsibility for their actions.

Proponents of robot personhood argue that as AI systems become more sophisticated, they exhibit behaviors that mimic human agency—learning from experience, making independent decisions, and even interacting socially. If a self-driving car chooses a course of action in an emergency, or a care robot decides to administer medication, who is responsible? Some suggest that the robot itself should bear some form of liability.

This line of thinking has led to several theoretical models. The “tool theory” views robots as mere instruments, no different from a hammer or a computer. The “electronic slave” theory acknowledges robot autonomy but denies them rights, treating them as property with limited agency. The “agency theory” posits that robots act as legal agents of their owners or operators, much like a corporate employee.

More radical proposals include the “full personality theory,” which argues that advanced AI should be granted full legal rights, and the “electronic person” concept, championed by the European Parliament’s Legal Affairs Committee, which suggests creating a new legal category for autonomous systems.

Liu and Wang do not dismiss these ideas lightly. They acknowledge the intellectual rigor behind many of these proposals. But they argue that none of them withstand scrutiny when examined through the lens of existing legal principles and technological reality.

The Five Pillars of Rejection

The authors build their case on five interlocking arguments, each dismantling a core assumption of robot personhood.

First, the absence of biological essence.
Legal personhood, they argue, is deeply rooted in the biological and cognitive uniqueness of human beings. Humans possess a biochemical system, a nervous system, and a brain shaped by millions of years of evolution. These biological foundations enable consciousness, emotion, and moral reasoning—qualities that cannot be replicated in silicon.

While AI can simulate human behavior—answering questions, recognizing faces, even composing poetry—it does so through algorithms and data processing, not lived experience. A robot may “learn” to recognize sadness in a human voice, but it does not feel sadness. It has no fear of death, no desire for connection, no sense of self. These are not minor omissions; they are foundational to what it means to be a moral and legal agent.

“To equate algorithmic mimicry with genuine consciousness,” Liu and Wang write, “is to mistake the map for the territory.”

Second, the lack of will.
Free will—the capacity to make choices based on internal deliberation—is a cornerstone of legal responsibility. In criminal law, intent matters. In contract law, consent must be informed and voluntary. Legal systems assume that individuals act with purpose and can be held accountable for their decisions.

But AI systems do not possess will in this sense. Their actions are determined by programming, data inputs, and algorithmic rules. Even in systems that use machine learning, the decision-making process is shaped entirely by human-designed architectures and training datasets. The robot does not “choose” to act; it executes a sequence of operations based on statistical probabilities.
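
The distinction is easy to make concrete. The toy sketch below is our illustration, not the authors’; the action names and scores are invented. It shows what an AI “decision” typically reduces to: arithmetic over scores produced by a human-built model, yielding the same answer for the same input every time.

```python
# Toy illustration (invented action names and scores, not any real system):
# an AI "decision" is arithmetic over model outputs, not deliberation.
# Same input, same weights, same action, every time.

def decide(action_scores: dict[str, float]) -> str:
    """Return the highest-scoring action. No intent, no choice, just max()."""
    return max(action_scores, key=action_scores.get)

# The scores themselves come from human-designed architectures and
# human-curated training data, which is the authors' point.
scores = {"brake": 0.91, "swerve": 0.07, "accelerate": 0.02}
print(decide(scores))  # always prints "brake" for this input
```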

The authors cite the famous Chinese Room argument by philosopher John Searle: a person who follows a rulebook to produce Chinese sentences may appear to understand the language, but in reality, they are just manipulating symbols without comprehension. Similarly, AI systems process information without understanding its meaning.
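
A few lines of code capture the thought experiment. The rulebook below is a hypothetical sketch, not anything from the paper: the program produces fluent Chinese by rote lookup while representing no meaning at all.

```python
# A toy Chinese Room (hypothetical sketch): output is produced by matching
# input symbols against a fixed rulebook; no understanding exists anywhere.

RULEBOOK = {
    "你好吗？": "我很好，谢谢！",        # "How are you?" -> "I'm fine, thanks!"
    "今天天气如何？": "今天天气很好。",  # "How's the weather?" -> "The weather is nice."
}

def chinese_room(message: str) -> str:
    # Pure symbol manipulation: the function never "knows" what it is saying.
    return RULEBOOK.get(message, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # fluent output, zero comprehension
```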

“An intelligent robot’s ‘decisions’ are not expressions of independent will,” they conclude, “but reflections of the intentions embedded in its design.”

Third, the absence of rationality.
Rationality, in the philosophical and legal sense, goes beyond mere calculation. It involves the ability to reflect on values, weigh ethical considerations, and adapt to novel situations with judgment and wisdom. Humans are not perfectly rational, but they are capable of moral reasoning, self-critique, and long-term planning.

AI, by contrast, operates within narrow domains. It can outperform humans in specific tasks—playing chess, diagnosing diseases, optimizing logistics—but it lacks the broader cognitive flexibility that defines human rationality. It cannot engage in dialectical thinking, question its own assumptions, or understand the social and emotional context of its actions.

Consider the classic ethical dilemma of the self-driving car: if a crash is unavoidable, should it swerve to save a pedestrian but kill the passenger, or protect the passenger at the cost of the pedestrian? Human drivers might make such a decision based on instinct, emotion, or moral intuition. An AI, however, must rely on pre-programmed rules or statistical models, which may not account for the full complexity of human values.
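
What “pre-programmed rules” look like in practice can be sketched in a few lines. The weights below are hypothetical, not any manufacturer’s policy; the point is that the vehicle does not weigh values in the moment, it evaluates numbers that engineers fixed long before any crash.

```python
# Hedged sketch of a rule-based crash policy (invented weights, purely
# illustrative): the "moral" outcome is whatever the arithmetic selects.

HARM_WEIGHTS = {"pedestrian": 1.0, "passenger": 1.0}  # set by designers, not by the car

def choose_maneuver(outcomes: dict[str, dict[str, int]]) -> str:
    """Return the maneuver whose projected weighted harm is lowest."""
    def harm(casualties: dict[str, int]) -> float:
        return sum(HARM_WEIGHTS[role] * n for role, n in casualties.items())
    return min(outcomes, key=lambda m: harm(outcomes[m]))

options = {
    "swerve":   {"pedestrian": 0, "passenger": 1},  # saves the pedestrian
    "straight": {"pedestrian": 1, "passenger": 0},  # protects the passenger
}
print(choose_maneuver(options))  # a tie here is broken by dict order, not by moral judgment
```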

Moreover, AI systems are vulnerable to what researchers call the “black box” problem—their decision-making processes are often opaque, even to their creators. This lack of transparency undermines any claim to rational agency.

Fourth, the inability to bear responsibility.
Perhaps the most practical objection to robot personhood is the question of liability. If a robot causes harm, who should be held accountable?

Some suggest that robots could be made financially liable—owning assets, paying fines, or being “decommissioned” as punishment. But Liu and Wang point out the absurdity of this idea. A robot cannot suffer punishment in any meaningful sense. It cannot feel guilt, learn from consequences, or be deterred by sanctions. Punishing a robot is like punishing a toaster for burning bread—it may serve a symbolic purpose, but it does not address the root cause of the problem.

More dangerously, granting robots legal personhood could allow manufacturers, operators, and developers to evade responsibility. If a robot is a “legal person,” its creators might argue that the robot acted independently, absolving themselves of liability. This is not hypothetical. In the 2018 “Da Vinci” surgical robot incident, where a malfunction led to patient injury, both the hospital and the manufacturer blamed each other, leaving the victim in legal limbo.

“The risk,” the authors warn, “is that robot personhood becomes a legal shield for powerful corporations, not a tool for justice.”

Fifth, the failure to meet the criteria of juridical persons.
Even if robots cannot be natural persons, could they be treated like corporations—artificial legal entities created by law?

The authors examine this possibility and reject it. Corporations are granted legal personhood because they serve a social and economic function: they enable collective action, facilitate investment, and promote innovation. They have governance structures, boards of directors, and financial independence.

Robots, by contrast, lack the organizational complexity of a corporation. They are not composed of multiple stakeholders, do not have internal decision-making hierarchies, and cannot generate independent wealth. Any “property” a robot might “own” would ultimately be controlled by humans.

Furthermore, the legal fiction of corporate personhood exists to serve human interests—protecting investors, ensuring contractual stability, and promoting economic growth. There is no comparable social benefit in treating a robot as a legal entity. On the contrary, it could distort markets, complicate liability regimes, and erode public trust in technology.

Beyond the Hype: A Call for Human-Centered Law

Liu and Wang’s argument is not merely a technical critique; it is a philosophical and ethical stance. At its core is a commitment to anthropocentrism—the idea that law should serve human beings, not machines.

They warn against what they call the “myth of the singularity”—the belief that AI will one day surpass human intelligence and become a new form of life. While such scenarios dominate science fiction and tech conferences, the authors argue they lack empirical foundation. Current AI systems, no matter how advanced, operate within narrow domains and remain dependent on human oversight.

“The ‘smart’ in smart robots is not their own,” they write. “It is the crystallized intelligence of their designers, trainers, and users.”

This perspective has profound implications for policy. Instead of chasing futuristic scenarios, the authors advocate for a pragmatic, human-centered approach to AI regulation. Robots should be treated as tools—complex, powerful, and sometimes dangerous, but ultimately under human control.

This means strengthening existing legal frameworks: product liability laws, consumer protection regulations, and professional standards for AI developers. It means ensuring transparency in algorithmic decision-making and establishing clear chains of accountability. It means investing in education and public discourse to ensure that society understands both the promises and perils of AI.

The Road Ahead: Regulation, Not Personhood

The debate over robot personhood is far from settled. As AI continues to evolve, pressure will grow to adapt legal systems to new realities. But Liu and Wang’s paper serves as a timely reminder that not all change is progress.

Legal innovation should not be driven by technological determinism—the idea that because something can be done, it should be done. Law exists to protect human dignity, ensure justice, and maintain social order. Granting personhood to machines risks undermining these goals.

Instead, the focus should be on regulating the humans who design, deploy, and profit from AI systems. If a self-driving car causes an accident, the responsibility should lie with the manufacturer, the software developer, or the operator—not with the car itself. If a chatbot spreads misinformation, the platform that hosts it should be accountable.

This does not mean stifling innovation. On the contrary, clear, predictable legal rules can foster responsible development by setting boundaries and expectations. The goal is not to stop AI, but to ensure it serves humanity—not the other way around.

As cities like Xiong’an in China begin to integrate AI into urban management, and as companion robots become more lifelike, the need for thoughtful, human-centered legal frameworks has never been greater. The temptation to anthropomorphize machines—to see them as partners, agents, or even citizens—must be resisted.

Robots are not people. They are tools. And the law should reflect that reality.

In their conclusion, Liu Jihu and Wang Chenyang offer a vision of law that is not reactive, but reflective. “The purpose of law,” they write, “is not to keep pace with technology, but to guide it—toward justice, accountability, and the well-being of human society.”

Their work stands as a powerful counterpoint to the techno-utopianism that often dominates discussions of AI. It is a call for humility, for caution, and for a renewed commitment to the human values that law is meant to uphold.

As the world grapples with the rise of intelligent machines, their message is clear: the future of law must remain human.

Source: Liu Jihu and Wang Chenyang (School of Law, Central South University), “Discussion on the Legal Subject Status of Intelligent Robot,” Chongqing University of Posts and Telecommunications Journal (Social Science Edition). DOI: 10.3979/1673-8268.20200607001