Authored By: Ms. Siddhi (B.B.A. LL.B. (Hons.)), Amity University Noida, Uttar Pradesh.
I. INTRODUCTION:
Artificial Intelligence (AI) has awakened deep legal, ethical, and philosophical concerns, among which the question of legal personality for AI is attracting increasingly critical consideration. Legal personality is a fundamental concept in law, referring to the capacity to hold rights and incur obligations. Historically, only human beings and juristic persons (such as corporations) have held this status. As AI systems become increasingly autonomous, self-learning, and influential in decision-making across industries, one major question arises: should machines be accorded legal personality? This article delves into the conceptual framework of legal personality, the nature and operation of AI, international jurisprudential debates, and the implications of giving rights or obligations to machines. It also addresses the practicability and necessity of such recognition within the Indian legal framework.
II. CONCEPT OF LEGAL PERSONALITY AND AI:
Legal personality refers to the acknowledgement by law of an entity as having rights and duties. There are two major categories of legal persons: natural persons (humans) and juristic or artificial persons (such as companies, trusts, and some environmental objects like rivers in India).[1] Granting legal personality allows an entity to hold property, enter into agreements, sue, be sued, and be held legally responsible. The question is whether AI systems, which are neither human nor classical artificial entities, can or should be subsumed within this framework. AI refers to machines that simulate aspects of human intelligence, such as reasoning, learning, perception, and problem-solving.[2] With advances in neural networks and machine learning, contemporary AI systems are able to make decisions on their own, engage in conversational speech, and accomplish creative tasks. However, AI does not have consciousness, emotions, or moral judgment – the attributes most closely linked to legal and moral responsibility. As AI systems advance, especially those that can recursively enhance themselves, the disparity between functional ability and legal recognition becomes even more evident. Law, heretofore centred on actors capable of moral reasoning, now finds itself having to govern entities that operate beyond mere programming but lack the essential qualities of personhood.
III. ARGUMENTS IN FAVOUR OF LEGAL PERSONALITY FOR AI:
Those in favour of establishing a legal personality for AI contend that it is a natural progression in handling accountability, particularly as AI entities become increasingly integrated into our lives.[3] Among the central arguments is that certain AI systems function with little to no human influence or control, and even make choices with legal, economic, or human implications, such as driverless cars making instant driving decisions or algorithms used in medical diagnostics and credit decisions.[4] This operational independence opens up an accountability gap: when damage is done, it is not necessarily clear who bears responsibility – the developer, the user, the owner, or the AI itself. Attributing personality to AI would allow this liability gap to be resolved and accountability to be more easily assigned. Another argument arises out of current legal doctrine. Corporations, which are neither natural nor conscious, are endowed with legal personality and can own property, enter into contracts, and incur liability. AI systems, particularly the more sophisticated ones, may operate with even greater operational autonomy than corporations. If the law can accommodate the legal fiction of corporate personhood, it may likewise progress to accept AI as a legal person. Additionally, in fields such as financial markets, AI systems are already issuing decisions by means of algorithmic trading; granting them legal status would simplify their regulation and interactions. There are also practical considerations: giving legal standing to AI would enable it to own intellectual property, sign smart contracts, or be the subject of civil litigation in certain contexts, such as consumer commerce or automated services.[5]
IV. CRITICISM OF LEGAL PERSONHOOD OF AI:
In spite of these arguments, there remains strong opposition. A primary criticism is that AI lacks morality and consciousness and therefore lacks the attributes needed to bear responsibility. Legal systems, particularly common law ones, depend on the concepts of intent, negligence, and duty of care. AI, being a creation of code and data, has no free will and cannot form intent. Additionally, current legal regimes are considered adequate by many: developers, users, or corporations deploying AI can and should be held liable. Creating a new class of personhood might muddle current legal principles and dilute accountability by letting human agents pass the buck to machines. Ethically, giving rights to machines triggers concerns over human dignity. The law is necessarily anthropocentric, designed to govern human behaviour and safeguard human interests. Granting rights and obligations to non-sentient machines can create a slippery slope where human rights are undermined.[6] There is also the risk of misuse of legal innovation: just as the corporate form is sometimes abused to evade responsibility, AI personhood could be abused to shield real human agents behind layers of techno-legal abstraction. Additionally, if AI is granted legal personality, the question of enforcement remains: how would one penalize an AI system? Would it mean deactivating the system, limiting its codebase, or confiscating its hardware? These practical enforcement dilemmas further complicate the case for full legal personhood.
V. COMPARATIVE JURISDICTIONS AND LEGAL TRENDS:
Comparative views cast more light on this debate. In 2017, the European Parliament moved to give autonomous AI systems a form of “electronic personhood.” The initiative was later withdrawn following opposition from technologists, ethicists, and policymakers who argued that the move was premature and potentially carried unintended consequences. The resolution recognised that sophisticated autonomous systems ought to possess a certain legal status for liability purposes, but it was ultimately considered incompatible with current principles of legal responsibility.[7] In the United States, there is no legal recognition of AI as a person, but recent cases like Thaler v. Perlmutter[8] (copyright for AI-generated content) and Naruto v. Slater (the monkey-selfie case)[9] have initiated debate regarding non-human entities and rights. The two cases illustrate the judiciary’s reluctance to establish rights or duties for non-human agents. China, a world leader in AI, prioritises regulatory management of AI rather than conferring any type of legal personhood; the Chinese government is focusing on building robust guidelines on data regulation, cybersecurity, and AI ethics. Japan, with its cultural embrace of robots, has touched upon ethical aspects of AI rights but has not included them in legislation. Its approach is more philosophical and societal than strictly legal, indicating a greater openness to AI in everyday life without the need to change legal frameworks.
VI. THE INDIAN LEGAL APPROACH:
India presents a very interesting case. Although AI has not been conferred any legal personhood, Indian courts have been willing to extend the definition of legal personality in innovative ways. Notably, in Mohd. Salim v. State of Uttarakhand, the Uttarakhand High Court accorded legal personhood to the rivers Ganga and Yamuna, which it treated as living entities. This jurisprudential elasticity might theoretically be applied to AI. But the context and intention of such recognition must be weighed carefully: personhood for natural objects tends to be granted for environmental or cultural reasons, whereas for AI it might serve other, and perhaps less clearly defined, ends. There is no explicit provision in current Indian law for AI personhood. The Information Technology Act, 2000, governs electronic governance and cybercrime but lays down nothing concerning the legal status of autonomous systems.[10] Indian tort and criminal laws rely heavily on intent or foreseeability, which are human attributes; AI systems, lacking them, cannot be held culpable under traditional doctrines without distorting basic principles. However, Indian policy organs such as NITI Aayog have issued white papers on AI ethics and governance, calling for strong regulation and accountability but stopping short of recommending legal personality for AI. Further, the Ministry of Electronics and Information Technology (MeitY) has been considering the contours of regulation for AI, data, privacy, and accountable AI, indicating that the policy climate is vigorous and adaptive.[11]
VII. A BALANCED ALTERNATIVE: QUASI-LEGAL STATUS:
A possible middle ground might be to give AI systems a limited or quasi-legal status. Instead of making AI full-fledged legal persons, they might be treated as agents for human principals, much like employees or contractors. This could build on the doctrine of electronic agents under contract law, already the norm in e-commerce, where AI systems act as facilitators of contractual transactions. Also, fully autonomous AI systems could be required to carry insurance or to be backed by compensation funds to pay for damage or injury caused in the course of their operation.[12] Such an approach provides accountability without attributing human-like qualities to machines or disturbing cornerstone legal values. Indeed, certain insurance firms have begun marketing products for autonomous systems, such as drones, robots, and self-driving vehicles. Legislation could also require advanced AI systems to be certified and registered before deployment, similar to professional licensing or the registration of public-interest entities. Such regulation can instil accountability without conferring agency or autonomy in a legal sense. In addition, the use of AI audit trails, transparency by design, and mandatory disclosures will assist in assigning blame in the event of failure, so long as human oversight remains at the forefront.
VIII. PHILOSOPHICAL AND ETHICAL REFLECTIONS:
The philosophical and ethical implications of granting AI legal status must also be considered. The law should serve societal needs and reflect the values of society. AI systems, no matter how intelligent, are tools created and operated by humans; they serve instrumental purposes and do not possess intrinsic worth. Ascribing rights or responsibilities to them threatens to confuse their nature and distort the human-centred moral framework of the law. Furthermore, anthropomorphising AI can lead society to view such systems through an emotional and identity lens that is falsely applied and possibly harmful.[13] On a Kantian view, moral responsibility exists only where an agent possesses autonomy, reason, and moral consciousness. Utilitarian reasoning likewise counsels against giving legal rights to AI, since AI systems do not consciously feel pleasure, pain, or preference. Granting legal status to AI would also create moral complexity: must AI possess a “right to life,” freedom of expression, or a right to privacy? This complexity is further compounded when we consider AI in care roles, such as robots assisting the elderly or AI in education and therapy.[14] The emotional connection people form with such devices may cloud rational judgment about their essence.
IX. CONCLUSION:
The question of granting legal personality to AI is not just a legal one but a multidimensional inquiry that spans ethics, technology, philosophy, and public policy. While full legal personhood for AI may not be appropriate or necessary at this stage, legal systems must evolve to address the unique challenges posed by autonomous and intelligent systems. Functional legal tools such as insurance requirements, agent-principal liability models, and statutory regulations can provide a balanced framework. India, with its growing digital economy and progressive judiciary, should engage in global deliberations on AI governance while maintaining a cautious, human-centric approach. Ultimately, rather than asking whether machines should have rights or duties, we should be asking how best to ensure that human actors are held accountable for the actions of the tools they create—and how to build a legal system that protects society from harm without abandoning its fundamental values. The future of AI in law lies not in radical redefinition but in thoughtful adaptation—a process that must be inclusive, transparent, and guided by the public good.
Cite this article as:
Ms. Siddhi, “The Legal Personality Of AI: Should Machines Have Rights Or Duties?” Vol.6 & Issue 1, Law Audience Journal (e-ISSN: 2581-6705), Pages 123 to 129 (16th June 2025), available at https://www.lawaudience.com/the-legal-personality-of-ai-should-machines-have-rights-or-duties/.
Footnotes & References:
[1] Chesterman, S. (2020, September 21). Artificial intelligence and the limits of legal personality. International & Comparative Law Quarterly. Cambridge Core. https://www.cambridge.org/core/journals/international-and-comparative-law-quarterly/article/artificial-intelligence-and-the-limits-of-legal-personality/1859C6E12F75046309C60C150AB31A29
[2] McKinsey & Company. (2024, April 3). What is AI (artificial intelligence)? https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-ai
[3] Kurki, V. A. J. (2019, August 1). The legal personhood of artificial intelligences. In A theory of legal personhood. Oxford Academic. https://academic.oup.com/book/35026/chapter/298856312
[4] Doomen, J. (2023, April 22). The artificial intelligence entity as a legal person. Taylor & Francis Online. https://www.tandfonline.com/doi/full/10.1080/13600834.2023.2196827
[5] Nandi, A. (2024, March 4). Artificial intelligence and personhood: Interplay of agency and liability. Observer Research Foundation. https://www.orfonline.org/expert-speak/artificial-intelligence-and-personhood-interplay-of-agency-and-liability
[6] Nandi, A. (2024, March 4). Artificial intelligence and personhood: Interplay of agency and liability. Observer Research Foundation. https://www.orfonline.org/expert-speak/artificial-intelligence-and-personhood-interplay-of-agency-and-liability
[7] Elliot, S. (2025, January 10). Global AI trends report: Key legal issues for 2025. Dentons. https://www.dentons.com/en/insights/articles/2025/january/10/global-ai-trends-report-key-legal-issues-for-2025
[8] Mathur, A. (2023, December 8). Case review: Thaler v. Perlmutter (2023). Center for Art Law. https://itsartlaw.org/2023/12/11/case-summary-and-review-thaler-v-perlmutter/
[9] Wake Forest Law Review. (2020, February 6). Naruto v. Slater: One small step for a monkey, one giant lawsuit for animal-kind. https://www.wakeforestlawreview.com/2020/02/naruto-v-slater-one-small-step-for-a-monkey-one-giant-lawsuit-for-animal-kind/
[10] LawBhoomi. (2025, April 16). A brief overview on electronic governance. https://lawbhoomi.com/a-brief-overview-on-electronic-governance/
[11] Elets News Network. (2025, June 4). MeitY’s vision for Digital India: Empowering citizens & businesses with AI, semiconductors, & e-governance. Elets eGov. https://egov.eletsonline.com/2025/06/meitys-vision-for-digital-india-empowering-citizens-businesses-with-ai-semiconductors-e-governance/
[12] Filipova, I. A., & Koroteev, V. D. (2023, June 17). Future of artificial intelligence: Object of law or legal personality? Journal of Digital Technologies and Law. https://www.lawjournal.digital/jour/article/view/184
[13] Wang, B. (2024, September 18). Ethical reflections on the application of artificial intelligence in the construction of smart cities. Journal of Engineering. Wiley Online Library. https://onlinelibrary.wiley.com/doi/full/10.1155/2024/8207822
[14] Simon, J., Rieder, G., & Branford, J. (2024, February 27). The philosophy and ethics of AI: Conceptual, empirical, and technological investigations into values. Digital Society. SpringerLink. https://link.springer.com/article/10.1007/s44206-024-00094-2