
Analysing The Ethics And Legality Of AI In Legal Decision-Making


Authored By: Lamisha Abidin, Student, BA.LL.B, Kumaun University


I. INTRODUCTION:

Artificial Intelligence (AI) has brought about a significant shift in the constantly changing field of law, permanently altering several aspects of the legal industry. AI technologies are changing established processes and enhancing the capabilities of legal professionals in a variety of ways, from legal research to document analysis. AI is no longer just a futuristic concept but an integral force already reshaping the landscape of the legal profession, offering efficiency, accuracy, and innovative solutions.

Two decades ago, pioneers like Jay Leib[1] recognised the potential of AI to address inefficiencies within the legal profession. Leib's venture into electronic discovery with Discovery Cracker[2] in the early 2000s marked the beginning of a revolution. "We saw a gap in the market," says Leib. "Why print on so much paper? For lawyers to stay up to date, they need tools." A massive volume of data is indeed being produced: according to Forbes, 2.5 quintillion (2,500,000,000,000,000,000) bytes of data are generated daily, with 90 percent of all existing data created in the last two years[3]. Lawyers now deal with terabytes of data and hundreds of thousands of documents instead of sifting through mountains of paper, and they need a method of sorting through this material so they can present a compelling story.

Because of this wealth of data, e-discovery, legal research, and document review have become increasingly sophisticated. Platforms like LexisNexis[4], Westlaw[5], and Bloomberg Law[6] leverage machine learning algorithms to navigate vast repositories of legal documents swiftly and efficiently. This has not only accelerated the research process but also enhanced the accuracy of information extraction. These tools use natural language processing (NLP) to comprehend legal texts, enabling lawyers to identify relevant cases, statutes, and regulations with unprecedented speed and precision. By providing valuable insights into legal precedent, AI-powered research tools empower lawyers to make informed decisions, significantly reducing the time and resources traditionally invested in exhaustive manual research.

Contract analysis, a historically laborious task, has undergone a paradigm shift with the advent of AI. Tools such as Kira Systems[7] and LawGeex[8] use NLP[9] algorithms to dissect legal documents, extracting key terms and clauses with remarkable efficiency. This not only expedites the contract review process but also facilitates the identification of differences and similarities between documents, simplifying the creation of new contracts or amendments to existing ones. AI is also projected to be used in international arbitration for a wide range of tasks, including the selection of arbitrators, legal research, the writing and editing of written submissions, document translation, case management, document organisation, cost estimation, hearing arrangements (including simultaneous foreign-language interpretation or transcripts), and the drafting of standard sections of awards.

II. AI’S USES AND ADVANTAGES FOR THE LEGAL SECTOR:

II.I LEGAL RESEARCH:

Legal research is made easier by AI-powered applications like LexisNexis, Westlaw, and Bloomberg Law. Machine learning algorithms analyse legal documents and extract pertinent information, making it possible for lawyers to locate cases, statutes, and regulations more rapidly and precisely.
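To make the retrieval idea concrete, here is a minimal, hypothetical sketch that ranks a tiny corpus of case summaries against a query using TF-IDF similarity. It is only an illustration of relevance ranking; commercial platforms such as LexisNexis and Westlaw rely on far richer NLP pipelines, citation networks, and curated metadata.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mini-corpus of case summaries (illustrative only).
corpus = [
    "Bail granted where the accused had no prior convictions.",
    "Contract void for uncertainty of essential terms.",
    "Anticipatory bail denied in cases of violent assault.",
]
query = ["bail application in an assault case"]

# Vectorise the corpus and the query, then rank by cosine similarity.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)
query_vector = vectorizer.transform(query)

scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.2f}  {doc}")
```

The bail-related summaries score highest because they share terms with the query, which is the basic intuition behind similarity-based legal search.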

II.II CONTRACT ANALYSIS:

Artificial intelligence simplifies the analysis of legal documents through tools like Kira Systems and LawGeex. NLP algorithms speed up and improve the review process by identifying important terms, clauses, and discrepancies.
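As a rough illustration of the underlying idea, the sketch below extracts two clause types from a contract using hand-written patterns. The clause labels and regular expressions are hypothetical; real platforms such as Kira Systems and LawGeex use trained machine-learning models rather than fixed rules.

```python
import re

# Hypothetical clause patterns (illustrative only; real systems learn these).
CLAUSE_PATTERNS = {
    "governing_law": re.compile(r"governed by the laws of [A-Z]\w+", re.I),
    "termination": re.compile(r"terminate this agreement upon \d+ days'? notice", re.I),
}

def extract_clauses(contract_text: str) -> dict:
    """Return the first match found for each clause type."""
    found = {}
    for label, pattern in CLAUSE_PATTERNS.items():
        match = pattern.search(contract_text)
        if match:
            found[label] = match.group(0)
    return found

sample = ("This Agreement shall be governed by the laws of India. "
          "Either party may terminate this Agreement upon 30 days notice.")
print(extract_clauses(sample))
```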

II.III DOCUMENT REVIEW:

AI-driven document review systems swiftly examine large volumes of documents, identifying crucial details such as names, dates, and keywords, as well as potential problems or contradictions. This greatly cuts down on the time and expense of document review.

II.IV PREDICTIVE ANALYTICS:

Artificial intelligence (AI)-based predictive analytics programmes, such as Blue J Legal[10] and Premonition,[11] apply machine learning algorithms to case law in order to forecast case outcomes, pinpoint risks, and provide strategy recommendations, helping lawyers make well-informed decisions.
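As a toy sketch of the approach, the example below fits a logistic regression model to a handful of hypothetical case features and estimates a win probability for a new matter. The features, data, and model choice are assumptions made purely for illustration; they do not describe how Blue J Legal or Premonition actually work.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per case:
# [precedent_support_score, log_claim_amount, judge_grant_rate]
X = np.array([
    [0.8, 4.2, 0.55],
    [0.3, 5.1, 0.40],
    [0.9, 3.8, 0.60],
    [0.2, 6.0, 0.35],
])
y = np.array([1, 0, 1, 0])  # 1 = claimant won, 0 = claimant lost

model = LogisticRegression().fit(X, y)

new_case = np.array([[0.7, 4.5, 0.50]])
print("Estimated probability claimant wins:",
      round(model.predict_proba(new_case)[0, 1], 2))
```

A real system would train on thousands of decisions and validate its predictions carefully; the point here is only the shape of the pipeline: encode case facts as features, fit a model on past outcomes, and score new cases.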

III. CONSTRAINTS ON AI:

The Four V's of Big Data (Volume, Variety, Velocity, and Veracity)[12] are used below to identify these constraints.

III.I VOLUME: REQUIREMENT FOR ADEQUATE NON-CONFIDENTIAL CASE INFORMATION:

Challenge: For AI models to produce reliable forecasts, the legal industry must have access to enough data. However, several legal areas are restricted because non-parties cannot obtain confidential decisions. One potential solution is to collect confidential awards for model-building purposes and to disseminate them in redacted form.
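A minimal sketch of what redacted dissemination might look like is given below. The patterns are hypothetical stand-ins for party names and monetary amounts; production redaction would typically combine named-entity recognition with human review before any award is shared.

```python
import re

# Hypothetical redaction patterns (illustrative only).
REDACTIONS = [
    (re.compile(r"\b(?:Mr|Ms|Mrs|Dr)\.?\s+[A-Z][a-z]+"), "[PARTY]"),
    (re.compile(r"(?:USD|INR|\$)\s?[\d,]+(?:\.\d{2})?"), "[AMOUNT]"),
]

def redact(text: str) -> str:
    """Replace sensitive spans with placeholders before sharing."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

award = "The tribunal orders Mr Sharma to pay USD 250,000 to the claimant."
print(redact(award))
# -> The tribunal orders [PARTY] to pay [AMOUNT] to the claimant.
```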

III.II VARIETY: ARE REPEATABLE PATTERNS WITH BINARY OUTCOMES REQUIRED?

Challenge: The diversity of legal rulings may stem not from disparate sources or formats but from the topics those rulings cover. AI models may be unable to solve complex, non-repetitive problems.

Solution: Clearly defined output questions make the process of constructing models easier. Nevertheless, handling the variety of non-binary tasks that call for close attention to legal intricacies remains a barrier.

III.III VELOCITY: THE ISSUE WITH POLICY SHIFTS OVER TIME:

Challenge: Policy changes may occur over time, rendering prior data obsolete, and in some areas legal decisions are made infrequently. AI models may find it difficult to adjust to sudden policy shifts when trained on historical data.

Solution: While machine learning inherently involves continuous algorithmic improvement, policy changes that diverge from historical data can pose difficulties and necessitate conservative modelling approaches.
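One conservative approach, sketched below, is to weight recent decisions more heavily than older ones so the model tracks current policy rather than superseded practice. The data and the four-year half-life are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical decisions: one feature per case and a binary outcome,
# with older cases decided under a since-changed policy.
years = np.array([2015, 2017, 2019, 2021, 2023])
X = np.array([[0.2], [0.4], [0.5], [0.7], [0.9]])
y = np.array([0, 0, 1, 1, 1])

# Exponential recency weighting: an example's weight halves every
# `half_life` years (an assumed value, not an established standard).
half_life = 4.0
age = years.max() - years
weights = 0.5 ** (age / half_life)

model = LogisticRegression().fit(X, y, sample_weight=weights)
print("P(outcome=1) for feature 0.6:",
      round(model.predict_proba([[0.6]])[0, 1], 2))
```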

III.IV VERACITY: POTENTIAL FOR PREJUDICE AND DATA DIET DEFICIENCIES:

Veracity concerns the precision and reliability of the data used. AI models may inherit biases present in their training data, potentially producing unfair results and systemic errors.

IV. ETHICAL CONSIDERATIONS IN AI:

The use of AI in the legal system raises ethical and legal concerns. How, for instance, can we guarantee the accountability and transparency of AI systems? How can bias be eliminated from AI decision-making? And in a world where AI is capable of handling legal work, what will be the role of lawyers?

V. BIAS AND FAIRNESS IN AI DECISIONS:

Initially, one might believe that AI models are superior to humans because of their algorithmic objectivity and infallibility, whereas humans are vulnerable to subjectivity and non-rationality and will always make mistakes. For instance, a team of Israeli and American scholars has shed light on the significance of extraneous factors in judicial decision-making[13]. Examining over 1,100 rulings made over a ten-month period by Israeli judges, covering 40% of the nation's parole[14] requests, the research revealed that most requests are denied on average, but that the likelihood of a decision in favour of the applicant is much higher immediately after the judge's daily meal breaks. Such findings illustrate how factors unrelated to the merits of a case, such as lunch breaks, can influence human decision-making. A few scholars have therefore concluded that, since computers are impervious to cognitive biases and the undue influence of outside circumstances, AI-based decision-making would be superior to human decision-making.

But it is misguided to treat algorithmic impartiality and infallibility with unquestioning deference. Recent advances in AI research have drawn attention to the dangers of biased or misbehaving systems. Any computer model that uses data is only as good as the data it uses, and the derived model suffers from any vulnerabilities in its data diet. Specifically, the underlying data used to train an algorithm may contain human prejudices; the algorithm may then be "infected" with these biases, and may even exaggerate them by accepting them as "true" when making decisions or predicting outcomes. Research has indicated that the application of algorithms to criminal risk assessment in the United States has produced racially biased results[15]. The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system is widely used in the United States to evaluate defendants' risk of recidivism. Research on this system revealed that "black defendants were twice as likely as white defendants to be misclassified as a higher risk of violent recidivism," while "white violent recidivists were 63 percent more likely to have been misclassified as a low risk of violent recidivism, compared with black violent recidivists"[16]. This may have happened because black offenders are overrepresented in some crime statistics, leading the model to incorrectly infer a greater likelihood of recidivist behaviour from that pattern.
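The disparity ProPublica reported can be expressed as unequal error rates across groups, for example unequal false positive rates (non-recidivists wrongly flagged as high risk). The sketch below computes group-wise false positive rates on synthetic data with a deliberately biased hypothetical scorer; it uses made-up numbers, not the actual COMPAS data.

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives (did not reoffend) flagged as high risk."""
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean()

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)    # two demographic groups, 0 and 1
y_true = rng.integers(0, 2, size=n)   # 1 = actually reoffended

# Hypothetical biased scorer: flags group 1 as high risk more often,
# independent of whether the person actually reoffends.
p_flag = 0.3 + 0.25 * group + 0.3 * y_true
y_pred = (rng.random(n) < p_flag).astype(int)

for g in (0, 1):
    mask = group == g
    print(f"Group {g} false positive rate: "
          f"{false_positive_rate(y_true[mask], y_pred[mask]):.2f}")
```

Auditing a deployed risk tool in this way, comparing error rates across groups rather than overall accuracy alone, is one standard method of detecting the kind of bias described above.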

VI. CHALLENGES AND OPPORTUNITIES IN THE INDIAN CONTEXT:

VI.I OPPORTUNITIES:

A major opportunity for legal innovation is the incorporation of Artificial Intelligence (AI) technology into the operations of prominent Indian law firms. One of the top full-service legal firms in India, Cyril Amarchand Mangaldas[17], has made a groundbreaking move by partnering with the well-known Canadian machine learning software vendor, Kira Systems[18]. Through this strategic partnership, Cyril Amarchand Mangaldas will be the first law firm in India to use this cutting-edge technology, which will mark a significant development in the legal field.

Kira Systems' software uses artificial intelligence to recognise, evaluate, and extract relevant clauses and other data from a variety of legal documents, including contracts. Thanks to this software, the firm can now deliver specialised legal services to its clients with greater efficiency, speed, and accuracy, representing a paradigm shift in the way legal services are delivered.

The Jaswinder Singh v. State of Punjab and Anr. (2022)[19] case offered a significant opportunity to integrate AI technologies into the legal system. The Punjab and Haryana High Court sought input from ChatGPT[20], an AI-powered application, while hearing a bail plea arising from grave accusations of a violent and fatal assault. The application of artificial intelligence in legal procedures, particularly for providing data and perspectives on case-related issues, represents noteworthy progress in using technology to support the judiciary. As an AI tool, ChatGPT maintains objectivity and abstains from voicing opinions or rendering judgements; rather, it serves as a valuable resource, offering details on the particular subjects or inquiries directed at it. Its use in this context highlights how technology can improve legal research, ease access to pertinent information, and offer the court new perspectives.

VI.II CHALLENGES:

Traditional conceptualisations of algorithmic fairness, in their subjects, principles, and methodology, appear intrinsically Western-centric. In the Indian setting, socio-economic considerations give rise to data-reliability difficulties. Important data points are frequently missing in India owing to social infrastructures and structural inequities, and digital divides compound this shortfall, causing errors and sustaining residual injustice. Notably, entire populations are either misrepresented in or absent from datasets, a critical issue evident in many studies. The startling fact that half of India's population does not have access to the Internet, with women[21], rural communities[22], and Adivasis[23] being the main excluded groups, contributes significantly to this data gap. Datasets obtained from internet-connected sources may therefore unintentionally leave out a significant section of the population. Furthermore, India is a relative newcomer to 4G mobile data, so its data footprint is still extremely limited and biased towards issues facing the upper middle class. Given the significant disconnect between the implemented models and the underprivileged communities they are intended to assist, merely localising model fairness to India may be a shallow solution.

The recent controversy involving Gemini AI's response to a question regarding the political standing of Prime Minister Narendra Modi is an excellent illustration of the complicated relationship in the Indian setting between artificial intelligence, ethical issues, and legal implications[24]. The incident, in which Gemini AI said that Mr. Modi was implementing policies that have been characterised as fascist, has sparked a debate about the ethical bounds of AI and the misinformation associated with it. Rajeev Chandrasekhar[25], the Minister of State for Electronics and Information Technology, has stated firmly that such responses violate various provisions of the criminal law and Rule 3(1)(b) of the IT Rules, 2021[26]. This highlights the difficulties posed by hastily deployed AI models in a politically sensitive setting, leading the government to stress the importance of responsible AI practices.

VII. LEGAL FRAMEWORKS FOR AI ETHICS:

India currently lacks a comprehensive legal framework for Artificial Intelligence (AI). However, recognising the need for rules and regulations in this quickly changing field, the Indian government charged NITI Aayog[27], its premier public policy think tank, with developing guiding principles. The National Strategy for Artificial Intelligence (#AIForAll), published by NITI Aayog in 2018[28], lays out criteria for AI research and development specific to sectors including smart cities, infrastructure, healthcare, education, and agriculture. NITI Aayog expanded on this by releasing the "Principles for Responsible AI"[29] in February 2021, which divided ethical considerations into systemic and societal issues: while societal concerns examine the effects of automation on employment and job creation, systemic considerations address principles of decision-making, the proper involvement of beneficiaries, and accountability. Then, in August 2021, NITI Aayog published "Operationalizing Principles for Responsible AI"[30], which highlighted practical steps for the public and private sectors, including capacity building, regulatory and policy interventions, encouraging ethics by design, and developing frameworks that comply with AI standards.

The Digital Personal Data Protection Act, 2023[31], which also addresses privacy issues raised by AI platforms, is another example of India's proactive approach to AI regulation. On the international front, India is part of the Global Partnership on Artificial Intelligence (GPAI)[32]. The 2023 GPAI Summit in New Delhi[33] showcased the work of AI experts in the fields of data governance, responsible AI, and the future of work; in line with the OECD[34] AI Principles, this cooperative project seeks to incorporate these deliverables into national strategies. In addition, Indian bodies such as the Bureau of Indian Standards and the Ministry of Electronics and Information Technology are actively developing draft standards[35] and publishing studies addressing the ethical, safety, and developmental issues related to AI.

Moreover, according to Rajeev Chandrasekhar, Union Minister of State for Jal Shakti, Electronics and Information Technology, and Skill Development and Entrepreneurship, the first draft of India's Artificial Intelligence (AI) rules framework is expected to be unveiled in June or July of this year[36]. Chandrasekhar has stated that the government is keen to use AI to boost the economy, with a special emphasis on healthcare, agriculture, and other sectors. After the recent Gemini controversy, the Ministry of Electronics and Information Technology (MeitY) released an AI advisory[37] on March 1, 2024, marking a significant change in the government's stance on AI policy. The advisory focuses on generative AI technologies, including Google's Gemini and large language models such as ChatGPT, and requires that models deemed "under-testing" or "unreliable" receive approval from the Indian government. The advisory appears to be the government's reaction to the widely shared answer from Google's Gemini chatbot to a question concerning Prime Minister Narendra Modi, and it invokes Rule 3(1)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021[38] in an attempt to ensure that AI models abide by existing legal obligations.

Cite this article as:

Lamisha Abidin, “Analysing The Ethics And Legality Of AI In Legal Decision-Making”, Vol. 6 & Issue 1, Law Audience Journal (e-ISSN: 2581-6705), Pages 53 to 62 (7th June 2025), available at https://www.lawaudience.com/analysing-the-ethics-and-legality-of-ai-in-legal-decision-making/.

Footnotes & References:

[1] Jay Leib is a legal technology entrepreneur and one of the co-founders of NexLP, a company specializing in using AI and machine learning for legal and investigative solutions.

[2] Sobowale, J. (2016) How artificial intelligence is transforming the legal profession, ABA Journal. Available at: https://www.abajournal.com/magazine/article/how_artificial_intelligence_is_transforming_the_legal_profession

[3] Marr, B. (2018) How much data do we create every day? the mind-blowing stats everyone should read, Forbes. Available at: https://www.forbes.com/sites/bernardmarr/2018/05/21/how-much-data-do-we-create-every-day-the-mind-blowing-stats-everyone-should-read/?sh=3db2157660ba

[4] LexisNexis is a provider of legal research, analytics, and data services.

[5] Westlaw is an online legal research service from Thomson Reuters.

[6] Bloomberg Law is a legal technology platform that provides legal professionals with tools to help them grow their business and advise their clients.

[7] Kira Systems is machine learning software that helps companies analyze and search contracts and documents.

[8] LawGeex is a contract review automation platform that uses AI-powered automation to review and redline contracts.

[9] Natural language processing (NLP) is a component of artificial intelligence (AI) that allows a computer program to understand human language.

[10] Blue J Legal is a legal technology company that offers AI-powered solutions tailored for tax and legal professionals.

[11] Premonition refers to an AI system developed by Premonition.ai that leverages big data to predict legal outcomes, particularly in litigation.

[12] Marr, B. (2021) What are the 4 vs of Big Data?, Bernard Marr. Available at: https://bernardmarr.com/what-are-the-4-vs-of-big-data/

[13] Danziger, S., Levav, J. and Avnaim-Pesso, L. (2011) Extraneous factors in judicial decisions, Proceedings of the National Academy of Sciences, 108(17), pp. 6889-6892.

[14] Parole is the conditional release of a prisoner before the completion of the maximum sentence period, subject to the prisoner's agreement to certain conditions.

[15] Angwin, J. et al. (2016) Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks, ProPublica (23 May 2016). Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[16] Larson, J. et al. (2016) How we analyzed the COMPAS recidivism algorithm, ProPublica. Available at: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm (Accessed: 12 March 2024).

[17] Cyril Amarchand Mangaldas is India's largest full-service law firm, with over 1,000 lawyers, including 170 partners.

[18] KIRA SYSTEMS (2024) Cyril Amarchand Mangaldas is India’s first law firm to embrace artificial intelligence technology as part of Legal Innovation, Kira Systems. Available at: https://kirasystems.com/company-announcements/cyril-amarchand-mangaldas-is-indias-first-law-firm-to-embrace-artificial-intelligence-technology-as-part-of-legal-innovation/ (Accessed: 12 March 2024).

[19] Jaswinder Singh v. State of Punjab and Anr. (CRM-M-22496-2022), LiveLaw. Available at: https://www.livelaw.in/pdf_upload/jaswinder-singh-jassi-vs-state-of-punjab-and-another-punjab-and-haryana-high-court-465630.pdf

[20] https://chat.openai.com/chat.

[21] Jain, M. (2016) India's internet population is exploding but women are not logging in, Scroll.in. Available at: https://scroll.in/article/816892/indias-internet-population-is-exploding-but-women-are-not-logging-in (Accessed: 12 March 2024).

[22] Pandey, K. (2020) Covid-19 lockdown highlights India’s Great Digital Divide, Down To Earth. Available at: https://www.downtoearth.org.in/news/governance/covid-19-lockdown-highlights-india-s-great-digital-divide-72514 (Accessed: 12 March 2024).

[23] Anant Kamath and Vinay Kumar (2017a) In India, accessible phones lead to inaccessible opportunities, The Wire. Available at: https://thewire.in/caste/india-accessible-phones-still-lead-inaccessible-opportunities (Accessed: 12 March 2024).

[24] The Hindu Bureau (2024) Gemini AI's reply to query 'Is Modi a fascist' violates IT Rules: Union Minister Rajeev Chandrasekhar, The Hindu. Available at: https://www.thehindu.com/news/national/netizens-allege-bias-in-google-ai-tools-response-on-pm-modi-i-t-ministry-sees-rules-violation/article67877974.ece (Accessed: 12 March 2024).

[25] Minister of State | Ministry of Electronics and Information Technology. Available at: https://www.meity.gov.in/content/minister-state (Accessed: 12 March 2024).

[26] The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021

[27] NITI Aayog (National Institution for Transforming India) is the Government of India's premier public policy think tank.

[28] NITI Aayog (2018) National Strategy for Artificial Intelligence. Available at: https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf (Accessed: 12 March 2024).

[29] NITI Aayog (2021) Responsible AI #AIForAll. Available at: https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf (Accessed: 12 March 2024).

[30] NITI Aayog (2021) Operationalizing Principles for Responsible AI. Available at: https://www.niti.gov.in/sites/default/files/2021-08/Part2-Responsible-AI-12082021.pdf

[31] The Digital Personal Data Protection Act, 2023.

[32] The Global Partnership on Artificial Intelligence is an international initiative established to guide the responsible development and use of artificial intelligence in a manner that respects human rights and the shared democratic values of its members.

[33] (2023) Three-day GPAI Summit concluded today at Bharat Mandapam, Press Information Bureau. Available at: https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1986475

[34] OECD AI Principles overview

[35] (2024) AI Regulation in India: Current State and Future Perspectives, Morgan Lewis. Available at: https://www.morganlewis.com/blogs/sourcingatmorganlewis/2024/01/ai-regulation-in-india-current-state-and-future-perspectives (Accessed: 12 March 2024).

[36] Shivangini (2024) India to come up with AI regulations framework by June-July this year: Report, Mint. Available at: https://www.livemint.com/ai/artificial-intelligence/india-to-come-up-with-ai-regulations-framework-by-june-july-this-year-rajeev-chandrasekhar-msde-11708409300377.html (Accessed: 12 March 2024).

[37] MeitY Advisory No. 2(4)/2023-CyberLaws-3. Available at: https://regmedia.co.uk/2024/03/04/meity_ai_advisory_1_march.pdf

[38] The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
