Authored By: Komal Kumari (LL.M), Galgotias University.
ABSTRACT
The growing implementation of Artificial Intelligence (AI) in India’s criminal justice system marks a significant shift in the drafting, recording, and processing of First Information Reports (FIRs). Utilising technologies such as Natural Language Processing, voice-to-text input, and automatic legal section suggestions, AI-generated FIRs offer enhanced efficiency, precision, and linguistic flexibility. Yet this innovation raises serious questions about constitutional protections, procedural due process, and institutional accountability. This research rigorously analyses the legal and administrative ramifications of AI-generated FIRs within the Indian framework. It explores the technological capabilities of existing AI systems, assesses their compatibility with the procedural requirements under Indian law[1], and analyses their potential to impact fundamental rights under Articles 14 and 21 of the Constitution.[2] The paper assesses judicial remedies, statutory frameworks (including the Bharatiya Sakshya Adhiniyam, 2023), and international best practices regarding algorithmic accountability and oversight from a doctrinal and comparative perspective. The paper argues that while AI can support routine administrative functions, its unchecked deployment in core legal processes—such as FIR registration—poses significant risks of bias, opacity, and due process violations. The research advocates for a balanced approach that maintains human control, requires algorithmic transparency, and enforces legal accountability, ensuring that AI helps rather than undermines the integrity of India’s criminal justice system.
I. INTRODUCTION:
I.I CONTEXT:
The First Information Report (FIR) constitutes the preliminary formal procedure in India’s criminal justice system. It is a document created by the police that initiates the investigative process and serves as the foundation for ensuing judicial actions. The importance of FIRs lies in their evidential value and their role as a procedural safeguard, promoting transparency, accountability, and prompt action in criminal cases. The correctness and legitimacy of FIRs are essential due to their critical role in safeguarding the rights of both the complainant and the accused.[3] Historically, FIRs have been composed manually by police personnel, frequently under time constraints and in linguistically or legally intricate contexts. This has resulted in numerous problems, including procedural deficiencies, inconsistent application of legislative rules, and accusations of biased or misleading reporting. In response to these challenges, police agencies in India have begun experimenting with Artificial Intelligence (AI)-based solutions to improve the efficiency, uniformity, and objectivity of FIR drafting.[4]
I.II EMERGING TREND: AI-GENERATED FIRS:
In recent years, AI-driven technologies, including Natural Language Processing (NLP), voice-to-text conversion, and legal section recommendation algorithms, have seen heightened use in administrative and legal operations. In states such as Madhya Pradesh, pilot initiatives have implemented AI-generated FIR systems, wherein law enforcement officials input verbal complaints into AI interfaces that transcribe, analyse, and suggest legal sections and formats based on pre-trained datasets.[5] Although these advancements represent a substantial progression in the digitisation and modernisation of police operations, they also elicit a range of intricate legal, ethical, and constitutional issues. The utilisation of AI in a discretionary and legally sensitive function—documenting a cognisable offence—requires a thorough analysis of its legality, accountability, and effects on basic rights.
I.III PRINCIPAL RESEARCH ENQUIRIES:
This work aims to investigate the subsequent fundamental research enquiries:
What is the effect of AI-generated FIRs on procedural accuracy and fundamental rights?
Does automation undermine or enhance the authenticity and impartiality of the FIR process?
Do dangers of privacy invasion, algorithmic prejudice, or violations of due process exist?
What are the legal authority and accountability limitations when law enforcement utilises AI tools?
Is there a definitive legal foundation for employing AI in the composition of FIRs?
Who is accountable for mistakes—law enforcement officials, developers, or governmental entities?
What statutory, judicial, and administrative safeguards are necessary to guarantee that the integration of AI into FIR processes complies with constitutional and legal standards?
Should a structure for oversight, standardisation, or judicial review be established?
Can the ideals of responsible AI and human-in-the-loop decision-making be institutionalised?
II. LEGAL AND TECHNOLOGICAL CONTEXT:
II.I FIRS UNDER INDIAN LAW:
The First Information Report (FIR) is a statutory instrument under Section 154 of the Code of Criminal Procedure, 1973 (CrPC), which requires the police to record information relating to a cognizable offence. The legal relevance of a FIR resides in its role as the official record of an alleged crime, which not only prompts investigation but also influences arrest, bail, and prosecution choices. Therefore, specificity in details—such as the nature of the offence, time, place, gender of the victim, and the applicable statutory sections—is crucial. Even slight flaws in the wording of FIRs might lead to procedural invalidity, unlawful arrest, or miscarriage of justice.[6] With the advent of the Bharatiya Nagarik Suraksha Sanhita, 2023 (BNSS)—which is poised to replace the CrPC—there is rising emphasis on the electronic registration of FIRs, including provisions for digital complaint portals and video-recorded statements.[7] Nevertheless, the Act fails to explicitly address AI-generated FIRs, leaving AI-drafted documents in an uncertain legal position and open to contestation.
II.II AI COMPETENCIES AND IMPLEMENTATION:
The implementation of Artificial Intelligence in law enforcement in India is no longer hypothetical—it is a current and growing reality. Artificial Intelligence systems that incorporate Natural Language Processing (NLP), speech-to-text technology, and legal taxonomy databases are utilised to assist in the documentation of criminal complaints, particularly First Information Reports (FIRs).[8] AI-based platforms and other proprietary software accept voice inputs from complainants, which are subsequently auto-transcribed and legally formatted by trained algorithms. These systems propose pertinent IPC sections, automatically populate template-based narrative frameworks, and facilitate linguistic translation—particularly important in a multilingual country. State police agencies are implementing extensive AI technologies for administrative and investigative purposes. Trinetra in Uttar Pradesh employs facial recognition and crime forecasting technologies.[9] The Punjab Artificial Intelligence System (PAIS) amalgamates criminal datasets with AI-enhanced profiling.[10] Delhi’s Automated Facial Recognition System (AFRS) employs artificial intelligence for suspect identification.[11]
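The pipeline just described (voice input, auto-transcription, section suggestion, template population) can be illustrated with a deliberately minimal sketch. The keyword-to-section mapping, function names, and status label below are hypothetical illustrations for exposition only, not the logic of any deployed system; real systems use trained NLP models rather than keyword lookup.

```python
# Illustrative sketch only: a toy keyword-based legal-section suggester.
# The mapping and field names are hypothetical, not any deployed system's logic.

SECTION_KEYWORDS = {
    "theft": "IPC s. 378/379",
    "assault": "IPC s. 351/352",
    "cheating": "IPC s. 420",
    "trespass": "IPC s. 441/447",
}

def suggest_sections(complaint_text: str) -> list[str]:
    """Return candidate penal sections whose keywords appear in the complaint."""
    text = complaint_text.lower()
    return [section for keyword, section in SECTION_KEYWORDS.items() if keyword in text]

def draft_fir(complaint_text: str, complainant: str, station: str) -> dict:
    """Populate a template FIR record; a human officer must still review it."""
    return {
        "complainant": complainant,
        "police_station": station,
        "narrative": complaint_text,
        "suggested_sections": suggest_sections(complaint_text),
        "status": "DRAFT_PENDING_HUMAN_REVIEW",  # never auto-registered
    }
```

Even in this toy form, the design point is visible: the system only suggests sections and assembles a draft; registration remains a separate human act.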
Artificial intelligence is being utilised not just for post-FIR investigations but also during the registration and early processing phases, prompting significant legal and ethical enquiries.
II.III LEGAL FRAMEWORK AND CONSTITUTIONAL RIGHTS:
The application of AI in FIR drafting and wider police operations implicates various constitutional protections. Article 14 (Right to Equality): AI algorithms must not produce discriminatory results, particularly against socio-economically disadvantaged groups.[12] Article 21 (Right to Life and Personal Liberty): Encompasses the right to privacy (Justice K.S. Puttaswamy v. Union of India), requiring meticulous data management and surveillance safeguards.[13] Article 22 safeguards against arbitrary detention, which may arise from an AI-generated FIR based on erroneous data or prejudice. Furthermore, the Bharatiya Sakshya Adhiniyam, 2023, which supersedes the Indian Evidence Act, acknowledges digital and electronic documents as admissible evidence, encompassing AI-generated information. It requires that these documents be genuine, verifiable, and produced through a transparent method.[14] Currently, there are no explicit regulations addressing machine-generated legal documents, such as FIRs, resulting in a regulatory void. Courts have expressed scepticism over the complete integration of AI. In 2023, the Delhi High Court prohibited the utilisation of AI for binding judicial decisions, citing concerns around opacity and the infringement of rights. Conversely, the Madhya Pradesh High Court has commenced restricted pilot programs employing AI for sign-language interpretation and the transcription of crime-scene videos, while preserving human judicial judgement.
III. ADMINISTRATIVE DECISION-MAKING AND LAW ENFORCEMENT FUNCTIONS:
III.I THE FUNCTION OF AI IN FIR COMPOSITION:
The integration of AI tools into the FIR drafting process signifies a pivotal transformation in administrative decision-making within the police framework. AI legal assistants, such as those created by TopView.ai, now provide real-time legal support by analysing voice inputs or written declarations and automatically suggesting relevant provisions from the Indian Penal Code (IPC) or special laws. These systems may correlate event accounts with precedent databases, identify missing legal elements, and suggest appropriate procedural steps based on standardised police manuals and judicial interpretations.
The pragmatic benefits of employing AI in FIR preparation encompass:
Mitigation of Human Error: Spelling errors, erroneous legal references, or insufficient factual accounts—prevalent in manually composed FIRs—are reduced.
Operational Efficiency: The time required for drafting is markedly reduced, which is particularly crucial in resource-limited police stations managing substantial caseloads.
Multilingual Support: AI systems with language models facilitate translation between regional languages and English or Hindi, improving accessibility and uniformity across jurisdictions.
Standardisation: AI-generated templates facilitate the unification of FIR structures and guarantee adherence to procedural requirements.
Nonetheless, despite these benefits, the assignment of a quasi-legal role to algorithms engenders significant apprehensions over legality, equity, and discretion—especially when AI transcends auxiliary support and assumes the role of the primary drafter.[15]
III.II RISKS AND OPERATIONAL CHALLENGES:
a) Algorithmic Bias and Discrimination:
Artificial intelligence systems employed in First Information Report generation depend on past crime data, pre-trained models, and pattern recognition techniques. Nonetheless, these data sources may contain inherent institutional bias, such as the over-policing of some areas or distorted depiction of certain crimes. Consequently, AI-generated FIRs may mirror or exacerbate unfair profiling and misinterpretation of offences, negatively impacting marginalised communities. If AI models are mostly trained on urban data, they may misread rural languages or culturally particular grievances. Likewise, grievances concerning women or minorities may be inadequately highlighted if the training data lacked equitable representation. Academics have warned that data-driven inaccuracies may result in erroneous registration, underreporting, or denial of protection.
b) Privacy and Surveillance:
AI-enhanced police operations frequently encompass automated data collection, such as audio recordings, biometric data, and real-time surveillance integration (e.g., facial recognition, geotagging).[16] Systems such as Delhi’s Automated Facial Recognition System (AFRS) and predictive policing instruments in Uttar Pradesh and Punjab pose significant issues regarding India’s privacy jurisprudence following K.S. Puttaswamy vs. Union of India (2017). Currently, there is no specific data protection legislation in effect (awaiting the implementation of the Digital Personal Data Protection Act, 2023), which exposes sensitive personal information gathered through AI systems to potential misuse, breaches, or unauthorised profiling.[17] The lack of transparency in algorithmic decision-making procedures compromises procedural fairness and an individual’s right to challenge automated decisions.
c) Accountability and Legal Liability:
A primary difficulty with AI-generated FIRs is the dispersal of accountability. When a FIR is filed on the basis of AI-generated content that misrepresents facts or cites erroneous legal provisions, it remains unclear:
Whether liability rests with the police officer who merely reviewed or signed the document;
How responsibility is apportioned between the developer and the AI service provider;
Or whether the state, as the deploying authority, bears accountability under public law.
India’s existing criminal procedural and civil liability systems do not recognise AI as an entity capable of committing legal wrongs. The absence of legislative clarity about AI responsibility, data chain of custody, and procedural audits results in a legal void. In the absence of a comprehensive legislative framework, AI-generated FIRs exist in a nebulous area, subjecting both individuals and law enforcement to considerable legal vulnerabilities.
IV. LEGAL AND JUDICIAL REACTIONS:
IV.I LEGAL FRAMEWORK:
Despite the growing incorporation of Artificial Intelligence into public administration, India presently lacks a specific legislative framework regulating the application of AI in essential state tasks, such as policing and criminal justice. The current policy landscape is predominantly influenced by soft law instruments and nascent statutes that only marginally pertain to AI implementation. The NITI Aayog’s “#AIforAll” campaign (2020) and the Responsible AI for Social Empowerment (RAISE) Strategy (2021) delineate a vision for the ethical implementation of AI in governance, highlighting transparency, accountability, fairness, and human oversight. Nonetheless, these documents provide non-binding legislative frameworks, lacking formalised enforcement mechanisms or regulatory control for AI systems employed in law enforcement contexts. The Bharatiya Sakshya Adhiniyam, 2023, which supersedes the Indian Evidence Act, 1872, allows for the acceptance of AI-generated or processed electronic documents as evidence.
Nonetheless, these documents must adhere to rigorous authentication standards, which include:
Source verifiability;
Consistency of metadata;
Chain of custody;
Adherence to established protocols.
The evidentiary value of AI-generated FIRs may be contested unless comprehensive documentation of the AI’s input processing, legal reasoning application, and output generation is preserved. This necessitates algorithmic transparency, which most proprietary systems currently lack. Consequently, although the statutory framework is progressing, it remains inadequately established to thoroughly govern the application of AI in the initiation of criminal actions.
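One way to meet the metadata-consistency and chain-of-custody requirements above is a tamper-evident audit trail, in which each processing step is hash-chained to the previous one so that any later alteration of an earlier record becomes detectable. The sketch below, with hypothetical event and field names, is an illustrative assumption about how such a trail might be kept, not a description of any existing system.

```python
# Illustrative sketch: a tamper-evident audit trail for an AI-assisted FIR.
# Each entry's hash covers the previous entry's hash, forming a chain.
# Event and field names are hypothetical.
import hashlib
import json

def append_entry(trail: list, event: str, payload: dict) -> list:
    """Append an audit entry whose hash covers the previous entry's hash."""
    prev_hash = trail[-1]["hash"] if trail else "GENESIS"
    body = {"event": event, "payload": payload, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    trail.append(body)
    return trail

def verify_trail(trail: list) -> bool:
    """Recompute every hash; any edit to an earlier entry breaks the chain."""
    prev_hash = "GENESIS"
    for entry in trail:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected or body["prev_hash"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True
```

A court or auditor could then confirm that the transcription, section suggestion, and officer sign-off recorded in the trail occurred in that order and were not retrospectively edited.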
IV.II JUDICIAL PRECEDENTS:
Indian courts have adopted a prudent and gradual approach to the integration of AI in judicial and quasi-judicial operations. Two significant High Court rulings exemplify the present judicial stance:
The Delhi High Court (2023) prohibited AI tools from rendering conclusive legal decisions in a case concerning AI-driven predictive legal analysis, underscoring that legal reasoning and judicial discretion must remain the purview of human agents. The court voiced apprehension about the opacity, lack of accountability, and potential prejudice intrinsic to algorithmic systems, especially when employed to affect criminal liability or evidence evaluation. This decision effectively forbids automatic FIR registration without human verification, underlining the importance of human judgement in criminal proceedings.
The Madhya Pradesh High Court (2023) took a pragmatic and experimental approach by allowing the utilisation of AI tools for non-decisional auxiliary functions, including sign-language translation for speech- and hearing-impaired complainants and video-to-text documentation of crime scenes. The Court underscored that such usage is supportive rather than conclusive and hence does not contravene constitutional standards if adequately monitored by law enforcement officials. Collectively, these rulings signify an emerging judicial philosophy of proportionality concerning the use of AI in law enforcement: supportive functions are permissible, but decision-making power must be retained by human agents. Judicial bodies have not explicitly adjudicated the legitimacy of AI-generated FIRs; however, the current trend suggests that such practices necessitate stringent control, auditability, and human validation to withstand judicial examination.
V. COMPARATIVE & INTERNATIONAL INSIGHTS:
Across jurisdictions, the integration of AI into criminal justice systems—particularly in functions like predictive policing, risk assessment, and incident reporting—has prompted serious debate over legality, fairness, and democratic accountability. These international experiences offer valuable comparative lessons for India as it begins to experiment with AI-generated FIRs.
V.I GLOBAL CONCERNS: PREDICTIVE POLICING IN THE UNITED STATES:
In the United States, predictive policing tools such as PredPol and HunchLab were designed to forecast crime-prone locations or repeat offenders based on historical crime data. However, civil rights advocates and researchers soon flagged that these tools replicated existing biases in law enforcement data, disproportionately targeting racial and ethnic minorities. Reports by organizations such as the ACLU and the Electronic Frontier Foundation highlighted:
Lack of transparency in algorithms (black-box models);
Absence of oversight and audit mechanisms;
Violation of constitutional rights such as the Fourth and Fourteenth Amendments.
These critiques led several jurisdictions, including Los Angeles and Oakland, to discontinue or scale back their use of predictive policing systems. Crucially, courts and regulatory bodies in these regions began insisting on algorithmic accountability, public disclosures, and the requirement that final decisions be made by human officials, not AI systems. Similar concerns have arisen in Europe, where the EU’s AI Act (in draft form as of 2023) seeks to classify AI applications in law enforcement as high-risk, mandating rigorous compliance obligations, including:
Ex ante risk assessments;
Human-in-the-loop control;
Rights to contest algorithmic decisions.
V.II LESSONS FOR INDIA:
These international experiences highlight the urgent need for India to adopt a preventive regulatory posture as it ventures into AI-enabled FIR systems. Key lessons include:
Human-in-the-Loop Mandate: AI tools must be strictly advisory, not autonomous. All FIRs must be verified and signed off by a responsible police officer, preserving legal accountability.
Algorithmic Transparency: Public agencies should disclose the logic, data sets, and testing methodology behind any AI tools used in FIR drafting, subject to audit by independent bodies.
Independent Oversight: A regulatory mechanism—either through the Data Protection Board of India (once operational under the Digital Personal Data Protection Act, 2023) or a sectoral regulator—should oversee AI use in law enforcement, including periodic compliance reviews.
Right to Contest and Review: Citizens must have the right to challenge AI-generated or AI-influenced FIRs and demand human re-evaluation of any erroneous or discriminatory outputs.
Without such safeguards, AI-generated FIRs in India risk replicating structural biases, compromising due process, and undermining public trust in the criminal justice system. This section provides a comparative foundation for advocating legally sound, ethically aligned, and transparent AI governance models in India.
VI. PROPOSED SAFEGUARDS & POLICY FRAMEWORK:
To guarantee accountability, equity, and legal legitimacy in the incorporation of Artificial Intelligence in First Information Report (FIR) generation, the following precautions and legislative actions are recommended:
VI.I LEGISLATIVE MODIFICATIONS:
Revise the Bharatiya Nagarik Suraksha Sanhita, 2023 (BNSS) and associated FIR regulations to incorporate obligatory audit trails, procedural transparency, and organised appellate processes for AI-generated FIRs. This would establish a legislative basis for supervision and remediation.
VI.II PROTOCOLS FOR HUMAN OVERSIGHT:
AI-generated FIRs require human verification. Law enforcement officials must be mandated to examine, authenticate, and officially endorse every First Information Report launched or aided by artificial intelligence systems. This guarantees accountability and reduces the dangers associated with automated bias.
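The verification protocol above can also be enforced at the software level: a registration gate that refuses to finalise a draft without a named officer's explicit, recorded endorsement. The class and function below are a hypothetical sketch of such a human-in-the-loop control, not a specification of any deployed workflow.

```python
# Illustrative sketch of a human-in-the-loop gate: an AI draft cannot become
# a registered FIR without a named officer's explicit, recorded approval.
# Class and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class DraftFIR:
    narrative: str
    suggested_sections: list
    registered: bool = False
    reviewing_officer: str = ""

def register_fir(draft: DraftFIR, officer_id: str, approved: bool) -> DraftFIR:
    """Only a named human officer's approval can register the FIR."""
    if not officer_id:
        raise ValueError("A responsible officer must be identified")
    if not approved:
        raise ValueError("FIR rejected at human review; cannot be registered")
    draft.registered = True
    draft.reviewing_officer = officer_id  # preserves legal accountability
    return draft
```

Recording the officer's identity at the moment of registration keeps accountability attached to a human agent, which is the central point of this safeguard.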
VI.III STANDARDS FOR DATA GOVERNANCE:
Develop comprehensive frameworks that delineate:
Standards for data quality;
Regular assessments of bias and fairness;
Data retention and deletion regulations are essential to guarantee that AI systems function on legally admissible and ethically appropriate data.
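A regular bias and fairness assessment of the kind listed above could, for example, compare how often the system flags complaints from different demographic groups. The sketch below uses a simple ratio test; the field names and the 1.25 cutoff are illustrative assumptions, not established audit standards.

```python
# Illustrative sketch of a periodic fairness check: compare the rate at which
# an AI tool flags complaints across two demographic groups.
# Field names and the threshold are assumptions for exposition only.

def flag_rate(records: list, group: str) -> float:
    """Fraction of a group's complaints that the model flagged."""
    subset = [r for r in records if r["group"] == group]
    if not subset:
        return 0.0
    return sum(r["flagged"] for r in subset) / len(subset)

def disparity_alert(records: list, group_a: str, group_b: str,
                    max_ratio: float = 1.25) -> bool:
    """True if one group is flagged disproportionately often (ratio test)."""
    rate_a, rate_b = flag_rate(records, group_a), flag_rate(records, group_b)
    if min(rate_a, rate_b) == 0:
        return max(rate_a, rate_b) > 0  # one-sided flagging is itself an alert
    return max(rate_a, rate_b) / min(rate_a, rate_b) > max_ratio
```

An oversight body could run such checks on anonymised logs at fixed intervals and require retraining or suspension of the tool when an alert fires.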
VI.IV JUDICIAL PROTOCOLS:
Drawing from cases like Anvar P.V. v. P.K. Basheer, courts are required to establish precise evidentiary protocols for AI-assisted FIRs. This may encompass authentication methods, metadata verification, and affidavit stipulations to guarantee admissibility and integrity.[18]
VI.V INSTRUCTION AND SKILL DEVELOPMENT:
Create mandatory training modules for law enforcement officials concentrating on:
Ethics and accountability in artificial intelligence;
Constitutional protections and human rights;
Regulatory constraints on automated decision-making.
This would encourage appropriate utilisation and mitigate the potential for misuse or excessive dependence on AI.
VII. EMPIRICAL ELEMENT:
An empirical inquiry may be done to assess the practical implications of AI integration in FIR registration and criminal justice processes, supplementing the normative and doctrinal study. This element may comprise:
Interviews with Law Enforcement Officials
Conduct semi-structured interviews with police officers and technical personnel currently involved in the utilisation or assessment of AI-assisted FIR tools. Principal domains of investigation may encompass:
Pragmatic obstacles in execution;
Perceived precision and dependability of AI-generated FIRs;
Concerns pertaining to legal compliance, bias, or misuse;
Degree of human supervision and ethical consciousness.
VII.I CASE STUDY METHODOLOGY:
Investigate a particular jurisdiction where AI FIR technologies are being piloted or tested. For instance: Examine a state-sponsored pilot program of an AI-driven FIR assistant within a state police force, concentrating on institutional reactions, procedural assimilation, and results. Examine the current assessment or consultation process undertaken by the Madhya Pradesh High Court regarding the admissibility, legality, and operational facets of AI-generated FIRs. The research may examine court records, policy documents, and stakeholder perspectives to evaluate the judiciary’s position.
Such empirical data can yield informed insights into the viability, deficiencies, and regulatory requirements of AI-driven legal technologies.
VIII. CONCLUSION:
The incorporation of Artificial Intelligence in the preparation of First Information Reports (FIRs) signifies a notable advancement in India’s criminal justice system. Artificial intelligence has distinct advantages—improved efficiency, language versatility, procedural uniformity, and the potential for diminished errors in criminal documentation. Particularly in resource-constrained settings, such tools can enhance the capabilities of law enforcement and facilitate the first phases of criminal investigations. Nonetheless, these advantages must not undermine constitutional protections, procedural equity, or democratic responsibility. FIRs are not simply administrative records; they set criminal proceedings in motion and activate the state’s coercive apparatus. Consequently, their generation must rigorously comply with the norms of due process, non-discrimination, and transparency. This research demonstrates that the existing legal structure in India is not prepared to tackle the complications posed by AI-generated FIRs. The Bharatiya Sakshya Adhiniyam (2023) recognises the admissibility of AI-generated evidence; yet significant concerns around responsibility, privacy, and algorithmic bias remain unresolved. Judicial trends indicate a cautious receptiveness to AI in non-decisional roles, while firmly emphasising the necessity of human oversight for fundamental legal determinations. An equitable regulatory strategy is necessary. AI solutions ought to serve as assisting technology, augmenting police documentation while requiring obligatory human validation, audit trails, and independent oversight. India should implement worldwide best practices, encompassing the right to explanation, algorithmic transparency, and substantial remedies for AI-induced errors. The primary objective should be to guarantee that AI implementation in FIR generation bolsters legal integrity, rather than compromising it.
Judicial bodies, legislators, and technologists must unite to establish a rights-respecting framework for AI in law enforcement—one that embodies the constitutional principles of justice, liberty, and dignity.
Cite this article as:
Komal Kumari, “AI-Generated FIRs in India: A Legal Analysis of Administrative Decision-Making and Police Functions” Vol.6 & Issue 1, Law Audience Journal (e-ISSN: 2581-6705), Pages 210 to 223 (21st June 2025), available at https://www.lawaudience.com/ai-generated-firs-in-india-a-legal-analysis-of-administrative-decision-making-and-police-functions/.
References & Footnotes:
[1] Code of Criminal Procedure, 1973, § 154.
[2] India Const. arts. 14 & 21.
[3] Lalita Kumari v. Govt. of Uttar Pradesh, (2014) 2 SCC 1 (India).
[4] National Crime Records Bureau (NCRB), Crime and Criminal Tracking Network & Systems (CCTNS), Ministry of Home Affairs, Govt. of India, available at https://ncrb.gov.in.
[5] Madhya Pradesh Police AI Pilot Project on Smart FIRs, reported in The Hindu (Dec. 2023), available at [insert working URL].
[6] R.V. Kelkar, Criminal Procedure 116–18 (EBC 2020).
[7] Bharatiya Nagarik Suraksha Sanhita, No. 45, Acts of Parliament, 2023 (India).
[8] Arindrajit Basu, Artificial Intelligence and Law Enforcement in India, The Centre for Internet & Society (2022), https://cis-india.org.
[9] “UP Police’s Trinetra App Now Uses AI for Crime Prediction,” The Times of India (Aug. 2023).
[10] Punjab Police, Annual Innovation Report, Govt. of Punjab (2023).
[11] “Facial Recognition System in Delhi: A Legal and Ethical Appraisal,” The Hindu (Jan. 2024).
[12] India Const. art. 14.
[13] Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.
[14] Bharatiya Sakshya Adhiniyam, 2023, §§ 2–4.
[15] Arindrajit Basu, Artificial Intelligence and Law Enforcement in India: Risks and Remedies, The Centre for Internet & Society (2022).
[16] “AFRS and Privacy in Policing,” The Hindu, Jan. 2024.
[17] Digital Personal Data Protection Act, No. 22 of 2023 (India).
[18] Anvar P.V. v. P.K. Basheer, (2014) 10 SCC 473.