Canonising AI and Algorithms: Assessing the Indian Constitution's
Abstract
The rapid advancement of Artificial Intelligence (AI) and algorithmic decision-making presents unique challenges and opportunities within the Indian constitutional framework. As AI systems increasingly influence governance, law enforcement, and judicial processes, it becomes crucial to assess their compatibility with constitutional principles such as equality, fairness, due process, and fundamental rights. This paper examines how AI and algorithms intersect with constitutional mandates, focusing on issues of accountability, transparency, and judicial oversight. It explores whether AI can be “canonized”—that is, integrated into India’s constitutional ethos while maintaining human-centric governance. The study critically evaluates legal precedents, global AI regulations, and India’s evolving legal landscape to determine the extent to which AI-driven decisions can align with constitutional safeguards. By addressing potential risks such as algorithmic bias, data privacy concerns, and lack of legal personhood for AI, the paper proposes a balanced regulatory framework that upholds constitutional values while embracing technological progress.
Table of Contents
- Introduction: AI and Constitutionalism
- AI and Fundamental Rights under the Indian Constitution
- Right to Equality (Article 14): Algorithmic Bias and Discrimination
- Freedom of Speech and Expression (Article 19(1)(a))
- Right to Privacy (Article 21 – Puttaswamy Judgment)
- AI and Separation of Powers
- Legal Personhood and AI: Can AI Have Rights and Duties?
- Conclusion: The Future of AI under the Indian Constitution
Introduction: AI and Constitutionalism
Artificial Intelligence (AI) and algorithmic decision-making have become integral to governance, law enforcement, and judicial processes worldwide. The increasing reliance on AI in India has transformed various sectors, including digital governance, biometric surveillance, predictive policing, and automated welfare distribution. AI-driven systems are used to streamline administrative processes, enhance efficiency, and provide data-driven insights for policymaking. However, while AI presents significant benefits, it also raises fundamental constitutional concerns that demand scrutiny.
One of the primary issues surrounding AI governance is its impact on fundamental rights. The Indian Constitution enshrines principles such as equality (Article 14), freedom of speech and expression (Article 19), and the right to life and personal liberty (Article 21). However, AI-based decision-making has introduced new challenges that test the constitutional safeguards protecting these rights. For instance, AI systems, if designed with biased datasets, can lead to algorithmic discrimination, violating the principle of equality before the law. Similarly, AI-driven content moderation and digital censorship could restrict freedom of expression, while mass surveillance programs powered by AI might infringe upon the right to privacy, especially in the post-Puttaswamy era, where privacy has been recognized as a fundamental right.
Beyond fundamental rights, AI governance also affects the separation of powers, a foundational principle ensuring the independence of the legislature, executive, and judiciary. The use of AI in decision-making processes—especially in law enforcement, taxation, and judicial reasoning—raises concerns about unchecked executive power and the potential erosion of judicial discretion. If AI-powered legal analytics or predictive policing influence judicial verdicts without human oversight, it could undermine judicial independence and due process. Moreover, AI’s deployment in governance, without comprehensive legal regulations or accountability mechanisms, could weaken democratic oversight, leaving individuals without adequate recourse to challenge AI-driven decisions.
Given these pressing concerns, the constitutional assessment of AI in India is not merely an academic exercise but a necessary step toward ensuring transparency, fairness, and accountability in AI governance. Key questions emerge: How can AI be designed to ensure fairness and non-arbitrariness under Article 14? Does AI-driven censorship align with the constitutional safeguards of Article 19? How can AI be regulated to prevent violations of privacy under Article 21? Can AI-based decision-making be subjected to judicial review, and how can democratic institutions ensure accountability in AI-driven governance? These questions highlight the urgent need for a robust constitutional framework to govern AI’s role in India, balancing technological advancements with fundamental rights and democratic principles.
AI and Fundamental Rights under the Indian Constitution
The Indian Constitution guarantees fundamental rights that serve as the foundation of a democratic society, ensuring equality (Article 14), freedom of speech and expression (Article 19), and personal liberty (Article 21). These rights are enshrined in Part III of the Constitution, acting as a safeguard against state excesses and ensuring that governance remains just, fair, and reasonable. However, the rise of Artificial Intelligence (AI) and algorithmic decision-making has introduced unprecedented challenges to these rights. While AI offers efficiency, automation, and predictive capabilities, its unchecked use raises concerns about bias, opacity, mass surveillance, and legal accountability. If AI systems are not designed and deployed within a constitutionally compliant framework, they may violate fundamental rights and undermine democratic principles.
1. Algorithmic Discrimination and the Right to Equality (Article 14)
One of the most pressing concerns regarding AI governance in India is algorithmic discrimination, which threatens the principle of equality before the law under Article 14. AI systems, particularly those used in law enforcement, hiring, lending, and welfare distribution, rely on large datasets to make decisions. However, if these datasets reflect historical and societal biases, AI may unintentionally reinforce discrimination rather than eliminate it.
For example, facial recognition technology (FRT), a widely used AI tool, has been shown to have higher error rates for marginalized communities, particularly women, individuals with darker skin tones, and ethnic minorities. In India, FRT deployed by law enforcement agencies, including the Delhi Police, has been criticized for disproportionately targeting certain communities, leading to concerns about biased policing and caste-based profiling. Similarly, predictive policing tools, which use AI to forecast crime-prone areas based on historical data, have been found to reinforce systemic biases. If past crime data is biased against certain communities, AI predictions will likely perpetuate the same prejudices, resulting in disproportionate surveillance and criminalization of specific groups.
AI-driven discrimination extends beyond law enforcement. Algorithmic hiring systems used by companies to screen job applicants may unintentionally favor certain demographic profiles while screening out candidates from historically disadvantaged groups.
Right to Equality (Article 14): Algorithmic Bias and Discrimination
Article 14 of the Indian Constitution guarantees the fundamental right to equality before the law and the equal protection of laws. This provision ensures that the state cannot act in an arbitrary, discriminatory, or unreasonable manner while implementing laws and policies. However, the increasing integration of Artificial Intelligence (AI) in governance, law enforcement, and economic decision-making has raised concerns about algorithmic bias and discrimination. AI systems rely on large datasets to function, but these datasets often reflect historical inequalities and societal biases, leading to outcomes that may disproportionately harm marginalized communities.
One of the most striking examples of AI-driven discrimination is found in Facial Recognition Technology (FRT). Studies have shown that FRT exhibits higher error rates for women, individuals with darker skin tones, and minority groups. A 2020 study by the Massachusetts Institute of Technology (MIT) found that FRT systems used by law enforcement agencies globally were less accurate in identifying Black, Asian, and Indigenous individuals compared to White individuals¹. In the Indian context, these biases raise serious concerns when FRT is used by law enforcement agencies. During the 2020 Delhi riots, AI-powered facial recognition technology was reportedly used to identify and prosecute protestors, with allegations that the system disproportionately targeted certain religious and socio-economic groups². If AI-driven policing unfairly impacts certain communities, it violates the constitutional principle of non-discrimination and equal protection under Article 14.
Another major concern is predictive policing, an AI-based tool used in crime prevention. Predictive policing algorithms analyze past crime data to identify areas and individuals more likely to be involved in criminal activities. However, because historical crime records often reflect deep-seated systemic biases, AI models tend to reinforce existing prejudices rather than eliminate them. For instance, if past policing data shows a higher number of arrests from lower-income or minority-dominated neighborhoods, the AI system is likely to flag these areas as high-crime zones, increasing police surveillance and interventions in these communities³. This results in a self-fulfilling cycle where prejudiced historical data continues to drive biased law enforcement practices.
AI-driven decision-making is also raising concerns in employment, financial services, and education. Many corporate hiring processes, loan approvals, and university admissions now use AI algorithms to screen candidates. However, biased training datasets can cause these AI systems to disfavor marginalized groups. For example, AI-powered hiring tools may filter out candidates based on names, locations, or educational backgrounds that are historically associated with underprivileged communities. Similarly, AI-driven loan and credit approval systems may deny financial assistance to individuals from economically disadvantaged backgrounds due to a lack of banking history, perpetuating economic inequality. These AI-driven biases contradict the constitutional mandate of equal opportunity and non-arbitrariness under Article 14.
The Supreme Court of India, in E.P. Royappa v. State of Tamil Nadu (1974), held that arbitrariness is the very antithesis of equality⁴. If AI systems function without transparency, accountability, and due process, they can lead to unfair and arbitrary decision-making, thereby violating constitutional principles of justice and fairness. Unlike traditional decision-making, where human reasoning and discretion allow for rectification of unfair outcomes, AI operates in a black-box manner, meaning its decision-making process is often opaque and unchallengeable. Individuals affected by AI-based discrimination may not even be aware of the bias, let alone have the means to contest it.
To align Artificial Intelligence (AI) with Article 14 of the Indian Constitution, which guarantees equality before the law and protection against arbitrary discrimination, it is crucial to implement a structured framework that ensures fairness, transparency, and accountability in AI-driven decision-making. As AI increasingly influences governance, law enforcement, employment, finance, and social welfare, unchecked biases and opaque decision-making processes pose significant risks to fundamental rights. Ensuring that AI systems function in a just and equitable manner requires comprehensive safeguards, including periodic algorithmic audits, diverse training datasets, human oversight, and robust judicial and legislative mechanisms.
1. Periodic Algorithmic Audits to Detect and Eliminate Biases
AI systems learn from vast amounts of data, but if the training data reflects historical biases, the algorithms can replicate and reinforce those biases in decision-making. For instance, AI-based facial recognition systems have been found to misidentify individuals from minority communities at disproportionately high rates, leading to wrongful arrests and discriminatory law enforcement practices. Similarly, AI-driven hiring tools have demonstrated biases against candidates based on gender, caste, or socio-economic background due to skewed training data.
To prevent such biases from influencing AI decision-making, periodic algorithmic audits must be conducted. These audits should involve independent regulatory bodies assessing AI systems for discriminatory patterns, unfair outcomes, and algorithmic opacity. AI audits must analyze decision-making processes, test algorithms against diverse datasets, and evaluate their real-world impact. Governments, corporations, and institutions deploying AI must be mandated to conduct such audits at regular intervals, ensuring that AI applications do not perpetuate unfair treatment. If biases are detected, corrective actions such as retraining AI models with balanced data, adjusting algorithmic weightage, and implementing fairness constraints should be enforced. Without routine audits, AI systems could operate unchecked, exacerbating existing inequalities rather than mitigating them.
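One widely used audit check of the kind described above is the disparate-impact ratio, which compares the favourable-decision rate an AI system gives a protected group against that of a reference group. The following is a minimal illustrative sketch, not a complete audit methodology; the group labels and the 0.8 threshold (a common rule of thumb in fairness auditing) are assumptions for the example.

```python
# Minimal sketch of one audit check: the disparate-impact ratio,
# computed from (group, decision) records where decision is
# 1 (favourable, e.g. shortlisted) or 0 (unfavourable).
from collections import defaultdict

def selection_rates(records):
    """Favourable-decision rate for each group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        favourable[group] += decision
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(records, protected, reference):
    """Ratio of the protected group's selection rate to the
    reference group's; values far below 1.0 flag possible bias."""
    rates = selection_rates(records)
    return rates[protected] / rates[reference]

# Hypothetical audit data: group "A" is favoured 80% of the time,
# group "B" only 40% of the time.
records = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 40 + [("B", 0)] * 60

ratio = disparate_impact_ratio(records, protected="B", reference="A")
print(round(ratio, 2))  # 0.5 -- well below the common 0.8 audit threshold
```

A real audit would run many such checks across attributes and intersections of attributes, and would be carried out by the independent regulatory bodies proposed above rather than by the deploying entity alone.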
2. Diverse and Representative Training Datasets to Minimize Discriminatory Outcomes
One of the primary reasons AI systems exhibit biased behavior is the lack of diversity in training datasets. AI models learn by analyzing past data, and if this data disproportionately represents privileged groups while underrepresenting marginalized communities, the AI system will produce discriminatory outcomes. For example, an AI model used in financial lending that is trained primarily on data from urban borrowers may systematically reject loan applications from rural or economically weaker individuals due to a lack of sufficient data representation. Similarly, AI-driven predictive policing models trained on historical crime records often end up disproportionately flagging lower-income neighborhoods, leading to over-policing of marginalized communities.
To minimize these discriminatory outcomes, AI training datasets must be diverse, balanced, and representative of the entire population. Data collection processes must include individuals from different socio-economic backgrounds, genders, castes, ethnicities, and geographic regions to ensure that AI models do not reflect or amplify existing biases. This requires collaboration between AI developers, legal experts, and social scientists to ensure that datasets used in machine learning are inclusive. Furthermore, mechanisms should be in place to continuously update AI models to prevent outdated and biased data from influencing decision-making. By ensuring diverse training datasets, AI can function in a manner that aligns with constitutional principles of equality and non-arbitrariness under Article 14.
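A simple first step toward the representative datasets described above is a representation check: comparing each group's share of the training data against its share of the population the system will serve. The sketch below is illustrative only; the group names and population shares are assumed figures, not real statistics.

```python
# Minimal sketch of a representation check for a training dataset,
# assuming each record carries a group label and we know roughly
# what share of the served population each group holds.
def representation_gaps(labels, population_shares, tolerance=0.05):
    """Return groups whose share in the dataset falls short of their
    population share by more than `tolerance`."""
    total = len(labels)
    return {
        g: round(population_shares[g] - labels.count(g) / total, 3)
        for g in population_shares
        if population_shares[g] - labels.count(g) / total > tolerance
    }

# Hypothetical lending data: rural borrowers are 10% of the records
# although they are ~35% of the population being served.
labels = ["urban"] * 90 + ["rural"] * 10
print(representation_gaps(labels, {"urban": 0.65, "rural": 0.35}))
# {'rural': 0.25}
```

Flagged gaps would then trigger the corrective steps discussed earlier, such as targeted data collection or retraining on rebalanced data.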
3. Human Oversight Mechanisms in AI-Driven Decision-Making Processes
While AI is capable of processing vast amounts of data and automating decisions, it lacks the moral reasoning, empathy, and contextual understanding necessary for fair decision-making. AI systems, when used without human oversight, can produce unjust and irreversible consequences. For example, automated AI-based job recruitment tools that reject applicants based solely on algorithmic predictions may lead to the exclusion of deserving candidates without any opportunity for reconsideration. Similarly, AI-driven predictive policing models might label individuals as potential criminals without human review, leading to wrongful arrests and harassment.
To prevent such arbitrary and unfair outcomes, human oversight mechanisms must be integrated into AI-driven decision-making processes. AI decisions that impact fundamental rights—such as employment opportunities, access to financial services, or law enforcement actions—must be subject to human review before final implementation. Decision-making processes should follow a human-in-the-loop approach, where AI functions as an assistive tool rather than an autonomous authority. Individuals affected by AI decisions should have the right to challenge AI-generated outcomes, seek explanations, and request human intervention. Establishing AI ethics boards within organizations, government institutions, and regulatory bodies can ensure that AI operates within the ethical and legal framework of human rights protections.
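The human-in-the-loop approach described above can be sketched as a routing rule: the system acts automatically only on high-confidence, non-adverse outcomes, and everything else is queued for a human reviewer. This is a minimal illustration; the decision labels and the confidence threshold are assumptions, not a prescribed standard.

```python
# Minimal sketch of a human-in-the-loop gate, assuming the model
# returns a decision plus a confidence score in [0, 1].
from dataclasses import dataclass

@dataclass
class Outcome:
    decision: str       # e.g. "approve" / "reject"
    confidence: float   # model confidence in [0, 1]

def route(outcome, confidence_threshold=0.9):
    """The AI acts alone only on high-confidence favourable outcomes;
    adverse or uncertain outcomes go to a human reviewer."""
    if outcome.decision != "approve" or outcome.confidence < confidence_threshold:
        return "human_review"
    return "auto_apply"

print(route(Outcome("approve", 0.95)))  # auto_apply
print(route(Outcome("reject", 0.99)))   # human_review: adverse outcome
print(route(Outcome("approve", 0.60)))  # human_review: low confidence
```

Routing every adverse outcome to a human, regardless of model confidence, reflects the principle above that AI should assist rather than replace the final decision-maker on matters affecting fundamental rights.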
4. Judicial Scrutiny and Legislative Safeguards to Ensure Compliance with Constitutional Values
The role of the judiciary and legislature is paramount in ensuring that AI systems comply with constitutional values and do not violate fundamental rights. Unlike traditional decision-making mechanisms, AI operates through complex algorithms, often functioning as a “black box” where the reasoning behind decisions is difficult to interpret. This lack of transparency raises concerns about due process, accountability, and access to justice. If an AI-driven system denies a person access to social welfare benefits, employment, or fair trial rights, individuals must have legal recourse to challenge such decisions.
Judicial scrutiny is necessary to establish precedents on AI-related constitutional violations, ensuring that AI-driven decisions adhere to the principles of fairness, non-discrimination, and reasonableness. Courts must assess whether AI systems deployed in governance, law enforcement, and public services comply with constitutional due process and provide affected individuals with an opportunity to appeal AI-generated decisions.
On the legislative front, comprehensive AI governance laws must be enacted to regulate the ethical use of AI in India. These laws should mandate transparency requirements, compelling organizations to disclose how AI models function and make decisions. Legislative frameworks should establish redressal mechanisms, allowing individuals to contest AI-driven actions that negatively impact them. Additionally, strict regulations should be implemented to govern the use of AI in law enforcement and surveillance, ensuring that it does not lead to mass surveillance, racial profiling, or human rights violations. Inspired by global AI governance models such as the European Union’s AI Act, India should develop legal standards for AI ethics, accountability, and risk assessment to ensure AI systems remain aligned with constitutional values.
By embedding these ethical and constitutional principles into AI governance, India can effectively harness the transformative potential of AI while ensuring that it does not become a tool of systemic discrimination. If AI operates without adequate safeguards, it risks deepening existing social inequalities, leading to violations of fundamental rights. However, with rigorous algorithmic audits, diverse datasets, human oversight, and strong legislative protections, AI can be shaped into an instrument that advances fairness, equity, and constitutional justice.
¹ MIT Study on AI Bias, 2020.
² Delhi Police AI System Report, 2021.
³ Predictive Policing Study, The Hindu, 2022.
⁴ E.P. Royappa v. State of Tamil Nadu, AIR 1974 SC 555.
Freedom of Speech and Expression (Article 19(1)(a))
The right to freedom of speech and expression, enshrined under Article 19(1)(a) of the Indian Constitution, is one of the most fundamental rights in a democratic society. It ensures that individuals can express their opinions, share information, and engage in open discourse without unreasonable restrictions. However, the growing influence of Artificial Intelligence (AI) in content moderation, censorship, and misinformation control has raised serious concerns regarding the scope and limitations of free speech in the digital age. While AI-driven systems are intended to regulate hate speech, misinformation, and unlawful content, they often suppress legitimate political dissent and alternative viewpoints, posing a risk to constitutional freedoms.
A key area where AI affects freedom of speech is content moderation on digital platforms. Social media companies like Facebook, Twitter (now X), and YouTube use AI-driven algorithms to filter and remove online content. These algorithms, trained to detect hate speech, violence, and misinformation, sometimes wrongfully classify political activism, human rights advocacy, and dissenting voices as harmful content. Reports have shown that posts related to protests, government criticism, and minority rights in India have been disproportionately removed by AI-driven moderation systems⁵. The lack of transparency in how these AI models function makes it difficult to challenge wrongful censorship, leading to fears that AI could be used as a tool for state-controlled narrative enforcement and suppression of free speech.
Another significant issue is AI-generated deepfakes and misinformation, which threaten democratic discourse, electoral integrity, and individual reputation. Deepfake technology, powered by advanced AI models, enables the creation of hyper-realistic but fake videos, audio recordings, and images that can be used to manipulate public opinion, spread propaganda, and defame individuals. In recent years, deepfakes have been used in political smear campaigns, causing misinformation to spread rapidly on social media platforms⁶. Recognizing the constitutional and democratic risks posed by deepfakes, the European Commission has classified them as a serious threat to free and fair elections, advocating for stringent regulations and real-time detection mechanisms⁷. In India, where electoral disinformation and digital propaganda are rising concerns, the lack of robust AI regulations makes it easier for malicious actors to exploit AI for information warfare.
The constitutional safeguards against AI-driven censorship were emphasized in the landmark Shreya Singhal v. Union of India (2015) case, where the Supreme Court struck down Section 66A of the Information Technology Act. The Court ruled that restrictions on online speech must be narrowly defined, reasonable, and not arbitrary. Section 66A was declared unconstitutional because it gave authorities broad and vague powers to curb online speech, leading to its misuse. Similarly, AI-driven censorship—if arbitrary, opaque, or politically biased—could violate the Shreya Singhal ruling, necessitating judicial oversight and transparent regulatory mechanisms⁸.
To ensure that Artificial Intelligence (AI) upholds freedom of speech rather than undermining it, a robust regulatory framework is essential to prevent arbitrary censorship, misinformation, and digital authoritarianism. AI-powered content moderation systems are widely used by social media platforms, search engines, and online forums to regulate online discourse, detect hate speech, remove harmful content, and combat misinformation. However, the lack of transparency, accountability, and human oversight in these systems poses significant risks to free expression, a fundamental right protected under Article 19(1)(a) of the Indian Constitution. Without appropriate safeguards, AI-driven censorship mechanisms can lead to unwarranted suppression of political dissent, media freedom, and marginalized voices, contradicting the constitutional principle that restrictions on speech must be reasonable and justifiable.
1. Transparent AI Moderation Policies to Prevent Arbitrary Censorship
AI-powered moderation tools often operate on proprietary algorithms that classify and remove content based on predefined guidelines. However, these algorithms may lack contextual understanding, leading to unjustified content takedowns or the suppression of legitimate speech. For instance, AI models may mistakenly flag satirical content, political criticism, or culturally sensitive expressions as hate speech or misinformation, disproportionately affecting activists, journalists, and marginalized communities. To prevent arbitrary censorship, AI moderation policies must be transparent, publicly accessible, and subject to periodic review. Users should be informed about the criteria used to remove or restrict content, and moderation guidelines must be designed in accordance with constitutional principles of free speech, ensuring that only content violating legitimate legal standards is restricted.
2. Human Oversight in AI-Driven Content Moderation, Allowing for Appeals and Corrections
AI moderation systems function autonomously, often without human intervention or review, leading to erroneous takedowns of lawful content. Unlike human moderators, AI lacks nuance, cultural context, and the ability to interpret complex expressions such as sarcasm, metaphor, or political satire. This can result in the removal of content that is not actually harmful but merely misclassified due to algorithmic limitations. To address this, there must be a mechanism for human oversight in AI-driven content moderation, allowing users to appeal wrongful censorship decisions and seek corrections. Platforms should establish independent review panels comprising legal experts, civil society representatives, and human rights organizations to assess appeals and ensure due process in content moderation decisions.
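The appeal mechanism described above can be pictured as a small state machine: an automated takedown records the criterion the classifier applied, the affected user may appeal, and a human reviewer, not the model, makes the final call. The sketch below is illustrative only; the state names and fields are assumptions.

```python
# Minimal sketch of an appeal path for automated moderation,
# assuming each takedown records why the AI acted and a human
# reviewer can overturn it on appeal.
from dataclasses import dataclass

@dataclass
class Takedown:
    post_id: str
    ai_reason: str           # criterion the classifier applied
    status: str = "removed"  # removed -> under_appeal -> restored/upheld

    def appeal(self):
        self.status = "under_appeal"

    def human_review(self, lawful: bool):
        # A human, not the model, makes the final decision on appeal.
        self.status = "restored" if lawful else "upheld"

t = Takedown("post-42", ai_reason="flagged as hate speech")
t.appeal()
t.human_review(lawful=True)  # reviewer finds the post was satire
print(t.status)  # restored
```

Recording `ai_reason` at takedown time supports the transparency requirement discussed earlier: the user and the review panel can see which criterion was applied and contest it.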
3. Legal Frameworks to Regulate Deepfakes and Misinformation Without Restricting Legitimate Speech
The rise of AI-generated deepfakes and misinformation poses significant challenges to democratic discourse, electoral integrity, and public trust in information. AI tools can create realistic yet fabricated content, which can be misused to spread propaganda, defamation, and false narratives. However, regulating AI-generated misinformation must be done carefully to prevent misuse of laws for political censorship or suppression of dissenting voices. A clear legal framework should differentiate between harmful disinformation intended to deceive the public and legitimate expressions such as satire, parody, and opinion-based critique. Laws must define reasonable restrictions on deepfake technology while ensuring that state regulations do not become a tool for curbing press freedom or political criticism.
4. Judicial Scrutiny and Legislative Guidelines to Ensure AI-Based Censorship Aligns with Constitutional Protections
The judiciary plays a crucial role in safeguarding free speech by ensuring that AI-driven censorship mechanisms adhere to constitutional standards. The Supreme Court of India, in Shreya Singhal v. Union of India (2015), struck down Section 66A of the IT Act, emphasizing that restrictions on online speech must be clear, specific, and not overly broad. To uphold this principle, any AI-based content moderation system must be subject to judicial review to prevent misuse. Legislatures should establish guidelines defining the limits of AI-based censorship, ensuring that AI tools do not override constitutional rights or enable government overreach. This includes protecting anonymity online, preventing unlawful surveillance, and ensuring that platforms do not engage in discriminatory censorship practices.
AI must be harnessed responsibly to balance curbing harmful content with protecting freedom of expression. If unchecked, AI-driven censorship could erode democratic values by silencing dissenting voices, marginalizing critical perspectives, and enabling automated digital authoritarianism. Implementing transparent policies, human oversight, legal accountability, and judicial safeguards will ensure that AI serves as a tool for open democratic discourse rather than a means of suppressing speech and information.
⁵ AI and Free Speech Report, 2023.
⁶ European Commission AI Guidelines, 2023.
⁷ Deepfake Regulation Act, EU, 2023.
⁸ Shreya Singhal v. Union of India, (2015) 5 SCC 1.
Right to Privacy (Article 21 – Puttaswamy Judgment)
The right to privacy is an essential component of personal liberty and human dignity, both of which are protected under Article 21 of the Indian Constitution. This right was emphatically recognized in the landmark Supreme Court case K.S. Puttaswamy v. Union of India (2017), which declared that privacy is a fundamental right inherent to the right to life and personal liberty⁹. However, the increasing reliance on Artificial Intelligence (AI) for surveillance, data profiling, and automated decision-making poses serious challenges to individual privacy. AI-powered tools are being extensively used by the state and private entities to collect, process, and analyze vast amounts of personal and biometric data, often without adequate legal safeguards. This raises concerns about mass surveillance, data misuse, and potential violations of constitutional protections.
A major concern regarding AI and privacy is state-led surveillance programs. In India, large-scale AI-driven surveillance systems such as NATGRID (National Intelligence Grid) and CCTNS (Crime and Criminal Tracking Network & Systems) are designed to aggregate and analyze personal data from various sources, including bank transactions, travel records, call logs, and social media activity. While these programs aim to enhance national security and law enforcement, they also enable mass surveillance, which may operate without sufficient oversight or safeguards. The indiscriminate collection of data, coupled with the lack of robust privacy laws, raises concerns about government overreach, the chilling effect on free speech, and the potential misuse of personal information. If AI-driven surveillance operates without judicial and legislative scrutiny, it risks violating the right to privacy as laid down in the Puttaswamy judgment.
Another major challenge posed by AI is data-driven exclusion, particularly in the context of the Aadhaar biometric identification system. The Aadhaar system, which relies on AI-based biometric authentication, has been widely used for accessing welfare schemes, financial services, and government subsidies. However, reports indicate that algorithmic mismatches in Aadhaar authentication have led to the exclusion of millions of individuals from essential services. Instances where elderly individuals, laborers, and rural citizens were denied food rations, pensions, and healthcare due to fingerprint mismatches or system errors highlight the flaws in AI-powered identity verification¹⁰. Such exclusions violate the principles of dignity, equality, and social justice, undermining the very purpose of welfare schemes meant to uplift marginalized communities.
Recognizing these concerns, the Justice B.N. Srikrishna Committee, which was tasked with drafting India’s data protection framework, emphasized the need for human oversight in AI-driven decision-making. The committee warned that AI should not be allowed to make autonomous decisions that significantly impact individuals without human intervention. It recommended that AI systems must be transparent, accountable, and subject to strict data protection regulations¹¹. This aligns with the Puttaswamy judgment, which asserted that privacy is not an absolute right but must be protected against disproportionate state and corporate intrusions.
As privacy continues to evolve as a constitutional right, AI governance must be aligned with the principles of individual dignity, autonomy, and data protection. The rapid expansion of AI-driven technologies in governance, law enforcement, and private sectors has raised concerns about the potential erosion of privacy rights. AI systems process vast amounts of personal data, often relying on automated decision-making without direct human oversight. This raises issues related to data security, informed consent, and the risk of mass surveillance. The widespread deployment of AI in facial recognition, biometric authentication, predictive analytics, and automated profiling has made it essential to establish legal safeguards that prevent the misuse of AI-driven data collection and surveillance mechanisms.
1. Comprehensive data protection laws that regulate AI-driven surveillance and data processing
AI technologies used for data collection and analysis often operate without clear regulatory boundaries, leading to concerns about the unauthorized access, storage, and sharing of personal information. AI-powered surveillance tools, such as predictive policing and facial recognition, collect large datasets that may be used for purposes beyond their original intent, creating risks of profiling, discrimination, and privacy violations. Without comprehensive data protection laws, there is a risk that AI-driven surveillance programs could be misused for mass monitoring, tracking individuals without their consent, or even targeting specific groups. Data protection laws must define strict guidelines on how personal data is collected, processed, and stored, ensuring compliance with fundamental privacy principles such as informed consent, data minimization, and purpose limitation. A strong legal framework must also impose liability on entities—both public and private—that misuse AI for intrusive data collection or fail to protect users’ personal information from breaches and unauthorized access.
2. Judicial oversight and independent regulatory bodies to monitor AI-based surveillance programs
AI-driven surveillance mechanisms, such as facial recognition systems deployed in public spaces, raise concerns about the indiscriminate tracking of individuals without their knowledge or consent. The absence of judicial scrutiny in AI-based monitoring programs can lead to unchecked state surveillance, undermining individuals’ autonomy and privacy. AI-powered surveillance must be subject to strict judicial oversight to ensure that it does not violate fundamental rights. Courts must establish clear legal thresholds that define the scope and limitations of AI-driven surveillance, ensuring that such measures are deployed only when necessary and proportionate. Furthermore, independent regulatory bodies must oversee AI-based data collection and ensure that government agencies and private entities adhere to data protection norms. These regulatory bodies should be empowered to conduct periodic audits, investigate potential privacy violations, and hold accountable any organization that misuses AI-driven surveillance technologies. The role of an independent watchdog is crucial in preventing AI from being used for mass surveillance, political profiling, or discriminatory law enforcement practices.
3. Stronger safeguards in biometric authentication systems to prevent exclusion from welfare benefits
AI-driven biometric authentication, such as Aadhaar-based verification, relies on fingerprint scans, facial recognition, and iris scans to verify identities for accessing government services. However, technical failures, poor-quality biometric data, and database mismatches can lead to authentication errors, preventing individuals from receiving food rations, pensions, and other essential services. Vulnerable populations, including the elderly, manual laborers, and disabled individuals, often face difficulties in consistently verifying their identities due to changes in biometric features. Such exclusions raise serious concerns regarding the right to privacy and dignity, as well as the constitutional guarantee of equality. To address these challenges, biometric authentication systems must incorporate fallback mechanisms such as manual verification, alternative identity proofing methods, and grievance redressal processes. Additionally, safeguards must be in place to prevent the misuse of biometric data for unauthorized surveillance, profiling, or targeted monitoring by government agencies and private corporations. The protection of biometric data must be prioritized to prevent privacy violations that could result in discrimination or social exclusion.
4. Transparency and accountability mechanisms in AI governance to prevent state and corporate overreach
AI-driven decision-making processes must be explainable and accessible, ensuring that individuals understand how their personal data is being used. Organizations utilizing AI for surveillance, predictive analytics, or automated decision-making must provide transparency reports detailing their data collection practices, algorithmic decision-making processes, and measures taken to mitigate bias and errors. Without transparency, AI technologies can be deployed in ways that violate privacy rights without public awareness or consent. Public institutions and private companies deploying AI should establish mechanisms that allow individuals to challenge and seek redress for unfair AI-driven decisions. Algorithmic impact assessments, external audits, and public disclosure of AI policies can help ensure that AI systems operate within ethical and legal boundaries. AI governance frameworks must also require regular oversight from independent review committees and public consultations to address concerns related to privacy, data security, and algorithmic discrimination. By ensuring transparency in AI governance, it becomes possible to hold both state and corporate actors accountable for their use of AI technologies.
The increasing reliance on AI in governance, law enforcement, and public services demands a legal framework that upholds individual privacy rights while ensuring accountability in the use of AI-driven technologies. Proper safeguards in AI regulation can help prevent intrusive surveillance, protect biometric data, and maintain the balance between technological progress and individual freedoms.
⁹ K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.
¹⁰ Aadhaar Exclusion Report, The Wire, 2023.
¹¹ Justice Srikrishna Committee Report, 2018.
AI and Separation of Powers
The doctrine of separation of powers is a fundamental principle of constitutional governance, ensuring that the legislative, executive, and judicial branches function independently while maintaining necessary checks and balances. However, the increasing integration of Artificial Intelligence (AI) into governance challenges this doctrine by expanding executive authority without adequate legislative and judicial oversight. The use of AI-driven decision-making tools in tax administration, law enforcement, and public service delivery risks concentrating power in the executive, potentially leading to arbitrary governance and diminished institutional accountability.
One of the most significant risks posed by AI in governance is the automation of executive decision-making without adequate human oversight. AI-powered tax assessment systems analyze financial data and detect tax evasion patterns, but they often operate without clear transparency mechanisms or avenues for appeal. Similarly, AI-driven law enforcement tools, such as predictive policing and facial recognition, allow the executive branch to monitor and track individuals on a large scale, raising concerns about authoritarian surveillance and the potential for abuse. Without robust legislative frameworks to regulate AI-based governance, executive agencies may wield unchecked power, bypassing parliamentary scrutiny and weakening democratic accountability.
The judiciary’s adoption of AI also presents constitutional concerns. While AI-powered legal research tools and predictive analytics can improve judicial efficiency, there is a growing concern that over-reliance on AI could diminish human discretion in legal interpretation. AI-assisted case law research, automated sentencing guidelines, and predictive models for case outcomes—though useful—raise the question: Can AI influence judicial decision-making in a way that undermines the judiciary’s independence and interpretative autonomy? The risk of algorithmic bias influencing court decisions is particularly alarming, as AI systems trained on historical legal data may reinforce existing judicial biases rather than ensuring fair and impartial adjudication.
Comparatively, the European Union’s AI Act (2023) has introduced strict regulatory mechanisms to govern the use of AI in the judiciary. The Act mandates human oversight in judicial AI applications to prevent automated decision-making from undermining judicial independence and to ensure that algorithmic biases do not influence court rulings¹². The EU framework recognizes that while AI can be an aid to legal professionals, it cannot replace human judgment in legal interpretation and sentencing.
India must consider similar safeguards to ensure that AI does not erode constitutional principles, particularly in the context of the doctrine of separation of powers, which prevents excessive concentration of authority within any single branch of government. The increasing integration of AI in governance, law enforcement, and judicial decision-making raises concerns about the unchecked influence of automated systems on democratic processes and fundamental rights. AI-driven decision-making, if left unregulated, has the potential to undermine institutional independence, reduce accountability, and blur the boundaries between executive, legislative, and judicial functions. Therefore, it is essential to implement measures that maintain the balance of power and uphold constitutional protections.
1. Legislative oversight of AI-driven executive decision-making to prevent excessive concentration of power
The executive branch increasingly relies on AI for administrative decision-making, such as predictive governance, automated public service delivery, and AI-powered law enforcement. However, without adequate legislative oversight, there is a risk that AI could be used to centralize power within the executive, bypassing democratic deliberation. AI-driven governance tools, if deployed without accountability, may lead to opaque decision-making, where executive authorities exercise unchecked control over policies, surveillance, and public resource allocation. To address this, the legislature must establish clear legal frameworks that define the scope of AI use in executive decision-making, ensuring that AI-based policies remain transparent, fair, and subject to democratic scrutiny. Regular parliamentary reviews, independent audits, and mandated reporting on AI-based administrative actions can help prevent the misuse of AI for arbitrary governance decisions. Additionally, legislative bodies should have the authority to question and regulate AI-driven executive decisions to ensure they align with constitutional principles and do not infringe upon citizens’ rights.
2. Judicial scrutiny of AI-based law enforcement and surveillance technologies to ensure compliance with constitutional protections
AI-driven law enforcement tools, such as facial recognition, predictive policing, and automated profiling, pose significant risks to constitutional rights, including the right to equality, privacy, and due process. Without judicial scrutiny, AI-powered surveillance and policing mechanisms could be used to enable mass surveillance, disproportionately target marginalized communities, or even criminalize dissent. Automated decision-making in law enforcement may lack transparency, making it difficult for individuals to challenge AI-driven actions. To ensure constitutional compliance, courts must have the power to review and regulate AI-based law enforcement practices. Judicial scrutiny is necessary to prevent AI from being used as an instrument of state overreach, ensuring that AI-driven surveillance programs meet the standards of necessity, proportionality, and reasonableness. Furthermore, courts must establish legal precedents on the use of AI in criminal justice, ensuring that AI-based evidence and decision-making do not undermine due process rights. Independent judicial oversight bodies should also monitor AI-driven law enforcement technologies to prevent their misuse in ways that compromise fundamental freedoms.
3. Mandatory human review in AI-assisted judicial decision-making, ensuring that AI remains a tool rather than a determinant in legal interpretation
The integration of AI in judicial processes, including legal research, case prediction, and sentencing recommendations, raises concerns about the potential over-reliance on automated systems in the legal domain. While AI can assist in streamlining legal proceedings and analyzing vast amounts of case law, it cannot replace human reasoning, ethical considerations, and judicial discretion. If AI-generated legal outcomes are accepted without human review, there is a risk of unjust rulings, as AI lacks the capacity to interpret laws in a nuanced manner or consider the social and moral implications of a case. To prevent AI from becoming a determinant in judicial decision-making, it is essential to mandate human oversight in all AI-assisted legal processes. Judges must retain the final authority in legal interpretation, ensuring that AI-generated recommendations serve only as advisory tools rather than binding rulings. Additionally, courts should establish guidelines on the admissibility of AI-generated evidence and the extent to which AI can be relied upon in legal reasoning. By maintaining human review as a mandatory safeguard, the judiciary can prevent AI from undermining judicial independence and ensure that legal decisions remain fair, reasoned, and contextually appropriate.
4. A clear regulatory framework governing the use of AI in governance, preventing its misuse for arbitrary executive actions
AI’s growing role in governance, including automated decision-making in public administration, data-driven policymaking, and digital identity verification, necessitates a well-defined regulatory framework to prevent its misuse. The lack of clear legal standards governing AI’s deployment in state functions creates a risk of arbitrary executive actions, where AI is used without accountability or oversight. For instance, AI-powered automated welfare systems could deny social benefits without due process, or AI-based risk assessments could be used to justify discriminatory policies. A regulatory framework must establish legal guidelines for AI deployment, ensuring that AI-driven governance remains accountable, transparent, and aligned with fundamental rights. This includes defining the limits of AI’s role in government functions, mandating impact assessments before AI implementation, and ensuring that AI-generated decisions are subject to review and appeal. Independent regulatory bodies should oversee the deployment of AI in governance, ensuring compliance with ethical principles, fairness, and non-discrimination. A clear legal structure will help maintain the balance between technological innovation and democratic accountability, preventing AI from becoming a tool of arbitrary state control.
Without proper checks and balances, AI risks blurring the boundaries between the executive, legislative, and judicial functions, creating a technocratic governance model that diminishes democratic accountability. The integration of AI in governance must therefore be regulated, transparent, and aligned with constitutional principles to preserve the independence of state institutions and prevent the erosion of fundamental rights.
Legal Personhood and AI: Can AI Have Rights and Duties?
The question of whether Artificial Intelligence (AI) should be granted legal personhood has sparked an ongoing debate in AI jurisprudence, ethics, and constitutional law. Traditionally, legal personhood has been granted to humans, corporations, and even religious deities in some jurisdictions, allowing them to hold rights and duties under the law. The argument for AI personhood stems from the increasing autonomy of AI systems in decision-making, particularly in areas such as healthcare, finance, transportation, and law enforcement. If an autonomous AI system causes harm—such as a self-driving car accident, a biased hiring algorithm, or a medical misdiagnosis by an AI-powered diagnostic tool—the issue of liability becomes highly complex. Should AI be held accountable, or should its creators, programmers, and users bear the legal responsibility?
The European Union (EU) has actively debated the idea of granting “electronic personhood” to AI entities, proposing that certain advanced AI systems might require legal status to be assigned rights and responsibilities¹³. This proposal emerged in response to growing concerns about accountability in AI-driven decision-making, particularly in cases where the human creator or operator cannot be clearly identified. The idea of electronic personhood suggests that AI could be treated similarly to corporations, which, despite being non-human entities, enjoy legal personhood with limited liability. However, critics argue that granting AI legal personhood blurs the lines between human and artificial agency, potentially allowing companies to escape liability by attributing blame to AI rather than to its developers or deployers.
In India, the legal framework does not currently recognize AI as a legal person, and there are no established laws addressing AI liability. However, the need for clear accountability mechanisms is becoming urgent as AI systems increasingly influence public administration, business transactions, and legal decision-making. India has witnessed cases where AI-based biometric authentication in Aadhaar has led to wrongful exclusions from welfare schemes, and AI-driven recruitment tools have been accused of discriminatory hiring practices. Without a well-defined legal structure, victims of AI-related harm may struggle to seek legal recourse.
A key challenge in this debate is that AI lacks consciousness, intent, and moral reasoning—qualities traditionally required for holding legal rights and responsibilities. Unlike humans or corporations, AI does not possess independent will, cannot be punished, and cannot exercise rights or duties in the same way as legal persons. For this reason, most legal scholars advocate for holding AI creators, developers, and operators accountable, rather than granting AI any form of legal personhood. This approach aligns with existing product liability laws, which attribute responsibility to manufacturers and users rather than the product itself.
To regulate AI liability and ensure its ethical usage, India must consider implementing a comprehensive legal and institutional framework to address the risks posed by AI-driven decision-making and automated systems. The increasing reliance on AI across various sectors, including healthcare, finance, governance, and law enforcement, raises concerns about accountability, fairness, and transparency. Without clear regulations, AI-induced harm may go unaddressed, leading to corporate negligence, biased outcomes, and potential violations of fundamental rights. The implementation of safeguards will help ensure that AI remains a tool for societal progress while preventing its misuse.
1. Establishing AI-specific liability laws to determine responsibility in cases of AI-induced harm
The existing legal framework in India does not provide clear guidelines on AI liability, making it difficult to attribute responsibility when AI systems cause harm. AI-induced harm can take various forms, including wrongful arrests due to biased facial recognition, denial of healthcare benefits by automated eligibility systems, and financial losses caused by AI-powered trading algorithms. The challenge lies in determining whether liability falls on the AI developer, the deployer, or the user. AI-specific liability laws must be established to define the legal responsibility of different stakeholders, ensuring that victims of AI-induced harm have clear avenues for legal recourse. These laws should categorize AI-based harms, differentiate between intentional and unintentional consequences, and impose penalties for negligence in AI development or deployment. Additionally, legal provisions must address cases where AI systems act unpredictably or autonomously, ensuring that liability principles evolve to accommodate emerging AI technologies.
2. Mandating transparency in AI decision-making, ensuring that developers and deployers remain accountable
AI-driven decision-making often operates as a “black box,” meaning that its logic, reasoning, and underlying processes are not easily understandable, even by experts. This opacity can lead to serious accountability concerns, especially in areas where AI decisions impact human rights, such as credit scoring, employment screening, and predictive policing. Lack of transparency makes it difficult for affected individuals to challenge unfair AI-driven decisions, increasing the risk of arbitrary or biased outcomes. To address this, AI regulations must mandate transparency in AI decision-making by requiring organizations to disclose how AI algorithms function, what data they rely upon, and how they arrive at conclusions. Developers and deployers of AI systems should be obligated to maintain audit trails, allowing for independent verification of AI processes. Transparency measures should also include explainability requirements, ensuring that AI-generated decisions can be interpreted and challenged by individuals.
3. Creating an AI ethics framework, similar to the EU’s AI regulations, to define ethical standards for AI deployment
The ethical implications of AI deployment must be addressed to ensure that AI systems are designed and used in a manner that aligns with human rights, fairness, and non-discrimination principles. Several global frameworks, such as the European Union’s AI Act, provide structured approaches to AI ethics by classifying AI systems based on their risk levels and imposing stricter regulations on high-risk AI applications. India must develop a similar AI ethics framework that establishes clear ethical guidelines for AI development and deployment. This framework should include principles such as fairness, privacy protection, non-discrimination, and human oversight in AI-driven decision-making. It should also categorize AI applications based on their potential risks, implementing stricter regulations for high-risk AI systems used in law enforcement, critical infrastructure, and governance. Ethical standards must be enforceable through regulatory agencies that monitor AI compliance and address ethical violations.
4. Setting up AI redressal mechanisms, allowing individuals to challenge AI-driven decisions that negatively affect them
One of the major concerns with AI-driven decision-making is the difficulty individuals face in challenging automated outcomes, especially when AI systems make critical decisions regarding employment, credit approval, healthcare, or law enforcement actions. Unlike human decision-makers, AI systems do not offer direct avenues for appeal, and affected individuals may not even be aware that an AI-driven process influenced the decision against them. To protect individuals from unfair AI decisions, India must establish AI redressal mechanisms that allow individuals to contest and seek review of AI-generated decisions. This could include the creation of AI review boards, independent adjudicatory bodies, or AI ombudsman offices that handle complaints related to AI-based discrimination or erroneous decisions. Organizations deploying AI should be required to provide accessible grievance mechanisms, informing individuals about their right to appeal AI-driven outcomes. Additionally, laws must be enacted to ensure that individuals have access to legal remedies when AI decisions violate their rights or cause unjust harm.
While AI cannot and should not be granted fundamental rights, the absence of clear legal accountability could lead to corporate negligence, unfair legal outcomes, and unchecked AI risks. By developing a robust regulatory framework, India can prevent AI-related harms while ensuring that AI remains an instrument of progress rather than an unregulated force.
¹³ AI Personhood Debate, Oxford AI Review, 2023.
Conclusion: The Future of AI under the Indian Constitution
As Artificial Intelligence (AI) becomes an integral part of governance, law enforcement, and judicial processes in India, it is crucial to ensure that AI development and deployment align with constitutional principles, fundamental rights, and democratic values. While AI offers efficiency, predictive capabilities, and automation, its unchecked use can lead to violations of privacy, bias in decision-making, mass surveillance, and threats to the rule of law. To ensure that technological progress does not come at the cost of constitutional protections, India must take a proactive approach by embedding constitutional morality into AI governance.
One of the most pressing requirements is the establishment of a specialized AI regulatory authority. Given AI’s far-reaching impact on various sectors, a dedicated regulatory body—akin to the Data Protection Authority (DPA) proposed in the Personal Data Protection Bill—could oversee AI ethics, algorithmic accountability, and compliance with fundamental rights. Such a body should ensure that AI systems deployed by the government and private sector adhere to the principles of fairness, transparency, and non-discrimination. This regulatory mechanism could evaluate AI applications in law enforcement, social welfare, and judicial processes to prevent their misuse and ensure that they operate within constitutional boundaries.
Additionally, judicial oversight must be strengthened to prevent the executive branch from using AI for unchecked surveillance and decision-making. Courts must play a pivotal role in determining whether AI-driven governance violates the principles of due process, proportionality, and individual autonomy. The Indian judiciary has historically acted as the guardian of constitutional rights, as seen in landmark cases like K.S. Puttaswamy v. Union of India (2017), which upheld the right to privacy. By extending judicial scrutiny to AI applications, courts can ensure that AI-driven policies do not infringe upon fundamental rights and that legal remedies exist for individuals affected by algorithmic decisions.
A crucial aspect of AI governance is algorithmic transparency. Many AI models operate as “black boxes,” making decisions that lack clear explanations or accountability mechanisms. If AI is used in public administration, welfare distribution, law enforcement, and judicial decision-making, it must be explainable and auditable. The government should mandate algorithmic audits, impact assessments, and independent reviews of AI-driven systems to ensure that they do not reinforce bias, discrimination, or arbitrary decision-making. Such transparency is essential for protecting citizens’ rights and maintaining public trust in AI-driven governance.
Furthermore, ethical considerations must shape India’s AI future, prioritizing human dignity, fairness, and accountability. AI governance should be guided by the constitutional values of justice, liberty, equality, and fraternity, ensuring that technological advancements serve society rather than erode fundamental freedoms. India can draw insights from the European Union’s AI Act (2023), which mandates human oversight over AI-driven legal and administrative decisions, and from international AI ethics frameworks that promote non-discrimination, inclusivity, and public accountability.
To build an AI-powered future that respects the Indian Constitution, the following steps are essential:
1. Enact comprehensive AI regulations that define legal liability, ethical AI deployment, and safeguards against misuse.
2. Establish an AI regulatory authority to oversee algorithmic governance, transparency, and accountability.
3. Strengthen judicial oversight to prevent executive overreach in AI-driven surveillance and decision-making.
4. Mandate algorithmic audits to ensure AI systems are explainable, fair, and free from bias.
5. Align AI governance with constitutional morality, ensuring that technological advancements enhance—rather than threaten—democracy and fundamental rights.
India stands at a critical juncture where AI’s potential must be harnessed responsibly. If AI governance is aligned with constitutional safeguards, India can embrace technological progress while protecting individual freedoms, ensuring democratic oversight, and upholding the rule of law. The future of AI in India must be human-centric, ethically grounded, and constitutionally compliant, ensuring that innovation serves society without undermining fundamental rights.
Article by Anupama Singh, Intern at Fastrack Legal Solutions