
What Happened
India is gradually shaping its approach to governing artificial intelligence as AI systems become more common in finance, healthcare, digital platforms, and public services. While the country has introduced data protection laws, platform rules, and sector-specific guidelines, it still does not have a single, comprehensive legal framework dedicated to artificial intelligence.
Context: Why AI Governance Has Become Necessary
Artificial intelligence systems are no longer limited to research labs. They are increasingly used to automate decisions, analyse personal data, and influence access to services such as loans, employment, healthcare, and welfare schemes. As AI adoption expands, concerns around misuse, bias, privacy, and accountability have also grown.
In India, rapid digitisation and the expansion of digital public infrastructure have accelerated AI deployment. However, most existing laws were drafted before autonomous and self-learning systems became widespread. This has created regulatory gaps, especially in defining responsibility when AI systems cause harm.
India has so far favoured an innovation-friendly and rights-respecting path rather than intrusive, surveillance-heavy regulatory models. The challenge lies in ensuring that this approach protects citizens while still allowing responsible technological growth.
What Laws Currently Regulate AI in India
India regulates artificial intelligence indirectly through multiple legal and regulatory instruments.
Information Technology Act, 2000
The IT Act provides the base framework for digital activity. Certain provisions are relevant to AI-related harms.
- Sections 66C and 66D, which penalise identity theft and cheating by impersonation using a computer resource, are increasingly applied to deepfakes and AI-driven fraud.
- Under Section 79, intermediaries receive "safe harbour" protection from liability for third-party content, provided they observe due diligence and comply with lawful government directions.
IT Rules, 2021
The Information Technology Rules require digital platforms to act against unlawful and misleading content.
- Platforms must label manipulated or synthetic content and respond to grievances.
- Generative AI systems are now considered within the scope of these rules.
Digital Personal Data Protection Act, 2023
The DPDP Act governs how personal data is collected and processed.
- It mandates consent-based and purpose-limited data use.
- AI developers must ensure transparency, fairness, and data security.
Sector-Specific Regulation
India follows a sectoral approach to AI governance.
- Financial regulators such as the Reserve Bank of India and SEBI expect explainability and fairness in AI-based decision-making.
- Algorithmic trading, medical AI devices, telecom systems, and cybersecurity tools are regulated by their respective authorities.
IndiaAI Mission
Through the IndiaAI Mission, the government is investing in domestic computing capacity, datasets, and ethical AI development. The focus is on risk-based governance rather than blanket restrictions.
Key Challenges in AI Governance
Lack of a Dedicated AI Law
India does not yet have a clear legal definition of artificial intelligence or a unified framework to assign liability when AI systems cause harm. Regulation remains fragmented across multiple institutions.
Data Availability and Quality
AI systems require large and diverse datasets. India faces challenges related to uneven digitisation, privacy concerns, and limited access to anonymised, high-quality data, particularly in public-interest sectors.
Algorithmic Bias and Opacity
Many AI systems operate as “black boxes,” meaning their decision-making processes are not easily explainable. This raises concerns about discrimination and unfair outcomes in sensitive areas such as credit access and welfare delivery.
Limited Institutional Capacity
Regulatory bodies often lack the technical expertise required to audit complex AI systems or conduct risk assessments.
Dependence on Foreign Technology
India remains dependent on foreign AI models, cloud infrastructure, and semiconductor supply chains, raising concerns related to data sovereignty and strategic autonomy.
What Needs to Change
To strengthen AI governance in India, several measures are required.
- Enacting a standalone, principle-based AI law that clearly defines responsibilities and accountability.
- Adopting a risk-based regulatory approach, with stricter oversight for high-risk AI applications.
- Strengthening data-sharing frameworks with strong anonymisation standards.
- Mandating explainability and algorithmic audits in high-impact sectors.
- Building institutional capacity and establishing a central AI oversight authority.
- Investing in domestic AI infrastructure, research, and semiconductor manufacturing.
Why This Matters
- For citizens, weak AI governance can lead to unfair decisions, privacy violations, and misuse of personal data.
- For governance, unclear accountability can reduce trust in digital systems.
- For policy, fragmented regulation may slow innovation while failing to prevent harm.
- For the future, continued dependence on foreign technology may affect India's strategic autonomy.
What Readers Should Understand
India’s approach to artificial intelligence is still evolving. While important steps have been taken through data protection laws, platform regulation, and sectoral oversight, gaps remain in ensuring coherent and future-ready governance.
How India addresses accountability, ethics, and institutional capacity will determine whether AI becomes a tool for inclusive growth or a source of new risks. Clear rules and responsible oversight are essential for building public trust in AI-driven systems.
