- Rishabha Sharma
Navigating the New Frontier: A Critical Look at India’s AI Governance Guidelines

Introduction
The world is at a crossroads with artificial intelligence (AI): it holds great promise but also serious risks. For India, the release of the India AI Governance Guidelines marks an important step in charting how AI will be used, governed and regulated. This article examines the Guidelines’ key ideas, how they fit into existing Indian law, and their implications, especially how India’s approach compares with other countries. The bottom line: India is favouring innovation first and regulation later, but whether that works will depend on strong institutions and clear legal backing.
The Guiding Philosophy: “Sutras” for an Innovation-Friendly Ecosystem
At the core of the Guidelines is a belief in “Innovation over Restraint”. That is one of seven guiding “sutras” (principles), alongside Trust; People First; Fairness & Equity; Accountability; Understandable by Design; and Safety. This signals that India prefers to enable AI growth rather than stifle it with heavy regulation.
Legally, this means that instead of immediately drafting strict laws, the approach relies on soft-law tools (voluntary codes, guidelines, industry self-regulation) in the early phase. That has the advantage of flexibility and the ability to adapt to rapid change, but the downside is that it may leave uncertainty for developers and deployers who would prefer clear rules.
Interaction with Existing Laws: A Patchwork with Gaps
The Guidelines emphasise that many AI risks can be addressed under India’s current legal framework, e.g., the Information Technology Act, 2000 (IT Act), the Digital Personal Data Protection Act, 2023 (DPDP Act), the Bharatiya Nyaya Sanhita, 2023 (BNS) and the Consumer Protection Act, 2019. For example, deepfakes might be regulated under the IT Act or the BNS, and misuse of personal data under the DPDP Act.
But the Guidelines also admit that the existing laws leave important gaps. The gaps identified include:
Classification & liability under the IT Act: Who is responsible (developer, deployer, user) when AI systems actively create or modify content? Existing definitions (e.g., “intermediary”) may not map well to generative-AI systems.
Data protection issues: How do AI model-training practices (involving large datasets that include personal or publicly available data) align with principles like “purpose limitation” in the DPDP Act? And do exemptions, such as the Act’s carve-out for personal data made publicly available by the data principal, cover training uses?
Copyright/training-data challenge: The Guidelines recognise that India’s “fair dealing” exceptions under the Copyright Act, 1957 may be too narrow for large-scale AI training, and suggest India may move towards a broader “Text & Data Mining (TDM)” exception, similar to what the EU and Japan have done.
This candid recognition of gaps is one of the strong points of the Guidelines: it shows India is not pretending everything is covered, but is signalling that targeted legislative amendments may be required.
Accountability Framework: Graded Liability & Institutional Oversight
Because accountability is one of the core “sutras”, the Guidelines propose a graded liability system: liability would vary with an actor’s role (developer, deployer or user), the risk level involved and the due diligence observed. This is more nuanced than simply saying “platforms are always responsible”.
To make this happen, the Guidelines include a proposed institutional architecture:
An AI Governance Group (AIGG) — a high-level inter-agency body for strategy coordination.
A Technology & Policy Expert Committee (TPEC) — advisory body for technical/policy inputs.
An AI Safety Institute (AISI) — technical body for risk assessment, testing and standard-setting.
The idea is to keep governance dynamic, informed by technological change and expert input. One caution, however: the legal authority of these bodies is unclear. Unless given clear statutory powers, they might remain advisory rather than enforcement bodies.
India in Global AI Governance Landscape
India’s approach is positioned as a middle path between stricter and lighter regulatory models:
European Commission / EU: Has a comprehensive, risk-based regulatory law (the EU Artificial Intelligence Act) with strong ex-ante obligations. India is less prescriptive for now.
National Institute of Standards and Technology (NIST) / US: Relies more on voluntary frameworks and sector-specific rules. India shares the pro-innovation, light-touch flavour, but adds its own institutional coordination.
UK Government: Uses existing regulators in a context-based way rather than creating a new AI regulator, somewhat similar to India’s approach.
By taking this flexible, innovation-friendly stance, India aims to tailor AI governance to its large, diverse development context and to deploy AI, especially via its digital public infrastructure, for inclusive growth.
Conclusion
The India AI Governance Guidelines are a smart, pragmatic blueprint: rather than rushing into heavy regulation, they focus on enabling innovation while building trust, identifying gaps and proposing flexible oversight. Their strength lies in the frank acknowledgment of legal gaps and in the notion of graded liability backed by institutional coordination.
But success will depend on:
(a) How quickly the legal ambiguities (in the IT Act, DPDP Act, Copyright law) are addressed by targeted amendments.
(b) Whether the proposed institutions (AIGG, TPEC, AISI) are properly empowered, resourced and given enforcement capability, not just advisory roles.
(c) How industry, startups and regulators actually adopt voluntary frameworks, transparent reporting, grievance redressal and human-centred design during this “dynamic evolution” phase.
For India’s business, legal and tech communities, the Guidelines give direction, but the next few years will likely be a period of adjustment, experimentation and change, with compliance-by-design, transparent audits and governance-by-principle becoming increasingly important.