State of Play of Artificial Intelligence in India
Authors: Ayan Sharma & Bharath Gangadharan
1. Introduction:
Artificial Intelligence (AI) has gone mainstream across the world, bringing with it both enormous opportunities and potential risks in deployment, and India is no exception. The country's burgeoning high-tech labour force and its ability to attract millions of dollars in foreign direct investment (FDI) have put it on pace to become a major player in the global technology supply chain. This has resulted in the permeation of AI technologies into numerous industry sectors, including healthcare, IT, education, manufacturing and logistics, prompting the Indian government to take steps towards the recognition and regulation of AI.
2. Current Regulatory Landscape:
Presently, India lacks any AI-specific legislation and instead relies on existing IT, consumer protection and IPR legislation, sectoral regulations, government advisories and guidelines, and the interpretations thereof by Indian courts to govern the development and deployment of AI technology.
I) Intellectual Property Legislations: The Copyright Act, 1957 is key to questions of AI training and output ownership, as training datasets will likely include protected works and outputs could be considered “derivative” works. “Fair dealing” is a key defence to copyright infringement; however, what constitutes “fair dealing” is determined on a case-by-case basis. Judicial precedents in RG Anand v. Delux Films & Ors.[1] and The Chancellor Masters and Scholars of the University of Oxford v. Rameshwari Photocopy Services[2] hold that transformative use is key to the idea-expression dichotomy, with a certain degree of reproduction permitted if the purpose either constitutes “fair dealing” or benefits from the limited, specific exemptions under the Copyright Act[3]. The use of copyrighted material for AI training will therefore have to contain “transformative” elements to attract the defence of fair dealing. Unlike the doctrine of “fair use” in the United States of America, “fair dealing” in the Indian context is relatively limited in scope. Further, Indian courts are yet to explicitly extend this defence to AI training, and the traditional frameworks for copyright infringement are ill-equipped to deal with the issues raised by the advent of AI technology, such as the large datasets used in AI training, the collection-tokenisation-training pipeline, and the rapid pace of advances in AI development and deployment.
The Delhi High Court is currently assessing these issues in ANI v. OpenAI[4]. The issues include whether: a) storing copyrighted material constitutes infringement; b) output generated from training data would be considered derivative work and hence an infringement; c) the “fair dealing” exemption applies; and d) Indian courts have jurisdiction if servers are situated overseas.
The Court's ruling will likely set the tone for the regulation of AI training. In the absence of such a precedent, however, balancing the interests of the authors of copyrighted content with those of the owners of AI models remains a distant prospect, and the most probable outcome in such disputes is a monetary settlement between the parties.
II) Information Technology Legislation: While AI is typically trained on non-personal data, data scraping can lead to the collection and processing of personal data as well. Deployment of AI also results in the collection and processing of new data from users for the provision of tailored outputs.
Personal data collection and use are governed by the Information Technology Act, 2000 (IT Act); the IT (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules); and the recent Digital Personal Data Protection Act, 2023 (DPDPA), whose provisions are yet to be brought into force. While the IT Act and SPDI Rules require express consent from data principals for the collection, processing or transfer of sensitive personal data or information, the DPDPA goes further by dropping the threshold of “sensitive personal data or information” and applying the requirement to all personal data. It also requires the consent of the data principal to be free, specific and informed.
The DPDPA allows for the processing of personal data on two grounds: consent-based, or non-consent-based for “certain legitimate uses” specified in the Act. These include compliance with legal obligations, fulfilment of statutory duties by government agencies, medical emergencies, threats to public health, employment purposes, etc. This conditional permissibility could potentially apply to the development and use of AI and AI-powered tools as well. Distinguishing between the stages of processing personal data in the context of AI tools, namely the collection of personal data, structuring, training, user input/prompts and the generation of output, and determining the purpose of processing at each stage, thus becomes a key concern.
When consent is the basis for processing, obtaining explicit and informed consent through clear affirmative action from data principals will be mandatory. Merely updating the privacy policy to pre-select opt-in options could prove insufficient under the provisions of the DPDPA. Data principals must have the chance to give their consent clearly and unambiguously, and withdrawing consent must be as easy as giving it.
AI tools bring their own set of challenges in complying with these consent requirements, particularly the “black-box” problem, which undermines transparency, one of the core principles underlying the DPDPA's provisions. The opaqueness of AI processing and decision-making impedes appropriate disclosures about personal data handling, hindering informed consent. A further challenge is the right to withdraw consent, which requires the deletion of personal data unless retention is legally mandated; retention policies must therefore align with withdrawal requests.
Clearly articulating the relationship between data fiduciaries and data processors with regard to AI development and deployment entities will become increasingly important, as clarity in such operational relationships would help allocate specific responsibilities, liabilities and risks.
Notably, exemptions for the processing of personal data for “research, archiving or statistical purposes” are allowed under Section 17(2)(b) of the DPDPA. This is, however, limited by the proviso mandating that such processing not be used to take any decision specific to the data principal and that it be carried out in accordance with the prescribed standards. While AI training could theoretically qualify as research, the final determinants would be the standards prescribed under the applicable legislation and how developers and deployers meet the requirement that no decision specific to an individual data principal is made.
Furthermore, the DPDPA does not apply to personal data made publicly available by the data principal, or by any other person under a legal obligation to make such personal data publicly available. Accordingly, the DPDPA and the obligations thereunder would not apply to personal data sourced by AI applications and tools through web scraping of publicly available online resources. The DPDPA will also not apply if the personal data is anonymised before processing.
In January 2025, the Ministry of Electronics and Information Technology (MeitY) released the Draft Digital Personal Data Protection Rules, 2025[5] (DPDP Rules) for public consultation, giving a preview of the regulatory expectations that organisations may need to align with while collecting and processing the personal data of individuals. Rule 15 of the DPDP Rules provides an exemption for the processing of personal data for research, archiving or statistical purposes, subject to adherence to standards that ensure data is used lawfully, without making individual-specific decisions, and with responsible data governance practices as specified in Schedule 2 of the DPDP Rules. However, the DPDP Rules are still under consideration by the Government and are yet to be notified. As the adoption of AI tools surges across industries in India, it will be crucial to strike a balance between fostering innovation and addressing the pressing concerns of data privacy.
The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 impose due diligence obligations on intermediaries (including AI companies) to prevent the hosting of infringing, obscene or impersonating content. While safe harbour provisions would extend to these companies, it is unlikely that blanket protection will apply given AI's ability to generate deepfakes and its potential to spread misinformation. In March 2024, MeitY issued and subsequently revised an advisory[6] regarding bias restriction, the labelling of AI-generated models and content, and the consequences of dealing with unlawful information. The ambiguity around the legal provision on the basis of which MeitY issued the advisory raised questions about its enforceability and binding value. Furthermore, the absence of clear criteria for what makes an AI model “unreliable” or “under-tested” makes compliance difficult.
III) Sectoral Regulations: While comprehensive central legislation remains at a developmental stage, sectoral regulators in India have issued targeted circulars and guidelines mandating disclosures specific to their respective markets and concerns.
a) SEBI: The Securities and Exchange Board of India has issued several circulars that specify requirements and regulations for the use of AI applications and systems. These include:
· SEBI Circular dated 4th January 2019 imposes reporting requirements on intermediaries offering or using AI applications and systems.
· SEBI Circular dated 9th May 2019 imposes reporting requirements on all entities in the mutual fund ecosystem offering or using AI applications and systems.
· SEBI Circular dated 27th June 2024 requires all mutual funds using AI systems to report such usage on a quarterly basis, to ensure full disclosure.
· SEBI Regulations dated 16th December 2024 and Guidelines dated 8th January 2025 require Investment Advisers to disclose the use of AI in their operations, irrespective of its scale and extent.
· SEBI Regulations dated 16th December 2024 and Guidelines dated 8th January 2025 require Research Analysts to disclose the use of AI tools, irrespective of scale and scenario, and make them solely responsible for the security, confidentiality and integrity of client data.
· SEBI Regulations dated 10th February 2025 make intermediaries using AI tools, irrespective of scale and scenario, solely responsible for the privacy, security and integrity of stakeholders' data, the output arising from such tools, and compliance with applicable laws.
b) Reserve Bank of India: In August 2025, the Reserve Bank of India released a report proposing a Framework for Responsible and Ethical Enablement of AI (FREE-AI) in the financial sector[7], which urges lawmakers to legislate in a manner that balances innovation and risk. It prescribes seven guiding “sutras” for AI adoption: i) trust is the foundation; ii) people first; iii) innovation over restraint; iv) fairness and equity; v) accountability; vi) understandable by design; and vii) safety, resilience and sustainability.
The report makes 26 recommendations under six strategic pillars: infrastructure, capacity, policy, governance, protection and assurance. It further recommends the establishment of shared infrastructure by regulated entities to democratise access to data and computing, along with the creation of an AI Innovation Sandbox.
c) Department of Telecommunications: In July 2023, the DoT, through its Telecommunication Engineering Centre (TEC), released a new standard for the Fairness Assessment and Rating of Artificial Intelligence Systems, outlining procedures for assessing and rating AI systems for fairness[8].
3. Recent Legislative Developments:
In the latter half of October 2025, the Indian Government took its first steps towards regulating AI and curbing its misuse on the internet. MeitY has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021[9] (IT Rules). The draft amendments require social media platforms to mandate that their users declare any AI-generated or AI-altered content. While the obligation to label content will rest on social media intermediaries, the companies may flag accounts of users who violate the law. Furthermore, in order to clearly label AI content, companies will need to visibly display AI watermarks and labels across at least 10% of the duration or size of the content. Social media firms may lose their safe harbour protection if violations are not flagged proactively.
I) Key Features of the Draft Amendments:
a) The draft amendments to the IT Rules seek to regulate synthetically generated content such as deepfakes. They introduce the concept of “synthetically generated information” and impose new labelling, identification and due diligence requirements on intermediaries, particularly significant social media intermediaries (SSMIs).
b) The draft amendments define “synthetically generated information” as content that is artificially or algorithmically created, modified or altered using a computer resource, in a manner that makes it appear authentic or true. Under the draft framework, intermediaries that provide tools or resources to create or modify synthetic content will be required to label such information clearly or embed a unique metadata or identifier that reveals its synthetic nature.
c) For visual content, the label should occupy at least 10% of the display surface, while for audio, the declaration should cover at least 10% of the duration (a minimal sketch of this arithmetic, together with an illustrative metadata marker, follows at the end of this list). Intermediaries must also ensure that such labels or metadata cannot be removed or suppressed.
d) The draft amendments further clarify that all existing references to “information” under the IT Rules will now also extend to synthetically generated information. This means that deepfakes and other AI-generated content will now be subject to the rules that govern harmful or unlawful online content. Under Rule 3(1)(b) of the IT Rules, platforms must now also make efforts to prevent users from posting harmful AI-generated content or deepfakes. Under Rule 3(1)(d) of the IT Rules, platforms will have to take down any such AI-generated content or deepfakes if ordered to do so by the government or a court. Under Rules 4(2) and 4(4) of the IT Rules, SSMIs should now be able to trace and monitor such AI-generated content and deepfakes when required.
e) The draft amendments add a new proviso to Rule 3(1)(b) of the IT Rules. This rule lists the categories of content (e.g., defamatory, obscene, harmful to a child) that intermediaries must make “reasonable efforts” to prevent users from hosting or sharing. The new proviso clarifies that any removal or disabling of access to any information, including synthetically generated information, data, or communication links, “shall not amount to a violation” of Section 79(2) of the Information Technology Act, 2000, which pertains to the safe harbour doctrine that offers intermediaries conditional immunity from third-party content.
f) The draft amendments also provide for additional due diligence obligations for SSMIs, such as X, Meta, or YouTube. These platforms must obtain a declaration from users at the time of upload as to whether the content being published is synthetically generated.
g) SSMIs are also required to use “reasonable and appropriate technical measures,” including automated tools, to verify such declarations. If a piece of content is found to be synthetic, platforms must prominently display a label or notice indicating that it has been algorithmically generated.
h) The draft amendments clarify that intermediaries will be considered in violation of their due diligence obligations if they knowingly permit, promote, or fail to act upon the publication of synthetically generated content that misleads or deceives users, which could lead to the loss of their safe harbour protections.
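For illustration only, the sketch below works through the arithmetic behind the 10% thresholds in item c) and pairs it with a stand-in metadata marker of the kind contemplated in item b). This is a minimal sketch under stated assumptions: the draft amendments do not prescribe any particular embedding mechanism, and the Pillow PNG text chunk and the key name used here are hypothetical placeholders, not a compliance method.

```python
# Illustrative sketch of the draft 10% labelling thresholds and a
# placeholder metadata marker. Thresholds follow the draft amendments as
# described above; the PNG text chunk is an assumption for illustration,
# since the draft does not specify an embedding mechanism.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def min_label_area(width_px: int, height_px: int, coverage: float = 0.10) -> int:
    """Smallest label area (in pixels) satisfying the 10%-of-display-surface rule."""
    return int(width_px * height_px * coverage)


def min_declaration_seconds(duration_s: float, coverage: float = 0.10) -> float:
    """Shortest audio declaration satisfying the 10%-of-duration rule."""
    return duration_s * coverage


def embed_synthetic_marker(img: Image.Image, out_path: str) -> None:
    """Embed an illustrative 'synthetic content' marker as a PNG text chunk."""
    meta = PngInfo()
    # Hypothetical key name; the rules do not prescribe one.
    meta.add_text("SyntheticallyGeneratedInformation", "true")
    img.save(out_path, pnginfo=meta)


if __name__ == "__main__":
    # A 1920x1080 frame: the label must cover at least 207,360 px^2,
    # e.g. a full-width banner strip 108 pixels tall.
    print(min_label_area(1920, 1080))       # 207360
    # A 60-second audio clip: the declaration must span at least 6 seconds.
    print(min_declaration_seconds(60.0))    # 6.0
    embed_synthetic_marker(Image.new("RGB", (1920, 1080)), "labelled.png")
```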
II) Potential Impact and Concerns:
a) The requirement for SSMIs to “verify” user declarations through “technical measures” is undefined and leaves the responsibility of implementation with platforms. Whether technical measures alone can adequately regulate deepfakes is also doubtful. Furthermore, the rules do not clarify what level of AI-generated content is permissible, or where liability will lie if a detection system fails to identify synthetic media or incorrectly flags authentic content.
b) Draft Rule 3(3) creates a very wide scope for what constitutes a creator platform. The rule applies to any intermediary that “enable[s]... the creation, generation, modification, or alteration” of synthetic information. It is unclear whether this covers professional design software (e.g., Adobe Photoshop), basic video or photo-editing tools, or even in-app filters, or whether it applies only to end-to-end generative-AI use cases such as text-to-image generation. The broader reading could lead to the over-regulation of benign uses of AI, such as altering a profile picture via AI before posting it on a job-seeking platform.
c) The language of the draft amendments seems to extend the safe harbour doctrine to AI companies, i.e., it covers both social media intermediaries that host or transmit such content and companies that generate such content. Entities such as OpenAI, Gemini, or Meta AI do not function as intermediaries under the IT Act's definition; however, they could now claim the same protections. This creates an inconsistency, as such entities could potentially acquire immunity for content generated using their models, reducing the accountability of AI companies while unreasonably increasing the burden on intermediaries that did not actively contribute to the generation of the synthetic content.
Thus, it is imperative to note that such regulatory safeguards, though well-intentioned, must be carefully and diligently designed to prevent the misuse of these provisions in ways that could inadvertently restrict legitimate expression or artistic, satirical, and creative uses of synthetic media. Balancing accountability and authenticity with freedom of speech will be key to the success of any such framework. The draft amendments are open for public consultation till 6th November 2025.
4. Conclusion:
While AI regulation develops and coalesces in India, certain aspects of regulation are being accelerated by market forces and pressures. An initial battleground pits human and AI actors against each other in the field of content creation: AI needs to be fed data in order to be creative, and what protection is to be given to AI-generated content becomes a pertinent issue. Additionally, as AI transcends national boundaries, India's pursuit of AI sovereignty, tailored to serve its distinctive socio-economic landscape, demands a regulatory framework that is robust in oversight and aligned with evolving global standards. Furthermore, as the fifth-largest economy growing at a rapid pace, India must account for the potential risks of AI and remain conscious that, without effective regulation, advancements in AI development may exacerbate inequalities and widen the digital divide.
[1] (1978) 4 SCC 118
[2] Delhi High Court Order dated 16th September 2016 in CS (OS) 2439/2012
[3] Section 52 of the Copyright Act, 1957
[4] Delhi High Court Order dated 19th November 2024 in C.S. (Comm.) 1028 of 2024
[5] https://www.meity.gov.in/static/uploads/2025/02/f8a8e97a91091543fe19139cac7514a1.pdf
[6] https://www.meity.gov.in/static/uploads/2024/02/9f6e99572739a3024c9cdaec53a0a0ef.pdf
[7] https://rbidocs.rbi.org.in/rdocs/PublicationReport/Pdfs/FREEAIR130820250A24FF2D4578453F824C72ED9F5D5851.PDF
[8] https://www.tec.gov.in/pdf/SDs/TEC%20Standard%20for%20fairness%20assessment%20and%20rating%20of%20AI%20systems%20Final%20v5%202023_07_04.pdf
[9] https://www.meity.gov.in/static/uploads/2025/10/9de47fb06522b9e40a61e4731bc7de51.pdf
(M): +91 9211075725
Elements Legal© 2025. All rights reserved.
20 GF, World Trade Centre,
Barakhamba Road, New Delhi - 110 001
