India regulates intellectual property and data privacy in artificial intelligence along three separate axes. The Digital Personal Data Protection Act 2023 controls personal data in training sets and outputs, the Copyright Act 1957 controls protected expression fed into models, and the Patents Act 1970 controls patentability of the model itself. The three regimes do not collapse into each other.
This article sets out the India position as at April 2026, with comparative reference to the EU AI Act, the GDPR, Section 9(3) of the UK Copyright, Designs and Patents Act 1988, and pending US litigation over AI training data. Within the broader frame of intellectual property law for artificial intelligence, it focuses on the specific interaction with data privacy. It is written for practitioners advising AI businesses and for founders building AI products in India.
Quick answer: the triangle in practice

• Three Indian statutes touch every AI product: DPDPA 2023 (personal data in training or output), Copyright Act 1957 (protected expression in training data), Patents Act 1970 (patentability of the model itself).
• Complying with one does not absorb the others. A dataset can be DPDPA-compliant and still infringe copyright.
• The DPIIT Working Paper on Generative AI and Copyright, Part I (8 December 2025) proposes a mandatory collective-licensing model for AI training. Consultation closed 6 February 2026. It is not operative law.
• The DPDP framework was notified through separate Gazette notifications on 13 November 2025. The Data Protection Board of India is notified as a four-member body, with operational appointments pending; Rule 4 (Consent Manager registration) commences one year from publication; Rules 3, 5–16, 22 and 23 and Sections 3 to 17 of the Act take effect eighteen months from publication.
Mapping IP and data privacy across the AI product stack
An AI product in India touches each regime at a different layer of the stack. The model architecture and any novel training technique sit under the Patents Act, subject to Section 3(k)’s bar on “a mathematical or business method or a computer program per se or algorithms”. Patentability of a specific training method or architecture turns on the technical-effect and technical-contribution test from Ferid Allani v Union of India (Delhi HC, 12 December 2019) and Raytheon Company v Controller General of Patents and Designs (Delhi HC, 15 September 2023), now consolidated in the CRI Guidelines 2025. Under those Guidelines, AI and machine-learning patents divide into AI-assisted inventions, which can clear Section 3(k) if they show technical effect in a tangible application, and AI-generated inventions produced autonomously. Indian patent law as it presently stands does not recognise an AI system as inventor: Section 6 contemplates a “person” claiming to be the true and first inventor.
The training dataset and the outputs generated from it sit under the Copyright Act 1957. Section 14 grants the copyright owner an exclusive reproduction right including “the storing of it in any medium by electronic means”. No express text-and-data-mining exemption exists in Indian copyright law as at April 2026; the DPIIT Working Paper on Generative AI and Copyright, Part I, acknowledges this gap and proposes a legislative response rather than judicial extension of Section 52.
Personal data flowing through the same dataset or output sits under the DPDPA 2023. A company that determines the purpose and means of processing personal data is a Data Fiduciary under Section 2(i); the individual to whom the data relates is a Data Principal under Section 2(j). The Act applies inside India under Section 3(a) and extra-territorially under Section 3(b) when the processing relates to offering goods or services to Data Principals in India. An AI company based abroad serving Indian users is therefore within the Act’s scope.
Training data: where the Copyright Act and DPDPA collide
The training corpus is the single point at which all three regimes can attach. Any scrape, download, or storage of a copyrighted work for model training engages the Section 14(a)(i) reproduction right, which expressly covers “storing of it in any medium by electronic means”. That language is broad enough to make intermediate copying during AI training a live infringement question, though the application of Section 14 and Section 52 to model training remains unsettled in India.
One of the key interim questions in Asian News International v OpenAI, CS(COMM) 1028/2024, is whether intermediate copies fall within the Section 52(1)(b) carve-out for “transient or incidental storage”. The Delhi High Court reserved orders on ANI’s interim application on 27 March 2026 after thirty-two hearings; the issue remains unresolved. Related doctrinal argument sits in the Intepat case note on ANI v OpenAI and the broader AI and big-data copyright analysis.
From copyright to data privacy, the personal data layer intersects the same corpus on different terms. Under DPDPA Section 4, processing personal data requires either consent under Section 6 or a listed legitimate use under Section 7. Neither consent nor any of the nine Section 7 categories neatly covers commercial AI training on a scraped general-purpose corpus.
The research exemption in Section 17(2)(b) is available only where “the personal data is not to be used to take any decision specific to a Data Principal” and the processing is “carried on in accordance with such standards as may be prescribed”. Reliance on Section 17(2)(b) is difficult wherever the model generates outputs, inferences, profiling, ranking, or targeting relating to identifiable individuals; whether that amounts to a “decision specific to a Data Principal” remains judicially unsettled.
Section 3(c)(ii) creates a narrower escape than practitioners sometimes assume. The Act does not apply to personal data made publicly available by the Data Principal herself or by a person under a statutory obligation to publish. Self-published blog posts, social-media posts the user intentionally made public, and statutory government registers fall outside the DPDPA. Personal data extracted by a scraper from sources where the Data Principal did not herself publish remains within the Act. Some publicly available personal data may therefore fall outside the DPDPA through Section 3(c)(ii) while still attracting copyright restrictions under Section 14 of the Copyright Act, which has no equivalent public-availability exception.
The December 2025 DPIIT Working Paper on Generative AI and Copyright proposes a mandatory collective-licensing model, described as “One Nation One Licence One Payment”, under which commercial AI developers would pay a central non-profit copyright society a royalty calibrated to global revenue in exchange for authorised training access. Public consultation closed on 6 February 2026. The Working Paper reflects a policy direction under consultation; it is neither a draft Bill nor an executive rule, and the position of Indian copyright law on AI training remains the one set out in the Copyright Act as it currently stands.
Model outputs, authorship, and inference of personal data
Indian copyright law already contemplates computer-generated works. Section 2(d)(vi) of the Copyright Act 1957 defines the author of a “computer-generated” literary, dramatic, musical or artistic work as “the person who causes the work to be created”. The provision predates generative AI and functions as a textual hook for assigning authorship to a human causer such as the prompt engineer or the model operator. How it applies to modern generative AI systems remains largely untested in Indian higher courts.
The Indian position sits between the UK approach in Section 9(3) of the Copyright, Designs and Patents Act 1988, which assigns authorship to the person making “the arrangements necessary for the creation of the work”, and the US position in Thaler v Perlmutter, where the DC Circuit affirmed in March 2025 that AI-only works are not copyrightable, a holding the Supreme Court left intact when it denied certiorari on 2 March 2026 (No. 25-449). The practical extension of Section 2(d)(vi) surfaces in questions of prompt authorship and ownership in collaborative human-AI creation.
Outputs that reveal personal data attract a second layer of exposure. A model that generates an image of an identifiable individual, an inference about a named person, or a voice resembling a recognisable performer processes personal data within the meaning of Section 2(t) of the DPDPA. For celebrities, Indian courts have treated such outputs as actionable under the common-law right of publicity.
In Titan Industries v Ramkumar Jewellers (Delhi HC 2012), the court protected Amitabh and Jaya Bachchan against unauthorised use of their likenesses in jewellery advertising. Twelve years later, in Arijit Singh v Codible Ventures LLP (Bombay HC, 26 July 2024, 2024 SCC OnLine Bom 2445), Justice R.I. Chagla granted an ex-parte ad-interim injunction reaching AI voice-cloning tools and the Metaverse, protecting the singer’s “name, voice, vocal style, vocal technique, vocal arrangements and interpretations, mannerisms, and signature”. The court observed that unauthorised generative AI content targeting celebrities “shocks the conscience”. The thread runs through the Intepat explainer on personality rights in India.
For non-celebrities, publicity rights are less developed. The principal vehicle is DPDPA Section 12, granting the Data Principal a right of correction, completion, updating, and erasure of personal data processed on consent. A user whose image, voice, or identifying attributes appear in a model output without consent may request erasure; the Data Fiduciary must erase unless retention is required by law.
Trade secrets, ChatGPT prompts, and twin breaches under DPDPA
India has no dedicated trade secret statute. Trade secret protection in India rests on three overlapping mechanisms: the common-law action for breach of confidence, contractual NDAs and employment restrictive covenants, and Section 72 of the Information Technology Act 2000. DPDPA recognises this ecosystem in Section 7(i), which treats processing of employee personal data for “prevention of corporate espionage, maintenance of confidentiality of trade secrets, intellectual property, classified information” as a legitimate use that does not require separate consent.
The Samsung incident of April 2023 illustrates how generative AI prompts create twin exposures. Three Samsung employees pasted proprietary source code, internal meeting notes, and hardware test patterns into ChatGPT, after which the company banned generative AI tools on internal networks.
Once the substantive DPDPA obligations are in force, an identical event inside an Indian Data Fiduciary may trigger liability on two tracks. On the IP track, confidentiality may be seriously compromised, and enforcement of trade-secret or breach-of-confidence claims may become materially harder. Patentability of any underlying invention is at risk under the absolute-novelty standard, depending on what was disclosed and whether the disclosure became enabling and public. On the DPDPA track, if the pasted content also contained personal data of employees, customers, or counterparties, the Data Fiduciary may fail the Section 8(5) obligation to take reasonable security safeguards, and Section 8(6) may then trigger a breach-notification duty to the Board and to each affected Data Principal.
The Schedule to DPDPA prescribes a penalty that may extend to Rs 250 crore for a Section 8(5) failure and Rs 200 crore for a Section 8(6) failure. A single careless prompt therefore sits at the intersection of a compromised trade secret, a possible copyright exposure where third-party material was pasted, and two separate DPDPA penalty heads.
Three compliance priorities for AI builders in India
The three regimes do not collapse; compliance work must therefore run on three parallel tracks.
First, map the dataset. For each source of training data, record whether it contains copyrighted expression (yes, unless the content is in the public domain or licensed), whether it contains personal data (almost always, on the open web), and whether the data was made publicly available by the Data Principal herself. The Section 3(c)(ii) carve-out works only for the third category; the first two obligations persist independently of DPDPA status.
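The mapping exercise above can be sketched as a per-source inventory record. The class and field names below are illustrative assumptions for a compliance register, not a prescribed schema or legal test; each flag should be set on counsel's assessment of the actual source.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    """One row of a hypothetical training-data inventory (illustrative only)."""
    name: str
    has_copyrighted_expression: bool    # Copyright Act 1957, s.14 track
    has_personal_data: bool             # DPDPA 2023 track
    self_published_by_principal: bool   # s.3(c)(ii) carve-out applies only here

    def regimes(self) -> list[str]:
        """List the tracks that attach; the carve-out removes only the DPDPA track."""
        tracks = []
        if self.has_copyrighted_expression:
            tracks.append("Copyright Act 1957 (licence or public domain needed)")
        if self.has_personal_data and not self.self_published_by_principal:
            tracks.append("DPDPA 2023 (s.6 consent or s.7 legitimate use)")
        return tracks

# A crawl of self-published blogs: DPDPA falls away, copyright persists.
blog = DataSource("personal blog crawl", True, True, True)
# A scrape of forum posts mentioning third parties: both tracks attach.
scrape = DataSource("forum scrape", True, True, False)
```

The point the sketch makes is structural: no value of the third flag ever clears the copyright track, which is exactly why a DPDPA-compliant dataset can still infringe Section 14.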
Second, calibrate consent to the right regime. Consent obtained for DPDPA purposes under Section 6 does not authorise copyright reproduction under Section 14. Copyright permission must be obtained separately, either through direct licensing or, should the DPIIT Working Paper proposal mature into law, through whatever collective licensing mechanism the legislature adopts. Model cards, privacy notices, and terms of use should reflect both regimes clearly.
Third, build for the phased enforcement calendar. The DPDP framework was notified through separate Gazette notifications on 13 November 2025. The statutory framework for the Data Protection Board of India is operative, with the Board notified as a four-member body and operational appointments pending; Phase I procedural provisions took effect on publication. Rule 4 (Consent Manager registration) commences one year from publication.
Rules 3, 5–16, 22 and 23 of the DPDPR, with Sections 3 to 17 of the Act, take effect eighteen months from publication. They activate the substantive obligations: notice, consent, security, breach reporting, rights, children’s data, and cross-border transfer. Products launching in 2026 or 2027 should be built for the full regime, since retention choices and model-weight decisions made now will persist into the enforcement window.
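Taking 13 November 2025 as the publication anchor, the commencement arithmetic works out as follows. The helper is an illustrative calendar computation under the assumption that "one year" and "eighteen months" run to the anniversary date; the Gazette notifications, not this sketch, fix the operative dates.

```python
from datetime import date

PUBLICATION = date(2025, 11, 13)  # Gazette publication of the DPDP framework

def add_months(d: date, months: int) -> date:
    """Same day-of-month a given number of months ahead (the 13th never overflows)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Rule 4 (Consent Manager registration): one year from publication
consent_manager_start = add_months(PUBLICATION, 12)   # 13 November 2026
# Rules 3, 5-16, 22, 23 and ss 3-17: eighteen months from publication
substantive_start = add_months(PUBLICATION, 18)       # 13 May 2027
```

On that assumed reckoning, the substantive obligations land in mid-2027, which is why retention and model-weight decisions made during a 2026 build persist into the enforcement window.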
Frequently asked questions
Does the DPDPA apply to publicly available personal data scraped from the web?

DPDPA applies under Section 4 to processing of digital personal data. The Section 3(c)(ii) exclusion covers only data the Data Principal herself made public or data a statutory publisher was obliged to publish. Scraped personal data not placed online by the individual remains within the Act and requires Section 6 consent or a Section 7 legitimate-use basis.

Can AI developers rely on the Section 17(2)(b) research exemption for model training?

Section 17(2)(b) exempts processing for research, archiving, or statistical purposes only where the personal data is not used to take any decision specific to a Data Principal and processing follows prescribed standards. Reliance is difficult wherever the model generates outputs, inferences, profiling, or targeting relating to identifiable individuals; the meaning of “decision specific to a Data Principal” remains unsettled.

Who is the author of an AI-generated work under Indian copyright law?

Section 2(d)(vi) of the Copyright Act 1957 defines the author of a computer-generated work as “the person who causes the work to be created”. How that provision applies to modern generative AI is largely untested in Indian higher courts. The US position is now settled: SCOTUS denied certiorari in Thaler v Perlmutter on 2 March 2026.

What happens when an employee pastes confidential material into a generative AI tool?

Confidentiality is seriously compromised once the prompt enters a third-party platform. Once Sections 8(5) and 8(6) commence (eighteen months from 13 November 2025), if the pasted content contained personal data, the employer faces DPDPA liability with Schedule penalties up to Rs 250 crore and Rs 200 crore respectively.

When do the DPDPA’s substantive obligations take effect?

Phased enforcement under separate Gazette notifications dated 13 November 2025 began on publication, with Phase I activating the Data Protection Board framework (operational appointments pending) and procedural provisions. Rule 4 (Consent Manager registration) commences one year after publication. Rules 3, 5 to 16, 22 and 23 and Sections 3 to 17 of the Act take effect eighteen months from publication.

Does DPDPA compliance also satisfy the Copyright Act?

No. DPDPA regulates personal data; the Copyright Act regulates expressive works. A dataset may be DPDPA-compliant and still infringe Section 14 reproduction rights. Copyright permission must be obtained separately through licensing, and the Section 52 fair-dealing heads do not currently include an express text-and-data-mining exemption for AI training.
Disclaimer
This article discusses the intersection of Indian intellectual property statutes, the Digital Personal Data Protection Act 2023, and pending regulatory instruments including the DPIIT Working Paper on Generative AI and Copyright. DPDPA provisions are subject to phased notification under the DPDPR 2025, and compliance dates cited are current as of April 2026 and subject to further Government of India notifications. The DPIIT Working Paper is a consultation document and is not operative law. Nothing in this article constitutes legal advice for any specific factual situation. Readers facing particular AI product compliance questions should engage qualified counsel.

