AI Governance · 11 min read · 15 September 2025

India's DPDP Act and AI: What Every Business Deploying AI Needs to Know

India's Digital Personal Data Protection Act 2023 has significant implications for any organisation using AI to process data about Indian citizens. Most businesses have focused on data storage. Fewer have considered how the Act applies to AI systems — where the obligations are more complex and the risks higher.


Agraj Agranayak

Founder & CEO, Imagine Works

Key Takeaways

  • India's DPDP Act (Act No. 22 of 2023) received Presidential assent on 11 August 2023 and applies to all digital personal data processing about Indian citizens.
  • Maximum penalties reach ₹250 crore (approx. €27 million) for significant violations.
  • AI training on personal data, automated profiling, and AI-based decisions affecting individuals all create distinct DPDP obligations separate from service delivery consent.
  • Indian organisations are the Data Fiduciary even when using third-party AI tools — DPDP obligations flow down through Data Processor contracts.
  • Organisations with EU operations face overlapping DPDP and EU AI Act obligations — a consolidated compliance programme is significantly more efficient.

India's Digital Personal Data Protection Act, 2023 (Act No. 22 of 2023) received Presidential assent on 11 August 2023. It establishes India's first comprehensive legal framework for the protection of digital personal data. While the detailed Rules are still being finalised by the Ministry of Electronics and Information Technology (MeitY), the Act's framework is in force and its implications for AI-deploying organisations are significant.

Most compliance discussions around the DPDP Act have focused on data storage, consent management, and breach notification. These are important. But the more complex obligations — and the higher risk exposure — arise in organisations using AI systems that process personal data about Indian citizens.

The Core Framework

The DPDP Act operates on a consent-based model. A Data Fiduciary (any entity that determines the purpose and means of processing personal data) must obtain free, specific, informed, unconditional, and unambiguous consent from the Data Principal (the individual whose data is processed) before processing their personal data.

Key obligations for Data Fiduciaries include:

  • Purpose limitation: personal data may only be processed for the specific purpose for which consent was obtained
  • Data minimisation: only the data necessary for the stated purpose may be collected
  • Accuracy: reasonable steps must be taken to ensure data is accurate and up to date
  • Storage limitation: data must not be retained beyond the period necessary for the stated purpose
  • Breach notification: significant data breaches must be reported to the Data Protection Board and affected Data Principals
  • Grievance redressal: a mechanism must be in place for Data Principals to raise complaints
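The purpose-limitation obligation above has a direct operational consequence: every processing activity needs a live, purpose-matching consent record before it runs. A minimal sketch of that gate, in Python, is shown below; `ConsentRecord` and `may_process` are illustrative names, not part of any statutory or standard API.

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class ConsentRecord:
    """One consent grant by a Data Principal (fields are illustrative)."""
    principal_id: str
    purpose: str          # the specific purpose consent was obtained for
    granted_on: date
    withdrawn: bool = False


def may_process(records: list[ConsentRecord], principal_id: str, purpose: str) -> bool:
    """Purpose limitation: allow processing only under a live consent
    record whose purpose exactly matches the intended activity."""
    return any(
        r.principal_id == principal_id
        and r.purpose == purpose      # exact purpose match, not "any consent on file"
        and not r.withdrawn
        for r in records
    )


records = [ConsentRecord("dp-001", "service-delivery", date(2024, 3, 1))]

# Service delivery is covered by the consent on file...
print(may_process(records, "dp-001", "service-delivery"))  # True
# ...but reusing the same data for model training is a separate purpose.
print(may_process(records, "dp-001", "model-training"))    # False
```

The key design point is the exact purpose match: a consent record for one purpose never implicitly authorises another, which is exactly the gap the later sections identify in legacy consent frameworks.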

Maximum penalties under the Act reach ₹250 crore (approximately €27 million at current exchange rates) for significant violations, with lower penalty tiers for specific breaches such as failure to notify a breach or failure to implement security safeguards.

How AI Systems Create DPDP Obligations

Compliance Reference

DPDP Act — Where AI Systems Create Obligations

Act No. 22 of 2023 · Presidential assent 11 August 2023 · Max penalty ₹250 crore

  • AI training on personal data (high risk): consent for training is distinct from consent for service delivery; legacy consent frameworks almost never cover AI training explicitly.
  • AI-based profiling and segmentation (high risk): profiling for creditworthiness, preferences, or risk scores requires its own consent basis, separate from transaction processing consent.
  • Automated decision-making (medium risk): Data Principals have the right to know about automated processing that affects them; transparency disclosures must cover how AI systems use their data.
  • Third-party AI tools, SaaS (medium risk): the organisation remains the Data Fiduciary when using third-party AI tools; DPDP obligations must flow down via Data Processor contracts.

At a glance: maximum penalty ₹250 crore (≈ €27 million) · breach notification required to the Board and affected Data Principals · strict purpose limitation per consent obtained · Rules pending (MeitY draft, 2025).

Source: Digital Personal Data Protection Act, 2023 (Act No. 22 of 2023). Organisations with EU operations face overlapping DPDP and EU AI Act obligations.

AI systems interact with the DPDP Act at several points that are not always obvious in initial compliance assessments.

AI training on personal data. If an organisation trains or fine-tunes an AI model using personal data about Indian citizens — customer records, employee data, usage logs — the organisation is a Data Fiduciary for that processing activity. Consent for training use is distinct from consent for service delivery. Most legacy consent frameworks do not cover AI training explicitly.

AI-based profiling and segmentation. Systems that analyse personal data to infer preferences, risk scores, creditworthiness, or behaviour patterns are processing personal data for a purpose that requires its own consent basis. A model that segments customers for targeted communications is not covered by the same consent as a model that processes transaction data for fraud detection.

Automated decision-making. The DPDP Act grants Data Principals the right to know about automated processing that affects them. While India's Act does not yet have the explicit right to contest automated decisions found in the EU's GDPR, the transparency obligations under the Act require Data Fiduciaries to provide meaningful information about how personal data is being used — including in AI systems.

Third-party AI tools. When an organisation uses a third-party AI tool (SaaS-based LLMs, vendor-provided analytics platforms) and provides it with personal data about Indian citizens, the organisation remains the Data Fiduciary. The third party is a Data Processor. Contractual obligations must flow down: the Data Processor must process data only as instructed, maintain security standards, and not engage further sub-processors without approval.

Practical Implications for AI-Deploying Organisations

Organisations using AI in HR (recruitment screening, performance assessment), customer analytics, credit assessment, or healthcare should treat the DPDP Act as requiring a full AI data audit — not just a data storage review.

The audit should identify:

  • Which AI systems process personal data about Indian citizens
  • What consent basis exists for each processing activity, including training data
  • Whether purpose limitation and data minimisation obligations are met
  • What automated processing affects Data Principals and whether disclosure obligations are satisfied
  • How Data Principal rights (access, correction, erasure, grievance) are operationalised for AI-generated outputs
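The audit items above amount to an inventory with a few mandatory fields per AI system. One hypothetical way to keep that inventory machine-checkable is sketched below; `AISystemRecord` and `audit_gaps` are illustrative names invented for this sketch, and the fields are a simplification of the checklist, not a complete DPDP compliance model.

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """One row of a DPDP AI data audit inventory (illustrative fields)."""
    name: str
    processes_indian_personal_data: bool
    # processing activity -> is a consent basis documented for it?
    consent_basis: dict[str, bool] = field(default_factory=dict)
    automated_decisions_disclosed: bool = True


def audit_gaps(systems: list[AISystemRecord]) -> list[str]:
    """Flag in-scope systems that lack a documented consent basis for
    any activity, or that make undisclosed automated decisions."""
    gaps = []
    for s in systems:
        if not s.processes_indian_personal_data:
            continue  # outside the scope of this DPDP audit
        for activity, documented in s.consent_basis.items():
            if not documented:
                gaps.append(f"{s.name}: no consent basis recorded for '{activity}'")
        if not s.automated_decisions_disclosed:
            gaps.append(f"{s.name}: automated decision-making not disclosed")
    return gaps


inventory = [
    AISystemRecord(
        name="credit-scoring-model",
        processes_indian_personal_data=True,
        consent_basis={"inference": True, "training": False},
        automated_decisions_disclosed=False,
    ),
]
for gap in audit_gaps(inventory):
    print(gap)
```

Note that training appears as its own activity in `consent_basis`: as the sections above emphasise, consent for service delivery or inference does not cover training, so the audit must record them separately.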

Organisations with operations spanning India and the EU face the additional complexity of operating under both DPDP and the EU AI Act simultaneously. The frameworks have overlapping but non-identical requirements — a consolidated compliance programme is significantly more efficient than running two parallel workstreams.

The Rules Are Coming

MeitY's implementing rules under the DPDP Act were in draft consultation as of early 2025. When finalised, they will clarify several areas of current ambiguity, including cross-border data transfer conditions, the specific categories of Significant Data Fiduciaries (who face enhanced obligations), and consent manager framework requirements.

Organisations that have built DPDP compliance structures before the Rules are finalised will need to review those structures once the Rules are published. This is expected: the prudent approach is to build to the Act's framework now rather than wait for Rules that may impose additional obligations.

Imagine Works advises organisations on AI governance and regulatory compliance, including DPDP Act readiness. Get in touch to discuss your data and AI compliance posture.

Related Service

AI Governance & Risk Design

Designing the governance framework and risk architecture that keeps your AI systems compliant, auditable, and board-ready — before regulation forces the issue.

Explore this service