
Predictions for India's AI Policy


In 2024, India’s AI strategy moved from rhetoric to action. 

The IndiaAI Mission, with a relatively meagre budget of $1.3 billion, set an ambitious agenda. But by all accounts, it is being implemented in a holistic, pragmatic and collaborative manner.

AI also became a centerpiece of India's diplomatic efforts. India signed bilateral agreements with the US, UK, UAE and others, and its hosting of the GPAI summit helped cement New Delhi's position as a global leader on AI governance.

I explored India's approach to AI regulation in this paper for Carnegie India, noting that it was cautious and pragmatic, but sometimes confusing. For example, the AI advisory issued in March last year was a moment of collective reckoning, setting the stage for more deliberate action in the coming year.

As we step into 2025, I anticipate that India’s AI trajectory will be shaped by three critical factors — industrial policy, governance, and geopolitics. These three vectors of AI policy closely align with the four pillars of the ‘Viksit Bharat’ mission: infrastructure development, local manufacturing, social inclusion, and modernization of laws. How effectively India navigates these interconnected priorities will determine how quickly the country’s AI ambitions can be realised. 

And that leads to my predictions for how India’s AI policy will shape up in the coming year.

More data access will raise new privacy questions

As India ramps up its AI ambitions, the question is: where will the data required to train new AI models come from? And will it be safe?

I’m not concerned about the first point. India’s already rich data ecosystem has the potential to mushroom on the back of crowd-sourced projects, aggregated data from DPI platforms, and synthetic datasets generated by AI models.

However, the sheer diversity and volume of data generated will raise novel questions about privacy, ownership, value, and compensation.

India's new data protection law will serve as a critical framework to address these concerns, even as its implementing rules are yet to come into force. Some unique features of the law, such as 'consent managers' that act as trusted intermediaries to empower users, are well-suited to the AI context.

Other provisions are likely to create issues. The absence of a data classification framework means that sensitive categories of data, such as financial data, need not be afforded higher protections; the lack of clarity on legitimate-interest exceptions could stifle innovation by requiring consent for low-risk AI use cases (e.g. spam filters); ambiguities around 'vicarious consent' could create confusion in multi-user contexts; and the lack of privacy safeguards for personal data in the public domain leaves ample room for misuse.

In 2025, I expect clearer guidelines to emerge on how personal data can be used to train and deploy AI applications. However, broader debates on non-personal data governance are likely to resurface, including what safeguards are required for anonymised data and the value of such data in the public interest.

Self-regulation and targeted rules will help manage AI risks

AI governance is one area where India’s strategy will be tested most rigorously. The challenge lies in crafting rules that protect against harm without stifling innovation—a delicate balance that India has approached with caution, but where more needs to be done.

Incidentally, MeitY issued a consultation paper this year that reflects my suggestions for AI regulation: the need for risk assessments, regulatory gap analysis, targeted rules to address specific issues, and a whole-of-government approach. How this is implemented in practice remains to be seen.

By mid-2025, I predict that there will be a new set of self-regulatory codes or ‘voluntary commitments’ in place to establish baseline norms for a range of industry actors. A separate law for AI is unlikely at this stage. However, additional rules and expanded guidance for foundation models, transparency and copyright may be introduced. These rules will address specific risks and market failures, though the temptation will be to respond to high-profile incidents.

Sectoral regulators in areas like finance and healthcare will also step in to address high-risk applications in their respective domains, as the RBI is already doing.

Finally, a new AI Safety Institute for India will likely be announced at the Paris Summit in February or soon after. While this announcement will be a significant step, the real work—designing the institution and implementing its charter—will take shape only in the months that follow.

Compute investments will be rationalised

The government announced that it would acquire 10,000 GPUs and make them available to developers through public-private partnerships.

The move to democratise access to AI compute generated excitement but also raised some critical questions: Who gets priority access? Why this specific number of GPUs? Should the government be involved in a traditionally market-driven process?

The reality is that Indian developers are moving to smaller models tailored for specific use cases and low-resource environments. They are not aiming to build AGI, and don’t require unlimited compute resources like their global counterparts.

As I've said before, we need more data on how existing compute resources are being used in India. Furthermore, large compute clusters can emit as much carbon as small cities, making the efficient allocation and use of compute resources an environmental policy priority as well.

In 2025, I expect a shift in focus. While additional compute investments will be required in certain strategic sectors such as defense, space and telecom, market-led approaches are likely to dominate other domains. And as AI compute resources become publicly available, more clarity on who is accessing them, and for what purpose, will help align investment policies with the needs of the ecosystem.

India’s global AI ambitions will be tempered

While India remains an important market for global AI companies, how that translates into geopolitical influence is another matter.

Increasingly, it appears that control over advanced AI will determine the real winner, and currently the United States and China are frontrunners. India is expected to produce its first indigenous AI chip by 2026, which will be a milestone in building secure supply chains. However, scaling up and competing with incumbents on advanced AI remains a significant challenge.

Export control restrictions imposed by the United States create another roadblock to India's AI ambitions. Despite strong bilateral ties with the US, India remains in the 'middle tier' of countries under new export control regulations, highlighting the constraints it has to work with.

To break these dependencies, India will need to align its geopolitical goals with domestic capacity-building—ensuring that its investments in R&D, talent and manufacturing translate into strategic autonomy. Progress on these fronts will be a slow and steady endeavour. For now, India’s geopolitical ambitions in AI remain aspirational, as multiple challenges will likely limit its ability to lead.

Overall, 2025 will be a defining year for India's AI ambitions. It will have to balance innovation with regulation, collaboration with self-sufficiency, and growth with inclusion. The stakes are high. If India gets it right, it won't just be participating in the global AI race; it will be carving out a distinct path forward.