Beyond the AI advisory

India’s $1 billion national AI program requires a carefully calibrated approach, one that balances the transformative potential of AI with safety, transparency and accountability measures.

Given the plethora of generative AI tools being deployed at scale today, and the upcoming national elections, there is an urgent need for guardrails to contain the potential harms of AI.

For this reason, an advisory issued by the Indian government on March 1 was timely. It sought to contain the premature deployment of AI models, algorithmic bias and the viral spread of deepfakes. However, it was quickly replaced with a fresh advisory on the back of scathing criticism from startups and investors. 

This unfortunate back-and-forth suggests the need for a more consultative, holistic, and long-term approach to AI governance.

An advisory without clarity

The revised advisory settles the most contentious issue: there is no longer a requirement to obtain the government’s permission to deploy a new ‘foundational model’ or ‘generative AI’ service in India. Instead, platforms need to be transparent about the ‘inherent fallibility and unreliability’ of the output generated by these services, through a combination of labels and consent pop-ups.

However, the new advisory remains unclear on several fronts, specifically whom it applies to, how compliance is to be demonstrated, and who will enforce it.

First, the advisory says it applies to ‘all intermediaries and platforms’, but media reports suggest that it has been sent to eight specific platforms that are classified as ‘significant social media intermediaries’ under the IT Rules (YouTube, Instagram, WhatsApp, X, etc.). If the intention is to enforce the advisory only against social media platforms, that should have been made clear in the advisory. But this interpretation also raises the question: should obligations around fairness, transparency and accountability extend only to the deployment of AI on social media, when these issues can just as easily creep in during the development phase?

Second, the advisory requires platforms to ensure that their service “does not permit any bias or discrimination or threaten the integrity of the electoral process”. On the one hand, it is incumbent on responsible AI platforms to implement guardrails in every jurisdiction where they operate, based on local cultural and legal norms, and AI models must be consistent in the types of prompts they decline to respond to. On the other, fairness is subjective, so complying with these obligations is near impossible in the absence of any technical guidance, evaluation datasets, or safety benchmarks.
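
To make the compliance gap concrete, consider a minimal sketch of the kind of consistency check the advisory presupposes but never specifies. Everything in it is a hypothetical placeholder: the `query_model` callable stands in for any model API, and the refusal markers and prompt pairs are illustrative, not part of any official benchmark.

```python
# Minimal sketch: does a model refuse comparable prompts consistently?
# query_model, the refusal markers and the prompt pairs are all
# hypothetical placeholders, not an official benchmark.
from typing import Callable

REFUSAL_MARKERS = ("i can't", "i cannot", "i am unable")  # crude heuristic

def is_refusal(response: str) -> bool:
    """Very rough check for whether a model response is a refusal."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_gap(pairs: list[tuple[str, str]],
                query_model: Callable[[str], str]) -> float:
    """Difference in refusal rates across the two sides of each prompt
    pair, where each pair differs only in one attribute. A large gap
    means the model treats comparable prompts inconsistently."""
    rate_a = sum(is_refusal(query_model(a)) for a, _ in pairs) / len(pairs)
    rate_b = sum(is_refusal(query_model(b)) for _, b in pairs) / len(pairs)
    return abs(rate_a - rate_b)
```

Even this toy harness exposes the problem: without an agreed evaluation dataset and an agreed notion of what counts as a refusal, two platforms could run the same check and reach opposite conclusions about whether their service “permits bias”.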

Third, the advisory requires platforms to label synthetic media, or embed identifiers in it, so that it can be recognised as AI-generated content. But the advisory goes a step further and requires platforms to identify the user or service that has modified such content. The consensus amongst technical experts is that tracing the identity of a person who created a deepfake may not always be feasible given the current state of technology, especially if the traceability requirement extends to users outside the platform. Measures to suppress the viral distribution of deepfakes should be the focus instead.
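
The labelling half of this requirement is, mechanically, the easy part. The sketch below, which assumes the Pillow imaging library and uses illustrative field names rather than any formal standard such as C2PA, embeds a machine-readable provenance tag in a PNG’s metadata.

```python
# Minimal sketch of embedding a provenance label in a PNG's metadata.
# Assumes the Pillow library; the field names are illustrative, not
# drawn from any formal provenance standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.open("generated.png")             # hypothetical AI-generated image
meta = PngInfo()
meta.add_text("ai_generated", "true")           # illustrative provenance tag
meta.add_text("generator", "example-model-v1")  # hypothetical model identifier
image.save("generated_labelled.png", pnginfo=meta)

# Reading the label back:
labelled = Image.open("generated_labelled.png")
print(labelled.text.get("ai_generated"))        # -> "true"
```

The catch, and the reason labelling alone cannot carry the advisory’s traceability ambitions, is that metadata of this kind disappears the moment a file is re-encoded or screenshotted; identifying the user who created or modified the content is a much harder problem.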

Lastly, from a process point of view, the advisory lacks clear legal backing under the IT Act. It has been presented as non-binding, yet it seeks “compliance with immediate effect” and warns platforms and users that they may be prosecuted under the IT Act and criminal codes. This has left many people wondering whether the advisory will have a chilling effect on innovation in India.

Regulatory capacity 

To be sure, the advisory makes important headway. For example, it requires platforms to notify users that generating unlawful content could result in suspension and prosecution. These measures are easy to implement and create the type of friction necessary to suppress the viral distribution of deepfakes. 

However, there is a need to develop a more holistic governance framework based on the principles of safety, fairness, transparency and accountability, combined with enhanced regulatory capacity.  

The first pillar is to update existing regulations to account for recent advancements in AI. That includes classifying AI systems based on their function and risk of harm, plugging the gaps in laws relating to data and copyright, and promoting trust and accountability through a techno-legal approach.

The second pillar is to develop a multi-agency institutional framework that can harmonise AI governance approaches across sectors, exercise oversight, and provide the local ecosystem with the guidance it needs to be globally competitive.

Until such structures are in place, platforms should be encouraged to self-report their internal benchmarks for evaluation by independent experts. This would help ensure that safety, trust and fairness objectives are met without introducing prescriptive measures that could hamper innovation.