India rarely does anything the way it “ought” to be done. Seasoned negotiators across the world understand this. That India is a nation more convinced than others of its own imperatives is an adage that sits in the pockets of corporate leaders, politicians, and diplomats dealing with any part of the Indian system. India is, as A. J. P. Taylor might have put it were he writing today, the quintessential “troublemaker.” At the risk of being over-deterministic, the exceptionality that underpins the “India Way” can be detected in almost any aspect of India’s approach to global affairs. This is no less true in the fast-evolving universe of geopolitics that appears to shape the present and future of artificial intelligence (AI).
This essay presents a set of loose thoughts on what an India AI Safety Institute might look like if the Indian government were to consider it seriously. It is based on discussions with industry, government, and civil society actors who advocate a “safety first” approach to AI.
Context
The first AI Safety Summit was held at Bletchley Park in November 2023, aimed at addressing frontier risks surrounding AI. India and twenty-eight other jurisdictions, including the European Union, were represented at the summit. The task for the future, within this growing club of actors, was to identify “AI safety risks of shared concern” and to “build risk-based policies.” Following the summit, the government of then Prime Minister Rishi Sunak announced the creation of the first such AI safety institute, in the United Kingdom.
The second global AI Safety Summit took place in Seoul in May 2024, with India in attendance. Ten countries and the EU agreed to “work together to launch an international network to accelerate the advancement of the science of AI safety.” In essence, they committed to investing in safety institutes in their own jurisdictions, aiming for “complementarity and interoperability” within this new network.
There is a growing sense amongst experts that this group may well set out global standards for AI safety. Others suggest that, at best, the safety institute pathway will lead to the creation of transnational norms, which national jurisdictions might be expected to consider as they contemplate or reassess regulations. Either way, it is becoming increasingly clear that the international network of AI safety institutes will, at some level, shape and inform future global approaches to AI safety. The Chinese government, too, according to reports, is “seriously considering” its own version of such an institute.
General Views on AI Safety Institutes
In India, the debate on setting up an AI safety institute splits, as far as I can discern, into two broad camps. First, there are those who underline that creating an institute on India’s terms can help global cooperation. This group also believes that an Indian advance could inspire countries from the Global South to follow suit. Doing so would help ensure that such a process is not dominated by states in the Global North, and that global norms are shaped with broader national contexts squarely at the centre of deliberations. Further, a national safety institute could provide technical expertise to support the testing and assessment of AI systems, skills that are not necessarily available within government.
To be clear, India’s Chair (starting in November 2022) of the Global Partnership on Artificial Intelligence (GPAI), an alliance of twenty-nine countries, has been partly designed to include a wide range of experts from the Global South. While the unanimously agreed GPAI ministerial declaration, published in December 2023, clearly recognized the importance of contributing to cross-border processes such as the Hiroshima AI Process and the G20, it also highlighted AI-related issues critical for countries in the Global South.
These included “the need for equitable access to resources…to benefit from and build competitive AI solutions,” and the need to develop “necessary knowledge, skills, infrastructure.” As those in the negotiations made clear, the question of accessing compute infrastructure is at least as important as focussing on safety and risks, which, according to these individuals, remains the primary preoccupation of countries like the UK and the United States.
An Indian AI safety institute could, in the view of this group of experts and decision-makers, serve as a necessary bridge between the North and the South, even if these broad categories need to be tempered against competing national priorities.
A second group, of both industry representatives and policymakers, remains uncertain about the virtue of an AI safety institute in India. Will this become yet another bureaucracy for regulation? What purpose will it serve, given the increasingly crowded space in India dominated by line ministries and coordinating ministries that have their own views on AI regulation? Is India only considering this because the “West” wants it to go down this path? These are common refrains I have encountered around safety institutes. Those in industry seem to have taken the view that administrative overreach could turn a novel idea into nothing more than another bureaucratic address to engage with in the already trying effort of dealing with rules and regulations.
There is little clarity, as far as I can discern, on the matter of an Indian AI safety institute. Much of the skepticism is driven by existing reference models of these institutes as essentially rule-making bodies, which, in the Indian context, are seen as yet another extension of the regulatory state. Globally, some state-backed safety institutes have been designed as directorates within existing government structures, as in the UK; others appear focussed on standard-setting, as in the U.S.; and still others are tucked into research universities with a focus on trust, as in Singapore.
But why does India need to follow any of these models, or some combination of them?
To regulate the digital universe, India developed a “third way,” an alternative to the approaches of the U.S. and Europe. It developed and assimilated a techno-legal approach to Digital Public Infrastructure (DPI), coding privacy-preserving principles into technology systems rather than merely regulating for them.
An Indian Approach
There is both a geopolitical and a natural rationale, and an opportunity, for India to consider its own globally assimilated, but locally driven, AI institute. It does not even need to be called an institute. With some imagination and effort from the administrative state, India could create an ecosystem for innovation, access, and cutting-edge advances that meets the safety-based needs outlined in the Bletchley Declaration while staying clear of the shadow of domestic regulations. The latter can be left to the line ministries and committees mandated for that purpose.
An “India Safety Institute” could be imagined as both a physical location and a series of connections to innovation-led and existing community-driven ecosystems, each focused on different aspects of AI. The model could be rooted in a top-tier technology university, or in an incubation hub for social-impact projects that bridges the best of industry and the leading lights of the academy.
Such a space could provide labs to those working on varied use cases, such as agriculture, water, or health, from across the world. Technologists from Kenya and Nauru could work in India alongside those from Germany and the United States. Compute could be packaged and provided at a fraction of the financial outlay envisaged under the National AI Mission. Open-source large language models (LLMs) could be customized with unique data sets collected in India and in other countries with similar contexts. The Indian government has done this before in the health and biotech sectors. Philanthropic capital has created such spaces to real effect in India and, increasingly, across the Global South.
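To give the customization point some technical texture, the sketch below shows one way an open-weight LLM might be adapted to a locally collected corpus using parameter-efficient fine-tuning (LoRA), which trains only small adapter matrices while the base model stays frozen. This is a minimal sketch, not a prescription: the base model name, the dataset file, and the hyperparameters are all illustrative assumptions.

```python
# A minimal sketch: adapting an open-weight LLM to a locally collected
# corpus with LoRA. Model name, dataset file, and hyperparameters are
# placeholders, not recommendations.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

BASE_MODEL = "mistralai/Mistral-7B-v0.1"  # any open-weight base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Attach small trainable LoRA adapters; the base weights stay frozen,
# which keeps the compute outlay a fraction of full fine-tuning.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

# A hypothetical local corpus, e.g. agricultural advisories in an
# Indian language, stored as one {"text": ...} record per line.
data = load_dataset("json", data_files="local_corpus.jsonl")["train"]
data = data.map(
    lambda r: tokenizer(r["text"], truncation=True, max_length=512),
    remove_columns=data.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapted-model",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The design choice matters for the access argument made above: because only small adapters are trained, a shared compute facility could serve many such adaptation efforts, each producing a lightweight add-on rather than a full copy of the model.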
Conclusion
While the debate on an Indian safety institute gains momentum, it might be worth breaking away from received images of such institutes as they presently exist in other countries. India is a natural disruptor. There are opportunities to disrupt, innovate, and create something unique in India around this transformational technology. Rejecting an AI safety institute in the forms in which they currently exist ought only to mean embracing an alternative that can be incubated differently in India.