India’s AI Safety Institute: A Potential Solution
• India’s AI Safety Institute should tap into parallel international initiatives, such as the G20 and the Global Partnership on Artificial Intelligence (GPAI).
• The institute should focus on raising domestic capacity, capitalizing on India’s comparative advantages, and integrating with international initiatives.
• The Global Digital Compact, a result of the Summit of the Future, identifies multi-stakeholder collaboration, human-centric oversight, and inclusive participation of developing countries as essential pillars of AI governance and safety.
• The institute should engage with the Bletchley Process on AI Safety, which the U.K. and South Korea have advanced through successive AI safety summits.
• The Bletchley Process has produced a network of AI Safety Institutes that share information proactively without acting as regulators.
• The institute should aim to transform AI governance into an evidence-based discipline, leveraging the expertise of governments and stakeholders from around the world.
• India should establish an AI Safety Institute that integrates into the Bletchley network of safety institutes, operating exclusively as a technical research, testing, and standardisation agency.
• The institute could champion perspectives on risks relating to bias, discrimination, social exclusion, gendered risks, labour markets, data collection, and individual privacy.
• If done right, India could become a global steward of forward-thinking AI governance, one built on multi-stakeholder and intergovernmental collaboration.