Global AI Safety: How Singapore is Leading the Way
Artificial intelligence (AI) is evolving at a breathtaking speed—bringing incredible benefits, but also deep concerns. From biased decision‑making to the spectre of uncontrollable “superintelligent” machines, governments and researchers are scrambling to ensure AI develops safely and ethically. Amid this turmoil, Singapore is emerging as a global leader in shaping proactive, cooperative AI governance.
Understanding AI Safety
At its core, AI safety covers efforts to ensure AI systems are developed and used in ways that are:
Reliable and robust, minimising failures or unpredictable behaviour;
Fair and transparent, reducing bias and explaining decisions;
Secure, preventing misuse like deepfakes or disinformation; and
Controllable, especially as AI capabilities approach or surpass human performance in certain domains.
Researchers worry that increasingly powerful AI could misalign with human intent or be used in harmful ways without proper safeguards.
The Singapore Consensus: A Blueprint for Cooperation
In April 2025, during the International Conference on Learning Representations (ICLR) held in Singapore, dozens of leading AI institutions—from OpenAI and Google DeepMind to MIT and Tsinghua—signed a joint declaration called the Singapore Consensus on Global AI Safety Research Priorities. The Consensus focuses on three major pillars:
Risk identification in cutting‑edge AI systems;
Safe development techniques to build inherently safer AI; and
Behaviour control methods to guide advanced systems reliably.
This movement is a rare example of researchers from the US, China, Europe, and beyond collaborating rather than competing, demonstrating Singapore’s unique role as a neutral convener.
Concrete Initiatives Driving Change
Singapore’s approach isn’t just rhetorical. It has launched several practical programmes:
Global AI Assurance Pilot: a joint initiative of the AI Verify Foundation and the Infocomm Media Development Authority (IMDA), testing how generative AI systems can be safely evaluated before deployment;
Joint Testing with Japan: part of the AI Safety Institutes network, this tests guardrails on language models in ten non‑English languages, recognising global diversity;
Red‑Teaming Challenge: bringing in experts from across Asia to probe models for cultural biases and vulnerabilities; and
Memoranda of Cooperation with the UK and US: aligning testing frameworks, sharing best practices, and strengthening global standards.
A Harmonised Framework for Trust
Singapore’s Model AI Governance Framework, first issued in 2019 and updated in 2020, advises firms on managing AI risk—with clear principles like explainability, fairness and human oversight. Its flexibility has made it popular with businesses and supports interoperable regulation across borders.
IMDA’s AI Verify toolkit allows voluntary self‑assessment: firms test their systems against international standards and publish their findings, promoting accountability and trust.
Diplomatic Bridge and Regional Catalyst
Thanks to its reputation for neutrality, Singapore is uniquely positioned to bridge geopolitical divides:
It successfully brought together researchers from the US and China to co‑draft the Singapore Consensus;
Discussions under the US–Singapore Critical and Emerging Technology Dialogue aim to align Singapore’s AI governance approach with US frameworks and make regulations globally compatible; and
It co‑sponsors the Global Partnership on AI, an OECD‑hosted coalition of over 25 democracies working on shared AI principles and research.
Why International Cooperation Matters
AI transcends borders: it can be created in one country but deployed globally overnight. Without common guardrails, nations risk harmful fragmentation;
Capacity building matters: by partnering with smaller nations via the ASEAN framework and AI Playbooks, Singapore is helping ensure AI safety is inclusive, not just for the tech‑rich; and
Standards avoid duplication: interoperable frameworks, like those shared with NIST in the US and aligned with the UK, cut compliance costs and enhance clarity for businesses.
The Road Ahead
Singapore’s contributions, from piloting assurance tools to brokering multilateral pacts, show how a small nation can exert outsized influence. Yet challenges remain:
The UK and US did not sign the February 2025 Paris AI Action Summit’s ‘Statement on Inclusive and Sustainable AI’, illustrating persistent divisions; and
Technical red‑teaming and control methods need continuous refinement as AI grows more sophisticated.
Still, these early efforts point to a future where global AI safety isn’t zero‑sum, but a shared investment in humanity’s future.
Explore Global Trekker’s curated documentaries on AI, technology, and innovation, offering fresh perspectives on the people, ideas, and industries shaping our digital future.
Visit the ‘Where to Watch’ page for local listings, and explore weekly articles on Personality & Art, Science & Technology, Business, Destination & Food, or Nature & Environment in this space.
Broaden your mind, open your heart, and inspire your soul with Global Trekker.