During the United Nations General Assembly’s High-Level Week in New York, more than 200 global figures, including Nobel laureates, former heads of state, and prominent AI scientists, pressed for immediate international measures to regulate artificial intelligence. They released a joint declaration, the ‘Global Call for AI Red Lines,’ presented by Nobel Peace Prize laureate Maria Ressa. The statement cautions that the swift advancement of AI poses “unprecedented dangers” to society and insists that governments establish enforceable regulations by the end of 2026. It asserts that without legally binding commitments, AI could transform human society in ways that jeopardize stability, rights, and even human survival.
This marks the first time Nobel Prize winners from across disciplines have come together on the topic of AI governance. Notable signatories include biochemist Jennifer Doudna, economist Daron Acemoglu, physicist Giorgio Parisi, and AI pioneers Geoffrey Hinton and Yoshua Bengio, both influential figures in modern machine learning. The initiative has also drawn support from civil society, with backing from more than 60 organizations, including the UK think tank Demos and the Beijing Institute for AI Safety and Governance.
Yuval Noah Harari, a prominent author and co-signer of the letter, stressed the critical nature of the issue, stating, “For thousands of years, humans have learned, sometimes the hard way, that powerful technologies can have dangerous as well as beneficial consequences. Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity.” Concerns regarding the misuse of AI have escalated in recent years, driven by instances of harm, including mass surveillance and disinformation campaigns, as well as tragic personal outcomes linked to AI, such as a teenager’s suicide.
Experts caution that the next phase of risks could be even more severe, encompassing large-scale job loss, engineered pandemics, and systemic human rights violations. Political leaders have also joined the cause, with figures such as former Irish president Mary Robinson and Colombia’s Nobel Peace Prize-winning ex-president Juan Manuel Santos voicing support. The campaign is being organized by the University of California, Berkeley’s Center for Human-Compatible AI, The Future Society, and France’s Center for AI Safety. While the declaration does not specify legislative frameworks, it highlights areas where prohibitions may be necessary, including banning lethal autonomous weapons, preventing self-replicating AI systems, and barring AI from nuclear command and control.
Ahmet Üzümcü, former head of the Organisation for the Prohibition of Chemical Weapons, emphasized the importance of taking action: “It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly.” The signatories refer to historical successes in global cooperation, such as treaties banning biological weapons and agreements to eliminate ozone-depleting substances, as proof that enforceable global regulations for AI are feasible. However, they warn that voluntary commitments from AI companies are inadequate, as many corporate pledges remain unfulfilled.
Concerns about the existential risks of AI are not new; in 2023, tech leaders including Elon Musk called for a temporary halt to advanced AI development, comparing the dangers of uncontrolled AI to those of nuclear war and global pandemics. Although some prominent AI executives, such as Sam Altman (OpenAI), Dario Amodei (Anthropic), and Demis Hassabis (Google DeepMind), did not sign this declaration, other senior figures, including OpenAI co-founder Wojciech Zaremba and former DeepMind scientist Ian Goodfellow, have added their support. The coalition’s message is unmistakable: without immediate, enforceable global standards, AI could cross boundaries that humanity cannot afford to ignore.