
Seoul AI summit opens with companies pledging to develop AI safely

16 AI companies made voluntary commitments to accountable governance and public transparency
A screen shows an announcement of the AI Seoul Summit in Seoul, South Korea, Tuesday, May 21, 2024. World leaders are expected to adopt a new agreement on artificial intelligence when they gather virtually Tuesday to discuss AI’s potential risks but also ways to promote its benefits and innovation. (AP Photo/Ahn Young-joon)

The world’s leading artificial intelligence companies pledged at the start of a mini summit on AI to develop the technology safely, including pulling the plug if they can’t rein in the most extreme risks.

World leaders are expected to hammer out further agreements on artificial intelligence as they gather virtually Tuesday to discuss AI’s potential risks but also ways to promote its benefits and innovation.

The AI Seoul Summit is a low-key follow-up to November’s high-profile AI Safety Summit at Bletchley Park in the United Kingdom, where participating countries agreed to work together to contain the potentially “catastrophic” risks posed by breakneck advances in AI.

U.N. Secretary-General Antonio Guterres told the opening session that, seven months after the Bletchley Park meeting, “we are seeing life-changing technological advances and life-threatening new risks — from disinformation to mass surveillance to the prospect of lethal autonomous weapons.”

The U.N. chief said in a video address that there needs to be universal guardrails and regular dialogue on AI. “We cannot sleepwalk into a dystopian future where the power of AI is controlled by a few people — or worse, by algorithms beyond human understanding,” he said.

The two-day meeting — co-hosted by the South Korean and U.K. governments — also comes as major tech companies like Meta, OpenAI and Google roll out the latest versions of their AI models.

They’re among 16 AI companies that made voluntary commitments to AI safety as the talks got underway, according to a British government announcement. The companies, which also include Amazon, Microsoft, France’s Mistral AI, China’s Zhipu.ai, and G42 of the United Arab Emirates, vowed to ensure the safety of their most cutting-edge AI models with promises of accountable governance and public transparency.

The pledge includes publishing safety frameworks setting out how they will measure risks of these models. In extreme cases where risks are severe and “intolerable,” AI companies will have to hit the kill switch and stop developing or deploying their models and systems if they can’t mitigate the risks.

Since the U.K. meeting last year, the AI industry has “increasingly focused on the most pressing concerns, including mis- and disinformation, data security, bias and keeping humans in the loop,” said Aidan Gomez, CEO of Cohere, one of the AI companies that signed the pact. “It is essential that we continue to consider all possible risks, while prioritizing our efforts on those most likely to create problems if not properly addressed.”

On Tuesday evening, South Korean President Yoon Suk Yeol and British Prime Minister Rishi Sunak are to meet other world leaders, industry bosses and heads of international organizations for a virtual conference. The online summit will be followed by an in-person meeting of digital ministers, experts and others on Wednesday, according to organizers.

While the U.K. meeting centered on AI safety issues, the agenda for this week’s gathering was expanded to include “innovation and inclusivity,” Wang Yun-jong, a deputy director of national security in South Korea, told reporters Monday.

Wang said participants will subsequently “discuss not only the risks posed by AI but also its positive aspects and how it can contribute to humanity in a balanced manner.”

The AI agreement will include the outcomes of discussions on safety, innovation and inclusivity, according to Park Sang-wook, President Yoon’s senior adviser for science and technology.

Governments around the world have been scrambling to formulate regulations for AI even as the technology makes rapid advances and is poised to transform many aspects of daily life, from education and the workplace to copyrights and privacy. There are concerns that advances in AI could take away jobs, trick people and spread disinformation.

This week’s meeting is just one of a slew of efforts to draw up AI guardrails. The U.N. General Assembly has approved its first resolution on the safe use of AI systems, the U.S. and China recently held their first high-level talks on AI, and the European Union’s world-first AI Act is set to take effect later this year.


__

Chan contributed to this report from London. Associated Press writer Edith M. Lederer contributed from the United Nations.

Hyung-jin Kim and Kelvin Chan, The Associated Press