The two-day AI safety summit, which takes place on Nov. 1 and Nov. 2, will host government officials and companies from around the world, including the U.S. and China, two superpowers in the race to develop cutting-edge AI technologies.
The summit is squarely focused on so-called “frontier AI” models — in other words, the advanced large language models, or LLMs, like those developed by companies such as OpenAI, Anthropic, and Cohere.

Misuse risks involve a bad actor being aided by new AI capabilities. For example, a cybercriminal could use AI to develop a new type of malware that cannot be detected by security researchers, or state actors could use AI to help develop dangerous bioweapons.
“I think we remain confident that we have brought together the right group of world experts in the AI space, leading businesses and indeed world leaders and representatives who will be able to take on this vital issue,” the spokesperson said.
“In the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as super intelligence,” Sunak said.

A particular point of contention surrounding the summit is Sunak’s decision to invite China — which has been at the center of a geopolitical tussle over technology with the U.S. — to the summit. Sunak’s spokesperson has said it is important to invite China, as the country is a world leader in AI.