The Microsoft Bing App is seen running on an iPhone in this photo illustration on May 30, 2023, in Warsaw, Poland. (Photo by Jaap Arriens | NurPhoto | Getty Images)
Artificial intelligence may lead to human extinction, and reducing the risks associated with the technology should be a global priority, industry experts and tech leaders said in an open letter.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement on Tuesday read.
Sam Altman, CEO of ChatGPT-maker OpenAI, as well as executives from Google's AI arm DeepMind and Microsoft, were among those who supported and signed the short statement from the Center for AI Safety.
The technology has gathered pace in recent months after the chatbot ChatGPT was released for public use in November and subsequently went viral, reaching 100 million users within two months of its launch. ChatGPT has amazed researchers and the general public with its ability to generate humanlike responses to users' prompts, prompting speculation that AI could replace jobs and imitate humans.
The statement Tuesday said that there has been increasing discussion about a “broad spectrum of important and urgent risks from AI.”
But it said it can be "difficult to voice concerns about some of advanced AI's most severe risks," and that the statement aims to overcome this obstacle and open up the discussion.
ChatGPT has arguably sparked much more awareness and adoption of AI as major firms around the world have raced to develop rival products and capabilities.
Altman admitted in March that he is a "little bit scared" of AI, citing worries that authoritarian governments will develop the technology. Other tech leaders such as Tesla's Elon Musk and former Google CEO Eric Schmidt have also cautioned about the risks AI poses to society.
In an open letter in March, Musk, Apple co-founder Steve Wozniak and several other tech leaders urged AI labs to pause the training of systems more powerful than GPT-4, OpenAI's latest large language model, for at least six months.
“Contemporary AI systems are now becoming human-competitive at general tasks,” said the letter.
“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asked.
Last week, Schmidt also separately warned about the “existential risks” associated with AI as the technology advances.