A letter co-signed by Elon Musk and thousands of others demanding a pause in artificial intelligence research created a firestorm after researchers cited in the letter condemned the use of their work, some signatures turned out to be fake, and other signatories withdrew their support.
On March 22, more than 1,800 signatories – including Musk, cognitive scientist Gary Marcus and Apple co-founder Steve Wozniak – called for a six-month pause on the development of systems “more powerful” than GPT-4. Engineers from Amazon, DeepMind, Google, Meta and Microsoft also lent their support.
Developed by OpenAI, a company co-founded by Musk and now backed by Microsoft, GPT-4 can hold human-like conversations, compose songs and summarize lengthy documents. Such AI systems with “human-competitive intelligence” pose profound risks to humanity, the letter asserts.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent external experts,” the letter reads.
The Future of Life Institute, the think tank that coordinated the effort, cited 12 pieces of research from experts, including academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind. But four of the experts cited in the letter expressed concern that their research had been used to make such claims.
At its initial launch, the letter lacked protocols for verifying signatures and accumulated signatures from people who had not actually signed it, including Xi Jinping and Yann LeCun, chief AI scientist at Meta, who clarified on Twitter that he did not support it.
Critics have accused the Future of Life Institute (FLI), which is primarily funded by the Musk Foundation, of prioritizing imagined doomsday scenarios over more immediate concerns about AI – such as racist or gender-based biases being programmed into machines.
Among the research cited was “On the Dangers of Stochastic Parrots”, a paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google. Mitchell, now chief ethics scientist at artificial intelligence firm Hugging Face, criticized the letter, telling Reuters it was unclear what counted as “more powerful than GPT-4”.
“By treating many dubious ideas as given, the letter affirms a set of priorities and a narrative about AI that benefits FLI’s supporters,” she said. “Ignoring active harms right now is a privilege that some of us don’t have.”
Her co-authors Timnit Gebru and Emily M. Bender criticized the letter on Twitter, with the latter calling some of its claims “unhinged”. Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with the mention of her work in the letter. Last year she co-authored a research paper arguing that the widespread use of AI already posed serious risks.
Her research argued that the current use of AI systems could influence decision-making around climate change, nuclear war and other existential threats.
She told Reuters: “AI does not need to reach human-level intelligence to exacerbate these risks.”
“There are non-existential risks that are really, really important but don’t receive the same kind of Hollywood-level attention.”
Asked to respond to the criticism, FLI president Max Tegmark said both the short-term and long-term risks of AI should be taken seriously. “If we cite someone, it just means we claim they endorse that sentence. It doesn’t mean they endorse the letter, or that we endorse everything they think,” he told Reuters.