Yesterday, OpenAI announced GPT-4, its long-awaited next-generation AI language model. The system’s capabilities are still being assessed, but as researchers and experts pore over the accompanying materials, many have expressed disappointment at one particular feature: despite its parent company’s name, GPT-4 is not an open AI model.
OpenAI shared many benchmark and test results for GPT-4, as well as some intriguing demos, but offered essentially no information about the data used to train the system, its energy costs, or the specific hardware or methods used to create it.
Should AI research be open or closed? Experts disagree
Many in the AI community criticized the decision, noting that it undermines the company’s founding ethos as a research organization and makes it harder for others to replicate its work. Perhaps more importantly, some say, it also makes it harder to develop safeguards against the kinds of threats posed by AI systems like GPT-4, and these complaints come at a time of growing tension and rapid progress in the AI world.
“I think we can call it closed on ‘Open’ AI: the 98-page document introducing GPT-4 proudly declares that they disclose *nothing* about the contents of their training set,” tweeted Ben Schmidt, vice president of information design at Nomic AI, in a thread on the subject.
Here Schmidt refers to a section of the GPT-4 technical report which reads as follows:
Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.
Speaking to The Verge in an interview, Ilya Sutskever, OpenAI’s chief scientist and co-founder, expanded on this point. Sutskever said OpenAI’s reasons for not sharing more information about GPT-4 (fear of competition and concerns over safety) were “obvious”:
“On the competitive landscape front, it’s competitive out there,” Sutskever said. “GPT-4 is not easy to develop. It took pretty much all of OpenAI working together for a very long time to produce this thing. And there are many companies that want to do the same thing, so from a competitive standpoint, you can see this as a maturation of the field.
“On the safety side, I would say that the safety side is not yet as salient a reason as the competitive side. But that’s going to change, and it’s basically as follows. These models are very powerful and are becoming more and more powerful. At some point, it will be quite easy, if one wanted, to cause a great deal of harm with these models. And as capabilities increase, it makes sense that you don’t want to disclose them.
“I expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.”
The closed approach is a marked change for OpenAI, which was founded in 2015 by a small group including current CEO Sam Altman, Tesla CEO Elon Musk (who resigned from its board in 2018), and Sutskever. In an introductory blog post, Sutskever and others said the organization’s goal was to “create value for everyone rather than shareholders” and that it would “collaborate freely” with others in the field to do so. OpenAI was founded as a nonprofit but later became a “capped-profit” company in order to secure billions in investment, primarily from Microsoft, with whom it now has exclusive commercial licenses.
When asked why OpenAI changed its approach to sharing its research, Sutskever replied simply, “We were wrong. Flat out, we were wrong. If you think, as we do, that at some point AI, AGI, is going to be extremely, incredibly powerful, then it just doesn’t make sense to open-source. It’s a bad idea… I expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.”
Opinions within the AI community on the matter vary. Notably, the launch of GPT-4 comes just weeks after another AI language model, LLaMA, developed by Facebook owner Meta, leaked online, sparking similar discussions about the threats and benefits of open source research. Most initial reactions to GPT-4’s closed approach, however, have been negative.
Speaking to The Verge via DM, Nomic AI’s Schmidt explained that not being able to see what data GPT-4 was trained on makes it hard to know where the system is safe to use and to come up with fixes.
“For people to make informed decisions about where this model won’t work, they need to have a better sense of what it does and what assumptions are baked into it,” Schmidt said. “I wouldn’t trust a self-driving car trained without experience in snowy climates; there are likely to be holes or other problems that may surface when it’s used in real situations.”
William Falcon, CEO of Lightning AI and creator of the open source tool PyTorch Lightning, told VentureBeat that he understood the decision from a business perspective. (“You have every right to do that as a company.”) But he also said the move sets a “bad precedent” for the wider community and could have harmful effects.
“If this model goes wrong…how is the community supposed to react?”
“If this model goes wrong, and it will, you’ve already seen it hallucinating and giving you false information, how is the community supposed to react?” Falcon said. “How are ethics researchers supposed to suggest solutions and say, this method doesn’t work, maybe modify it to do something else?”
Another reason some have suggested for OpenAI hiding the details of GPT-4’s construction is legal liability. AI language models are trained on huge text datasets, with many (including earlier GPT systems) scraping information from the web, a source that likely includes copyrighted material. AI image generators also trained on content from the internet have found themselves facing legal challenges for exactly this reason, with several companies currently being sued by independent artists and stock photo site Getty Images.
When asked whether this was one of the reasons OpenAI doesn’t share its training data, Sutskever said, “My view is that training data is technology. It may not look like it, but it is. And the reason we don’t disclose the training data is pretty much the same reason we don’t disclose the parameter count.” Sutskever did not answer when asked if OpenAI could state definitively that its training data does not include pirated material.
Sutskever did agree with OpenAI’s critics that there is “merit” to the idea that open-sourcing models helps develop safeguards. “If more people studied these models, we would learn more about them, and that would be good,” he said. And it is for these reasons, he said, that OpenAI has provided some academic and research institutions with access to its systems.
The discussion over sharing research comes at a time of frenetic change for the AI world, with pressure building on multiple fronts. On the corporate side, tech giants like Google and Microsoft are rushing to add AI features to their products, often sidelining earlier ethical concerns. (Microsoft recently laid off a team dedicated to ensuring that its AI products adhere to ethical guidelines.) On the research side, the technology itself appears to be improving rapidly, raising fears that AI could become a serious and imminent threat.
Balancing these various pressures presents a serious governance challenge, said Jess Whittlestone, head of AI policy at UK think tank The Center for Long-Term Resilience, and one that she says will likely need to involve third-party regulators.
“It shouldn’t be up to individual companies to make those decisions.”
“We’re seeing these AI capabilities evolve very quickly, and I’m generally concerned that these capabilities are advancing faster than we can adapt to them as a society,” Whittlestone told The Verge. She said OpenAI’s reasons for not sharing more details about GPT-4 are good ones, but there are also valid concerns about the centralization of power in the AI world.
“It shouldn’t be up to individual companies to make those decisions,” Whittlestone said. “Ideally, we need to codify what good practices are here, and then have independent third parties play a bigger role in scrutinizing the risks associated with certain models and whether it makes sense to release them to the world.”