When ChatGPT was released in November, it took the world by storm.
Within a month of its release, some 100 million people had used the viral AI chatbot for everything from writing high school essays to planning travel routes to generating computer code.
Built by San Francisco-based startup OpenAI, the app was flawed in many ways, but it also sparked a wave of excitement (and fear) about the transformative power of generative AI to change the way we work and create.
ChatGPT, which runs on a technology called GPT-3.5, has been so impressive in part because it represents a leap forward in capability from its predecessor of just a few years ago, GPT-2.
On Tuesday, OpenAI released an even more advanced version of its technology: GPT-4. The company says the update is another milestone in the advancement of AI, with the potential to improve the way people learn new languages, the way blind people process images, and even the way we do our taxes.
OpenAI also claims that the new model powers a chatbot that is more factual, creative, and concise, and that can understand images, not just text.
Sam Altman, CEO of OpenAI, called GPT-4 “our most capable and aligned model to date.” He also warned that “it’s still flawed, still limited, and it still looks more impressive on first use than it does after spending more time with it.”
In a live demo of GPT-4 on Tuesday afternoon, OpenAI co-founder and president Greg Brockman showed off new use cases for the technology, including the ability to take a hand-drawn mockup of a website and, from there, generate code for a working site in seconds.
Brockman also demonstrated GPT-4’s visual abilities by providing it with a cartoon image of a squirrel holding a camera and asking it to explain why the image is funny.
“The image is funny because it shows a squirrel holding a camera and taking a picture of a nut as if it were a professional photographer. It’s a humorous situation because squirrels usually eat nuts, and we don’t expect them to use a camera or act like humans,” GPT-4 replied.
It’s the kind of ability that could be incredibly useful to people who are blind or have low vision. Not only can GPT-4 describe images, but it can also communicate the meaning and context behind them.
Yet, as Altman and GPT-4’s creators were quick to admit, the tool is far from a complete replacement for human intelligence. Like its predecessors, it suffers from issues of accuracy, bias and context. That poses a growing risk as more people start using GPT-4 for more than just novelty. Companies like Microsoft, which is investing heavily in OpenAI, are already starting to integrate GPT-4 into core products that millions of people use.
Here are some things you need to know about the latest version of the hottest new technology on the market.
It can pass complicated exams
A tangible way for people to measure the capabilities of new AI tools is to see how well they can pass standardized tests, like the SAT and the bar exam.
GPT-4 has shown impressive progress here. The tech can pass a simulated legal bar exam with a score that would put it in the top 10% of test takers, while its immediate predecessor, GPT-3.5, scored in the bottom 10% (take note, lawyers).
GPT-4 can also score 700 out of 800 on the SAT math test, compared with 590 for the previous version.
Yet GPT-4 is weak in some subjects. It scored only a 2 out of 5 on the AP English exams – the same score the previous version, GPT-3.5, received.
Standardized tests aren’t a perfect measure of human intelligence, but the kinds of reasoning and critical thinking required to perform well on them show that the technology is improving at an impressive rate.
It shows promise for teaching languages and helping the visually impaired
Since GPT-4 has just been released, it will take time for people to discover all the most compelling ways to use it, but OpenAI has come up with a few ways the technology could potentially improve our daily lives.
One is for learning new languages. OpenAI has teamed up with popular language learning app Duolingo to power a new AI-based chat partner called Roleplay. The tool lets you hold a free-flowing conversation in another language with a chatbot that responds to what you say and steps in to correct you when necessary.
Another major use case OpenAI presented is helping visually impaired people. In partnership with Be My Eyes, an app that lets visually impaired people get on-demand help from a sighted person via video chat, OpenAI used GPT-4 to create a virtual assistant that can help people understand the context of what they see around them. In one example OpenAI gave, the app, provided with a description of the contents of a fridge, can suggest recipes based on what is available. The company says this is a step up from the current state of image-recognition technology.
“Basic image recognition apps only tell you what’s in front of you,” said Jesper Hvirring Henriksen, CTO of Be My Eyes, in a press release for the launch of GPT-4. “They can’t have a discussion to figure out if the noodles have the right kind of ingredients or if the object on the ground isn’t just a ball but a tripping hazard – and communicate that.”
If you want to use OpenAI’s latest GPT-4-powered chatbot, it’s not free
Currently, you will need to pay $20 per month to access ChatGPT Plus, a premium version of the ChatGPT bot. GPT-4’s API is also available to developers, who can build apps on top of it for a fee based on how much they use the tool.
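For developers, access works like any other paid web API: you send a request with an API key and are billed by usage. As a minimal sketch (the endpoint path and payload shape follow OpenAI’s chat completions API as announced at the GPT-4 launch; the `OPENAI_API_KEY` environment variable and the example prompt are assumptions about your setup, not part of the article):

```python
import json
import os
import urllib.request

# Request body for OpenAI's chat completions endpoint.
# "gpt-4" is the model name announced at launch; billing is per token used.
payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "user", "content": "Explain GPT-4 in one sentence."}
    ],
}

api_key = os.environ.get("OPENAI_API_KEY")  # you must supply your own key
if api_key:
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    # The JSON response contains the model's reply under choices[0].
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
        print(reply)
else:
    print("Set OPENAI_API_KEY to send the request.")
```

Because pricing is metered per token, developers typically keep prompts short and cache responses where they can.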
However, if you want to try GPT-4 without paying, you can use a chatbot created by Microsoft called BingGPT. A Microsoft vice president confirmed Tuesday that the latest version of BingGPT uses GPT-4. Note that BingGPT limits the number of conversations you can have per day and does not accept image inputs.
GPT-4 still has serious flaws. Researchers are concerned that we don’t know what data it is trained on.
Although GPT-4 has obvious potential to help people, it is also inherently flawed. Like previous generative AI models, GPT-4 can relay misinformation or be misused to share harmful content, such as instructions on how to cause physical harm or material promoting political activism.
OpenAI reports that GPT-4 is 40% more likely to give factual answers and 82% less likely to respond to requests for disallowed content. That is an improvement, but it still leaves plenty of room for error.
Another concern with GPT-4 is the lack of transparency around how it was designed and trained. Several leading academics and industry experts pointed out on Twitter that the company has disclosed no information about the dataset used to train GPT-4. That is a problem, researchers say, because the large datasets used to train AI chatbots can be inherently biased, as Microsoft’s Twitter chatbot, Tay, demonstrated a few years ago. Less than a day after its release, Tay was giving racist answers to simple questions. It had been trained on social media posts, which can often be hateful.
OpenAI says it doesn’t share its training data in part because of competitive pressure. The company was founded as a nonprofit but became a for-profit entity in 2019, partly because of the high cost of training complex AI systems. OpenAI is now heavily backed by Microsoft, which is locked in a fierce battle with Google over which tech giant will lead in generative AI.
Without knowing what’s under the hood, it’s hard to immediately validate OpenAI’s claims that its latest tool is more accurate and less biased than before. As more and more people use the technology in the weeks to come, we’ll see if it ends up being not only significantly more useful, but also more responsible than what came before it.