What Elon Musk gets right — and very wrong — about AI and ChatGPT

Elon Musk is at or near the top of almost every AI influencer list I’ve ever seen, despite the fact that he doesn’t have an AI degree and seems to have only one academic journal article in the field, which has received little notice.

There’s not necessarily anything wrong with that; Yann LeCun has a background in physics (the same field as one of Musk’s two undergraduate degrees) but is rightly known for his pioneering work in machine learning. I’m also known for my work on AI, but my background is in cognitive science, and the most important article I ever wrote for AI appeared in a psychology journal. It’s perfectly fine for people to cross over into new fields, and Musk’s work on driverless cars has undoubtedly influenced the development of AI.

But much of what he says about AI is wrong. Most notoriously, none of his forecasts for self-driving car timelines have come true. In October 2016, he predicted that a Tesla would drive itself from California to New York by 2017. (It didn’t.) Tesla has rolled out a technology called “Autopilot,” but everyone in the industry knows that name is more marketing than reality. Teslas are far from being able to drive themselves; seven years after Tesla started rolling the software out, it is still so buggy that a human driver has to stay attentive at all times.

Musk also seems to misunderstand the relationship between natural (human) intelligence and artificial intelligence. He has repeatedly argued that Teslas don’t need lidar – a detection system that virtually every other autonomous vehicle company relies on – based on a misleading comparison between human vision and the cameras in driverless cars. While it’s true that humans don’t need lidar to drive, today’s AI is nowhere near able to understand and handle the full spectrum of road conditions without it. Driverless cars need lidar as a crutch precisely because they lack human intelligence.

Teslas can’t even consistently avoid crashing into stationary emergency vehicles, a problem the company has been unable to solve for more than five years. For reasons not yet publicly disclosed, the cars’ perception and decision-making systems have not managed to operate reliably without human intervention. Musk’s argument amounts to saying that humans don’t need to walk because cars don’t have feet. If my grandmother had wheels, she would be a car.

ChatGPT isn’t the profound advance in AI it seems to be

Despite an uneven track record, Musk continues to make claims about AI, and when he does, people take him seriously. His latest, first reported by CNBC and widely covered afterward, came a few weeks ago at the World Government Summit in Dubai. Some of what Musk said is, in my professional judgment, right – and some of it is way off.

Most wrong of all was his hint that we are on the verge of solving AI – of achieving what is called “artificial general intelligence” (AGI), AI with the flexibility of human intelligence – claiming that ChatGPT has “illustrated to people just how advanced AI has become.”

That’s just silly. For some people, especially those who haven’t followed the AI field, the extent to which ChatGPT can mimic human prose seems deeply surprising. But it is also deeply flawed. A genuinely smart AI would be able to tell right from wrong, reason about people, objects, and science, and be as versatile and quick at learning new things as humans are – none of which the current generation of chatbots can do. All ChatGPT can do is predict text that might be plausible in a given context based on the huge body of written work it was trained on; it doesn’t care whether what it spits out is true.

This makes ChatGPT incredibly fun to play with, and if used responsibly, it can sometimes even be useful, but that doesn’t make it genuinely smart. The system has a hard time telling the truth, hallucinates regularly, and sometimes struggles with basic math. It doesn’t understand what a number is. In one example, sent to me by AI researcher Melanie Mitchell, ChatGPT couldn’t reason about the relationship between a pound of feathers and two pounds of bricks, foiled by the clumsy guardrail system that keeps it from using hateful language but also keeps it from answering many questions directly – something Musk himself has complained about elsewhere.

Examples of ChatGPT failures like this are legion on the internet. With NYU computer scientist Ernest Davis and others, I have assembled a whole collection of them; feel free to contribute yours. OpenAI often fixes them, but new errors keep popping up. Here is one of my current favorites:

These cases illustrate that, despite superficial appearances to the contrary, ChatGPT cannot reason, has no idea what it is talking about, and absolutely cannot be trusted. It has no real moral compass and has to rely on rudimentary guardrails that try to keep it from going astray but can be broken without much difficulty. Sometimes it gets things right, because the text you type is close enough to something it was trained on, but that’s incidental. Being right sometimes is not a solid basis for artificial intelligence.

Musk is reportedly seeking to build a ChatGPT rival – “TruthGPT,” as he put it recently – but he is missing something important here, too: truth is simply not part of GPT-style architectures. It’s fine to want to build a new AI that solves the fundamental problems of current language models, but that would require a very different design, and it’s not clear that Musk appreciates how drastic the changes would have to be.

Where the stakes are high, companies are already realizing that truth and GPT are not the closest of friends. JPMorgan has just restricted its employees from using ChatGPT for business, and Citigroup and Goldman Sachs quickly followed suit. As Yann LeCun has put it, echoing what I have been saying for years, ChatGPT is a detour on the road to artificial general intelligence, because its underlying technology has nothing to do with the demands of true intelligence.

Last May, Musk said he would be “surprised if we do not have AGI by” 2029. I registered my doubts, offered to bet him $100,000 (real money to me, if not so much to him), and drafted a set of terms. Many people in the field shared my sense that on predictions like these, Musk is all talk and no action. The next day, without having planned it, I had raised another $400,000 for the bet from other AI experts. Musk never got back to us. If he really believed what he was saying, he should have.

We should still be very worried

If Musk is wrong about when driverless cars will arrive, naive about what it takes to build humanlike robots, and off base about the timeline for general intelligence, he is right about one thing: Houston, we have a problem.

At the Dubai event last month, Musk told the crowd, “AI is one of the biggest risks to the future of civilization.” I still think nuclear war and climate change may be bigger ones, but the chaotic introductions of new AI-powered search engines by Microsoft and Google in recent weeks lead me to believe that we are going to see more and more primitive, unreliable AI products rushed to market.

It may not be precisely the kind of AI Musk has in mind, but it poses clear and present dangers. New concerns appear seemingly every day, ranging from unintended consequences for education to the possibility of massive, automated disinformation campaigns. Extremist organizations, such as the far-right social network Gab, have already begun announcing their intention to build their own AI.

So don’t look to Musk for specific timelines on AGI or driverless cars. But he is still right about a crucial point: we have powerful new technologies on our hands, and we don’t really know how it will all play out. When he said this week that “we need some sort of regulatory authority or something that oversees AI development,” he may not have been at his most eloquent, but he was absolutely right.

We are not, in truth, that close to AGI. Instead, we are rushing out seductive but sloppy, truth-disregarding AI that perhaps nobody fully anticipated. But the takeaway is the same: we should be worried, no matter how smart (or not) it is.

Gary Marcus (@garymarcus) is a scientist, best-selling author and entrepreneur. He founded startup Geometric Intelligence, which was acquired by Uber in 2016. His new podcast, Humans versus Machines, will launch this spring.
