
The paradox at the heart of Elon Musk’s OpenAI lawsuit

It would be easy to dismiss Elon Musk’s lawsuit against OpenAI as a case of sour grapes.

Musk sued OpenAI this week, accusing the company of breaching its founding agreement and violating its founding principles. According to him, OpenAI was founded as a non-profit organization that would build powerful artificial intelligence systems for the benefit of humanity and release its research to the public for free. But Musk argues that OpenAI broke that promise by starting a for-profit subsidiary that has taken on billions of dollars in investment from Microsoft.

A spokesperson for OpenAI declined to comment on the lawsuit. But in a memo sent to employees on Friday, Jason Kwon, the company’s chief strategy officer, denied Musk’s claims, saying, “We believe the claims in this suit may stem from Elon’s regrets about not being involved with the company today,” according to a copy of the memo I viewed.

On the one hand, the lawsuit reeks of a personal grudge. Musk, who co-founded OpenAI in 2015 along with a group of other tech heavyweights and provided much of its initial funding, but left in 2018 amid disputes with its leadership, resents being sidelined in conversations about artificial intelligence; nothing he has done in AI since has been as successful as ChatGPT, OpenAI’s flagship chatbot. And the falling-out between Musk and OpenAI CEO Sam Altman has been well documented.

But amid all this animosity, there is a point worth taking seriously, because it illustrates a paradox at the heart of much of today’s debate about artificial intelligence – and a point on which OpenAI has genuinely talked out of both sides of its mouth, insisting both that its AI systems are incredibly powerful and that they are nowhere near human intelligence.

The claim centers on a term known as AGI, or “artificial general intelligence.” Defining what constitutes AGI is notoriously tricky, although most people would agree that it means an artificial intelligence system capable of doing most or all of the things the human brain can do. Altman has defined AGI as “the equivalent of a median human that you could hire as a co-worker,” while OpenAI itself defines AGI as “a highly autonomous system that outperforms humans at most economically valuable work.”

Most leaders of AI companies argue that AGI is not only possible to build, but imminent. Demis Hassabis, CEO of Google DeepMind, told me in a recent podcast interview that he thinks AGI could arrive as early as 2030. Altman has said AGI may be only four or five years away.

Building AGI is OpenAI’s explicit goal, and it has many reasons to want to get there before anyone else. True AGI would be an incredibly valuable resource, capable of automating huge amounts of human work and making a lot of money for its creators. It’s also the kind of bright, bold goal that investors love to fund, and which helps AI labs recruit the best engineers and researchers.

But AGI could also be dangerous if it were able to outsmart humans, or if it became deceptive or misaligned with human values. The people who started OpenAI, including Musk, feared that an AGI would be too powerful for a single entity to own, and that if they ever got close to building one, they would have to change the control structure around it, to prevent it from causing damage or concentrating too much wealth and power in the hands of a single company.

That’s why, when OpenAI partnered with Microsoft, it specifically granted the tech giant a license valid only for “pre-AGI” technologies. (The New York Times is suing Microsoft and OpenAI over their use of copyrighted works.)

Under the terms of the agreement, if OpenAI ever built something that met the definition of AGI – as determined by OpenAI’s non-profit board – Microsoft’s license would no longer apply, and the board could decide to do whatever it wanted to ensure that OpenAI’s AGI benefited all of humanity. That could mean many things, including open-sourcing the technology or shutting it down entirely.

Most AI commentators believe that today’s AI models do not qualify as AGI, because they lack sophisticated reasoning capabilities and often make silly mistakes.

But in his legal filings, Musk makes an unusual argument. He claims that OpenAI has already achieved AGI with its GPT-4 language model, released last year, and that the company’s future technology will qualify as AGI even more clearly.

“On information and belief, GPT-4 is an AGI algorithm, and therefore expressly outside the scope of Microsoft’s September 2020 exclusive license with OpenAI,” the complaint reads.

What Musk is arguing here is a little complicated. Basically, he’s saying that because OpenAI achieved AGI with GPT-4, it is no longer allowed to license the technology to Microsoft, and that its board is obligated to make the technology and research more freely available.

His complaint cites the now-infamous “Sparks of AGI” paper written by a Microsoft research team last year, which claimed that GPT-4 demonstrated the first hints of general intelligence, including signs of human-level reasoning.

But the complaint also notes that OpenAI’s board is unlikely to ever decide that its AI systems actually qualify as AGI, because as soon as it did, it would have to make big changes to the way it deploys and profits from the technology.

Furthermore, he notes that Microsoft – which now has a non-voting observer seat on OpenAI’s board, after an upheaval last year that led to Altman’s temporary firing – has a strong incentive to deny that OpenAI’s technology qualifies as AGI. Admitting that it did would end Microsoft’s license to use that technology in its products and jeopardize potentially huge profits.

“Given Microsoft’s enormous financial interest in keeping the gate closed to the public, OpenAI, Inc.’s new captured, conflicted, and compliant board will have every reason to delay ever making a finding that OpenAI has attained AGI,” the complaint reads. “To the contrary, OpenAI’s attainment of AGI, like ‘Tomorrow’ in ‘Annie,’ will always be a day away.”

Given his track record of courting controversy, it’s easy to question Musk’s motives here. And as the head of a competing artificial intelligence start-up, he has an obvious interest in tying up OpenAI in messy litigation. But his lawsuit poses a real conundrum for OpenAI.

Like its competitors, OpenAI badly wants to be seen as a leader in the race to build AGI, and it has a vested interest in convincing investors, commercial partners and the public that its systems are improving at a breakneck pace.

But because of the terms of its deal with Microsoft, OpenAI’s investors and executives may be reluctant to admit that its technology qualifies as AGI, if and when it actually does.

This has put Musk in the strange position of asking a jury to rule on what constitutes AGI and decide whether OpenAI’s technology has met the threshold.

The lawsuit has also put OpenAI in the strange position of downplaying its own systems’ capabilities, while continuing to fuel anticipation that a big AGI breakthrough is just around the corner.

“GPT-4 is not an AGI,” Kwon wrote in the memo. “It can solve small tasks in many jobs, but the ratio of the work done by a human to the work done by GPT-4 in the economy remains incredibly high.”

The personal feud that fueled Musk’s lawsuit has led some people to view it as a frivolous lawsuit — one commentator likened it to “suing your ex because she renovated the house after your divorce” — that will be quickly dismissed.

But even if it is dismissed, Musk’s lawsuit points to important questions: Who gets to decide when something qualifies as AGI? Are tech companies overstating or understating (or both) the capabilities of their systems? And what incentives lie behind the various claims about how close or far we might be from AGI?

A lawsuit from a resentful billionaire is probably not the way to resolve those questions. But they’re worth asking, especially as progress in artificial intelligence continues to accelerate.