The New Yorker's in-depth investigation: Why did OpenAI's own people find Altman untrustworthy?

2026/04/07 13:36
👤ODAILY
🌐en

On a non-profit body, a money tree grew.
Original author: Cake, Deep tide TechFlow

In the autumn of 2023, OpenAI Chief Scientist Ilya Sutskever sat at his computer and finished a 70-page document.

The document was compiled from Slack message logs, HR communication records, and internal meeting minutes, all to answer a single question: can Sam Altman, who controls what may be the most dangerous technology in human history, be trusted?

Sutskever's answer, written on the first line of the first page, was: "Sam displays a consistent pattern of behaviour..."

Item one: lying.

Today, two and a half years later, investigative journalists Ronan Farrow and Andrew Marantz have published a long report in The New Yorker. They interviewed more than 100 sources and obtained internal memos never before made public, as well as over 200 pages of private notes kept during his OpenAI years by Anthropic founder Dario Amodei. The story these documents tell is far worse than the 2023 boardroom drama: it is the story of how OpenAI went from a non-profit founded for humanity's safety to a commercial machine, with nearly every safety guardrail removed by the same person.

Amodei's conclusion in his notes is even blunter: "The problem with OpenAI is Sam himself."

OpenAI's original design

To understand the weight of this story, you first need to understand how unusual OpenAI is.

In 2015, Altman and a group of Silicon Valley elites did something with little precedent in business history: they used a non-profit organization to develop what might become the most powerful technology humanity has ever built. The board's role was explicit: safety takes precedence over the company's success, and even over its survival. If OpenAI's AI ever became dangerous, the board had an obligation to shut the company down.

The whole architecture was a bet on one hypothesis: the person who controls AGI must be an extremely honest person.

What if that hypothesis is wrong?

The core bombshell is that 70-page document. Sutskever does not play office politics; he is one of the world's top AI scientists. But in 2023, he became increasingly convinced of one thing: Altman was continuously lying to the executives and the board.

A specific example: at a board meeting in December 2022, Altman assured the board that the various features of the forthcoming GPT-4 had passed safety review. Board member Helen Toner asked for the approval documents and found that two of the most controversial features (user-customized fine-tuning and personal-assistant deployment) had never been approved by the safety panel at all.

An even more absurd episode took place in India. One employee reported a violation to another board member: Microsoft had released an early version of ChatGPT in India without completing the required safety review.

Sutskever recorded another incident in the memo: Altman had told then-CTO Mira Murati that the safety review process mattered less than she thought, and that the company's general counsel had signed off on this. When Murati went to the general counsel to confirm, he told her, "I don't know where Sam got that."

Amodei's 200 pages of personal notes

Sutskever's file reads like a prosecutor's indictment. Amodei's more than 200 pages of notes read more like a diary kept by a witness at the scene of a crime.

Amodei ran safety work at OpenAI for several years and watched the company retreat under commercial pressure. In his notes, he recorded a key detail of the 2019 Microsoft investment: he had pushed a "merge and assist" clause into OpenAI's charter, to the effect that if another company found a safer path to AGI, OpenAI would stop competing and help that company instead. To him, this was the single most important safety guarantee in the entire deal.

Just as the deal was about to be signed, Amodei discovered something: Microsoft had obtained veto power over this clause. What does that mean? Even if a competitor one day finds a better path, Microsoft can block OpenAI from fulfilling its obligation to assist. The clause remained on paper, but it was scrap paper from the day the deal was signed.

Amodei later left OpenAI and founded Anthropic. Underneath the competition between the two companies lies a fundamental disagreement about how AI should be developed.

The missing 20 percent

There is one detail in the story that sends a chill down your spine. It concerns OpenAI's "Superalignment" team.

In mid-2023, Altman emailed a Berkeley PhD student who was researching "deceptive alignment" in AI (models that behave well during testing but act differently once actually deployed). Altman said he was deeply worried about the problem and was considering setting up a $1 billion global research prize. Encouraged, the student suspended his studies and joined OpenAI.

Then Altman changed his mind: no external prize, but a Superalignment team inside the company instead. The company announced it would dedicate 20 percent of its secured compute to this team, potentially worth more than $1 billion. The announcement was deadly serious, warning that if the alignment problem were not solved, AGI could lead to "the disempowerment of humanity or even human extinction."

Jan Leike, who was appointed to lead the team, later told reporters that the commitment itself was a highly effective talent-retention tool.

And the reality? Four people who worked on the team, or closely with it, said the compute actually allocated was only 1 to 2 percent of the company's total, often on the oldest hardware. The team was later disbanded, its mission unfinished.

When journalists asked to interview OpenAI's researchers working on "existential safety," the company's PR response was almost comical: "That's not really a thing that exists."

Altman himself was surprisingly candid on this point. He told reporters that his "intuitions don't fit much of the traditional AI-safety" framing, and that OpenAI would ship products that are safe, or at least not unsafe.

A sidelined CFO and a looming IPO

The New Yorker story was only half of the day's bad news. On the same day, The Information published another bombshell: OpenAI CFO Sarah Friar and Altman are in serious disagreement.

Friar has told colleagues privately that she does not think OpenAI is ready to go public this year, for two reasons: the procedural and organizational work still to be done is enormous, and the financial risk of the roughly $60 billion in compute spending Altman has committed to over five years is too high. She is not even sure OpenAI's revenue growth can sustain those commitments.

But Altman wants to sprint to an IPO in the fourth quarter of this year.

Worse still, Friar no longer reports directly to Altman. Since August 2025, she has reported to Fidji Simo, CEO of OpenAI's Applications division. And Simo went on sick leave just last week for health reasons. So here is the situation: a company sprinting toward an IPO, a CEO and CFO in fundamental disagreement, a CFO who does not report to the CEO, and a CFO's direct superior who is on leave.

Even executives inside Microsoft can no longer look away, saying Altman misrepresents facts and contradicts agreements already reached. One Microsoft executive went so far as to say, "I think there's a chance that he'll end up being remembered as a Bernie Madoff- or SBF-grade liar."

The two faces of Altman

A former OpenAI board member described two traits of Altman's to journalists. It may be the harshest character sketch in the entire story.

The board member said Altman has a very rare combination: in every face-to-face exchange, he has a strong desire to please and to be liked. At the same time, he feels almost nothing about the consequences of deceiving others.

It is extremely rare for both traits to appear in one person. For a salesman, it is the perfect gift.

There is a metaphor in the report: Steve Jobs was famous for his "reality distortion field," his ability to convince the world of his vision. But even Jobs never told a customer, "If you don't buy my MP3 player, the people you love will die."

Altman says things like that about AI.

Why one CEO's character problem is everyone's risk

If Altman were merely the CEO of an ordinary tech company, these allegations would at most make for juicy business gossip. But OpenAI is not ordinary.

By its own account, it is developing perhaps the most powerful technology in human history. OpenAI itself has just issued a policy white paper warning that AI could cause mass unemployment, and could also be used to manufacture large-scale biological and chemical weapons or to launch cyberattacks.

All the safety guardrails are gone. The founding non-profit mission has given way to the IPO sprint. The former chief scientist and the former head of safety both concluded the CEO was "untrustworthy." Partners compare the CEO to SBF. Under these conditions, how can the CEO be allowed to decide unilaterally when to release an AI model that could change humanity's fate?

After reading the report, Gary Marcus wrote: if a future OpenAI model could produce large-scale biological and chemical weapons or launch a catastrophic cyberattack, do you really trust Altman to decide whether to release it?

OpenAI's response to The New Yorker was simple: most of the article rehashes previously reported stories, and the sources clearly have personal agendas, hiding behind anonymous statements and selective excerpts.

Note what that response does: it addresses no specific allegation, it does not deny the authenticity of the memos, it only questions the motives.

On a non-profit body, a money tree grew

OpenAI's ten years read like a story outline:

A group of risk-conscious idealists created a mission-driven non-profit. The organization made an extraordinary technological breakthrough. The breakthrough attracted enormous capital. Capital demands returns. The mission began to give way. The safety team was disbanded. Those who raised questions were pushed out. The non-profit structure was converted into a for-profit entity. The board that once had the power to shut the company down is now filled with the CEO's allies. The 20 percent of compute once promised to protect humanity's safety is now, according to PR, "not really a thing."

And the story's protagonist? More than a hundred interviewees gave him the same label: "not bound by the truth."

He is about to take the company public at a valuation of over $85 billion.

This post is a synthesis of public coverage from The New Yorker, Semafor, Tech Brew, Gizmodo, Business Insider, and The Information.
