
Sam Altman and doomsday capitalism

2026/03/04 02:15
PANews

Author: Sleepy.txt

In 2016, The New Yorker ran a profile of Sam Altman titled "Sam Altman's Manifest Destiny." At 31, he was president of Y Combinator, Silicon Valley's most powerful startup incubator.

One detail in the article: Altman likes racing, owns five sports cars, and rents planes to fly. He told the reporter that he kept two bags packed, one of which was an escape bag, ready for him to run at any moment.

He has also stockpiled firearms, gold, potassium iodide (against nuclear radiation), antibiotics, batteries, water and Israeli Defense Forces gas masks, and owns a plot of land in Big Sur, on California's famous stretch of coast, to which he can fly at any moment to take refuge.

Ten years later, Altman has become the person most committed to manufacturing the apocalypse and selling the ark. While warning the world that AI could destroy humanity, he accelerates its development with his own hands; while insisting he is not in it for the money, he has built a roughly $2 billion personal investment empire; while calling for regulation, he has pushed out everyone who tried to hit the brakes.

He is neither a split-personality madman nor a simple liar. He is just the most standard, most successful product yet turned out by Silicon Valley's great machine. His manifest destiny is to manufacture humanity's collective anxiety, and to harvest the scepter and the crown from it.

Doomsday is good business

Altman's business model can be summed up in one sentence: wrap a business inside a holy war over the survival of humanity.

He has been practicing this since his YC days. He turned YC from a small workshop handing tens of thousands of dollars to early-stage startups into a sprawling empire. He set up YC Research to fund projects that don't make money but sound enormous, telling reporters that YC's goal was to fund "every important area."

By the time he got to OpenAI, he had perfected the play. What he sells is a packaged worldview: AI doomsday plus redemption.

No one describes AI's "existential risk" better than he does. Alongside hundreds of scientists, he signed a statement ranking AI risk with nuclear war. Around his Senate testimony he said, "We are a little bit scared of this, and people should be happy about that," suggesting that the fear itself was a useful warning.

Every one of these remarks makes headlines, and every headline is free advertising for OpenAI. Carefully engineered fear is the most efficient lever of attention. Is there anything capital and the media find more exciting? The question answers itself.

On the redemption side, he has a ready-made product: Worldcoin. Once fear is embedded in public consciousness, selling the solution follows logically. A silver orb the size of a basketball scans human irises around the world, ostensibly so that everyone can be paid in the AI era. The story sounds good, but trading money for biometric data quickly alarmed governments. A dozen countries, including Kenya, Spain, Brazil, India and Colombia, suspended or investigated Worldcoin over data privacy.

None of that may matter much to Altman. What matters is that the project helped him cast himself as the only man with a solution.

Selling fear and hope as a package is the most efficient business model of the era.

Regulation is my weapon, not my shackles

How does a man who talks about the end of the world every day deal with regulators? Altman's answer: turn regulation into his own weapon.

In May 2023, he testified before the United States Congress for the first time. Instead of complaining about regulation, he volunteered: "Please regulate us." He proposed an AI licensing system under which only licensed companies could develop large models. Outwardly this was the image of a responsible industry leader, but at that moment OpenAI held the technical lead, and the chief effect of a strict, high-threshold licensing regime would have been to lock out every potential competitor.

As time passed, however, and especially as competitors like Google and Anthropic caught up and the open-source community gathered strength, Altman's rhetoric on regulation shifted subtly. He began stressing on various occasions that overly strict regulation, in particular mandatory pre-release review of AI companies, could stifle innovation and would be "dangerous."

By then, regulation was no longer a moat but a stumbling block.

When he held an absolute lead, he called for regulation to lock in the advantage; when the lead was gone, he called for freedom to seek a breakthrough. He even tried to extend his reach to the top of the supply chain, floating a $7 trillion chip plan that courted capital such as the UAE's sovereign wealth funds to reshape the global semiconductor industry. That is far beyond a CEO's remit; it is more like an ambitious man who wants to shape the global order.

Behind all this is OpenAI's rapid transformation from a nonprofit into a commercial giant. When it was founded in 2015, its mission was to ensure AGI safely benefits all of humanity. In 2019 it created a capped-profit subsidiary. By early 2024, the word "safely" had been quietly removed from OpenAI's mission statement. Although the corporate structure formally remains "capped profit," commercialization has accelerated markedly. Revenue has exploded in step, from tens of millions of dollars in 2022 to more than a billion dollars annualized in 2024, while the valuation surged from $29 billion toward a trillion.

When a man starts gazing at the stars and talking about the fate of mankind, it is best to watch where his wallet lands.

Human nature: the charismatic leader's immunity

On 17 November 2023, Altman was fired by the very board he had helped select, on the grounds that he was "not consistently candid in his communications with the board."

What happened over the next five days was less a corporate power struggle than a referendum of faith. President Greg Brockman resigned; more than 700 employees, over 95 percent of the company, signed an open letter demanding the board resign or they would follow Altman to Microsoft; Satya Nadella, CEO of largest investor Microsoft, announced that Altman was welcome to join at any time. In the end, Altman returned like a king, resumed his post, and purged nearly every board member who had opposed him.

How does a CEO officially declared "not consistently candid" by his own board return unscathed, with even greater power?

Helen Toner, one of the ousted board members, later disclosed the details. Altman had concealed from the board his actual control of the OpenAI Startup Fund; he had repeatedly lied about the company's key safety processes; the board even learned of ChatGPT's launch, enormous as it was, from Twitter. Charges like these would be enough to unseat an ordinary CEO a hundred times over.

But Altman was fine, because he is not an ordinary CEO. He is a "charismatic leader."

This is the concept the sociologist Max Weber put forward a century ago: a form of authority that derives not from office or law but from the leader's own "extraordinary personal charisma." Followers believe in him not because he does the right thing but because he is who he is. The belief is irrational: when the leader errs or is challenged, the followers' first instinct is not to question him but to attack the challengers.

OpenAI's employees behaved exactly this way. They did not believe in the board's procedural justice; they believed only in the "manifest destiny" Altman represents, and felt the board was "obstructing human progress."

After Altman's reinstatement, OpenAI's safety teams were rapidly hollowed out. Chief scientist Ilya Sutskever, who had moved first to fire Altman, left. In May 2024, safety team lead Jan Leike resigned and wrote on Twitter that "safety culture and processes have taken a backseat to shiny products."

In the face of a charismatic leader, facts don't matter, process doesn't matter, safety doesn't matter. The only thing that matters is faith.

The prophets on the assembly line

Sam Altman is only the newest and most successful model off the Silicon Valley production line.

And this production line has turned out plenty of people we know well.

Take Musk. In 2014 he said that with AI "we are summoning the demon." Yet his Tesla is the world's largest robotics company and one of the most sophisticated AI application scenarios. After splitting with Altman, he founded xAI in 2023 and declared war. Within a year, xAI's valuation topped $20 billion. He warns of the demon's arrival while forging another demon himself. This dual narrative, one hand fighting the other, is identical to Altman's.

Or Zuckerberg. A few years ago he bet the company's future on the metaverse, burned nearly $90 billion, and found it a money pit. So he turned around and swapped the company's core narrative for AGI. In 2025 he announced a "Superintelligence Lab" and went recruiting for it personally. The same grand vision for humanity's future, the same capital story demanding astronomical investment, the same savior's pose.

And Peter Thiel. As Altman's mentor, he is more like the chief designer of this production line. He invests in companies promising "immortality technologies" while buying land in New Zealand for an end-times fortress; he obtained New Zealand citizenship after spending just 12 days in the country. His Palantir is one of the world's largest data-surveillance companies, its clients mainly governments and militaries. While preparing for civilization's collapse, he builds the sharpest surveillance tools for those in power. In the military operation against Iran in early 2026, it was Palantir's AI platform that served as the brain, fusing vast streams of data from spy satellites, communications intercepts and drones with Claude-model analysis, turning chaotic information into real-time decision intelligence for targeting and decapitation strikes.

Each of them plays the dual role of warning of the apocalypse and driving it forward. This is not a split personality; it is a business model the capital markets have certified as the most efficient. They manufacture and traffic in structural anxiety to capture attention, capital and power. They are products of this system and, at the same time, its architects, the force behind the grand narrative.

Silicon Valley no longer merely exports technology; it is a factory that produces modern myths.

Why the trick always works

Every few years, Silicon Valley births a new prophet who, with a grand narrative of doom and salvation, draws in the attention of capital, the media and the public. The trick has been repeated again and again because each of its components works precisely on a specific vulnerability in human cognition.

Step one: manage the rhythm of fear, don't just create it.

AI's potential risks are real, but they could have been discussed calmly. It was these people who chose to present them in the most dramatic way possible, with precise control over the rhythm at which fear is released.

When the public is frightened, when hope is offered, when the alarm is sounded: all of it is designed. Fear is the fuel, but choosing when and how to light it is the real craft.

Step two: turn the incomprehensibility of the technology into a source of authority.

To the vast majority of people, AI is an entirely opaque black box. When something complex and incomprehensible appears, people instinctively hand interpretive power to "the people who understand it best." They grasp this deeply and turn it into a structural advantage: the more mysterious, dangerous and beyond common sense they paint AI, the more irreplaceable they become.

The terrifying thing about this logic is that it is self-reinforcing. Any external challenge dissolves automatically because the challenger "doesn't understand enough." Regulators don't understand the technology, so their judgment isn't credible; academic critics haven't trained frontier models, so their concerns are armchair theorizing. In the end, only they themselves are qualified to judge themselves.

Step three: replace "interest" with "meaning," and let the followers surrender their criticism.

This is the layer of the system hardest to recognize and the most durable source of its strength. They are not selling a job or a product, but a story that makes sense on a cosmic scale: you are deciding the fate of humanity. Once this narrative is accepted, followers voluntarily give up independent judgment. For in the face of a mission that concerns human survival, questioning the leader's motives makes one feel small, even an obstacle to history. It makes people willing to hand over their capacity for criticism, and to understand doing so as a noble choice.

Put the three steps together and you see why the system is so hard to shake. It does not rely on lies; it relies on a precise understanding of human cognitive structure. First it manufactures a fear you cannot ignore, then it monopolizes the interpretation of that fear, and then, with the word "meaning," it turns you into its most faithful evangelist.

And in this system, Altman is the best-running model yet.

Whose manifest destiny?

Altman has always said that he holds no equity in OpenAI and takes only a symbolic salary. This has been the cornerstone of his "in it for love, not money" narrative.

But in 2024, Bloomberg estimated his personal net worth at about $2 billion. This wealth comes mainly from a string of venture investments over the past decade or so. His early stake in the payments company Stripe reportedly returned hundreds of millions of dollars, and his investment in Reddit paid off at its IPO. He also backed the nuclear fusion company Helion, saying the future of AI depends on energy breakthroughs and that he was betting on fusion; then OpenAI went to Helion to negotiate electricity purchases. He says he recused himself from those negotiations, but anyone can see the chain of interests.

He holds no direct equity in OpenAI, but he has built a vast investment empire around it with himself at the center. Every grand sermon about humanity's future injects value into some corner of that empire.

Now, does his doomsday kit, packed with guns, gold and antibiotics, and that plot of land in Big Sur ready for him to fly to, read a little differently?

He never hid any of it. The escape bag is real, the bunker is real, the obsession with the apocalypse is real. But he is also the man pushing hardest toward that apocalypse. The two are not contradictory, because in his logic the end does not need to be stopped; you just need to reserve a seat in advance. He is obsessed with playing the only man who can see the future and prepare for it.

Preparing a physical escape bag and building a financial empire around OpenAI are the same act: in a future he is driving toward uncertainty with his own hands, lock in the winner's best seat.

In February 2026, no sooner had he finished endorsing the "AI not for war" line than he signed a contract with the Pentagon. This is not hypocrisy; it is built into his business model. The moral gesture is part of the product, and the commercial contract is the source of profit. He must play both the merciful savior and the cold-blooded doomsday prophet, because only when both roles are played can the story keep being told, and his "manifest destiny" be revealed.

The real danger was never AI, but those who believe they have the right to define humanity's destiny.
