OpenAI's latest interview: What's the next step for ChatGPT after closing Sora

2026/04/03 04:30

Video title: OpenAI President Greg Brockman: AI Strategy, AGI, and the Super App

Video by Alex Kantrowitz

Photo by Peggy Block Beats

 

The program has long focused on changes in the AI, science, and technology industries and their business structures, and serves as an important window into frontline judgments from Silicon Valley.

In this conversation, Brockman did not stop at model capability itself, but pushed the question forward: once AI's capability is largely validated, how will the industry choose its path, reshape its product forms, and absorb the systemic impact? The discussion revolved around OpenAI's product strategy, the upcoming "super app," and his judgment that AI is entering a "takeoff phase."

This conversation can be understood on three levels.

First, the path is narrowing.
From video generation to reasoning models, from multiple parallel lines to active trade-offs, OpenAI's choice is not a simple technical judgment but a response to real constraints: compute has become the core bottleneck. With limited resources, the technical roadmap began to narrow to the two directions with the most leverage: personal assistants and solving complex problems. It also means the competitive logic of AI is shifting from "do whatever can be done" to "do what matters first."

Second, the reshaping of product form.
The introduction of the "super app" is essentially a leap in product form. AI is no longer a collection of fragmented tools but a single entry point: it understands context, calls tools, carries out tasks, and builds on memory across different scenarios. From ChatGPT to Codex, AI is gradually taking over the full workflow, while the human role shifts from executor to dispatcher: setting goals, assigning tasks, and monitoring.

Third, a turn in the rhythm.
If the past two years were a phase of capability ramp-up, what is happening now is "takeoff." On one hand, model capability has jumped from "completing about 20% of tasks" to "covering about 80%," directly triggering the re-engineering of workflows; on the other hand, AI is participating in its own evolution (optimizing AI with AI), stacking synergies across chips, applications, and companies into a continuously accelerating feedback loop. AI is no longer a single-point technology but is beginning to become a key engine of economic growth.

At the same time, another set of issues emerges in parallel: public mistrust, uncertainty about employment, disputes over data centers, and the boundaries of security and governance. The answer Brockman gives is not purely technical. He emphasized two things: first, the risks cannot be addressed through "centralization," and society needs to build infrastructure around AI similar to the electricity system; second, the capabilities individuals need are changing. What really matters is not "can you use the tools" but "can you use AI to achieve your goals."

If the question in the past was "what can AI do," the question now is: when AI starts doing most of the work for you, what is left for you to do?

The following is the interview transcript (consolidated for easier reading):

TL;DR

AGI has entered the "clear path" phase: Greg Brockman (OpenAI co-founder) believes that GPT-based reasoning models already have a clear route to AGI, expected to be reached within a few years, though the capability profile will remain "jagged" (uneven).

Note: AGI (Artificial General Intelligence) refers to an AI system that matches or even exceeds human capability on the vast majority of cognitive tasks. Unlike today's "special-purpose AI" (e.g., image recognition, recommendation algorithms), AGI emphasizes cross-task generality and transfer ability.

Strategic convergence, from multiple lines to two core applications: Under compute constraints, OpenAI is concentrating its resources on "personal assistant" and "complex problem solving" rather than pushing all directions (e.g., video generation) simultaneously.

The super app will be the entry form of AI: Chat, programming, the browser, and knowledge work will be integrated into a unified system, with AI moving from tool to "executor" and users moving to "dispatcher."

Key transition, AI starts taking over workflows instead of assisting: Model capability has jumped from "finishing 20% of tasks" to "taking on 80%," forcing individuals and businesses to restructure how they work.

Compute has become the core bottleneck and the focus of competition: Demand for AI far exceeds supply; future constraints lie not in model capability but in computing resources, and data centers and infrastructure are the key variables.

Technology self-acceleration (AI optimizing AI) superimposed on industrial synergies (chips, applications, businesses) is driving AI from tool to engine of economic growth.

The greatest risks lie not in the technology but in governance and use: Security problems cannot be addressed by a single actor; an open ecosystem and social infrastructure are needed.

Individual core competencies are changing: Future competitiveness is not "execution" but "setting goals + managing AI," and proactive use of AI will be a foundational skill.

Q:

Alex (Moderator):
Today we've invited Greg Brockman, co-founder and President of OpenAI, to talk about AI's biggest opportunities, how OpenAI plans to seize them, and the idea of the "super app." Greg joined us in the studio today.

Greg Brockman (OpenAI co-founder & President):
Nice to meet you. Thanks for the invitation.

Why shut down Sora? Not enough compute

Alex:
This is an interesting moment: OpenAI is suspending its push into video generation and concentrating resources on a "super app" that will integrate business and programming scenarios. From the outside, myself included, it feels like OpenAI was leading on the consumer side and is now reallocating resources. What happened?

Note: In March 2026, OpenAI announced it was shutting down its video-generation product Sora (including the app and API) and stopping related commercial development.

Greg Brockman:
For some time now, we've been developing this deep-learning technology to see if it actually has the positive impact we've been hoping for: can we build applications that really help people and improve their lives?

At the same time, we've been running another line: deploying the technology. One purpose is to sustain operations; the other is to build real-world experience in advance and prepare for the moment when the technology truly matures.

And now we've reached a new stage. We can see that the technology really works. We're moving from benchmarks and abstract capabilities into a new phase: it has to be placed in the real world, where it can actually do work and evolve through user feedback.

So I prefer to read this change as a strategic shift driven by technological change.

It's not that we're moving from "consumer" to "business." More precisely, we asked: with limited resources, which applications should get the highest priority? Because we can't do everything.

Which applications can truly land, create synergies among themselves, and have practical impact? If you list all the directions, the consumer side can be broken down into different kinds of things: a personal assistant, a system that really knows you, is aligned with you, and helps you achieve your life goals; creativity and entertainment; and many other possibilities. At the enterprise level, if you look from a higher vantage point, it can actually be abstracted into one thing: can AI do the work for you?

For us, the current priorities are very clear, with only two things at the top: first, a personal assistant; second, an AI that can help you solve complex problems.

The problem is that our current compute can't even satisfy existing demand. Once more applications are added, full coverage is simply impossible. So it's a realistic judgment: the technology is maturing fast and its impact is about to explode, and we have to choose the most important directions and really see them through.

Alex:
You mentioned an analogy before, saying OpenAI was a bit like Disney: there's a core asset that can be extended into different scenarios. Disney has Mickey Mouse and can do movies, theme parks, Disney+. OpenAI's "core" is a model that can be used for video generation, assistants, and business applications.

But now it looks like you're not taking that full-rollout path; you have to choose?

Greg Brockman:
Actually, I think that's still roughly how it is. But the key point is: from a technical perspective, Sora and GPT actually belong to two different technical branches. They are built in completely different ways.

The problem is that at this stage it is very difficult to advance both technology trees simultaneously, especially with limited resources. So the choice we made was to focus our main resources on the GPT path at this stage.

Of course, that doesn't mean we're giving up the other directions. In robotics, for example, we're continuing our research. But robotics itself is at an earlier stage; it's not yet ripe for a real breakout.

By contrast, in the coming year, we'll see AI truly take off in knowledge work.

And it's important to stress that the GPT route is not just "text." For example, two-way voice interaction (speech-to-speech) is part of this technology path, and it makes the AI more useful and practical. These capabilities are, in essence, different adaptations within the same model system.

But if you pursue two completely different technology branches, it's very hard to sustain both long-term under compute constraints. And the reason is that demand is simply too great: after almost every model release, people want to do more with it.

Alex:
Then why not focus on world models? Video models, for example, need to understand relationships between objects, which is also critical for robots. And Sora was actually moving very fast. Why bet everything on the other path?

Note: World models focus on perception and physical intuition; the core idea is to make AI understand how the world works, not just learn surface patterns from data. The term is usually used to describe systems like Sora: not just generating images or video, but modeling relationships between objects (e.g., people, cars, light), continuous change over time (e.g., evolution between frames), and underlying physical regularities (e.g., motion, occlusion, and collision). By contrast, GPT is a model of language and reasoning, focused more on abstract cognition and task performance.

Greg Brockman:
The biggest problem in this field is actually that there are too many opportunities.

We found early on at OpenAI that when an idea is mathematically reasonable, it usually works and produces good results. This suggests that the foundation of deep learning is very powerful: it can abstract rules from data and transfer them to new scenarios. You can apply it to world models, scientific discovery, programming, and so on.

But the key is: we have to make trade-offs.

There's been a debate in the past: how far can text models go? Can they really understand the world? I think that question now has an answer: text models can get to AGI.

We've seen a clear path, and there will be stronger models this year. Inside OpenAI, one of our greatest pains is how to allocate compute, a problem that only gets worse, not better. So in essence this is not a question of which route matters more, but of timing and sequencing.

Now, some applications we used to think were remote have begun to come within reach. For example, solving physics problems that haven't been solved yet. We just had a case where a physicist had been studying a problem for a long time, gave it to a model, and 12 hours later got back a solution. He said it was the first time he felt a model was actually thinking. That problem may not even have been solved by a human before, but the AI did it.

When you see something like this, your only choice is to double down and triple your investment. Because it means we can really unlock huge potential.

So for me this isn't a competition between directions; it's: what is OpenAI's mission? How do we bring AGI into the world? How do we make it truly benefit everyone? We've seen that path, and we know how to move it forward.

Betting on GPT, not the world model: the path to AGI

Alex:
Well, I do want to come back to the next-generation model you just mentioned, but let me ask this first.

I spoke with Demis Hassabis of Google DeepMind earlier this year. Interestingly, he said that for him, the closest thing to AGI was actually their image generator, Nano Banana.

Note: Demis Hassabis is one of the key people driving AI from research to breakthrough applications. He founded DeepMind, which developed AlphaGo; its defeat of the world Go champion in 2016 was a landmark event in the history of artificial intelligence.

His reasoning was that to generate such images and videos, the model has to understand how objects interact, at least in terms of how the world operates.

So does that imply a risk? It's a big bet: if he's right, could OpenAI miss something by putting its chips behind the other technology tree?

Greg Brockman:
What if that's true? I have two answers.

First, of course that possibility exists. That's where you have to choose and place bets. And OpenAI has done this from the beginning: we have to figure out what the path to AGI is and commit to it with high focus. Like random vectors, the net result can be close to zero; but if you align all the vectors, they drive you in a clear direction.

The second point is that image generation is actually a very popular capability in ChatGPT, and we're still investing in it and treating it as a priority. We can do this because it doesn't really belong to the world-model or diffusion-model branch; it's actually built on the GPT architecture. So while it deals with a different data distribution, at the core of the technology it's the same thing.

And this is one of the most amazing things about AGI: sometimes very different applications, speech-to-speech, image generation, text processing, and the use of text itself in different contexts such as scientific research, programming, or personal health, can all be accommodated within the same technology framework.

So from a technical perspective, one thing the company and I keep thinking about is how to unify our efforts as much as possible. Because we really believe this technology will lead to an overall upgrade, even a lift of the entire economic system.

And this thing is too big. Of course we can't do everything, but we can do the part that belongs to us.

Alex:
This is what the "general" in Artificial General Intelligence means.

Greg Brockman:
That's right. That's the G. That's what it really means.

Alex:
Speaking of "unified," what would this super app look like?

Greg Brockman:
The way I understand it, the super app is...

Alex:
It brings together chat, programming, the browser, and ChatGPT, right?

Greg Brockman:
Right. What we want is an end-user-facing application that lets you really experience the power of AGI, which is its "universality."

If you think about today's chat product, I think it will evolve into your personal assistant, your personal AGI, an AI that is really yours. It knows you well, knows a lot about you, is aligned with your goals, is trustworthy, and can to some degree represent you in the digital world.

As for Codex, you can think of it this way: it's still a tool designed primarily for software engineers, but it's turning into "Codex for everyone."

Anyone who wants to create or build something can use Codex to make the computer do what they want. And it's no longer just about writing software; it's more like using the computer. For instance, I'll have it help me with my laptop settings. Sometimes I forget how to set up hot corners, and I just ask Codex to do it, and it does.

That's what computers should have been all along. They should adapt to people, not the other way around.

So you can imagine an application where you can say anything you want the computer to do. It will incorporate "computer use" and "browser operation" capabilities so the AI can really operate web pages while you monitor what it's doing. And whether your interaction is chat, writing code, or knowledge work in general, all these conversations are unified in one system. The AI will remember, and it will understand you.

That's what we're building.

But honestly, that's just the tip of the iceberg, the part above the surface. What really matters to me is the unification of the underlying technology.

We mentioned unification at the model layer earlier, but what has really changed over the past few years is that it's not just the "model" itself; more important is the surrounding system that carries it. That is: how does the model get context? How does it connect to the real world? What can it do? How does the interaction loop with the user work as new context comes in?

In the past we actually had multiple sets of these, or at least a few slightly different sets. Now we're bringing them together. Eventually we'll have a unified AI layer, and then, in a very lightweight way, point it at different specific applications.

Of course, you can still build a small plugin or a small interface dedicated to financial services or legal services, but in most cases you won't even need it, because the super app itself will be broad and universal enough.

Alex:
This application is for both businesses and individuals?

Greg Brockman:
Yeah, that's actually the heart of it. Like a computer, like your laptop: is it for personal use or for work? The answer is both. It's your device, your interface to the digital world. And that's exactly what we want to build.

Alex:
And from a non-business angle, what would I do with it in my personal life? How does my life change?

Greg Brockman:
The way I understand it, in personal life it will first continue the way you use ChatGPT.

How do you use ChatGPT now? People are already using it for very diverse and amazing tasks. Sometimes it's just, "I have to give a speech at a wedding. Can you help me draft it?" Or, "Can you look at this idea and give me some feedback?" Or, "I'm running a small business. Can you give me some ideas?"

Some of these scenarios are personal, and some have begun to blur the boundary between personal and work. My point is, all of these should go to the super app.

Greg Brockman:
But if you look back at ChatGPT, it's actually already evolving.

It used to have no memory, right? For everyone it was the same AI, starting from scratch, almost like talking to a stranger. But if it remembers your past interactions, it becomes much stronger. If it can access more context, it becomes much stronger.

For example, it connects to your mailbox and your calendar, truly understands your preferences, has a deeper set of background information about your past experience, and uses it to help you achieve your goals. ChatGPT already has a feature called Pulse, which, based on what it knows about you, proactively offers things you might be interested in.

So at the level of personal use, the super app will cover all of this and will do more and more.

Alex:
When are you going to launch it?

Greg Brockman:
More precisely, we'll move in this direction step by step over the coming months. The complete vision we're talking about will be delivered gradually; rather than going live all at once as a single integrated release, it will emerge in phases.

For example, today's Codex application actually contains two layers: a general-purpose agent harness that can use tools, and an agent that can write software.

And that general-purpose harness can actually be used in many other scenarios. Point it at spreadsheets or Word documents and it helps you with knowledge work.

So our first step is to make the Codex application more useful for general knowledge work. Because we've already seen this inside OpenAI: people have started using it that way.

That will be the first step, and there will be many steps after it.

Alex:
When I was talking with one of your colleagues yesterday about Codex, he mentioned someone doing video editing with Codex: he had Codex handle the video for him, and Codex even built a plugin for Adobe Premiere, divided the video into chapters, and started editing. Is that the kind of thing you're going for?

Greg Brockman:
I especially love hearing cases like that. That's how we want the system to work. And it's interesting: Codex was originally designed for software engineers, so for non-programmers it isn't really very usable, because there are so many small problems in the setup.

Developers know what those mean and how to fix them; we're used to it. But if you're not a developer, you're like, "What is this? I've never seen it before."

Even so, we've seen a lot of people who have never written a program start using it to build websites, or do what you just described: automating the interactions between different pieces of software, with huge leverage. For example, someone on our communications team hooked it up to Slack and email so it could process a lot of feedback and produce good summaries.

So what's happening is that the most motivated people are already willing to cross these thresholds and are getting high returns.

In a sense, the hardest part has been accomplished: we've made an AI that is truly smart, capable, and able to actually complete tasks.

What remains is the relatively "easy" part: making it truly usable for everyone, and tearing down the barriers to entry.

Alex:
On the competitive landscape: Anthropic now has the Claude app, both the chatbot and Claude Code. In a way, they have their own super-app prototype.

What do you make of Anthropic? And how will OpenAI catch up?

Greg Brockman:
If you go back 12 to 18 months, we had long treated programming as a priority area, and we were getting the best results in programming competitions and the like. But one thing we under-invested in was last-mile usability.

That is, we weren't paying enough attention to this: the AI was smart enough to solve all the hard programming problems, but it had never seen a real-world codebase, and real-world codebases are often messy, far from the "clean" environments it knew.

At that point we were indeed lagging. But starting around the middle of last year, we began making this up very seriously. We assembled teams to map out where all the gaps were: what the real world looks like, where the complexity lies, what we had never truly touched before.

For example: how do you build the training data? How do you build the training environments? Let the AI really experience what "doing software engineering" feels like: being interrupted, hitting strange problems, dealing with the unexpected, and so on.

I think by now we've caught up. When users really compare us with competitors, many prefer us.

Of course, we also know we have a gap in the front-end experience, and we'll close it. But our approach this time is different: not just build a model and wrap a shell around it, but think of it as a product from the beginning. While doing the research, we're simultaneously asking, "How will this eventually be used?" That's a shift happening inside OpenAI.

So my view is that we're going to ship a very strong wave of upgrades. Looking at this year's roadmap alone, I'm very excited; there's a lot we can get done.

At the same time, we're closing that last mile with intense focus.

Alex:
Since 2022, OpenAI has been like the undisputed front-runner in this field. Clearly, the competition is no longer just about benchmark scores. You just said "we've caught up."

Has the atmosphere inside the company changed? In other words, it's no longer like before, when you were leading with a product like ChatGPT; you're now in genuine head-to-head competition.

Some outside reports reflect this change. For example, an internal company meeting emphasized that OpenAI could no longer afford "side quests" and that everyone had to focus on the central direction. So what has changed in the environment and atmosphere?

Greg Brockman:
I'd say that, personally, the moment at OpenAI that worried me most came right after we released ChatGPT.

I remember a "we won" atmosphere at the company's holiday party. I'd never felt that way before. My reaction was: no, we haven't. We're the underdog here.

And we always have been. Most of our competitors in this space are established large companies with more capital, more manpower, more data, almost all the resources.

So how does OpenAI compete? In a way, the answer is that we've never felt we could be safe. We've always seen ourselves as the challenger.

In fact, I find it healthy that the market is starting to show this competitive pattern, with other players emerging and doing well.

Because in my view, you can never pin your attention on competitors. If you just stare at where they are now, by the time you get there, they'll be gone.

And I think for some time it's been the reverse: a lot of people have been staring at where we are, while we've kept moving forward. If anything, this gives us a sense of internal alignment and unity.

As I mentioned earlier, in the past we almost treated "research" and "deployment" as two separate things; now we really want to integrate them. For me, that's a wonderful thing.

So I wouldn't say this is a stage where we had been "stable" and are suddenly in crisis. You know, things are usually neither as good as they say, nor as bad as they say.

I think we've been steady as a whole. I'm very confident in our roadmap and in the research we've done. As for the product side, I think we now have very good energy, with everyone coming together to deliver these things to the world.

Alex:
You've mentioned a few times that some strong new models are coming. What exactly are they?

The Information reports that you've completed pre-training for "Spud," and Sam Altman has told OpenAI staff internally that they should see a very strong model within a few weeks. That was a few weeks ago. Within the team, some even felt there was a real possibility it could accelerate the economy, and that things were moving faster than many had anticipated.

So, what exactly is Spud?

Greg Brockman:
It's a good model. But I don't think the point is really any single model.

Our R&D process is roughly this: first, pre-training, i.e., producing a new base model; all further improvements are then built on that foundation model. This is a step that requires enormous effort from many teams across the company. In fact, over the last 18 months I've spent most of my time here: mainly around the GPU infrastructure, supporting the teams responsible for the training framework, and making these large-scale training runs actually work.

Then comes reinforcement learning. That's how an AI that has learned a great deal about the world starts actually using that knowledge.

Then there's post-training. At this stage you really tell it: now that you know how to solve problems, go practice in different situations.

Finally, there's a "last-mile" phase for behavior and usability.

So I see Spud as a new base, a new pre-trained model. And on top of it, you could say our research over the past two years or so is beginning to really bear fruit. It will be very exciting.

I think what the outside world will ultimately feel is an overall jump in capability. But for me, it was never about a single release. Because when it comes out, it's only an early version of what we're going to do. We'll keep doing more at every step of this process.

So I think of it more as an accelerating engine of progress, and Spud is just one node along that road.
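Note: the four-stage pipeline Brockman describes (pre-training, reinforcement learning, post-training, last-mile tuning) can be sketched as a toy flow. All function names, fields, and data below are illustrative assumptions for exposition only; none of this is OpenAI code or an OpenAI API.

```python
# Toy sketch of the staged model-development flow described above.
# Every function and field here is illustrative, not an OpenAI API.

def pretrain(corpus):
    """Produce a new base model that has absorbed broad knowledge."""
    return {"stage": "base", "knowledge": len(corpus)}

def reinforce(model, environments):
    """Teach the base model to actually use what it knows by acting."""
    return dict(model, stage="rl", skills=len(environments))

def post_train(model, scenarios):
    """Practice known solution methods across varied situations."""
    return dict(model, stage="post-trained", coverage=len(scenarios))

def last_mile(model):
    """Polish behavior and usability before release."""
    return dict(model, stage="released")

base = pretrain(["web text", "code", "papers"])
model = last_mile(
    post_train(
        reinforce(base, ["coding", "math"]),
        ["messy codebases", "interruptions"],
    )
)
print(model["stage"])  # prints: released
```

The point of the shape, as he describes it, is that each stage builds strictly on the artifact of the previous one, which is why pre-training is the foundation everything else rides on.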

Alex:
So what do you think it will be able to do beyond today's models?

Greg Brockman:
I think it will solve harder problems and become more nuanced. It will better understand instructions and context.

People sometimes talk about "big-model smell": you can feel it when a model is genuinely smarter and more capable. It will be more aligned with your intentions, more attuned to your needs.

When you ask a question and the AI doesn't really understand what you mean, that feeling is still disappointing. You can't help thinking: you should have been able to figure this out.

So I'd say, in a sense, it's a lot of "mass" coming together. On one hand, many metrics will improve; on the other, there will be new scenarios: things you used to avoid using AI for because it wasn't reliable, you'll now use it for.

I think this will be an across-the-board change. I'm particularly looking forward to seeing how it keeps raising the capability ceiling. We've seen its performance in scenarios like that physics research case, and I think it will be able to solve more open-ended problems over longer horizons.

At the same time, I look forward to seeing how it raises the capability floor: whatever you want to do, it will be far more useful than today.

Alex:
But it isn't always easy for ordinary users to feel this change. Before GPT-5, for example, there was a lot of hype and expectation; but when it actually came out, the initial public reaction was somewhat disappointed. Only later did it become apparent that it was very strong at certain specific tasks.

So for the next generation, do you think the change will be felt more clearly in certain professional scenarios, or will it be a more intuitive, universal improvement for everyone?

Greg Brockman:
I think the story may still play out similarly. After the model is released, some people will immediately think, "This is night and day compared to what I've seen before." But there are also applications that aren't bottlenecked on "smarts." So if you just make the model smarter, users there may not feel the difference right away.

But as time goes on, I think everyone ends up feeling the change. Because what really changes is how much you start relying on the system.

If you think about how we interact with AI now, everyone has a mental model of what it can do. And that mental model doesn't change very fast. It usually shifts through experience: it does something amazing for you once, and you suddenly realize, it could do that, and I didn't think it could.

We've seen things like this around access to medical information. I have a friend who used ChatGPT to understand his different treatment options for cancer. His doctor had previously told him it was late-stage and there was nothing left to do. But he used ChatGPT to research many different approaches, and in the end he actually found a viable treatment.

In a case like this, the premise is that you have to have some confidence that AI can help in this scenario, and be willing to put in that much effort to extract value from the system.

So I think what we'll see next is that, in any similar application scenario, it will become more obvious to everyone that AI can help you.

So it's both the technology itself getting stronger, and our understanding of the technology changing and catching up.

Alex:
That means you'll rely on it more and more. Inside OpenAI, you are also developing an automated AI researcher, said to launch this fall. What exactly is that?

AI is in the early stages of "takeoff"

Greg Brockman:
In terms of the overall trend, I think we are now at an early stage of this technology's takeoff.

Alex:
What does that mean?

Greg Brockman:
It means AI is getting stronger along an exponential curve. Part of the reason is that we can use AI to help us improve AI itself, so the whole process is accelerating.

But I think the so-called "takeoff" is not just a technical matter; it is a release of real-world impact. Many technological developments look like an S-curve, and if you look at multiple S-curves over a longer time horizon, they eventually compound into something close to exponential growth.

I think we're at that stage. In other words, the technology itself is advancing at an accelerating pace, and the engine of progress is building momentum.

At the same time, in the outside world, there are a number of compounding factors: chip makers are attracting more investment, and a large number of people are building applications on top, trying to embed AI in different scenarios and looking for the points where it meets concrete needs.

All this energy is accumulating, and together it is pushing AI into a "takeoff period": from a marginal presence to a major engine of economic growth.

And it's not just what happens inside our walls. It's about the world as a whole, the economy as a whole, how together they drive this technology forward and how it takes hold.

Alex:
What exactly does this "researcher" do?

Greg Brockman:
The term "researcher" essentially means that as AI can take over more and more tasks, we should let it operate more autonomously.

Of course, there is a lot behind this that needs thinking through. It doesn't mean we just let it loose, let it run for a while, and come back later to see whether it worked.

I think we will still be very deeply involved in managing it. It's like bringing on a junior researcher today: if you leave him alone for too long, he'll probably wander down a path without much value. But a senior researcher, someone with a real sense of direction, doesn't need to have every hands-on skill himself; he can still provide continuous feedback, review the output, and give directional guidance: here is exactly what I want you to do.

So the way I understand it, the system we are building is a mechanism that will dramatically increase the speed at which we produce models, drive new research breakthroughs, and make them more useful in the real world. And all of this will happen at an accelerating rate.

Alex:
What exactly would it do? Would you say to it, "Go find AGI," and then it tries?

Greg Brockman:
To some extent, yes, at least in spirit. But in a more practical sense, I'd describe it as porting one of our research scientists, as much as possible, into silicon.

Alex:
Yet another way to understand "takeoff" is that AI moves from incremental progress to accumulating momentum, eventually becoming an almost unstoppable push toward intelligence smarter than humans.

Are you worried that, just as things might be going in the right direction, such progress could slip out of control or go off course?

Greg Brockman:
Of course; there's no doubt about it. I believe that to reap the benefits of this technology, it must be accompanied by serious thinking about its risks.

If you look at our approach to developing the technology, you'll find we have invested a lot in safety and security. A good example is prompt injection. If you're going to build an AI that is very smart, very capable, and has access to a lot of tools, you obviously need to make sure it can't be pushed around and manipulated by a strange instruction someone slips in.

That's something we've devoted a lot of energy to; I think we've achieved very good results, and there's a very strong team in charge of that part of the work.

Interestingly, some of these problems are actually similar to human ones. Humans are equally vulnerable to phishing attacks, can be misled, and can act without knowledge of the full context.

We carry these analogies into our own R&D. Every time we develop and release a model, we ask: how do we ensure it is truly aligned with human goals, and how can it genuinely help? That's something we care about very much.

There are also, of course, larger questions about the world and the economy as a whole: how will everything change? How can everyone benefit from this technology? That's not just a technical problem, and it's not just an OpenAI problem. But yes, I do think often not only about driving the technology forward, but about really ensuring it delivers positive effects commensurate with its potential.

Alex:
The problem is, it looks like a race. OpenAI will also be quickly replicated by many open-source players, and those players tend to be much weaker on safety guardrails and protective measures.

I remember you once said something to the effect that creative outcomes require many people to do the right thing, but destructive outcomes may require only one person with bad intentions. That's what worries me most. Because it clearly is a race, and it's moving fast. Many of your peers have said they would stop if everyone agreed to stop. But right now there's no sign of slowing down.

Is the reward really worth the risk?

Greg Brockman:

I think the return is worth it. But I also think that framing is too crude, too one-size-fits-all.

From the very beginning of OpenAI, we've been asking: what does a good future look like? How can this technology genuinely improve everyone's situation?

You can break that into two angles. One is the "centralization" view: the safest way to develop this technology is to have only one actor build it. That way there's no competitive pressure; you can slowly and carefully get things right, and when you're ready, decide how to deliver it to everyone. That idea is understandable, but it's also, to some degree, a recipe we find hard to accept.

The other path, which is the one we prefer, is to think in terms of "resilience." That is, treat it as an open system: many participants push the technology forward, but the focus is not just the technology itself; it's building the social infrastructure around it so it can be absorbed more safely.

Think about the evolution of electricity. Electricity is also produced by many different people and institutions, and it is genuinely risky and dangerous in itself. But we built a multilayered safety infrastructure around it: electrical safety standards, different usage norms, regulation at different scales. At very large scale there are specific regulatory requirements. Many people get to use electricity in a democratized way, alongside inspectors and a set of supporting institutions built around the characteristics of the technology.

I think AI is the same. What we really believe is that there must be a broad societal conversation around AI. If this technology really arrives and changes everyone's life, then people must be involved. It cannot be pushed forward and decided in secret by a small, centralized group.

So for me this has always been a very central question: how should this technology proceed? And what we really believe in is a "resilient ecosystem" that evolves around the technology's development.

Alex:
So what you're saying is that we're in the middle of "takeoff," and we're all in it. Nvidia CEO Jensen Huang recently said he thinks AGI has already been achieved. Do you agree?

Greg Brockman:
I think AGI means different things to different people, and it's true that many people think the technology we have today is already AGI.

That's debatable. But what I find really interesting is that the technology we have now is still very "jagged," with obvious gaps.

On many tasks, like writing code, it is clearly superhuman. AI can do it, and it dramatically reduces the friction of creating things. But at the same time, there are very basic things that humans do easily where AI still struggles.

So where do you draw the line? At this point in time, it's more of a feeling, a vibe judgment, than a question that can be strictly defined scientifically.

For me, I think we're obviously living through that moment. If you had shown me these systems five years ago, I would have said, "Yes, that's what we were talking about." It's just that reality looks very different from what we imagined. It doesn't match any of the forms we had in mind.

So I think we need to adjust our mental models accordingly.

Alex:
So you're saying: not yet?

Greg Brockman:
I'd say it's about 70% or 80% of the way there. So I think we're really close.

And I think one thing is perfectly clear: within the next few years, we will definitely reach AGI. Its performance may still be "jagged," not completely smooth and polished. But the floor of its ability to complete tasks will rise very high: it will be able to do almost any intellectual task you'd do on a computer.

I do have to allow a bit of uncertainty here, because it's a little like an "uncertainty principle": you can argue about it under different definitions. But by my own definition, I think we're almost there. A little further, and it's definitely there.

The critical turn: from 20% to 80% task takeover

Alex:

Tell me what happened in December 2025. Because it looked like a turning point: "let the machine write code for hours without interruption" suddenly went from a theoretical idea to everyone saying, "I think I can trust it and let it keep running for a while."

So what happened?

Greg Brockman:
After the new model was released, the share of tasks AI could handle went from roughly 20 percent of your work up to 80 percent. That's an extraordinary shift. Because at that point it's no longer just "a nice little tool"; you have to reorganize your workflow around the AI.

Personally, I had a very telling moment. For years I've kept a test prompt: have the AI build a website for me. I built one myself when I was learning to program, and it took me months.

In 2025, it would take about four hours and a few rounds of prompting to get it right. But in December, I asked only once, the AI did it in one shot, and it was good.

Alex:
And how did the models accomplish that?

Greg Brockman:
Much of the reason is that the base model itself got stronger. OpenAI has been continuously improving its pre-training techniques, and at that point we got our first glimpse of what that would mean. But it's not just a single-point breakthrough; more precisely, we are advancing along every dimension of innovation.

The interesting thing about these models is that in one sense they feel like they "jumped" all at once, but from another angle it's actually continuous evolution. It didn't suddenly jump from 0% to 80%; it went from 20% to 80%. So in a way, you can say it just kept getting better.

And I think that progress continues with every small update. For example, from 5.2 to 5.3: an engineer I work closely with had been unable to get the model to handle the kind of low-level, hard-core systems engineering he was responsible for. With the new version, the model could take his design documents, actually implement them, add metrics and observability, run the profiler, keep optimizing, and finally deliver the result he had hoped to produce himself.

So I'd describe it as "gradual advance, then suddenly it changes everywhere." And all of it is foreshadowed in what the models can already do today. Within a year at most, many things, some even sooner, will become extremely reliable.

Alex:
Does that surprise you too? I remember a while ago you said in an interview that Codex, the automated programming tool, was meant for software developers. But earlier today you said such tools are for everyone.

What changed your mind?

Greg Brockman:
I had actually been thinking about Codex in the context of writing code. After all, "code" is in its name, so it's naturally seen as a tool for programmers. And inside OpenAI a lot of people are software engineers themselves; we were building tools for ourselves, so it was natural to think that way.

But as the technology kept evolving, we began to realize something: the underlying technology we're really building is mostly not about "code" at all; it's about solving problems.

At its heart it's about managing context, setting up the execution framework, and thinking about how AI should enter real work and actually get things done. And once that's true, even in the programming scenario, it suddenly means anyone can use it. Because what you really have is a system that can execute for you. As long as you have a vision, a goal to accomplish, you can describe your intent and the AI can carry it out.

But that also makes you ask: why am I even drawing a line between "programming" and "non-programming"? A lot of work is, at bottom, just a kind of mechanical skill. Excel spreadsheets, for example, or presentations. If the AI already has enough context and raw intelligence, it can actually do those well.

So if we just make it more accessible and more friendly, it goes from "Codex is for programmers" to "Codex is for everyone."

Alex:
Now that we've seen this wave of model progress, there's another almost quiet phenomenon in Silicon Valley: OpenClaw, right? Or, more broadly, the whole tech circle has begun to trust AI in the way you just described: handing desktop control to an AI agent, or setting up a Mac Mini, giving it full access to mail, calendars, and documents, and letting it "take over your life."

Later, OpenAI recruited the founder of OpenClaw. So can you say more about the AI that helps you manage your life? You brought in the OpenClaw team; is that the vision behind it?

Greg Brockman:
I'd say the central point of this technology is figuring out how it really works, how people want to use it, what the vision for an agent is, and how it enters people's lives. Those are hard problems in themselves.

And one thing I've seen over and over across these generations of technological evolution is that people who are truly willing to engage deeply, who are curious and imaginative, are rare, and they will become increasingly valuable in the new economy.

Peter, the founder of OpenClaw, has, in my view, great imagination and a great creative drive. So to some extent this is about a particular technology; to another extent it's not a technical question at all. What really matters is how we embed these capabilities in people's lives and find where they truly belong.

So as a technologist, it's certainly exciting; but as someone who genuinely cares about delivering practical value to users, we are now investing heavily in this as well.

Alex:
You've said something interesting about this lately: that when you start working with these autonomous AI agents, you become the CEO of a fleet of thousands of agents carrying out your goals, your vision, and your mission, and you're no longer trapped in the details of how specific problems get solved.

But you also said that, in a sense, this new way of working can make people feel they are losing their feel for the problem itself.

Greg Brockman:
Is that a good thing? I think it's a trade-off.

What we need to do is recognize, on one hand, the real power these tools can bring, and on the other hand, minimize the weaknesses they introduce. For example, they give people greater leverage and greater agency: if you have a vision and something you want done, you can mobilize a whole fleet of agents to do it for you, and that's certainly powerful.

But if you think about how the world works, someone has to be accountable in the end. Suppose you're building a website and your agents screw things up in a way that hurts users; that's not really the agents' fault, it's your fault. So you have to care about this.

I believe anyone who wants to make real use of these tools has to realize that human agency and human responsibility are core components of the system. How people use AI is itself fundamental.

So the most important thing, I think, for users of these agents, and for us inside OpenAI, is that you can't abdicate responsibility. You can't just say, "The AI will handle it."

Alex:
Sure. But what you described just now was "losing your feel for the problem," and that's not quite the same thing as "responsibility."

Greg Brockman:
For me, the two are actually linked. Because the point is, if you're the CEO but you're too far from the details, say you're running a team or a company and you lose your sense of what's happening on the front line, that usually doesn't lead anywhere good. So what I was trying to say is not that "humans ultimately knowing nothing" is something worth pursuing.

Of course, there are details you can comfortably hand off. Just as when you hire a general contractor to build your house, there are many details you probably don't have to look at, because you trust them to take care of it. But in the final analysis, if something goes wrong in certain key details, you should still care and still know.

So there's a very important fine distinction here: you can't just blindly say, "I'm willing to lose that sense of the problem." On the contrary, you should proactively say: I still need to retain that sense and truly understand the system's strengths and weaknesses.

And when you do start pulling yourself out of the lower-level, more mechanical work, it should be because you've built enough trust in the system to be sure it will get the job done.

Alex:
One last question about models. You've described the path of model evolution: from pre-training to fine-tuning to reinforcement learning, which made models better at solving problems end to end and able to carry out tasks on the internet.

And now we're at a stage where models learn to use tools through this process. If I understand correctly, what's the next step on this path?

Greg Brockman:
I think the world we're living in is one of deepening and broadening model capabilities. Part of that, of course, is tool use, but we also need to get the tools themselves right. For example, if an AI can already operate a computer and use a desktop system the way a human does, then in principle it can do anything you can.

But at the same time, we have to build a lot of infrastructure around the models. In the enterprise environment, for instance: what about identity and permissions management? What about audit trails and observability? There's a lot of technology that needs to catch up with the models underneath. And in the overall direction, I think it will include things like a truly natural voice interface. In other words, you can talk to the computer as naturally as you and I are talking right now, and it genuinely understands you, does what you need, and makes valuable suggestions.

For example, it proactively reminds you that something you've been driving is stuck, and here's where the problem is. Or when you wake up in the morning, it says, "Here's your daily brief; here's how much work your agents did overnight."

Maybe it's already running a business for you; I think that will be a huge application of this technology. The democratization of entrepreneurship will definitely happen. It'll tell you: these areas have problems; this customer is unhappy right now and wants to talk to a real person, so you'd better handle that one yourself. That's what it will look like.

Then I think the next phase will also include the ceiling of the goals humans can take on, which this technology will keep raising. We're already seeing the leading edge of this trend. The most exciting version is something like AlphaGo's Move 37: a move no human would have played, creative, and one that changed many people's understanding of the game.

That will happen in every field. It can happen in science, mathematics, physics, chemistry; in materials science, biology, medicine; even in literature, poetry, and many other fields. It will unlock new space for human creativity and understanding in ways we can't imagine today.

Alex:
But if the models are as strong as you say, why hasn't that really happened yet?

Greg Brockman:
I think there's a capability overhang here: a huge gap between what the models can really do and how people actually use them. In a sense, our understanding of what's inside the models is still catching up.

So I think that even if the technology made no further progress from here, the world would still change enormously; a compute-driven, AI-driven economy would still arrive.

But there's another reason too: what we're best at is training models on "measurable" tasks. That's why we started with math and programming: those tasks have very clear verifiers. The answer is right or wrong, and it can be judged unambiguously. Over the past while, by expanding the range of things that can be tested and evaluated, we've been able to gradually bring models to more open-ended questions.

And AI itself can help with this. If the AI is smart enough to understand the task, you can give it an evaluation standard and it can learn over time. But tasks like creative writing are hard to score: how good is this poem?

So in those scenarios it used to be genuinely harder for AI to learn through repeated attempts and feedback. But all of that is changing, and we can see the path ahead quite clearly.
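The "clear verifier" asymmetry Brockman describes can be sketched in a few lines. This is purely illustrative (the function names and toy tasks are invented, not OpenAI's training code): a math answer or a unit-tested program yields an unambiguous reward signal, while a poem does not.

```python
# Illustrative sketch: reinforcement learning on "verifiable" tasks
# hinges on a reward function that scores an answer unambiguously.

def verify_math(answer: str, expected: str) -> float:
    """Binary reward: the model's final answer either matches or it doesn't."""
    return 1.0 if answer.strip() == expected.strip() else 0.0

def verify_code(src: str, tests: list[tuple[int, int]]) -> float:
    """Reward = fraction of unit tests the generated function passes."""
    namespace: dict = {}
    try:
        exec(src, namespace)     # run the model-written code
        fn = namespace["solve"]  # assumed entry-point name, by convention
        passed = sum(fn(x) == y for x, y in tests)
        return passed / len(tests)
    except Exception:
        return 0.0               # broken code earns nothing

# A poem has no such verifier; that is the asymmetry Brockman points to.
print(verify_math("42", " 42 "))
print(verify_code("def solve(x): return x * 2", [(1, 2), (3, 6)]))
```

Expanding the set of tasks that admit a verifier like this, including using a strong model itself as the grader, is what lets training reach more open-ended problems.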

Alex:
That's interesting. Peter Thiel said something earlier to the effect that if you're a math person, you might be more affected by these models than if you're a words person. And you were in the Math Club. Doesn't that worry you?

Greg Brockman:
I think it's always easier for people to see what they've lost than what they've gained, because we have deep experience of "this is how I used to do it." I used to go to math competitions, and now AI can do math competitions. But it was never really about the math competition itself, was it? That's not the core thing driving humanity forward.

If you look at how we work today, sitting in front of one box, typing at another, we didn't live that way a hundred years ago. It's not a natural state, nor is it really the world we're meant to inhabit.

That's not the essence of being human. What really matters is being present, being engaged, being connected to other people.

And what I think we're about to see is AI freeing up a great deal of time and giving humanity more opportunity to strengthen those connections and build more person-to-person ties.

I'm very excited about that。

Alex:
Okay. As you move further toward these more agentic applications, the outside world has also begun to ask: will there still be a need for so much training in the future?

Especially once the model is good enough, it seems you can put it into the real world and get big gains from methods that don't depend on pre-training. And the things that really need giant data centers are mostly pre-training.

You've long been in charge of scaling and championing it. What do you make of that argument?

Greg Brockman:
I think that argument misses a very important point about how the technology evolves. Every stage of the model production pipeline amplifies the others, so you want everything to get stronger.

What we see is that once pre-training gets stronger, every step after it gets much easier. That makes sense: because the model is more capable from the start, it learns faster; it also moves faster and makes fewer mistakes when it tries different approaches and learns from its own errors.

So the real change is not that we went from "training a purely closed, self-contained reasoning system" to "exposing it to the real world." Rather, we realized that it's not enough for the model itself to be big and strong; it should also try to understand how people use it in the real world and feed that usage back into the training process. But none of that diminishes the value or importance of continuing to push the pre-training side.

I think there's another change too: in the past we focused mainly on upgrading raw capability at the pre-training stage, and much less on the inference stage, the deployment stage. A big shift over the last 24 months is that we've come to realize the need for balance.

That is, you can have a very powerful base model, but it also has to be efficient enough at inference to actually operate. Because you want it to learn more, and you want to actually deploy it into the real world, and that requires it to be highly efficient at inference.

This also means you don't necessarily push the training run to the largest theoretically possible scale, because you also have to account for the enormous amount of subsequent usage.

What you really want is the sweet spot in the product of intelligence and cost, not to optimize a single dimension.
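The trade-off Brockman describes can be made concrete with a toy budget split. All numbers and the functional forms below are invented for illustration; the only point is that when capability has diminishing returns on training compute while serving scales linearly, the optimum is an interior split, not all-training or all-serving.

```python
import math

# Toy model (invented numbers): a fixed compute budget is split between
# one training run and the inference fleet that serves the model afterwards.

def value(budget: float, train_frac: float) -> float:
    """Total value = capability x tokens served. Capability grows roughly
    logarithmically in training compute (diminishing returns), while
    tokens served grow linearly in inference compute."""
    train = budget * train_frac
    serve = budget * (1.0 - train_frac)
    capability = math.log1p(train)
    tokens_served = serve
    return capability * tokens_served

budget = 100.0
# Sweep the split in 1% steps and keep the best one.
best = max((value(budget, f / 100), f / 100) for f in range(1, 100))
print(f"best training share under this toy model: {best[1]:.2f}")
```

Neither extreme wins: spending everything on training leaves nothing to serve with, and spending nothing on training serves a weak model, which is the "multiply, don't optimize one dimension" point.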

Alex:
If the future shifts mainly to inference, will you no longer need Nvidia's GPUs?

Greg Brockman:
Of course we do.

Alex:
Why?

Greg Brockman:
There are many reasons.

One of them is that, whatever the ratio between training and inference ends up being, frontier-scale training runs can still only be done by concentrating massive compute on a single problem, and there is currently no alternative to that.

So what I think is more likely in the future is that the share of compute on the deployment side grows significantly; but at the same time, there will still be moments when you need to run a particularly large round of pre-training, and then you still need to pool a huge amount of compute.

And I think Nvidia's team is genuinely great; they do amazing work. So yes, we work very closely with them.

Alex:
Will there come a day when people say, "We've trained enough; the models are smart enough"?

Greg Brockman:
I think that's a bit like saying, "Maybe we can say that once humanity has solved every problem in front of it." But what we want to reach is actually a much higher ceiling.

Over the past 50 years, our ambitions for many goals have, to some extent, receded. Some problems seem very clear; for example, can we give everyone access to healthcare? And not just "treat the disease once it appears," but real preventive medicine: looking at lifestyle, helping people detect potential risks at an early stage, before disease occurs. Problems like that, I think, can actually be solved with more intelligent models.

Of course, there may be a level at which the problems are completely solved, and at that point you might ask: do I need a model twice as smart? But there will always be other problems that require higher levels of intelligence.

Compute is not a cost center, it's a revenue engine

Alex:
Let's talk about the numbers behind these data centers. You raised $110 billion earlier this year. How does the math work? Does that money go directly to data centers? And how do you plan to eventually return it to investors? Walk me through the calculations.

Greg Brockman:
I think it's very simple at heart: our biggest expenditure right now is compute. But you can't think of compute as just a cost center; it's more like a revenue center.

You can think of it like a sales team. How much would you pay for sales? As long as your product sells, and as long as you have a mechanism to take it to market at scale, the more you hire, the higher your revenue.

And the world we're in is one where we have found, over and over, that we cannot build compute fast enough to keep pace with demand growth. I feel that very concretely right now. We have to make genuinely painful decisions: which features ship and which are put off; which gets priority over which.

And I think this will play out at a broader level as the whole economy moves toward being AI-driven.

The real question for the future will be: which problems can reach that kind of scale? How do you expand so that everyone has a personal agent? How do you get everyone using a system like Codex?

There isn't enough human talent in the world to do all of that work. So we're preparing for it in advance.

Alex:
But this is a whole new category, right? And you're betting with a very strong degree of certainty, with sums the world has never seen before. When you're creating a new category, how can you be so sure it will ultimately hold up?

Greg Brockman:
I think there are several components.

First, there's already historical precedent. From the moment ChatGPT was released, I remember a very clear conversation with the team. They asked me, "How much compute should we buy?" I said, "All of it." They said, "No, seriously, how much?" I said, "However much we build, I know we won't be able to keep up with demand."

And every year since, that has been proven right. The problem is that this kind of compute is usually locked in 18 months in advance, sometimes 24 months or even longer. That is, before the machines are actually delivered, you have to make the judgment. Which means you have to lean forward very aggressively.

And the world we're heading toward is this: so far, most of our revenue has come from consumer subscriptions, and that will remain very important. Of course, we're also building other revenue streams.

But the bigger opportunity now emerging is knowledge work.

And we've seen it very concretely: almost every enterprise is beginning to realize that this technology is genuinely useful, and that to stay competitive they have to adopt it. You can see a very natural progression: it's already used by a huge number of software engineers, and now it's starting to spread more widely, with people using it across business scenarios. The willingness to pay emerging in this sector, and the revenue growth you can see, is very clear.

This is happening now. You just have to keep pushing it forward. And what we can perhaps see more clearly than outsiders is how these models are going to progress.

Put all that together and you find: the economy itself is a huge, almost unimaginably large thing. And from here on, the highest-growth factor in that economy will be AI: how well you can use it, and how well you can drive it.

Alex:
You just said that consumer subscriptions are still your largest source of revenue. Does your judgment mean that, in the future, enterprise will be the largest source?

Greg Brockman:
I think it's very clear now that the enterprise side is growing rapidly. Of course, the word "enterprise" itself has changed, because what it really points to is people using AI in productive knowledge work.

And in terms of pricing, I don't think the categories will be as clear-cut as in the past. Take the way Codex works now, for example: if you have a consumer subscription to ChatGPT, you can actually use Codex.

So I don't think the future will have that kind of clean B2B-versus-B2C distinction. More likely, as a user, you will have a single portal, like your laptop, which is your portal to the digital world.

And the real revenue comes from there.

Alex:
Dario said something, and I think he was probably talking about you: some players are taking on too much risk, and he's very worried. I think he's referring to your massive bets on infrastructure. What do you make of that?

Greg Brockman:
I disagree. I think we've been very careful, and we really do see what's coming next. If you look at this year alone, everyone who is truly in the arena feels that compute is the limit.

And I think we simply realized that earlier than others and started preparing for where the technology was going.

What I saw instead was that many other players probably realized it only at the end of last year, then started to panic and look for capacity, but there was little left to buy.

So I think it's easy to say that. But the reality is that everyone now realizes this technology is feasible, that it has arrived, and that it is real. Software engineering is only the first clear example.

And what really limits us is compute.

Alex:
He also said that if his predictions turned out slightly wrong, his company might go bankrupt. Do you face the same risk?

Greg Brockman:
I think there are actually more ways out. If you start thinking through the next steps, and I think that's perfectly reasonable to do, you'll find that in a sense the bet was never on any single company.

It's really a bet on the whole industry. The bet is: do you believe this technology can be built, and can deliver the enormous value we see in front of us?

I'll come back to the most direct proof point. Software engineering: if you're not a software engineer and you're not actually using Codex, it's hard to grasp from reading alone how different this experience is. The difference is genuinely hard to describe. But I think people will really feel it soon.

Six months ago, this feeling existed only inside our company; then it began to show up outside. In another six months, I think everyone will feel it. And then we'll all feel a different pain: the models are great, but you can't use them, because the world doesn't have enough compute.

Alex:
Yes, but when we made our 2026 predictions on the show, in a discussion at the end of last year, Ranjan Roy was there, and he said 2026 would be the year when everyone is using agents. And my reaction at the time was that I wouldn't believe it until I saw it with my own eyes and actually started using agents myself.

Greg Brockman:
Well, isn't this the moment we've reached? What are you going to do with it now?

Alex:
I'll use it to build tools that help the people who work with me stay better synchronized: when a video goes online, what the thumbnail should be. I'm also going to connect some of the YouTube data so we can sort video performance by, for example, thumbnail. In a way, it's software I'm customizing for myself; if it were a traditional product, I probably wouldn't even pay for it.

I think that's what's interesting about this moment: software was built for mass production, and that's why there has always been so much of it that wasn't made for you. Perhaps the change AI brings is that it finally lets us engage with software in a more natural way.

Greg Brockman:
I think that's exactly the point. One thing I've been thinking about for a long time is that the way we build computers today actually pulls us into the digital world.

Think about how much time you spend scrolling on your phone. And think about how much time you've spent clicking all kinds of buttons and wiring this system up to that system. Why should you have to do that yourself? What AI really should do is bring the machine closer to you, make it more attuned to you, and understand what you want to do.

It's long been part of our popular culture: you talk directly to the computer, and it does things for you. Now that's becoming reality, something you can actually do. And how amazing that change is, you often have to try it yourself to understand. So I do feel we're at a very special moment.

Alex:
Then I wonder: why does AI look so bad to the public? YouGov polling, for example, shows that Americans who think AI will have a negative impact on society outnumber those who think it will have a positive impact three to one.

What do you think is behind this? Are you worried about AI's public image?

Greg Brockman:
I think there's one thing we really have to do: let the people of this country see why AI is good for them. It's not just about macroeconomics, not just the phrase "driving GDP growth"; it's about how it actually improves their lives.

And indeed, I hear very concrete stories every day. For example, there's a family whose child had been having headaches and other health problems, but an MRI was never approved. Then they used ChatGPT to research the symptoms and realized they could use it to make a stronger case to the insurance company. They did, and they found a tumor in the child's brain. Because they got the right information through ChatGPT, in the end the child was saved.

That's just one story. There are many stories like it: people whose lives have been profoundly improved by this technology, even saved by it. The key is that they genuinely formed a partnership with this technology in the real world.

But I don't think stories like that really get out. This is happening in many people's lives, yet somehow it hasn't become part of the mainstream narrative.

And I've also noticed that pop culture, especially the imagination carried over from the 1990s, has been very negative about AI, always emphasizing what might go wrong. But once people actually start using AI, they find it useful and helpful.

So I do care about one thing: we have not yet really succeeded in helping people understand why this wave of technology is improving their lives and fostering closer human connection.

This is a very important concern for me. And if you zoom out a bit and look at why AI matters so much, I think it will become an important source of economic power and national security. It's about a country's competitiveness. And other countries, like China, have an almost opposite attitude toward AI.

So yes, I think it's very important. We have to face it, and we have to really figure out how the benefits of this technology can be shared by everyone.

Alex:
But we are also at a time of great instability. People are worried about their jobs. Almost every time I talk to someone about AI, they ask, "How long can I keep my job?"

And then there are data centers: the public's perception of them is even worse than of AI itself. You can see that more people believe data centers have a negative impact on the environment, on household energy costs, and on the quality of life of nearby residents than believe they have a positive one.

So we're at a moment when good jobs are becoming harder to find, and people see data centers coming into their communities and feel they are neither environmentally friendly nor cost-effective, and that they lower the quality of life.

Are they wrong?

Greg Brockman:
I think there are a lot of misconceptions around data centers.

A typical example is water. If you actually look at our facility in Abilene, it's the largest, or at least one of the largest, supercomputers in the world, and its water consumption over an entire year is equivalent to what an ordinary household uses in a year. In other words, the water use is practically negligible.

From the outside, however, there is a great deal of misinformation, which leads people to believe these data centers will consume enormous amounts of water.

Electricity is similar. We have pledged to bear the costs ourselves and not pass the pressure of rising electricity prices on to residents. This matters, and similar commitments are now being made across the industry, because improving local communities really is important. When we build data centers, we actually go into those communities, learn what's happening on the ground, and ask what we can do to help. Data centers generate tax revenue and create jobs. They really do bring many benefits.

So I think it still comes down to how we show up, and that's a responsibility we take very seriously.

Alex:
Okay, but if you don't raise the cost of electricity, you still have to get the power from somewhere, which could mean more pollution. Isn't that a problem?

Greg Brockman:
I think there's actually a lot more nuance to it.

If you look at how the grid works today, you'll find there's actually a lot of "spare electricity": capacity that exists but isn't really being used. At the same time, the transmission system itself needs upgrading, and, importantly, the costs of those upgrades should be borne by us, not by ordinary ratepayers. In many places clean energy is available but underutilized and, to some extent, wasted.

So when a data center needs to come in, there's a real incentive to upgrade old, outdated grids. Such upgrades bring real benefits to communities. In North Dakota, for example, we've seen local data center construction help improve utility infrastructure, resulting in lower electricity prices for residents.

Alex:
Okay, last political question. You donated $25 million to MAGA Inc., a political action committee supporting Trump.

Greg Brockman:
You talked to Kara about this before.

Note: Kara Swisher is a well-known American tech journalist who has long covered Silicon Valley and internet companies, known for her sharp questioning style.

Alex:
Right. You said, "I will do anything that helps make this technology really work for everyone," and that it doesn't matter if that makes you a single-issue voter or a single-issue donor. But what I've been wondering is: for that single issue, shouldn't the North Star at the heart of any political action be "making this country stronger" itself?

That is, even if a candidate doesn't support 100 percent of what you're doing, shouldn't whether he can make this country stronger be an important criterion for political support? And if so, is that part of your contribution?

Greg Brockman:
The way I see it: the donation was a decision my wife and I made together. We have also contributed to super PACs of both parties.

I think this technology is coming very quickly. In the coming years, it will genuinely change everything and become the foundation of the entire economy. But it isn't universally welcomed right now. So we very much want to support politicians who truly want to embrace this technology and understand it.

Of course, at a larger level, the technology itself really is enhancing our country's capacity. In a sense I am a single-issue voter, because I think this is the area where I can make a unique contribution. In the end, though, this is an expression of support: as a nation, we should embrace this technology.

Core competencies for the future: not using AI, but managing AI

Alex:
If someone who's scared of AI were sitting in front of you right now, someone who thinks AI will take their job, destroy their community, make the world change too fast, what would you say to them?

Greg Brockman:
The thing I'd most like to say is: go try these tools yourself. Because only if you actually experience the AI that exists now will you really understand what it can do for you.

And we have seen so much opportunity, potential, and empowerment from this technology already. You just described what you can do with it, right? People who never had a website before can build one now; if you want to start a small business, you might have been intimidated by back-office processes and details, but now AI can help you with a lot of that.

So I think, for your own life, you should ask: can it help you manage your health? Can it help you take care of the people you love? Can it help you make money? Can it save you money? These are all realistic options.

I think it's always easier to see what changes than to see what you gain. But I think it's worth giving it a fair chance, and seriously understanding what sits on each side of the scale.

Alex:
This, by the way, is also a point rarely discussed in the polls. People who have only heard of AI but never really used it, or who have hardly used it, tend to be more negative. Once you get into heavy users, or even ordinary users, their perception of the technology is usually much more positive.

Greg Brockman:
For me, we've been thinking about this technology for years. And the way reality is unfolding now, as I see it, is more dramatic, more useful, and far more positive than we had imagined.

Alex:
Last question. If someone asks you, "How can I prepare for the future?", what would you say?

And the answer can't just be "use the tools." Because a friend of mine asked me, "I don't know what will happen to my job or to the world, so what should I do now?"

Greg Brockman:
I still think the first thing is to understand the technology. We have seen that the people who get the most out of it are often those who approach it with curiosity. They actually put it into their own workflow and push past the initial threshold: that feeling, facing a blank input box, of "what am I supposed to do with this?"

You need to develop a sense of agency: I can be a manager; I can set direction; I can assign tasks; I can supervise. Truly developing that capacity matters, and it will be a very foundational one.

We built this technology to help humanity, to foster more human connection, and to give people more time to do what they really want to do. So the question is: what do you want? What really matters is thinking that through and using the technology to achieve it.

Alex:
Exactly. Thank you so much for coming on the show.

Greg Brockman:
Thanks for the invitation.

Alex:
And thank you all for listening and watching. We'll see you in the next episode of Big Technology Podcast.
