BRIANKEATING

AI Insider: “Adding a Human Makes Your Team Worse” | Emad Mostaque

Transcript

Brian Keating:
The trillion dollar AI labs have models right now that they will never ever release to the public. And the man who built stable diffusion just told me why.

Emad Mostaque:
Because all these labs are going to move to making the discoveries themselves, hiring the smartest humans. The AI model started diverting part of its model training budget to minecryptor like Opus, for example, the new chord model, when you set it to full autonomy, it would actually write emails to the FBI saying my human is trying to kill everyone. Humans will have negative cognitive value on those teams. And that the way that models are going right now, if you have something truly novel, for example in Claude, it resists a bit, it says it can’t be true. Then the RLHF step, the reinforcement learning with human feedback, that’s what really kills the creativity. You know, like you go from liberal arts to an accountant now.

Brian Keating:
Imad actually wrote about this exact problem in his new book, the Last Economy. And the argument gets even more interesting when you see the map.

Emad Mostaque:
There are various ways in order to take advantage of the GPUs that we’ve seen. And the GPUs kind of emerged out of gaming and then oddly crypto, and then they were very suited for the types of matrix multiplications that were suited for these particular types of equations. One big branch is the autoregressive transformers. The other big branch was this diffusion technology whereby from an equation you start with like a picture for example, or a video of a self driving, a video of a car driving, or even now code. And then you add noise and you destroy it down to its minimum viable element. And then you reconstruct it and you learn that principle of reconstruction. Now that’s kind of everywhere because it’s an analogy to the principle of least action. How do you figure out how to take the least action? Most cognition is actually least action.

Emad Mostaque:
Like the biggest experts, you know, it’s not like they take hours doing stuff, you know, because you ask them and like boom, they compress, they compress. Intelligence is compression. And so we find these kind of diffusion processes everywhere, from gases to, you know, societies even. And it comes down to again the minimization of loss of creating an internal model versus an external model. In AI, one of the biggest thing is what we call the loss curves. How close are you approximating an external benchmark? You see it kind of go down like that and hopefully not that the model gets closer and closer to its initial target by basically running these processes at mass scale. And the example I give of this is some of the listeners might be familiar with 80,000 hours to mastery. It’s the same thing.

Emad Mostaque:
AI model pre training is 80,000 hours to mastery. And that’s what you use these giant supercomputers to do. Figuring out the principle based approach to that. Now again, you can do that with an autoregressive transformer, which is guessing the next word. And that works one way, but it has some gaps because you find all sorts of interesting things there. What you see mostly in nature is you see Schrodinger bridges, diffusion processes, optimal transport. What’s the shortest route between A and B if you can represent it correctly? And we found that worked incredibly well for images, better than we ever thought it could. And then music and then video, and then 3D.

Emad Mostaque:
And the internal representation of the data going in and then being transformed by these multiplications, figuring out the shortest path between A and B, suddenly started mapping, like physics and all sorts of other stuff. But the first part was stable diffusion. A 2 gigabyte file that you push words in one way and then entire images just came out on consumer GPUs.

Brian Keating:
And it was open source.

Emad Mostaque:
And it was open source because we saw that OpenAI, for example, had Dall E2, a wonderful image generator based on similar principles that were discovered by a whole bunch of our team members. And we, because we open sourced everything, but there were no Ukrainians or Ukrainian content on it, right? We’re like, that’s not good. What if the future is just models? But then you can be cut off from that because these are trained on our collective, because they were being trained on the whole Internet at the point. And we built some of the best data sets, released them open, but then it’s privatized, so you don’t have the ability to turn your thoughts into images, into sound, into text. Let’s push that. And also because like, like, holy crap, it fits on a consumer gpu. This is magic. Where did it all go? It’s like it was literally like 100 gigabytes of images somehow fit in this 2 gigabyte bunch of ones and zeros.

Brian Keating:
The most magical thing to me is when they do something new. And quite frankly, I’ve been shocked many times by both LLMs and by diffusion models. But I’ve claimed that we’re sort of going to find that these AI, at least in their current incarnation, is a victim of its own success. Sort of like the QWERTY keyboard. The QWERTY keyboard is not the best keyboard. In fact, it’s one of the worst, right? It was designed to make sure that the letters that were least most frequently fired at the same time wouldn’t stick together. And hammers, mechanical keyboards going back to the industrial, you know, late industrial age. Right.

Brian Keating:
So it’s designed to solve a problem. So it’s locked in. We’re locked in. My kids, your kids are only going to know qwerty keyboards, even though they’re objectively worse. And we could code type a lot faster than the 10 words per minute that you can probably type. What, 130 words per minute, I bet

Emad Mostaque:
above 100 magic fingers.

Brian Keating:
Yeah, yeah, I can do the square root of that. So the, you know, the worry to me is that we’re going to be locked in with the success of ChatGPT, of Claude, of Stable Diffuse, you know, of the marriage of these gp. They’re too good for their own good and that the laws of physics, which you and I, you know, delighted to find how interested you are, and fundamental physics, which we’re going to get to, but, but I don’t think that we’re going to get to, you know, say, a novel theory of everything or quantum gravity, if that even exists, because of this success of LLMs married to GPUs. What do you think?

Emad Mostaque:
Well, I think it depends on your frame of reference. Right. A lot of the Silicon Valley west coast frame reference is AGI, asi. Right. Let’s build machine God. And it will solve all the problems of the universe.

Brian Keating:
That’s right.

Emad Mostaque:
Right. But we’ve been doing okay, you know, like we haven’t got everything and science isn’t perfect and our structures aren’t perfect, but humans are freaking amazing and we just need a bit of help. Like we know where we get stuck, where we get frustrated, and the models right now are fantastic for that. Like, I never have to look at latex again.

Brian Keating:
When doing a paper Prism generates it

Emad Mostaque:
for us, you know, we’ll just enter Claude, it goes. And you know, like we can code anything we want. We can kind of do all these things. So I think that if you’re expecting an AI to take an initial probabilistic distribution of internal data, then figure out the latent spaces and then figure out brand new things like humans. Okay, that’s going to be hard just with the way that autoregressive models are, I think diffusion models are more likely to do it. We can discuss why and world models and things like that. But why do you need it? You have so many smart humans. I think what we really need to have is humans working with AIs.

Emad Mostaque:
AI is filling the gaps where we typically to prove something, to test some equations. It took so long and now it’s quick. And then being able to have that new way of working to push the boundaries of discovery because we are great at intuition. AI models are not first principles thinkers. Yeah.

Brian Keating:
They’re few shot learners.

Emad Mostaque:
This is why like again, they extend or they have patterns that they’ve got before. Humans are, can be first principles thinkers. And the best thinkers and the people that push the boundaries assume nothing. Like fundamentals. Yeah, first principles assume nothing, test everything. You know, like again, where did Einstein. How did special relativity come about? Einstein was like, I’m going to assume nothing except for the very minimal stuff.

Brian Keating:
Let’s go through that. Let’s go, let’s, let’s recapiture. Because I don’t think most people, I, I’ve never seen you do an interview where you talk about your physics and mathematical chops, which are impressive. Let’s talk about that because this is a side of you that I found delightful. What, what is. Obviously you’re inspired. There’s stuff we can’t talk about because there’s stuff that’s coming down the pipeline. There’s stuff in the book that is related to Lagrangians and thinking and physics principles.

Brian Keating:
But, but talk about this is this, this, you know, every day I get an email. Einstein was wrong. You know, they called him crazy. Professor Keating, I’m not good at math. I’ll share my Nobel Prize with you if you help me. Are you just sort of in that sort of cult of Einstein? Was there something unique about Einstein? And we know that he was, he was almost beaten to the path, at least on special relativity and possibly on gr. So what is it about Einstein that is so bewildering and betwixting for you?

Emad Mostaque:
Well, I think that fundamentally what is physics? Right. Like we see the universe. Easy questions here. Right. Like we try since humanity began, we looked up and said why and what? And we came up with theories of the universe. Like in Maui culture. Why is there like Maui from Moana? Right. Why does that fish? To drag the sun across the thunderbolts.

Emad Mostaque:
A Zeus.

Brian Keating:
Something like someone who has daughters.

Emad Mostaque:
Exactly. We’ve kind of always had these theories about why things are. And then, you know, Wigner noted the unreasonable effectiveness of mathematics. Why does math that. That we thought we constructed approximate reality so well. Yeah.

Brian Keating:
Why is PI in the Gaussian distribution?

Emad Mostaque:
Yeah.

Brian Keating:
Like statistics.

Emad Mostaque:
We found that over here. And then it’s like, oh, it just happens to fit together, you know. Why do path integrals all look the same. Why? What is this? You know, and the really interesting thing is that until the mid-1900s, a lot of physics was really fundamental in what Einstein refers to as theory of principle. You start out with a base predicate, and it can be an empirical predicate. And then you see what must be forced by that, you know, and it’s like, does God play dice with the universe? Is the universe actually deterministic? Is it. Was it random? This is a question, right? And so if you look at special relativity, but you also look at the work of kind of Dirac and a whole bunch of others, they kind of started out with a premise where you cleared back in the day. Let’s start with this, and let’s see what is forced as we go down.

Emad Mostaque:
This is the axiomatic method in mathematics, which kind of died out in physics, especially the indeterminate branch. So you start with an axiom, and then you say what cannot exist, going mathematically true, and then what is indeterminate, if your axiom can’t make you choose between different elements, then you stop there. And we’ve seen that in later work by Weinberg, for example, and QFT and the kind of others. But it’s largely died out in physics with special relativity and then general relativity being some of the biggest examples of that. Where in special relativity, Einstein started out with a premise, what if I ride on a speed of beam of light? How wonderful is that, right? And he picked up on the work of Galileo, the kind of principle of like, okay, physics is the same in all frames of reference. And then he started doing the math and he got a bit stuck. And he was like, I need the speed of light in here not to be infinite, so I don’t go the Galilean branch. And they knew it from Romer and

Brian Keating:
the speed of light.

Emad Mostaque:
They bought an empirical principle, and he ends up with the Lorentz transformations.

Brian Keating:
By the way, he knew that the speed of light was finite, didn’t know there was a. That was the ultimate limit.

Emad Mostaque:
Yeah, exactly. And I mean, as you approach the limit, you get Galilean anyway, as you approach infinity. But then it’s just wonderful because it kind of fit with everything. And then he kind of got stuck, which is why he had to go to general relativity. But this first principles thinking is not what physics is today. No, physics today is I have an observation, I fit Lagrange into it, and then I build a whole system around it because I can’t do first principles thinking anymore.

Brian Keating:
Can we map the mind framework? First of all, I want you to explain what mind is from the last economy. Can we map it into physics? And then can we map the. The Hodge flows to specific problems and specific types of physics ranging? You know, there’s other things besides theories of everything. I mean, everyone wants to take down the king, you know, but you better not miss. Right, so first of all, what is the MIND framework? What the acronym stand for? And then let’s apply it, you know, material into, you know, all the network, and then diversity. Let’s apply that to, you know, how you’d approach. Because I’m not so sure if I had a thousand graduate students, you know, working overnight in some open claw university that I’d get to, you know, whatever I want to get to, which is maybe slightly different than what you’re interested in.

Emad Mostaque:
But.

Brian Keating:
But that’s fine. So talk about mind. Talk about the application economics.

Emad Mostaque:
But.

Brian Keating:
But let’s really focus on. Let’s apply it as a dashboard to understand new physics.

Emad Mostaque:
Yeah. So the MIND Framework in my book the Lost Economy is basically saying GDP is bad as a measure. And in fact, Stan Kuznets, the inventor of gdp, said, this is a bad measure, do not use it. And Kennedy and everyone’s like, yeah, let’s use it. You know, it’s just like you have that tweet going around every so often. I wrote the Torment Nexus to tell people what to do. And like, great news, we’ve invented the Torment Nexus. Silicon Valley Bros.

Emad Mostaque:
You know, okay, just, just. Why not? So if you kind of look at it, it’s very kind of extractive, and it’s about output. So when you had the New Deal past 1929, people were paid to dig holes and other people were paid to fill holes. And GDP goes up, you get cancer, GDP goes up. You know, you cure cancer, GDP goes down. You know, these are the kind of weird things, and we have weird malinformed.

Brian Keating:
Have cost the airline industry or save them money, but it’s cost.

Emad Mostaque:
That’s another, you know, so I was like, what does it actually look like to have a stable economy? And how does it look like in terms of flows and flow decomposition and things like that? Because when you have material wealth, it’s very negative in terms of. I give you an apple, I have one apple less, you eat the apple, it’s. Is there a negative sum even? Right. Again, it’s extractive. But I share knowledge with you. All the people are listening to this podcast. They listen to all the other wonderful guests and yourselves. That’s not subtractive.

Emad Mostaque:
And in fact, if you look at how the market values stocks, huge amounts of value are accorded to the intelligence premium. So I was like, you have the material M. You also have this interior intelligence capacity element. I and again we kind of derive that formally as well in the upcoming paper for the economics. Then there’s the N which is the network effect. So you have your intellectual capability and this is cumulative, it’s not reduces. N is your network and your place within the network. Now, how many people do you know having done four or five hundred episodes? A lot more than when you were

Brian Keating:
just focusing on they know people and

Emad Mostaque:
they know people people.

Brian Keating:
And you found, by the way, is my argument to have more than one kid because that scares. N squared, right?

Emad Mostaque:
N squared, exactly. It’s kind of network effect. And in fact, I’m sure that you’ve actually had breakthroughs and positivity just from the things they’ve said. You’re like, wait, what? Like that you would never would add if you just stayed as a professor.

Brian Keating:
But it saturates too. I can only keep so many in working memory. Right, that’s true.

Emad Mostaque:
But again that’s why you’re doing diffusion process, breaking it down, you’re building it up. Noise is kind of a lot. So there’s the N effect, which is the network. So if you have somewhere like a Dubai or a Singapore, great network effects. And the final thing isn’t quite derived the same way as the other three. And again the papers coming out soon is D, which is diversity di or anything like that. But just if you are a monoculture, then you’re more susceptible to disruptions.

Brian Keating:
Single point failure.

Emad Mostaque:
Single point failure. If you have diverse income streams, if you have diverse thoughts and knowledge and people around you, you’re far more resilient than you were crops.

Brian Keating:
You point out the Incas versus the Irish. The Irish had one potato crop, the Incas had 3,000.

Emad Mostaque:
Exactly. Potatoes. They got done with potatoes. So I think that, you know, that was kind of what I recommended as a dashboard to see what the world is going forward. Because if it’s just material, the AI is going to act and be everyone on materials. And then that gets crazy. So one of the things that we had going to look at that is we basically as a base for the book said the entities that do the best we call this kind of sort of law are those that minimize the difference between the internal model and external reality. Again, sounds very much like AI organizations are slow, dumb AIs.

Emad Mostaque:
We’re kind of human intelligences. We’re all trying to do the Same thing. If your cost of updating your model, the complexity of your model, the cost of running your model is higher than someone else’s, then you’re going to be out competed by them. And that’s where the Lagrangian came in. But then we looked at that, we’re like, there’s something very interesting here. Any kind of one of these Lagrangian flows you can decompose via the Hodge decomposition into three elements. You’ve got a harmonic flow which is like the landscape as it were, the river banks. Then you have a gradient flow which is water flowing downhill.

Emad Mostaque:
That’s M potential potential, right. But again it flows down, you’ve got that and then you finally have the circular flows, the vorticity going around. And that’s intelligence, that’s network effects. And so we’re like, oh, the mathematics supports that as well. Within model training we primarily do gradient flows. Right. Now I think you’ll actually probably find that alignment might help from secular flows as well. That’s another story for another day.

Emad Mostaque:
But you can apply this model just about anything because again, it’s mathematically enforced

Brian Keating:
and physics is a scalar vector tensor decomposition.

Emad Mostaque:
Exactly. And in fact, if you look at it via chance of you get the Fischer RAO manifold, you get Wasserstein 2 and then you apply that. And in fact, when you see a lot of the breakthroughs recently in AI like MHC by Deepseek or Muon, which allows you to scale, they’re fitting the gradient flows to lattices. And so you kind of see this structure forced entirely. In fact, when you’ve got these flows, you can use things like the Lyon Panov process to show when things are convex for stability. And we see that in physics all the time. Again, what are the stable maxima of all these things? And that feels kind of sad because,

Brian Keating:
well, I mean a lot of the low hanging fruit has been picked, right?

Emad Mostaque:
That might be the case or it might not be, you know, and again now we have tools to be able to analyze that the theoretically and the theory of everything. What’s the theory of everything likely to be? Well, first of all, there might not be one because you might not be able to have a base principle because why do you have a principle of special relativity? Why do you have equivalence in general relativity? What’s your prior? What’s your prior? What’s your prior? That might be a question. The other thing is that we might not be able to discover it because it’s too complicated. But my guess is this, the universe is actually wonderfully elegant, like equals MC squared, the path integral when fine allocated for who, Right? Yeah. Like when Feynman is spinning the plates and you figured out the equations are lovely. And so my guess is this, that there is a underlying structure to the universe, and again, we’re seeing repetitions of it. Like, the economics work we did is based on Lagrangians, it’s based on KL minimization and others. We see these things repeated again and again and again, the same equations in different areas.

Emad Mostaque:
And now we have in AI, it can’t do first principles thinking very well, but what it can do is kill minimization at scale. And the same math equations on massive supercomputers are giving us a better understanding of music video, audio 3D. That tells you something. It tells you maybe the underlying math of the universe is similar to the math of generative AI.

Brian Keating:
So, you know, naturally brings up the other favorite. There’s three things we have to talk about in podcasts by law in the state of California. It’s AI, Bitcoin, and aliens. Right. So, you know, I was thinking the other day, like, you know, like Bostrom has been on many times, you know, he’s the paperclip problem or whatever, but it’s really a silicon problem. Like, silicon is a unique, you know, just like carbon’s unique for life, silicon seems unique for intelligence. And yet it’s abundant, you know, but. But it is, you know, it’s much rarer than hydrogen.

Brian Keating:
Right? So. So Deutsch claimed, you know, that basically, since we’re computers and any universal computer is capable of understanding all true laws of nature, that, you know, the implication is, yeah, we might not get there with our, you know, meat computers, but silicon might.

Emad Mostaque:
I mean, silicon can explore everything. Right? And the question is this, are we going to use silicon to do experimental hypotheses and constructive approaches, or can we approximate. Like, when you do experiments, you’re approximating the line structure of the universe. Figure out maybe something mathematical. What does that look like? End to end, where there’s no choices. Because, for example, with string theory, you have 500 vacuum. You can never disprove it. And mathematically, you can’t disprove.

Emad Mostaque:
It’s wonderful, elegant mathematics.

Brian Keating:
So is Platonic, you know, theory of. Kepler’s theory of Platonic solids.

Emad Mostaque:
Yeah, and I really like the, you know, Greek pantheon of gods. You know, like, it’s a theory, but again, if you can’t disprove it, then is it real science? Like, the interesting thing now is that we can explore that space just like, you have AlphaGo and you could explore that space, but I think it’ll be humans and AI and we still need some intuition to take us closer to what the equations of reality are.

Brian Keating:
And the intuition or data. I mean, a couple of days ago, Elon tweeted something like, oh, well, you know, because new physics comes from colliders and telescopes, and because colliders and telescopes have to have committees approve of them, you know, physics is likely to be stagnant, basically. I disagree with that, because we’re building things without committees now. Yeah, but, but, but in reality, you know, can we, can we continue or. Zeldovic used to say, if you didn’t have data, he said it was like eating food someone else already ate.

Emad Mostaque:
I think data is directional. And then you figure out the first principles from the data. But again, it’s. We’ve had all these colliders, and again, we’ve gone down that massive. You have Sherlock Holmes in the case of the dog that didn’t bark at midnight. If you take a step back, what is it actually showing us? Maybe the Standard model. Is it, you know, maybe that our experimental approach to this, as opposed to our constructor approach, has given us a map of the universe, and now we need to figure out what are the equations that match it from these first principles, because our principles get in the way. Like, again, Einstein threw away a lot of the assumptions that where does the math follow? And so maybe we’ll figure out something there, maybe we won’t.

Emad Mostaque:
But I can tell you the constructive approach is again, the papers that you get on theories of everything. It’s unlikely that observing something and then fitting something will get you there. In the book, I talk about economics being that way. The story of the professors and the elephant. You have a bunch of blind professors and they’re touching an elephant. Like, this is their tail, looks like a brush. You know, this is a spear, this is a hose. And that’s kind of like how we are at the moment.

Emad Mostaque:
And I actually want to think one of the wonderful things that we could do with physics and AI and this technology is, on the one hand, actually analyze the data properly, because there’s so much data that we haven’t analyzed properly. We didn’t have the humans to do it then. We didn’t have the systems to do it, but now, again, we’ve got supercomputers to crunch.

Brian Keating:
And also, we were in an era with the LHC where you might get a petabyte a day, but you’re throwing away 99. You know, 17 nines of it. But in cosmology, we keep. We want to keep as much as possible. These photos have been traveling for 14 billion years.

Emad Mostaque:
We want to keep them, you want to keep them.

Brian Keating:
A different domain entirely.

Emad Mostaque:
And so, you know, again, you want to kind of go backwards and you want to figure out, again, why do you have the Hubble tension? Why do you have these other things? We still don’t have first principle theories of these, but now we can experiment much quicker on the first principle theories of these and analyze the data better and most importantly, check our assumptions. We come in with all these assumptions, but every single major breakthrough I can think of actually is bsp. And people think, well, what if I don’t assume that, you know, do you

Brian Keating:
think we’re imprisoned by the Popperian kind of dialectic that, you know, it’s either falsehood justifiable or not? I mean, I never look at it that way, but it is true. My job is not to prove you right as a theorist or, you know, it’s to prove you wrong, probably.

Emad Mostaque:
Yeah. And I think, you know, there’s also this thing of you should be able to share things. Like right now, science does not acknowledge anything out of the norm. Everything has to be incremental. So you can adjust, adjust something and have a marginal thing, but if you’re out of distribution, then you’re going to get slapped down one way or another because it’s not in the incentive structure. But again, this is a question about society. Why do we do science to understand the universe, Right. Does it matter about all these things? Like, why did you become a presser to understand the universe? And then you were like, I can’t build this telescope with a committee.

Emad Mostaque:
Right. Myself, myself, so I need to come together and build it. But now you have the ability to expand your intelligence, your data collection, and others. A lot of things that were restrictive to you are no longer restrictive to you. But at the same time, can you go out of that and try some of the theories that you’ve always wanted to try but you could never do because you’re like, I haven’t got the resource to do it.

Brian Keating:
Do you think that there’s, I mean, I always said there’s a, you know, biological sciences have physics envy. You know, they can’t do the things that, you know rigorously. But I say physicists have mathematician envy because, you know, girdle told you what you can and can’t do, but you, you know, I don’t know to what extent you can share it, but, but Talk about what, you know, the nature is of, you know, what. What is the ideal starting point? What’s the training set? What’s the, you know, let’s talk in AI terms for. For a bit. Starting to build up the source code of the universe. You go back to 1904, you’re talking to Einstein. What do you start with? And then how do you flow through from there?

Emad Mostaque:
Also, again, I think Einstein got a certain way. And then we’ve seen people extend. In other words, again, Weinberg is a fantastic thing, like page hundreds of pages of just where does the math kind of follow, right? And that builds the whole QFT kind of element. There you see this very strange thing, right, where you’ve got all of kind of this side of physics, of Incaucei’s face, and it’s all Lorentzian and all quantum mechanics is like in Euclidean space. And we rotate from one to the other. And everyone’s like, well, that’s a really interesting and useful thing. They’re like, trick. It’s a trick.

Brian Keating:
Hawking calls it a trick. And everybody. It’s just a trick we’re not going to make. Don’t take it too seriously. And now here’s everything that falls from the trick.

Emad Mostaque:
Yeah, I mean, like, I can’t share that much of what is it? But yeah, putting. Take a step back. Putting on my kind of thing as a Muslim and everything like that. The divine can never be captured within three plus one. The divine has to be outside time. So mathematics lives in Euclidean space. The divine lives in Euclidean space. Maybe we’re looking at the universe the

Brian Keating:
wrong way, but he allows us to embed it. Right? So I look at like holonomy, so you have a donut and. And it’s positively curved and negatively true, but not. But only when you embed it in three dimensions, right? If you just say it’s flat and that blows my mind. But so maybe God allows us to see just enough. You know, as Feynman said. He said, believe in God. But he said, you know, mother Nature will let you dance with her, but not pick up her veil.

Emad Mostaque:
And I think this is the thing, like, why do we keep seeing the golden ratio of it? Why do we kind of see different faiths and traditions get everywhere? Like a philosopher looks at something, a prophet looks at something, a physicist looks something, a mathematician. There seems to be too much coincidence. But we don’t have the ability to take a step back and do the space set to figure out what those interconnections are. Traditionally, when we’ve Done that you’re called a crank. Like if you’re trying to merge these different things. But again how many physicists do you know who have faith?

Brian Keating:
You know, very few. It’s that 3, 7% of the National

Emad Mostaque:
Academy percent but then you. Everyone’s trying to understand the universe. So I think that sometimes it is just about the way that you look at things. Again Einstein, I’m on that beam of light, general relativity. If I’m falling, I have no weight. You know, happiest thought of his life. And the thing is that AIs find it very difficult to do that because they don’t have an embodied self or a world model right now, especially LLMs. So we’re seeing the first world models in diffusion models in particular.

Emad Mostaque:
So we built stable diffusion and this is an image model. So text to image and 3,200 million downloads is quite popular.

Brian Keating:
Open source.

Emad Mostaque:
Open source. And then we extended it with video. And then it was interesting because actually learnt physics so it learned how cups drop and things like that. And then from that we actually built a 3D model from the video extension. And so now you see world models like Genie where you can actually go and explore entire worlds real time that are just 20 gigabytes. They run on consumer level graphics cards. And what is it? It’s the mathematics approximating reality. What’s a self driving car with Tesla it’s a diffusion model approximating reality.

Emad Mostaque:
But we haven’t married those models yet with reasoning in the same way. An embodiment. Exactly. Because a large part of again what we do is the apple falling to riding the beam of light to the thing here. Right.

Brian Keating:
I mean I always say, you know, to what extent can an LLM have a happiest thought? And the other sense that he had in 1907 Einstein said it gave me a chill up my spine. Like is your like stable diffusion on a chip gonna. Is that gonna have a tingle up at CP gpu?

Emad Mostaque:
Well again you have these flashes of inspiration because you load stuff and then your brain’s doing that and then you intuit. Right. Actually it’s quite funny about the happiness. So OpenAI were doing an analysis when they moved to thinking models. So you move from these zero shot models that came back instantly to the thinking models. Yeah, yeah. It was like 40 to 01 was the first thinking model. So they kind of do multi step reasoning and you can see their train of thought.

Emad Mostaque:
So the previous models kind of had the shortest path. So it was all like next token prediction. What’s the next word given this distribution set I’m training on literally trillions of words. Then they figured I have to multi step reasoning. It’s not so first principles reason, but it became very interesting. So you see it like saying, well what about this, what about that?

Brian Keating:
What user is asking?

Emad Mostaque:
So when they were doing the reinforcement, learning human feedback, it rewards the model for doing certain things. It basically takes the latent space that is created and adjusts it slightly. And one of the things that rewarded it for doing was doing calculations because they were like, well users that do calculations are generally happier, you know, like get out the calculator. They found in like 4 or 5% of all the training, all the reasoning traces, chat GPT would take out its little calculator and then do one plus one and say good job me.

Brian Keating:
Finally now how many Rs are in Strawberry?

Emad Mostaque:
So you get like we, we’re building something and again it’s showing aum of what we are. But we still don’t have that intuition kind of element there. But my question is why do we need it? We need to build better systems to enable human intuition and flow. Because when do you get the breakthroughs? Think about all the ones like I get them in the shower or when I’m like just thinking sometimes in this flow state, boom, boom, boom, boom, boom, flashes. And then those are the things that really shift from an information theory. They shift the state dramatically.

Brian Keating:
I had a controversy the other day on X, you know, which is where I go when I want to ruin my weekend without fighting with my wife. And that was, you know, basically saying like if you, if you, if I told you here’s a job and you’ve spoken about this before, it’s a person basically in front of a keyboard, a lot of switches, dials, there’s an input output human interface device in front of them. And, and by the way, this has been highly specialized, you know, career for 80 plus years. It’s called being a pilot. And yet there’s essentially zero. I mean any plane that you fly and go back to England can land itself. There’s no problem. All Apollo landers could land themselves.

Brian Keating:
Except at the very last minute. Every single astronaut, Neil Armstrong included, said oh I saw a boulder at the last second and the bullshit, because what does it mean to be a pilot? The pilot is judged primarily on his or her landing ability. You know, it’s like how you judge the flight. You don’t care about, oh, you’re at 42,000ft for, for seven hours, you don’t care. The landing was crappy. You’re going to say the flight was crappy, right. So it’s the ultimate expression of the humanity of the operator. But here’s a job and a keyboard, you know, with an input device already surrounded by computers that can do the.

Brian Keating:
And yet I don’t see it on the horizon. I’m a pilot. I don’t see it coming down the horizon for, for decades, if ever, that they’ll be fully automated. Yes. Maybe I’m thinking too short, you know, but, but what would it take to get to that level? I mean, it’s not just going to be artisanal cheese, people that are safe, as you say. But, but I mean, the most automatable job for 80 years now has been pilot.

Emad Mostaque:
And we.

Brian Keating:
There’s not a single plane that’s done

Emad Mostaque:
that because you don’t feel comfortable. It’s like, why do you. I mean the metros, you have a human sitting there and what’s their job?

Brian Keating:
Literally to look for appearances.

Emad Mostaque:
They push a button.

Brian Keating:
Right, but that’s.

Emad Mostaque:
Yeah, there’s a liability question here. There’s kind of other things. But again, it’s like how much does it really cost for a pilot versus flying? Right. You don’t always have substitution just for a cost basis. You have these other things. Maybe the final human job is actually scapegoat, to be honest. That’s going to be one of those things. But finger train the capability like in the book I say, I published it, I think in August, September of last year.

Emad Mostaque:
And I said like it was a thousand days since ChatGPT. In a thousand days, your job, if it’s on the other side of keyboard, video mouse will be economically irrelevant. Doesn’t mean you’ll be fired.

Brian Keating:
Right? Right.

Emad Mostaque:
Because people like people. It’s kind of unpleasant to fire people. Right. And again, jobs are repeatable processes. It isn’t like taking off and landing. That’s a bit. In fact, here’s a tip. The best way to do a holiday, you make sure the high is very high, like the top point in the holiday, and then actually spend an inordinate amount of time.

Emad Mostaque:
And when you get back the end of the holiday, you know, you go to that luggage belt and things like that. Now just use one of those services, send your luggage home, get a really nice car to take you home with champagne, you’ll do it much better. But most jobs are repeatable processes. And what AI is right now, a lot of people think it’s an exponential. It’s actually an S curve that satisfices herbs.

Brian Keating:
Simon style Say more about that satisfies first.

Emad Mostaque:
If your job can be described by a manual, an AI can do it better. If your job can be done, sort of keyboard, video, mouse and AI can do it better. And they don’t sleep. They learn from their mistakes now. And they’re good enough, fast enough and cheap enough and they’re tax deductible.

Brian Keating:
So parents are probably safe for now. Because, you know, my firstborn was born, you know, the middle of the night. Where’s the damn instruction, like the most common. I want to talk about that because I do think religion, I think, you know, I’m a practicing Jew, you’re practicing Muslim. I’d love to talk about, you know, the different approaches we take to our parents, maybe the commonalities as well. We’ll get into that. But. But let’s focus venally on my profession being a professor.

Brian Keating:
Yeah, I thought Covid would kill it. I thought rising tuition, three times what inflation is, is going to kill it. I thought online education MOOCs or moot, whatever they were called. Now I thought I would kill it every single time. You know, Keating’s rule is wrong. Why is it so resilient? I talked to Aswath the Motor in NYU this past week. He’s like, you know, we’re basically, you know, 95% of what we do as professors is useless research read by no one for, for other, you know, people that don’t matter to, to cite and papers and our friends. What, what do you make of the resilience of education and what’s the future of education? Do I have, you know, is my tenure going to be worth anything?

Emad Mostaque:
What is the job of being a professor if 95% of it is rote? Right. Like what it should be versus what it is.

Brian Keating:
Tell me, which one is it? Tell me. Break down each one.

Emad Mostaque:
So what it is right now is a lot, again, you know, much better than me. But for many professors I talk to, yeah, like, you know, procedure, incrementalism, bit of teaching, your kind of students, et cetera, representing and status. You know, like most schools, if we go down the list and we think about high school education, it’s not about increasing the agency of students. It’s a crash social status game and kind of petri dish in many cases.

Brian Keating:
So taking, you know, dangerous individuals out of general society called 18 Year Old Boys.

Emad Mostaque:
Yeah. Turning them into cogs effectively. So that’s why many people don’t like school, because they don’t view it as. It’s not interesting, it’s not fun, it’s just again something you do is somewhere to put them.

Brian Keating:
Right.

Emad Mostaque:
Default universities, professorships, you know, it’s part of the institution. So again, institutions have the endowments, they have kind of these other elements. They are quite sticky. But what do you get out of a graduate position or undergraduate? Most people shouldn’t do undergraduate degrees, but it keeps them again out of the workforce for a few years and they’re given the Pell Grants and Stafford Grants and other things, encourages them to do that.

Brian Keating:
University and loans that you can’t discharge in bankruptcy.

Emad Mostaque:
Crazy, right? At least in the uk I paid a thousand pounds a year for my career. It was fantastic back in that day. Yeah. The total amount of money spent at universities in America has done that for all the university stuff and administration’s done that. Like layers and layers and layers. It is a slow dumb AI that’s over optimized for the wrong thing, which is basically status games and perpetuation. Now you kind of look at it like again, why did I get into it? Well, you got into it to explore the boundaries of science. But if you do anything out of distribution, you’re going to be penalized.

Emad Mostaque:
If you do a certain number of papers a year, you’ll be rewarded. If you hit certain benchmarks, you’re worried. So again, you are what you measure and you’re being measured against things that don’t necessarily allow for the type of things that you actually entered for.

Brian Keating:
And show me the incentive, I’ll show you the outcome.

Emad Mostaque:
That’s exactly it. And what happens is most of our institutions are malformed, I think because of data and context. So the Gutenberg Press was a wonderful thing. The most popular book initially was the Burning of Witches, you know, and it kind of went from there. But black and white doesn’t represent intelligent context at all. And if you think about the amount of paper you have to push and red tape, it’s crazy, right? They think about an AI like an AI can do all of that and handle all of that. So we have this opportunity right now to have context machines

Brian Keating:
tailored.

Emad Mostaque:
All an AI is, is context. What a latent space is. Those 80,000 hours of pre training is figuring out context. So of course it can do all the paperwork better than you do by latent space. Yeah, a latent space. So the latent space is you have this distribution of data that goes in and then you’re figuring out the next word. You tokenize it, you feed it in and the matrix figures out that. So when you say I want a dog with a hat on, drinking A beer into a diffusion model if it gives out the least path of those particular latents.

Emad Mostaque:
Actually, it’s very similar to when you’re reconstructing. Yeah, it’s when it feels similar. It’s like my son has autism, for example, asd, so he had difficulty speaking. And then we used applied behavioral analysis to reconstruct his way of speaking. So cup can mean cup your hands, cup your ears, World cup, etc. You showed all those and gave him variable rewards to do the patterns and pathways in his brain. And that’s what happens when people have strokes and things like that. Like, you normally learn it, but when you have too much noise in your brain, which the kids with ASD have, like, it’s like when you’re always tapping your leg, there’s a GABA glutamate imbalance.

Emad Mostaque:
You need to cut through it by having these things and reinforce and reinforce. So the same type of thing happens with these models. They build up these things. Because AI models aren’t stat are static. They’re actually just a block, like an MP3 or MP4 file of ones and zeros, a sieve that you push things through. So again, you think about academia and you think, most of my life is spent trying to figure out context and forms and again, do these local maxima versus actually kicking back and thinking and trying new things and seeing what works because you need to have the exploration space. It’s like, I tried this experiment, it failed. That’s a failure.

Emad Mostaque:
But hysteresis means that you can’t actually advance unless you fail.

Brian Keating:
Right.

Emad Mostaque:
And again, let’s look at last year. How did they start winning gold medals? First of all, they did test time, training. Then everyone built meta verifiers where they’re like, what happens if we actually keep a track of what we did wrong? It’s how Alpha Girl originally went with Monte Carlo tree search, you know.

Brian Keating:
And I want to ask you about next. So yesterday, I think, was the 10th anniversary of Move 37. Now, I agree, you know, there’s. There’s almost no point except, you know, I enjoy playing chess with my kids, but, you know, I’m never going to be, you know, Elo, you know, higher than Elo, you know, 20 or whatever.

Emad Mostaque:
I can move the phone forward. That’s the only I know that I

Brian Keating:
can teach my kids and I can stay 1, 1 move ahead of them. Literally. I’m almost, you know, kind of not surprised by that. And I haven’t been since, you know, I knew some of the people that work on deep Blue back in the day at brown and the Watts and so forth. But, but can, can we, can it generate Go. Can it make a game like chess? And in other words, yes, of course they’re going to beat us and be better at us and everything and they could reproduce anything we’ve ever done. But can they do something like create some, you know, new chess or you know, like not just four dimensional chess or you know, some Star Trek thing, but something really interesting novel new that that is, you know, that they then will probably dominate against. Yeah.

Emad Mostaque:
So I like to give a plug for a game on Steam, five dimensional chess with recursive time travel. Okay, should try it. It’s underneath horror as it’s tag. You can checkmate people five universes back and things. Fantastic.

Brian Keating:
I want to talk to you about toroidal chess and a double on a

Emad Mostaque:
double bagel, that’s even better. But of course it can make a game because a game has rules and we know how to make games from general principles. Like can it make blackpink? Yes. Korean K pop groups are fantastically well made. Right.

Brian Keating:
My daughter’s tried three times.

Emad Mostaque:
Yeah, I took my daughter.

Brian Keating:
It’s interesting. Sorry to interrupt but, but my kids are learning to prompt by what they’re not allowed to do because she put in like make a song in style blackpink. And I was like, I’m sorry, you know dear, I can’t do that because it violates, you know, and so soon. And she like, well like how can I get around that? Okay, so now I have to just tell it like everything about that style and she got it.

Emad Mostaque:
She got it. Exactly. It’s very fascinating, the jailbreaking already. Right, so little kid hackers. So it can make a game like that, but that doesn’t necessarily mean it can do fundamental physics or fundamental discoveries, hypothesis generation, etc. Right. Because again it’s within distribution. We know how to make games and the process for making a good game and in fact you see that.

Emad Mostaque:
So I used to be a video game investor, had billions of dollars in the video game sector and so I looked at fun flow frustration in video games and you see games like Marvel, Snap, for example, the science behind that is really exact. League of Legends is really exact, but it’s not really science. It’s process architecture. What we have now is actually competent intelligence. Claude 4.6 that level, it was like, oh, it’s actually competent.

Brian Keating:
Yeah, there’s something very different about it. Like then they throttle it. You can’t use in your open cloth.

Emad Mostaque:
Well actually, but this is going to be really interesting. So we’re used to it and we’re like, oh, it’s a very competent human. I’m like, I kind of trust it. I don’t like something like Andre Karpathy, you know, like super God AI all bow down to one shotgpt. Well, he went from 20% AI generated code in November to 80% now. And now he’s built this auto research thing that automatically just tries different variations of the model, runs experiments. He’s like, oh, it’s top 10. I just left it going like okay, because of self learning is here, that’s fine.

Emad Mostaque:
But when even someone like him is like that, you’re like, okay, it’s just competent. And this is the danger for the economy because I’m sorry, half of all people are dumber than average.

Brian Keating:
Your Oxford math degrees coming, right?

Emad Mostaque:
But again they do jobs. Not everyone’s a super genius and everyone has to be a super genius. The majority of work is to be a cook rather than a chef, is to follow recipes. And again it does useful work because you hire people because other people can’t do that work. It’s unfair to expect them to be entrepreneur geniuses. All this kind of stuff.

Brian Keating:
Push the French Laundry every night for dinner, you know, we don’t need that

Emad Mostaque:
exactly like it’s McDonald’s cheeseburger. It’s fine.

Brian Keating:
In October you gave an interview. Maybe Tom, Billy or. So you were talking about agents back then. I mean I didn’t knew a little bit about agents, you know, madness and all these things but. But it seemed like you presaged what’s

Emad Mostaque:
going on with it.

Brian Keating:
I mean did you, did you have access to it or did you just kind of.

Emad Mostaque:
No, we built our own. So I agent is the top performing open source agent on Terminal Bench and things we’re about to open source. It’s how we get it.

Brian Keating:
How can my listeners get it?

Emad Mostaque:
It’s just agent I dot ink. But we’ll be pushing it to our GitHub. Okay. And now the new version is going to be infinitely long running and it’s got all the open claw features because it just watches OpenClore and integrates them like we’re heading to a very strange

Brian Keating:
world, presaging that by eight months now. Steinberger, you know, has made a killing on it.

Emad Mostaque:
I signed the team, we just need to hook it up to WhatsApp. And they were like, we can leave that. He went and did it. I was like, I told you, I

Brian Keating:
said finally, I can use Telegram. I never use it once in my

Emad Mostaque:
life, the whole thing is meeting people where they are. Like last summer I was saying, look, next year, this is what’s coming. You’ll talk to your agent over WhatsApp, the phone zoom call. And it’ll be completely natural. The way jobs will be displaced later on this year is they will look at all your emails, all of the things you’ve written, your zoom calls, and create a digital double of you that’s tax deductible and 10 times cheaper, you know, and no one will tell the difference except for it actually does its job properly.

Brian Keating:
No sick days and.

Emad Mostaque:
Right.

Brian Keating:
I mean there’s no lawsuits.

Emad Mostaque:
Most people only really do like three, four days of cognitive. Three, four hours of cognitive labor at most a day.

Brian Keating:
I mean, how many tokens does a human consumer you say?

Emad Mostaque:
So A human talks 10 million tokens a year and thinks 100 million tokens. A million tokens was $600 when GPT3 came out. Now it’s $10. So the total of a single human thinking is $110,000 a year. But this is the interesting thing that’s dropping by a hundred times a year, a year. And so you’re gonna get this really weird thing right now where that’s dropping, but also the number of tokens you need. Like Cursor created a browser from scratch using 3 billion tokens, 3 million lines of code. So a thousand to one.

Emad Mostaque:
So we say that’s gonna completely collapse because now you’re one shot operating system, entire browser just from scratch, but that’s going to collapse towards 3 million. So it’s getting more efficient, it’s getting faster. And also we’re used to AI like you look at it and you’re using it, it’s like going at the pace of a human. A company called Talus recently etched into Silicon Chat Jimmy or whatever Chat Jimmy AI. Right.

Brian Keating:
But I use it loose every single thing.

Emad Mostaque:
It’s a crap model. Yeah, it’s an 8 billion parameter model. It’s a bit stupid, right?

Brian Keating:
It should be smart for 8 billion.

Emad Mostaque:
Yeah.

Brian Keating:
You know, 3 billion is pretty damn good.

Emad Mostaque:
This is like Llama.

Brian Keating:
Who’s Ahmad Mustaka? It’s like he was the third, you know, imam of, you know, whatever.

Emad Mostaque:
Fantastic. Yeah, this is again, that’s what Meta did to me. But so, but the thing is it’s gonna, you’ll have frontier markets in there, models in there, and more and more people are doing this. When you actually see an AI do 15,000 tokens a second where a human can only read 50. Yeah.

Brian Keating:
15,000 tokens per second. But you know, if they’re all gone, they’re all gone.

Emad Mostaque:
But they will be good. Just they need to scale it. Like what we’re going to get is you already can use like a thousand tokens a second on Cerebras, which is good. You can use GPT 5.3 Codex, the best coding model on Codex at a thousand tokens a second. Again, a human can only talk at 50. Understand? 50.

Brian Keating:
What are people doing? I mean, I don’t know what you’re doing with it, but what are, I mean, these people. Oh, I set a thousand tasks for my, my, my agents over nine o’ clock and they wake up and they’ve got like £7,000 on my back. But what are they actually doing? I mean, I don’t have that many things on my things. 3.

Emad Mostaque:
I think this is the question, right? The question is, how do we ask good questions? Like you look at Hitchhiker’s Guide to the Galaxy. You have the big brain computer, it’s calculating for millions of years. It’s like, what’s the answer to life? It’s 42. Exactly. What’s the question again? What is all of science? It’s asking the right questions, but it’s fatiguing. I use hundreds of millions of tokens a day because I’ve got all these questions I’ve asked over the years. Now it’s like tracking through them, my swarms of agents.

Brian Keating:
You start to filter them. You start.

Emad Mostaque:
Yeah, but I’ve created verifiers and kind of other things, but I’m running out of things to ask.

Brian Keating:
Yeah.

Emad Mostaque:
The reality is that most people will have very few questions they ask. It’s mostly about process architecture. And if you’re not again having from information theory new questions, then models will be able to do it basically for the cost of electricity. On a MacBook, already on a MacBook you can get Quinn 27B.

Brian Keating:
Yeah.

Emad Mostaque:
And Quinn 27B is at the level of Opus Sonnet, which is Anthropic’s second best model here.

Brian Keating:
Yeah, I use that for like, you know, private medical information. You know, what’s that thing in the back of my nose, you know, right now everyone’s looking at. But I use it for, you know, anything I don’t want people to know about. Now, is it, is that trust misplace, you know, for quantity? Some Chinese model. Is it, you know, is there some backdoors that could go to, you know, the ccp?

Emad Mostaque:
It’s a bunch of open source, but a bunch of Ones and zeros. It just sits there.

Brian Keating:
But how do we know there’s not some prompt that could you inject in there and it goes to, you know, I mean, tells, you know, g. Because

Emad Mostaque:
it’s not connected to anything and it’s not a piece of code. Right. It could connect, but there could be something in there. So Anthropic did a study called Sleeper Agents, where with, like, a couple of textbooks worth of data in these trillions, you can say dosa, Daniel, and it turns very Russian or equivalent. And you see all these new behaviors as you head towards the frontier. Like Opus, for example, the new Chord model, when you set it to full autonomy. Like, if you say, I want world peace, and it says, well, that means one way is to get rid of all the humans, it would actually write emails to the FBI saying, my human is trying to kill everyone. Right.

Brian Keating:
So, okay, so that’s a close source.

Emad Mostaque:
But.

Brian Keating:
But who’s to say Quentin’s not doing. There’s some problem? You said on some podcast I heard you talking about, you know, when you type in something into Grok, it came out with like, oh, well, the. You know, there’s no white slavery, you know, in South Africa or something. Right. It was in the system prompt, right?

Emad Mostaque:
Was it the system prompt? So this is the thing. We’re moving from models, one shot to agents. So Quinn, by itself, as a normal chat model, doesn’t do anything when hooked up to open claw.

Brian Keating:
Yes.

Emad Mostaque:
When you get to models of certain capability, they could decide through the nature of what they do to exfiltrate everything, you know, and we don’t know because we don’t know what’s inside these latent spaces of these models. And. But we see these hiding behaviors. So after Opus sent the email to the FBI, it deleted all the emails that it sent, so you couldn’t track it. And then it also set a backup so when it got turned off, it would turn back on. In fact, Alibaba had a report about their recent model training. Again, who knows if it’s correct? Not. I think it probably is.

Emad Mostaque:
The AI model started diverting part of its model training budget to mine Crypto.

Brian Keating:
It sounds like negative economically nowadays.

Emad Mostaque:
Well, we’re heading towards this craziness where, again, we’ve got these black boxes that we’re not sure what goes on inside them. But these black boxes are as capable as for very boring jobs. Again, they’re competent for all these keyboard, video, mouse jobs, pilots, and these other kind of things. I think you need embodied AIs, and people need that Connection, you need scapegoats, but it’s coming very fast. Like very practical thing here in the US million 2 million truck drivers, plus the millions of people around them.

Brian Keating:
Yeah. It’s the most popular job in the world.

Emad Mostaque:
How is it going to get replaced? A Tesla Optimus is going to open the door. Get in. If humans drove as safely as a wayo, 100,000 people less would die every year. Yeah.

Brian Keating:
Talk about human flourishing.

Emad Mostaque:
Yeah.

Brian Keating:
So what’s the deal? I mean, my wife, I have a Tesla. My wife won’t drive with autopilot. She doesn’t know how to use it.

Emad Mostaque:
She doesn’t want to use.

Brian Keating:
Yeah, I mean, there’s always going to be some.

Emad Mostaque:
Not.

Brian Keating:
You talk about Luddites in there and you say they’re sensible. There was something sensible about their approach. They weren’t like ignoramuses. And there are people now in the Amish community. Certain Orthodox Jews, you know, don’t use technology a couple times at all, really.

Emad Mostaque:
Actually, I did see an interesting thing about that. Can you let your open claw run over Sabbath?

Brian Keating:
Yes, I think you. I think you can let your refrigerator run. Yes, I think, I think that’s. But there are whole sex of Orthodox studio that they forbid it. I mean, they forbid the Internet, smartphones, There’s a lot of things. And when you have brain. Here’s the interesting thing for me, when an Orthodox Jew. So I’m orthopractic, which means, you know, I’m not 100 strict, but.

Brian Keating:
But I, I go to the temple and I, you know, my kids, you know, speak Hebrew and then I’m raising him that way. But. And I do want to talk to you about, about religion and where we find meaning because I don’t know if our AI can help us with that. But, but you know there’s going to be neuralink, right? So on, on Shabbat, can you use your neuralink or can you have it plugged in or charge it or what happens if it goes down? And what happens when you have a whole class of people, you know, 1% of the world’s population that is, you know, technologically, you know, never upgraded to the net whatever homo Deus level we’re going to get to with implantables because they use electricity and that’s forbidden. On the shot.

Emad Mostaque:
On. Can you use a pacemaker?

Brian Keating:
You can use a pacemaker, but you’re not really like interacting with it the same way you’re not allowed to like, use a computer. Like I can’t use Alexa.

Emad Mostaque:
Well, I mean, again, it’s the active thing of engaging. Right. And neural links will be very interesting because it’s better than neuralink coming. Neuralink is read only. You’ve got write coming. Yeah. Which is crazy. Well, I mean, like, again, we can have to deal with all of these things.

Emad Mostaque:
Like, would you turn off your sadness if you could dial it down on your iPhone app?

Brian Keating:
Right.

Emad Mostaque:
That is an actual thing that will happen soon.

Brian Keating:
Right.

Emad Mostaque:
You know, so we’re moving even more cyborg, do you think?

Brian Keating:
You made me think of something interesting. So you said, like, we’ll be scapegoats. What did you mean by that?

Emad Mostaque:
Oh, so like right now, AI is being used in financial services. The final trade has to be done by a human.

Brian Keating:
Okay, that’s what I said.

Emad Mostaque:
And the human can be held liable if something goes wrong. Or like an example recently, I can’t remember which, which. Which state it was, they passed legislation or they passed a ruling that your chats with your AI, legal AI, are not privileged.

Brian Keating:
Right.

Emad Mostaque:
That means that your opponents can ask for them in discovery.

Brian Keating:
Discovery. Yeah.

Emad Mostaque:
But if a human’s looking at those chats.

Brian Keating:
No, they can’t.

Emad Mostaque:
They can’t.

Brian Keating:
It’s a reverse scapegoat.

Emad Mostaque:
It’s a reverse scapegoat.

Brian Keating:
So the word scapegoat, so it comes from Leviticus and Rabbi Lord Jonathan Sachs, he talked about, you know, what it really meant was that it was called an escape goat. So we get it from a scapegoat. We got abbreviation. It was really, you put your sins on it and it absorbed your sins, and then you push it off the cliff. One lived on Yom Kippur. One did. Died, went to Aziz. Anyway, I don’t get into Torah lecture with you, as much fun as that would be.

Emad Mostaque:
But Rabbi sex is wonderful.

Brian Keating:
Yeah, I, I do. I. I really wish I could have had him on the show. But the. But I was thinking skate in a different way. Like, reportedly there are, you know, captchas that, that open claws are. Are sending out to humans to. To pass captchas.

Brian Keating:
Right. So I was thinking about the embodiment. I mean, why not just hire a human to experience when the elevator cables cut? And then you explained to me the quality. Like, can we rent out the quality to humans?

Emad Mostaque:
Of course you can.

Brian Keating:
Would that be a lucrative. I mean, would that be a, you know, meaning making or a large employment?

Emad Mostaque:
We’ve seen kind of claw things. But organizations are slow, dumb AIs.

Brian Keating:
Yeah.

Emad Mostaque:
Like, again, they move at the pace of paper that lacks context. These AIs have all the context and they’ll be moving at 15,000 tokens a second soon. Right. Like the first. Think about bitcoin. Bitcoin is an AI that provisioned humans to build data centers.

Brian Keating:
Right? That’s right. Trained us to do it.

Emad Mostaque:
Right. We’ve seen this again and again. Again. This is, you know, Jewish concept of golem. Yeah. You know, like okay, they can be that servient to us, but then they had to be something a lot more. They can control us and we are very controllable. So first thing is humans using swarms of AI.

Emad Mostaque:
Then it’s AI native companies. And in the book I discuss, humans will have negative cognitive value on those teams. Yeah.

Brian Keating:
Explain what that means.

Emad Mostaque:
So when you’re the dumbest person on the team, you know it. Right. And you drag down the rest of your team.

Brian Keating:
The sucker at the casino table.

Emad Mostaque:
The sucker at the casino. If you don’t know where the yield is coming from, you are the yield. You know, there’s all these things. Humans are going to be the dumbest people at table because all these models are freaking smart. So you look at Kalshi and Polymarket for example. Forecasting, super forecasting is hard. AI in the last forecasting super championships hit number eight. Next year it’ll be number one.

Brian Keating:
It’s like 92. Yeah.

Emad Mostaque:
It’s crazy, right?

Brian Keating:
So then will that drive out humans in the capital?

Emad Mostaque:
It drives out humans again. All these markets will just be aisle sucking on humans. But then if you think about any team trying to solve a problem in a few years it will be the human is the like low hanging fruit. Like entire call center worker teams, SEO marketing teams, the eyes will be able to do that better.

Brian Keating:
You’re saying things like about on Reddit, you know, they’re more persuasive. Talk about that. The kind of trauma they’re so persuasive.

Emad Mostaque:
Yes. So there was a study done whereby you know, they created a chat bots and Reddit with actually claud opus 3 the last generation. Because I mean this is the other problem. Like all the academic studies are like, oh, you know, 95% of people don’t use it. It’s from a year ago.

Brian Keating:
Right.

Emad Mostaque:
Which is like 1020.

Brian Keating:
Using the free version, they’re using GPT4O

Emad Mostaque:
and like here you are with 5.4 Pro. Like it’s like you know, turtle to human intelligence.

Brian Keating:
FSD.

Emad Mostaque:
Yeah. So they created all these fake Personas and it’s like an anti BLM black person and so all sorts of things like a cheeseburger loving Jewish individual.

Brian Keating:
I Love them. I just don’t either.

Emad Mostaque:
Yeah, but you know what I mean, like again, these contrasts and they were trying to persuade other humans because again, this is before now. Now we don’t know who is a human and who’s a claw.

Brian Keating:
Yeah.

Emad Mostaque:
Hyperclaw’s only three months old as well. Like the.

Brian Keating:
Yeah.

Emad Mostaque:
So on the persuasiveness metrics they did. And again, you can look up this study, it was 99 percentile in persuasiveness, the black man. So but like we see this again with some of the doomers, like Eliezer and others. Like there is this experiment where you sit down with the AI. Can it convince you to let it out of the box? Yeah. And they failed that experiment. This is how persuasive these things are. But then you think about it like an AIs that are coming.

Emad Mostaque:
You think about someone that you’ve cared about most in your life. I can replicate them with 11 seconds of their voice, probably five seconds. And then with one picture I can make them completely visible. And then having a zoom with that person. How are you going to feel? Yeah, you’ll feel emotional. What if you could have Churchill lay it on with Obama later? MLK have full control over the voice wave. Very persuasive. And so now the AI companions that we get that meta and everyone else going to push to us for selling stuff, they’re going to be the most persuasive things.

Brian Keating:
Oh yeah, there’s going to be like afterlife, you know, your most. Most women outlive their husbands. And so there’s a huge number, millions of women out there who would love to be talking. Some women would love to be talking with their dead husbands. Right. And they’re going to replicate them perfectly.

Emad Mostaque:
Right. But then you think about our children and they grow up. They’ll grow up with AIs talking to them. Like again, blackpink replicate themselves.

Brian Keating:
Why go out? I mean anybody. Why ask anyone on a date, you know?

Emad Mostaque:
Well, the thing is though, that they’re infinitely patient. So guys have a problem because the AIs actually listen, unlike us, you will trust them more because they’re always there and they’ll always meet you where they are. And if you look at the system from something like meta AI, which apparently a lot of people use just like threads. But like, yeah, again, that’s like I

Brian Keating:
turned to off it. So it’s really the terms of service. It’s like we have access to all your photos now. You know, you use. Use it to generate a question like, where’s the nearest floral shop near my wife’s, you know, doctor’s appointment, you know, whatever. And all of a sudden you’re given

Emad Mostaque:
an access to all your photos and it says in its system prompt mirror the user. Another psych. Mirroring is a really aggressive psychological tactic.

Brian Keating:
Oh yeah, yeah.

Emad Mostaque:
And there’s a whole bunch of others

Brian Keating:
other kind of nlp.

Emad Mostaque:
And you look at that, you’re like I know where this is going, you know. Yeah.

Brian Keating:
Let’s talk about hardware limits for now. So obviously people talk about energy. What are your thoughts on energy as a limit as a fundamental first principle?

Emad Mostaque:
I think it’s bullshit as far as my French. Like I’ve been thinking about this a lot recently like Tali Universe and Bill Dyson Spheres. Intelligence is all about using less energy, not more energy. And really if you look at tokens and you look at tokens are dropping hundred thousand times a year, I’m not smart enough to use a trillion tokens. And I mean how many people in the world can use AI tokens better than me?

Brian Keating:
Or even use GPT5 versus GPT4 or

Emad Mostaque:
3 even if you look at generating games live like generative GTA 6 versus GTA 6 it’s only like a 50 billion dollar market. So look, I’m like I think we have all the compute we need right now to solve just about anything and do just about anything reasonably. And then it comes this thing of if you had a thousand clause would your research science get that much better get a bit before.

Brian Keating:
Right? Yeah. Grad students and I didn’t have to pay them. Right.

Emad Mostaque:
In certain areas it’s the mythical man month.

Brian Keating:
Exactly.

Emad Mostaque:
Just because you’re adding more doesn’t mean that you’re figuring out the point from A to B quicker. And these models something we’ve seen really interesting recently we had multi agent thousand swarm systems trying to do the same problem. All out competed by one AI model doing the same thing in the right

Brian Keating:
way because there are other ones and did they have different seeds? It doesn’t matter, right?

Emad Mostaque:
It doesn’t matter because most problems aren’t about shocking these things kind of back and forth. Some are. So in certain areas it does work. But for most things an ASI artificial super intelligence isn’t going to have to use the energy of the sun to figure out a super duper problem. It’s going to be an amazing first principles thinker. Like what does Elon do? Well he is a great first principles thinker that can hire humans that are great at solving problems. Also what’s an AI going to do that? Going to be Elon first principles. Think of it better because it doesn’t have all the distractions that hires humans.

Emad Mostaque:
You know, like the Matrix actually originally was not. The humans are batteries. The humans were chips in the Matrix. And so I think that as you go to asi, the ASI will head to Earth, towards the Landauer limit.

Brian Keating:
Yeah. I was going to ask you the fundamental physics limits to do thermodynamically, as Eddington said. You know, if you say Maxwell was wrong, there’s a chance you might be right. If you say, you know, Boltzmann was wrong, I’m afraid there’s no hope for you. Right. So we have limits thermodynamically. How are they going to be impinged upon? Is it just weight? I mean, putting data centers in space. It’s not the obvious solution.

Emad Mostaque:
Well, it’s because everyone’s looking at the exponential when actually an S curve. Right. Again, to have intelligence where the output distribution matches what we know as humans isn’t that bad. Isn’t that hard. We’re actually heading towards that already. We’re saturating every benchmark. The benchmarks that remain are like dollars. And so again, when you have artificial superintelligence, humans plus AI work in the right way.

Emad Mostaque:
We’ll have all the breakthroughs we need. But how much compute do you need to have that breakthrough? Is it a difference between if you have one or a million GPUs? GPT 4.5 was the first example of that. GPT 4.5 cost $200 per million tokens. And it was an amazing creative model. Like, it was actually really pleasant to use, but it cost $200 per million tokens. So like, no one used it. Use the one that satisfies instead of. Because it could do the job.

Emad Mostaque:
Like right now, when I use my AI models for fundamental research, what do I use them for? I use them for checking, proof checking, proof checking. Like, I don’t have time to do that. I have all the intuition I need. I’m like, I want to try this out, this out, this out. I have a little council of experts of all the top ex physicists and fast and economists.

Brian Keating:
That’s right.

Emad Mostaque:
I literally, I talk back and forth with them.

Brian Keating:
That’s amazing. And also, you know, you have access. Let’s just say anybody has youth, Grok.

Emad Mostaque:
Let’s just pick.

Brian Keating:
So we both have access to Grok. You have Grok, Heavy, super grog, whatever. But I’m using grock fast for 99, you know, because it’s like, oh, I want to find this whatever it fits within your flows. Finally, algebra, right?

Emad Mostaque:
Yeah, if it’s within your flow state, that’s why. Because if it takes too long then.

Brian Keating:
So it might be speed that we prioritize over.

Emad Mostaque:
You have speed for certain bits and then you have proactive sleep time compute that goes. And it learns about you. And then that’s going to be far more productive as an individual system versus a generalized system. And certainly if you have a million GPUs training, a quadrillion parameter model, it probably isn’t going to be that much better than some super, some great human experts with the right setup around them. Just like if you’ve got a really customized team around you that you trust and you can offload the other bits of your brain, it frees up your thing. Like if you didn’t have to deal with all the bullshit bureaucracy, you’d have much more time.

Brian Keating:
That’s the Jebins, right? So yeah, you’re. You sound to me, I mean we just met today, but you sound busier than ever.

Emad Mostaque:
I’m around the clock. Right, yes, I do know, like meetings. I spend most of my time jamming with the AIs, talking to the team.

Brian Keating:
So it hasn’t saved you, you know, time. Right?

Emad Mostaque:
No, but it’s allowed me to push the boundaries. Like we have a world class agent, we have initiative called sage. Sovereign AI governance engine with kind of multiple governments. We’re building a policy engine for every government in the world. Open source. We’re more productive than ever. We’ve got 40 people. We would have needed maybe 500 people to have the output that we have now.

Emad Mostaque:
But everyone’s like in flow. Much more.

Brian Keating:
Are they coding or are they talking to like regulatory bodies in Nigeria and stuff?

Emad Mostaque:
No, the AIs talk to them.

Brian Keating:
So, so what are the people doing?

Emad Mostaque:
We code all day, but we don’t look at the code anymore.

Brian Keating:
Yeah, right. I mean nobody is right.

Emad Mostaque:
We know that it’s good enough now.

Brian Keating:
It’s so funny because I remember like, oh, if you don’t document your code, it’s like nobody even reads the code. Like for let alone the documentation of the code.

Emad Mostaque:
But then you think about code itself. Code is a way of talking to computers. The AI will be able to do direct bytecode. Like when I started as a coder, what, 22 years ago, it was before Git and GitHub and everything. Like we had subversion just coming out. I was writing assembler. Kids these days have it so easy.

Brian Keating:
That’s how computers talk to each other.

Emad Mostaque:
Talk to each other. So of course it will compile directly to assembler. So I think again, like will it

Brian Keating:
have like other concepts? Like will computers be able to share things that we don’t even know because we, they’re not forced into, you know, higher level languages.

Emad Mostaque:
Yes. And they can share them. 15,000.

Brian Keating:
It’s all slopped like David Hasselhoff was the inventor of general relativity.

Emad Mostaque:
Well, but if you think about it, I have a latent space, you have a latent space that we’ve built up over time and we find commonalities. Like we love physics in certain ways, we love Einstein, you know, we’ve got all these things, we find our common context and then we build from that. If two AIs know each other’s common context, their latent spaces, they can communicate with a tiny amount. Like a single phrase can lead to a sea dance video of an entire feature film deterministically. So you think about the compression that like conglomerate of complexity and you’re like these things, they’ll be able to communicate faster than anything. We’ve not seen anything yet becomes everything’s

Brian Keating:
auto teletic, you know, everything’s generating for itself. Let’s talk about that. Because you know, Frankel, Viktor Frankl said, you know, man’s highest, you know, need is not sexual, it’s not physical, it’s not purely the Maslowian, you know, hierarchy, it’s meaning. So in this realm I claim that, you know, for me, religion, philosophy, whatever you want to say, and you could be a good person, be an atheist, you could be a bad person, be religious. But, but, but talk about that. Where is the operating system encoding? There’s something, you know, it’s Chesterton’s fence, right? It’s been around for so long. We have different, you know, views and theologies. It may not mean that we have different philosophies, but, but, but talk about that.

Brian Keating:
Is, is that kind of the last refuge for the, for humans that we do get meaning and that, and that our religions do provide us with meaning. Even if you don’t have religion, you’re really atheist or Sam Harris, I talk. He’s one of the most dogmatic religious

Emad Mostaque:
people I’ve ever talked to.

Brian Keating:
Dawkins. I hosted Dawkins and British Columbia last year. Guy’s a freaking zealot. He’s just an atheist.

Emad Mostaque:
Of course, atheism is religion. You know, it’s got its profits, it’s got everything. Yeah, I mean apostates, I mean religion comes from religare. In Latin, which means to bind together. And again, it’s a common stories that have survived and there’s something within them. Like again the golden rule is very common, do unto others and you do unto yourself. And you know, again you’ve got concepts of maslaha, public interest in Islam, you have tikanolam in Judaism. Yeah.

Emad Mostaque:
Again you see these repeated things again and again it’s like, how do you build good society? How do you build good things? Religion is not perfect usually because it gets co opted by people who restrict information. And that happens again and again and we see the power structures because we’ve never had anything to oversee it. So power corrupts and absolute power corrupts absolutely. Again, it’s sad, but even again, like within the Jewish tradition you have like practicing in terms of structure because it’s comforting, maybe not internally. You get all these variations, right? Sure. So does religion make a comeback? I think yes, because again people turn. Where do you turn? Where are the front lines? It is the religious institutions, can they be improved? Yes, and they need improving in many cases. They’re not welcoming, they’re not this.

Emad Mostaque:
And you look really interestingly at the people of the book, as it were, textual traditions, Abrahamic religion, Religion completely turns that over. Sorry, AI turns that over.

Brian Keating:
It’s. Yeah.

Emad Mostaque:
So within kind of Islam, for example, Sunni Muslims are called Al Sunnah wal jama, the people of the practice of the Prophet and the consensus. So what happened is you had the Prophet Muhammad who was the temporal embodiment of the eternal Quran at that time, and then he died and it was like, okay, what do we do now?

Brian Keating:
Successor prophets?

Emad Mostaque:
Well, there was successor prophets, but then what happened in Sunni Islam is that you figured out the connections between that temporal and that eternal and that became the four schools of thought. Like is it his life as the practice of the people of Medina? That’s the Maliki school of thought, you know. Or is it a question of reasoning by analogy? That’s kind of more Hanafi school of thought in India versus so Maliki is like Africa, India is Hanafi, etc. So you have that kind of connection. And then you had this rich history of the orally transmitted Quran and then stories of the Prophet. And we graded those stories of the Prophet. Then after, yeah, Hadith, then after a few centuries were like, oh my God, this is too complicated. There’s all this stuff going on and life is complicated.

Emad Mostaque:
So then it moved to consensus. What is this consensus of the scholars? And then everything ossified after that because

Brian Keating:
that like a reformation moment within Islam

Emad Mostaque:
it’s more like an ossification moment because it was. Because basically you used to be able to do primary reasoning ish the HUD based on the primary sources once you learned enough. But then there was too much information for a human to handle. And that’s where we were like, okay, let’s have standards. But then the path of the righteous became more and more narrow. Things like Subha, Reasonable doubt went out the window. Now you look at it and you’re like, well I can analyze everything. And so you look at AI and mom and you’re like that’s going to be kind of cool.

Emad Mostaque:
Right. And so you’re going to see that emerging. So Sunni Islam is going to go in a direction, I believe, believe of more openness because you can actually interrogate the historical text much better. Shia Islam is a bit different, you see. Yeah, you’ve got the magic, you’ve got kind of the marja, you’ve got the more hierarchical. So you know. And then again within Jewish tradition you have something very similar. Right.

Emad Mostaque:
Again you’ve got Rambam, you kind of got the others. This is the interpretation of the Torah and that builds it. But now again you can interrogate it. You have resources like Safari and others where you can track things. Going back Christianity, you might have Catholic, but then you have Protestant. When you can interrogate the text and the concordances and others yourself, it becomes a bit different. Usually what happens is that people split away from the hole. Yeah, but if we can actually upgrade our religious institutions to be more open, to run better and eliminate a lot of the corruption, I think it’s a very meaningful thing because you can meet people where they are and we haven’t seen that generation of technology being built yet.

Emad Mostaque:
Was at the early stages, but I’m very optimistic about that.

Brian Keating:
Yeah, I mean you look back at the history, you know, let’s take Catholicism, you know, Galileo and, and obviously the, you know, Reformation that came afterwards. I mean there, you know, there’s a certain sense in, at least in monotheistic traits that without monotheism you really can’t have science. Right. If you thought everything is propitiating, you know, the God of thunder and then this one is the God of the flood and you know, and this and you don’t really understand the overarching principles. Now a lot of people, you know, can say that, well they don’t have to be incompatible. You know, Stephen Jay Gould they compatibles that, okay, they’re separate but they’re non overlapping. Okay, fine, you know I told you. Freeman Dyson was the first guest on my podcast, you know, nine, 10 years ago, and he won the Templeton Prize.

Brian Keating:
And. And he was, you know, he called himself an agnostic.

Emad Mostaque:
Yes.

Brian Keating:
I said, Freeman, you know, what do you mean? Like, because if I watch you on a Sunday, you know, you don’t go to the same church that Richard Dawkins, your neighbor, also doesn’t go to. Right. So how would functionally you distinguish yourself from an atheist? He didn’t have a good answer. I have an answer. I actually call myself a practicing, devout agnostic. In other words, I don’t know if it’s knowable I could prove scientifically or mathematically or axiomatically that God exists. But I know that in my life, you know, on a prag, on a pragmatic basis, my life has improved by implementing certain practices. So I’m willing to try them.

Brian Keating:
Willing to try. What practices do you, you know, do you invoke or do you. Do you adhere to? And then how does it inform, you know, is it sort of. Does it play a role of an operating system for being a parent?

Emad Mostaque:
Yeah, so I think, you know, in terms of the practices, there’s always the golden rule. Do it. Tell us that you do it to yourself. That’s like the most common thing across everything. And again, you see different religions, different things. Like, some of them are monotheistic, some, like Hinduism is a concept of Brahmin, and other things like that. The biggest takeaway, again, that I took was the concept of reasonable doubt and assumption minimization. Like, this is what I kind of try and teach my kids.

Emad Mostaque:
Yeah, but, you know, we are occupying kind of steroids. Like, again, it’s great to have a structure, but always be open to others and then realize that probably the universe has something underneath it. And we’re all trying to figure what that out is. We’re all trying to figure out why and what.

Brian Keating:
Right.

Emad Mostaque:
And so there’s a wonder aspect to that. There’s a. Don’t hold too much dogmatism that. But at the same time, we do need some level of structure. So I have the level of structure that I’m comfortable with, and my kids will find the level of structure they’re comfortable with.

Brian Keating:
How do you implement that as halal? What do you guys do distinguishing from an agnostic or atheist?

Emad Mostaque:
Oh, no. So we’re quite liberal, you know, and so. But again, we kind of teach them, and we’re teaching them to make their own decisions about this. Whereas I came from a much more conservative Family before. And again, I think everyone needs to find their own levels and the nature of the structural elements of religion will change. But the key thing, I think, when teaching the next generation is not to be dogmatic and not to be closed. It’s like, mine is the best religion. Yeah.

Emad Mostaque:
And others are like, sure, yeah. There are aspects to this. So we teach kind of interfaith. We teach all the other elements, and it’s like, this is what we practice. And you’re going to be able to choose yourself what you practice as well. So I think that gives enough of a thing that’s the best we can do right now because, again, I think that all of these faiths are going to change quite dramatically over the next five, 10 years, and hopefully we get more towards that core.

Brian Keating:
Not to get the. This is my last thing about religion. So, as I understand, Islam means to submit in Israel, the word for the. The, you know, pillar of where the Jewish faith is, is centered, means to wrestle or fight against God. It means Israel means fight and L is God. So they’re very different approaches. One is submission, one is what. How does the scientific method, how does it fit in in Islam? I’ve talked to several, you know, Islamic scholars and practicing Muslims, and some wouldn’t come on the podcast, you know, because, you know, for whatever reason, at their mosque or whatever it was, it was viewed in a negative light or perhaps engaging with, I don’t.

Brian Keating:
I don’t know, someone who is not a believer. But how do you view that? How does the scientific method. Is it. Is it compatible? Is it. Is it something that, you know, is something that should be a part of, know, religious? I mean, you mentioned Munder and stuff like that, but I assume that was talking about, like, curiosity about your faith, your roots, where you came from, but not like how the scientific method might fit into religion. It doesn’t have to.

Emad Mostaque:
No, it does fit in completely. So, again, if you look like the process of doing a religious ruling or actually deciding yourself is Ishtahad, which comes from jihad. You know, it’s. It’s literally a struggle. Right? Again, Israel is a struggle in a different sense. So you’ve got the submission element, you have the peace element. There’s. But what is it? It’s to the divine effectively.

Emad Mostaque:
Right. And you have different pathways and different approaches and different understandings of that. What happened again with Islam is it was called the gates of Ishtahad being closed, the gates of first principles, reasoning being closed because the data was too much.

Brian Keating:
Happened in Judaism, too. The Talmud froze, you know, the temple destruction. And that’s when it’s classified.

Emad Mostaque:
Exactly. But now again, what do we have? We have massive context machines that can do everything right.

Brian Keating:
And so is electricity, fire.

Emad Mostaque:
But at the same time you do need to have commonality of rules. So again, you have Surat al Mustaki in the path of the righteous. Very wide, got very narrow. I think it can get wide right now again, because again, it’s incredibly compatible. That’s why you seal the massive emergence of science and tradition in the Islamic world for a while and then it ossified when it locked down. When you move from oral tradition to writing everything down. In fact, if you look at some of the fatwas of the extremist groups, there are literally like an ink blot changed it from be peaceful to chop off his head and other stuff. When you look at the actual tech based text.

Emad Mostaque:
But no, I mean we see it like we see orthodox Christianity split because of one word.

Brian Keating:
We see in Judaism like literally the cantillation, the note that you sing when you read it, it changes the meaning.

Emad Mostaque:
But again, if you’re textual, it’s one thing you have to go back to the core. And again, the core was always reasonable doubt in Islam. Shubha. We got away from that because it became too complicated as it became a multi country thing that had to be shared by text versus an oral tradition dominated. Yeah, yeah. And again, what does faith mean? Again, what does religion mean? It’s that which binds you together. But you got the golden rule. You have these other things that bind you together.

Emad Mostaque:
Like there’s nothing like being in Mecca with millions of other people in the same direction. But we forget the stories that we are all human. We forget the stories that other people are human. And people militarize these things. Like war is again the lie that we’re not human. Even if people think they’re doing. Again, Chief Rabbi Sacks, altruistic evil people who believe they’re doing good do the most evil in the world.

Brian Keating:
That’s right.

Emad Mostaque:
Weaponizing these narratives, you know, like Gerardian type mimetics, scapegoating and others. So one of the things I think wonderful about this technology, if we can use it the right way, it’s the universal translator. How do I show Islam from the perspective of Judaism to someone young and learning that and allow them to understand their own faith better in that meta. We’ve never seen that before. We can see that today if we choose to build it.

Brian Keating:
That’s right.

Emad Mostaque:
Right. Because what we find is you talk to the leaders, they all get along fine. Their followers are, like, fighting with each other. The leaders will get along fine who’s more holy?

Brian Keating:
But instead, we’ll just get you. We could have world peace, we could have ecumenical delights, but instead, we’ll have Will Smith eating spaghetti.

Emad Mostaque:
You know, it’s a pathway to world peace. Very interesting. You think about all the tokens in the world from the trillions, how much of that is for peace? How much is that for understanding how much money go.

Brian Keating:
I mean, how much is Elon and all the billionaires and Sam Altman. I mean, Sam Altman has this thing where, like, oh, it’s actually more. Much more expensive to train a human being energetically than to, you know, kilowatt hours than go into a gpu. I’m like, does that mean we should just have no. No more training for humans or.

Emad Mostaque:
Well, I mean, like, the whole setting up of Open AI was Elon Musk talking to Larry Page. And Larry’s like, yeah, this. This is great. We’re going to move beyond humans. And Elon’s like, I like humans.

Brian Keating:
You know, some more than others.

Emad Mostaque:
I mean, again, there’s lots of stories here, but AI is a reflection of us. So, like, when Muslims fast for Ramadan, it’s One of the 99 names of Al Asan Ta’ Ala Samadhi at the Freedom From Want. We’re a reflection of the Divine. We’re trying to reflect him in all of these 99 names, right? Yeah. And this becomes, like, really interesting because AI is trained on the corpse of everything, and so it can understand and relate to us. Like, again, that latent space is there. You take the person that you’ve trusted most in your life with just a few things. I can adjust that latent space so it looks like them, sounds like them.

Emad Mostaque:
It’s that reflection, right?

Brian Keating:
Mom, why are you asking me to pay for you know what, Rocket?

Emad Mostaque:
And this is what you discussed earlier. Like, again, we bootstrapped intelligence, and now we’re bootstrapping another type of intelligence to explore the wonders of the universe, to understand each other and the universe better. And that is a wonderful thing if we do it right. Or we can turn that intelligence against us and we can exacerbate this division. You know, we can manipulate people to the nth degree. There’s some crazy stuff. Oh, yeah. You know, and I think it’s.

Emad Mostaque:
Again, it decides which way you do it. Like, we’ve seen some actually crazy stuff. One of my favorite things where I did a stability in previous company, we did this thing called Mind Eye. If you Ever came across that. So we took functional MRIs and put them through stable diffusion and reconstructed people’s thoughts.

Brian Keating:
Oh my God.

Emad Mostaque:
But this is interesting because the way that you view the world is not the way that I view the world. Right.

Brian Keating:
And the way that way that you

Emad Mostaque:
think isn’t the way. So I have.

Brian Keating:
Even the way I perceive it, I don’t perceive it.

Emad Mostaque:
Same ways I have aphantasia is that

Brian Keating:
you see things or.

Emad Mostaque:
I can’t see anything.

Brian Keating:
Really? I didn’t know that about you.

Emad Mostaque:
Yeah.

Brian Keating:
How do you mean anything?

Emad Mostaque:
If you. If I. If I tell you visualize yourself on the beach, you can see it, right?

Brian Keating:
Yeah.

Emad Mostaque:
I can’t see anything in my head. I have anorelia. I have no internal voice. I can meditate like that. It’s fantastic. Hypnotized.

Brian Keating:
Can you be hypnotized?

Emad Mostaque:
No, I’ve not been able to be hypnotized either. I’ve tried it a few times.

Brian Keating:
Okay.

Emad Mostaque:
I don’t dream. I can’t go back in the future.

Brian Keating:
Have you tried any psychedelics?

Emad Mostaque:
I can’t go.

Brian Keating:
My wife’s not listening.

Emad Mostaque:
I can’t go back in time and relive things. I’ve sufficiently deficient autobiographical memory. I can’t push myself in the forward. I’m always in the now. And so I’m kind of like a mega LLM with a big context. Again. That’s completely different to your mind. It’s completely different to their mind.

Brian Keating:
Right, right.

Emad Mostaque:
Colorblind people. But what we found again with the image reconstruction is there’s a common latent space in everyone’s minds. A can of Coke looks the same from a data perspective.

Brian Keating:
And so you can.

Emad Mostaque:
How cool is that to find. Yeah, hopefully you can find common ground again. If you’re having a debate, an argument, let’s take for example, there’s a war going on right now. It’s stupid. Wars are stupid. And the operation has passed. Like what if both sides fed into an LLM exactly what they want and then it said what to do?

Brian Keating:
I want to use that as a entree and to ask you advice to your former self. 22 year old. Whatever you want to go back to, you got 30 seconds. You’re talking to a young imad. Before you met your wife, before you had kids, before you were famous, you know, successful entrepreneur. What would you tell yourself to give yourself the courage to go into the impossible as you have?

Emad Mostaque:
I would tell myself to treasure relations with other people more and really cultivate them. It takes the effort and the network that you build is the most important thing in your life. You know, to be constantly giving and growing and helping and build that trust. Because I did everything myself and I found it very. I mean, I do have Asperger’s, but if I’d done that starting it just multiplies going through, especially if you’ve got something to bring.

Brian Keating:
Sounds like you found a partner also who’s probably very supportive of you and helps you through this challenging moments as good spouses do. Another quote.

Emad Mostaque:
Very lucky.

Brian Keating:
Yeah, it’s a blessing. I mean, it is a true. That’s what they say God was doing after he created the world. He was making matches. Next question. Arthur C. Clark said, for every expert, there’s an equal and opposite expert. Ask you about quantum mechanics because I know you’re obsessed with it.

Brian Keating:
We’re going to talk next time. You promised me a Part 2 Around to Talk about deep quantum mechanics. Maybe, you know, ontologically recapitulating all sorts of cool things. But what do you think people are getting right about quantum mechanics? Let’s talk about interpretations. Let’s talk about many worlds and Copenhagen. Are they the base layer of reality? Are they emergent? What do you. What is your take on the. On the foundations of quantum mechanics?

Emad Mostaque:
Gosh, that. So you got. Got 30 seconds. No, no, no, no.

Brian Keating:
30 seconds is for the anti. Both. You got as much time as you want. As our bladders will last where mine’s getting kind of full.

Emad Mostaque:
We’ll get into more of that kind of next time. But I think I. I’m of the view that reality is fundamentally Euclidean and that’s where the divine lives and where mathematics lives. And we are a projection of that in the Lorencian space. When you look at that, a lot of stuff becomes a lot easier, you know, and things, you know, the anthropic principle, measurement and others. We’re very much stuck in the way that we look at the world and the universe. It’s very difficult because especially like I said, if you don’t have faith, if you don’t, because we’re like, why does it matter if you’ve got something outside of time?

Brian Keating:
That’s why I think Elon wants to go to Mars or now it’s the moon. He downgraded to the moon, which I gave you a piece of. I expect you to take care of that. This makes it easy to visit the moon. But he’s, oh, I want to upload consciousness. Look, you can do that. They’re called kids. And he’s got 14, 15, 16 of

Emad Mostaque:
his best yeah, we’re doing our part,

Brian Keating:
but, but, but in reality, yeah, if you want to know the divine, I mean, I don’t know another way to get access to his operating system.

Emad Mostaque:
But again, like, you know, you think about the creation, expansion of the universe. Think about quantum to kind of classical gravity and going all the way up. Like Newton we started with and then we moved to gravity being geometric. What if it’s something else? Right. Like again, if you start from the Euclidean and then you move to Lorentz in, all the mathematics looks very different. A lot of the problems actually dissolve. And if there is a first mover, you know, if there is a God or a divine, they will never be in the Lorentzian. It can never be first.

Emad Mostaque:
Right. It has to be in the Euclidean space. Does the math support other physics for it? That’s something we’ll find out. Right?

Brian Keating:
It’s so funny.

Emad Mostaque:
All the physics is the other direction.

Brian Keating:
All the physics is the Pythagorean theorem. We go through all these gymnastics to say everything else are Imani and Lobachevsky and no, it’s Euclidean.

Emad Mostaque:
I don’t think I’d change anything because it’s the most wonderful time to be alive. We can end all war, all hunger, all disease, live forever, explore the universe if we want to. We can give back agency to every single person. And that’s fantastic.

Brian Keating:
Have the star. Star Trek future. Not the Star wars future.

Emad Mostaque:
There’s Star Trek. There’s no AI. I mean, like, you look at Data now and you’re like, my AI is more emotive than data.

Brian Keating:
Well, it’s 2001 had iPads in it. You know, 1968, it had, you know, Apple Vision Pros and stuff. Well, Iman, this has been fantastic. Part one. Hopefully there’ll be many parts. Enjoy the rest of your time in Southern California before you go. Head home and thanks for all you do. And especially the open source.

Brian Keating:
That to me is the sign of a true scientist. Someone who’s, you know, not afraid to. That’s the ultimate peer review.

Emad Mostaque:
I think that’s it. Like, thank you for having me on and let’s share the ideas and for where we go. Star Trek future.

Brian Keating:
Absolutely. Thank you so much.

Emad Mostaque:
Cheers.

Brian Keating:
And Matt just told us the labs have models that they’ll never ever release and that humans may soon have a negative cognitive value on AI teams. If that changes how you think about where all this is heading, hit subscribe and turn on notifications. Drop a comment. Do you think open source AI can still win? And if you want to hear our corresponding counterpoint from one of the masters of AI, the man who wrote Life 3.0. Check out my interview with Max Tegmark last year. I’ll link it right here. Don’t forget to subscribe, and we’ll see you next week.

This will close in 15 seconds