Brian Keating

Peter Diamandis: Are We Moving Too Fast With AI?!

Transcript

Peter Diamandis:

There's no on/off switch, and there's no velocity switch we can turn down. We're using faster and faster computers to design and build faster and faster computers. We're using stronger and stronger AI to write code on its own for stronger and stronger AI. The notion that we're gonna have AI that is fully human-like and then exceeds human capabilities: I don't think it's a matter of if. It's only a matter of when.

 

Brian Keating:

Hi, everybody. Welcome back to AI and Your Life, the Essential Summit. I'm joined here... well, I should say I'm Brian Keating, and I'm joined by my good friend, mentor, and really influential person in my life, Dr. Peter Diamandis, who was recently named one of the world's 50 greatest leaders by Fortune magazine. Peter is the founder and executive chairman of the XPRIZE Foundation, the executive founder and director of Singularity University, and cofounder of Bold Capital Partners, a venture fund with $250,000,000 invested in exponential technologies. Dr. Diamandis is also a New York Times bestselling author of two books, Abundance and Bold. I've got those here. He earned degrees in molecular genetics and aerospace engineering from MIT and did his MD at Harvard Medical School.

 

Brian Keating:

And Peter's favorite saying is that the best way to predict the future is to create it for yourself. Peter, it's always a treat to be with you. Thank you for joining us on this AI summit.

 

Peter Diamandis:

My pleasure, my friend. A pleasure. And I was just saying, as we were getting ready here, how much I enjoy my time speaking with you. So this is always a treat, when you got two friends getting together and talking about the amazing world we're living in.

 

Brian Keating:

Yeah. We'll run out of time before we run out of topics. We've both hosted each other on each other's podcasts. Peter's podcast is Moonshots and Mindsets, and it's really delightful. It's seen exponential growth of its own, so make sure you subscribe wherever fine podcasts are bought and sold. So let's get started. I wanna ask you: you are the person I look to, a guru for many things, but lately there's so much information coming in on AI, so much promise, so much hype, so much excitement.

 

Brian Keating:

And I don't have time. I get 20 emails a week, as you do, probably even more. But the one I always read is yours, in addition to mine. I read yours because you distill it, you concatenate, and you make sense of the new developments in AI. So we talked a few months ago, but so much has changed. Where are we at right now with artificial intelligence?

 

Peter Diamandis:

Wow. So where we are is at a fascinating transition point, an inflection point. You know, one of the things I wanna do is just put this into perspective. Right? I think we might have talked about this before. The first time AI was really discussed was at a conference at Dartmouth in 1956. At that conference some of the, you know, founding leaders in AI gathered. It wasn't a large group, a dozen or so, but that's where the term artificial intelligence and the concepts around it were born. And that's just about 70 years ago.

 

Peter Diamandis:

Why has it taken so long to get to where we are today? We're finally in 2023, and I put that as the inflection point because everybody's speaking about it. ChatGPT was a user interface moment; I'll talk about that in a moment too. But why did it take so long? It turns out there are really four reasons that have gotten us to where we are today. The first is computational power. Right, what's called the law of accelerating returns by our friend Ray Kurzweil, and, you know, Moore's law, which is integrated circuits. There's been exponential growth, and it's continued, doubling in power every 18 to 24 months. And it's just now, really in the last 5, 6, almost 7 years, that there's enough computational power to throw at these deep neural networks to get them to operate. So computational power, and by the way, it's still exploding; we're seeing massive GPU clouds coming online, whether it's Tesla or Microsoft or, you know, Google, everybody.

 

Peter Diamandis:

So computational power is not slowing down. In fact, on a log scale, if you graph it, it's curving upwards, which tells you the rate at which it's accelerating is itself accelerating. The second thing is the amount of labeled data out there. And this is the Internet, this is everything: every tweet, every Facebook post, every corporate web page, everything you ever put online. This labeled data is what these AI engines are crawling and learning from; they're learning from us. It's not like they're making it up from scratch; they're basically modeling us, and they're extrapolating and interpolating from the information we've given them. So the amount of data is doubling every 24 months.
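
To put rough numbers on that doubling-time claim, here is a minimal sketch in Python. The only figure taken from the conversation is the 18-to-24-month doubling period; the time horizons are illustrative, and this is a toy calculation, not a forecast.

```python
# Toy illustration of fixed-doubling-time growth (not a forecast).
# Assumption: capability doubles every `doubling_months` months, so the
# growth factor after t months is 2 ** (t / doubling_months).

def growth_factor(elapsed_months: float, doubling_months: float) -> float:
    """How many times larger a quantity gets after `elapsed_months`."""
    return 2 ** (elapsed_months / doubling_months)

for years in (5, 10, 20):
    months = years * 12
    slow = growth_factor(months, 24)   # doubling every 24 months
    fast = growth_factor(months, 18)   # doubling every 18 months
    print(f"{years:>2} years: {slow:,.0f}x to {fast:,.0f}x")
```

Under those assumptions, ten years of steady doubling is roughly a 30x to 100x gain; a log-scale curve that bends upward, as Diamandis describes, means the doubling time itself is shrinking, which grows even faster than any fixed-doubling curve.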

 

Peter Diamandis:

There's a new term for the amount of data: we're gonna hit a yottabyte of data very soon. I love that term. And the third reason is the models, how we're modeling AI. There's been a 99.5% improvement over 5 years in what a dollar invested gets you, so it's just getting cheaper and cheaper to create these models. And then the fourth reason, probably the most important one, is, you know, the massive amount of money being invested, hundreds of billions of dollars. So all those things are just turning the volume to 11 on AI here.

 

Peter Diamandis:

So that isn't slowing down. It's accelerating. But what happened to make it a topic on everybody's lips today? Well, these generative pre-trained transformers, you know, GPT-3 and 4 by OpenAI, Bard, and what's coming from Google soon, Gemini. But what's interesting is there's been an inflection point. Right? And for that inflection point I really give credit to Sam Altman and the team at OpenAI with ChatGPT. And what does that mean? As you well know and you've spoken about, we had ARPANET, which was out there. ARPANET was around, it connected all the universities, and it was really Marc Andreessen with Mosaic that put a user interface on top of very complicated equipment.

 

Peter Diamandis:

And that user interface, Mosaic and then Netscape, allowed anyone to use this capability of TCP/IP and the Internet protocols; it made it easy for people to use. And the number of websites exploded over a few years into the millions, then tens of millions and hundreds of millions. Well, ChatGPT put a user interface on top of AI, and we went to 100,000,000 users in 2 months' time. I think the important thing to realize is it's just one form of AI that's available, these generative pre-trained transformers. There are other ones coming. I was just hearing from a team out of MIT called Liquid AI that I'm gonna bring to my stage at Abundance 360. And the numbers I saw, you know, outperform ChatGPT in speed and context by orders of magnitude. And these are really based on modeling neuronal systems, you know, like neurons of the brain.

 

Peter Diamandis:

And so I think one of the important things to realize is there's no on/off switch, and there's no velocity switch we can turn down. We're using faster and faster computers to design and build faster and faster computers. We're using stronger and stronger AI to write code on its own for stronger and stronger AI, and so the notion that we're gonna have AI that is fully human-like, we can describe what that is in a moment, and then exceeds human capabilities, I don't think it's a matter of if. It's only a matter of when. Probably the greatest predictor of this is someone that we both know, Ray Kurzweil, my cofounder at Singularity University and on my board at XPRIZE, a dear friend. And in 1999, he predicted AI would achieve human-level intelligence by 2029. Right? Interestingly enough, everybody laughed at him back then, and where we are today is no one's laughing.

 

Peter Diamandis:

In fact, the entire industry's predictions, which used to be "it's never gonna happen" or "it's gonna be 100 years or 50 years," have all converged on Ray's prediction of 2029. Even Elon recently said he agrees it's likely to be, you know, '27, '28, '29, or thereabouts. And that idea of human-level intelligence, where you can have a conversation with AI about anything... man, are we so close right now when we play with it. But guess what? The next year, it isn't human level. It's superhuman level. Right? And we can talk about artificial superintelligence or digital superintelligence, but it's coming. And to quote Sundar, the CEO of Google, it's gonna be more impactful to humanity than electricity or fire, and I agree.

 

Brian Keating:

Wow. That's an astonishing statement. I hadn't heard that from Sundar. And I guess, you know, this dovetails nicely with the next question, but also something you said a few minutes ago, which is, you know, now we have AIs training AIs, and they can be great tutors at human level, and then they become superhuman level. What are some of the concerns? You and I both remember that horrible affliction that afflicted bovines, called mad cow disease. I've talked about the problem of AIs training AIs, which is only recently starting now that ChatGPT has opened up, you know, searching the web with Bing, etcetera, with a recent update. You know, I call it mad bot disease. What are you most concerned about?

 

Brian Keating:

Yeah. So what are the most concerning factors and most exciting things, just on a personal level? You're a dad. You're a medical expert. What do you play around with? So first concerns, then, well, how are you having fun with it? How are you using it every day as a dad, as a man?

 

Peter Diamandis:

So let me start by saying that I have zero question: AI is the single most important invention that humanity has ever come up with. Right? Artificial intelligence is going to enable us in the scientific and medical world like nothing ever before. It is gonna be more powerful than the microscope and the telescope, more powerful than scientific theory. It will give us room-temperature superconductors; if it's possible, it will give us fusion; it will give us the ability to design and improve life, to perhaps solve aging. Right? Which is where I spend a lot of my time, on the idea that aging is something that can be slowed, stopped, even reversed. So there is no putting AI back in the bottle. There may be opportunities to at least direct where it goes and how it flows.

 

Speaker:

Having said that AI is so important to humanity and there’s not it’s not gonna be stopped. There’s no on off switch. Right? I’m not worried about artificial intelligence. I’m worried about human stupidity. Right? So let me parse 3 time frames. The 1st time frame is right now, 2023 to 1st q of 2024. And I think if AI stopped right here, right now, it’d be amazing. You know, all upside, no downside, making us super productive, helping us all become better programmers and artists and writers and more productive than ever before, it would be awesome, but it’s not gonna stop, and it’s not gonna slow down, it’s gonna accelerate.

 

Peter Diamandis:

I wanna make that clear: it's accelerating, not even linear. So the next time frame I wanna talk about is really mid-2024 through 2028. I'll call that the midterm. And my biggest concern is the impact it's gonna have on the US elections, and it's gonna be the ability for AI to cause a reinvention of disinformation, a reinvention of a truthful society. If what you're watching, seeing, and hearing cannot be distinguished from what actually occurred, we're in trouble. So patient zero is likely to be a series of events that occur around the election, which create times of panic or distrust. And if you're defeating democracy, that's of great concern.

 

Peter Diamandis:

When I'm talking about these problems, the next thing I say to all the entrepreneurs out there is: solve them. Right, the world's biggest problems are the world's biggest business opportunities. You know, we have blockchain. There's gonna be an ability to embed, you know, blockchain watermarks or the like for authentication. But still, unfortunately, people can show you something that looks realistic, like photorealistic, and say this is fake, but your brain is still seeing it, and you may still believe it. And therein lie some of the challenges and the issues. So we're gonna see patient zero around the election, and we're gonna see AI being used for terrorist activities. What does that mean? Bringing down a power plant, bringing down a Wall Street server, just trying to cause havoc.
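
The "blockchain watermarks for authentication" idea he gestures at usually rests on something simpler underneath: sign a cryptographic hash of the original media so anyone holding the creator's public key can check it hasn't been altered, with a blockchain, if used at all, just serving as a public, timestamped place to post the digest. Here is a minimal, hypothetical sketch using the widely available Python `cryptography` package; the key handling and placeholder content are illustrative assumptions, not a description of any particular product.

```python
# Toy sketch of content authentication: sign the SHA-256 hash of a media
# file so anyone holding the creator's public key can detect tampering.
# Illustrative only; real provenance schemes carry richer metadata.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # creator's signing key
public_key = private_key.public_key()        # shared with verifiers

def fingerprint(data: bytes) -> bytes:
    """SHA-256 digest of the raw media bytes."""
    return hashlib.sha256(data).digest()

clip = b"...raw bytes of the original video clip..."   # placeholder content
signature = private_key.sign(fingerprint(clip))        # publish alongside the clip

# Verification: recompute the hash of whatever was downloaded and check it.
downloaded = clip
public_key.verify(signature, fingerprint(downloaded))  # raises InvalidSignature if altered
```

The open problem he points to remains, though: a signature proves provenance to people who check it, but a convincing fake still works on people who don't.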

 

Peter Diamandis:

Right? There are enough malevolent individuals, and there's gonna be an AI arms race between the white hats and the black hats. The only way to take on the, you know, dystopian use of AI is with AI. There's nothing else. And so there are a lot of companies being funded to do that. So that time frame between 2024 and 2028 is that period of angst and distrust and challenge. I think it's in the latter half of that period that we'll see the impact on jobs; we're not really seeing it yet. I think we'll see a number of jobs being taken and transformed. But, you know, every place I see AI being used right now, it's enhancing people's abilities to do more with every minute of time they have.

 

Speaker:

It’s not I haven’t hired any less people. I’ve I’ve had higher expectations of what they can do, with this technology. No one’s working less, at least in my companies. So and then the 3rd time frame is really 2029 onward when we have AGI, and then very soon shortly thereafter, ASI, artificial superintelligence. And the challenge there is AI could be the single greatest prohuman capability we ever had. I personally believe, and I’m curious what your thoughts are, that the more intelligent a being is, the more pro life and the more peaceful and the more pro abundance it is. You know, all the TV shows where, you know, super advanced aliens come here and destroy us for Terminator. You know, I call bullshit on that.

 

Speaker:

I think that’s ridiculous. I mean, there’s there’s no shortage of resources in the universe as you well know more than anybody. And, you know, the movie Her where the AIs get to a particular point where they get bored with humans and then leave to go out and explore is much more realistic to me. But Having said that, I think AIs can help broker peace. I think they can help uplift humanity. What do I mean by that? Why do we have wars? Well, for a number of reasons. But one of them is that people are unhappy with their state of life, that they don’t have food, energy, water, health care, education, and they have nothing to lose, and they’re angry. Imagine a world in which we could uplift every man, woman, and child, where we truly have massive abundance where everybody has access to everything they need.

 

Speaker:

I think that if you had the best life you could live, and a mom knows that her kids are gonna have the best health care, the best education to make their dreams come true. I think the last thing you’re gonna do is, you know, start going to war and put on a suicide vest. I think you have so much to live for that you would not want to give it up. So I think creating a world of abundance is one of the most important things that we can do, and I think that occurs. I’ve had this conversation with Elon. He said, we’ve talked about abundance. It’s a theme he’s spoken about. He, you know, was, a big supporter of my first book there.

 

Peter Diamandis:

And he said absolutely, after AGI. AGI gets us to that level of mass abundance. And so the question is, how do we make sure that AI is pro-humanity? And this is another conversation that's going on right now at all the top companies, and not enough inside the government: this idea of alignment, that as we are building and training our artificially intelligent algorithms, we need to make sure that they are aligned with humanity's best interests, and they're not aligned with radical factions.

 

Brian Keating:

Mhmm.

 

Peter Diamandis:

And the best example comes from a guy who's become a dear friend. He was at Google, helped bring 4,000,000,000 people onto the Google platform when he was head of business. His name is Mo Gawdat. And Mo wrote a book called Scary Smart. It's a great book I commend to everybody, a short read, very insightful. And Mo gives this analogy. He says, I want you to imagine that Superman came from Krypton, landed in Kansas, met the Kent family, and became a loving, pro-human individual because he was trained that way.

 

Peter Diamandis:

But imagine instead that he landed in the Bronx and was brought up by the mafia or drug lords. Nothing against the Bronx, I was born there. Instead of a superhero, he would've probably become a supervillain. So the equivalent of Superman landing on Earth is AI, and we are AI's parents. And so how do we raise that AI? How do we inform it? How do we teach it? How do we model for it? Because right now, we're modeling all of the insane stuff on the Internet. So there needs to be a set of training languages, training datasets, that are carefully selected. Right? You can send your, you know, 9- or 10-year-old to a great school, or you can send them to a terrorist camp.

 

Peter Diamandis:

And you’ll have a very different outcome from the exact same genome. So, this is what we have to think about.

 

Brian Keating:

My thumb's rather occupied right now holding up good old Carl Sagan, but yours is free to push that like button. And don't forget to subscribe; it really helps us with the algorithm. Now back to the episode. You and I are both proud parents of twins. And you just mentioned a thought experiment that I'd like to explore in more depth, perhaps: the way that genetics are not necessarily going to become destiny, and it might be the educational system that really parlays into a child's future success, and making sure that alignment doesn't supersede the role of a parent. I wonder now if you could talk about how it has affected you as a parent. It has affected me, but I already know enough about me. I'm really curious about you.

 

Brian Keating:

You're one of the most, I would say, role models, you know, exemplars I look to as someone who's ultra-successful, self-made, and just a force for good. But I know, above everything, it's your kids that make you feel the future. The future is for them. So how has AI impacted your parenting, if at all, and how will it impact their prospects for jobs and employment in an abundant future where nobody has to work?

 

Peter Diamandis:

Yeah. Beautiful question, and probably the most important question for anybody listening who has kids. And, yeah, I consider my number one purpose in life to be being their dad, and being a good role model and such. So, they're 12 now. They're in 6th grade, middle school. I don't think the educational system is preparing them at all for the world they're gonna inherit. You know, I just did a quick poll on, I'm gonna call it Twitter. Sorry.

 

Peter Diamandis:

Yeah. Sorry, Elon. I did the Twitter poll, and I asked: if you're a parent, is your middle or high school preparing your kids for the technological future? 3% said yes, 14% said maybe, and 83% said absolutely not. And I think people know this. Right? Because, you know, my kids... how old are your twins again?

 

Brian Keating:

They're 5.

 

Peter Diamandis:

5. So definitely for them. You know, by the time my kids graduate high school in 6 years, we're gonna have AGI. We may have ASI, and they're gonna have to learn how to live in a world in which they're in partnership with technology 24/7. You know, we're all gonna have a version of Jarvis from Iron Man, which is my favorite analogy. You're all gonna have, you know, what Microsoft calls a copilot, which understands you, enables you, supports you; you think in Google, basically. You know, as you enter a room, it knows your favorite music and it switches it on. If you're upset about something, it may turn on comedy on the TV to get you to relax.

 

Peter Diamandis:

But I remember when my dad didn't wanna buy me a calculator because he wanted me to learn the math tables. And I did learn the math tables. I got him to buy me a TI-59, I don't know if you remember it. And I learned how to program on it, which is what I said: Dad, I'm gonna learn to program on that thing. And so I think our kids need to become AI natives. To ignore that is, I think, ridiculous.

 

Peter Diamandis:

But now the question becomes, who do they need to be as people in a world of these superhuman AI capabilities? I'm more concerned about creating kids who are empathic, who understand how to make an argument, how to ask great questions. You know, one of my boys last semester had to memorize the state capitals of all 50 states. And I'm like, no, please. Honestly, that's why God created Google. You know? I don't think I need to occupy your neurons with that. So we're gonna have to reinvent what and how we teach our kids.

 

Peter Diamandis:

So I'm thinking about, you know, do I start a new school for them? Or do I partner with Singularity University to create, you know, after-school programs and summer programs and so forth? Because our educational system is broke and broken. Right? It teaches to the test, and it's not preparing kids for the future. Not even close.

 

Brian Keating:

Yeah. And kind of segueing from that into education, which is, you know, my vocation, but also my hobby, learning about it. And you operate a university, Singularity University, and it's really transformative. I mean, you talk to top executives. You see them on LinkedIn; that's always the thing they put. They put that above almost everything except for, you know, the Harvard-MIT axis that you also represent. But when you think about these exponential technologies, and you mentioned, you know, creating a school, which is kind of an analog technology to overcome, and perhaps, you know, inculcating natural intelligence rather than, you know, kind of educating humans on what AI can do.

 

Brian Keating:

And I wonder, are there, you know, applications, either from Singularity University or its alums, or perhaps in that same space, the education space? You and I talked, you know, several months ago when we did a pod crossover, like, you know, when Laverne and Shirley would go on with Ozzy. You and I remember that. We did a crossover and I said, you know, education, my profession, hasn't changed much in a thousand years. Since the first university opened in Bologna, Italy in 1088, you know, there's been some sage on the stage, and mostly he, unfortunately, but now more often she, took a rock and, you know, scraped it on another piece of rock, and there were these, you know, rapt students in the audience, except, yeah, they had the power to go on strike, and then the teacher wouldn't get paid. So thankfully, tenure got rid of that barbaric practice, Peter. But tell me, you know...

 

Brian Keating:

Besides your profession, health care, your original profession, I see almost nothing more ripe for extreme disruption. And it seems no job is safe from the perils of AI stealing your job, not even the academic landscape that I inhabit. So tell me, Peter, what are your thoughts on building a new university? Will we make my job obsolete when you can learn from Feynman and Galileo and Madame Curie? Why learn from Brian Keating? So, yeah, go ahead. Please explain what you think are the opportunities, perils, and pitfalls for education, and then we'll pivot to health care at the end.

 

Peter Diamandis:

Yeah. Well, you should polish up your resume. So listen, let's be clear about what's coming. What is coming is a complete, total revolution in education that is experiential. I'm clear about this. Right? The example I used last time, and I'll use it again: if I wanna learn about ancient Greece, I can pick up the Odyssey or the Iliad and try to make heads or tails of it.

 

Peter Diamandis:

But imagine a world in which I can enter a virtual world that is photorealistic, and that tech is here right now. And there's a guy in a white toga on a slab of marble who calls me over, and he's Socrates or Plato. And he says, let's have a conversation. Let me introduce you to my friends. Let's walk around Athens. And I experience it. I'm there. I'm in conversation.

 

Peter Diamandis:

I'm asking questions. I'm, like, being told funny stories. And that is amazing. Alright? And there's just no way... I mean, a great teacher can transport you to that moment in time if they are a great orator and storyteller and make it fun, but not at mass scale, and not personalized. So I think we're within, you know, 3 to 5 years of that being here, definitely within 7 years. I'm gonna be able to learn what I want, when I want, in a hyper-personalized fashion. So the question becomes... you know, we're gonna divide education, I think, into two different parts. One part is learning math, learning history, learning science and skills and so forth.

 

Peter Diamandis:

Another part is human interaction, being a good leader, and, you know, the stuff in the real physical world that's gonna be important to human capability, though we're looking at an XPRIZE right now to teach empathy using VR. So maybe I take that back. Anyway, I am curious. I don't know the answer. I mean, one of the biggest challenges is to help people find their purpose. We're purpose-driven. You and I share that. Right? We're very driven to create and to educate and to inspire, to guide, and to do all those things.

 

Peter Diamandis:

Is there something inherently human that no AI is gonna be able to replicate? We're gonna find out. We're really gonna find out. I don't know. What do you think? Where do you think, you know, a decade from now, after we have AGI and ASI... And, you know, we have the early versions of that with Khan Academy. And there are great games, and we gamify a lot of things. We don't sufficiently gamify education.

 

Peter Diamandis:

Right? I love the discrepancy. You know? In education, you start with a score of 100%, and every time you get something wrong, you go lower and lower and lower. In video games, you start with a score of 0, and every time you get something right, it goes higher and higher and higher. I mean, that's just broken. What do you think? What do you think education is gonna be when we have artificial superintelligence?

 

Brian Keating:

I wanna divide it, because you've inspired me to break things into epochs, and I wanna talk about the epoch of, you know, the current status of how education can be optimized, the near-term future, and then the post-2029 deep future. Right now, I've been underwhelmed by a lot of the, you know, artificially recreated Brian Keatings that I've tried to make and had read bedtime stories to my twins in my place. And, you know, they can do an okay job, but really trying to replicate me leaves me wanting, which is kinda surprising. Right? Because we've left digital breadcrumbs, you and I. I was on the Internet in 1989. Actually, before that, the bulletin board systems. I'm sure you even...

 

Peter Diamandis:

I remember the bulletin board systems. Yes.

 

Brian Keating:

Yeah. It was great, till I ran up, you know, a local phone bill of over $1,000 back in 19 6. That almost got me kicked out of the house. But so we've left these digital breadcrumbs for literally, you know, three to almost four decades. There's almost nothing, to my mind, that sort of knows what Brian Keating knows and what he doesn't know in order to optimize an educational experience for me. Our kids that are digital natives will have that opportunity, but I think, you know, there's a tremendous concern about privacy and so forth, when, in reality, Google knows more about you than your priest and rabbi, minister, doctor, you know, lover. Right? So we have this preciousness.

 

Brian Keating:

And we should, because privacy is a human right, as Tim Cook said, and I'm sure you've had discussions.

 

Peter Diamandis:

The question is, does anyone really believe we actually have privacy?

 

Brian Keating:

Yeah. No. Not with all the sensors and services. I mean, Amazon's and Motorola's AI...

 

Peter Diamandis:

Alexa’s listening to everything all the time.

 

Brian Keating:

I changed Alexa's wake word to Computer. Computer, who is Peter Diamandis?

 

Peter Diamandis:

Yeah. This might answer your question. "Peter H. Diamandis is an American marketer, engineer, physician, and entrepreneur."

 

Brian Keating:

Computer, stop. So what I've done, Peter, is I've connected the C-word back there, my Jarvis, to its own power supply. So someday I'm gonna see if it'll turn itself off, to see if it can make it a digital Susan. But in reality, I think there's tremendous... I think my colleagues are so averse. You'd be surprised. We're working on technology that can spot a light bulb on the surface of an exoplanet.

 

Brian Keating:

And yet we're not really thinking about how we can utilize this technology to have that, you know, Socrates experience that you just spoke about. And it's a huge opportunity for those that get there first. I'd say for the next 10 years after that, kids will grow up and they can kinda, you know, go through it and keep records, because a computer... you know, the human brain, you taught me this, is not for storing information. It's for creating new information and having imagination. And lastly, I think the deep future is really hard to predict. I mean, if you listened to futurists, you know, of any caliber decades ago, it was flying cars and underwater cities and life on other planets. I think it's so risky, and yet they missed the Internet. You know? Even though the Internet, as you pointed out, was around in 1970, they missed the impact of it.

 

Brian Keating:

There are famous quotes from people like Nobel Prize winner Paul Krugman saying the Internet's impact would be no greater than the fax machine's. I think the deep future is very hard to predict. I'm excited to see how Ray's predictions, you know, materialize. But education, I think, is ripe for disruption, and I always say, you know, the professors shouldn't sit on their laurels, but, you know, for now they've had the opportunity to do that. We only have about 10, 15 minutes left, Peter, and there are so many more questions that I know the audience is gonna wanna hear about, none more so, perhaps, than your work in the medical industry: drug discovery, diagnostics, you know, medical assistance in the waiting room. You know, Eric Topol has written about how doctors nowadays are typing into one computer while patients are on their cell phones.

 

Brian Keating:

Talk about the impact of AI specifically. And I urge everybody to subscribe to Peter's newsletter, because I've gotten not only many good subscriptions to products Peter has presented to me that I otherwise wouldn't have, but also the insights of one of the world's foremost experts on medicine and AI. So, Peter, take it away. What can an AI assistant do for us?

 

Peter Diamandis:

Yeah. So I do agree education and health are the two areas that AI is gonna fully disrupt. And it's a sinful health care system we have today, and I'm on a mission to disrupt and reinvent it. And so that's where I spend all of my time. My venture funds are investing there. The companies I've started are focused on that. And in the US, we pay a ridiculous amount, and we're, you know, way down the list on health care quality. So what does that mean? We're complicated systems. You know, our 3,200,000,000 letters of DNA are just the beginning of the complexity.

 

Peter Diamandis:

You know, we have 40 trillion cells in our body. Each cell is doing something like a billion chemical reactions per second. So there's a lot going on. We're quantum systems. I think one of the things I'm excited about is quantum chemistry and quantum computation giving us new tools to understand, you know, how and why we are physiologically as we are. But here's the biggest thing people need to understand: your body is really great at masking disease. We compensate really well.

 

Peter Diamandis:

What do I mean by that? You don't develop a Parkinson's tremor until 70% of the neurons in the substantia nigra are gone. 70% of all heart attacks have no precedent: no shortness of breath, no pain, nothing on a CT scan. You know, the cancers that kill us, 70% of those are not screened for. We screen for, you know, breast and prostate and skin, but we don't screen for glioblastoma or pancreatic cancer or other cancers. And, you know, if you, god forbid, should have cancer, you don't feel anything in stage 1 or stage 2. It's only when you get to stage 3 or 4 that you go in, and the doctor says, I'm sorry to tell you this, but, you know, it's kinda late. We just found this.

 

Peter Diamandis:

And we all have people who've gone through that. So I'm saying that because the world has changed: technology and AI now enable you to know exactly what's going on inside your body. Right? We're pilots. You know, as I'm flying, I have all my gauges. I know exactly what's going on inside the airplane. Right? In my Tesla, I've got all my gauges. I know exactly what's going on in the car. But most of us have no idea what's going on inside our body.

 

Peter Diamandis:

And until you look... people say, I don't wanna look. I say, bullshit. Of course you wanna look. You wanna find it at inception and take action. So as an example, one of the companies I serve as executive chairman of, spending half my time on it because I'm so passionate about it and because it's got the biggest potential impact, is called Fountain Life. We have four centers right now: New York, two in Florida, one in Dallas. We'll be opening one in LA next year, and we have a waiting list of, like, 40 centers we're building out around the world. You go in and we digitize you.

 

Peter Diamandis:

Full-body MRI, brain, brain vasculature, an AI-enabled coronary CT, a DEXA scan, your full genome, your microbiome. You know, it's 150 gigabytes of data. And the reality is no doctor could fathom or handle that much data, but we can now with AI. Right? AI can take this data, integrate it, look at your 120-plus biomarkers collected from the blood draws, look at what's going on in the imaging and everything else, and start to create a model. If everything's perfect, fantastic. If something's wrong, what's wrong right now and what do we do about it? Or what's likely to happen to you, and how do we prevent that? So this is the era of really preventative medicine, with sensors all the time. And we've saved hundreds of lives. I have a couple of friends whose doctors said, don't waste your time and money, and we found cancer in them. You know, we find cancer in 2% of people who think they're perfectly normal.
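
The "integrate the data and flag what's off" step he describes can be pictured, in grossly simplified form, as comparing each biomarker against a reference range. The sketch below is a hypothetical illustration in Python; the markers, ranges, and values are placeholders, not Fountain Life's actual pipeline or clinical guidance.

```python
# Toy sketch: flag out-of-range biomarkers from a blood panel.
# All ranges and values are placeholders, not clinical guidance.

REFERENCE_RANGES = {  # biomarker -> (low, high, unit), illustrative only
    "fasting_glucose": (70, 99, "mg/dL"),
    "ldl_cholesterol": (0, 100, "mg/dL"),
    "hs_crp": (0.0, 3.0, "mg/L"),
}

def flag_out_of_range(panel: dict[str, float]) -> list[str]:
    """Return readable flags for any value outside its reference range."""
    flags = []
    for marker, value in panel.items():
        if marker not in REFERENCE_RANGES:
            continue  # unknown marker: skip rather than guess
        low, high, unit = REFERENCE_RANGES[marker]
        if not low <= value <= high:
            flags.append(f"{marker}: {value} {unit} outside [{low}, {high}]")
    return flags

patient_panel = {"fasting_glucose": 112, "ldl_cholesterol": 88, "hs_crp": 4.2}
for flag in flag_out_of_range(patient_panel):
    print(flag)
```

The hard part in a real system is not this comparison but fusing imaging, genomics, and bloodwork into one longitudinal model per person, which is where the AI he describes actually does the work.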

 

Peter Diamandis:

We find aneurysms in two and a half percent of people. We find a life-saving finding in 14.4%. At the end of the day, we're all optimists about our health, but we don't actually know what's going on. So the technology to know what's going on and then take action is finally here, and it will get cheaper and cheaper over time. What I mean is, the next stage, the vision I'm portraying, is Fountain at home. So I'm wearing an Oura ring, I've got my Apple Watch, I have a CGM, a continuous glucose monitor, and we're gonna have dozens of wearables, consumables, implantables that are measuring everything all the time, and that data is being fed to your AI that is making sure everything's in perfect calibration. And if something is off, it knows about it instantly.

 

Peter Diamandis:

And then in the near term, it's gonna inform your physician, you know, and it's gonna be a pilot-copilot relationship between the doctor and the medical AI. Eventually, it will be just the AI system. It may modify the meds you're taking, may modify the food it's serving you. But we're gonna get into a very rapid, closed feedback cycle to optimize your health. And that's the vision of where we're going. Today, it's 17 years on average between a medical discovery and it being available to you in the doctor's office. It's crazy.
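
The "closed feedback cycle" he describes is, at its core, a monitoring loop: read a sensor, compare against a target range, and act or alert when things drift. Below is a minimal, hypothetical Python sketch; the simulated CGM reader, thresholds, and suggested actions are illustrative stand-ins, not any real device's API or medical advice.

```python
# Toy sketch of a closed health feedback loop (illustrative only).
# read_glucose() simulates a CGM; a real system would pull from a device
# API, and a clinician-approved care plan would decide the actions.
import random
import time

TARGET_RANGE = (70, 140)  # mg/dL, placeholder target

def read_glucose() -> float:
    """Stand-in for a continuous glucose monitor reading."""
    return random.gauss(110, 25)

def act_on(reading: float) -> None:
    low, high = TARGET_RANGE
    if reading < low:
        print(f"{reading:.0f} mg/dL low: alert the wearer and notify the care team")
    elif reading > high:
        print(f"{reading:.0f} mg/dL high: log context and flag for physician review")
    else:
        print(f"{reading:.0f} mg/dL in range: no action")

for _ in range(3):      # in practice this loop runs continuously
    act_on(read_glucose())
    time.sleep(1)       # real monitors report every few minutes
```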

 

Brian Keating:

Yeah. And drug discovery, and also, yeah, having the copilot. I mean, I almost feel like in the future we'll look back on AI-free doctor's visits as this kind of, you know, bloodletting and phrenology and so forth. Right?

 

Peter Diamandis:

I'm predicting that in 5 years, it's gonna be malpractice to diagnose a patient without AI in the loop. I'll give you one example. Do you know how many medical journal articles are published per day? I may have asked you this number before. There are 5,000 articles in medical journals per day. Right? So the question I laughingly ask is, how many did your doctor read this morning? Right? And then, can the AI know, for my specific genome and my specific uploaded medical data, whether there's an article from this morning that has the answer to what I need? That's where we're heading, and it's gonna be extraordinary.
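
The "which of today's 5,000 papers matters for this patient" problem is, at heart, a retrieval problem: score each new article against a patient profile and surface the best matches. Here is a minimal, hypothetical sketch using crude keyword overlap; real systems would use medical ontologies or learned embeddings, and the articles and profile terms below are invented for illustration.

```python
# Toy sketch: rank today's articles by overlap with a patient profile.
# Keyword overlap is a deliberately crude stand-in for embedding-based
# retrieval; the articles and the profile are invented examples.

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def relevance(article: str, profile: str) -> float:
    """Jaccard overlap between article text and patient profile terms."""
    a, p = tokens(article), tokens(profile)
    return len(a & p) / len(a | p) if a or p else 0.0

patient_profile = "brca1 variant carrier elevated ldl family history pancreatic cancer"

todays_articles = [
    "early detection of pancreatic cancer in brca1 variant carriers",
    "statin response and ldl reduction in large cohorts",
    "novel room temperature superconductor claims re-examined",
]

for article in sorted(todays_articles,
                      key=lambda a: relevance(a, patient_profile),
                      reverse=True):
    print(f"{relevance(article, patient_profile):.2f}  {article}")
```

In practice the hard parts are normalizing clinical vocabulary and deciding what is actually actionable, which is why he frames the near-term version as a copilot for the physician rather than a replacement.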

 

Brian Keating:

Yeah. And similarly in the, you know, aviation industry, we talked about this last time. Right now, every plane has to be dialed in by hand. Oh my god. Every pilot has to take his or her eyes off the windscreen, look down and type into it, and then wait a minute for the weather to be read. This should all just be in a heads-up display delivered to glasses. They know where you're going. They know where you're landing.

 

Brian Keating:

They know who you are. So I just feel like the impact in saving lives, etcetera, is going to be so monumental that you're right: it'll be malpractice in a host of industries, ranging from legal to education to aviation to medicine. But, Peter, we're coming up on the end of it. I wanna ask you just one ultimate question. I mean, as a person, on the human level, as we see what these technologies have always done... I was in Cleveland recently, back at my alma mater, Case Western, and I stayed near the Cleveland Clinic. And I saw some Amish people. They would get on the elevator with us, and they would ask us to push the button, because they will not use technology, I suppose. I'm not familiar with that religion.

 

Brian Keating:

I know for my religion, Judaism, on the Sabbath, on Saturdays, we don't use electricity. I don't work either. But tell me, Peter, is there gonna be a class of digital denizens and a class who are left out, or non-digital? I can already see it. As I said, one-seventh of my life, I don't use technology. I mean, what is that gonna do? Is it gonna bifurcate us into a caste system based on digital access to AI?

 

Peter Diamandis:

It will, at the choice of those who choose not to use it. I think technology is a demonetizing and democratizing force: the better it is, the cheaper it is, and the more available it is. And people may choose a different path of life. Will it eventually cause us to speciate? Maybe. In particular, as we start down the path of brain-computer interfaces. Right? So there will be those, and I would probably include myself, that as soon as I can get a good, high-bandwidth link to my neocortex...

 

Peter Diamandis:

I'd love to, you know, be as smart as you and understand quantum physics and, you know, astronomy at the level you do.

 

Brian Keating:

Just subscribe to my newsletter.

 

Peter Diamandis:

And at the end of the day, we're gonna be upgrading ourselves. You know, we're going from evolution by natural selection, Darwinism, to evolution by human intelligence, hopefully human direction. And so what does that mean if we can increase our IQ points, increase our connectivity? I think one of the things that is gonna bifurcate humanity is not those who only use AI, but those who merge with AI. Right? That's gonna be the most interesting element. So, if I'm able to think in Google, if I'm able to know the intimate thoughts of someone else who's connected through this neocortex BCI system, that's a level of intimacy like never before. This, you know, this isn't stuff for our children's children. This is us.

 

Peter Diamandis:

This is the next 20 years. It’s gonna be awesome.

 

Brian Keating:

Indeed. Okay, Peter. This has been a treat. As I said, we could go on for hours. I just love schmoozing with you, and I always learn so much. Last thing is just how people can connect with you and go down the Diamandis rabbit hole as I did a decade and a half ago. I can't believe it. You've been such a hero of mine.

 

Brian Keating:

Where can people find you?

 

Peter Diamandis:

So, if you go to diamandis.com, you can sign up for my blog. I put out a tech blog twice a week; one's on longevity and one's on exponential tech. My podcast is called Moonshots. You can see it behind me over here; there we go, it's that logo. And it's an episode a week. I'm really focused on talking to people who are taking huge moonshots in the world, as you have been, Brian: what they learned, where they failed, where they succeeded, and what their advice is to others who wanna make a big impact on the planet. At XPRIZE.org, we have launched over $300,000,000 in incentive competitions.

 

Peter Diamandis:

We're about to launch a quarter of a billion dollars in prizes in the next 3 months. Super cool. And then finally, if you're interested in my longevity plans, go to diamandis.com/longevity. My longevity practices, everything I do and why I do it, all boiled down, are there as a free PDF book that you can get, because taking your health into your own hands is critically important. So, anyway, that's my world. There's a lot else, but we'll keep it there.

 

Brian Keating:

I love it. We love you, Peter. Thank you so much for sharing so much of your valuable time with our humble audience. We'll tune in next time. Enjoy the rest of this beautiful fall season here in California, or wherever your travels are taking you around our solar system and beyond. Peter Diamandis, Dr. Peter Diamandis, friend and mentor for many years. Thank you, Peter.

 

Peter Diamandis:

Thank you, buddy. Appreciate you.

 
