Neuroscientist: We’re Not Ready for What This AI Discovered | Vivienne Ming
Transcript
Vivienne Ming:
What if the smartest thing AI could do is refuse to give you the answer?
Vivienne Ming:
We trained it to never give answers. It is Socrates. It only gives context and questions. But amazingly, upwards of 20% of participants switched into cyborg mode and achieved this amazing, not just superhuman but super-AI, performance. What is the point of us if the best and brightest can’t beat what an AI can do? But then about 5 to 10% did something amazing. We called them the cyborgs. If we zombie-walk our way into a future where it has all the answers and we don’t, what’s the point of us? Exploring the unknown is the one thing humans are uniquely well suited to do.
Brian Keating:
What is the best use of AI right now?
Vivienne Ming:
I have a paper coming out, and the title of that paper is Human Capital, Not AI Benchmarks, Predicts Hybrid Intelligence in Forecasting. So how do you measure how creative a human is, or an AI is, or the two together, when maybe a massive large language model has just memorized every measure of it we would typically use in science, which, by the way, it has, and so it distorts everything? And then you have a bunch of marketing professors or computer scientists who, and I love them, but they aren’t scientists, running these experiments that aren’t really valid. What we had people do was make predictions about the future: what will the price of oil be in six months? Which, as we record this, everybody has some sense of. But I will tell you, when we ran the experiment several months ago, nobody, nobody knew what the price of oil was that day, much less six months from now. So the humans in this experiment did terribly. The AIs did great, and pretty much how they did tracked with the traditional AI benchmarks they were scored on. But then we paired them together, and human capital absolutely dominated. Which is to say, the vast majority of people in the experiment, including a whole lot of UC Berkeley students, smart kids, essentially said, Gemini, GPT, what will the price of oil be in six months? And then they submitted that answer.
Brian Keating:
What do you expect from the second best UC school? Right?
Vivienne Ming:
Yes. Well, I mean, we can’t all be in the sunshine all the time. In this particular case, you know, you could look at that and feel really terrible. What is the point of us if the best and brightest can’t beat what an AI can do? Essentially, they’re just a pair of legs to walk the answer across the room. But then about 5 to 10%, not a huge percentage, but we saw it in there, did something amazing. We called them the cyborgs. The cool thing was, you couldn’t tell who made the prediction. Was it the machine? Was it the person? Because what they would do is they’d make a big prediction, and then the AI, and we used a variety of different models, would sort of say, oh, but wait a minute, what about the data? And then the humans would say, okay, you’re right, that wasn’t right, but what about this? And they’d go back and forth several rounds. They had to make 10 predictions in an hour.
Vivienne Ming:
They didn’t have time to cheat the system. They went in, and they did better than the best humans, even if on their own they were modest. They did better than the best AIs, even when our cyborgs had only a small open-source model available to them. And we took the questions off of Polymarket, which, again, six months ago very few people knew about; now it’s in the news. In many contexts they were comparable to Polymarket on high-volume questions. So this is a prediction market: people have actual money on the line trying to figure out what that price of oil will be. And a human with no prior knowledge, paired with even a modest AI, but with the right set of behaviors, matched it.
Vivienne Ming:
So why did I drag you through all that nerdy stuff? Because of what predicted it, the human capital I’m talking about. Working memory span, a classic measure of fluid intelligence; perspective taking, the ability to understand what other people are thinking, theory of mind: these predicted the ability to use AI to make these predictions. Curiosity. Intellectual humility: when you had a prediction of your own and the machine said no, did you just take its answer? Did you push back? Did you change? Did you learn? So we looked at that behavior, and, I’m finally getting to the answer to your question, we said, could we make the worst-performing AI of all time? You put it on any benchmark, it does about as badly as GPT-1. In this case, we took an open-source Llama model and we trained it to never give answers. It is Socrates. It only gives context and questions.
Vivienne Ming:
So it does terribly, because it refuses to give answers. But amazingly, far more people, upwards of 20% of participants instead of 5%, switched into cyborg mode and achieved this amazing, not just superhuman but super-AI, performance. And of course the great thing is, as the AIs get smarter, so does the hybrid intelligence. So that’s an actual experimental result where we can see what makes humans amazing. And we could think about what that means for parenting, education, the workforce, but we could also see what it is about AIs that makes humans amazing. Why are our benchmarks about what AIs can do all by themselves? Why aren’t they about how they make us better? And as I said, it turns out giving you the answer is almost the worst thing these machines can do.
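The questions-only model Ming describes was an actual fine-tune of a Llama model; a far cruder imitation of the same idea can be sketched as a wrapper around any chat model that enforces the Socratic constraint after the fact. Everything here, the prompt wording, the `ask_socratic` helper, and the retry rule, is a hypothetical illustration, not the experiment's actual system:

```python
# Sketch: enforce a "context and questions only" Socratic policy around any
# chat model. The model is any callable fn(system_prompt, user_message) -> str.

SOCRATIC_SYSTEM_PROMPT = (
    "You are Socrates. Never state an answer or a prediction. "
    "Respond only with relevant context and probing questions."
)

def looks_like_question(reply: str) -> bool:
    """Crude check that the reply ends in a question rather than an answer."""
    return reply.strip().endswith("?")

def ask_socratic(model, user_message: str, max_retries: int = 3) -> str:
    """Query the model, retrying until the reply passes the questions-only
    check; if it keeps answering, fall back to a canned probing question."""
    for _ in range(max_retries):
        reply = model(SOCRATIC_SYSTEM_PROMPT, user_message)
        if looks_like_question(reply):
            return reply
    return "What evidence would change your mind?"
```

With a model stub that responds in questions, `ask_socratic` passes the model's question through; with one that insists on giving answers, it falls back to a canned question, so the human never receives a direct prediction.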
Brian Keating:
You talk about cultivating a failure resume, which I took a slight bit of a counter-take on, which was just that, you know, that’s great for Silicon Valley: fail early, fail often. I’m so sick of hearing that from the Google days. I mean, it’s great when you have a $4 trillion valuation, right? But if you have a small company or you’re a small businesswoman, do you really want to be failing often, or would you like to not fail at all and learn from other people’s failures?
Vivienne Ming:
In a perfect world, we would never fail; we would always know what the right answer is. But note, you’ve sort of asked me two separate questions. One is what I really meant by a failure resume, which is, listen, it’s hard for me to get out of my own head. I’m a neuroscientist; I think of a lot of high-level human behavior in that sense. We’ve got this thing called the anterior cingulate cortex, the ACC. Back in the day, we used to call the ACC the oh-shit network.
Vivienne Ming:
Because if you had someone in an experiment where they had to make decisions really quickly, and they made a decision and immediately knew they were wrong, oh shit, oh shit, then you’d see their ACC light up in, like, an fMRI. So all this neuroimaging.
Brian Keating:
You don’t really want that from your pilot, right?
Vivienne Ming:
Clearly, you don’t want people making mistakes. Except, if that network doesn’t fire, where do its signals go? To a variety of places, but one of the places is into your amygdala and into your nucleus accumbens. What is the nucleus accumbens doing? That’s your reward center. It’s getting prediction errors, along with endogenous opioids to bring you that pleasure. But I actually just wrote about this recently, maybe it’s my newsletter for next week, about curiosity: a better way to think about dopamine is that it is the prediction drug, not the pleasure drug.
Vivienne Ming:
If you give a dose of exogenous opioids, like an injection, to a mouse, and you can do similar things with humans, there’s no evidence of pleasure. You don’t feel pleasure; you just want to do a thing. So when you get those error signals, implicitly your brain made a prediction about the world, and now your experience is, oh, that wasn’t quite right. That error signal is how you learn and how you train. If you never get that error signal, by definition, you do not learn. So, I mentioned John Hopfield won the Nobel Prize in physics a couple of years ago.
Vivienne Ming:
My academic grand-advisor. So did the team working on protein folding; they got the chemistry award, and that lineage of work runs through reinforcement learning. Well, guess where we figured out reinforcement learning? Not from AI. It came right out of the brain, studying rats solving mazes. No ACC, no error signal, no learning. I will add one addendum for the CEOs of the world: you are too cautious. I get to run lots of cool research, and I will keep this very tight. I got a really interesting pair of calls in April of 2020.
Vivienne Ming:
It was Amazon and Facebook, and they both said, we just sent everyone home and we don’t know what to do. We don’t know how to be innovative through a camera. And so I got data from two of the biggest data sets in the world, the Amazon workforce and Facebook in general, to do things you’re never supposed to be able to do, which is track everyone. But I’m particularly interested in what makes the smartest, most innovative teams. And there are a lot of people doing this research, so I’m far from the only one. But we got to see this massive data set, and what we see is that innovative teams are very much risk takers. Not incremental innovation; transformative innovation came from these step-change behaviors. They never got there on the first try.
Vivienne Ming:
They built off of failure. Again, super long story short, what we found is that in science, in popular music, pick any domain you want, we are far too risk averse relative to a theoretically optimal Bayesian innovation agent. As soon as someone has a good-enough answer, we all start herding around those safe answers and stop exploring. We actually went so far as to call it the information exploration paradox: the more information is available, like, say, free information from an AI, the less we explore. Even scientists do this. That really scared me.
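The herding-on-safe-answers behavior she contrasts with an optimal Bayesian agent has a classic toy illustration: a two-armed bandit, where a purely greedy agent locks onto the first good-enough option and never discovers the better one, while an agent that keeps exploring eventually finds it. The payoffs, epsilon values, and `run_agent` helper below are illustrative assumptions, not the study's actual model:

```python
import random

def run_agent(epsilon: float, payoffs=(0.6, 0.8), steps: int = 200, seed: int = 0):
    """Epsilon-greedy agent on a deterministic two-armed bandit.
    Returns (total reward, per-arm pull counts)."""
    rng = random.Random(seed)
    estimates = [0.0, 0.0]  # running value estimate per arm
    counts = [0, 0]
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)                 # explore
        else:
            arm = estimates.index(max(estimates))  # exploit "good enough"
        reward = payoffs[arm]
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
        total += reward
    return total, counts

herder = run_agent(epsilon=0.0)    # greedy: herds on the first decent arm
explorer = run_agent(epsilon=0.1)  # keeps sampling, can find the better arm
```

The greedy agent samples arm 0 first, sees 0.6, and never touches arm 1 again; an exploring agent can only do as well or better on this deterministic setup.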
Vivienne Ming:
And so this idea of a failure diary wasn’t keep track of all the mistakes you’ve ever made; it was keep track of those mistakes and tie them to the eventual success they led to. Train your brain to connect oh, that didn’t work out to now I understand it better.
Brian Keating:
Why did you say, just a second ago, that it’s scary that scientists do this?
Vivienne Ming:
It’s scary that scientists show this same very human failing because our job is to explore the unknown. Here’s a thing that maybe only people like you and I could really share as an experience. When I originally worked with BERT, that was, like, the first big language model, this is Google’s early model, and then GPT, and nowadays all of these different models, my immediate experience was: this is a lot like working with my grad students. My grad students are brilliant. They know everything. They’re brilliant kids who, for five to seven years, their whole life is about this one esoteric question, right? Without spouses or kids or pets, nothing else. They know it better than me.
Vivienne Ming:
They never needed me to teach them the facts about that question. Forget a world with AI or the Internet; you could have just gone over to Geisel Library, or our libraries up at Berkeley, and learned about it on your own. Our job is not teaching facts. That may be shocking to you, but the job of science itself is not a bunch of facts. Science is exploring the unknown. Science is what happens when the facts have run out and you don’t know what happens next, whether that’s in physics or neuroscience or psychology or any field. And my job with my students, who know everything but understand nothing, is to teach understanding.
Vivienne Ming:
What do you do when the answers end and you don’t even know what the next question is? Let’s call those ill-posed problems. When you have questions that have answers, those are well-posed problems. The job of science is the ill-posed problem. As more papers come out within a subdiscipline of science, over the last 20 or 30 years, scientists stop reading as many papers, they stop citing new papers as much, and the existing high-profile individuals with lots of citations get cited even more. And then, this is the scary part, the correlation, and this is something you could only measure in the age of AI, because how else would you read all of these papers, the correlation between the quality of new papers from new breakthrough researchers and the chance that they become the hit song of the summer, if you will, goes down.
Vivienne Ming:
So this is why it’s scary to me. The people whose fundamental job is to explore the unknown, and in a world with AI I’m going to kind of pointedly say maybe that’s all of our jobs now, within our own domains, because exploring the unknown is the one thing humans are uniquely well suited to do in an AI-driven world, even those people, scientists, have been exploring less over the last few decades than they ought to. And so a lot of what I write about in the book, particularly in the second half, the last third of the book, is, well, how do you push against that? There’s a chapter titled How to Robot-Proof Your Kids. There’s a chapter titled How to Robot-Proof Yourself. How to Robot-Proof Your Company, or Your Community, I don’t remember which I actually titled that one. And I just wanted to do something I would almost never do, which is just say, here are some concrete steps for how to counteract this deeply human thing. We’re not bad for not exploring.
Vivienne Ming:
Everybody shows this terrible behavior, but we have to fight against it. If you think about the people in my experiment I described earlier, we called them the automators versus the cyborgs, the ones that just asked AI for an answer: smart, elite students. I’m scared.
Brian Keating:
Vivienne actually wrote about this exact problem, and the data she collected changes how you think about your own kids. This is a parenting book, you know, a stealth parenting book, a parenting book in disguise. It’s full of, as I say, red meat, or white tofu for you Berkeley denizens up there. But it’s chock full of technical goodness. And one thing that keeps popping out is a sensitivity, really a tenderness, toward children. It’s obviously from your upbringing and your life in general and, as you said, your son. And a lot of it, I would say, not without tremendous justification.
Brian Keating:
It kind of makes me think about all the ways we could be using AI for good. And one of the things I thought is, why the hell would we want to robot-proof our kids? Especially when, according to UNICEF, there are 100 million child slave laborers in the world, some in unspeakable conditions. I’m not going to talk about that; I’ll get too emotional about it. But why not? I mean, what if we could replace them with robots, Vivienne? What’s wrong with that?
Vivienne Ming:
So let’s be clear. I grew up in a little valley in Central California. The valley is called Corral de Tierra, but John Steinbeck wrote a book about it called The Pastures of Heaven. So according to a Nobel Prize winner, I grew up in the pastures of heaven, which, not coincidentally, is just west of East of Eden. The Pastures of Heaven was meant as an ironic title. It was beautiful. But every day in the Salinas Valley of California, the most productive farmland in the world, an army of human beings marches out, bends over in those fields, and provides the bounty.
Vivienne Ming:
If we knew what they could do if we took that away: 100%, all of those heads of lettuce should be picked by a robot. A human being shouldn’t do that. What I actually talk about in Robot-Proof is not something like, everyone should have a universal basic income and just sit around and somehow we’ll all be artists and scientists. Guess what? We don’t pay artists anything right now. So if you want to be an artist, what are you waiting for? The point here is that you should want to do amazing things. And I think it’s right in the spirit of what you’re talking about: every single kid on this planet is amazing. Every single one.
Vivienne Ming:
I don’t say that lightly, nor do I say it in the sense that everyone gets to live that amazing life. Absolutely not. The barriers between birth, or even how your parents met early on in their lives, and who you get to be are phenomenal. But I have a chapter in there, or maybe it’s a section, titled If Kids Were Bonds, where we built an economic model. We said, what if we took everything we knew, like the work of Nobel Prize winner James Heckman and of Raj Chetty, who’s probably on my list for an economics Nobel someday, and some of the work I’ve been able to do in my life, took the things we knew worked to change the direction of someone’s life, and invested in that at scale, for every kid in, say, America, or, because of work I’ve done in the past, every kid in South Africa, every kid in India?
Vivienne Ming:
I mean, the returns to the US in our estimate were $1.8 trillion, and that was from almost 15 years ago. We have an amazing economy here; that’s still only like 10% of the US economy at the time. But a 10% boost, trillions of dollars. Who would walk away from that? Even if you were selfish about this question, I don’t want to spend my tax money on someone else’s kid, what if you looked at what it would bring to your life 20 or 30 years from now? If kids were bonds, they’d be the backbone of the world economy. There is no guaranteed payoff that’s better than that.
Vivienne Ming:
So in writing a book that’s notionally about AI, I’m really writing a book about kids, or about anyone who’s around today, and what it takes. So when I talk about robot-proof, I don’t mean, you know, what Sarah Connor should teach her kids to fight Terminators. Although I’ll say my literal AI professor here at UC San Diego, John Batali, who, as far as I can tell, was stoned every single day of his life, but I loved him and he was an amazing guy, taught the AI class in the know-your-enemy vein of artificial intelligence. But rather than Cylons, what I really mean is, this book is the answer to a question: what qualities about humans go up in value as machines become more intelligent, and how do you achieve that? And the first step is just naming it. What are these qualities? It’s not the university you go to, although that’s part of it.
Vivienne Ming:
So let’s just say that once we’ve identified these things, it unlocks other things. Let’s be nerdy again: conditioned on that, the university you go to matters a lot. Conditioned on that, so does knowing how to do hard, technical things, ranging from the easier, like factorizing a polynomial, to the harder, like building AI itself or building rockets. But in a sense, knowing how to do that without these foundational qualities isn’t enough. I mentioned some of them because they were in that experiment I described earlier. I call them meta-learning skills in the book, the qualities that help you learn how to learn. Other people nowadays call them durable skills or foundational skills.
Vivienne Ming:
What I don’t like is soft skills or 21st-century skills, as though in 75 years you’d better not know these anymore. Because, one, they’re very measurable. I talk a lot in the book about this. At one point I was the chief scientist at one of the first companies doing AI in hiring, and we built a data set of 122 million people. A lot of that data came from LinkedIn. The first time I ever met Reid Hoffman, I introduced myself, and he said, oh, I know you, I don’t like you.
Brian Keating:
That might not be such a bad thing with Reid’s positioning nowadays.
Vivienne Ming:
So we had all this data. My team’s job was to predict how good you would be at a job you’ve never held. And it was fascinating and amazing, because I’d never done that kind of work. I’d started a couple of education companies; I’d been a theoretical neuroscientist and psychologist before that. Here was this new challenge. And the thing was, again, if you looked only at what university you attended and nothing else, it predicted a lot, whereas the skills you claimed on your LinkedIn profile didn’t actually predict much of anything. But then we did the hard job.
Vivienne Ming:
Are you resilient, in the sense of the psychological construct of resilience: how likely are you to find success after experiencing failure? Do you have good perspective-taking skills: do you understand other people’s perspectives on the same problem? What is your analogical reasoning like, beyond a self-assessment? We can run through this list of a few dozen different psychological constructs that were measurable and highly predictive of long-term life outcomes, not just career outcomes: insulin sensitivity, central body mass, all-cause mortality, size of your friendship network, and, yes, lifetime income, lifetime earnings. Think of anything you’d want to be true about you when you’re 65. Walking speed at age 65, maybe my favorite life-outcome variable of all time, is strongly predicted by the psychological construct of purpose: do you have something in your life that’s bigger than you, that would take more than a lifetime to complete? So we looked at all these constructs that, again, we often think of as soft. But then my job as a total nerd is: how do I measure them in concrete, behavior-based ways? How do we measure not just their correlation with long-term life outcomes, but in some cases demonstrate their causal influence?
Vivienne Ming:
And again, not just economic outcomes. And then, finally, because otherwise who cares: are these things changeable? If we’re talking about your child, are fluid intelligence, working memory span, numeracy, literacy changeable? Why do you think everyone says read to your kids? But actually, even better than that, have adult-like discursive conversations with your kids; challenge them. The same with emotional intelligence, purpose, resilience, grit, growth mindset: highly predictive of long-term life outcomes, and, as far as we could tell from our data set, not only changeable when you’re a kid but pretty much changeable all throughout your lifetime. But not easily. This isn’t a lecture; Dr. Ming doesn’t get up in front of 100 students in a lecture hall and say, here are the facts about resilience. So when you’re looking at building meta-learning, foundational, durable skills, what I talk about in the book is, and resilience is a great example.
Vivienne Ming:
The only way to become more resilient is to experience failure. And here’s the parenting lesson, or the leadership lesson: you, as the parent, need to be there and catch them on the other side of that failure, but not walk them to success. Catch them just enough. A rough rule of thumb is about 80%: they can’t always succeed, or they don’t grow, and if they fail too much, the exact opposite happens. It’s like this edge, but you can see it happen. It takes time. This isn’t like a six...
Brian Keating:
I want to ask you about the most strikingly moving part of the book, except for the fact that I want to scream out and say, Vivienne, why didn’t you make a million dollars by creating, you know, Tinder and Grindr and stuff 20 years ago? Because there is a product that you made called Sexy Face. Now, this is the ultimate in AI, or at that time really machine learning, it wasn’t like the GPTs we’re using today, but it was a very early way of utilizing machine intelligence, partnering with humans to make a super cyborg that did good. Not like I would have done, which is to make billions of dollars creating Tinder, as I said. Tell us about Sexy Face, Vivienne.
Vivienne Ming:
I will say, in self-condemnation and self-defense, I have never found a business plan I couldn’t make worse with a little heart. I’ve started many companies. It’s hard. So I’m not going to look down my nose at anyone whose goal is to make an amazing business; that is hard enough as it is. But if you are on a mission, if you truly want to change the world, then that’s what you’re doing. You don’t get to pivot; you don’t get to realize, oh, but if we just made this one compromise. So the chapter in the book is not the original version. In the original version, I offer a deep self-parody, because it starts with me bragging about this amazing idea I had for a product called Sexy Face.
Vivienne Ming:
I’m going to get you laid. And I walk through it: you know, for free, we’ll have you select these faces in this online game. This was back in, like, 2012. Yeah.
Brian Keating:
Explain what it is for the audience.
Vivienne Ming:
And so you pick these faces. Back then we were pulling off of Flickr and Facebook, because this was the wild west and you could just grab data from anywhere, and you’d pick. And our promise was, we’d find everyone you think is sexy for free, and for $5 we’ll find everyone who thinks you’re sexy. So, in my self-defense, I’m not the world’s worst person: this is what you’d call a Trojan horse. Once you did a round or two of this, and the scariest thing is that it worked.
Vivienne Ming:
I mean, forget where AI is today. This was, again, like 2011, 2012 that we were working on this. And then it confesses: actually, this is just a mind-reading game. So pick any face category you want, like Southeast Asian, mutton chops, and a sense of ennui. As long as you’re consistent in how you make the choices.
Vivienne Ming:
In the same game, with 18 random faces pulled out of our data set, it would find the faces you were thinking of. It was sort of a fun machine learning game to play. But even that wasn’t why we did it. We did it because there’s this book, a book produced by the UN, the UN High Commissioner for Refugees. It’s a book with a million photographs in it, and it’s the face of every orphan in a refugee camp somewhere in the world, in this case around 2011 and 2012.
Vivienne Ming:
I don’t mean a million faces in the history of the UN; a million faces that year, kids that were in these camps. And the reason we built this thing and had this phenomenally sleazy game was because people’s choices were training what was arguably a very early version of a deep neural network. They were training our model not about faces, but about how people perceive faces. What’s a happy face? What’s a sad face? What’s a cute face? Because then, what we found is, some uncle, to put the time on this, right, some uncle in Syria hasn’t heard from his sister’s family in years, and he’s fearing the worst. So he drives and drives and drives to a camp across the border in Jordan, and he goes to the refugee camp there. You’ve got to cross a border to be a refugee; otherwise you’re an internally displaced person, and that matters. So you go across the border, and they give you the book, and you start leafing through this book, hoping you don’t blink on the page where maybe your niece is. So we made this thing that you could put on a tablet, and you play a version of the game with all these faces of these kids, and if your niece is in a camp anywhere in the world, in about three to five minutes, you’ve found her.
Vivienne Ming:
And so we did this project, and it was amazing. I’ve gotten to do some truly amazing projects, particularly these philanthropic projects, in my career. And despite where this project ended up and the challenges we experienced with it, and although one of my projects was for my own son, this is about the thing I’m most proud of in my entire career. Not any of my startups, not my scientific publications. It was this project. You cannot imagine the life of an orphan refugee in a refugee camp outside a war zone. The chance to give them back their life again is something, in a very different context, I had for me. But also, that story: I learned how to build AI models of faces here at UC San Diego, on a scientific project funded by the CIA, within the broad swaths of Terry Sejnowski’s labs.
Vivienne Ming:
And then later, I got to use what I learned here at UC San Diego to reunite orphan refugees. And for another project, I got to build a system for Google Glass to help autistic kids learn how to read facial expressions. There are a lot of bleak stories about AI, some of them legitimate and justified, but there are also these amazing human stories of what can be done if we choose to use it that way. One of the cool things we found in our innovation research was that you could take a super-high-performing scientific team that has published groundbreaking research, and the longer they work together, the more they publish, but the less impactful their publications get. One new person joins that team, and just like that, it pops again.
Brian Keating:
You talk about diversity, but it’s not what we mean around here, where we have an equity, diversity, and inclusion office. Talk about what you mean by diversity.
Vivienne Ming:
I mean, I don’t want to shy away from all forms of diversity, but in our case we’re looking at very high-dimensional metrics: psychological diversity, socioeconomic diversity, skill sets. The interesting thing in this research literature is not just that diverse teams were more effective. It was that in a team in which, let’s say, a Nobel Prize winner and an undergraduate intern were working together, when they were truly collaborating, that hierarchy disappeared. For example, in one paper they just looked at turn taking in collaborative documents. In our research, we looked at turn taking on camera: how often was a face the biggest face on the screen? To get a little nerdy, measures of inequality on just that number alone predicted how innovative a scientific team was; the more equitable the turn taking, the more innovative. And so then you combine this wide-definition diversity with radically flat hierarchies, or that turn-taking behavior, or just listening to who has a good idea right now. And what’s interesting is you take that further into an experimental setting and you look at how you incentivize people to listen to one another.
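The "measures of inequality on just that number" she mentions are plausibly something like a Gini coefficient over each participant's share of biggest-face-on-screen time: 0 when everyone gets equal time, approaching 1 when one person dominates. This is a generic sketch of that idea with made-up shares, not the actual metric from the research:

```python
def gini(shares):
    """Gini coefficient of a list of nonnegative values, e.g. each team
    member's fraction of time as the biggest face on screen."""
    xs = sorted(shares)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # mean-absolute-difference formulation over the sorted values
    weighted = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

equitable = gini([0.25, 0.25, 0.25, 0.25])  # perfectly equal turn taking
dominated = gini([0.85, 0.05, 0.05, 0.05])  # one face dominates the screen
```

On these made-up shares, the equal split scores 0.0 and the dominated team scores 0.6, matching the claim that lower inequality corresponds to more equitable turn taking.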
Vivienne Ming:
I will lean into a classic measure, which is gender. A lot of studies find that in heavily male teams, the women often don’t get listened to. There was a great paper out of Stanford several years ago. They used an AI system to read every dissertation published since 1971, 1.8 million of them. And the less represented women were in the field of research, the more innovative and breakthrough the ideas their dissertations brought; but also, what they found was, the less likely those ideas were to be taken up by their peers, and the worse the women’s career outcomes.
Brian Keating:
That would never happen in astronomy.
Vivienne Ming:
Kidding aside, of course not. But the point for me isn’t that somehow one single person carries diversity on their own. Instead: are you building a family, a community, a lab where people are truly bringing something different? So all of that, let’s live it. I have one interview question. You want to be in my lab or one of my companies? Pitch me a mad science project. I don’t ask you anything else. You know this ahead of time. Prepare if you want to.
Vivienne Ming:
I don’t really care. I never prepare anything anymore; I know my research. What is there to prepare? So we come in. It’s not a dissertation defense; I don’t grill you. We take an hour together and we try to figure out how to make it work.
Vivienne Ming:
The only thing I then ask myself is, did we have a better idea together than I would have had by myself? I don’t need you to know how to program. I don’t need you to know about brains or AI. Math is harder to learn later in life, but, by the quirk of my life, I am, if nothing else, proof that it’s a doable thing. What I can’t teach you is to have an idea I wouldn’t have had. That’s the value, especially now that we live in a world where the answers to all of those well-posed problems are in your pocket, essentially for free.
Brian Keating:
Yeah.
Vivienne Ming:
It is what you would uniquely say that makes you valuable. It’s what your child would uniquely say — unique even compared to you — that makes them valuable.
Brian Keating:
It’s inescapable for me not to think about a statement by Einstein. Einstein had a lot of shortcomings, okay? He was a horrible father and husband — married his cousin, abandoned his oldest son, who, it turns out, had mental illness. He didn’t see him for 30 years, even though he was rich and could travel, and he was in Europe many times. Anyway, this isn’t a discourse on Einstein, and we’ve talked about him in glowing, lionizing terms already. But he did ask questions like: what would it look like if I was traveling on a light beam and I looked at myself in a mirror — what would I see? He was very close to his father, and he said it was good that he didn’t ask his father that question, because his father would have given him the answer of the day, which was, by definition, wrong.
Brian Keating:
Right. Because it took Einstein until 25, in his miracle year of 1905, years later, to discover the true answer to that question. So if he had asked that question and gotten a response from the Gemini of his day — which would have been his dad — he would have gotten the wrong answer, by definition. And that really illuminates one of the core theses of the book. If you read just the subtitle — when the machines have all the answers — and stopped there, I would have said the lesson is: ask better questions. But no, it’s not just about asking, because that’s like prompt engineering. And you know what? It’s kind of fatiguing.
Brian Keating:
I can do it for an hour; you’re much better at all this stuff. You have so many actionable things in here. You know, I usually hate cookbooks and recipes, but with this you’re going to learn so much about best practice from one of the gurus, the geniuses, of our time. And you know, I only wish the book had come out earlier — but then if it had, maybe the only way I could be like Einstein is that I’d ask better questions. But I wouldn’t go through the work. Destroying the gray matter is just like destroying your muscle at the gym.
Vivienne Ming:
To build it back stronger. Here, let me ask myself a setup question — it’s one I’ve gotten over the years. I very self-indulgently recently shared a social media post in which I posted a bunch of predictions about AI and the future, and then — twist — those were all dated from around 2014, and I cited the articles they were from. Yes, we live in a world now where the AI can kind of do it all by itself, whereas before someone had to build it. But the fundamentals of what it can do haven’t changed; you just don’t have to be able to build those models yourself anymore. And so a lot of this started when I began thinking about writing this book over 10 years ago.
Vivienne Ming:
And it was really this story — AI will do all the boring stuff so you can have fun — that’s where it started to scare me. So I get this question: Dr. Ming, isn’t this just like the printing press, the typewriter, the calculator, the computer? And no, it isn’t. I actually take a little time on this in the book — This Is Not the Industrial Revolution is the title of one of the chapters.
Vivienne Ming:
Because I can’t tell you how many times I’ve been on a panel, on a stage, with someone who says, ah, don’t worry, this is just like the Industrial Revolution. And I think: do you know anything? There’s a British diplomat who wrote, around 1850 or 1860, that the plains of India were bleached white with the bones of Indian weavers — all put not just out of business but out of life by the British Midlands. Not that it didn’t transform the world. Just like AI: astonishing, powerful, and potentially a huge benefit to the world. But it has consequences. If we zombie-walk our way into a future where it has all the answers and we don’t, what’s the point of us? And I don’t just mean that as “leave space open for humanity.” There is a technology that has already arrived and has had a measurable negative impact on human cognition, and that is GPS. And let’s be clear.
Vivienne Ming:
I use Google Maps all over the world. I love it; I travel a lot, and I’m a navigation native. But what I no longer do is follow its directions. Because I made a prediction about 15 years ago — people were not happy with me for saying this — I said that in 10 to 20 years we would see a statistically significant increase in early-onset dementia. The research is still accumulating, but to date it fairly strongly supports the story that people who use GPS more show worse and earlier memory-decline effects. GPT is the new GPS if, in both cases, you use it as a substitute for you. You don’t need Google Maps to navigate; you need Google Maps to do things you literally couldn’t have done yourself, like instantly knowing that a street is closed.
Vivienne Ming:
That’s Maps. So I actually give one regular lecture a year at Berkeley — they’re smart enough to keep me away from the undergrads — and in it I challenge the students to come up with a better version of Google Maps. This is for an engineering and entrepreneurship course, and what I say is: for any technology, not only should it make you better while you’re using it, you should be better than where you started when you turn it off again.
Brian Keating:
That’s what it means to enhance your life.
Vivienne Ming:
And for someone who does a lot of work in neurotechnologies, that really means a lot to me. I don’t want an implant that does my thinking for me. I want an implant that makes my thinking better. In the true neurotechnology world, first off, we’re mostly talking about people who have Alzheimer’s or a massive stroke. So you’ve lost your language — well, I don’t want to take away your reasoning too. I just want you to be able to speak again. And there are some cool technologies coming up in that space.
Vivienne Ming:
So I don’t want to replace, I want to augment. I don’t want to automate, I want to augment. I don’t want to sell you robots, I want to make you a cyborg. That’s what I’m hoping to do with the book and with my general research. And when we looked at these questions and people’s behavior, we ended up with some shockingly simple answers. I mean, when I pose that challenge to these UC Berkeley, UC San Diego students — I’ve done this here as well — they come up with amazing AI-driven ideas: it gives you clues, it turns navigating through Paris into a game. All of which is great.
Vivienne Ming:
Here’s what I do. Even in Berkeley, which I know like the back of my hand nowadays, I look up how to get where I’m going and I stick it in my pocket, and I think to myself: how can I beat Google there? That way I’ve gotten the benefit of what AI can do that I cannot — collecting all that information and giving it to me in a way I couldn’t have done otherwise. But I’m still thinking about it. I’m thinking: what do I know about Berkeley that the collective wisdom of Google does not? That at this time of day and this time of year, that left turn it wants me to do is not going to work out well. And even if I’m not right, I thought about it, I gamified it, and that’s pleasurable.
Brian Keating:
And you have a reward.
Vivienne Ming:
Even if you’re not right. And to bring this full circle to the question of real innovation: how do we go beyond “oh, there’s some small molecule that we’ve already identified, it’s there in the research literature, it’s just no one’s thought of using it in quite this way,” to “there’s this quirk about photons hitting metal plates” — nothing else, we’d figured everything else out — and suddenly the world is transformed as soon as we go deep there? How do you make that deep breakthrough? I’m not saying AI could never do this, but I am saying that right now it doesn’t matter how many extra parameters we add to existing transformer models or even reinforcement learning models. Model-free, sort of acausal learning is never going to give you this aha moment of “here’s something no one’s ever done before.”
Vivienne Ming:
But I think what is true, and comes right out of this experiment I’ve cited a couple of times now, is that when the AI can bring all of that data to bear on a question, the human can say, “Ah, but what about…” — not ignoring the AI, which unfortunately is what a lot of doctors do, but also not just doing what the AI says, which unfortunately is what almost everybody does — but instead finding that cyborg dynamic: “But what about?” And then it explores the what-about for you. I hate to put it this way because it’s such a cheesy, classic Hollywood example, but I loved the image of Tony Stark talking to Jarvis in Endgame, solving time travel. Not because there was anything realistic about the science behind it, but because he talked to it: run this model for me, deploy this tool. He explored the long tails of possibility.
Vivienne Ming:
It explored the probability density of the likely, real world, and the two worked at it together. That’s the closest thing I’ve seen in big media that really captures what it feels like for me to work with one of these models.
Brian Keating:
Well, you mentioned your tip to people out there in terms of GPS navigation. I want to give one that I use every time I’m in the car with my wife, which is to turn off the voice on the GPS. That way I don’t have two women yelling at me. Okay, sorry, Vivienne, I had to use that. Now we’re coming to the end. You said every scientist is a storyteller. And you said at the very outset that all stories should be an hour or so.
Brian Keating:
Sure. We’re coming up on the hour. Before we land the hovercraft and all, we can’t not talk about these people that are possibly malevolent, possibly malicious. Whether it’s Sam Altman saying, look at the cost to train your kid over 20 years — that’s a lot more than training an LLM — which I find very dystopian and perhaps very sociopathic. I don’t know him personally; I’d love to talk to him. I have talked to Elon Musk on the podcast.
Brian Keating:
Well, he’s sort of the prominent person working on things that are, I can’t imagine, not of interest to you — from neural implants to the Optimus robot. He already has fleets of cars with AI collecting data about us and about the world. What are your thoughts on him? Not on his personal life — we already talked about evil, bad fathers, and I don’t want to get into that, though I actually did confront him about one of his kids on the podcast, which was remote, about a year or two ago. But tell me, what are your thoughts about embodied cognition? It seems like that will be the next frontier.
Vivienne Ming:
Let’s be clear: the questions that interest him clearly interest me too. I’m not going to have a lot of positive things to say about Elon Musk as a human being. And Peter Thiel is like a Bond villain — but a Bond villain who is an actual threat, because he knows how to make startups work. But just dismissing people because you disagree with them, apart from being the exact opposite of what I argue for in my book, means you’re throwing out whatever powerful insights they’re actually bringing into the world.
Vivienne Ming:
You’re right: Einstein was not a perfect person, and Richard Feynman was not a perfect person. None of us are. In fact, my next book will be titled Small Sacrifices, and the subtitle is The Science, Economics and Story of Purpose. One of the anchors of the book is an experiment in which I got everyone to do things that they had already said were morally wrong, and another in which we found that people helping one another for no obvious benefit actually predicted the helpers’ life outcomes, their health outcomes, and even their economic outcomes. Understanding the economics of that, and why it doesn’t match with liberal economic theory, is worth doing. So we are like some weird quantum-mechanical superposition of the worst version of ourselves and the best version of ourselves.
Vivienne Ming:
All of these versions — there are no…
Brian Keating:
…single-edged swords out there.
Vivienne Ming:
And the right context will bring any of that out. Which is why, in the book, what I’m arguing for is: use the AI to create the context that makes you the best version of yourself. I don’t think Elon Musk is the best version of himself. I note some fans of his have gone from calling him the smartest man in the world to the smartest industrialist to the greatest salesman, and I can’t deny any of those. And neurotechnologies — I love them, but I think BCIs are maybe the most boring form of neurotechnology. Like, I can type without my fingers.
Vivienne Ming:
All right, yeah — that. But there’s so much more we can do that is so exciting in this world. If you haven’t looked into research in this space, it’s fascinating. And if I want to know what his companies in particular are up to, I just wait a month; he fires everybody and I can go ask the people who used to work there. On that question, I was told a wonderful story by Ed Catmull. Ed may have retired by now, but he’s the founder of Pixar and was its longtime CEO and president, back from when it was a supercomputing company before it slowly morphed. Steve Jobs owned Pixar. He made money from Apple, but he made his money from Disney buying Pixar.
Vivienne Ming:
And Ed had this amazing story — I’ve heard all these stories; I met Elon very briefly once, but I know how his places work. I wish he were a better person, because I admire so much of what he’s trying to build and aspects of how he runs his companies. But here’s one thing I do not admire, and it’s this contrast between Steve Jobs and Elon Musk. Ed had this story. He wrote what I think is one of the best books about the business of creativity, which is Creativity, Inc.
Vivienne Ming:
It’s really good. It has a little bit of the whiff of Silicon Valley.
Brian Keating:
A hagiography. Of Jobs.
Vivienne Ming:
Nonetheless, I got to talk to him. He’s writing a new book, and we chatted about it, and he shared these stories about Steve. He said there were two ways to get fired from the board of Pixar — because it was at-will for Steve Jobs, since he effectively owned the company: agree with him too much and agree with him too little. It was horrible. You know how many times I’ve heard the story of people who worked directly with Steve saying, “I would say things, and Steve said, ‘You’re wrong. You’re an idiot. You’re fired.’”
Brian Keating:
Right. He wasn’t gentle.
Vivienne Ming:
You know, they’re in the Caribbean the next week, just relaxing, and their phone rings, and Steve says, “You know, I’ve thought about it. You were actually right. Come in, we’ll chat about it this afternoon.” And they say, “Well, Steve, I’m in Saint Barts — I can’t.” He says, “Well, you can chat about it with me this afternoon or never again.” So he’d admit that he was wrong and still be an asshole about it.
Brian Keating:
But.
Vivienne Ming:
But he’d admit that he was wrong, and he’d populate his boards with people who would disagree with him the right amount. That’s not what Elon does.
Brian Keating:
Okay, so we’ve talked about the book many times, but we haven’t done what we’re supposed to do in a Bayesian framework, which is to judge a book by its cover — which they tell you not to do. But who are “they”? I don’t even know who they are, but they know who they are.
Vivienne Ming:
Hey, book lovers, we’re judging books by their covers. We know we’re not supposed to do it, but here on Into the Impossible, there’s nothing to it. Let’s take a look and judge some books.
Brian Keating:
Vivienne, take us through the title, the subtitle, and this lovely, colorful cover art.
Vivienne Ming:
I mean, I think the one thing you can immediately take away, even before you start reading the words, is the color scheme: black and yellow. Clearly, if this were an evolved thing, it’s almost certainly poisonous. You shouldn’t eat this. It might have a bad sting.
Brian Keating:
It is good food for thought, though.
Vivienne Ming:
Yeah. So there’s other ways to consume it.
Brian Keating:
Although if they eat them, they’ll buy more books, you know.
Vivienne Ming:
All right. I don’t want to dissuade you from making a salad out of my book, but I don’t think that’s the best way to make use of it. I’m always terrible with dates — let’s call it 2014. I got invited to the Obama-era Department of Education, pre–Linda McMahon —
Brian Keating:
yeah,
Vivienne Ming:
The Department of Education — for those of you viewing this today, that was this thing that used to exist. So I got invited there, and the minute I walked in the door, I was asked, “Dr. Ming, how do we robot-proof our kids?” The first thing I thought was: that’s going to be the title of a book someday. Although I will say, my publisher thought that sounded a little too much like a parenting book, so it became the title of a chapter instead: How to Robot-Proof Your Kids. And anyone who’s ever run a company knows: being a parent and being a CEO aren’t wildly different exercises, if you’re really invested in getting the most out of your employees.
Vivienne Ming:
So we made this change, which I’m actually fond of — not because I didn’t like How to Robot-Proof Your Kids, but because the subtitle here, when machines have all the answers, build better people, echoes something. My brilliant students have all the answers. They know everything about their research question. So why am I there for an hour a week answering their questions? Because they understand nothing, and I’m trying to get the understanding across. Well, my experience, when I first used BERT and then a variety of other models over the years, is that AI knows everything.
Vivienne Ming:
It knows everything — and increasingly, hallucinations are less and less of an issue — but it understands nothing. We can get very nerdy about why I say that in a hard way, like the difference between model-based reasoning and model-free reasoning and what’s going on inside these machines. But let’s be super clear: large language models, agentic AI — it is intelligent. It’s intelligent in ways we are intelligent, but we have other forms of intelligence, if you will, that we are able to leverage and it isn’t.
Vivienne Ming:
Which in a way is great, because now machine intelligence and human intelligence become very complementary. We bring something it lacks; it brings something — all the facts — that we lack. Putting the two together does this amazing thing. So: when machines have all the answers, build better people. I was getting the initial results of my experiment showing that it was human capital, not AI benchmarks, that predicts cyborg hybrid intelligence, right when they were asking whether we could have a broader, more inclusive title for the book. So I pitched this one to my publisher and they loved it. And I thought, well, I do have this story.
Vivienne Ming:
What it means to be a parent and to care about kids, and what world we’re leaving for them that they’re going to have to build. I also wanted to be clear: I’ve spent nearly 30 years now building these models. I’ve gotten to do amazing things with them. I believe in what AI can do if we choose to use it in that way. And we also changed the cover art. The original cover art was a person standing over a destroyed army of robots, and I really wanted to make it clear that AI doesn’t have to be the enemy of this story.
Vivienne Ming:
I think the enemy here — and I’ll be pointed about this — is that we have spent, over the last couple of years, trillions of dollars now in aggregate training machines that do not need us: building AIs based on benchmarks that don’t involve a human being at all. How well can they solve Math Olympiad problems? How well can they answer every question off of every bar exam and every medical licensing board exam? That’s not valueless, but what value does it hold for us? Where is the benchmark — or the story for CEOs — about getting more value out of the AI? OpenAI and Google and Anthropic, who I think deserve some credit right now because part of what they’re doing is publishing a lot of very realistic research about the results of their own models, are still selling you this idea that AI will do all of the boring work for you and then you get to do the fun stuff, the painting and the poetry. That is 100% not what we see in our experiments. In our experiments, those cyborgs — the humans and the AIs — are doing it together: the boring stuff, the fun stuff, the creative, the routine. They each just bring their relative strength, the well-posed versus the ill-posed. And I’m worried, again, that we’re selling the story that we’re going to sell you an army of minions, and it will do everything you want — you just have to sit back and do nothing.
Vivienne Ming:
That is a truly terrible story for the future.
Brian Keating:
You know, I’m not as broad and discursive a thinker as you; I only care about venal things, like how I can get more grant funding. But no, seriously: how can we improve physics? That’s my main quest with AI, and I’ve asked this of many people. In 1907, Einstein had what he called his happiest thought — he said it was a thought that titillated him unlike any other he’d ever had — and that was that a person in free fall would experience no gravitational force. And I’m like, okay, that’s really interesting on a couple of different benchmarks. Because, A, can a robot have a happiest thought? And then, B, can it —
Brian Keating:
Even an embodied robot — I don’t even know what to call it — can it experience the sensation that you and I have had thousands of times, and it’s always better with your kids? You go over a roller coaster; you go over a bump in the car — isn’t that amazing? Can it have the sensation that led to this breakthrough, Einstein’s equivalence principle, which paved the way from special relativity to general relativity to the cosmological explosion that we have today? Can it do it? I see no evidence. To me, the real test is not the Turing Test — I call it the Keating Test, because I like to name things, like the Keating Medal.
Brian Keating:
Hold up — the Keating Medal, Vivienne, which you are the latest recipient of, which comes with a magnet and a meteorite. There’s Arthur C. Clarke on the front and a picture of the Monolith on the back. But, Vivienne, to me the real test is not the Turing Test — it seems to me we crushed that years ago, probably thanks in part to your work. But it’s this.
Brian Keating:
First of all, can AI reconstruct — recapitulate — physics that was once cutting edge? Say, Einstein’s theory of general relativity from the orbit of Mercury, which was known well before Einstein explained it — so he kind of retrodicted it. Can it redo that, A? And then, B, can it do something new? Can it predict, “oh, there’s a fifth law of nature, and it’s sitting right there”? That’s what I call the Keating Test. Evan and I are working on a paper, and Demis Hassabis also came up with this test a couple of weeks ago, you know, after me. Maybe he’s watching these videos.
Brian Keating:
I brought it up three years ago; Evan and I have been working on it for a while — got to publish that, Evan. But can it come up with novelty? Can it do that inscrutable physics — the ill-posed questions? I don’t know. Are you optimistic?
Vivienne Ming:
So, interestingly enough, my latest venture — my 13th company, if you toss in all the nonprofits as well; we do our philanthropic work through the Human Trust — is the Possibility Institute, and we’re working on metascience: understanding innovation itself. There’s a lot of funding for AI in science right now, as I’m sure you appreciate, and what I’m going to describe is truly valuable and highly worthwhile. But almost all of this funding is: let’s have these agentic models read every pharmacological study ever published and look for, essentially, things we already know but haven’t realized yet. Like, ooh, there’s this really interesting drug interaction — if you’re a middle-aged Black male and these drugs interact in these interesting ways, we see a reduction in your cholesterol levels. That’s hugely economically valuable, because you don’t have to rerun all the studies. But has it discovered anything? Well, we sort of get into a complex epistemological question there. What I’m interested in is what you are getting at: the photoelectric effect.
Vivienne Ming:
Special relativity, general relativity — like, this guy did it three goddamn times. And not that no one else was thinking about the questions when Einstein had his moments, but he was the one. And that idea, which I inherited from my own advisors: science is a story. He figured out how to turn his insights into stories such that anybody — well, you, of course, but anybody — could go through his thought experiments, about those elevators and all the rest of them, and understand.
Vivienne Ming:
Oh, wait a minute — what if time varied with elevation, and you were up here and someone was down there and time dilated? You’d never understand the math behind it if it weren’t your field of research, but you could get what he was talking about. And you know, I am not the world’s most intelligent person. I thought I was supposed to be when I was a little kid, and I thought I was supposed to win Nobel prizes. And when it became very clear that that wasn’t true, I gave up on life.
Brian Keating:
You got the Keating Prize.
Vivienne Ming:
Come on — boy, did they. Yes, I’ve got my own. Well, I’d much rather have the monolith here in front of me and tell poems about trees. But to be able to actually understand where truly Schumpeterian, disruptive innovation — in science and beyond, in policy, in culture — comes from: overwhelmingly, it comes from people who are different and have different ideas. Despite this great-man idea, that isn’t how any of this stuff runs, even among people who have amazing breakthrough ideas. The way I run my labs — and forgive me for in any way implying I deserve to be in that firmament — it is hard working for me. Not because I’m mean.
Vivienne Ming:
I often describe it as like an episode of House: I need my employees to pitch me ideas. Say we’re going to change the education system in Kenya. They did everything the World Bank says; their international test scores, their PISA scores, go up every year — and yet no one’s hiring Kenyans on the global talent markets. Dr. Ming, what’s wrong? We need to change this. I need you to pitch me ideas, even if you’re fresh out of university and your whole career has been giving the right answer to test questions and making your professors happy. That’s not what I’m after.
Vivienne Ming:
I need you to have the courage to tell me that I’m wrong, and the guts to pitch an idea that maybe even you think probably isn’t right. Not because I need you to be right — I need us to be less wrong. I need us to explore all the possible ideas so that we find that amazing transformational idea: not the one that’s in the zeitgeist right now, not the one that’s in the self-help books, but something that sounds crazy — except maybe 10 years from now someone will make a discovery and it turns out, oh wow, they were right all along. That’s my job. And I pay for all my philanthropic work myself, so I’ve got no vested interest in this other than making a positive difference in the lives of these Kenyan school kids, or the orphan refugees, or my diabetic son. I just want us to make progress.
Vivienne Ming:
And to make progress, you have to be wrong. If you’re relying on the right answer from AI, or the right answer from your peers, or even just from your own internal voice — this thing that I worry about with Elon — then we’re never going to break through to those amazing transformative answers that no one was ever thinking about before. Another lovely compliment someone gave me is, “I just want to follow you around and catch the sparks coming off.” But those sparks happen because we get together with people who are different enough that it’s challenging — not that our trust is broken, but that it’s challenging. That my students have the courage to say, “Vivienne, I think you’re not right on this one,” and the resilience, when I explain to them in detail why I am right, to push through and say either “You know what, you’ve convinced me” or “I still think I’m right — let me try this anyway.” My dissertation — wildly braggy, but I’m going to offer it because it’s the best example I have from my personal life.
Vivienne Ming:
My dissertation, which ended up being a Nature paper — I was explicitly told by everyone on my committee, including my advisor, not to do it. And I simply went and did it anyway, behind their backs. And it isn’t because I’m a genius; the work was inspired by many other people. It was simply that I saw the problem differently than they did. They were, as has been well proven over their careers, geniuses. I was just coming at this differently; I could see it differently.
Vivienne Ming:
That’s what I’m trying to suck out of the minds of the young people in my labs. So I want that. I want that for every single kid.
Brian Keating:
Well, Dr. Vivienne Ming, proud alumna of UC San Diego, we’re so glad that you came back, came down from the Bay Area, and congratulations on this book. As I said, it’s a parenting book in disguise, but it’s an extremely important book for the time that we’re at. I find I’m learning a lot talking to AI experts — the Yann LeCuns and the Anils and the Swamis — but more and more I’m getting inspired by people like Rebecca Newberger Goldstein, who was just on talking about meaning and searching for meaning. I joked that her new book should have been called Woman’s Search for Meaning, but it’s called The Mattering Instinct. And this book is about what is going to make us uniquely human, and what the ultimate need of a human being is.
Brian Keating:
Frankl said it was really meaning — and without that, what are we? But with only that, what will we be capable of? So I think it’s partnering with the robots. And I think you’re right: we’re trying to create almost a new species of cyborg. Maybe they will be benevolent to us and follow Asimov’s laws of robotics, but maybe we will define what they are, and what we are, in the future. So thank you so much for this book. I cannot recommend it highly enough.
Vivienne Ming:
It was such a pleasure to be here. I’ve got my own dangerous yellow thing here. This was a blast.
Brian Keating:
Thank you so much, and join us next time on Into the Impossible for more great conversations. Thank you, Vivienne. Vivienne just showed us that the smartest AI move is refusing to give you the answer. And if that changes how you think about intelligence, hit subscribe, drop a comment and a like, and let me know: what’s one question you’d want a Socrates-level AI to push you on? If you want more on what makes humans irreplaceable, check out my conversation with Max Tegmark — it’s linked right here. Come on, click it.
Brian Keating:
Prove that you’re not a robot.