Is It Possible To Create A Mind?

IRA FLATOW, HOST:

Up next, what is intelligence? What is thought? What does it really mean to have a mind? And if we can answer those questions, is it possible for people to reverse-engineer the process and build an artificial mind? Sure, there are things like Siri, which can understand enough of your question to pull up directions to the restaurant, and there's IBM's Watson, which took on human contestants in a game of "Jeopardy!" and won. But how to jump the gap from those to something everyone would agree is truly intelligent?

Joining me now is Ray Kurzweil. He is a pioneer in the field of artificial intelligence; author of many books, the most recent being "How to Create a Mind: The Secret of Human Thought Revealed," just out from Viking. He joins us from our studios at WBUR in Boston. He's not shy about sharing what's - I was about to say, on his mind. Welcome back to SCIENCE FRIDAY.

RAY KURZWEIL: Yeah. Great to be with you again, Ira.

FLATOW: What is - what question do you think is the biggest question about our brain, the one that is the most puzzling part?

KURZWEIL: Well, the most important question is, how does the neocortex work? That is where we do our thinking, and we can now actually see inside our brains with enough precision to see our brains create our thoughts, but also our thoughts create our brains. And that's part of the secret of human thought revealed: we actually create this grand hierarchy in the neocortex that reflects our ability to recognize patterns, our thoughts, our memories. And I talk about what we know about how that works from a series of thought experiments that I guide the reader through, the latest neuroscience research, and also work in artificial intelligence, including my own work, which at least gives us ideas of how these techniques might work when we see things that work in A.I., as in some of the examples you mentioned, like Watson.

FLATOW: Is it possible to reduce everything in the brain to understanding a series of circuits and chemical reactions?

KURZWEIL: Well, I do have a thesis about how the neocortex works: that we have this repeated module - and we have about 300 million of them - and each module performs some actually quite complicated tasks. It can recognize a pattern. It can recognize it even if the pattern is partly obscured or modified, and it can then also wire itself, literally with actual, you know, interneuronal connections, to create a hierarchy. And we have this very grand hierarchy, from recognizing very simple things at the low level, like edges of objects or the crossbar in a capital A, up to the highest level, like, oh, she's pretty. That's ironic. You know, that's funny.

And it's all organized in one hierarchy, and we create that hierarchy ourselves. And the neocortex is able to build that hierarchy. And I talk about how that works, and this runs a little bit counter to one idea in neuroscience, which says that the neocortex is highly specialized. You get this little region, the fusiform gyrus, that recognizes faces, and V1, which the optic nerve feeds into, recognizes low-level visual objects. And I cite a lot of research, much of which is very recent, showing not only the plasticity but the interchangeability of these regions.

One startling result is what happens to V1, which is, you know, generally recognized as handling very low-level features, like edges of objects in visual images - what happens to it in a congenitally blind person? Does it sit there just doing nothing, hoping that visual images will come in? What actually happens is that it gets harnessed by the frontal cortex to recognize high-level concepts in language - the completely opposite end of the spectrum in terms of complexity.

FLATOW: All right. We're going to come back and talk more with Ray Kurzweil, author of "How to Create a Mind: The Secret of Human Thought Revealed." 1-800-989-8255 is our number. Stay with us. I'm Ira Flatow. This is SCIENCE FRIDAY from NPR.

This is SCIENCE FRIDAY. I'm Ira Flatow. Of course, we'll keep you up to date on any of the latest developments coming out of Newtown, Connecticut, where there has been a report of a shooting spree with multiple deaths. We'll keep you up to date on that, and, of course, a full story on ALL THINGS CONSIDERED later in the day. We're talking today with Ray Kurzweil, author of "How to Create a Mind: The Secret of Human Thought Revealed," just out from Viking.

Our number, 1-800-989-8255. You can tweet us, @scifri, @-S-C-I-F-R-I. Ray, do you think we're getting to the point where we might be able to digitally back up our brain, you know, like to make a backup of it onto some digital equipment?

KURZWEIL: Well, you know, that's the far-end projection. Where we are today already is quite stunning. You mentioned Watson. What people don't often realize is that Watson didn't get its knowledge by being programmed fact by fact in some computer language, like LISP. It actually read Wikipedia and several other encyclopedias and is able to deal with the ambiguities and subtleties and vagaries of human language. And very soon, search engines will be able to read all these billions of webpages and other documents for content and meaning, and they're going to get more and more intelligent.

Ultimately, we'll put them inside our bodies and brains. They'll be an extension of our thinking. They are already. I feel like a part of my brain went on strike during that one-day SOPA strike, when I couldn't access these resources like Wikipedia and Google. Technology is getting smaller and smaller. That's another exponential trend. They will ultimately be small enough to go into our bloodstream, and we'll be able to noninvasively introduce computation, augmenting our immune system to make us healthier, going into our brains, and providing gateways from the brain to the cloud, just as your phone is actually a gateway to the cloud, and we'll be able to do that directly from our brains.

To access the information that's in our brains - and you're right, there is information there. Our memories, our skills, our personality are represented by these patterns in each of these 300 million modules, by the connections between them that we create with our own thoughts. And that information is not backed up today, and it's real information. It's not just a metaphor. We will be able to capture that. That's more like a 2040s scenario, so that's not very soon, but that is where we're headed.

FLATOW: Well, what are we going to take - what kind of breakthrough in technology will we need by 2040 to be able to make that happen?

KURZWEIL: Well, we'll need the ongoing shrinking of technology. So this computer I'm holding in my hand, which is my cellphone, is 100,000 times smaller than the computer I used as a student. We're shrinking technology at a rate of a factor of 100 in 3-D volume per decade. So go out to the 2030s, certainly by the 2040s, these very powerful computerized devices will be the size of blood cells, and there will be many applications, as I said, to augment your immune system and to go inside the brain.
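(As a rough sanity check of those two numbers - a sketch added here for illustration, not something from the broadcast - the stated rate and the 100,000-fold figure can be related with a one-line calculation.)

    # Hypothetical back-of-the-envelope check (not from the interview): at the
    # stated rate of a factor of 100 in 3-D volume per decade, how many decades
    # does a 100,000-fold size reduction correspond to?
    import math

    shrink_per_decade = 100        # stated volume reduction factor per decade
    total_shrink = 100_000         # cellphone vs. student-era computer
    decades = math.log(total_shrink, shrink_per_decade)
    print(decades)                 # -> 2.5 decades of shrinkage at that rate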

Interacting between electronics and your neurons is already being done. Parkinson's patients have computers connected into their brains. They can actually download new software wirelessly from outside the patient to the computer connected into their brain. That's today. Now, that's not blood-cell size today. It's pea size. It's pretty small, but it does require surgery. So we need a shrinking of technology, and we also need better models and better emulation of how the brain works. And I talk about where we are - the current status of that - which is actually much further along than people realize. We actually have enough information to make an intelligent statement about how the neocortex works.

FLATOW: And in a nutshell, could you tell us about that? How does the neocortex...

KURZWEIL: Absolutely. You know, there's been a tendency in neuroscience to talk about the specialization...

FLATOW: Right.

KURZWEIL: ...of the neocortex, as I mentioned earlier, but there's actually a shared algorithm - one of the recent research studies shows that there are these modules that are repeated over and over again in the neocortex. There are about 300 million of them. They're about 100 neurons each. And the connections and the structure within each module don't change, but the connections between the modules are completely plastic, and we create those with our own thoughts. Now, each of these modules can recognize a pattern, and it gets its input from other modules, so they're organized into the hierarchy.

So, for example, I have a bunch of pattern recognizers that recognize the crossbar in a capital A. And they've learned to do that. I wasn't born with that, but I learned it in my life. They will signal a high probability - ah, there's a crossbar of a capital A there - and send it up to a higher level, and at the higher level, the pattern recognizer might go, oh, there's a capital A. It sends up a high probability to a higher level, and the pattern recognizer there might go, oh, the word apple. In the region of my auditory cortex, a pattern recognizer might go, oh, somebody just said the word apple.

Go up another 10 levels, and it's now at a high enough level of the hierarchy to get input from different senses and might see a certain fabric, smell a certain perfume, hear a certain voice and go, aha, my wife just entered the room. Go up to the highest level, and you have pattern recognizers that go, that's funny. You know, that's ironic. She's pretty.
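(A minimal sketch of the kind of hierarchy being described - purely illustrative, not Kurzweil's actual model or code; the class, names and thresholds below are invented for the example. Each module listens to the modules below it and passes a probability upward when its pattern appears.)

    # Illustrative sketch only: each recognizer combines evidence from the level
    # below and signals upward when its pattern is sufficiently probable
    # (the "crossbar -> A -> apple" example from the conversation).
    class PatternRecognizer:
        def __init__(self, name, inputs=(), threshold=0.6):
            self.name = name
            self.inputs = list(inputs)    # lower-level recognizers this module listens to
            self.threshold = threshold    # how strong the combined signal must be to fire

        def probability(self, evidence):
            # Leaf modules read raw feature evidence; higher modules average their inputs.
            if not self.inputs:
                return evidence.get(self.name, 0.0)
            return sum(r.probability(evidence) for r in self.inputs) / len(self.inputs)

        def fires(self, evidence):
            return self.probability(evidence) >= self.threshold

    # Low-level strokes feed a letter-level module, which feeds a word-level module.
    crossbar = PatternRecognizer("crossbar")
    left_stroke = PatternRecognizer("left_stroke")
    right_stroke = PatternRecognizer("right_stroke")
    letter_a = PatternRecognizer("A", [crossbar, left_stroke, right_stroke])
    word_apple = PatternRecognizer("apple", [letter_a])   # a fuller hierarchy would add P, P, L, E

    evidence = {"crossbar": 0.9, "left_stroke": 0.8, "right_stroke": 0.7}
    print(letter_a.fires(evidence))          # True: the letter-level module signals upward
    print(word_apple.probability(evidence))  # 0.8: the signal passed up another level

In the actual neocortex the connections are of course learned and far richer; the sketch only shows the upward flow of probabilities from simple features toward higher-level concepts.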

I talk in the book about a brain surgery on a girl. She was conscious, which you can be because there are no pain receptors in the brain. And whenever the surgeons triggered a particular spot, she would laugh. And they thought they were triggering some laugh reflex, but they quickly realized they were actually triggering the perception of humor. She just found everything hilarious whenever they triggered this spot. You guys are so funny just standing there, was a typical comment, and they weren't funny. So they had found a spot - and she obviously has more than one - but they found one that, you know, represented the perception of humor.

And these pattern recognizers - you might think that the high-level ones must be much more sophisticated than the one that recognizes, let's say, a crossbar in a capital A. They're actually the same, except that these high-level ones sit at the top of the hierarchy. And we build that hierarchy ourselves, and we build it one level at a time. So I've got a one-year-old grandson, and he has successfully laid down several levels of this hierarchy. And we can see babies and children and adults learning, well, sort of, one conceptual level at a time.

One thing is that we fill up those 300 million pattern recognizers by the time we're 20. And so to learn new things, we have to actually learn the art of forgetting. Part of that is just to get rid of redundancy. I mean, you start out with thousands of recognizers that might recognize a crossbar in a capital A. You don't need that many, so you can give up some of the redundancy to learn new material. But some people are better at that than others.

FLATOW: Let me go to the phones. We got lots of people who want to talk to you. Let's go to Ricky in New York City. Hi, Ricky.

RICKY: Hi. Thanks for taking my call. So I was wondering, to integrate technology so closely with ourselves, it'll probably have to play nicely with people. So why do you think that artificial - when artificial intelligence is developed, it'll continue to kind of play a protagonistic role instead of an antagonistic one like in the movies "Terminator" and so forth?

KURZWEIL: Well, for one thing, it's not one thing. And you're not going to get a form that asks, I want to be enhanced with artificial intelligence - yes or no. There are going to be a million choices. You have a million choices today just on iPhone apps. So there'll be very conservative things - nanobots with AI that protect you from disease or improve your memory - that are very well established. Other things will be more experimental and more cutting edge. So there'll be many different choices.

And this is not just the future. I mean, we are much smarter already with the brain extenders we have. Most of them are, you know, hanging from our belts and in our hands and not in our bodies and brains, although some of them are. And they're very widely distributed. They're not just in a few dark intelligence agencies of, you know, nations. There are six billion cell phones. There's a billion smartphones. A kid in Africa with a smartphone has access to more knowledge and more intelligent search capability than the president of the United States did 15 or 20 years ago. So that gives me some comfort that these technologies are very widely distributed, and they're, you know, evaluated by markets. Some things succeed and some things don't; they really need to meet human needs to be acceptable in the market.

FLATOW: Yeah. Let's go to Dave in Sterling Heights, Michigan. Hi, Dave. Welcome to SCIENCE FRIDAY.

DAVE: Hi. Thanks for taking my call.

FLATOW: Go ahead.

DAVE: I just want to know about, you know, the concept of emergent phenomenon theories about how consciousness can emerge from matter, you know? The way I understand it is, you know, putting together enough complexity, consciousness will emerge from something such as the human brain. I just wonder if there's anything beyond that explanation because it seems like that's just kind of a label and, you know, how - as far as our science, you know, there doesn't seem to be any real explanation on how consciousness (unintelligible)...

FLATOW: How can consciousness be transferred to a machine is, I think, part of his question.

KURZWEIL: Well, you bring up a great question. It actually goes beyond science. I talk a lot about three grand philosophical issues in the book: consciousness, free will and identity. And ultimately, they cannot be resolved by science alone, which is to say there's no falsifiable experiment you can run to absolutely prove that this entity is conscious and that one isn't. You can't build a machine where some green light goes on - OK, this one's conscious - that doesn't have some philosophical assumptions built into it, and different philosophers would have different assumptions there. We have some shared assumptions, but we disagree, for example, on the consciousness of animals or of unborn babies. And this underlies a lot of, you know, social controversy.

FLATOW: So it's all the more of a problem deciding whether a machine is conscious.

KURZWEIL: Yes, and that's going to be a fundamental issue. I will say that, I think, machines will emulate human intelligence better than, say, animals do. And we will - my own sort of philosophical leap of faith is that when we encounter an entity that really is convincing, that it has the subjective state that it claims to have - so you can meet a character in a video game today that claims to be angry at you and you don't really believe it because it doesn't have all the subtle cues associated with that. But part of my prediction is by 2029, we will meet very convincing artificial intelligences who have mastered this level of emotional intelligence. And when we believe them, we will accept their consciousness. That's my prediction and that's my own personal belief.

FLATOW: Mm-hmm. Or it might be something closer to, when you see it, you'll know it.

KURZWEIL: Well, that's another way of saying the same thing, right?

FLATOW: Yeah. Yeah. Let me just let everybody know that this is SCIENCE FRIDAY from NPR. I'm Ira Flatow talking with Ray Kurzweil, author of "How to Create a Mind: The Secret of Human Thought Revealed." You've talked before about how advances in technology and biology will push humanity towards something called the singularity. What is it - what is that, for people who don't know, and how does the brain science fit in?

KURZWEIL: Well, there are three grand revolutions, all of which have to do with information, all of which are progressing exponentially: biotechnology, nanotechnology and artificial intelligence. And the most important one is AI, because intelligence is the most powerful force in the world. It allows us to reshape our lives and our world to solve problems. We also create new ones. We are going to be able, in my view, to, you know, fully emulate human intelligence and then actually make it more powerful, just the way Watson has - even though it actually, today, doesn't read as well as you or I, it can read 200 million pages, remember it all and have total recall.

And so the tremendous scale of computers can be applied to human intelligence and actually go beyond it. Ultimately, AI will be able to access its own design and improve itself and make itself even smarter. In my view, we're going to merge with it and become a hybrid of biological and non-biological intelligence. So that's one view of the singularity, and we borrowed this metaphor from physics. The metaphor is really the event horizon, which is hard to see beyond because the change is so transformative.

But you can think of it as a singular change in human history when we make ourselves, you know, a great deal more intelligent than we are today by merging with the intelligence that we're creating, which in turn will be based on these biologically inspired methods that we learn from reverse-engineering the brain.

FLATOW: Let me get a caller before we have to go. Douglas in Boise, Idaho. Hi, Douglas.

DOUGLAS: Hi. It's a great honor to be on with you and Ray. Hi, Ray. How are you?

KURZWEIL: Great.

DOUGLAS: I - listen, I've really researched Ray's ideas, and I want to say that in March of this year, Forbes Magazine printed an article saying that almost all of Ray's predictions for 2009 were inaccurate. And I personally feel that Ray presents - Ray, you present a dystopian idea of the future that most people would object to. And I think it's easy for transhumanists to say, we are talking about such high intellectual matters. We are the torch carriers of the future. We don't have to listen to people who are objecting to our ideas, who are probably less intelligent than we are.

And I think that there should be a broader discussion about the theological implications of Ray's ideas. I'd love to hear a TED talk with Ray and maybe somebody who disagrees with Ray, so that we as a society can evaluate the transhumanism that Ray is actually promoting and see if that's really what we want to head towards.

FLATOW: Thanks for the call. Ray? Got a, uh...

KURZWEIL: Well, I mean, just to set the record straight, I published a 150-page paper on the 2009 predictions that I made in the 1990s in my book "Age of Spiritual Machines." Eighty-six percent of them were correct. An example of one that was wrong is that we would have cars that could drive themselves - which, in fact, we did have, but I count it as wrong because you can't really buy one. If you just Google how my predictions are faring - you can also put in the word Kurzweil - you can get that essay, which goes through each one of the predictions in great detail. And I'm not familiar with that Forbes article. But the reviews of those predictions have, in fact, been very positive and consistent with that analysis.

FLATOW: What about your attempts to live forever? Do you think that's going to come through?

KURZWEIL: Well, the idea is that we'll be adding more time to our life expectancy than is going by. We'll get to that point, by my calculations, in about 15 years. It's kind of a tipping point. It's not a guarantee. But as more time goes by, we have more advances, particularly as we figure out the information processes underlying biology. And we'll get to a point where we can actually access and back up the information that comprises our bodies and brains. It's not a metaphor to say there's information in our brains. And right now, that's not backed up. We ultimately will be able to do that. That's a 2040s scenario.

FLATOW: All right. We've run out of time. If you want to hear more about what Ray is talking about, please read his book. Ray Kurzweil is author of "How to Create a Mind: The Secret of Human Thought Revealed." Ray, it's always nice talking to you. Thank you and have a good holiday season. Thanks for coming on SCIENCE FRIDAY.

KURZWEIL: Yeah. You too. My pleasure.

FLATOW: You're welcome.

Transcript provided by NPR, Copyright NPR.