E63

Susan Schneider joins the show to challenge our preconceived notions of consciousness and whether machines can achieve it.

Hosts
Paul Matzko
Tech & Innovation Editor
Aaron Ross Powell
Director and Editor

Guests
Susan Schneider

Aaron Ross Powell was the director and editor of Libertarianism.org, a project of the Cato Institute.

Susan Schneider is currently the NASA/​Baruch Blumberg Chair of Astrobiology at the Library of Congress. She writes about the fundamental nature of the self and mind, especially from the vantage point of issues in philosophy, artificial intelligence, and astrobiology. The topics she has written about most recently include radical brain enhancement, spacetime emergence, superintelligence, the nature of life, whether minds are in some sense programs, panpsychism, and the nature of persons. She has been featured by The New York Times, Science, Nautilus, Smithsonian, Discover, PBS, The History Channel and more.  

People may not really know what artificial intelligence is but they are convinced that it will either utterly destroy humankind or lead us into a utopian Singularity between man and machine. But, as philosopher Susan Schneider reminds us, there’s much we don’t know about artificial intelligence, including the nature of consciousness itself. And consciousness, while it may be hard to identify, entails significant ethical obligations, a point that any fan of the HBO show Westworld will quickly grasp. These kinds of questions have been the object of philosophical debate for millennia and it’s a line of inquiry that we should understand before, and not after, we program the first conscious artificial intelligence.

What does it mean to design a mind? What is the ‘problem of other minds’? Why does the consciousness of AI matter? Can machines be conscious? Do you think androids are conscious? Would we ever have anything like a Westworld with true violence? How much of your brain can you replace and still be you?

Transcript

[music]

00:05 Paul Matzko: Welcome to Building Tomorrow, a show about how tech and innovation are building a better world. As usual, I’m your host, Paul Matzko, or am I? Have I been replaced by a digital analog of Paul Matzko, an artificial intelligence that has all of my memories uploaded to the Cloud, that’s simply been programmed to sound like Paul Matzko? Now, that seems unlikely given the current technological limitations, but if we’re able to overcome those limitations in the future, how would you truly know? And, ultimately, what, if anything, makes a conscious person different from a conscious, or for that matter, unconscious AI? Is it me? Am I me? Who the hell am I? Now, I’m a bit worried if I keep talking like this, I’ll break down into mad gibbering, punctuated by the occasional wild, “Yop,” howled out at an uncaring existential void. So, instead, Aaron Powell and I are joined by someone with expertise in applying questions of consciousness, and the philosophy of personhood, to artificial intelligence. Susan Schneider, currently the NASA/Baruch Blumberg Chair of Astrobiology at the Library of Congress, who holds a bevy of other academic postings around the globe, has a book coming out October 1st titled Artificial You: AI and the Future of Your Mind. Welcome to the show, Susan.

01:22 Susan Schneider: Hello. Thanks for having me.

01:25 Paul Matzko: Now you’ve coined the term, mind design, I think, in the book to describe human interactions with artificial intelligence in the future. What does it mean to design a mind?

01:40 Susan Schneider: Good question. Well, I think, if you’re designing a mind, you’re designing something that is a conscious being, because, I think, that’s a necessary feature of having a mind. To be conscious involves… It feels like something to be you. So, when you smell the aroma of your morning coffee, when you see the rich hues of a sunset, when you feel the pain of stubbing your toe, those are all conscious experiences. It feels like something from the inside to be a sentient being. So, if we design minds, that would mean creating synthetic beings that have felt qualities of experience. And I don’t know if we can do it or not. I take a wait-and-see approach when it comes to artificial consciousness.

02:32 Aaron Powell: What would it mean to design such a thing when it seems like the central characteristic of it, namely its ability to have experiences, is, or at least appears to be, inaccessible to us as designers? So, I can assume that Paul has a mind and has experiences, but I can’t have experiences of his experiences. So, I guess, I can’t really know that there are such experiences. So, how would we go about designing something that requires it having those experiences, if we can’t really ever know if it’s, actually, working?

03:11 Susan Schneider: You just raised a classic philosophical problem called the problem of other minds. How do we know that the people around us are minded? Maybe, we’re, say, in a computer simulation, and we’re the only sentient beings that exist. It’s like, maybe, I am alone, [chuckle] and everybody around me is a zombie. They aren’t conscious at all. So, that’s a philosophical thought experiment that is quite popular. There is a sort of answer, which I think is actually pretty good, to the question of how we know there are other minds.

03:52 Susan Schneider: But boy, it gets super difficult, because in addition to the problem of how we know the humans around us are sentient, it’s really hard to ask and answer the question, “How do we know that machines are minded?” And here, I take a practical approach, actually, even though I’m a philosopher. I aim to develop tests for determining whether machines are conscious. So, I think, artificial consciousness is something we might be able to run tests for. Concerning how we find out whether humans have minds, I think, it’s really just inference to the best explanation. They have nervous systems like ours, and each one of us can tell, introspectively, that we’re conscious. It feels like something to be us. And I know that you have a similar nervous system, and I see your behavior, so I can infer from there that you must be conscious.

05:01 Paul Matzko: That, actually, kind of, sounds a bit like an argument that you critique in the book, which is a simplistic notion that if you reproduce the exact same physical set of characteristics… So, instead of a human brain that is wired in a particular way, we reproduce it in, maybe, some other substrate, in silicon, or in digital form, and if it’s completely reproducible, it will be the same thing, it’ll be the same consciousness, the same being. Is that similar to what you’re saying here with… Because you know Aaron has a nervous system, you have a nervous system, therefore he must be minded?

05:41 Susan Schneider: Yeah. The physical conditions are the same. I, probably, shouldn’t have brought up the thought experiment of being in a simulation though, because that’s different. It could be that nobody has a nervous system, but, yeah. It’s very similar. And, I think, the question here is, “Will we ever produce artificial intelligences that are isomorphic to us in every respect involving consciousness?” And there, we don’t even know if it’s compatible with our best technology, or even the laws of nature, to do that. If it was, I assume the machine would be conscious, but that’s just a thought experiment. So, what I’m worried about is whether the machines that we actually have the technological capacity to build would actually be conscious beings. And there, I think, the issue is very nuanced. So, I, actually, take a wait-and-see approach, and, I think, we have to look at the details of not just the substrate, like whether the microchip is something like silicon, or carbon nanotubes, or what not, but also the architectural details of machines. And, I think, there are reasons for, and against, actually developing conscious machines.

06:58 Aaron Powell: Well, so, maybe that brings up a follow-up question, which is, why does this matter? So, if we’re setting out to build minds, presumably, we’re doing that in order to accomplish something. It might just be to see if we can, but we’re building them… Google’s building an AI so that I can wake up in the morning and ask it what the weather is gonna be. And as long as it can tell me, accurately, what the weather is gonna be, and engage in a convincing conversation, why does it matter if it’s conscious, or not? As long as all the… I guess, the outward signs look to me like consciousness, I’m comfortable talking to it.

07:40 Susan Schneider: So, think about the human future. If humans merged with AI the way that people like Elon Musk and Ray Kurzweil envision, then, if machines can’t be conscious, post-humans, i.e., humans who’ve merged, wouldn’t be conscious beings. They wouldn’t, really, have minds at all. So, what we would be doing with our technology is extinguishing consciousness from Homo sapiens. And changing… We would no longer even, technically, be Homo sapiens, but those post-human beings wouldn’t, in a sense, be an improvement on us; it wouldn’t feel like anything to be them, even if they’re vastly smarter than we are. So, we have to think about what we value. Without consciousness, it won’t feel like anything to be ultra smart; nothing would really matter to a being that isn’t sentient. So, machine consciousness is ultra important when it comes to questions about the proper use of artificial intelligence technology. Do we wanna merge with machines?

08:58 Susan Schneider: It’s also important in understanding the AIs that we create, even setting aside the human brain. So, if we’re creating ultra sophisticated AI, some of them look human. Like take the Japanese androids, for example, that are being developed to take care of the elderly in Japan. The public is going to assume they’re conscious, because they look human. I call this the cute and fluffy fallacy. [chuckle] If it’s cute, if it’s fluffy, it’s got to feel like something to be those creatures. And that, of course, makes a lot of sense in the biological domain, but we’re talking about artifacts. So, if we decide that a certain group of AIs, maybe the cute AIs, if you will, the fluffy AIs, are conscious, then we’re giving them rights, special legal consideration that conscious beings get.

10:01 Susan Schneider: Well, there will be trade-offs with other conscious beings in all sorts of cases. Like, consider Charlie Brown. [chuckle] So, we’re gonna be sacrificing humans in certain contexts for AIs that we think are conscious, but “Oops. We made a mistake. They’re not.” So, I think, we really, carefully, need to investigate the questions of machine consciousness, because androids that already look human are under development, and people will be [10:28] ____. Sentience is absolutely key, no matter what moral system you have. And there are, say, thinkers like Peter Singer in the context of animal liberation who have, rightly, pointed out that if an animal is sentient, we have special obligations. So, that’s why I raise the issue of machine consciousness as being so central, because it’s really a question of what the class looks like when it comes to sentient beings. Would AI be in it? And would we be in it if we altered our brains? If we took out parts of our brains responsible for conscious experience and replaced them with microchips, would we even be in that class?

11:12 Aaron Powell: I wonder about the psychology of this, because we can take on the philosophical side, we can try to address the question of, say, “Is an AI actually sentient or conscious? Is it not?” But on the psychological side of things, the more these things are present in our lives and we interact with them, we’re going to establish norms of interaction that will emerge either independent of the answer to the consciousness question, or well before we can get an answer to that question. And, I wonder, how much of this… And so, you talked about the cute and fuzzy side of things. We have pets, and we treat them a certain way because they’re cute and fuzzy, and they interact with us, and we, kind of, attribute consciousness to them, even if, in some absolute sense, we can’t be certain of it, but we don’t have that with our laptops, even though they do things.

12:09 Aaron Powell: Is there an angle to which embodiedness matters in the psychology of our interactions with AIs? So that if the… You mentioned the Japanese robots that are helping elderly people; simply by nature of them being embodied, having this physical form that we can interact with, it makes us feel the same way as when we’re interacting with a pet, or with a person, and we’re going to treat them as if they have consciousness in a way that maybe the Google AI, which is present in a set of speakers throughout my house, but I can’t hug it, even if it’s much, much smarter and acts in a more conscious way, I’m, kind of, less likely to interact with it as a conscious thing, because it doesn’t manifest in the category of things I’m used to as conscious.

13:02 Susan Schneider: Yeah. So, it could be that, in order to build a conscious machine, it would need to be embodied. Although, I happen to think that we could give it a, sort of, VR reality, so that it may not even, physically, need a body. But setting that issue aside, you’re absolutely right that people will assume that something that is embodied, and looks like us, is sentient. And I also believe that we will establish cultural norms, even before we understand whether they’re conscious. Although, my hope is that we will hit the ground running, and develop tests for determining whether machines are conscious. I think that it’s natural for us to treat androids, for instance, with some, sort of, dignity, because they look so much like us, and even if we understood that they weren’t conscious, it would probably be a bad idea in society if people were acting abusively towards creatures that look like biological beings. It could, kind of, degrade our respect for sentient beings.

14:16 Susan Schneider: So, that’s just a bad idea, but, I think, the cute and fluffy fallacy is super dangerous precisely because we will treat androids with a certain level of respect, at least, many of us will, and they may not have consciousness. And, in that case, again, there will be trade-offs. So, if you’re on a trolley, and… If you take the left track, there are two humans, and if you take the right track, there are three androids. If everybody thinks the androids are conscious, two humans will have died, but we will have made a mistake. It doesn’t feel like anything to the androids.

[music]

15:08 Susan Schneider: So, Westworld depicts android abuse, and, I think… I went to South by Southwest to speak, and they actually have a, sort of, Westworld that they created. Isn’t that funny? But, anyway, would we ever really have anything like a Westworld with true violence, rather than just acting in a non-violent situation? And, I think, we shouldn’t, for lots of reasons, but I would also add that, at a point at which we don’t know whether a being is conscious or not, before we’ve developed tests we can believe in for machine consciousness, we should, probably, apply something like a precautionary principle, because it would be really bad, really catastrophic, if we created an enslaved group and mistreated them. And even if we tweak their settings so that they want to be our slaves, that just sounds awful. It sounds like Brave New World.

16:20 Paul Matzko: Yeah, which is, kind of, the point of Westworld, where… When the audience realizes that the robots in Westworld are actually sentient, everyone’s feeling is, “Oh. Well, that’s, obviously, wrong.” But, I think, one of the broader points of the show is that what was going on at Westworld was wrong even before that… Even if none of the robots were sentient, it brought out the worst in people, it encouraged them to… And, I guess, that applies to artificial intelligence outside of a TV show.

16:52 Susan Schneider: Oh, yeah. Absolutely. I think, that the widespread abuse of humanoids would be very bad, or even in Blade Runner, they had AI animals. If we saw people kicking around AI puppies… I think, this stuff is a bad idea, not just because I have a puppy here.

[chuckle]

17:17 Aaron Powell: How far, though, do we take this precautionary principle? Because this is all… Right now, we’re talking about it in application to stuff that’s potentially coming, but it seems like there are, kind of, two potential bad things that we’ve been talking about. One is the badness in terms of the behaviors that we engage in, almost in an Aristotelian habit-formation, sort of, way. Even if this thing isn’t sentient, if I am using it as a way to practice being rude, then I’m likely to, maybe, be rude elsewhere in my life with, actually, sentient beings. I’m instilling bad habits, I suppose. That’s bad in one way. And then, the other badness is the… If we treat these things poorly, and, in fact, it turns out that they were conscious, then, we’ve committed some terrible wrong that we can’t take back. But both of those, we can apply those forward to emerging technologies and things we might run into in the future, but it seems like we can also apply them backwards to existing, and acceptable, or largely acceptable, behaviors.

18:24 Aaron Powell: So, on the badness-of-habit stuff, does that mean that we shouldn’t also be playing Fortnite or Call of Duty, in which we are shooting at representations of people, or does this mean that we have to be vegetarians? Or if, let’s say, panpsychism is true, where consciousness is… There are people who believe that, and they might be right, that rocks and plants and trees and all that have… How far do we need to take this precautionary principle in both directions?

19:02 Susan Schneider: Right. Okay. So I think in the book, I was thinking of a precautionary principle mostly in the context of AI: we don’t know whether they’re sentient or not, and until we do, we shouldn’t put them in any circumstances where there would be trade-offs with other rights holders. So, that’s what I meant in the book. Now, you ask really good questions though.

19:32 Susan Schneider: Like about panpsychism, and I’m okay eating a salad. As for VR, I think it’s important to ask that kind of a question, at least… Again, I don’t apply the precautionary principle so much to just interacting with human-like beings, I really apply it when we just don’t know whether the beings in question are conscious. When we’re playing video games, we know they’re not conscious, but you’re right, there could be other reasons why we don’t wanna see human-like creatures hurt. I think there have been debates like this about video games. When we get into VR that’s extremely vivid and life-like, I actually don’t think it’s a good idea that avatars, creatures inside of a VR program, get abused or subjected to racist treatment; it would potentially be kind of diminishing our respect for sentient beings.

[music]

20:45 Aaron Powell: You’ve mentioned a couple of times the possibility of tests to determine the consciousness of these AIs. What would those tests potentially look like?

20:58 Susan Schneider: Giulio Tononi and Christof Koch and others have developed tests based on the integrated information theory, which is a mathematical measure that seeks to look at computational systems and determine whether a system is conscious; that’s been influential. I actually am more skeptical about that. So what I’ve done, together with some others, is look at other ways of determining whether AIs are conscious. Together with Ed Turner at Princeton University, actually an astrophysicist and one of the people who works on exoplanets and space images, I developed a test that is actually a behavior-based question-and-answer test, but it looks for whether the machine kind of understands the felt quality of experience, whether it has a kind of sense of what it feels like to experience the world.

22:03 Susan Schneider: We have that test, we wrote a short little piece on that in Scientific American, if anybody wants to look at it in more detail, and I talk about it in the book. And then in addition to that, in a TED talk and in the book, I talked about what I call the chip test. So in the TED talk, I asked you to suppose that you replaced the parts of your brain responsible for… The parts that are the neural basis of conscious experience, with microchips. Then I ask, “Would it feel any different, would you notice a sort of dimming of conscious experience, or some sort of elements of your consciousness just going away, like maybe visual consciousness?”

22:53 Susan Schneider: And you might say, “Well, Susan, this is just a thought experiment.” But the thing is, real prosthetics are already under development for all sorts of brain disorders and injuries. For example, Ted Berger over at USC is developing the artificial hippocampus, which is already in clinical trials in humans with some success. DARPA has been working on all sorts of invasive brain-implant technologies for post-traumatic stress disorder, Alzheimer’s, and whatnot. So suppose that in the process of treating patients, we stumble upon microchips that we think could help with parts of the brain that are responsible for consciousness in patients that have disorders, and so we try it out.

23:45 Susan Schneider: Suppose it works. Well, in that case I think it tells us that microchips are the right stuff; at least that kind of microchip would appear to be a viable underlying element for conscious experience. Now that’s not to say that all AIs that are created out of that very type of microchip will be conscious; I think consciousness is a delicate issue depending not just on substrate but on architecture as well. But I think if we do find that neural prosthetics work in those contexts, that will give us some indication that conscious AI is possible, and I do think that we will know the answer to that question as long as invasive neural implants are pursued the way they are now.

24:37 Paul Matzko: One of the other thought experiments that you take up in the book is asking someone to walk into a future mind-design lab and decide, “Okay, well, which parts of your consciousness do you want to swap out? Do you wanna make yourself smarter so you can process information faster? Do you want to remove the… ” You can remove literal parts of your brain and replace them with upgraded artificial enhancements, whatever the substrate’s based on, and you pose the question of, “Well, how much of your brain do you replace and you’re still you?” What are you getting at with that kind of thought experiment?

25:22 Susan Schneider: Yeah, I love that thought experiment, because it gives you this sense that you’re out on a shopping trip. But instead of getting, like, Starbucks coffee beans or whatever, you’re getting enhancements, like Zen Garden, where you can upgrade to the level of a Zen master in the blink of an eye, right? Okay, so if we start to replace parts of the brain responsible for consciousness, it could be lights out for us, if we don’t have the neural basis of consciousness right at the level of the microchip, including chip architecture: not just, like, whether it’s silicon or not, but the actual architecture of the chip. So that’s one issue, but there are other issues as well. So if I go and I get Zen Garden, and then I get Human Calculator, which is an enhancement that allows me to, say, be like a physics Einstein, [chuckle] figure out spacetime.

26:28 Paul Matzko: Nice, yeah.

26:29 Susan Schneider: And so that all sounds really cool, right? I mean, who wouldn’t want to be smarter, or have more control over their mental life? But the problem is, after too many enhancements, even setting aside the consciousness question, how do you know that it would even be you, especially if you made the changes rapidly? These are questions having to do with the nature of the self or person. What is it that allows you to persist over time? This is an issue that’s developed in the literature in metaphysics on the nature of the person, and one influential theory is simply that the mind is the brain. Well, if your mind is your brain, and you replace too many parts of your brain with some other things, you’ve actually killed yourself, right? The people like Elon Musk who advocate merging with AI in this kind of transhumanist fashion would be contributing, unwittingly, to the death of all the people enhancing, right? It’s pretty hugely tragic, but that’s… It’s not to say that those new beings wouldn’t be smart, and maybe they’re conscious; that depends on machine consciousness.

27:47 Paul Matzko: But are they a different being? Yeah.

27:48 Susan Schneider: But I… Yeah, it may not be you, right? This is a philosophical minefield; we’re playing with ideas in philosophy involving the nature of the mind, the self, and the person. And in philosophy, we’ve been speaking about these issues for thousands of years. And there are no easy answers to questions about the mind’s nature, whether there’s a self, whether there’s a soul, what the person is. And so to just simply advocate merging with AI, or a certain position about conscious machines, without reflecting on all this is a bad use of AI technology. It would be dangerous to just sort of upgrade the brain, so to speak, when at the same time you could be ending the life of the person in question.

28:47 Paul Matzko: Yeah.

28:47 Susan Schneider: And you need the brain. Yeah, so these aren’t issues that technologists are authorities on, right? But I think people like Silicon Valley boosters, and they tend to defer to AI experts a lot about the proper use of AI technology. And I think we can see already that what goes on with the development of AI in Silicon Valley is about following the money. It’s about profits, which is great, it drives a lot of technological change, but we’re also running into a lot of issues because these are emerging technologies.

29:29 Aaron Powell: Well, then that raises the question of who gets to answer these questions and how we go about it. Because you said, as we’re increasingly approaching these technologies, working with them, making these developments, we should be conscious of these extreme philosophical difficulties, the complexities that may not be immediately obvious, and take them seriously and really try to think them through. And that all sounds like, yes, of course, but then, given that we’re talking about potential sentient beings that might have their own interests, and that those interests might rise to the level of rights, how do we go about making these kinds of decisions or answering them? Because, like, philosophy of mind is incredibly complex, and people have been having arguments about it for quite a long time, and it’s not like it’s settled. It feels like we’re maybe making progress, but progress in philosophy is a hard thing to measure, right?

30:34 Susan Schneider: You think?

30:34 Aaron Powell: Yeah.

30:34 Susan Schneider: So, yeah, I know, definitely.

30:36 Aaron Powell: And so, on the one hand, there are these huge trade-offs, because, as you said, on one extreme, we go too fast and we end up making these beings that are, in a meaningful sense, like the kind of conscious things that we think of ourselves as, or even more intelligent than we are, and we treat them incredibly poorly, maybe because we don’t know that we’re doing it, maybe because we’re callous. But that’s this monstrous wrong, that is, you know, like millions or trillions of lives being destroyed or made miserable, and that’s awful and we should avoid that. But on the other hand, this technology has the potential for radically improving the lives of everyone on this planet.

31:21 Susan Schneider: I hope so.

31:21 Aaron Powell: And if we forgo that… Yeah, and it also has the potential of destroying all of it. So it feels like every option can lead to possible extremes, some extremely good, some extremely bad, extremes in different ways, but we have to somehow decide. And so, is this something you think we should come to conclusions about as a society, whatever that might mean, or is this better left to the individuals, say, innovators, entrepreneurs, scientists, philosophers, trying to work it out, while we’re telling them, “But please think clearly about it”? Like, how do we go about approaching this? Because what you [32:06] ____ on, it’s just incredible complexity.

32:08 Susan Schneider: It’s so tricky. In the book, I advocate a stance of what I call metaphysical humility, which means that we really need to understand that there are no easy answers to these questions. And I see my job as explaining them to as many people as possible, so that they understand that they have to mull over these issues to really get how AI can best promote human flourishing. So, I actually suspect that one way to deal with these enhancement decisions might be for there to be some counseling involved when people make enhancement decisions.

32:47 Susan Schneider: So, for instance, if you’re getting a genetic test, before you get your results, and even before you order the test, many hospitals offer genetic counseling to talk about the issues involved in seeing whether you have the gene for breast cancer and whatnot. So, something like that, as funny as it sounds, could be extremely useful, and there, I think, the issues involved are philosophical questions about the nature of the self, so that someone who’s thinking about getting Zen Garden or Human Calculator understands the risks. The risks aren’t just medical risks. They’re actually… It just sounds so funny, but I mean, there are philosophical risks involving the nature of the person.

33:42 Susan Schneider: That doesn’t mean, though, that we should not allow people to do what they feel is best for themselves, for their own lives. I mean, there will be daring people who don’t care so much, and there will be people who have well-founded philosophical views who don’t even believe in the self. So, Derek Parfit in philosophy has been extremely influential in arguing that there’s no such thing as the self. And think about the related Buddhist position, that we should actually see the self as something that’s illusory. But my point is to get people to think, and to not just take what Silicon Valley says at face value. They’re there to make money. All stakeholders need to be involved in questions about human flourishing. So I think it’s very important to educate the public.

[music]

34:41 Susan Schneider: I think AI has to promote human flourishing and improve the lives of sentient beings, and we have to tread very carefully. So the stance of metaphysical humility, I think, is important, encouraging people to think about the ultimate questions before they use artificial intelligence technology in certain ways, or before, as AI researchers, they develop it in certain ways. It’s important to know where it’s all headed. And of course we can’t predict the future, but we can recognize that issues involving the nature of the mind, the enterprise of mind design, and the potential for conscious machines being created are a philosophical minefield. There’s no easy solution to the nature of the self or the person, or to the question of whether a machine that we develop is conscious if it’s highly intellectually sophisticated. So we have to tread carefully. As we develop AI technology, we have to make sure our social development stays in lockstep with the AI development itself.

[music]

36:02 Paul Matzko: Unless you’re an expert in artificial intelligence, you’re probably going to have a reaction like I had when I first read Susan’s book, which is, “Wow, we really don’t understand what consciousness is, but we’re trying to make decisions about whether or not a machine is going to be conscious when we don’t understand what makes human beings conscious in the first place.” So, I appreciate Susan’s call for a certain level of humility as we approach these conversations. This is a complicated subject: the idea of personhood, what makes a person a human being, what makes us conscious. These debates have been going on for hundreds, even thousands, of years, and now we’re trying to apply them to machine technology. We don’t have it all figured out. We should approach this with a humble spirit. This has radical implications, not just for the future of technology, but for the future of humanity as a whole and for future culture.

36:53 Paul Matzko: Think about it this way: if it turns out that consciousness, whatever it is, is a thing that can be instilled in machines, it can be instilled in the cloud, it doesn’t even need to be embodied. Think about the implications for entire sectors of human existence that ostensibly have little to do with technology, things like religion. Again, this conversation about machine learning and at what point machines become smart enough or aware enough to be considered conscious beings has broader spillover effects beyond the edges of technology and innovation; it really will change how we have to think about and conceptualize human society as a whole. And given that reality, I think Susan’s approach of encouraging epistemological humility is appropriate. Let’s pay attention to what smart folks, ethicists, philosophers, machine learning programmers, what the smartest people in the room are grappling with. Let’s learn from them so that we have the tools to make decisions for ourselves in the future when we actually are building potentially conscious machines.

[music]