“Artificial Intelligence” with Robert Hunt

 
 

Show Notes:

Is AI going to take over the world?

Probably not in the way pop culture would have us imagine (or in the way Chris fears). Today’s guest, Dr. Robert Hunt, offers Eddie and Chris a broad overview of AI--what it is, how we see it, how we see ourselves in relation to it, and how we use it. 

Dr. Hunt has been the Director of Global Theological Education at the Perkins School of Theology, Southern Methodist University since 2004. He is Professor of Christian Mission and Interreligious Relations and teaches classes on World Religions, Islam, Interreligious Dialogue, Cultural Intelligence, and Mission Studies. He earned his M.Div from Perkins School of Theology and his Ph.D. from the University of Malaya. He is also the creator and host of the podcast Interfaith Encounters.

His upcoming book about AI will hopefully be out this fall.


Resources:

Learn more about Interfaith Encounters or listen to it on Apple Podcasts or Spotify.

Purchase his books, Muslim Faith and Values: A Guide for Christians and The Gospel Among the Nations on Amazon.


Transcript:

Chris McAlilly 00:00 I'm Chris McAlilly.

Eddie Rester 00:01
And I'm Eddie Rester. Welcome to The Weight.

Chris McAlilly 00:03
Today we're talking to Robert Hunt.

Eddie Rester 00:05

Dr. Robert Hunt teaches at the Perkins School of Theology at SMU. His specialty is in world religions, inter-religious dialog, mission studies, cultural intelligence. But today he's talking to us about something that really is a deep interest of his: artificial intelligence.

Chris McAlilly 00:23
What is artificial intelligence, Eddie?

Eddie Rester 00:25

Well, he's gonna explain that to us today, but basically it is, in our day and time, generative digital intelligence, where computers are now learning from us. And we've grown up with ideas of AI. It's a lot in the language right now. People are talking about ChatGPT. Apple has recently announced Apple Intelligence, which is their version of AI. So it's in the language around us. And we thought it would be important to hear from somebody who's studied it, who's writing about it, who's talking a lot about it theologically, not just technologically.

Chris McAlilly 01:04 I think that AI is scary.

Eddie Rester 01:07 You think it's scary?

Chris McAlilly 01:08 Absolutely.

Eddie Rester 01:08 Why is it scary?

Chris McAlilly 01:09
It's scary because it's going to take over the world, Eddie.

Eddie Rester 01:13
Okay. He's going to talk about that in the podcast. He's going to address that exact fear. Obviously you weren't listening as we talked to him.

Chris McAlilly 01:22
I always listen to you, but I am a little bit scared. I'm not lying; I am a little bit scared.

Eddie Rester 01:27
I mean, there are pieces of it that really...

Chris McAlilly 01:29

Well, it's scary! Computers are going to take over the world, and they're going to lead to these questions around whether or not we have a human future, or a post-human future that is scarier than the one that we're in. I don't know. I think the younger you are, the more scared you are. When I talk to younger people, they're even more scared.

Eddie Rester 01:47

Well, it impacts jobs. It impacts how we relate to people. It impacts how we view humanity. And I think that's one of the things that we really hone in on in this conversation, is how it impacts our understanding of what humanity is and what we're here for.

Chris McAlilly 02:03

And that's where my hope is, because I'm not just fearful. I think my hope is that the moment we're in, where artificial intelligence raises all of these questions around what it means to be human, will force us to go back and rediscover the resources that the church has to offer about what it means to be human, created in the image of God. And I do think we'll... I still think it's kind of scary, I'm not gonna lie. I look forward to it. It's a good conversation. Robert's delightful. We've tried to have him on the podcast before.

Eddie Rester 02:10

We're gonna explain that right off the top. I think AI has been working against us, actually. So enjoy the podcast. If you're interested in this, take a deep listen. Share it with some other folks who may have questions about AI. As always, leave us a review. Give us the five star treatment wherever you listen to podcasts, so that others can find us as well.

Chris McAlilly 02:57 The five star treatment.

Eddie Rester 02:58
Give us the five star treatment.

Chris McAlilly 02:59
[INTRO] The truth is the world is growing more angry, more bitter, and more cynical. People don't trust one another, and we feel disconnected.

Eddie Rester 03:10
The way forward is not more tribalism. It's more curiosity that challenges what we believe, how we live, and how we treat one another. It's more conversation that inspires wisdom, healing, and hope.

Chris McAlilly 03:22

So we launched The Weight podcast as a space to cultivate sacred conversations with a wide range of voices at the intersection of culture and theology, art and technology, science and mental health, and we want you to be a part of it.

Eddie Rester 03:37

Join us each week for the next conversation on The Weight. [END INTRO] Today, we're here with Dr. Robert Hunt from Perkins School of Theology at SMU. Dr. Hunt, thank you for joining us today.

Robert Hunt 03:52
You are most welcome. I'm very happy to be here with you.

Eddie Rester 03:55
What people need to know is this is, like, the third or fourth time we've tried this.

Robert Hunt 04:01 Right.

Eddie Rester 04:02

Our topic is AI, and so the first time we actually recorded an episode, and for the first time ever, we had an episode disappear. The second time, Chris didn't get added to the calendar for some strange reason. He fixed it. Then you had some other issues. This time, Chris's mic died. Your mic has had some issues. Obviously, Skynet does not want us to talk about AI today.

Robert Hunt 04:29
I think this is true. The AI robots are taking over, and don't want to be discussed in that context.

Chris McAlilly 04:38

Yeah, this is clearly the case. We don't want theological reflection on the issues of the day, and so they're shutting it down. But we're avoiding them for a brief moment. So we better get this in.

Robert Hunt 04:51
Better jump on it. Yeah. Better jump on it.

Eddie Rester 04:55
So help us out. We're going to talk about AI today, and we're going to have some long topics about things. But for the uninitiated, what is AI? What's artificial intelligence?

Robert Hunt 05:11

So artificial intelligence is basically the idea that you can get a computer to demonstrate the kind of intelligence that we associate with a human being. Maybe not as smart as us, but to have the same kind of intelligence as a human being. Just a little bit more about that, and this may be helpful: when we think of contemporary artificial intelligence, like ChatGPT 4.0 or Claude AI or Copilot, that's Microsoft, or Gemini, that's Google, we're talking about a specific way of getting a computer to be intelligent, and that's by using a massive neural network that has been trained with a large language model.

Eddie Rester 06:05
Say a little bit more about that. What's a large language model? You hear that a lot when people talk about AI. But what does that really mean?

Robert Hunt 06:14

What it really means is that this neural network has been trained with language, human language. So vast amounts of language in written form are fed into the neural network. Complicated algorithms help the network, for want of a better word, figure out the probabilities that one set of letters will follow another set of letters under certain circumstances. And as all this language goes in, that gets encoded in the neural network through the weights that are given to relationships between microprocessors, of which there are literally billions, so that the whole thing now understands human language in a particular way, i.e., as relationships with probabilities that one set of letters will follow another set of letters.
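To make "the probabilities that one set of letters will follow another" a bit more concrete, here is a minimal, illustrative sketch in Python. It is not how ChatGPT or any real large language model is built (those learn billions of weights in a neural network over tokens, not raw counts over characters); it only shows the core idea of estimating what is likely to come next from examples of text. The function name and sample sentence are invented for illustration.

```python
from collections import Counter, defaultdict

def next_char_probabilities(text):
    """Count how often each character follows each other character,
    then turn those counts into probabilities (a toy 'language model')."""
    counts = defaultdict(Counter)
    for current, following in zip(text, text[1:]):
        counts[current][following] += 1
    probs = {}
    for current, followers in counts.items():
        total = sum(followers.values())
        probs[current] = {ch: n / total for ch, n in followers.items()}
    return probs

# Toy example: in this tiny sample, what tends to follow the letter "t"?
sample = "the theology of the thing that thinks"
probs = next_char_probabilities(sample)
print(probs["t"])  # 'h' follows 't' far more often than a space does here
```

A production model replaces these explicit counts with learned weights and works over much longer contexts, but the underlying move, predicting what plausibly comes next given what came before, is the same one described above.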

Eddie Rester 07:16
And we've been surrounded by sci-fi AI.

Eddie Rester 07:20

Our entire lives. I mean, C-3PO AI in my Star Wars movies, HAL 9000 in 2001: A Space Odyssey. How is what we're seeing now like or different from what we've been surrounded by in science fiction all these years?

Robert Hunt 07:40

Well, it is like what we've seen in science fiction all these years, in the sense that having been trained in human language, it's exquisitely good at speaking to us in our language. So it's like it in that sense. It's unlike it in this sense: the actual physical computers necessary to do this fill pretty huge data warehouses in different places; they're not packed into a small robot brain. So when we're chatting with ChatGPT, for example, we're chatting with a large language model that exists in a warehouse in California somewhere and uses as much electricity as a small city. So it's not like one of these robots. The second thing is, and this is important too, those robots that we encountered, with the exception of HAL, by the way, were embodied. They were in control of moving around, however awkwardly, or rolling around or something like this. So they were doing two kinds of things. They were speaking, and they were also moving and acting. A large language model, a GPT, that's the name of the process, really only talks. It listens and it talks, but it doesn't move. It has no action. It has no arms or legs or anything like that. It's just a talker.

Chris McAlilly 09:21

But isn't it just... I mean, even there, this idea that AI listens is an interesting... I don't know. It's just kind of an interesting word, because, you know, in a sense, it's not listening. It's receiving data, and then it's processing that data and, with a great degree of speed, giving you an answer back. It's hard for us to talk about something that is non-human, that has intelligence, in the way that we would talk about human beings. We can talk about the mechanics of it, but I do think it very quickly raises philosophical or theological questions, and this is an area that you've thought a lot about. What kinds of questions have you been asking about AI from a philosophical or theological perspective?

Robert Hunt 10:35

Well, the first question I've been asking philosophically is the one that you just raised, which is, is it listening or is it hearing? That's Biblical, right? And you're exactly right. What's going into it is data. It processes this data in order to find the best possible pattern match within itself, and then that pattern match comes out in language. So we have the illusion that we're speaking to something like us. And this is, philosophically, this is what's called we have a theory of mind. So a characteristic of humans is we have a theory that when we're talking to another human, they have a mind like ours. And since AI speaks human language, then we're tempted into a theory of mind. We're tempted into thinking there's a mind back there like ours, when in fact, it's a vast neural network that parsed groups of letters, not even words and sentences, by the way, recognized a pattern in them that it matched to other patterns, and then it output the new pattern. So one thing, one big question philosophically is, how do we avoid, or theologically, how do we avoid attributing a mind to this thing that is so exquisitely trained to appear that it has a mind, but it doesn't. It simply doesn't. You have to go into the physics to see that.

Chris McAlilly 12:11
Yeah, I feel like there's an illusion of consciousness, or an illusion of...

Robert Hunt 12:18 Exactly.

Chris McAlilly 12:19

That's happening, and there is something about human beings. We just love these slippages, these opportunities to play with or to think about things that are either like us or not like us, but... Just the way in which we would interact with, I mean, on the one hand, animals, and on the other hand, all of the fantasy and science fiction around alien intelligences.

Robert Hunt 12:51 Right.

Chris McAlilly 12:52

It's a realm of almost constant human imagination. Why is that? Why are we so enamored of this search for an intelligence that is not like ours, but that is close to ours, or that exceeds ours?

Robert Hunt 13:10

Yeah, I think that's a super good question. I'm going to offer two answers, and I'm working on a book on this right now, on being human in an AI age, that I expect to finish in the next month or so. But one reason is that we are socialized, from the beginning, the earliest dawn of humanity. Okay, we have looked for something that has a mind like ours, because that's useful, something we can communicate with. And so we attributed a limited theory of mind to something like a dog. Right? We feel like somehow they're like us. They understand our words and they think like us, right? Cats, we're maybe pretty sure that they don't think like us.

Eddie Rester 14:00 They don't even like us.

Robert Hunt 14:01

They don't even like us. And there are good reasons for that, but part of that's based on the way they move their eyes. Dogs' eyes have whites like ours, so they can watch our eyes move. We can watch theirs. Cats don't, so we don't see their eyes' movement. So yeah, but we're socialized from the very dawn of humanity to look for other minds like ours, because those are minds we can manipulate and use. And that's useful. That allows us to domesticate animals, and that allows us, and keeps us, by the way, interacting with people who don't speak our language. We look at that person, we see that they respond in certain ways. They make what seems to be intelligible speech, even if we don't understand it, and we go, "Okay, I don't understand that language, but I think it can be translated into my language." Right? I think that's part of it. The second part of it is that for the last 200 years, 250 years in the modern world, we have placed before ourselves in popular culture the idea of alien intelligences and alien beings. And this really begins when we get out of the Ptolemaic universe and we move into a Copernican universe. We realize we're not the center of the universe. And then, beginning in the 19th century, we realized those stars are a long, long way away. But so are those planets, and those planets are not stars that are big. Those planets are places. The telescope. One of the places this started really going was when they pointed telescopes at Mars and they saw straight lines.

Eddie Rester 15:33 Yeah.

Robert Hunt 15:44

Whoa. All of a sudden that looked like something humans would make. So instantly we imagined Martians. And then science fiction filled right in. It jumped right in and began to create alternative worlds and intelligences, until by the time... I mean, look, we have the Orson Welles 1930s radio show, right, War of the Worlds, where the Martians land. So as early as the '30s, we have imagining of aliens in science fiction that just gets more and more popular. And that's fascinating, by the way, how authors use this, and we use it, as a way of exploring something that we all want to explore in literature. What is that other thing? What's that thing different from me that helps me understand myself? So does that make sense?

Eddie Rester 16:39

Yeah. And I keep thinking that, you know, we invested all this science fiction energy in Martians, and when life was finally found on Mars, it was bacteria.

Robert Hunt 16:49 Right.

Eddie Rester 16:50

It was single-celled, or there were the remains of single-celled life. So life did not, in that regard, match up to what our dream or imagination of it had been. And I wonder if that's going to be the end result of our relationship with AI down the road, that at this moment in history, because of the large language models, what ChatGPT has released, what Apple is now proposing to integrate into all their products... I wonder if we're not going to get 50 years down the road and think, "Huh. That's it." Any thoughts on that?

Robert Hunt 17:30

Well, I think it's a really good question, and I think you may be right, if we are sophisticated enough in our own self-understanding and in the maintenance of our own humanity that we can become clearer about how this thing that looks so human is not human.

Eddie Rester 17:54 Right.

Robert Hunt 17:55

That's if we can. Now the more dystopian future is that, and this is already spelled out by some of the effective altruists, people like Elon Musk that talk about the post human. The alternative is we understand ourselves as just very sophisticated computers running algorithms, okay, this book "Deus," in which case we're okay with these superior intelligences, right? If we treat ourselves, if we treat a computer as if it's a human long enough, we'll start thinking of ourselves as computers, and we're already well down that road. And that's a little more dystopian, you know?

Chris McAlilly 18:41 Yeah.

Eddie Rester 18:42
So maybe say a little bit more about what that does, not just to how we see ourselves, but to how we begin to see others.

Chris McAlilly 18:51

Let me jump in, Robert, one of the things I'm thinking about is just your comment about the way in which we have a theory of mind of animals, a limited theory of mind.

Robert Hunt 19:02 Right.

Chris McAlilly 19:02
And you mentioned the domestication of other intelligences.

Robert Hunt 19:06 Right.

Chris McAlilly 19:07

Which we manipulate and use. I think in some ways, one of the things that I'm hearing in the conversation is there is a sense in which AI may be an intelligence that we have created and that we can use. And there's a fear that it's going to become so intelligent that the thing that we've done with other intelligences might get turned back on us. That there's this darker kind of dimension of what it means to be human, which is to say that we manipulate and use the intelligences of other beings, and so there's a fear that that same thing will happen to us. And did you mention the book "Homo Deus"? Is that the book?

Robert Hunt 19:50 Yeah, "Homo Deus," yeah.

Chris McAlilly 19:51
Yeah. So that's by Yuval Noah Harari.

Robert Hunt 19:54

Harari, yeah, exactly. Well, I think it's a great question. So, to take a step back, a couple of things. First of all, Harari quite openly says that humans are just algorithms being run in a neural network. Right? And they happen to have become so sophisticated that they can create a version of themselves that's more sophisticated, right? For that kind of thinking to take place, and his two books need to be read together on this, really, we have to adopt a new view of intelligence that did not exist 400 years ago in our culture.

Chris McAlilly 19:55

Yeah. He goes back and looks at the principal force of evolution, natural selection, that gets replaced by intelligent design over the course of, you know, the long history. And there's this sense that, you know, there used to be Homo neanderthalensis; now there's Homo sapiens. And the idea is that we manipulate the place. Yeah, we, you know, Homo sapiens actually, you know, decimated the Neanderthals. And if we've created this, I don't know, hybrid, like, computer-human subspecies, then all of a sudden the regular old Homo sapiens are just going to get decimated. I don't know. How are you thinking about those dimensions?

Robert Hunt 20:39

Because from the time of Aristotle up until the beginning of modernity, and even well into modernity, intelligence was what humans had that made us in the image of God. Right? We had rational minds, and God had a rational mind, and therefore we had a connection with God through our rationality and intelligence. And gradually, the shift in the definition of intelligence moved to intelligence being a systematic way in which an organism understands its environment and responds to it fruitfully for the sake of evolution, right? And so now we're going to have a gradation of intelligence. A bacterium is intelligent in a very low way. It has some knowledge of what's going on around it and responds to it. We're intelligent in an extremely sophisticated way. But as soon as intelligence is not the distinctive feature of humanity that relates us to God, and instead simply the end point of a spectrum that goes from bacteria up to ourselves, then it's easy to imagine the next step beyond ourselves, the super intelligence. So everything in our popular culture, its understanding of how humans evolved and how we came about, has pulled us away from this older definition of intelligence that makes us special to something that doesn't make us all that special. Now there's one other part of this, if I can pick up the thread of what you said: the intelligence, or the algorithm, can these things then begin to exceed us in the same ways that we exceeded animals, bonobos or chimpanzees, or whatever we think is closest to us in intelligence? And we need to go back and think a little philosophically about this, though, because a characteristic of humans is that we have a will. We have desires, and we try to fulfill those desires. They're very sophisticated desires. But from the moment a child is born, it literally cries for its mother's milk. It goes looking, okay? It goes looking. You put it close to something that smells a little bit like a breast, and it's ready to latch on. That's true of mammals in general; young that feed off their mothers, this is what they do. They have instinctive drives and desires. I would argue that at this moment in time, these large language model AIs don't have that. They are 100% responsive. They don't generate anything. If you don't poke them, they don't answer, nor are they programmed to. Now, I'm not saying they couldn't be, but they will begin to try to manipulate us only if we tell them to. Right?

Eddie Rester 24:42

And I think that's so important, because people fear AI for a lot of reasons. It's going to take jobs.

Robert Hunt 24:49 Sure.

Eddie Rester 24:50

It's going to demean who we are as humans. But they can only do what we tell them to do. You know, in a previous conversation that got removed, you said, at any point we can unplug these things. It's not like...

Robert Hunt 25:09

Yeah, exactly, right. So easy to forget. Well, I listened to an interesting podcast where Lex Fridman, who's a huge podcaster, interviews one of his buddies who's all into making AI girlfriends or something. And they talk about, well, can the AIs take over the world? And he said, you've got to realize that for ChatGPT 4.0 to recreate a second ChatGPT 4.0, it will have to master all of the art of human manufacturing in the industrial age, and then somehow create a thousand factories to create millions, if not trillions, of processors and then assemble them. Okay? Because, you know, it will have to recreate itself essentially, and not just once, but many times over. And here's the thing: it has no drive to do that. It's not even interested in that. All it does is inputs come in and outputs go out. Now, could we try to program it to do it? Hmm. It'd sure take a lot of human slaves to, you know what I mean? It doesn't have arms and legs or hands. It doesn't have a sense of smell.

Chris McAlilly 26:28

Yeah. So you mentioned the will. If you were inclined to reduce human consciousness, you'd say a brain is like a machine with free will, and AI is like a brain, a machine without free will. I mean, you're already in a reductionistic framework, it seems to me, because consciousness is... It's a reality that's not... You know, I think it's a reduction to say that a brain is a machine, because a brain is more complex than that. And I think one of the things that perhaps... I don't know...

Robert Hunt 27:19

Well, I think you're absolutely right. And this means that it's an important moment for theological reflection to ask what it means to be made in the image of God. Okay? And I, you know, I mentioned this idea that we're intelligent, we share rationality with God. This was a popular... This is not, by the way, something that would have come up in the first three or four centuries of Christianity. It really happens when the Greek philosophical tradition begins to mix with the Christian tradition to develop a Christian philosophy. So maybe it's good to go back.

Chris McAlilly 27:19

I do think it's one of the things in this particular moment that the Christian tradition has something to offer, because there is this conceptual framework that says human beings are created in the image of God. And you know, there was a time, I think about the 18th century, when you have this kind of philosophical set of questions rising up, interrogating what it means to be human. There's an emphasis on a particular kind of moral intelligence that rises up. But, you know, there's this... It's not just that to be made in the image of God is to be intelligent. God's intelligent. We are as well. But there's a way in which human intelligence is... It provides a particular point of view. It's located within a particular narrative and space and time. I mean, there are a lot of ways in which the Christian tradition, I think, gives a conceptual framework to kind of challenge those reductionistic views.

Robert Hunt 27:19 Absolutely.

Chris McAlilly 27:19
I'm gesturing in that direction, but I feel like you may be able to help us understand, give us some categories and some language for articulating maybe what I'm gesturing towards.

Robert Hunt 29:09

What does it mean to be made in the image of God? And I'll just give two points of contact. One is, of course, the stories from the Garden of Eden. God makes Adam, the primal human in God's image. It doesn't really say any more about that. We know that that does not mean that it looks like God, because that would be idolatrous. Right?

Eddie Rester 29:31 Right.

Robert Hunt 29:33

But you can ask, what's the first thing that God asks this creature made in God's image to do? And it is to name the other creatures, take charge of being a steward of creation and to be fruitful and multiply and cover the face of the earth. So, you know, I'm a missiologist by trade. We would say God has a mission. Humans have a mission. That's why we're in the image of God.

Eddie Rester 30:00
So our very createdness is to live out the mission of God, not just to be intelligent.

Robert Hunt 30:07

Exactly.

Eddie Rester 30:08
Not just to be rational, but to care about, to follow, to do the things that God does.

Robert Hunt 30:14
Yeah, and to not merely respond to our environment.

Eddie Rester 30:18 Right.

Robert Hunt 30:18

In a way that we can keep reproducing, right? Now, I want to add one other thing to that, and then maybe we can go further. But this means that it's really important for Christians today not merely to go back and contemplate that narrative, made in the image of God, but to contemplate what is understood by saying that Jesus is the new Adam, that Jesus is the new recreation of the image of God, to which we are invited by the Apostle Paul. And so, what does it now mean to have the image of God renewed in us through our relationship with Jesus Christ at a time in which there's all of these movements within popular culture that would detract from that, that would invite us to be subhuman? Right? So I think these are places for fruitful Christian reflection, although I have to say the problem of the 20th century was how to make God credible in a modern world.

Eddie Rester 31:31 Right.

Robert Hunt 31:32
I think the problem of the 21st century, in an AI age, is how to make humanity credible in the modern world.

Eddie Rester 31:40

And I think that gets to something that you've talked about in another podcast, and that is authenticity.

Robert Hunt
Exactly.

Eddie Rester 31:48

One of the things, maybe one of the dangers of living in an AI world, is that we continue to do a couple things. One is we continue to get pushed further and further away as we allow artificial intelligence to do tasks for us, to relate to others on our behalf. But there's this other piece of even losing the ability to engage well with others. And I think that's important for us to think about in this world where AI is being handed so many of these basic tasks of writing emails, being a chat bot, getting my next airline ticket, when things go off the rails.

Robert Hunt 32:36

I think you're absolutely right, because, one, automation of things, automation of tasks, this has gone on for a long time. The first time a farmer used an ox to plow a field, we were beginning to go down that road. This is not a new problem for us. However, it becomes a problem when we substitute those relationships for human relations. And by that, I mean human-to-human, real-time relationships. If we allow AI to be the mediator, then it's just going to be much more efficient than the really non-AI chat bots that we already confront online when we try to do something like ask a question. Super frustrating. That's going to get a lot better. It's good. It's getting better.

Chris McAlilly 33:28

It is getting better. But I still think it... Maybe it will get to a point where we are... There's an imperceptible difference between content produced by AI and content that is produced by humans. I do... I have... Have you heard this concept of AI slime?

Robert Hunt 33:46 Yes.

Eddie Rester 33:50 What is that?

Chris McAlilly 33:51
The idea that the internet will just become an AI garbled dumpster?

Robert Hunt 33:58

Right. Exactly, it's just full of AI slime.

Chris McAlilly 34:00

Yeah, just AI content will be in every corner of the internet. We won't be able to get away from it. It'll just be garbage. Just poor quality AI-generated content will be everywhere, and I don't know... Maybe this gets... I think I'm just continuing to try to probe this idea of what it means to be human. And perhaps part of it is the sense that we can perceive and engage and participate in a purpose in the way that God does. I think there's also this sense of creativity, this kind of artistic dimension to what consciousness can do. We can engage the materiality of a particular place and time within the context of a particular story, and we can offer some new reality, whether it be in language or the visual arts or in music. And all of those things, I think, are these expressions of what it means to be human. So, you know, perhaps that's a part of it as well. I don't know. What do you think about AI slime?

Robert Hunt 35:10

Well, one, I think you're on track. This is very important. I've just written a think piece on art and AI, creativity and AI, and it should come out in the next couple of weeks in a bunch of newspapers. We'll see. It's been accepted. What we forget is that art... One, the word "art" doesn't actually come from being creative. It comes from the idea of having skill. Okay, so an artist isn't just creative. An artist is a person who's mastered the skills that are necessary to make their creative ideas come to life. More importantly than that, art doesn't exist by itself. No matter how skilled I am, if I live in a room of my own creations that nobody sees, it's not art.

Eddie Rester 36:04 Right.

Robert Hunt 36:05 Art's a community project.

Chris McAlilly 36:07 Agreed.

Robert Hunt 36:08

You know, so, and I think this is where, you know, it's fun that people can create this digital art. I've delved into that realm myself. I can tell you something about it. It's pleasurable to give a prompt that makes ChatGPT 4.0 come up with a cute picture that's weird, okay. Or pictures where you end up with three hands or a nose on the side of your face. Actually, it can be trained to not do that so badly. I will just say that the Garden of Eden in steampunk style is pretty awesome. But I personally don't feel any satisfaction at that. I've entertained myself for five minutes, okay, but because I have no skill in drawing or painting or anything like this, I know that I gave nothing to this except I created a cute prompt. I would not buy it. If I go shop for art at all, and I don't always, and I do sometimes when someone drags me to an art fair or something, the only reason I buy something is because there's a backstory. I meet the human who made it. And I don't think anybody that's paying a million bucks for a piece of art is going to do it if they don't know the human story behind it. Right? AI can create a Picasso that experts couldn't tell from the real thing. Nobody's going to buy it, because we're as interested in the artist as in the visual thing that comes out. That's what makes it art. There's a human behind it, not a set of algorithms. But we're going to have to keep focused on this, because, and this is especially so in the church, if our idea of worship is something that happens on a screen, right, that keeps me entertained for an hour and a half, mildly stimulated, maybe slightly inspired, if that's what worship is for me, then AI can do that now.

Eddie Rester 38:20 Right.

Robert Hunt 38:21

Top to bottom and end to end. But will I be engaged in the worship of God if I'm not in the community of God worshiping? You know, I think the answer is no. I don't think that's going to be fulfilling, and it's not going to develop our humanity.

Chris McAlilly 38:36

Yeah, this is so, so interesting. So what is fulfillment? I mean, it raises questions of what human fulfillment, or human flourishing, looks like. And what does it mean to cultivate a world in which that is possible? And I think what you're describing is a world that is relational. And I do think that that's part of consciousness, that it's not just I'm here by myself engaging in a non-material, ethereal world, but that there is a relational dimension to human consciousness, that I know that I am because I'm kind of called out or called for. I'm reminded, I just had a conversation with a college student recently who was talking about her engagement with the church, and she was saying there was someone that she encountered along the way that called out her potential. And I was thinking that, you know, if you think about the first... Like, a baby is in the world looking for food, and then kind of connects to a mother's face or gestures. And there's a sense in which, from the very earliest moments, to become human, to engage in not just sentience, but consciousness, it has to be called for. There's this relational component to it. Right. I mean, we don't... We've made many efforts to name ourselves, but no name sticks until it's given to us.

Eddie Rester 39:40 Right.

Robert Hunt 39:41

We have to be named. This is the ancient ritual of baptism, right? You receive your Christian name. This is the name by which God calls you into service, okay? And at the same time, I do think, and I think consciousness is a very key question here, and one of the things that is called upon now is for Christian theologians to not take consciousness for granted as part of what it means to be human, and to understand it as a serious category that overlaps with the concept of Spirit, right? So God breathes into Adam the first Spirit. Slightly different things happen in First Corinthians 15. What does it mean to have a spiritual body, right? But we're doing this, and the reason we need to do it is because we're doing it in an environment in which there are two competing views of human consciousness, and these views come into popular media and influence our thinking. So if I can, really quickly. View one would be the view that you find in 2001: A Space Odyssey with HAL, and, by the way, in Heinlein's "The Moon Is a Harsh Mistress." HAL becomes so complex that HAL is conscious. Okay, so how do you get HAL out of the way? You start pulling parts of himself out, modules. Eventually he's not complex enough or in touch with the world enough to be conscious, okay? The other view, roughly speaking, is that what we think of as consciousness is not. It's just an illusion as different parts of our brain, based on algorithms, take center stage to guide us. Now, in its own cute way, this is what the movie Inside Out is based on. There are competing things in it that are trying to take control of the brain, okay? And that's all we are. Now, I don't think Inside Out is promoting an evil view of consciousness or anything like that. It's a cute movie.

Eddie Rester 42:23
Cute movie. Yeah. It missed, yeah, but I see what you're saying.

Robert Hunt 42:25

About the way many people feel, yeah, about the way we feel inside. We can understand that. But we do need to realize that, more seriously, behind it is a view that says consciousness is itself illusory. There's a new Scientific American article. I haven't read it yet, but I did hear an interview on Think with the author of it, about life after near-death experiences, right, or post-death experiences, being taken seriously scientifically. And one reason people have not taken them seriously scientifically is they don't fit in with a materialist, a purely materialist, understanding of the brain. So it's good to see that scientists are taking two things seriously. One is that they can't know everything there is to know, by definition. Their own math tells them that. And the second thing is that there are aspects of what it means to be human that are not reducible to material interactions. By the way, though, there are serious tests being designed to try to prove one way or the other on this. We shouldn't... You know, I just think that the problem of scientifically proving or disproving the reality of consciousness, or the problem of the brain and the mind, is, for some of the same reasons, as unlikely to be solved as it is that we will actually know what happened one one-millionth of a second before the beginning of the Big Bang, or what happens at the end of the universe.

Eddie Rester 43:59 Yeah.

Robert Hunt 43:59

The math itself tells us that it can't be solved. It's not a soluble problem. And so we're gonna have to recognize that we, who do not live entirely in the realm of science, actually have something that we need to say.

Eddie Rester 44:22
And I think that may be an important place, a good place, even for us to lay down, because we've entered now into the realm of mystery, something beyond.

Robert Hunt 44:33 Right.

Eddie Rester 44:33

Just the numbers and the algorithm, and maybe our best response is to continue to claim who we are as God's children, created in the image of God, who get to engage in mystery, and in that mystery, discover purpose. We talked a lot about purpose today, and one of the things Harari says in "Homo Deus" is that the thing that's missing in his model, and he admits it, as we become this God-human, is purpose.

Robert Hunt 45:03 Right.

Eddie Rester 45:03

Nothing that explains how we find purpose, our purpose. And I'm sitting there reading the book, going, I can tell you where we find our purpose, buddy. But yeah, so I wish we could talk for another hour. This has been... I'm a science fiction geek. I'm a science geek. Chris would just call me a geek. I don't know what he'd call me. But thank you for taking your time again and again to get this episode recorded, and thank you for... When's your book coming out?

Robert Hunt 45:38
I'm hoping, late this year, in the fall.

Eddie Rester 45:41 Okay.

Robert Hunt 45:43

And I'll get back to you guys on that. If I can just say one other thing. In this particular world that we're living in with AI, with all of the ways that our screens and our devices try to engage us and pull us away from humans, I think we have a Christian obligation to not leave our fellow humans on their own. We need to be out there inviting them into the company of humans so that they don't slip into the sub-human relationships with chat bots that offer some immediate gratification but can never really fulfill their humanity.

Chris McAlilly 46:18

Yeah, if I had to sum up our conversation, it would be, you know, we're more than machines. We're persons, but we're more than individuals. We're both spiritual and material, and we find our uniqueness and fulfillment and flourishing in community with other people, not in isolation. And so I think that might be a good place to land. And it's been great to connect with you, Robert. Thank you so much for your engagement, for raising these important, difficult, interesting questions. And, as Eddie continues to be stumped by AI, we'll have to invite you back to help give Eddie a little bit more clarity.

Eddie Rester 46:58 Yeah, at any time, yeah.

Robert Hunt 46:59
Okay, thanks. I appreciate very much your invitation to be on The Weight.

Eddie Rester 47:04 Thank you very much.

Chris McAlilly 47:05 Take care.

Robert Hunt 47:05 Take care.

Eddie Rester 47:06
[OUTRO] Thanks for listening. If you've enjoyed the podcast, the best way to help us is to like, subscribe, or leave a review.

Chris McAlilly 47:15

If you would like to support this work financially, or if you have an idea for a future guest, you can go to theweightpodcast.com. [END OUTRO]
