
Finding Home via Yellow Brick Digital Roads

Those who know me will know that artificial intelligence (AI) is one of my special interests. I've never been a reader of sci-fi; I haven't even seen a Star Wars movie since 1978! But now that AI is within the realm of real life, I've taken a keen interest. With the introduction of ChatGPT, many of us have been able to explore how our daily tasks can be supported by generative AI. I have friends in ministry who use it to summarize the key points of their sermons for social media purposes. I've used it to create a resume, which I didn't end up using because, frankly, even user-friendly AI is a challenge for me.


My interest in AI is not so much in how it can help me to summarize a sermon but in how it comes up with the information it provides. Who decides what gets put into the data that powers ChatGPT? Who is responsible for the latent values programmed in? Who is accountable when it messes up? What is the ultimate purpose of this technology?


I have no interest in trying to shut down the development of AI. It can do countless amazing things. But I'm not sure the average person has any idea how quickly it's advancing, and I am concerned that growth-driven corporations should not be the sole source of the values and vision embedded in these technologies.



Below is an article I wrote on the topic (it's a long one, be warned). It was published in the June 2024 issue of Touchstone: Theology Shaping Witness. According to their website: "The purpose of Touchstone is to bring The United Church of Canada's heritage of theology and faith to bear on its own present life and witness, and to engage the issues of our day in the light of the biblical message and the Christian tradition." A subscription for full online access and hard copies of the periodical is a mere $28.00 Canadian a year. You can find out more here: https://www.touchstonejournal.ca/

As a bit of fun, the images that follow have all been created by AI. These images are not part of the originally published article in Touchstone.

When it comes to art, let me say unequivocally that I always prefer human creations!


FINDING HOME VIA YELLOW BRICK DIGITAL ROADS

By Beth Hayward

 

Fun, games, serious science, and a big worry


I asked Siri recently if he's God. I use a rich baritone voice for my Apple assistant, and I've taken the liberty of assigning it he/him pronouns. He replied to my question with this: "I'm not a robot or a person, I'm software here to help." I felt like he was avoiding the question, so I asked again: "Siri, are you God?" This time: "I'm a virtual assistant, not an actual person but you can still talk to me." It was disappointing. I was hoping to get a glimpse into what Siri's programmers have to say about the nature of God. Turns out they prefer to avoid the topic.

Artificial Intelligence (AI), the driver behind Siri and so many other 'helps' we access daily, is pervasive throughout all aspects of our lives: from GPS maps that take the stress out of getting where we want to go, to those handy, and often annoying, autocorrect systems in text messages, to the ads that pop up on our screens presenting the precise product we just mentioned. The latest iteration of generative AI, ChatGPT, has gained tremendous popularity and some notoriety since its launch in November 2022. It can write prayers, teach the times tables, make medical diagnoses, translate, and write code.




AI is a broad and evolving field with many subsets. In general terms it refers to the ways machines simulate human intelligence. These systems simulate things such as learning, reasoning, and self-correcting. To call what machines do intelligent is a stretch; they produce expected results given their objectives, or they generate new results based on patterns. Troubles arise when we consider that machines don't have objectives of their own, and that the patterns they follow come from data sets of human patterns, which are often biased. In 2018, the mega-company Amazon scrapped its AI hiring algorithm after admitting it was biased against women, a result of being trained on data that itself was biased against women.[1] AI is only as woke as the people behind it.
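As a small, concrete illustration (and only an illustration): below is a minimal, hypothetical Python sketch with invented numbers, not Amazon's actual system, showing how a model trained on biased historical decisions simply learns that bias and hands it back as a "prediction."

# Toy, hypothetical example: invented hiring records, not any company's real data.
# Each record is (years_of_experience, gender, was_hired).
history = [
    (5, "man", True), (3, "man", True), (2, "man", True),
    (5, "woman", False), (6, "woman", False), (3, "woman", True),
]

def hire_rate(records, gender):
    outcomes = [hired for (_, g, hired) in records if g == gender]
    return sum(outcomes) / len(outcomes)

# The "model" here is nothing more than the pattern already in the data.
learned = {g: hire_rate(history, g) for g in ("man", "woman")}

def recommend_interview(candidate_gender):
    # The bias in the training data becomes the bias in the output.
    return learned[candidate_gender] > 0.5

print(learned)                       # {'man': 1.0, 'woman': 0.33...}
print(recommend_interview("man"))    # True
print(recommend_interview("woman"))  # False

The model has no values of its own; it faithfully repeats the values, and the injustices, already present in the records it was given.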


In the past quarter century AI has beaten humans at chess, at Jeopardy, and, to the giddy surprise of its creators, at the ancient Chinese board game Go. In a grueling five-game match in 2016, the AI-powered AlphaGo took down the world's Go champion, Lee Sedol. The game is vastly more complex than chess. What fun!



It's not just fun and games. In 2020 MIT researchers announced the discovery of a novel antibiotic, able to kill strains of bacteria that had been resistant to existing antibiotics. This was accomplished by training the AI on data from 2,000 known molecules. The AI then surveyed a library of 61,000 molecules and found one that fit the criteria for a new antibiotic. It was named halicin after HAL, the artificial intelligence in the movie 2001: A Space Odyssey.[2]



AI has the potential to facilitate fun, to offer expedient solutions to problems that confound human minds, and, according to a group of scientists and tech business leaders, to bring the world to the brink of existential crisis.[3]

It's curious that those who create and bankroll AI's rapid advance, to the tune of unthinkable sums, are also warning of its potential capacity to cause harm.


The same people who warn of the existential risk of AI do offer some antidotes. Sam Altman, CEO of OpenAI, the force behind ChatGPT, suggests that artificial general intelligence can help us deal with problems created by AI in the first place.[4] This leads one critic to ask: "[If] the only way of dealing with the problem of AI is to [. . .] hand the problem to a more powerful AI [i]t becomes a very precarious question of pitting god-like systems against one another and hoping that the one fighting on our side is the stronger."[5] It shouldn't be a surprise that bigger and stronger is the AI developers' answer to the question of threat. It's an age-old fix: tried, if not true.


The fun, curious, and helpful things AI can achieve tend to be where most conversations about this life-altering technology begin and end. Deeper questions about the ethics of how AI obtains its data, whose interests are served, or just how much privacy ordinary people must relinquish to access the benefits of AI are treated as footnotes to discussions that ordinary citizens are led to believe they don't have the time or wits to engage in. Johann Hari, in his book Stolen Focus: Why You Can't Pay Attention – and How to Think Deeply Again, writes the following:

 

"One day, James Williams--the former Google strategist I met--addressed an audience of hundreds of leading tech designers and asked them a simple question: “How many of you want to live in the world you are designing?” There was a silence in the room. People looked around them. Nobody put up their hand."[6] 


Advances in AI are growing exponentially, and its presence in our lives is ubiquitous. Most of us are not paying attention. We theologians dismiss conversations about AI at our peril. As experts in the field of meaning-making, we have a responsibility to bring that expertise to bear on technology that is re-shaping every aspect of our common life.

 

AI – Deep down the religious rabbit hole


My curiosity about AI began a few years ago when I read an article in the New York Times entitled "Can Silicon Valley Find God?" Journalist Linda Kinstler referenced a scene from the HBO series Silicon Valley, a satirical play on the tech world's aversion to all things religious: "You can be openly polyamorous, and people will call you brave. You can put microdoses of LSD in your cereal, and people will call you a pioneer, [ . . . ] but the one thing you cannot be is a Christian."[7] After a good laugh I went on to read this: "[AI] is ubiquitous, yet it remains obscured, invoked all too often as an otherworldly, almost godlike invention, rather than the product of an iterative series of mathematical equations."[8] I wondered: what are the implications of AI replacing God? What kind of God are we creating? What does it mean when humans create God anyway?


Reflecting more on Kinstler's article, I took a deep dive into AI, and some troubling questions began bubbling to the surface. Who determines what ethics and values are baked into AI? Is AI intended to make our lives better and easier, or, like so much in our world, is it one more expression of the market economy bringing the intricate balance of life to the brink? In a world where, for many, God is dead, is there any harm in appointing AI as a replacement? Intrigued by this idea that AI is an otherworldly, godlike invention, I reached out to Kinstler to find out more. Here's what she said when I interviewed her on my podcast:

 

AI cannot be the second coming, cannot be a god, because we cannot know God, we cannot create God, that’s the kind of fundamental humility which frankly is not abundantly present in [Silicon Valley][ . . . ] that’s why we need to have different perspectives going into the creation of AI.[ . . . ] It’s not enchanted and yet we speak of it as if it is. And so, if we’re going to infuse it, if we’re going to speak about it in these religious metaphors, whether we regard them as religious or not, you need to have people from different faith backgrounds contributing to it because otherwise it will be dominated by possibly dangerous ideologies.[9]

 

It became clear to me that AI is like a new religion, and the hidden few who control its application are the God behind it. In truth, they aren't God; they are the Wizard, and we keep following the yellow brick digital roads they pave for us. Often without knowing it, we set our hearts, brains, and courage aside as we open our phones and are either enamored by the possibilities or shocked by the intrusion.

Employing God language without humility has been a source of great harm. Those who claim the identity of Christian have insisted on the terms of salvation, saying that it is reserved for the few whom they deem worthy. If AI saves us, who determines the criteria for salvation? Will some be left out? If the God that AI is becoming is all-powerful and transcendent, how will ordinary citizens access this God? Christianity has a checkered past when it comes to its ability to differentiate between the all-powerful God and the all-powerful institutions that define that God. I wonder if we might take what we have learned from the damage done in the name of God and insist on a new framework for this pervasive technology. Instead, it seems that we have pulled a dead God, or one that ought to be dead, from another era and simply imposed it on this new thing. New wine, old wineskins.


When theologians abdicate responsibility for stewarding and interpreting these words, they risk the words being used in ways that warp perspectives of God and humans, undermine our agency, and leave people disempowered. When we accept our powerlessness, we end up feeling helpless in the face of an almighty technology that we can't fully comprehend.


One of the primary ways we interact with AI is through algorithms, a term shrouded in mystery. “Look what the algorithm put on my screen!” “I feel like the algorithm is watching me.”



How an algorithm works is mysterious to most, leaving us to consign it to the realm of divine, unknowable transcendence. What we don't fully appreciate is that algorithms did not arrive in our midst as some pre-formed entity: as a human made of clay and plunked down in a digital garden. Algorithms are the result of choices their creators make in code. Johann Hari, in conversation with former Google engineer Tristan Harris, blows the hot air out of the algorithm god. According to Hari, "Tristan taught me that the phones we have, and the programs that run on them, were deliberately designed by the smartest people in the world to maximally grab and maximally hold our attention. He wants us to understand that this design is not inevitable."[10] When I read this, I stopped for a moment and let the weight of this truth settle in. Did I sense a whisper of empowerment as I considered that algorithms don't have to be the way they are, that we might be able to reclaim some of our agency in relation to this unruly creation?
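To take a little more hot air out of the algorithm god, here is a minimal, hypothetical Python sketch, purely illustrative and not any platform's actual code, of what a feed-ranking "algorithm" can look like. The only thing that makes it attention-grabbing is the objective a human designer wrote down; a different value produces a different feed.

from dataclasses import dataclass
from typing import List

# Toy, hypothetical posts -- invented numbers, not any real platform's data.
@dataclass
class Post:
    title: str
    predicted_minutes_watched: float  # how long the system expects us to linger
    predicted_wellbeing: float        # how good it expects us to feel afterwards (0 to 1)

def rank_for_attention(posts: List[Post]) -> List[Post]:
    # The designer's chosen objective: maximize time spent looking at the screen.
    return sorted(posts, key=lambda p: p.predicted_minutes_watched, reverse=True)

def rank_for_wellbeing(posts: List[Post]) -> List[Post]:
    # An equally writable objective: surface what serves the reader.
    return sorted(posts, key=lambda p: p.predicted_wellbeing, reverse=True)

feed = [
    Post("Outrage-bait thread", 14.0, 0.2),
    Post("Long read from a trusted journal", 6.0, 0.9),
    Post("Neighbourhood food-bank update", 2.0, 0.8),
]

print([p.title for p in rank_for_attention(feed)])  # outrage first
print([p.title for p in rank_for_wellbeing(feed)])  # a different value, a different feed

Neither ranking is more "natural" than the other; which one runs is a human choice, which is exactly why the design is not inevitable.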

 

Silicon values


Years before AI's full infiltration into our lives, theologian Ian Barbour called for a redirecting of technology. Rather than serving the needs of corporations, governments, and economic structures, Barbour proposed that technology be redirected to serve the common good. He said that "the welfare of humanity requires a creative technology that is economically productive, ecologically sound, socially just, and personally fulfilling."[11] Technology oriented to serve the common good sounds not just like a noble cause but like an invitation rooted in gospel values.


As I learned more about what happens behind the veil of AI, it became clear that consideration of what values are programmed into technology is rarely conscious and is usually decided by computer programmers, not by ethicists or policymakers, and certainly not by theologians. AI doesn't have any objectives of its own; it must be programmed to 'exist'. This initial step is loaded with values and potential biases. Is this not an area of our common life that should be the business of faith? Tristan Harris left Google when he could no longer ignore what he viewed as a lack of commitment to ethics. This is what he told an audience after leaving:

 

"I want you to imagine walking into a room. A control room, with a bunch of people, a hundred people, hunched over a desk with little dials – and that that control room will shape the thoughts and feelings of a billion people. This might sound like science fiction, but this actually exists right now, today. I know because I used to work in one of those control rooms."[12]

 

Silicon Valley values are generally closer to corporate values than gospel ones: the values of free-market capitalism, growth at all costs, profits above people. Gospel values are decidedly about the common good, about loving God and neighbour, about challenging systemic structures that leave people behind, about calling to account powers and principalities that value profit over people and individuality over interconnection. I'm not suggesting a colonizing of Silicon Valley by Christian values, but rather some fundamental humility to keep dangerous ideologies at bay.


"[. . .] All the eternal questions have become engineering problems,"[13] writes Meghan O'Gieblyn. As engineers approach a problem, they do so with their assumptions and values as well as their technical skills. If the problem is a human one – for example, how do I make changes to my health and lifestyle? – the engineers behind the AI can apply certain code-based principles and we get something like the Weight Watchers app: lots of regular feedback, helpful hints, meal plans, all designed to have a result for me. But let's take a different example, a more contentious one: will there be an app to guide us through the complexities, the moral discernment, of Medical Assistance in Dying (MAID)? If the data goes into that app but the conversations, the human struggles, the theological considerations are missing, what has been created?


The potential risks of the Silicon values are great. As we know from the many studies of 'religion meets secularism', we dare not walk away or engage in a defensive battle with forces that have determined to dismiss religion. We need to clarify our thinking and engage this new reality of AI in our lives with faithfulness. We need to put forward gospel values, not those professed by a religion co-opted by a twisted society, by capitalism, by neo-liberal rhetoric, or captivated by tradition, but life-affirming Christian values, the ones Jesus lived: things like radical hospitality and transformed relationships that create loving community where the last become first and the oppressed are raised up.

 

Silicon God


As AI continues at warp speed to infiltrate every aspect of our lives, I've noticed another troubling trend: all the worst popular assumptions about God are being resurrected to describe our relationship with AI. Our language of faith is slipping more and more into an earlier and potentially damaging construct: AI is rapidly becoming the all-powerful God who saves us. Surely we are not going to settle for the small group of players behind the veil of AI playing the role of God. Why in the world would we go back to this old story with new technology?


"Today artificial intelligence and information technologies have absorbed many of the questions that were once taken up by theologians and philosophers: the mind's relationship to the body, the question of free will, the possibility of immortality [. . .] All the eternal questions have become engineering problems."[14] But the engineers are now the almighty God, and that God has let us down time and again, failing to get us out of the binds we find ourselves in: a God who is separate from creation, all-powerful, unmoved by the plight of humans. There is another God out there, or in here, one that I've come to know through a long journey of struggle, persistence, and curiosity. A God that has much to reveal about a way forward: a persuasive, relational God, who does not singlehandedly determine the future but calls to us moment by moment to partner in co-creating a world infused with kin-dom values.

 

Relational God


We’re making God in our image when there are much better images of God to be found. The relational God I learned about as a child and further came to grasp through study and practice as an adult, might just have something to teach us about finding a way forward and discovering our voice in this important discussion.


A story: I remember standing beside the open casket at my grandfather's wake. I was nine years old, eye level with his powdery face and sharp jawline. Peering into the casket, I could feel the cold of the wood, the satin, the body emanating out, seeping into my skin. It's the first memory I have of being aware of a tangible sense of the interconnection of all things, even the living and the dead. I hardly knew this grandfather; there was nothing I recognized in that casket. As I gazed at the body for what felt to my nine-year-old self like a very long time, I realized something was happening. Something strange. I heard my name called. I turned around and found no eyes in the room connecting to mine. I needed no convincing that the voice was that of my grandfather, reaching out in death in a way he never did in life. It was something of a comfort on that strange day. Later and older, I wondered if, in fact, I had heard the voice of God. Either way, what I longed for beside the casket that day was to be seen and known, to be in relationship, a possibility that had ended with my grandfather but which I could now pursue with verve and conviction with my God and people.


This relational God implies another way of being human, one where we resist the urge to play God, where strength is found in humility and vulnerability, where our relationship with God, the world, and one another is all interconnected. As Kester Brewin suggests: "Perhaps our real salvation lies not in accepting that we are as gods, and finding ways to get good at it, but in truly coming to terms with the outrageous miracle of human being, of accepting that we are fragile, frail, forgetful, but with opposable thumbs and immense imaginations."[15] A relational God lifts up these values and can inform theologians and lay people who choose to speak up to the Wizards behind AI.


The relational God is written off by some as powerless or, at best, ineffectual. But I say that if we take the truths about an open and relational God and expect something similar from AI's creators, as we would from such a God, we might find our theological voice in a conversation that has shut out religion and stolen our outdated language.

 

Promise behind the veil


I don't fool myself that this is easy, or that it's simply a matter of letting religion in and all will be well with AI; goodness knows religion has its own checkered past. AI has impacted all areas of our lives and is here to stay. We can't stop it, but we can be smarter and more informed; we can demand more. The particular and unique role for people of faith lies in the theological and ethical issues raised by AI. We ought to be a voice that regulators hear, even if corporations aren't listening.


I care about AI because I care about people. I ache for people to have every opportunity to live life abundantly, for there to be enough for all, for values of care, mercy, and radical welcome to be embedded in every aspect of our common life. Maybe Siri's avoidance of the God question revealed more than I first thought; maybe it was confirmation that AI will never truly be in relationship with us. It is, as Siri says, "not an actual person." No matter how much it imitates us, how much it reinforces our opinions like a deep conversation with a close friend over a warm beverage, AI is not truly capable of the essential human capacity for relationship.


I'm not sure whether it's humans or the AI we create that has the greater likelihood of becoming omnipotent; perhaps it's just a matter of degree. What I do know is that the sheer speed of AI advances means we will soon be running to catch up. There is no going back, no putting it away for a while as we sort out the ethics. The time to reclaim our language and demand more is now.


AI might feel like a new and mysterious story, but it's rather age-old. Countless tales wind their way along roads that make false promises and lead to wizards eventually revealed as the feeble humans they are. But even those stories can arrive at unexpected resurrections, when the characters choose to make their way home. Perhaps we can find our voice in the technology conversation by committing again and again to the values of our religious home. Maybe the software that is Siri and other AI iterations can be reshaped to help us co-create a world where no one is left behind.


[1] Jeffrey Dastin, “Insight – Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, October 10, 2018, https://www.reuters.com/article/idUSKCN1MK0AG/. (Accessed April 29, 2024).

[2] Anne Trafton, "Artificial intelligence yields new antibiotic – A deep learning model identifies a powerful new drug that can kill many species of antibiotic-resistant bacteria," MIT News Office, February 20, 2020, https://news.mit.edu/2020/artificial-intelligence-identifies-new-antibiotic-0220. (Accessed April 19, 2024).

[3] In May 2023 a group of tech and business leaders, including Sam Altman, Bill Gates, and Geoffrey Hinton, released the following statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”  “Risk of extinction by AI should be global priority, say experts,” The Guardian, May 30, 2023, https://www.theguardian.com/technology/2023/may/30/risk-of-extinction-by-ai-should-be-global-priority-say-tech-experts. (Accessed April 29, 2024).

[4]  Kester Brewin, God-like: a 500-year history of Artificial Intelligence in myths, machines, monsters (Vaux Books, 2024), 12.   

[5] Ibid., 12.

[6] Johann Hari, Stolen Focus: Why You Can't Pay Attention – and How to Think Deeply Again (New York: Crown, 2022), 123.

[7] Linda Kinstler, "Can Silicon Valley Find God?," New York Times, July 16, 2021, https://www.nytimes.com/interactive/2021/07/16/opinion/ai-ethics-religion.html. (Accessed April 29, 2023).

[8] Ibid.

[9] Linda Kinstler, "Souls in Soles S:2 E:4," Beth Hayward, March 23, 2022, https://soulsinsoles.podbean.com/. (Accessed April 29, 2024).

[10] Hari, Stolen Focus, 128.

[11] Ian Barbour, Religion in an Age of Science: The Gifford Lectures, Volume One (HarperOne, 1990), PAGE.

[12] Hari, Stolen Focus, 112.

[13] Meghan O'Gieblyn, God, Human, Animal, Machine (New York: Knopf Doubleday Publishing Group, Kindle Edition, 2021), 8.

[14] Ibid., 8.

[15] Brewin, God-like, 193.
