Artificial Intelligence (AI): Tool, Image Bearer, or Temptation?

By Mike Kirby and Matthew Emadi

Abstract: The headlines of today are saturated with talk of Artificial Intelligence (AI), from how AI can improve your business to warnings of how it might transform our government, our schools, and even our churches. However, the future of AI as a technology was not always certain. Roughly five decades ago, 1974 marked the start of the “AI Winter,” a period of reduced federal funding and consequently a reduced research focus on AI. In the intervening fifty years, researchers rethought their earlier mechanistic views of intelligence, moving instead towards a ‘learning model’ of developing intelligence. This shift in focus revolutionized the field of AI and has led to many of the advances we see today. The same shift, however, has moved AI from being a tool that we control to a technology that we shepherd. It is this distinction between tool and trainee that lies at the heart of many of today’s discussions on “the future of AI.”


In this essay, we will explore from a Biblical perspective three aspects related to AI: AI as a tool, AI as a trainee, and AI as a temptation. Used as a tool, we see that AI has many similarities to other technological advancements that we have used both to better our lives and to further the proclamation of the Gospel. As a trainee, we see that AI forces us to reestablish and reaffirm our views of mankind being made in the image of God and to consequently wrestle with what it means for AI to be made in the image of the image of God. As a temptation, we must reaffirm our God-given mandates and not cede them to technology. We conclude with our thoughts on the open questions that need to be explored in this area, as well as advice on how pastors can shepherd their congregations well during this exciting time of technological advancement.



Introduction

Many Christians consider Paul’s statement “when the fullness of time had come, God sent forth his Son” (Gal. 4:4–5) to encompass not only the theological fulfillment of God’s plan of salvation but also God’s preparation of the geographical, political, and technological backdrop into which Christ was born. Roman engineering paved the way, literally, for the fulfillment of the Great Commission (Matt. 28:18–20). A Christian appropriation of technology, however, does not stop there. It was not long before Christians transitioned from scrolls to the “new-fangled” book technology of the time—the codex—and with it came our move from being “people of the scroll” to being “people of the book” (again, literally). With a belief that “every good gift and every perfect gift is from above” (James 1:17), Christians through the ages have embraced various technologies as a means of spreading the Gospel. The ever-expanding development and adoption of technology by humankind, however, requires Christians within their time and context to evaluate new technologies for their potential to be used in God-honoring ways.

Today is, in some ways, no different from any other period in history; yet, in other ways, it is very different. The difference is not the need to adapt to technology, but the rate at which society (and consequently the church) is being forced to confront and adapt to technological advancement. Futurist Ray Kurzweil, well-known for his commentaries on the exponential growth of technology in our age, has predicted that “the Singularity is near.” Kurzweil defines “the Singularity” as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.”[1] Although we as Christians would argue that true transformation comes only by grace through faith, we might acknowledge that we are reaching another possible paradigm shift: the age of Artificial Intelligence (AI).

1. Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology (New York, NY: Penguin Books, 2006), 7.

Implicit in the evaluation of many technological advancements of the past has been the view that technology is, at its core, a means of enhancing, extending, augmenting, and/or amplifying the things that we as humans do.[2] The old adage of technology doing a task “better, faster and cheaper” was in essence a statement measured against how we ourselves might do the task. AI, however, is different, for it also has the potential to resemble, imitate, and even impersonate the things that we as humans do.

2. John Dyer, From the Garden to the City (Grand Rapids, MI: Kregel Publications, 2011).

There are many tasks that we as humans accomplish that we are willing to delegate to the tools we use. AI, however, has now moved into the realm of doing things that appear more human-like, such as communicating through language (e.g., ChatGPT). As with every technology, AI has the potential to be mishandled or misappropriated. Both the power and the potency of AI have the potential to stimulate temptation, which in turn leads to people being “dragged away by their own evil desire and enticed. Then, after desire has conceived, it gives birth to sin; and sin, when it is full-grown, gives birth to death” (James 1:13–15).

The purpose of this article is to answer the question, “What is AI?” and to reflect on its strengths and weaknesses from a biblical perspective. As mentioned, we will consider AI as (1) a tool, (2) a trainee, and (3) a temptation. In what follows, we will first briefly provide some of our theological presuppositions about technology. Second, we will give a brief tutorial on the history and terminology associated with artificial intelligence. Third, we will present our threefold taxonomy and consider what it means to view AI as tool, trainee, and temptation. Finally, we will conclude with some theological reflections.

Presuppositions

There is a long history of studying the ethics of technology: from life-giving uses of technology (e.g., reproductive technologies)[3] to life-ending technologies (e.g., technologies used in war).[4] The starting point of all these studies is an acknowledgment that God is the source of innovation and providentially oversees its development and use[5]: “Behold, I have created the smith who blows the fire of coals and produces a weapon for its purpose. I have also created the ravager to destroy” (Isa. 54:16). We agree with Jason Thacker that “Technology is amoral but acts as a catalyst that expands the opportunities for humanity to pursue. It is not good or evil in itself but can be designed and used for good and evil purposes.”[6]

3. John Jefferson Davis, Evangelical Ethics: Issues Facing the Church Today, 4th ed. (Phillipsburg, NJ: P&R Publishing Company, 2015), 59–89.

4. John Jefferson Davis, Evangelical Ethics, 251–25.

5. Tony Reinke, God, Technology and the Christian Life (Wheaton, IL: Crossway, 2022), 55.

6. Jason Thacker, The Age of AI: Artificial Intelligence and the Future of Humanity (Grand Rapids, MI: Zondervan Thrive, 2020), 26.

Counter to the secular humanists who hold that “technology can solve almost any problem,” we know that our fallen condition is a problem that humanity cannot resolve. Only God can atone for sins, only God can raise the dead, only God can make a new creation. And yet, ironically, even here the payment for sins came upon a tool—the Roman cross. What man intended for evil, God used for good, and the good news is that by Christ’s death man can receive eternal life.

In light of God’s sovereign rule and our creaturely dependence, David Ehrenfeld has said that “deep within ourselves we know that our omnipotence is a sham” and “our knowledge and control of the future is weak and limited.”[7] For the purposes of this study, it is important to appreciate that technologies amplify and channel animated power.[8] Lord Acton is credited with the saying, “Power tends to corrupt, and absolute power corrupts absolutely.” However, a recent study suggests that power does not in fact corrupt; rather, it “heightens pre-existing ethical tendencies.”[9]

Thus, as we will see, the role of AI in the church gets at deeper questions. Following the Christian ethicist Oliver O’Donovan, we hold that “If a moral ‘issue’ has arisen about a new technique, it has arisen not because of questions the technique has put to us, but because of questions which we have put to the technique.”[10] The question we are putting to the “technique” of AI is: What are the liberties and boundaries God has set on us, His image-bearing creation, when we exercise our God-given talents to create images of ourselves? We may not answer all of these types of questions, but in order to understand the relevance of these questions, we must now turn to a brief tutorial on AI.

7. Norman L. Geisler, Christian Ethics: Contemporary Issues & Options, 2nd ed. (Grand Rapids, MI: Baker Academic, 2010), 318.

8. Tony Reinke, God, Technology and the Christian Life (Wheaton, IL: Crossway, 2022), 21–22.

9. Katherine A. DeCelles, D. Scott DeRue, Joshua D. Margolis, and Tara L. Ceranic, “Does Power Corrupt or Enable? When and Why Power Facilitates Self-Interested Behavior,” Journal of Applied Psychology 97, no. 3 (2012), 681–2.

10. Tony Reinke, God, Technology and the Christian Life, 241–2.

Background of AI

When discussing the background of the development of a particular technology, it is often helpful to select a transition point in history from which we can make generalized statements about the past (i.e., prior to that point), while observing what has transpired since. For the history of Artificial Intelligence (AI), World War II (WWII) demarcates a transition in computing.

In the decades prior to WWII, a “computer” was a person who computed (think: the book and movie Hidden Figures). After WWII, a plethora of research areas emerged, for example: nuclear physics, numerical weather prediction, and digital computing. During this time period, as digital computers were able to take on more and more “computing” tasks, the nascent computer science discipline started to ask at what point a computer might “appear” human.

Many computer scientists point to Alan Turing’s 1950 paper entitled “Computing Machinery and Intelligence” as the start of AI, for in it he posed the following question: “Can machines think?” The phrase “the Turing test” became known throughout the computer science field as the question of at what point a human, asking questions of an interface and engaging with it, could no longer tell whether he was dealing with a fellow human or a computer. With the Turing test firmly established, the race was on!

Tremendous investments were made in the development of the hardware, software, and algorithms needed to make a machine that was indistinguishable from humans. The quest for a machine that would pass the Turing test happened concurrently with other major advances in the biological sciences, namely our understanding of brain function and, more generally, the field of neuroscience. From the end of WWII into the 1970s, we witnessed a tremendous increase in computer power and in the ability to mimic many parts of human action and reasoning. However, the ‘rule’ or ‘instruction’ based view of cognition encouraged by people like Thomas Hobbes in his book Leviathan (Chapter 5) only took us so far. The AI of the early 1970s was quite convincing on many tasks, but was not quite ‘human.’ The excitement over AI dwindled, and federal funding for AI initiatives decreased. In 1974, we entered the “AI Winter.” An entire generation of computer scientists after 1974 was told not to call their research AI lest it be sidelined or ignored.

A Brief Timeline of Artificial Intelligence[11]
After 1945 – Post-World War II technologies emerge, including computer science
1950 – Alan Turing publishes “Computing Machinery and Intelligence.” The Turing test is born.
1950s–60s – Tremendous increase in computer power
1959 – The term “machine learning” is coined, and the technology continues to develop
1974 – AI Winter begins: federal funding for AI initiatives decreases sharply
Beyond 1974 – Advances in computational hardware, programming languages, etc., pave the way for artificial intelligence
1997 – IBM’s Deep Blue defeats Garry Kasparov in a tournament-condition chess match, marking the first computer victory over a world chess champion
2012 – Geoffrey Hinton, Ilya Sutskever, and Alex Krizhevsky introduce a deep convolutional neural network architecture that triggers the explosion of deep learning research and implementation
2017 – Google researchers develop the concept of transformers in the seminal paper “Attention Is All You Need,” inspiring subsequent research into tools that could automatically parse unlabeled text into large language models (LLMs)
2022 – ChatGPT is made publicly available

11. These dates and descriptions are drawn from Ron Karjian, “The History of Artificial Intelligence: Complete AI Timeline,” TechTarget, August 16, 2023.

Research in the broad area of computer science continued, however, and advances in computational hardware, programming languages, and related fields accumulated—all things that would later play into the “AI Revolution.” The transition within AI research occurred in large part when researchers adopted a different perspective on cognition by asking: How do we learn? How might we train a computer algorithm based upon examples and a correction mechanism?

During the intervening period between 1974 and the present, the area of machine learning advanced, and it has in large part been credited with carrying us out of the AI Winter to where we are now. By shifting our paradigm, we moved to a new view of the development of intelligence: a ‘rule’ or ‘instruction’ based view combined with a sufficiently flexible and adaptable internal representation that, through interaction with the outside world, can update itself and learn from examples. The AI of today, which is broader than just machine learning, benefited from this transition in outlook. The history of AI is rich and is still being written. This brief history is meant to help us understand the big ideas, the subareas of AI, and why some of the areas discussed below are of relevance now.
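
To make this shift concrete, the following is a minimal sketch (our own illustration, not code from any system discussed in this essay) of “learning from examples with a correction mechanism.” A single artificial neuron, a perceptron, is shown the logical “AND” function as labeled examples and, whenever its prediction is wrong, nudges its internal weights toward the correct answer:

```python
# A minimal sketch of "learning from examples with a correction mechanism."
# Illustrative only: a single artificial neuron (a perceptron) learns the
# logical AND function from labeled examples.

# Training examples: (inputs, expected output) for logical AND.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for (x1, x2), target in examples:
        # Prediction: "fire" (output 1) if the weighted sum crosses zero.
        prediction = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        # Correction mechanism: nudge the weights in proportion to the error.
        error = target - prediction
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print(weights, bias)  # The weights now encode a rule no programmer wrote down.
```

No one ever tells the program the rule for “AND”; it infers the rule from examples and corrections. This, in miniature, is the paradigm shift that carried AI out of its winter.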

In terms of computer science, AI is viewed as a technical sub-discipline of the broader computing disciplines. Machine learning is one component of AI but not the only component. In general, artificial intelligence attempts to answer the question of how we can replicate the actions of humans and the intelligence that drives those actions. In this way, we can consider artificial intelligence as a collection of fields within computer science: natural language processing, image processing, computer vision, machine learning, etc. Each of these fields contributes in different ways to the algorithms and techniques that we find under the umbrella of artificial intelligence. In the sections that follow, we will discuss concrete examples of AI as a tool and AI as a trainee, and then highlight some of the temptations that arise due to these new technologies.

AI as a Tool

The first vantage point from which to consider AI technologies is as a tool to be used. This is the category in which the use of AI has become ubiquitous and yet subterranean. Given AI’s ability to sift through data and infer both linear and non-linear patterns, it has found use in personalized medicine (e.g., automatic review of and recommendations based on radiology images to find tumors), financial services (e.g., detection of fraudulent credit card activities), driver assistance (e.g., cars that can now drive and parallel park themselves for you), recommender systems used for music and movies (e.g., Pandora), and virtual assistants that understand and respond to voice commands.

This is just a short summary of a long list of places AI is already being used and benefiting our lives as a tool—something that accomplishes a task on our behalf. Many of these activities fit under the label of ASI: Artificial Specific Intelligence. This is the area of AI research in which we isolate a particular task or set of tasks and create an algorithm to accomplish that task. Over the past fifty years, computer scientists and engineers have made tremendous advances in developing Artificial Specific Intelligence.

AI as a Trainee

The second vantage point from which to consider AI technologies is as a trainee to be engaged. These are AI algorithms that move beyond the tasks we delegate and instead start to take on what we might consider attributes of humans, such as communication. Although we have not reached the pinnacle of success in this area (which may be a movement to AI self-consciousness), we are moving towards what is called Artificial General Intelligence (AGI), which is AI that can perform well across a wide range of tasks. The general public was first sensitized to these ideas with the release of OpenAI’s ChatGPT on November 30, 2022. As with the original Turing test, we now had an interface that we could ask questions and that would answer “like a human.”

ChatGPT is not the only instance of this type of ‘trainee’ intelligence, however. We see AI manifesting itself in tools that help you rewrite (or draft) your emails—or craft a sermon! If you could ask a good (human) assistant to do a cognitive (and possibly creative) task, there is now the potential that AI can do it.

Of course, how we might use AI in the research process is an ethical question that needs careful attention. Using AI as a research assistant is much different from delivering an AI-generated speech or sermon as though it were your own. Nonetheless, these issues highlight some of the more nuanced challenges and ethical questions we now face as AI moves from being an inanimate tool to a personalized trainee. The question becomes: at what point does a legitimate research tool become a total replacement for the task given to humans?

Furthermore, one of the challenges this mode of AI usage generates is that AI mimics the data on which it was trained. If that ‘training data’ contains inaccuracies, then the results produced by the AI will be inaccurate. Most critical from our perspective as Christians, if that training data contains biases that are a consequence of sin, or that promote error (historical, ethical, spiritual, or otherwise), then AI will amplify those errors and assist the desires of sinners.
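
As a toy illustration of this point (our own construction, far simpler than any real system), consider a “model” that completes a sentence with the most common continuation it saw during training. If the training sentences are skewed, its output is skewed in exactly the same way:

```python
# A toy "language model" (illustrative only): it completes a prompt with the
# continuation it saw most often in training, so any skew in the training
# data becomes the model's "belief."
from collections import Counter

# A deliberately skewed training corpus.
training_sentences = [
    "the leader was a man",
    "the leader was a man",
    "the leader was a man",
    "the leader was a woman",
]

# "Training": count the final word of each training sentence.
continuations = Counter(sentence.split()[-1] for sentence in training_sentences)

# "Inference": complete the prompt with the most common continuation.
prompt = "the leader was a"
completion = continuations.most_common(1)[0][0]
print(prompt, completion)  # skew in, skew out
```

Real large language models are vastly more sophisticated, but the principle stands: a trained model reflects its training data, errors and biases included.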

AI as a Temptation

The third vantage point from which to consider AI technologies is as a temptation to be resisted. Given that “the heart is deceitful above all things and desperately sick” (Jer. 17:9), the number of temptations that AI might elicit is limitless. However, this should not stop us from contemplating what guardrails we might consider when engaging with AI. Based upon the previous discussion, four temptations surface.

First, there is the perennial temptation for mankind to abdicate a God-given role to which we are called. AI should not be used to replace, for example, our call to lead, mentor, parent, or shepherd. Such God-given mandates belong to humanity, and can only be fulfilled by those bearing the divine image. Certainly, this challenge will call us to properly define what humanity is, and it is likely that the rise of AI will require ethicists to provide language and limitations that explain what it means to be made in God’s image and likeness.

Second, there is the tendency to transfer culpability to artificial intelligence for immoral behaviors. For example, one could ask an intentionally vague question of AI in hopes that AI will produce something approximating a pornographic image. A human would have clearly understood the sinful intention of the question. Or, self-deceived, the one giving the prompts might rationalize his lustful intent, such that he claims some kind of “plausible deniability.” This is the equivalent of “the AI made me do it.”

If we are honest, the possibilities of AI will invite new corridors of corruption in the human heart. But the wickedness of the heart is not new. And the same biblical truths will apply. Flee from evil, and seek righteousness. Mortify the flesh, by the power of the Spirit. And repent of all temptations that arise from within.

Third, if we find ourselves working with AI “assistants” on a daily basis, we might be tempted to treat others the way we treat AI. Terseness and abruptness are inconsequential to an AI, but they are rarely appropriate in our communication with other people. Human beings require patience and deserve respect because they are made in the image of God. We must always remember that we are not called to love AI and use others but rather to use AI (for good) and love others.

Fourth, if AI is not just a tool that we use, but a technology that we train, then AI will tempt some institutions and organizations to build inherent biases into the way their AI technology processes data. In other words, AI can be trained from a particular vantage point (worldview) to generate historical inaccuracies or to discriminate against certain ethnicities, ideologies, or political parties, as was the case in Google’s recent controversy over Gemini, the company’s AI interface.

In a now infamous instance of “biased algorithms,” Gemini generated “historically inaccurate images, such as Black Vikings, an Asian woman in a German World War II-era military uniform and a female Pope.” Google claimed that the offensive historical images generated by their Gemini AI model were the result of the way their AI technology was “trained.” This example highlights two important points. The first is that if these technologies are made “in our image,” they will be tainted by the Fall, just like the data they were trained on. The second is that it’s not hard to imagine how big tech companies and other organizations might deploy AI to manipulate the truth. The question of on what or in whom we place our trust is not new: “Some trust in chariots and some in horses, but we trust in the name of the Lord our God” (Ps. 20:7).

Concluding Reflections

AI technology is yet another reason why Christians and churches need to have a firm grasp on biblical anthropology. The widespread challenges to the traditional definition of marriage, the aggressive discussions around sex and gender, and now AI technology require us to have a ready answer to the question: What does it mean to be human?

While AI can process incredible amounts of data and communicate in ways that appear “human-like,” AI is not and will never be human. It will never possess a reasonable and immortal soul or a conscience, nor will it manifest the work of the law written on a heart it does not possess. Humanity is the apex of God’s creation, made in the image of God to represent God’s rule and righteous character in the world (Gen. 1:26–28; Ps. 8:5–6). Now marred by the Fall, we cannot even re-create the fullness of what it means to bear the divine image in ourselves. That work belongs to God alone who, by the Holy Spirit, re-creates us in the “likeness of God in true righteousness and holiness,” conforming us to the image of his Son (Eph. 4:24; cf. Rom. 8:29). AI technology affords us another opportunity to reaffirm a biblical anthropology so that we might also faithfully proclaim the biblical gospel.

We must also recognize that AI technology is here to stay. The church cannot avoid giving careful thought to fundamental questions like: how does AI fit into the Christian worldview, and how might we engage with AI technologies to further our mission without compromising our biblical values and principles? Generally speaking, technology may not be good or bad in and of itself, but technology is never neutral. Churches should think carefully about adopting amoral technologies into their ministries or corporate worship services because those technologies will inevitably have an effect on the nature of discipleship. It’s not just the message that matters; it’s the medium through which the message is communicated.

How the church might steward AI technology for good and noble purposes remains to be seen. Yet we can be confident that God’s word is sufficient to guide us through this time of rapid technological change: “Let them praise the name of the Lord, for his name alone is exalted; his majesty is above earth and heaven” (Ps. 148:13), and indeed his majesty is above AI technologies also.


ABOUT THE AUTHORS

  • Mike Kirby

    Mike Kirby is a professor of computer science within the Kahlert School of Computing at the University of Utah. He earned his PhD from Brown University and his MTE from Gateway Seminary. He is also the author of over 200 peer-reviewed journal and conference publications spanning scientific computing, machine learning, and computational science and engineering. Mike is an elder at Risen Life Church in Salt Lake City, UT, and is married to Alison. They have three children.

  • Matthew Emadi

Matthew Emadi (PhD, Southern Seminary) is senior pastor of Crossroads Church in Sandy, Utah; adjunct faculty for the Salt Lake School of Theology (Gateway Seminary); and author of How Can I Serve My Church? and The Royal Priest: Psalm 110 in Biblical Theology. He is married to Brittany, and they have six children.
