Whatever Comes, Get Wisdom: AI, the Future, and Our Chief End

By Owen Anderson

The beginning of wisdom is this: Get wisdom, and whatever you get, get insight. (Proverbs 4:7)

As a professor at Arizona State University, I have access to many valuable AI resources. ASU has been pushing to lead the way in the use of AI in the classroom. Last semester, I took a course for instructors on how to use AI to improve classroom instruction. The first weeks of the course covered the different kinds of AI and how they work. As we got into the specific classroom applications, I have to admit I was skeptical. This is especially true because I teach philosophy and rely heavily on the Socratic method and group discussions, which already makes me doubtful that such a course can be translated into an online setting.

One of our last assignments was to ask the AI to create a test to accompany a learning module. I chose to make one for a segment I teach on Augustine. Still skeptical, I didn't think the outcome would be worth using. To my surprise, the AI actually made a good test. I don't know how much of it was plagiarized from existing tests found on the internet. Presumably, all of it. However, it was able to take that information sourced from the internet and form a test within a matter of seconds, whereas this would have taken me hours.

The other side of this is that we know students use AI to answer the questions and cheat in their assignments. Ironically, that means instructors are using AI to make the tests, and students are using AI to take the tests. As a result, the use of AI frees up time from teaching and studying to do other things. But what are those things?

Answering that question will help us know whether we should fear AI or freely enjoy its many benefits. Will it destroy us or help us achieve a brighter future? To put a Pauline spin on it (1 Cor. 6:12), will humans master AI, or will AI master humanity? The answer depends on a key question: do we know our chief end? The development and use of technology always reflect how a society answers that question. In this essay, I will take up the question of humanity's telos so that we can then answer the more practical question about using AI.

What Are We Made For?

Jack Ma, founder of the wildly successful company Alibaba, gave a talk about the future of technology. While many in the West enjoy doomsday speculations about robots from the future being sent back in time to change history, he gave a very optimistic outlook. He recalled his grandfather working sixteen hours a day for six days a week. Today, that kind of work can be automated by technology. Increasingly, the benefits of technology are being spread more universally. Ma tells us we are freed from one kind of work so that we can focus on work that makes us distinctively human. He is confident that technology will not be able to replace humans because it is only humans who can have wisdom. And his vision of the future is one where humans are busier than ever pursuing wisdom and other uniquely human endeavors.

Sound too good to be true? How do we square that vision with one where students use AI to cheat on papers so they don't have to learn? I believe the explanation requires that we understand both moral and natural evil. Natural evil consists of the various kinds of suffering that God imposed on the world after sin in Genesis 3. It includes old age, sickness, and death, as well as toil, strife, famine, war, and plague. When we slow down to think about it, we realize that most human work is aimed at minimizing and avoiding natural evil.

AI will obviously have many applications that can do just that. Like other technologies before it, from the automobile to the washing machine to the smartphone, it allows certain kinds of work to be done by a machine. Work that involves toil and drudgery can be done by technology so that humans can spend their mental efforts on creativity instead. Ma said that we should expect to work more, not less, but that our work will be more human. It is a tautology to say that if a machine can do the work, it isn't uniquely human work.

But there is a problem—a deep problem.

That problem is found in moral evil and its consequences. Moral evil is rooted in autonomy. It is rooted in the attempt to live apart from God. And it is here that we should begin to make the connection between autonomy and AI.

After the fall, autonomy became synonymous with human experience, and from this autonomy grew all the fruits of sin. Mankind was created in the image of God to know our Creator and be in relation with him, but that natural design has been marred. Sin, now common to all, is an act contrary to the nature God gave to Adam and Eve in the Garden. Technology, likewise, must be considered in the context of this moral complex.

Technology is often contrasted with nature. Nature has organic growth, whereas technology must be constructed. However, humans with the dominion mandate to understand the nature of the world God created would naturally produce technology. This is true from the simplest tools to the most advanced computers we see today. There is a line of development from the stone hammer to the smartphone. God created the world with latent powers in it, and as humans come to discover these powers, they can put them to use to multiply their efforts and free their efforts for other ends. And yet, we bring our sin to any technology we make.

Sinners Make Tools to Sin More Effectively

Instead of aiming at the chief end of glorifying God and enjoying him forever, unregenerate humans will seek their own ends. These might be immediately selfish, or they might be masked in altruism that covers selfishness. We put ourselves in the place of God to determine good and evil for ourselves. This raises the question: how does that affect our technology?

We can see immediately that it will be used for the wrong end. Think of the line of Cain. They are named as having produced some of the first advances in human civilization: husbandry, metallurgy, and music (Gen. 4:17–22). But when these were coupled with sin, the result was a world filled with violence and corruption (Gen. 6:5).

And sin has inherent consequences. Sin produces meaninglessness, boredom, and guilt. So now, not only is technology being used to avoid the toil that might lead us to seek rest in God's promise (consider Lamech's hope in Genesis 5:28–29), but it is also being used to fill the void left by meaninglessness and boredom while dulling the pain of guilt. Like a student using their energy to cheat rather than study, think of all the effort put into developing narcotics in order to avoid the pain of spiritual death.

I suspect one area that will especially be affected by AI is pornography. We already see the legal problems arising from AI making nude pictures of real people who are clothed. Humans will bend their minds to find all the ways AI can be used as a source of perversion. Pornography and the perversions that accompany it are an act contrary to our nature as created by God. They are an attempt to find meaning contrary to God’s law. They also bring significant destruction to the individual and society.

I once read a shockingly high statistic about how much of the internet is used for pornography. Different studies come out with different numbers. But whatever the number, this is a huge internet business. The internet is one of the high points of technology, and AI is really just a part of that larger development. Yet it is also being used for the same base urges that humans have always sought to fill. How can one and the same thing be both the height of technology and the lowest form of human debauchery?

AI Cannot Solve Human Sin

I think this tells us that, in an important way, technology doesn't change our biggest and deepest problem. It can compound problems that arise due to sin and our attempt to avoid natural evils like toil. The future might look like 1984, with AI used to keep track of what we do. Or it might look like Brave New World, with AI used to distract us with pleasure so that the government can control us. But neither of those dystopian futures illustrates how bad our sin really is.

Technology only comes in at what I will call the tertiary level. Our belief systems can be pictured like a building with a foundation, walls, and a roof. These levels are the logical order of our beliefs, with the most basic level assumed by later levels. The basic level involves our beliefs about God, good, and evil. Our problem is that we have rebelled against what is clear about God, our creator. We know that God's eternal power and divine nature are clearly revealed to all so that unbelief is without excuse (see Rom. 1:18–23; 2:12–16). We also know that we are not left in our sin but that God acted redemptively to save us through Christ. The secondary level consists of those beliefs that build on our beliefs about God and concern why there is suffering and what to do about it.

Technology comes in at the tertiary level. Once we have our beliefs about God and our beliefs about why there is suffering, we then form beliefs about what to do about suffering. For the unbeliever, technology is primarily a way to lead an easier life, seek power over others, and seek pleasure. It will always end in nightmares. Like Frankenstein, these technocrats will seek to replace God by altering nature. Silicon Valley leaders seek immortality in transhumanism. Their goal is an unending life apart from God. Interestingly, even here, technology presupposes the world God made. It is an attempt to live in God’s world while rejecting God and the nature of things. It embodies the self-contradictory nature of sin and spiritual death.

For the believer, technology is part of the dominion mandate, as well as the commission to make known the grace and glory of God to the ends of the earth. Specific to the dominion mandate, we are told to understand the nature of the world God made. Adam named the creatures. Each one was different from the others in having its own nature, and all of them together were different from Adam. None of them could share with Adam in the unique work God gave him. That work is to know God as he is revealed in all of his works (Psalm 145). It is continuous with the Great Commission, where the redemptive work of God deepens the knowledge of God (Matt. 28:18–20).

Conclusion

Let me return to my students and the use of AI in a philosophy class. AI is a tool that can automate many tasks, but it can't learn philosophy for you. Nor can it redeem you and restore your relationship to God. It can write your answers for you, but with your pristine GPA, you will still know nothing. Or, as Jack Ma said, it cannot be wise. That is a uniquely human ability.

The fear of the Lord is the beginning of wisdom. And whatever the seen or unforeseen effects of AI are, we can be sure that our chief end is still to glorify God and to enjoy him forever. And in that endeavor, we will need more than AI. We will need the ancient wisdom that never returns void.

ABOUT THE AUTHOR

Owen Anderson

Owen Anderson is a professor of philosophy and religious studies at Arizona State University and an adjunct professor of philosophical theology at Phoenix Seminary. He is the pastor of Historic Christian Church in Phoenix, Arizona. His books include The Natural Moral Law (Cambridge University Press) and Job: A Philosophical Commentary. His Substack is drowenanderson.substack.com, and his YouTube channel is Youtube.com/drowenanderson.