Imitating Christ in an Age of AI

By Jeremy Peckham

Over the last 100 years or so, philosophers of technology have sought to define what technology is by approaching it from a number of perspectives. One perspective holds that an artifact or tool is simply something that extends human capabilities and enables us to accomplish our objectives. Studied from this perspective, the emphasis tends to fall on how technology enables us and influences our actions: for example, how the advent of email has altered our writing habits and modes of communication, or whether we use the phone less. Seen in this light, technology is most often regarded as offering convenience and efficiency; we can do things faster and more easily, one might think, without any bad outcomes.

Other philosophers have argued that this is too narrow a view of technology and that we need to understand the values that lie behind the design and development of a tool, and indeed how tools shape societies as they are used. In this article, I hope briefly to do five main things: 1) think through the good and 2) the bad of Artificial Intelligence, 3) explain that there is a worldview influencing the development of Artificial Intelligence and the narratives around it, 4) consider how Artificial Intelligence affects our humanity as image bearers of God, and finally, 5) consider a way forward.[1]

1. Due to space limitations these subjects can be treated only briefly. For a fuller treatment, see Jeremy Peckham, Masters or Slaves? AI and the Future of Humanity (London: IVP, 2021). See also the online Bible-based course for individual or small group study.

It’s Not All Bad

When we look back in time, it is clear that technology and innovation have shaped society, sometimes in helpful ways, other times less so. Early agricultural tools improved productivity, reduced the time required to cultivate the land, and increased food production. Increasing mechanization means that far fewer people are needed to work the land in industrialized countries and has helped ensure food security, in contrast to much of Africa, where farming is still largely a back-breaking and uncertain affair. The printing press, electricity, radio, the clock, the telephone, television: all of these have massively shaped societies for good and for ill.

Although technology shapes us, it doesn’t always do so in a negative way. The late philosopher of technology Don Ihde suggested that technology has both an amplifying and a reducing effect. The idea is easily grasped when we think about how a telescope enables us to see far distant planets yet at the same time narrows our field of vision.

Knowing that we live in a fallen world, with sin and its consequences, should help Christians appreciate how people’s values and goals can influence the design of technology as well as our own use of it. A simple and well-documented example is the way in which social media platforms use AI software, designed to appeal to our vices rather than our virtues, to keep us engaged on the platform. The motivation is to sell more of the advertising that pays for the ‘free’ platform we use. As users of social media, we have a choice about how much time we spend, what rabbit holes we go down, how we interact, and whether we allow ourselves to become addicted. Unfortunately, the platform design is set up to promote bad behavior rather than virtuous behavior. This can lead to narcissism and the toxic postings that are all too familiar on social media, and in turn to a greater polarization of opinion.

Understanding how such platforms are developed and used helps us to see that some philosophers of technology are right: values do shape how technology is developed, and economics plays a key part.

You Are the Product

It is significant that much of the AI technology developed in the West has been created by a relatively small number of Big Tech companies, such as Amazon, Google, and Meta. These and other large companies, like Microsoft and Apple, have had the resources to acquire or invest in specialist AI companies such as OpenAI and the British company DeepMind. Since 2016, over 500 billion dollars have been invested in this sector, so there is now a huge vested interest in seeing it succeed. For the likes of Google and Meta, the business model depends on treating the user as the product: sucking in all their data, profiling them, and selling this on to advertisers and others.

We are constantly told that this technology will benefit humanity by improving productivity and solving the world’s problems, such as health and climate change. Some go so far as to believe it might even lead us to a point where many won’t need to work and will subsist on a kind of universal basic income. This techno-optimist worldview has now been adopted by many politicians, who tell us that innovation and productivity are necessary for economic growth. There is also an AI arms race, with many Western countries seeking to outdo China and come out on top. Vladimir Putin once famously declared that whoever becomes the leader in AI “will become the ruler of the world.”

A Clash of Worldviews

We need to be mindful of the agenda behind the promotion of AI. The companies developing and selling it care less about the impact on society and individuals than about the profits they seek to accrue and their dominance of the AI world. This shapes the design and marketing of these tools and how they are deployed. The examples of social media, search engines, and even online shopping are instructive: the aim is to create a frictionless interface to the platform to encourage user engagement, to serve more adverts, or to sell more goods. The more immersive the experience, whether on a smartphone or a tablet, the more users are sucked in and the more likely they are to become addicted.

It is ironic that many of the leaders of these companies have recently warned of the existential risks that advanced AI (so-called Frontier AI) presents to humanity. So successful has this agenda-setting been that the UK hosted the first global summit on AI safety in November 2023. AI Safety Institutes have now been set up by the UK and the US, with other countries likely to follow. Unfortunately, these institutes are more focused on imagined future risks than on the here-and-now risks that AI applications pose to society at large through the way they are shaping us. Nor are they adequately addressing the concerns that minority or marginalized groups have over specific risks, such as data bias, privacy, and injustice from prejudicial automated decisions.

Although many, even Christians, think of technology as amoral or neutral, it is undeniable that tools have shaped our societies and that societies have shaped the values and ideas of the people who design and sell these tools. We tend to give too little thought to these dynamics when we enthusiastically embrace new technologies and inventions. We Christians are unwittingly shaped by the world around us, especially as the way we use technology becomes normalized and unquestioned.

Ultimately, in the development of AI and Artificial General Intelligence (AGI), we are facing a clash of worldviews. A naturalistic view regards the brain as a computer that can eventually be replicated, even surpassed, and it is now commonplace to hear AI applications described as thinking or reasoning, even as being smarter than humans. Indeed, AI is already starting to shape how we view ourselves and to call into question what it means to be a human being. These anthropomorphisms are, however, a long way from the reality of how human cognition works. Although inspired by the neurons in the brain, the “neural networks” and “deep learning” that lie behind current AI technology are highly simplistic compared to the human brain. They are in fact nothing more than statistical pattern-matching processes, trained on vast amounts of data, that provide, for example, the likelihood that someone’s face or the image of a tumor matches items in the training database or, in the case of Generative AI, predict the most likely picture, video, or complete story to match the input prompt. It is not surprising that these AI systems appear to outperform humans at some tasks, given that it would take a human being thousands of years just to read the vast quantity of information they have been trained on.
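To make the point concrete, here is a deliberately simplified sketch of that “statistical prediction” idea. It is not how any real generative AI system is built (those use neural networks with billions of learned parameters), and the tiny training text and function names are purely illustrative; but the core task is the same: count patterns in training data, then emit the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# Toy illustration of statistical next-word prediction.
training_text = (
    "the cat sat on the mat the cat chased the mouse "
    "the dog sat on the rug"
).split()

# Count how often each word follows each preceding word in the training text.
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed next word, if any."""
    counts = follow_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "Generate" a short continuation from a one-word prompt.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the cat"
```

The output is fluent-looking but has no understanding behind it; it simply reflects the frequencies in the data it was given, which is the point being made above, albeit at a vastly smaller scale.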

We must pay attention to the impact that the world around us, including technology and AI, has on the renewal of our selves, our sanctification. We are engaged in a spiritual battle against the forces of darkness, and the devil will seek to subvert the process of our sanctification, our becoming more like Christ. Let us be careful that the convenience some AI applications, or indeed any other technology, offer us does not become self-seeking, the substitution of self for Christ in our affections, ‘which is idolatry.’

Asking the Right Questions

We need to ask: what does this technology do for us, and what does it do to us? Why are we using it rather than something else? As we think about these questions, it may help to consider six key areas of our God-given humanness, illustrated in Figure 1, that could be shaped by our use of AI applications, or indeed any technology.

Figure 1: Based on the three complementary aspects of the doctrine of the Image of God—holiness, dominion, and relationship—the diagram shows six key aspects of being human, made in God’s image, that can be diminished or sidelined through the use of AI. Applications lie on a spectrum of risk, illustrated with examples from low to high risk.

There is a spectrum of risk associated with our use of AI. Some applications will have no impact on our ability to faithfully image Christ in the spheres where God has placed us, whether church, friends and family, or work. An example of this is the use of facial recognition technology in a smartphone to control who can open it. At the other end of the spectrum, where the state uses the same technology in mass surveillance, it infringes our freedom, privacy, and autonomy, which are all important aspects of being made after God’s likeness. We can choose whether to use facial recognition in a smartphone but we don’t have a choice if the state uses it in surveillance.

There are many applications where we do have a choice whether or not to use them, but which will nonetheless significantly shape us. In these cases, we need to evaluate our engagement and think about how it will shape our behavior over time, as well as how our use will affect others and their view of Christ, who he is, and what he is like. Many applications touch more than one of the six human areas shown in Figure 1. We might ask: why are we using and developing such tools rather than using our own brains and creativity, or asking our pastor and friends to answer a question we may have?

The more humanlike and convenient AI technology becomes, the more it erases the distinction between online and offline, while at the same time creating an illusion of greater control over our lives and our digital world. Yet the evidence is that this technology is already beginning to control us. Children, for example, find it hard to take off the ‘lens’ through which they see and interact with the world. Digital technology, and increasingly AI-mediated technology, is their world. Many have become reliant on this technology and are uncomfortable when it is taken away, finding themselves insecure and struggling emotionally to deal with people face to face. It has become a mediator through which we interact with other people and through which we understand our world. It has become nothing less than a digital priesthood. Idolatry can be defined as anything that we value more than God, the things that drive us. When we abdicate our responsibility to image him by, for example, using an AI app to write a prayer or a story, are we not valuing convenience more than being a faithful witness?

AI for Good

How then might we as Christians evaluate whether or not to engage with an AI application? Given the several aspects of our humanness that are potentially impacted by AI use, a starting place is to think through how a specific use case might affect each of the areas shown in Figure 1. Some practical questions we can ask ourselves about applications that impinge on relationships, for example, are shown in Table 1. Typically, we need to ask what an application is likely to do to us over time, how it might be forming us, and what we can do about it. Such questions can be developed for other application areas, even those, like credit checking or the analysis of job applications, that work in the background without our knowing. Recognizing the dangers, some jurisdictions, such as the EU, are already seeking to regulate such applications.

We also need to think about how to develop and deploy AI-based technology that will serve humanity rather than simply race to replicate us. Applications of technology, and of AI in particular, are best targeted at enhancing or extending human capabilities rather than replacing them, amplifying something we can do without reducing our humanity at the same time. One example is the use of robots in surgery, where greater precision can be achieved than even a well-trained surgeon is capable of, given the limits of hand dexterity. We can use AI-based robots in hazardous environments, for instance to find and neutralize mines. Techniques used in AI, like Machine Learning, can perform tasks that humans cannot easily do, such as finding patterns in data for fraud detection or cybersecurity. Generative AI and Machine Learning techniques are increasingly being used in ‘Digital Twins,’ virtual models of real or intended physical systems and environments, such as wind turbines or manufacturing processes; these models have a role in improving the design and performance of many physical systems. AI might also be used to carry out tasks that would be impractical for humans because they would take far too long, potentially speeding up scientific research in areas like drug discovery. There are no doubt myriad applications for Machine Learning in the more niche fields of medical prosthetics and assistive technology for those with disabilities. Rather than replacing creativity and cognitive activity, AI applications might be better employed assisting with tasks, such as checking human-generated software code to highlight potential problems, rather than doing the programming itself. These are just a few illustrations of how AI could be used to benefit, rather than sideline, humanity.

| Characteristics of True Image Bearing in Authentic Relationships | Influence of Digital Technology and Personified AI | How Am I Being Formed by AI and Digital Technology? | What Are My Choices? |
| --- | --- | --- | --- |
| Love | Abrupt communication | How much time am I spending in a digital world vs. the real one? | Give preference to face-to-face over virtual |
| Commitment | Inability to pay attention | Which do I prefer? | Use text search and a neutral search engine instead of a digital assistant |
| Kindness | Lack of focus | Do I prefer to text? | Limit use |
| Preferring others over ourselves | Diminished ability to reflect and think | Do I pay attention to others and listen? | Ask if it can be done another, more human, way |
| Encouraging one another | Preferred because they don’t answer back, are empathetic, are easier to deal with, and always do what one commands | Do I find it easy to empathize? | Avoid |
| Listening | Gender stereotyping | What are my expectations of others? | |
| Empathizing | Personification | What am I prepared to give in relationships? | |
| True intimacy | Accepting answers unquestioningly | How do I view others, such as women? | |
| | Reliance on the device | Can I live without it? | |

Table 1: Example of the difference between authentic relationships and the negative impact that AI, used in digital assistants such as Alexa, could have on them through the way it forms us, alongside questions for self-analysis and our choices (from Jeremy Peckham, Masters or Slaves? AI and the Future of Humanity (London: IVP, 2021), used with kind permission from IVP).

As communities of God’s people, we can show the world a different way by modeling authentic community that is situated and embodied, where relationships are built on love and not likes. We can show how God’s gift of creativity is valued and celebrated by involving others in the process, rather than sidelining them by using so-called AI tools to create devotionals or graphics for our church. Being intentional about encouraging one another demonstrates how true knowledge and wisdom are shared within our communities rather than obtained from a statistically based artifact, trained on masses of data, that has no thoughtfulness or ground truth. Truth is found through reading, studying, and discussing God’s word in community, and our knowledge and understanding of our world is mediated through reliable news sources, critical thinking, and discussion with trusted friends.

Conclusion

The next few years will be challenging as more and more applications are released onto an unsuspecting world, infiltrating the public service arena, our workplaces, our homes, and even our churches. With some companies aspiring to develop AGI and to upload people’s brains in a quest for immortality without God, even to create a post-human world, one is reminded of the Tower of Babel and man’s quest to “make a name for himself.” We all know where that ended up, and so we must tread wisely. Let those of us who are developers be cautious about building AI applications simply “because we can.” Once technology is out there, people will be drawn to it and shaped by it. Responsible technology development is not about adopting the old Facebook motto, “move fast and break things.” With God’s help and by his grace the church can make a difference, being the salt and light that we are called to be. May God grant us discernment as we navigate the rapidly changing world of AI, and may he keep us from losing our saltiness and becoming fit only for scattering on the ground and being trampled underfoot.

ABOUT THE AUTHOR

Jeremy Peckham

Jeremy Peckham is a technology entrepreneur who has spent much of his career in the field of Artificial Intelligence. Over the last 25 years he has invested in and served on the boards of a number of high-tech companies. He has served as a deacon and elder at previous churches and currently attends a local FIEC church (Fellowship of Independent Evangelical Churches), where he participates in various Bible teaching roles and serves on the tech team. In 2009 he founded Africa Rural Trainers, which provides Bible training to pastors in rural Kenya. For more articles and podcasts on AI, you can visit his website.