Reflecting on his time playing the infamous cyborg-assassin in the movie The Terminator, Arnold Schwarzenegger declared that the future of artificial intelligence (AI) predicted in the movies is here. “Today, everyone is frightened of it, of where this is gonna go. And in this movie, in Terminator, we talk about the machines becoming self-aware and they take over . . . Now over the course of decades, it has become a reality. So, it’s not any more fantasy or kind of futuristic. It is here today.”[1]
1. Jack Smart and Kirsty Hatcher, Arnold Schwarzenegger Says James Cameron’s ‘Terminator’ Films Predicted the Future: ‘It Has Become a Reality,’ People, June 29, 2023.
Should Christians approach AI with this existential hysteria? Absolutely not. AI is simply computer code developed by humans. At a basic level, it is mathematics and computer code, including algorithms that humans have designed to transform inputs (or data) into outputs according to a set of complex instructions. There are no killer robots, and there is no Skynet.
But like any piece of technology developed by sinful human beings, AI can be used for both good and evil. It's a tool, and probably one you are using daily, even if you don't know it. In fact, simply by reading this article, you have likely used technology or software that integrates AI for your good.
But as I reflect on the emergence of the AI revolution and the advances in technology, I have noticed how quickly our governing institutions are already using (as they do in many cases) fearmongering and crises to justify an expansion of their own power.
It was in fact President Barack Obama’s former White House Chief of Staff Rahm Emanuel who once said: “You never want a serious crisis to go to waste. And what I mean by that is an opportunity to do things that you think you could not do before.”
This idea isn’t new. In fact, history is full of examples of how, during a crisis (whether real or fabricated), humans are willing to turn to the State as their “savior” to protect them. And while there are many circumstances (such as war or invasion) in which the State is the institution God has given to protect us, it is also true that the State can use a crisis or emergency to centralize its power at the expense of liberty. As I’ve written previously, the American system of government was set up to resist centralized power and protect against tyranny.
The so-called “crisis” of AI is no different. Our elected officials are predictably already using the fear of AI as an opportunity to expand their regulatory authority under the guise of ensuring AI is “safe.” Senate Majority Leader Chuck Schumer,[2] in the context of regulating AI use in our elections, sensationally declared that our very democracy is at stake:
2. Senator Schumer is a notable person in Congress to highlight not simply because he’s the leader of the Senate, but because he has been leading an AI working group for some time to set the “roadmap” for AI legislation in Congress.
Once damaging misinformation is sent to a hundred million homes, it is hard—oftentimes impossible—to put the genie back in the bottle. Our democracy may never recover if we lose the ability to differentiate at all between what is true and what is false, as AI threatens to do . . . So, you need government guardrails on this and so many others.[3]
3. Senate Democrats, TRANSCRIPT: Majority Leader Schumer Remarks At Rules Committee Markup On The Impact of Artificial Intelligence On Our Elections, May 15, 2024.
What’s notable about this statement is the underlying belief that the government should be the arbiter of truth, helping determine what information is “true” and what is “false.” It hits at the very heart of what speech or information we should be able to consume in our public discourse. In what follows, I’d like to provide a few brief examples of our government’s current efforts to regulate and control online speech, particularly within the new subfield of AI known as “generative AI.”
How Does Generative Artificial Intelligence Work?
At a simple level, “generative AI” is AI, like ChatGPT, used to “generate” new content, including text, video, audio, etc. Because generative AI is so closely intertwined with the information and content we consume on the internet, governmental control over its code and computing backend would be a powerful avenue for controlling the information and content displayed online.
To understand the threat to speech, it is helpful to outline quickly (and at a very basic level) the mechanics behind a “generative AI” application such as ChatGPT. ChatGPT is a computer algorithm with a set of complicated instructions and rules built into its code that enable it to “learn” rules and patterns from data. To “learn” these patterns of language, ChatGPT undergoes “training” on public data (such as data available on the internet) or data that has been licensed. When a person enters an “input” (a question) into the computer, ChatGPT uses its “training” to “predict” the next word, one word at a time, to generate an output (the answer to your question). In other words, ChatGPT has been trained on terabytes of information contained in millions and millions of books so that it can make a statistical determination, word by word, of the best response to your question.
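The word-by-word prediction described above can be illustrated with a toy model. The sketch below is a deliberately simplified illustration (the training sentence is invented, and real systems like ChatGPT use neural networks trained on vastly more data), but the core idea is the same: the program counts which words follow which during “training,” then generates an answer one word at a time by picking the statistically most likely next word.

```python
from collections import Counter, defaultdict

# "Training": count which word follows each word in a tiny toy corpus.
# (Invented example text; real models train on terabytes, not one line.)
corpus = "the cat sat on the mat . the cat ate the food .".split()
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word observed during 'training'."""
    return following[word].most_common(1)[0][0]

def generate(prompt, length=4):
    """Produce output one word at a time, each predicted from the last."""
    words = [prompt]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(generate("the"))  # frequency-based prediction, not understanding
```

Given the prompt “the,” the model emits “cat” (the word that most often followed “the” in training) and continues from there. Nothing in the code “knows” what a cat is; it is pure statistics over the training data.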
You can see, then, how a model like ChatGPT, with its ability to sift through such an extensive amount of data, is a powerful tool for improving our ability to consume information or content online. A generative AI model depends on the quality of the data used to train it: train a model on false or inaccurate data, and the quality of its answers degrades, and vice versa. And because the data used to train the model is often what is publicly available (i.e., scraped from the internet or licensed), its outputs can contain inaccuracies, bad information, or a biased viewpoint (just like all human data).
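The dependence on training data can be demonstrated with the same toy approach. In this minimal sketch (the sentences and the `train`/`complete` helper names are invented for illustration), two identical models are trained on two different texts; the one fed a false statement faithfully reproduces the error, because the model only echoes the statistics of its data.

```python
from collections import Counter, defaultdict

def train(text):
    """Build a next-word frequency table ('training') from raw text."""
    words = text.split()
    table = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        table[cur][nxt] += 1
    return table

def complete(table, prompt, n=5):
    """Extend the prompt word by word using the most frequent next word."""
    out = prompt.split()
    for _ in range(n):
        out.append(table[out[-1]].most_common(1)[0][0])
    return " ".join(out)

accurate   = train("paris is the capital of france")
inaccurate = train("paris is the capital of italy")  # flawed training data

print(complete(accurate, "paris"))    # repeats the true statement
print(complete(inaccurate, "paris"))  # repeats the error just as confidently
```

The code is identical in both cases; only the data differs. This is why control over what data an AI model may be trained on is, in effect, control over what the model will say.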
So now let’s return to the U.S. government’s efforts to ensure your “safety” and ability to consume truthful and unbiased information.
How the U.S. Government Is Seeking to Control AI
On October 30, 2023, President Biden issued a lengthy Executive Order on Artificial Intelligence.[4] The policy concerns built into this Executive Order resemble a broad economic command-and-control plan for AI by the U.S. Government.[5] Most relevant to the control of speech is the U.S. Government’s focus on regulating AI in pursuit of “algorithmic justice”: rooting out “bias,” advancing “equity,” and stopping “misinformation.”
4. President Joe Biden, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, The White House, October 30, 2023.
5. Space prohibits me from building out these points, but suffice it to say the Executive Order also lays out how the U.S. Government is attempting to ensure AI development is (1) “built through collective bargains on the views of workers, labor unions, educators, and employers;” (2) “improve environmental and social outcomes;” (3) “mitigate climate risk;” and (4) “[build] an equitable clean energy economy.”
At a high level, Biden’s Executive Order states:
Artificial Intelligence policies must be consistent with my Administration’s dedication to advancing equity and civil rights. My Administration cannot—and will not—tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice. From hiring to housing to healthcare, we have seen what happens when AI use deepens discrimination and bias, rather than improving quality of life.[6]
What the Administration means by “bias” or “equity” is explained in the White House’s AI Bill of Rights, particularly its chapter on “Algorithmic Discrimination Protections,” which states:
There is extensive evidence showing that automated systems can produce inequitable outcomes and amplify existing inequity. Data that fails to account for existing systemic biases in American society can result in a range of consequences.[7]
6. President Joe Biden, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, The White House, October 30, 2023.
7. Algorithmic Discrimination Protection: You Should Not Face Discrimination By Algorithms And Systems Should Be Used And Designed In An Equitable Way, Blueprint For An AI Bill of Rights, The Office of Science and Technology Policy, The White House.
In other words, the “systemic biases” and “inequity” are tied to the “data”—namely, the information used to “train” the automated system. Consequently, the Administration states:
Any data used as part of system development or assessment should be representative of local communities based on the planned deployment setting and should be reviewed for bias based on the historical and societal context of the data. Such data should be sufficiently robust to identify and help to mitigate biases and potential harms.
The Chair of the Federal Trade Commission, a key federal agency that often enforces technology regulations, echoed a similar theme in her recently shared plans to regulate AI for “disinformation” and “bias.” In a New York Times op-ed titled “We Must Regulate A.I. Here’s How,” she stated:
. . . these A.I. tools are being trained on huge troves of data in ways that are largely unchecked. Because they may be fed information riddled with errors and bias, these technologies risk automating discrimination—unfairly locking out people from jobs, housing or key services.[8]
8. Lina Khan, We Must Regulate A.I. Here’s How, The New York Times, May 3, 2023.
But it isn’t just about stopping factual errors; it’s also about putting a thumb on the scale by controlling what types of information or viewpoints you can consume. One tangible example[9] cited by the Administration of how to ensure equity in the data is as follows:
9. While I am unable to provide a comprehensive view of these examples across the Federal Government (and state governments as well), these viewpoints are widely adopted by key agencies as part of a broader push to regulate the backend code and data that comprise AI.
Those responsible for the development, use, or oversight of automated systems should conduct proactive equity assessments in the design phase of the technology research and development or during its acquisition to review potential input data, associated historical context, accessibility for people with disabilities, and societal goals to identify potential discrimination and effects on equity resulting from the introduction of the technology. The assessed groups should be as inclusive as possible of the underserved communities mentioned in the equity definition: Black, Latino, and Indigenous and Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of religious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and intersex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons otherwise adversely affected by persistent poverty or inequality.
In other words, the White House isn’t just asking to correct factual errors or to generically stop bias and inequity (though that alone would be problematic). They are pressuring AI developers to ensure that the training data used to generate content or information for the general public meets the state’s standards for “truth” and “equity” and contains only a “bias” approved by the State. And federal agencies are proposing audits to ensure these ideas can be enforced.[10]
10. See: National Telecommunications and Information Administration, United States Department of Commerce, NTIA Calls for Audits and Investments in Trustworthy AI Systems, March 27, 2024.
At a more fundamental level, when the government regulates and controls the design of computer code and the data used to train it, it is acting as a “speech police,” policing speech through the very code and data that generate the information and content displayed to you online.
But don’t we want to ensure content and information are free from errors, inaccuracies, or bias? Of course. No one wants a situation where Google’s AI chatbot, Gemini, provides inaccurate information that decreases trust in the reliability of the AI. But unlike the government (which can imprison you), private companies do not carry the force of law and are subject to public outcry and financial losses. Google’s clear mistakes, which triggered a market selloff of roughly $90 billion, motivated substantial change[11] because Google’s AI competitors rushed to fill the gap and produce a better product. Government mandates impose one-size-fits-all rules that negate these market incentives to improve products and services.
11. Derek Saul, Google’s Gemini Headaches Spur $90 Billion Selloff, Forbes, February 26, 2024.
The future use of these tools by the government to determine “truth” or root out “bias” has significant implications for government censorship, the suppression of certain viewpoints, and even the sharing of the Gospel and Biblical truth. Frederick Douglass, the famous abolitionist, writer, and former slave, underscored the importance of free speech when he stated in 1860:
No right was deemed by the fathers of the Government more sacred than the right of speech. It was in their eyes, as in the eyes of all thoughtful men, the great moral renovator of society and government. Daniel Webster called it a homebred right, a fireside privilege. Liberty is meaningless where the right to utter one’s thoughts and opinions has ceased to exist. That, of all rights, is the dread of tyrants. It is the right which they first of all strike down. They know its power.[12]
12. Frederick Douglass, A Plea for Free Speech in Boston, 1860.
Conclusion
AI is not bringing the Terminator to your doorstep or any sort of existential crisis about what it means to be human. But we, as Christians, do continuously face the looming threat of the supremacy of the state and its creeping efforts to centralize its power.[13] AI is one example of how the state seeks to use “crisis” to centralize its authority, claiming to secure your “safety” by deciding for you which information is “true” or “false,” thereby controlling the very speech and information you can consume.
13. See R.C. Sproul, Statism: The Biggest Concern for the Future of the Church in America, November 12, 2012, Ligonier Ministries.
We face a crossroads right now regarding the direction we will take. I’ve laid out what may appear to be a hopeless situation, but let the reader understand: all is not lost. None of the policy positions described in this article is set in stone; they can be changed and stopped if people have the will to act. Every American Christian has a voice in our Republic and the ability to wield political authority in this country through a vote, including in the upcoming election this fall, on the future direction of our nation as well as how we regulate and consider AI.
Now is the time to be vigilant and act wisely as we advocate for elected officials who properly understand the role of government and the dangers of unbridled or unchecked power. How will you respond?