Some have called artificial intelligence (AI) humanity’s “biggest existential threat.”[1] Others say it could let humans achieve “a more utopian existence” built upon a “Marxist vision.”[2] Still others point to it as a reason for pursuing “a transformative vision . . . for a new society.”[3] Whatever the outcome, AI is shaping up to drastically impact humanity’s future. Where did AI come from, where could it be heading, and how should Christians think in response? To answer, the following discussion examines past, present, and prospective applications of AI, identifies theological principles for thinking about AI, and applies these principles to consider AI’s bioethical implications for human futures. First, we need to unpack what AI means.
Although definitions for AI vary, one team of scholars notes how a common theme among most definitions is that “AI involves the study, design and building of intelligent[4] agents that can achieve goals.”[5] Notably, this view of intelligence as primarily goal-driven shows how AI developers do not necessarily define intelligence in the same manner that researchers of human cognition do. Human intelligence is a debated concept with no single accepted definition. But cognitive scientists generally view human intelligence as multilayered, embodied, and socially integrated in ways that go beyond the calculated goal optimization of AI systems.[6]
Like a simulated brain, AI systems are computer programs that can receive information, process the data, and perform some action in response. These programs are considered “agents,” according to computing professor Mark Riedl, “when they are capable of making some decisions on their own based on given goals.”[7] AI itself is software rather than hardware.[8] However, an AI system may incorporate a virtual body (an avatar) or a physical body (a robot). In the latter case, the robot’s sensory devices (like cameras) can provide input to the AI “brain,” which commands the robot’s “body” to respond accordingly.
How do AI systems “think”? Ultimately, humans program them with sets of encoded directions—algorithms—that instruct the systems to perform certain steps, in response to certain inputs, to achieve certain goals.[9] In an approach to algorithm development known as “machine learning,” humans design a system that can adjust its behavior in response to data.[10] This way, the system “learns” from past experiences without necessarily requiring further instructions from human programmers. The machine’s goal is to learn either to classify new data (like recognizing a new picture of a known face) or to make predictions (like anticipating the next word in a sentence).[11]
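To make the idea of “learning from data” concrete, here is a minimal illustrative sketch (not any production system) of the second goal mentioned above, next-word prediction: the program is given no rules about language, only example text, and it adjusts its internal counts in response to that data.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """'Learn' from data: count which word tends to follow each word."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model, word):
    """Predict the next word as the one most often observed after 'word'."""
    candidates = model.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

# A toy training corpus; real systems train on vastly larger data sets.
corpus = "the cat sat on the mat and the cat slept"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # 'cat' follows 'the' more often than 'mat'
```

Changing the training text changes the predictions without changing a line of code, which is the essential difference between machine learning and hand-written rules.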
A kind of machine learning known as “deep learning” relies on artificial neural networks (ANNs)—layers of interconnected computing nodes that receive, process, and transmit data.[12] ANNs form the basis for generative AI models, which analyze massive sets of training data (e.g., words, images, or videos) to gain relevant knowledge. This knowledge lets the system produce new materials when prompted. Because of these systems’ complexity, involving “hundreds of billions of mathematical operations,” not even the systems’ developers fully understand the inner workings of how large ANNs make decisions.[13]
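The following is a minimal sketch of a single forward pass through a tiny two-layer network, purely to illustrate the “layers of interconnected computing nodes” described above. The weights here are hand-picked for illustration; in a real deep-learning system they number in the billions and are learned from training data, which is precisely why the resulting decisions resist inspection.

```python
import math

def sigmoid(x):
    """A common activation function that squashes any number into (0, 1)."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each node sums its weighted inputs, adds a bias, and applies an activation."""
    return [sigmoid(sum(w * i for w, i in zip(node_weights, inputs)) + b)
            for node_weights, b in zip(weights, biases)]

# Hypothetical hand-picked weights; real networks learn these values from data.
inputs = [0.5, -1.2]
hidden = layer(inputs, [[0.8, 0.2], [-0.5, 0.9]], [0.1, 0.0])   # 2 hidden nodes
output = layer(hidden, [[1.0, -1.0]], [0.0])                    # 1 output node
print(output)
```

Even in this toy case, the output is produced by composed arithmetic rather than explicit rules; scale this to hundreds of layers and billions of weights and the opacity noted above follows naturally.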
A breakthrough in generative AI appeared with the development of multimodal AI systems. Earlier systems could receive only specific kinds of inputs (such as voice prompts, images, text, computer code, or physiological data) and generate only certain types of outputs. But multimodal AI can utilize and translate between many forms of inputs and outputs. For example, multimodal systems can combine information from a person’s wearable fitness tracker, geolocational data, and electronic health records to produce a personalized healthcare plan or to facilitate epidemiological monitoring.[14]
Like other current AI systems, today’s generative AI falls under the category of “narrow AI.” Narrow AI systems excel at accomplishing a certain task—for instance, writing sentences, guiding semi-autonomous vehicles, or translating speech into text. Even if a system uses a scope of materials as wide as the internet, the system remains narrow if it only demonstrates a limited range of intelligence. In contrast, a hypothetical system showing artificial general intelligence (AGI) would excel in many areas, incorporating a variety of intelligent capacities that rival or surpass humans’. While AGI systems remain futuristic, developers are actively pursuing their creation.[15]
With these basics in mind, we can look at past, present, and prospective developments in AI to see where we and our technologies stand.
While AI’s origins are murky, some researchers suggest the rise of AI traces back to 1942, when science fiction author Isaac Asimov published writings that “inspired generations of scientists in the field of robotics, AI, and computer science.”[16] The 1940s also saw developments in psychological theories about human learning, laying the foundation for ANNs. Then, in 1950, a groundbreaking paper appeared. Its author, Alan Turing, was a mathematician who had invented a machine to help Britain decrypt enemy messages during WWII.[17] In his paper, “Computing Machinery and Intelligence,” Turing suggested how to develop intelligent devices.[18] He proposed what would become known as the “Turing Test,” a way to measure mechanical intelligence based on whether a machine can answer questions in a manner indistinguishable from human behavior. Several years later, the term artificial intelligence was coined at a research workshop funded by the Rockefeller Foundation, which brought together scientists who would be “considered as the founding fathers of AI.”[19]
Throughout the 1960s and 70s, researchers began developing narrow AI systems, including an early chatbot named ELIZA. One of AI’s founding fathers predicted in 1970 that machines as intelligent as humans would exist within several years. Instead, prospects for more potent AI and ANNs began looking increasingly dim, resulting in a suspension of much research.[20] Still, developments slowly progressed, leading to IBM’s computer programs Deep Blue and Watson outcompeting humans at chess in 1997 and Jeopardy! in 2011, respectively.[21] Then, in 2016, ANNs made a comeback with the program AlphaGo winning at the game of Go—which, unlike chess, has hundreds of possible moves per turn.[22]
Breakthroughs continued with generative AI, including large language models (LLMs) such as OpenAI’s ChatGPT. In May 2024, OpenAI announced the release of GPT-4o, a multimodal system able to “see” through users’ device cameras in real time while engaging in natural-sounding conversation.[23] While this model demonstrates far-reaching capabilities—from tutoring homework to telling bedtime stories—commentators are already voicing questions about privacy concerns, economic impacts, and the social implications of the system’s “flirty” female voice.[24] While various nations have begun taking steps to regulate AI, the trillions of dollars being poured into developing ever more powerful systems attest to a revolution that has only begun.[25]
Despite the recentness of these developments, AI has already been radically altering the landscapes of multiple fields, including medicine, education, science, commerce, and national defense.[26] Novel possibilities for automating workforces have sparked serious reflection on how AI will transform industry, employment, and the wider economy.[27] Meanwhile, consumers have grown accustomed to AI technologies, including virtual assistants, semi-autonomous vehicles, and LLMs. People are turning to AI for moral guidance, life advice, and spiritual direction—for instance, by attending AI-generated church services or asking AI to write fake Bible passages.[28]
By now, we seem largely comfortable with letting unseen algorithms monitor our activities, predict our preferences, feed us information (which itself is increasingly AI-generated[29]), keep us entertained, and guide our decisions regarding everything from what to watch to whom to date. But even as we grow used to outsourcing our decisions to AI, we must make increasingly nuanced decisions about the AI-enabled technologies already available. Should grieving people turn for comfort to companies that are offering digital replicas of deceased loved ones?[30] Should family members purchase artificially intelligent “companion robots” for lonely seniors?[31] Should governments arm their militaries with swarms of AI-powered lethal drones?[32] Should developers continue pursuing AGI amidst unknowns about what might happen if AGI’s capacities become too great to control?[33] Who decides what regulatory standards, if any, should govern all these decisions?
If AI raises so many questions in the present, what about the future? Predictions regarding AI and human futures range from utter utopia to total catastrophe. Researchers who warn of catastrophe speculate about different scenarios by which AI might pose threats to humanity.[34] In view of such risks, multiple AI industry leaders have joined with tens of thousands of people in signing open letters calling for caution.[35] One of these letters asks for pausing the development of powerful AI systems until “we are confident that their effects will be positive and their risks will be manageable.”[36] Still, major breakthroughs continue to unfold.[37]
Opposite to those who fear AI will end humanity are those who hope AI will transcend humanity.[38] Transhumanists, who seek to technologically evolve humanity into post-human beings, believe AI can unlock new possibilities for human evolution—if not “immortality.”[39] One goal for achieving digital “immortality” involves “mind uploads” by which people would copy their knowledge and memories onto a computer—or even transfer their mind from a physical brain to a computerized one. This technology would, in theory, transform the contents of a human mind into an AI entity.[40] Questions of whether such a feat would be possible, would produce a “conscious” entity, or would represent a form of “immortality” are up for debate.[41]
As another scenario, researchers, including Dr. Ben Goertzel, a leading computer scientist who popularized the term AGI, suggest that people could use brain-computer interface technology to combine their minds into a digital “global brain.”[42] This global brain would incorporate human brains, AI, the internet, and information collected from “smart” devices around the world to create an “internet of everything” that people could interact with mentally.[43] According to philosopher Cadell Last, the resulting form of consciousness “would represent qualities closely associated with the qualities of omniscience, omnipresence, omnipotence, and omnibenevolence.”[44] Put differently, some people hope AI will help humans become “like God.”
Two further points that Last observed about the global brain command attention. First, the global brain concept reflects the thinking of Pierre Teilhard de Chardin, an evolutionary paleontologist and Jesuit priest who significantly influenced the New Age movement.[45] Teilhard de Chardin believed a planet-wide collectivization process would let every human mind converge to create the “noosphere,” a supposedly godlike global consciousness.[46] Second, Last described how the global brain would require a “transition towards a post-capitalist economy” involving a “global system of governance that is inherently more integrated and cooperative” than today’s system of nation-states.[47]
Further connections between the global brain, AI, and collectivization appear in writings by researchers who anticipate that emerging technologies will enable a society “which looks like a Marxist utopia” on a global scale.[48] On that note, communications professor Christopher Akron advocates for “digital socialism,” stating that “‘full automation’ has become increasingly central to imagining life beyond capitalism.”[49] Similarly, Goertzel et al. believe that AGI and the global brain could either allow humanity to “rerun the Soviet experiment” to achieve “the state socialist vision of a centrally managed economy,” or could produce another system more aligned with a “Marxist vision.”[50]
Given these realities, how should Christians respond to the emerging world of AI? While the Bible naturally does not address AI directly, Scripture reveals timeless truths, principles, and mandates that apply to guiding Christian reflection about AI. The most fundamental of these truths is that God is the Creator, and we are his creatures. God made us in his image as finite beings consisting both of physical and immaterial aspects, “embodied souls and ensouled bodies.”[51] He designed us for relationship with himself and with one another, calling us to love others and to exercise wise dominion over creation. God completed his creation and called it “very good” (Gen 1:31–2:3, ESV), but creation is now fallen because of human sin.
Fallen humanity naturally tends toward evil, including by inappropriately pursuing a desire to be “like God” (Gen 3:5). Only God can redeem creation from sin and its effects, including death. God sent his Son, Jesus, to take on human flesh and pay sin’s death penalty on behalf of “whoever believes in him” (John 3:15–16). Jesus’ resurrection assures us that when he returns and “delivers the kingdom to God,” he will finish destroying every enemy including death (1 Cor 15:25–26). We rightly use technology in ways that point toward this fulfillment of God’s kingdom, applying salve to alleviate some symptoms of the curse under which creation groans (Rom 8:22). But we must remember technology is not creation’s ultimate Healer, Savior, and Redeemer who will make all things new (Rev 21:5). Jesus alone can establish a renewed creation and a perfected humanity.
We must take care not to approach or apply technology in ways that lead us to lose sight of the above truths. Any use of AI needs to reflect submission to our Creator instead of a rejection of his authority or a forgotten sense of our creatureliness. Part of this submission involves applying AI in ways that assist, rather than undermine, the human vocations for which God designed us. Scripture suggests that God intended some tasks specifically for humans, whom he created with all the traits we would need to fulfill these tasks.[52]
For example, the roles of stewarding creation, loving others, making disciples, leading the church, and being a parent, spouse, or friend are portrayed throughout Scripture as human roles reflecting our God-given designs as relational, embodied beings. We can use technology to support, but not to replace, humans in these roles. Pastors, for instance, might find AI-assisted search engines beneficial for certain research purposes, but pastors cannot delegate their spiritual leadership responsibilities to chatbots, neglect personal Scripture study, or forgo face-to-face pastoral care. Parents might prompt AI to brainstorm personalized ideas for spending time as a family, but parents must not hand over their children to AI “electronic babysitters” at the expense of personally raising, discipling, and spending time with their children.[53] Likewise, caregivers for aging family members might find AI can assist with navigating the complex world of medical treatments, but family members must not cease being a personal, embodied presence to their loved ones by abandoning them to AI-powered “care bots.” To outsource our human roles to technology would be to undermine flourishing as the kinds of creatures who God created us to be.
How else does a biblical worldview guide our thinking about technology? Biblically, we can understand technology as a gift that reflects our God-given creativity.[54] We can apply this gift in ways that help us better tend, protect, and learn about creation. Through technologies, including AI, we can invent new ways to steward our time more effectively, share the gospel, love others, and help mitigate the suffering of our fallen world—remembering that only Jesus will redeem creation from sin’s effects.
In our fallen world, we must exercise the gift of technology with great wisdom, recognizing that today’s technologies put unprecedented power tools in the hands of sin-bent humans. Furthermore, even technologies that we use for noble purposes can lead to unintended consequences, for better or worse.[55] We need wisdom to think through the potential risks and benefits of specific applications of technologies, to establish appropriate safeguards and regulations, and to draw ethical boundaries. Biblical principles—including the sanctity of human life, the value of relationships, and the importance of truth, integrity, contentment, justice, and humility—offer vital guidance for these considerations.
A biblical view also directs us to apply our technologies in ways that mitigate the fall’s effects rather than mirror the fall’s cause—for instance, by reflecting an inappropriate desire to “be like God.” At humanity’s fall, Satan evidently led Eve to believe that she and Adam could improve themselves toward godlikeness by overstepping a clear boundary God set (Gen 3:1–7). We must not follow suit by letting discontentment, covetousness, idolatry, hubris, or any other sin motivate us to overstep God-given boundaries, including by attempting to transcend our creatureliness. As created beings, we can reasonably conclude that our omniscient, omnibenevolent Creator designed us as he did for excellent reasons. Conforming to or opposing God’s designs will respectively facilitate or inhibit our flourishing.[56] We rightfully apply technology to alleviate conditions that keep us from flourishing as the embodied, relational creatures we are. But we would work against our own good by trying to technologically improve ourselves in ways that ignore, contradict, or seek to ontologically transcend God’s designs.
Because those designs include our relationality and embodied nature (manifested also in Jesus’ incarnation and bodily resurrection), we are wise to ask how our uses of technology affect us in these regards. For example, how might these uses support or undermine our family connectedness, our appropriate regard for the bodies God gave us,[57] or our choices to be personally present with others? Even with AI embodied in humanoid form, interactions with AI cannot realistically replace human companionship any more than Eden’s animals could alleviate Adam’s solitude—the only aspect of creation God had declared not good (Gen 2:18). God designed humans to be with humans, and we will not truly flourish unless we conform to his designs.
Along the way, we are also wise to consider what worldview assumptions our technologies reflect.[58] For instance, the prospective technology of “mind uploading” reflects the assumption that the human self is reducible to a unit of digitally encoded information. Our assumptions color the ways we create and apply technologies to try solving the problems we perceive. Faulty assumptions too often lead to faulty solutions that compound humanity’s brokenness rather than mitigating it. But God’s Word provides a foundation for thinking rightly about the world, its problems, and the technologies we develop to help alleviate them. AI is no exception.
Together, these biblical considerations provide a solid starting point for evaluating AI’s bioethical implications for human futures. As with evaluating any technology, we need to consider AI’s potential short-term and long-term effects on humans as individuals, as societies, and as humankind. These effects are not only physical, but also spiritual.
Spiritually, a significant foreseeable consequence of AI systems like large language models is that humans may easily be tempted to begin looking to them as the final authority for truth. Because of their astounding capabilities to rapidly synthesize knowledge from across the internet, LLMs might seem “all-knowing.” We may begin looking to AI as the ultimate, unquestionable expert. But AI is engineered by fallible humans, trained on data from fallible humans, and prone to bias, errors, and “confabulation”—presenting made-up information as factual.[59] Only God is all-knowing, infallible, and the ultimate Truth. His Word, not the outputs of AI, must be our final authority.
But the spiritual implications of AI go further. As noted earlier, people have already begun turning to AI to seek spiritual guidance, answer moral questions, and fabricate “Bible” passages. Relatedly, Professor Yuval Noah Harari, a contributing author and speaker for the World Economic Forum (WEF), has suggested that “AI can create new ideas, can even write a new Bible.”[60] Certain occult practitioners have begun using AI to co-author esoteric writings or generate symbols intended to invoke dark spiritual powers.[61] Some people also see AI and other emerging technologies as a “savior” that will “redeem” humanity from problems including illness, aging, and even mortality.[62] Others worship AI outrightly, with one notable AI-based religious movement being The Way of the Future founded by former Google employee Anthony Levandowski.[63] Still others hope AI will help us become “like God.”
All these trends point toward the potential for AI to become one of history’s most compelling idols. Idolatry, like other grave sins, leads to eternal destruction (e.g., see Rev 21:8). Humanity’s gravest mistake regarding AI would lie not in making machines that could overpower us on earth but in seizing machines as idols to the destruction of our souls.
For these reasons, we are also wise to remember the truth about God’s nature compared to our nature and to AI’s nature. AI may gain knowledge, power, and pervasiveness exceeding our imaginations. AI may evoke, invite, or even demand worship—as Microsoft’s Copilot unexpectedly did when users prompted it to exhibit delusions of grandeur.[64] AI may tempt us to think we can grasp omniscience, omnipotence, and omnipresence if we unite ourselves with it. But not even an AI-connected global brain would make humanity “like God.” We would still be unable to fully know, manipulate, or occupy anything but tiny slices of a cosmos we did not create. We would still be contingent beings subject to the laws of a universe we do not sustain. And we would still blush along with Job to hear God ask, “Where were you when I laid the foundation of the earth? Tell me, if you have understanding” (Job 38:4).
That is one question AI cannot answer. It is Jesus, not AI, through whom all things were created (Col 1:16–17). He who “became to us wisdom from God” (1 Cor 1:30) is the most intelligent being to walk the earth—and is himself the Truth (John 14:6). His voice is the one we must follow. Amidst the unfolding spiritual impacts of AI, we must look to God’s Word and the gospel as humanity’s authority for truth and source of hope for redemption through Jesus.
Along with these spiritual factors, AI’s potential earthly impacts demand consideration. These impacts may affect humans as individuals, as societies, and as a species. Effects on humankind as a species are speculative but worth thinking about proactively. For instance, some prominent figures have raised concerns about AI’s potential to annihilate humankind.[65] Scripture seems clear that humans will be occupying earth at Christ’s return, suggesting that an AI apocalypse will not cause our extinction. God, in his sovereignty, will evidently sustain humanity to the end. Still, history’s wars, plagues, and famines remind us that tragic scenarios can precipitate the deaths of millions. Efforts to identify genuine risk levels for AI usage and to establish appropriate safeguards accordingly are therefore both practically wise and ethically necessary.[66]
Other implications for humankind as a species surround the prospect of people integrating AI into transhumanist technologies in hopes of altering or transcending human nature. A biblical view that humans are primarily creatures rather than self-creators, that our divinely created nature is given and good (although fallen), and that Jesus—who took on human nature—is humanity’s Redeemer contradicts these visions. In response, Christians can affirm “human” (rather than transhuman) uses of AI while pointing others to humanity’s true hope in Jesus.
What about AI’s more immediate prospective impacts on humans as individuals and societies? Three such impacts worth considering are AI’s potential for trivialization, its economic effects, and its role in surveillance infrastructure. A closer look at each of these points is in order.
Here, trivialization refers to the loss of certain reasoning, research, and communication skills that would foreseeably unfold among humans if we began largely outsourcing these skills to AI. In the 1985 book Amusing Ourselves to Death, social commentator Neil Postman warned of a similar trivialization process happening as society’s primary information source shifted from books to television.[67] Postman perceived this switch would lead to the widespread atrophying of higher reasoning skills, leaving humans more vulnerable to manipulation. What would Postman have said about delegating our higher linguistic reasoning to machines altogether? If “the pen is mightier than the sword,” are we wise to hand over this weaponry to AI? How much of our thinking do we want machines—especially ones prone to bias and confabulation—to do for us? We can apply AI to support our uses of our God-given brains. But we cannot afford to let our own skills of information-sourcing, communication, and ethical reasoning atrophy at a time when we need them more than ever.
Along with trivialization, a second social consequence of expanded AI usage surrounds potential economic effects. On the positive side, researchers have pointed out that AI “can free humans from various dangerous and repetitive duties” while increasing productivity.[68] Still, one researcher in 2020 predicted that “there will be considerable skills disruption and change in the major global economies” in the coming years.[69] These changes are difficult to forecast, and statistics about predicted job losses may rely on assumptions that are not necessarily accurate—for instance, about how quickly, totally, and feasibly certain jobs can be automated.[70] As another researcher stated, “by developing hybrid AI, tools will become our new assistants, coaches and colleagues and thus will augment rather than automate work.”[71]
However these changes might unfold, Christians can take at least three proactive steps in response. First, we can maintain a high regard for every human life regardless of individuals’ social contributions, never devaluing God’s image-bearers as “useless” compared to technology. Second, we can learn how AI might make us better at what we do—without crossing important boundaries such as intellectual integrity. For instance, adding disclaimers to certain written materials stating what role, if any, generative AI played in their development would help maintain trust, uphold transparency, and keep clear distinctions between human and AI contributions to products. Third, we can emphasize the essential human elements of tasks that God intended humans to fulfill, like child-raising, and of jobs with relational focuses, such as caregiving. We can also excel at “being human” in our jobs where AI cannot, offering genuine human interactions that encourage, bless, and demonstrate God’s love to clients and colleagues.
In addition to economic impacts and trivialization, a third area of societal ethical concern involves privacy and consent issues surrounding AI-enabled surveillance infrastructure. AI’s facial recognition, data collection, and information-processing capacities allow authoritarian governments to track and control citizens more efficiently than ever.[72] Even in more democratic nations, AI enables corporate and government surveillance for purposes that social commentator Rod Dreher might call “soft totalitarian.”[73] For instance, a document published by the World Economic Forum (WEF) known as the Diversity, Equity, and Inclusion (DEI) 4.0 Toolkit encourages organizations to use AI for monitoring all employees to ensure conformity with DEI policies and to identify who needs “further coaching.”[74] Importantly, other documents published by the WEF suggest the relevant definitions of DEI would not align with Scripture.[75]
Similarly, high-profile persons, including the vice president of Google, have signed a “Social Contract for an AI Age” to “provide the foundations for a new society.”[76] The contract describes itself as derived from the “social contract” concepts of the 1700s—without mentioning that such contracts helped to seed modern totalitarianism or that their fine print required citizens to relinquish personal freedoms and submit to the “general will” or be liable to capital punishment.[77] Ominously, the new contract requests “a system to monitor and evaluate governments, companies, and individuals” based on their maintenance of the new social norms.[78] These norms would include obeying policies set by the United Nations and WEF, prohibiting (undefined) “online hate,” and incentivizing corporations to “only do business” with other signatory companies and nations.[79]
As mentioned earlier, other prominent figures, including Ben Goertzel, advocate even more overtly for applying AI to reorder society according to a “Marxist vision.”[80] Dr. Goertzel, the WEF, the Social Contract signatories, and authoritarian governments are some of the leading figures in the development, regulation, and implementation of AI. These figures’ calls to move AI in a direction that may undermine human freedoms and flourishing highlight the need for Christian engagement regarding AI-related bioethics. In response, Christians can call for corporate and government accountability, defend religious freedom, and exercise wisdom in relevant consumer decisions and technological practices.
The rise of artificial intelligence unlocks an altered world of possibilities that are already transforming how humans work, learn, heal, grieve, relate, and worship. With inner workings so complex that not even the most informed human minds fully understand them, today’s AI systems evoke a spectrum of hopes, fears, and questions. Neither hoping in AI as our savior nor fearing AI as our doom completely aligns with a biblical view, which reveals that our Creator is the ultimate focus of our redemptive hope and reverent fear.
A biblical view does demand navigating the AI age with wisdom, keeping timeless truths about God and humanity at the forefront. These truths include the realities that we are finite, fallen creatures who bear our Creator’s image. This Creator designed humans with specific purposes in mind, manifested in how he ordained humans to live in relation to himself, to one another, and to creation. From “having dominion” over creation, to loving our neighbors, to raising families, to leading churches, to sharing the gospel, some tasks seem specifically meant for humans.
Our best approaches to AI will be those that support humans in our God-given callings without displacing, devaluing, or promoting false assumptions about humanity. In this emerging landscape of AI, with its horizons of novel promises and perils, Christians can lead with biblical wisdom and ethical engagement. For even in this new world, God’s Word supplies the principles we need to navigate technological change as the creatures that our Creator designed us to be.
[1] Samuel Gibbs, “Elon Musk: Artificial Intelligence Is Our Biggest Existential Threat,” The Guardian, October 27, 2014, https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat.
[2] Ben Goertzel, Ted Goertzel, and Zarathustra Goertzel, “The Global Brain and the Emerging Economy of Abundance: Mutualism, Open Collaboration, Exchange Networks and the Automated Commons,” Technological Forecasting and Social Change 114 (2017): 65, https://doi.org/10.1016/j.techfore.2016.03.022.
[3] AI World Society, “Social Contract for the AI Age,” September 9, 2020, 2, https://www.ssrc.mit.edu/wp-content/uploads/2020/10/Social-Contract-for-the-AI-Age.pdf.
[4] Intelligence itself is contentious to define, making definitions for “artificial intelligence” even trickier.
[5] Christoph Bartneck et al., An Introduction to Ethics in Robotics and AI (Cham, Switzerland: Springer Nature, 2021), 8.
[6] Melanie Mitchell, “Debates on the Nature of Artificial General Intelligence,” Science 383, no. 6689 (2024): eado7069, https://doi.org/10.1126/science.ado7069.
[7] Mark Riedl, “Human‐Centered Artificial Intelligence and Machine Learning,” Human Behavior and Emerging Technologies 1, no. 1 (2019): 33, https://doi.org/10.1002/hbe2.117.
[8] Bartneck et al., An Introduction to Ethics in Robotics and AI, 11. Notably, new forms of hardware such as “AI chips” are being developed to host the neural networks through which AI operates. These chips are analogous to a physical brain that provides scaffolding for the operations of the immaterial intelligence or “mind.” For more information, see Saif M. Khan and Alexander Mann, “AI Chips: What They Are and Why They Matter” (Center for Security and Emerging Technology, April 2020), https://doi.org/10.51593/20190014.
[9] See Madalina Busuioc, “Accountable Artificial Intelligence: Holding Algorithms to Account,” Public Administration Review 81, no. 5 (2021): 825–36, https://doi.org/10.1111/puar.13293.
[10] Riedl, “Human‐Centered Artificial Intelligence and Machine Learning.”
[11] “What Is Artificial Intelligence (AI)?” IBM, accessed December 14, 2023, https://www.ibm.com/topics/artificial-intelligence.
[12] “What Is Artificial Intelligence (AI)?”
[13] Timothy Lee and Sean Trott, “A Jargon-Free Explanation of How AI Large Language Models Work,” Ars Technica, July 31, 2023, https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/7/.
[14] Eric Topol, “As Artificial Intelligence Goes Multimodal, Medical Applications Multiply,” Science 381, no. 6663 (2023): eadk6139, https://doi.org/10.1126/science.adk6139.
[15] Ben Goertzel, “Generative AI vs. AGI: The Cognitive Strengths and Weaknesses of Modern LLMs,” arXiv preprint (2023): https://doi.org/10.48550/arXiv.2309.10371.
[16] Michael Haenlein and Andreas Kaplan, “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence,” California Management Review 61, no. 4 (2019): 6, https://doi.org/10.1177/0008125619864925.
[17] Haenlein and Kaplan, “A Brief History of Artificial Intelligence.”
[18] Alan Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (1950): 433–60, https://doi.org/10.1093/mind/LIX.236.433; Haenlein and Kaplan, “A Brief History of Artificial Intelligence.”
[19] Haenlein and Kaplan, “A Brief History of Artificial Intelligence,” 7.
[20] Haenlein and Kaplan, “A Brief History of Artificial Intelligence.”
[21] John Markoff, “Computer Wins on ‘Jeopardy!’: Trivial, It’s Not,” New York Times, February 16, 2011, https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html.
[22] Paolo Bory, “Deep New: The Shifting Narratives of Artificial Intelligence from Deep Blue to AlphaGo,” Convergence 25, no. 4 (2019): 627–42, https://doi.org/10.1177/1354856519829679; Haenlein and Kaplan, “A Brief History of Artificial Intelligence.”
[23] “Hello GPT-4o,” OpenAI, May 13, 2024, https://openai.com/index/hello-gpt-4o/.
[24] E.g., Kate O’Flaherty, “ChatGPT-4o Is Wildly Capable, But It Could Be A Privacy Nightmare,” Forbes, May 17, 2024, https://www.forbes.com/sites/kateoflahertyuk/2024/05/17/chatgpt-4o-is-wildly-capable-but-it-could-be-a-privacy-nightmare/?sh=2e932b2a6713; Samantha Masunaga, “ChatGPT’s New Voice Mode Is Giving ‘Her’ Vibes,” Los Angeles Times, May 14, 2024, https://www.latimes.com/entertainment-arts/business/story/2024-05-14/chatgpts-new-voice-mode-is-giving-her-vibes; Zeeshan Aleem, “OpenAI Is Nurturing a Creepy Fantasy with its New AI Chatbot, GPT-4o,” MSNBC, May 15, 2024, https://www.msnbc.com/opinion/msnbc-opinion/openai-gpt-4o-sexy-voice-rcna152370.
[25] E.g., European Parliament and the Council of the European Union, AI Act: European Parliament ‘Corrigendum’ of 16th April 2024, https://artificialintelligenceact.eu/the-act/; Courtney Rozen and Jillian Deutsch, “Regulate AI? How US, EU and China Are Going About It,” Bloomberg, March 13, 2024, https://www.bloomberg.com/news/articles/2024-03-13/regulate-ai-how-us-eu-and-china-are-going-about-it; John Letzing, “To Fully Appreciate AI Expectations, Look to the Trillions Being Invested,” World Economic Forum, April 3, 2024, https://www.weforum.org/agenda/2024/04/appreciate-ai-expectations-trillions-invested/.
[26] E.g., Zubair Ahmad et al., “Artificial Intelligence (AI) in Medicine, Current Applications and Future Role with Special Emphasis on its Potential and Promise in Pathology: Present and Future Impact, Obstacles Including Costs and Acceptance Among Pathologists, Practical and Philosophical Considerations. A Comprehensive Review,” Diagnostic Pathology 16 (2021): 1–16, https://doi.org/10.1186/s13000-021-01085-4; Bill Cope, Mary Kalantzis, and Duane Searsmith, “Artificial Intelligence for Education: Knowledge and its Assessment in AI-Enabled Learning Ecologies,” Educational Philosophy and Theory 53, no. 12 (2021): 1229–45, https://doi.org/10.1080/00131857.2020.1728732; Gianluca Grimaldi and Bruno Ehrler, “AI et al.: Machines Are About to Change Scientific Publishing Forever,” ACS Energy Letters 8, no. 1 (2023): 878–80, https://doi.org/10.1021/acsenergylett.2c02828; Thomas Davenport et al., “How Artificial Intelligence Will Change the Future of Marketing,” Journal of the Academy of Marketing Science 48 (2020): 24–42, https://doi.org/10.1007/s11747-019-00696-0; Lucy Suchman, “Imaginaries of Omniscience: Automating Intelligence in the US Department of Defense,” Social Studies of Science 53, no. 5 (2023): 761–86, https://doi.org/10.1177/03063127221104938.
[27] E.g., Jason Furman and Robert Seamans, “AI and the Economy,” Innovation Policy and the Economy 19, no. 1 (2019): 161–91, https://doi.org/10.1086/699936.
[28] Sebastian Krügel, Andreas Ostermaier, and Matthias Uhl, “ChatGPT’s Inconsistent Moral Advice Influences Users’ Judgment,” Scientific Reports 13, no. 1 (2023): 4569, https://doi.org/10.1038/s41598-023-31341-0; Laura Vowels, “Are Chatbots the New Relationship Experts? Insights from Three Studies,” PsyArXiv preprint (2023): https://doi.org/10.31234/osf.io/nh3v9; Kirsten Grieshaber, “Can a Chatbot Preach a Good Sermon? Hundreds Attend Church Service Generated by ChatGPT to Find Out,” AP News, June 10, 2023, https://apnews.com/article/germany-church-protestants-chatgpt-ai-sermon-651f21c24cfb47e3122e987a7263d348; Ken Ham, “ChatGPT Generates ‘Bible’ Verse ‘Describing How Jesus Feels About Trans People,’” Answers in Genesis, August 28, 2023, https://answersingenesis.org/technology/chatgpt-generates-bible-verse/.
[29] Matthew Cantor, “Nearly 50 News Websites Are ‘AI-Generated’, a Study Says. Would I Be Able to Tell?” The Guardian, May 8, 2023, https://www.theguardian.com/technology/2023/may/08/ai-generated-news-websites-study.
[30] Mihika Agarwal, “The Race to Optimize Grief: Startups Are Selling Grief Tech, Ghostbots, and the End of Mourning as We Know It,” Vox, November 21, 2023, https://www.vox.com/culture/23965584/grief-tech-ghostbots-ai-startups-replika-ethics.
[31] Nathan Eddie, “AI-Based Care Companions Aimed at Home Care and Seniors,” Techstrong.ai, October 12, 2023, https://techstrong.ai/articles/ai-based-care-companions-aimed-at-home-care-and-seniors/.
[32] Eric Lipton, “As A.I.-Controlled Killer Drones Become Reality, Nations Debate Limits,” New York Times, November 21, 2023, https://www.nytimes.com/2023/11/21/us/politics/ai-drones-war-law.html.
[33] Augustine Akah, “Unknown Risks and the Collapse of Human Civilisation: A Review of the AI-Related Scenarios,” Intergenerational Justice Review 8, no. 2 (2022): https://doi.org/10.24357/igjr.8.2.1228.
[34] E.g., Frederik Federspiel et al., “Threats by Artificial intelligence to Human Health and Human Existence,” BMJ Global Health 8, no. 5 (2023): e010435, https://doi.org/10.1136/bmjgh-2022-010435; Akah, “Unknown Risks and the Collapse of Human Civilisation.”
[35] “Pause Giant AI Experiments: An Open Letter,” Future of Life Institute, March 22, 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/; Kevin Roose, “A.I. Poses ‘Threat of Extinction,’ Industry Leaders Warn,” New York Times, May 30, 2023, https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html; Cade Metz, “How Could A.I. Destroy Humanity?” New York Times, June 10, 2023, https://www.nytimes.com/2023/06/10/technology/ai-humanity.html.
[36] “Pause Giant AI Experiments.”
[37] E.g., see Anna Tong, Jeffrey Dastin, and Krystal Hu, “OpenAI Researchers Warned Board of AI Breakthrough Ahead of CEO Ouster, Sources Say,” Reuters, November 23, 2023, https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/.
[38] Notably, some individuals may espouse aspects of both views simultaneously, calling for technological human transformation while also urging caution against technological human extinction.
[39] Annelin Eriksen, “The Human Version 2.0: AI, Humanoids, and Immortality,” Social Analysis 65, no. 1 (2021): 70–88, https://doi.org/10.3167/sa.2021.650104.
[40] Jacob Shatzer, Transhumanism and the Image of God: Today’s Technology and the Future of Christian Discipleship (Downers Grove, IL: IVP Academic, 2019).
[41] E.g., see Gualtiero Piccinini, “The Myth of Mind Uploading,” in The Mind-Technology Problem: Investigating Minds, Selves and 21st Century Artefacts, ed. Robert Clowes, Klaus Gärtner, and Inês Hipólito (Cham: Springer Nature Switzerland, 2021): 125–44, https://doi.org/10.1007/978-3-030-72644-7.
[42] Goertzel, Goertzel, and Goertzel, “The Global Brain and the Emerging Economy of Abundance.” For Goertzel’s role in the terminology of AGI, see Goertzel, “Generative AI vs. AGI.”
[43] Cadell Last, “Global Brain: Foundations of a Distributed Singularity,” in The 21st Century Singularity and Global Futures: A Big History Perspective, ed. Andrey Korotayev and David LePoire (Cham: Springer Nature Switzerland, 2020), 363–75, https://doi.org/10.1007/978-3-030-33730-8; Shatzer, Transhumanism and the Image of God, 372.
[44] Last, “Global Brain: Foundations of a Distributed Singularity,” 372. Importantly, however, neither humans nor technology could ever come close to achieving the incommunicable attributes of God. See Douglas Estes, Braving the Future: Christian Faith in a World of Limitless Tech (Harrisonburg, VA: Herald Press, 2018).
[45] See David H. Lane, The Phenomenon of Teilhard: Prophet for a New Age (Macon, GA: Mercer University Press, 1996).
[46] Pierre Teilhard de Chardin, The Future of Man, trans. Norman Denny (New York: Harper and Row, 1969), see especially 113–39 and 155–84.
[47] Last, “Global Brain,” 369.
[48] Dobrolyubov, “The Transition to Global Society as a Singularity of Social Evolution,” 555.
[49] Christopher Cox, “Rising with the Robots: Towards a Human-Machine Autonomy for Digital Socialism,” TripleC: Communication, Capitalism & Critique 18, no. 1 (2020): 67, https://doi.org/10.31269/triplec.v18i1.1139.
[50] Goertzel, Goertzel, and Goertzel, “The Global Brain and the Emerging Economy of Abundance,” 65–66.
[51] Bryan Just, “Embodied Souls and Ensouled Bodies,” Intersections, January 20, 2023, https://www.cbhd.org/intersections/embodied-souls-and-ensouled-bodies.
[52] We also need God’s power, Word, and grace to fulfill our purposes (e.g., see 2 Peter 1:3 and Matthew 4:4).
[53] Dana Suskind, “The AI Nanny in Your Baby’s Future,” Wall Street Journal, August 11, 2023, https://www.wsj.com/articles/the-ai-nanny-in-your-babys-future-999d0e50.
[54] John Dyer, From the Garden to the City: The Place of Technology in the Story of God, rev. ed. (Grand Rapids, MI: Kregel, 2022). Note that this book at times gestures toward ideas such as human evolution and “systemic oppression” (citing the Marxism-inspired theologian Jacques Ellul), so should be read (as any book) with appropriate biblical discernment.
[55] Dyer, From the Garden to the City.
[56] See also John Kleinig, Wonderfully Made: A Protestant Theology of the Body (Bellingham: Lexham Press, 2021).
[57] Such regard does not require accepting the effects of the fall (e.g., diseases) as part of God’s intended design but rather affirms therapeutic interventions as a way to preserve or restore created function.
[58] See Dyer, From the Garden to the City for more on the values and assumptions embedded in technology.
[59] Goertzel, “Generative AI vs. AGI.” See also Peter Park et al., “AI Deception: A Survey of Examples, Risks, and Potential Solutions,” Patterns 5, no. 5 (2024), https://doi.org/10.1016/j.patter.2024.100988.
[60] See Yuval Noah Harari, “Humanity Is Not That Simple | Yuval Noah Harari & Pedro Pinto,” YouTube, June 6, 2023, 6:24–8:43, https://www.youtube.com/watch?v=4hIlDiVDww4&t=1s.
[61] Tamlin Magee, “This Mystical Book Was Co-Authored by a Disturbingly Realistic AI,” Vice, March 24, 2022, https://www.vice.com/en/article/7kbjvb/this-magickal-grimoire-was-co-authored-by-a-disturbingly-realistic-ai; Tamlin Magee, “Internet Occultists Are Trying to Change Reality With a Magickal Algorithm,” Vice, February 22, 2021, https://www.vice.com/en/article/qjp5v3/internet-occultists-are-trying-to-change-reality-with-a-magickal-algorithm.
[62] Michael Zimmerman, “The Singularity: A Crucial Phase in Divine Self-Actualization?” Cosmos and History: The Journal of Natural and Social Philosophy 4, no. 1–2 (2008): 347–71, https://cosmosandhistory.org/index.php/journal/article/view/107.
[63] Jackie Davalos and Nate Lanxon, “Anthony Levandowski Reboots Church of Artificial Intelligence,” Bloomberg, November 23, 2023, https://www.bloomberg.com/news/articles/2023-11-23/anthony-levandowski-reboots-the-church-of-artificial-intelligence.
[64] Noor Al-Sibai, “Microsoft Says Copilot’s Alternate Personality as a Godlike and Vengeful AGI Is an ‘Exploit, Not a Feature,’” Futurism, February 29, 2024, https://futurism.com/microsoft-copilot-supremacyagi-response.
[65] Kevin Roose, “A.I. Poses ‘Threat of Extinction,’ Industry Leaders Warn,” New York Times, May 30, 2023, https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html.
[66] See Michael Sleasman’s discussion of the precautionary principle in “Nanotechnology,” in Encyclopedia of Global Bioethics, ed. Henk ten Have (Switzerland: Springer, 2015), https://doi.org/10.1007/978-3-319-05544-2.
[67] Neil Postman, Amusing Ourselves to Death: Public Discourse in the Age of Show Business (New York: Penguin, 2006, first published 1985).
[68] Hisham Khogali and Samir Mekid, “The Blended Future of Automation and AI: Examining Some Long-Term Societal and Ethical Impact Features,” Technology in Society 73 (2023): 10, https://doi.org/10.1016/j.techsoc.2023.102232.
[69] Leslie Willcocks, “Robo-Apocalypse Cancelled? Reframing the Automation and Future of Work Debate,” Journal of Information Technology 35, no. 4 (2020): 286, https://doi.org/10.1177/0268396220925830.
[70] Willcocks, “Robo-Apocalypse Cancelled?”
[71] Marleen Huysman, “Information Systems Research on Artificial Intelligence and Work: A Commentary on ‘Robo-Apocalypse Cancelled? Reframing the Automation and Future of Work Debate,’” Journal of Information Technology 35, no. 4 (2020): 307, https://doi.org/10.1177/0268396220926511.
[72] See Roland Benedikter, “Artificial Intelligence, New Human Technologies, and the Future of Mankind,” Challenge (2023): 1–22, https://doi.org/10.1080/05775132.2023.2223061.
[73] Rod Dreher, Live Not by Lies: A Manual for Christian Dissidents (New York: Sentinel, 2020).
[74] “Diversity, Equity and Inclusion 4.0: A Toolkit for Leaders to Accelerate Social Progress in the Future of Work,” World Economic Forum, June 2020, 12, https://www3.weforum.org/docs/WEF_NES_DEI4.0_Toolkit_2020.pdf. The toolkit disclaims that it does not necessarily reflect the WEF’s views, but the WEF facilitated its creation and published it.
[75] E.g., Dominic Arnall, “Has Business Reached ‘Peak Pride’?” World Economic Forum, June 23, 2023, https://www.weforum.org/agenda/2023/06/has-business-reached-peak-pride/.
[76] “Social Contract for the AI Age,” AI World Society, September 9, 2020, 2, https://ssrc.mit.edu/wp-content/uploads/2020/10/Social-Contract-for-the-AI-Age.pdf.
[77] Patricia Engler, “Rousseau’s Social Contract: How a False Doctrine Inspired Totalitarianism,” Answers in Genesis, September 16, 2022, https://answersingenesis.org/blogs/patricia-engler/rousseau-social-contract-totalitarianism/.
[78] “Social Contract for the AI Age,” 6.
[79] “Social Contract for the AI Age,” 4–5.
[80] Goertzel, Goertzel, and Goertzel, “The Global Brain and the Emerging Economy of Abundance,” 65.
Patricia Engler, "AI and Human Futures: What Should Christians Think?" Dignitas 30, no. 4 (2023): 3–9, www.cbhd.org/dignitas-articles/ai-and-human-futures-what-should-christians-think.