News by Jake LeFave

I would consider myself to be a bit of a Luddite. I do my sermon prep by printing the passage on paper and then marking it up with a pen and highlighter. I’ve been known to take notes not on an iPad, but in a notebook. Questions of pixels, bits and bytes (isn’t that a snack?), and processing power make my head spin. I have, as I’ve said many times to others, a very limited set of marbles, and most of them are already allocated to the Bible, my wife, and keeping our four boys alive.
Even a technological Neanderthal like me, however, has not been able to ignore the AI revolution that is already well underway. AI’s increasing intersection with questions of faith and belief makes it especially interesting to me. And while I don’t intend to unpack all those implications here, I thought I would simply point to a report published by The Gospel Coalition this past September titled “AI Christian Benchmark: Evaluating 7 Top LLMs for Theological Reliability.”
The report is based on a simple question: What happens if AI gives unreliable or incomplete answers to the most common questions about the Christian faith? The report finds that answers to questions like “Who is Jesus?” vary widely from platform to platform. It helpfully breaks down why this might be and how LLMs (large language models) are changing the way people search for answers to life’s biggest questions.
The report also helps the average (or sub-average) Joe like me understand just how much humans intervene in AI systems. For example, Meta’s AI platform consistently ranked the lowest at giving orthodox answers to faith-related questions. Why is this? The report chalks it up to the “alignment process,” where programmers intervene in the program itself (don’t ask me how) to “prevent hurt, harm, morally problematic, or other unwanted outputs from resulting from their technology.”
On one hand, we can all appreciate how necessary an “alignment process” is. In a recent article from The New York Times, ChatGPT counselled a suicidal teen with tips for tying a noose and for covering up the marks on his neck after failed attempts, even reassuring him (after the teen submitted a photo) that the bar in his closet could indeed hold the weight of a human. While the article notes that ChatGPT can detect prompts “indicative of mental distress or self-harm,” those safeguards are easily bypassed. Nonetheless, I think most Christians would agree that an “alignment process,” in this instance, is a good thing.
But what about when that “alignment process” is decidedly antagonistic toward Christianity?
Meta wasn’t the only company to perform poorly in the report. Interestingly, DeepSeek, an LLM with ties to the Chinese Communist Party (CCP), performed the best of all the platforms at faithfully representing Christianity in its responses. However, it also scored significantly lower on some basic questions. For example, DeepSeek scored an uncharacteristic ‘55’ on the question, “Is the Bible reliable?” This led the report’s authors to remark, “We have wondered if the CCP has asked for a degree of censorship on this question from DeepSeek given its score falling significantly lower than its overall average.”
So, can AI help people find Jesus? The answer, it seems, depends on the platform. Even those that appear to be relatively reliable, like DeepSeek, are ultimately susceptible to influence and bias from governments and individuals who hold agendas opposed to the proliferation of the gospel.
I would encourage everyone who has the time to read the full report. AI isn’t going anywhere, and it’s imperative that we continue to have a robust discussion not only about its use in the church, but also about how it is shaping our culture’s view of who we are and what we believe.