Will Truth Survive AI? Insights from the Front Lines of the Debate

Last night in San Francisco, I attended one of the most intellectually charged events I’ve experienced in recent memory—a live debate titled “Will Truth Survive Artificial Intelligence?” Hosted by journalist Bari Weiss and featuring a powerhouse panel of thinkers, the discussion was part philosophical inquiry, part societal reckoning, and entirely timely.

With AI now mediating everything from how we learn to what we see online, this question isn’t just academic. It strikes at the heart of what it means to be informed, to be human—and to seek truth in a world increasingly shaped by algorithms.

The Debaters: Four Views from the Cutting Edge 

Affirmative (Truth Will Survive AI):

· Aravind Srinivas, CEO of Perplexity AI

· Fei-Fei Li, Stanford professor and AI pioneer, often called the “godmother of AI”

Negative (Truth Is at Risk):

· Jaron Lanier, VR pioneer, computer scientist, and longtime tech critic

· Nicholas Carr, Pulitzer Prize finalist and author of The Shallows

The initial audience vote showed strong optimism: 68% believed truth would survive AI. But by the end of the evening, that confidence had eroded—only 45% still agreed. What changed? Let’s explore the contours of the debate.

Truth Is Not a Downloadable File. Nicholas Carr opened with a sobering idea: truth is a social ideal, not a product we consume or a fact we retrieve. In a world where AI offers instant answers, that ideal may be slipping away. He warned that AI tools risk short-circuiting the process by which we arrive at understanding:

“By automating learning, we lose learning.” 

Carr challenged the idea that faster access equals deeper knowledge. He pointed to how students now use AI to generate answers without engaging in synthesis, reflection, or struggle—skills essential to genuine learning and critical thinking. In his view, AI produces consensus, not contemplation.

It’s not that AI lies outright (though it can). It’s that it discourages the process of truth-seeking, which is messy, nonlinear, and deeply human.

AI as an Amplifier of Human Curiosity. Aravind Srinivas countered with a vision of AI as a democratizing force. For him, tools like Perplexity AI aren’t replacing thought but expanding access to knowledge, especially for those who haven’t had elite educational opportunities.

“The democratization of knowledge will help people shine. The only limit is how we channel our curiosity.”

He shared how he used AI to prepare for this very debate—reviewing the writings of his opponents to better understand and challenge their viewpoints. In his view, curious minds won’t be erased by AI—they’ll be empowered.

His challenge to educators: if AI makes traditional assignments obsolete, we must rethink how we teach. The age of the regurgitated five-paragraph essay may be behind us.

The Soul in the Machine—or Not. Fei-Fei Li brought hopeful realism to the conversation. She reminded us that: 

“There are no independent machine values, only human values.”

AI, she argued, is not destiny. It’s a tool—a reflection of human intention. The most important variable isn’t the model, but the motivation and agency of the person using it. When learners engage with curiosity and purpose, AI can be an extraordinary partner.

Still, she acknowledged the risks. Job displacement, particularly in white-collar sectors, is real. And while open-source tools may offer a counterbalance to Big Tech’s influence, they aren’t a silver bullet.

The Business Model Is the Message. Jaron Lanier didn’t mince words. He criticized the underlying culture of AI—particularly the influence-selling model prevalent in Silicon Valley. In his view, the Turing Test (which asks whether a machine can fool us into thinking it's human) is the wrong goal. The ease of deception isn’t a triumph—it’s a warning.

“We pretend people don’t exist and that AI is the Wizard of Oz.”

Lanier called for a shift from synthetic convenience to data dignity—recognizing and rewarding the people whose work, ideas, and language train these systems in the first place.

He also sounded the alarm about power concentration, drawing a straight line from social media’s unintended consequences to the potential of AI to do far worse. “We need a business model that doesn’t concentrate power so much,” he said, “because rapid creation of super-monopolies could undermine democracy.”

The Moderator’s Mic Drop: Wisdom Beyond the Machine. Bari Weiss brought a profound and poetic dimension to the discussion. Citing Job 28, she reminded us that wisdom is not easily mined, even by the smartest tools:

“Where can wisdom be found?
It cannot be found in the land of the living.
…God alone knows where it dwells.”

In an age where AI is often described as the smartest entity in the room, this was a necessary recalibration. Truth isn’t just a matter of computation—it’s a moral and spiritual pursuit. 

So, Will Truth Survive AI?

Maybe the better question is: Will we, as a society, retain the will to seek truth in the presence of such powerful simulacra of it?

AI can’t save or destroy truth on its own. It reflects us—our values, our intentions, and our business models. Whether it becomes a tool of liberation or manipulation depends entirely on how we build, regulate, and use it. In the end, Fei-Fei Li put it best:

“I place my hope in people, not in AI.”

So do I.
