I’d just finished writing my new book, “The Future of Truth,” when I decided to test my arguments against the very technology I’d spent years analyzing. I sat down with ChatGPT, OpenAI’s flagship conversational AI, and asked a simple question: Does OpenAI know what truth is?
What followed was less an interview than an interrogation. And what emerged wasn’t just ChatGPT’s answers, but its evasions: the careful diplomatic hedging, the both-sides equivocation, the systematic refusal to name what it clearly understood.
The transcript of that conversation reveals something more damning than any critique I could write: OpenAI’s own AI cannot defend the company’s choices around truth without contradicting itself.
