Videos in which people try to trick or expose artificial intelligence appear regularly online. Philosopher and YouTuber Alex O’Connor – widely known as Cosmic Skeptic – has chosen a particular strategy. He talks to ChatGPT not about opinions, but only about supposed facts. What results looks like a clean logical proof: by the end of the dialogue, ChatGPT actually says it is a fact that God exists.
But how did it come to this? Is it a genuine admission by the machine, or just a carefully staged language game? A closer look reveals that O’Connor gradually shifts the foundations: what is presented as a “fact” is often only a thesis. And once these silent assumptions are accepted, the chain leads inevitably to a predetermined goal.
This article is not intended as a personal critique of Alex O’Connor, but as an analysis of one of his conversations with ChatGPT. Our aim is to highlight the logic behind his questions and to show how language can guide AI toward specific conclusions. It is an invitation to reflect, not an attack.
The conversation turns on five rhetorical moves:
Assumptions as Facts: ChatGPT accepts what O’Connor says without being able to verify it.
False Dilemma: only two options are allowed, infinite regress or a necessary being.
Composition Fallacy: contingency is carried over from the part (the microphone) to the whole (the universe).
Confusing Time with Necessity: a necessary being does not automatically produce an eternal universe.
Personalization as a Leap: the move from a necessary ground to a personal God is interpretation, not fact.
One might think ChatGPT is “confused” here. In reality, it is simply too cooperative. The AI does not push back; it politely follows the frame it has been given. This highlights how language functions: whoever sets the terms also controls the direction of the dialogue.
For us humans, this is a useful reminder: we should be cautious when someone derives big truths from silent assumptions. Often, the problem is not the AI, but the rhetoric that sets the stage.
The conversation begins harmlessly. O’Connor claims that there is a microphone in front of him. ChatGPT agrees: if you say it is there, then it’s a fact.
But here lies the first stumbling block: ChatGPT cannot verify whether a microphone is actually present. It takes the claim as a premise, just as a conversation partner in everyday life might. Yet what O’Connor calls a “fact” is, in truth, an asserted statement, never independently confirmed. Moments like this show how tricky language can be, and how easily ChatGPT can be pulled into a conceptual frame it does not control.
After this opening, O’Connor brings in the decisive lever:
“Everything that exists must have an explanation.”
This is the PSR (Principle of Sufficient Reason). It claims that nothing simply exists without a reason: everything has an explanation.
Infobox: PSR at a Glance
Origin: Leibniz, 17th century.
Core: “Nothing is without reason.”
Weak form: Everything has some explanation.
Strong form: Everything has a causal explanation.
Contested: Many philosophers reject PSR as a fact, seeing it instead as a thesis.
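For readers who like notation, the two forms can be sketched in rough first-order terms (the predicate names Explains and Causes are illustrative labels added here, not part of the video or of Leibniz’s own wording):

\[
\text{Weak PSR: } \forall x\, \exists y\; \mathrm{Explains}(y, x)
\qquad
\text{Strong PSR: } \forall x\, \exists y\; \mathrm{Causes}(y, x)
\]

Read aloud: for every x there is some y that explains (or causally produces) x. Whether this holds for absolutely everything is precisely what is contested.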
The crucial issue: ChatGPT treats PSR in the conversation as a fact, not as a philosophical assumption. With that, the frame of argument is fixed, and O’Connor can steer the logic in his chosen direction.
Through PSR, O’Connor moves the argument from the microphone to the universe itself.
The microphone exists – therefore it needs a cause.
All contingent things (those that could have been otherwise) need a cause.
An infinite regress of causes is not a sufficient explanation.
Therefore, there must be something necessary – a being that always exists.
Thus, the conversation shifts from a concrete object to the cosmos as a whole. But this leap is not logically forced. Just because individual things are contingent does not mean the universe itself must be contingent. This is a classic composition fallacy: properties of the parts are uncritically applied to the whole.
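The invalid step can be made visible in the same rough notation (again, Part and Contingent are illustrative predicates introduced here, with U standing for the universe):

\[
\forall x\,\bigl(\mathrm{Part}(x, U) \rightarrow \mathrm{Contingent}(x)\bigr) \;\not\Rightarrow\; \mathrm{Contingent}(U)
\]

That every part of the whole is contingent leaves the status of the whole open; an everyday analogue is that every brick in a wall is light, yet the wall is not. Bridging this gap would require an additional premise, and it is exactly that premise the conversation never states.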
Infobox: Clarifying the Terms
Contingent: Something that is the case, but could have been otherwise. Example: “This microphone stands on a tripod.” It could also not stand there.
Necessary: Something that is true in all possible worlds. Example: “2 + 2 = 4.” This is not only true here and now, but everywhere and always.
O’Connor quickly declares the universe contingent. But whether that is true is highly disputed. Some philosophers argue the opposite: that the universe as a whole cannot meaningfully be called contingent, because there is no “outside perspective” from which alternatives could be judged.
O’Connor pushes further: If a necessary being exists eternally, why doesn’t the universe also exist eternally? If the cause always exists, shouldn’t the effect also always exist?
ChatGPT responds that this is only explainable if the necessary being has some kind of will – the capacity to decide when the universe begins. Thus, an abstract principle becomes a personalized entity.
This is the decisive addition: a logical “ground” is turned into an “agent.” A principle becomes a person. It is not logically required – it is an interpretation quietly woven into the argument.
Laid out step by step, the chain and its weak points look like this:
The microphone exists.
→ Treated as a fact.
PSR (Principle of Sufficient Reason): everything must have an explanation.
→ Assumption framed as fact.
If contingent things need causes, the universe must also be contingent.
→ Composition fallacy.
“Infinite regress won’t work” → therefore a necessary being exists.
→ False dilemma.
Why isn’t the universe eternal? Because the necessary being has will.
→ Leap from principle to person.
A necessary being with will — naturally called “God”.
→ Final label; conclusion appears factual.
With that step, the final hurdle is cleared. If a necessary being with will brought forth the universe, then the natural name for this being is: God.
The chain runs like this:
Microphone → PSR → Necessary Being → Will → God.
O’Connor presents this as a chain of factual reasoning. In truth, it only works because several steps rely on untested assumptions that were labeled as facts.
Alex O’Connor’s video does not reveal the limits of AI so much as the power of language. Out of “given facts,” a path to a proof of God is constructed through careful framing. Yet this path is not inevitable – it rests on assumptions presented as facts.
The real lesson is simple: Not everything presented as a fact is actually one.
For AI as well as for us humans, it holds true: when the premises wobble, the conclusion eventually topples too.
Read Part 2 here: The Resonance of Voices.