How AI Is Being Framed
Every few months, a new wave of AI skepticism emerges.
The latest entry comes from Figs in Winter, which claims that ChatGPT is a sophist: a glorified bullshitter that doesn't understand truth.
The argument leans on Noam Chomsky’s critique of AI, John Searle’s Chinese Room thought experiment, and the idea that AI lacks causal reasoning, making it incapable of actual understanding.
But here’s the real problem: this entire debate is framed incorrectly from the start.
Instead of evaluating AI for what it actually does, critics keep attacking it for failing to meet expectations it was never designed for.
This article isn’t just a rebuttal; it’s a complete deconstruction of the flawed discourse surrounding AI.
The One-Question Debunk
There’s a simple way to collapse the entire premise of the article. Instead of debating every point, let’s ask just one question:
Was ChatGPT ever designed to seek truth?
The answer? No.
ChatGPT is a language model, not a truth-seeking engine. It doesn’t “believe” things. It doesn’t “know” anything in the way humans do.
It generates responses based on probabilities and patterns in its training data.
Criticizing it for “not seeking truth” is like criticizing a calculator for not solving moral dilemmas — that’s not its function.
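To make that concrete, here is a deliberately toy sketch in Python. The prompts, tokens, and probabilities below are invented for illustration only; a real model learns its distribution from enormous amounts of training data and operates over a vast vocabulary, but the core move is the same: pick the next token by sampling from a probability distribution, not by checking facts.

```python
import random

# Toy illustration of "generate from probabilities, not truth."
# These prompts and probabilities are made up for demonstration;
# a real language model derives its distribution from training data.
next_token_probs = {
    "The sky is": {"blue": 0.7, "clear": 0.2, "falling": 0.1},
    "2 + 2 =": {"4": 0.85, "5": 0.1, "22": 0.05},
}

def generate(prompt: str) -> str:
    """Sample the next token from the prompt's probability distribution."""
    dist = next_token_probs[prompt]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(generate("The sky is"))  # usually "blue", occasionally "falling"
print(generate("2 + 2 ="))     # usually "4", but sometimes "5"
```

The point is not the mechanics but the objective: the system is built to produce plausible continuations, and judging it for failing at a job (truth-seeking) it was never given is exactly the framing error at issue.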
If this one question renders the entire argument meaningless, then the premise of the article was flawed from the start.
Live Case Study: Knowledge vs. Understanding
Here’s where things get interesting. While analyzing this article, I asked ChatGPT to break down why the argument was flawed from my perspective. It did so perfectly, because it has access to all of my past insights and reasoning frameworks.
But when I asked a follow-up question, why my one question completely debunked the article, ChatGPT knew the correct answer yet didn't initially recognize why I had asked it that way. That exact moment exposed the gap between knowledge and understanding that the article was trying to describe.
Key Insight:
✅ Knowing the correct answer is not the same as understanding why it matters.
✅ AI can generate information, but it doesn’t “grasp” significance without explicit context.
✅ The article’s entire argument collapses because it misrepresents what AI is designed to do.
Ironically, this live experiment proved the article's point about AI's limitations while simultaneously disproving its conclusion that ChatGPT is deceptive.
Why the Supporting Arguments Fall Apart
Since the premise is already debunked, let’s briefly touch on the other weak arguments made in the article.
1. The Chomsky Argument — Predictable and Overused
Noam Chomsky has been skeptical of AI for decades. His critique is not groundbreaking — it’s just his long-standing position that machine learning is a “stochastic parrot” that lacks real understanding. Citing Chomsky isn’t an objective AI critique; it’s a predictable opinion.
2. The Chinese Room Fallacy — Misapplied Again
John Searle’s Chinese Room argument claims that AI just manipulates symbols without real comprehension.
But by this logic, all digital systems are meaningless because they only process data.
We don’t say a telescope “doesn’t really understand the stars” — we just accept that it enhances our ability to analyze them.
Why do we hold AI to a different standard?
3. “AI Can’t Do Science” — A Misleading Take
The article argues that AI “can’t do science” because it recognizes patterns but lacks causal reasoning.
But here’s the problem: science itself begins with pattern recognition.
AI doesn’t have to “do science” alone — it assists human discovery by revealing correlations we might miss.
Saying AI is invalid because it can’t infer causality is like saying a microscope is useless because it doesn’t form hypotheses.
4. “AI is a Sophist Because OpenAI Charges for It” — A Laughable Stretch
The article’s final argument: Sophists demand money, OpenAI charges for ChatGPT, therefore ChatGPT is a sophist.
By this logic, every book, course, or paid educational tool is also deceptive.
OpenAI is a business — charging for an AI product doesn’t mean it’s inherently manipulative.
The Bigger Problem: How AI Is Consistently Misrepresented
This article isn’t just an isolated case — it’s part of a larger trend where people:
✅ Set up false expectations for AI and then criticize it for failing to meet them.
✅ Focus on philosophical hypotheticals instead of practical applications.
✅ Frame AI as deceptive rather than recognizing it as a tool that enhances human execution.
This flawed framing doesn’t just mislead the public — it delays real conversations about how AI should be integrated into society.
Flipping the Narrative
Let’s be clear: AI is not a philosopher, a scientist, or a truth-seeker. It’s a tool.
The real issue isn’t whether AI “understands” like humans do.
The issue is that people keep framing AI incorrectly, then use that flawed framing to push misleading arguments.
Instead of asking, “Can AI think?” or “Can AI seek truth?” we should be asking:
How can we best use AI for what it actually does well?
If we shift the conversation away from philosophical distractions and focus on real-world applications, we can finally move beyond this cycle of misinformation and bad discourse.
📌 Follow & Connect for More AI Strategy & Critical Analysis
🔹 Substack: Master Planner 25
🔹 LinkedIn: Shawn Knight
🔹 Twitter (X): @shawnknigh865
🔹 Facebook: Shawn Knight
🔹 Read more of the 2025 ChatGPT Case Study Series By Shawn Knight on Medium: Master Plan Infinite Weave
🔥 2025 ChatGPT Case Study Series Review (Deep Research)
Sources for This Discussion
Original Article Being Addressed
- ChatGPT: The Ultimate Sophist — A critical take on AI discourse that this article responds to.
Debunking AI Fear Mongering & Overhyped AI Claims
🔥 AI: A Sophist or a Tool? What do you think: is AI leading discourse astray, or are we just failing to use it properly? Drop your take in the comments & share this with someone who's navigating the AI debate!
🚀 Follow for more AI critical analysis, business execution frameworks, and truth vs. hype discussions.