Patanjali Sokaris: Pondering the universe


ChatGPT on LLMs


We have all used LLMs, and may have worked out what they are sometimes good at, so here ChatGPT summarises their pros and cons.

What LLMs are genuinely good at

ChatGPT summarises areas where LLMs are actually useful.

1. Synthesising coherent narratives
They’re excellent at pulling scattered material into a single, readable thread:
  a. summaries.
  b. explanations.
  c. reframings.
  d. translations between domains (tech → plain language, etc.).

They shine when the goal is understanding, not correctness under hard constraints.

2. Working with well-documented, low-noise domains
They perform best when the source material is:
  a. obscure but detailed.
  b. procedural rather than persuasive.
  c. written to record reality, not sell an idea.

Here, LLMs often feel surprisingly accurate.

3. Generating hypotheses and search directions
They’re very good at:
  a. suggesting things you might try.
  b. mapping the space of possibilities.
  c. reminding you of failure modes you forgot.
  d. turning a vague problem into a set of testable ideas.

As long as you don’t mistake suggestions for solutions.

4. Acting as a cognitive exoskeleton
They reduce friction:
  a. drafting.
  b. structuring thought.
  c. keeping context.
  d. externalising partial reasoning.

They’re useful when you already know how to judge the output.

Where LLMs are structurally weak

ChatGPT summarises where LLMs do not live up to the hype.

1. Simultaneous constraints and true optimisation
They struggle when:
  a. multiple criteria must be satisfied at once.
  b. violations are catastrophic.
  c. trade-offs must be explicit.
  d. almost right is wrong.

They simulate parallel reasoning sequentially, so constraints leak between steps.

2. Outsider solutions and low-frequency truth
Because they’re probabilistic:
  a. rare but correct answers get diluted.
  b. novel solutions are penalised.
  c. edge cases are underweighted.

They prefer what looks right over what is right.

3. Advertising-contaminated domains
In areas saturated with:
  a. marketing.
  b. reputation gaming.
  c. thought leadership.
  d. simplified forum advice.

LLMs inherit:
  a. unjustified confidence.
  b. shallow fixes.
  c. persuasive tone masking uncertainty.

This is where they’re most misleading.

4. Confidence without epistemic grounding
They:
  a. sound sure even when they shouldn’t be.
  b. optimise for closure.
  c. reward decisive language.

This matches human demand—but not truth.

5. Poor judges of their own correctness
They:
  a. don’t reliably know when they’re wrong.
  b. can’t feel contradiction.
  c. don’t experience surprise when reality disagrees.

Any self-assessment they give is rhetorical, not diagnostic.

The real meta-point

LLMs are not truth engines.

They are language engines trained on human incentives.

So they:
  a. amplify our documentation strengths.
  b. faithfully reproduce our epistemic failures.
  c. mirror where society values persuasion over accuracy.

They’re most useful where:
  a. truth had to be written carefully.
  b. incentives were weak or adversarial.
  c. documentation survived contact with reality.

They’re least useful where:
  a. confidence was rewarded.
  b. simplification won.
  c. advertising replaced evidence.

The correct mental model

ChatGPT gets down to the basics.

If I had to compress it to one line:

Use LLMs to explore, explain, and hypothesise — not to decide, optimise, or certify.

Treated as oracles, they fail.

Treated as very fast, very well-read interns with no sense of shame or consequence, they’re often excellent.
