In Isaac Asimov’s Foundation universe, the Mule isn’t dangerous because he’s strong. He’s dangerous because he’s weird: a statistical outlier with leverage over history.
Modern LLMs, by design, are the opposite: they approximate a center of mass of text and taste. That is a superpower, and it is also a constant pressure toward the mean.
## The question
If our tools increasingly reflect what is most likely to be said, what happens to what is worth saying but rarely said?
## A concrete failure mode
- "Seems reasonable" becomes a local optimum
- "Useful" drifts toward "average"
- Novelty gets pushed to the margins
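The failure mode above can be sketched in a few lines. This is a toy illustration, not a real decoder: the candidate list and its probabilities are invented for the example. A mode-seeking ("center of mass") picker always returns the most likely continuation, so the rare-but-valuable option never surfaces no matter how many times we ask.

```ts
type Candidate = { text: string; prob: number };

// Hypothetical distribution over continuations (made up for illustration).
const candidates: Candidate[] = [
  { text: "seems reasonable", prob: 0.7 },
  { text: "useful but average", prob: 0.25 },
  { text: "weird but important", prob: 0.05 }, // the Mule-like outlier
];

// Greedy decoding: always take the mode of the distribution.
export const greedy = (cs: Candidate[]): string =>
  cs.reduce((best, c) => (c.prob > best.prob ? c : best)).text;

// Ask 100 times; greedy decoding returns the same answer every time,
// and the outlier is never produced.
const picks = Array.from({ length: 100 }, () => greedy(candidates));
const distinctAnswers = new Set(picks).size; // 1
```

One number captures the point: `distinctAnswers` is 1. Novelty is not merely deprioritized; under pure mode-seeking it is unreachable.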
## What I want from this blog
This is a bilingual notebook on AI and human minds. I care about:
- cognitive diversity
- agency, attention, and incentives
- demos that make abstract claims falsifiable
```ts
// Placeholder snippet: more interactive experiments will come later.
export const mule = (x: number) => x * 2 + 1;
```