Deepfaking Thought: How LLMs Emulate Reasoning


Joscha Bach offers a provocative take on how large language models work in his conversation with Lex Fridman (1:13:27).

"They are basically brute forcing the problem of thought. By training this thing with looking at instances where people have thought and then trying to deepfake that. If you have enough data, the deepfake becomes indistinguishable from the actual phenomenon, and in many circumstances, it's going to be identical."

The profound question: can you deepfake it until you make it?

But here's the uncertainty: when an LLM solves a reasoning task, it's hard to tell whether it's emulating a reasoning strategy it saw in training data or actually inferring something new. And Bach notes that humans might work the same way:

"In many ways, people when they perform reasoning are emulating what other people wrote about reasoning, right?"

His proposal for improving LLM reasoning is fascinating: increase the sampling temperature (making the output more varied and exploratory) and pair it with a prover that filters viable solutions from the nonsense, essentially mimicking how human creative thinking works. A minimal sketch of that loop follows.
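To make the generate-and-filter idea concrete, here is a minimal sketch of that loop, not Bach's actual method. The names `sample_candidate` and `verify` are hypothetical stand-ins: in a real system, the sampler would be an LLM queried at high temperature and the prover would be a domain-specific checker. Here the "task" is just finding roots of a quadratic by noisy guessing.

```python
import random

def sample_candidate(rng: random.Random) -> int:
    """Stand-in for a high-temperature LLM sample: a high-variance random guess."""
    return rng.randint(-100, 100)

def verify(candidate: int) -> bool:
    """Stand-in prover: accept only candidates satisfying x**2 - 5*x + 6 == 0."""
    return candidate ** 2 - 5 * candidate + 6 == 0

def propose_and_prove(num_samples: int, seed: int = 0) -> list[int]:
    """Draw many noisy candidates, keep only those the prover accepts."""
    rng = random.Random(seed)
    candidates = (sample_candidate(rng) for _ in range(num_samples))
    return [c for c in candidates if verify(c)]

if __name__ == "__main__":
    # With enough high-variance samples, the verified solutions (2 and 3) surface.
    print(sorted(set(propose_and_prove(num_samples=10_000))))
```

The division of labor is the point: the sampler is allowed to be mostly wrong, and the prover does the discriminating, which is why raising the temperature helps rather than hurts.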

On a separate note, Bach offers a striking definition of free will:

"The opposite of free will is not determinism, it's compulsion."

This reframes the entire free will debate. The question isn't whether our actions are determined, but whether we're compelled to act against our goals and values. Addiction is the loss of free will not because it's deterministic, but because it's compulsive.

And on emotions: "I don't want to have the best possible emotions, I want to have the most appropriate emotions." It's not about maximizing pleasure—it's about having responses that serve your actual goals.
