This series is an attempt to pay homage to some particularly amazing blog posts I’ve read, which had a profound impact on me. Here’s the first one.

In no particular order:

### 3. Eliezer Yudkowsky’s “0 And 1 Are Not Probabilities” on LessWrong

One version of the Third Law of Thermodynamics is that no system can reach absolute zero (0 kelvin) in a finite number of cooling steps. If something is warm, it’s relatively easy to cool it a little; once it’s cooler, it becomes harder to cool it further. Repeated cooling steps have successively smaller effects on the temperature, so you can approach absolute zero but never actually reach it.

A simple example: you start at a temperature of 4 kelvin and apply “50 points” of cooling (the units are arbitrary here). In the diagram below, this first cooling step takes your temperature down to 1 kelvin:

Now you are at 1 kelvin, and you apply another cooling step of the same size, but this time it only takes you down to 0.4 kelvin:

…a much smaller effect. You apply a third step and only get down to about 0.2 kelvin:

And so on. The point is that you can’t reach 0 K in a finite number of steps; your temperature asymptotes toward absolute zero but never arrives.
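This asymptotic behavior is easy to simulate. A minimal sketch, assuming each fixed-size cooling step removes half of the remaining temperature (the halving factor is an illustrative assumption, not the exact numbers in the diagrams above):

```python
# Toy model of the Third Law analogy: each fixed "dose" of cooling
# halves the remaining temperature. The halving factor is an
# illustrative assumption; the asymptotic behavior is the point.
temperature = 4.0  # kelvin

for step in range(1, 11):
    temperature /= 2
    print(f"after step {step}: {temperature:.6f} K")

# The temperature shrinks geometrically: it gets arbitrarily close
# to 0 K, but no finite number of steps ever reaches it.
```

However many steps you run, `temperature` stays strictly positive; only the limit of infinitely many steps is 0 K.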

The cooling analogy is mine, not from the post, but part of Eliezer’s point is that epistemology works the same way. Holding a belief means holding some level of credence in it. Say you start at 80% confidence. You then discover some good, independent evidence for the claim, and your confidence rises to 90%. You find further independent evidence; if it is as strong as the first batch, you don’t rise to 100%, but to something more like 95%. More evidence of the same strength takes you to 97.5%. Then 98.75%. And so on. You can’t prove the view; you can only asymptote toward 100% without ever reaching it.
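The updating pattern just described — each equally strong, independent piece of evidence halving the remaining gap to certainty — can be sketched directly (the halving rule is this post’s illustration, not a general law of Bayesian updating):

```python
# Each equally strong piece of independent evidence halves the
# remaining gap between current confidence and 100%.
confidence = 0.80

for piece in range(1, 6):
    confidence += (1.0 - confidence) / 2
    print(f"after evidence #{piece}: {confidence:.4%}")

# Prints 90%, 95%, 97.5%, 98.75%, 99.375%: confidence climbs
# toward 100% but never reaches it.
```

The gap to certainty shrinks by half each time, so it is always positive: certainty is the limit, never a destination.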

The same holds in the negative direction. Your confidence might start at 80%, then drop to 40%, then 20%, then 10%, and so on. But you can’t reach 0% confidence in a finite number of steps; that would require infinite counter-evidence.
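Eliezer’s post makes this precise with odds: probabilities of 0 and 1 correspond to odds of zero and infinity, so any finite run of evidence leaves you strictly between them. A hedged sketch of standard odds-form Bayesian updating (the 10:1 likelihood ratio per piece of counter-evidence is an arbitrary choice of mine):

```python
def update(probability, likelihood_ratio):
    """One Bayesian update in odds form:
    posterior odds = prior odds * likelihood ratio."""
    odds = probability / (1.0 - probability)
    odds *= likelihood_ratio
    return odds / (1.0 + odds)

p = 0.80
for _ in range(4):
    p = update(p, 0.1)  # each piece of counter-evidence is 10:1 against
    print(f"{p:.6f}")

# Each update divides the odds by 10, so the probability plummets
# (roughly 28.6%, 3.8%, 0.4%, 0.04%) yet stays strictly above zero;
# hitting exactly 0% would take infinitely many such updates.
```

Multiplying odds by a finite factor can never turn a positive number into zero, which is the arithmetic heart of the post’s title.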


But what about *impossible* things, you ask? Like, “X and Not X” cannot be true, therefore it has 0% probability, right? I deny this as well; you might be an insane person in an asylum who cannot understand the meaning of anything you are thinking about. In that case, where you are so confused that nothing in the world makes sense, you cannot be certain that “X and Not X” fails to hold. How do you know you’re not insane? Do you have infinite evidence for *that* view?

My reading of this post was the moment I truly came to terms with Bayesian reasoning. Certainty is not just outlandish; it’s actually impossible. You can be wrong about anything. It’s probabilities all the way down.

(A corollary: the more confident you are, the *more* damage a piece of negative evidence should do to your worldview.)
