@gwern @rationality
Reading https://gwern.net/fake-journal-club
The usefulness of a model is not in what it can explain, but in what it can't: a hypothesis that forbids nothing permits everything, and thereby fails to constrain anticipation.
Gwern talks about the need to get into the mode of expecting things. From an information point of view, if you have no expectations for what should or shouldn't happen, you can't really determine how much information you're getting from the paper.
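Concretely (my own worked sketch, not from the essay): information as surprisal, where an outcome carries log2(1/P) bits, so a result you already expected with certainty tells you nothing new.

```python
# My own worked sketch, not from the essay: information as surprisal.
# An outcome carries log2(1/P) bits, so a result you already expected
# with certainty (P = 1) tells you nothing new.
import math

for p in (1.0, 0.5, 0.1):
    print(f"P = {p:.1f} -> {math.log2(1 / p):.2f} bits")

# P = 1.0 -> 0.00 bits
# P = 0.5 -> 1.00 bits
# P = 0.1 -> 3.32 bits
```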
- If you're equally good at explaining any outcome, you have zero knowledge. If your current model can explain a fictional story as well as it explains reality, does that model really contain information? Yudkowsky noticed discomfort in fitting a story to his current model, a kind of overfitting of what he knew to the story. In his view, this was a sign that something was wrong, that his model actually couldn't explain the story, but he ignored it. It was easier to fit the story to the model than to accept the model might be wrong.
- There is a general feeling when you're reading a book that you're in an echo chamber: there's no counterpoint but your own head.
- Experts can often be less accurate than a model that knows nothing, because all their knowledge gets jumbled up in their predictive behaviour. For example: if your friend knows loads about Korea, has been there a few times, etc., and tells you there's a 90% chance they'll test nuclear weapons next month, and that's a priori unlikely, you should be confused by the jump. If you had no knowledge and just guessed 50% here, you'd probably do better (see the sketch after the quote below).
A better framing: 'Since North Korea only tests once a decade, what evidence do I have that should push it above my default guess of 10%? What unexpected events happened or didn't happen?'
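A minimal sketch (mine, not from the essay) of why the blind 50% guess beats the confident 90% call when the base rate is roughly 10%, scoring each forecast by its expected Brier score (squared error, lower is better):

```python
# Expected Brier score of a single yes/no forecast, assuming the event
# really occurs with probability `base_rate`. Lower is better.
def expected_brier(forecast: float, base_rate: float) -> float:
    return base_rate * (1 - forecast) ** 2 + (1 - base_rate) * forecast ** 2

BASE_RATE = 0.10  # 'tests about once a decade' -> ~10% default guess

for label, p in [("base-rate 10%", 0.10), ("ignorant 50%", 0.50), ("expert 90%", 0.90)]:
    print(f"{label}: expected Brier = {expected_brier(p, BASE_RATE):.3f}")

# base-rate 10%: expected Brier = 0.090
# ignorant 50%:  expected Brier = 0.250
# expert 90%:    expected Brier = 0.730
# The confident 90% call needs genuinely new evidence to justify itself;
# without it, even the know-nothing 50% guess scores far better.
```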
- Gwern says the way to do this is calibration training: getting a sense of what it 'feels like' to give certain probabilities to things (a toy scoring sketch follows).
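Not Gwern's procedure, just a toy sketch of what checking your own calibration could look like, assuming you log each forecast as a (stated probability, did it happen) pair; the logged predictions here are made up:

```python
from collections import defaultdict

# Hypothetical logged forecasts: (stated probability, did it happen).
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False), (0.7, True),
    (0.6, False), (0.6, True),
]

# Group forecasts by stated confidence and compare with the observed hit rate.
buckets = defaultdict(list)
for prob, happened in predictions:
    buckets[prob].append(happened)

for prob in sorted(buckets):
    outcomes = buckets[prob]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"said {prob:.0%} -> right {hit_rate:.0%} of the time ({len(outcomes)} forecasts)")

# Well-calibrated means the two numbers roughly match: 'feeling like 90%'
# should cash out as being right about nine times in ten.
```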