Max Roser on extending our perspective with statistics
We cannot see much of the world through our direct experience. The horizon of our personal experience is very narrow. For every person you know, there are ten million people you do not know.
He states that a statistical understanding of the world should become more central to our culture.
Research
Is my job just to solve problems using computational modelling, and to narrow those problems to the energy/sustainability space? I think this is the job I'm looking for, in terms of the trio of what I'm good at, etc.
I spend so much time being unsatisfied with the abilities I have that it stops me from being curious. Fundamentally, I will be using computational methods in the future to solve problems, so I'm going to try to lean into this again.
Probabilistic computation looks interesting. I think I should probably apply to electroroute.
I'm also worried that I'm being stupid by not getting enough sleep, although I do mostly get 7 hours or so.
Monday: get an estimate of how long the work for naphta and Aron's stuff will take.
You can have many sources of uncertainty, but ultimately they are either aleatory or epistemic.
This directly applies to whether more 'data' can improve certainty. So in the research project, depending on how we classify the input uncertainties, we could (in theory) use statistical methods to improve our knowledge.
Bayesian Statistics
We have aleatoric uncertainty about the next coin toss but epistemic uncertainty about its long-run average (i.e. about \theta). Aleatoric uncertainty means that, given a \theta, we know the probability of the next observation (our uncertainty about the next observation itself). Epistemic uncertainty is our uncertainty about the value of \theta.
Observations give us information about the true value of \theta, so we need a way of updating our probability distribution for \theta given those observations.
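In symbols, this updating is just Bayes' rule: the posterior for \theta after observing data y is

p(\theta | y) = p(y | \theta) p(\theta) / p(y), i.e. p(\theta | y) \propto p(y | \theta) p(\theta)

where p(\theta) is the prior (epistemic uncertainty before the data), p(y | \theta) is the likelihood (the aleatoric model for the observations given \theta), and p(\theta | y) is the updated epistemic uncertainty.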
After the data y have been observed, we can predict an unknown observable, \bar{y}, from the same process.
For example, y may be the vector of recorded weights of an object weighed n times on a scale, with each measurement modelled as N(\theta, \sigma^2); \theta may be the unknown true weight of the object (note it is not treated as a point value); and \bar{y} may be the predicted next weight recording.
The distribution of \bar{y} is called the posterior predictive distribution.
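Written out (standard definition), the posterior predictive distribution averages the observation model over the posterior for \theta,

p(\bar{y} | y) = \int p(\bar{y} | \theta) p(\theta | y) d\theta

so it combines the aleatoric uncertainty about the next observation with the epistemic uncertainty that remains about \theta after seeing y.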
Is it always implied that you're working in the probability space of the parameter?
Frequentists do not accept that epistemic uncertainty can be described or measured by probabilities, while Bayesians are happy to use probabilities to quantify any kind of uncertainty.
Aleatory uncertainty is irreducible: the next toss of a coin is unknowable, so we have aleatory uncertainty about it. Although, do we? Epistemic uncertainty (the unknown) can be reduced with more data or observations. Does the convergence of a coin's long-run frequency to 50/50 not make it epistemic?
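A minimal sketch of that coin question, assuming a Beta(1, 1) prior and a fair coin (both assumptions of mine): as the number of tosses grows, the epistemic uncertainty about \theta (the posterior standard deviation) shrinks towards zero, but the aleatoric uncertainty about the next individual toss stays at roughly 50/50.

# Beta-Binomial updating for a coin with unknown bias theta.
# Epistemic uncertainty (posterior sd of theta) shrinks as n grows;
# aleatoric uncertainty (the outcome of the next toss) stays near 50/50.
import numpy as np

rng = np.random.default_rng(0)
true_theta = 0.5                                  # assumed fair coin
tosses = rng.random(10_000) < true_theta          # simulated tosses

for n in (10, 100, 1_000, 10_000):
    heads = int(tosses[:n].sum())
    a, b = 1 + heads, 1 + n - heads               # Beta(1, 1) prior -> Beta(a, b) posterior
    post_mean = a / (a + b)
    post_sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
    print(f"n={n:6d}  E[theta|y]={post_mean:.3f}  "
          f"sd[theta|y]={post_sd:.4f}  P(next toss = heads)={post_mean:.3f}")

One way to read this: the convergence towards 50/50 is the epistemic part being reduced by data, while the leftover unpredictability of any single toss is the aleatory part.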