Read a good article yesterday on how Bayesian thinking lets you first create a model, no matter how shitty, and then condition it on data. It currently seems to me that the classical approach ‘constructs’ the model from the data, in that the data shapes the model. The Bayesian approach starts with a model, and the data then assigns it a measure of ‘plausibility’ as a generator of that data.

What you need for Bayes’ theorem is the joint probability distribution of the data and the parameter.
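A minimal sketch of the idea, using a made-up coin-flip example (the coin, the counts, and the grid are all assumptions for illustration, not from the article): start with a prior over the parameter, form the joint of data and parameter as likelihood times prior, and normalise to condition on the observed data.

```python
import numpy as np

# Hypothetical example: infer a coin's bias theta from flips.
# Grid of candidate parameter values (the model, however rough).
theta = np.linspace(0.01, 0.99, 99)
prior = np.ones_like(theta) / len(theta)  # uniform prior over the grid

# Made-up observed data: 7 heads in 10 flips.
heads, flips = 7, 10

# Joint probability of data and parameter: likelihood * prior.
likelihood = theta**heads * (1 - theta)**(flips - heads)
joint = likelihood * prior

# Bayes' theorem: posterior = joint / evidence (normalising constant).
posterior = joint / joint.sum()

# The data re-weights each candidate value by its plausibility
# as a generator of the data; the mode lands near 0.7 here.
print(theta[np.argmax(posterior)])
```

The point of the sketch is the last two steps: the joint is all you need, and conditioning is just normalisation over it.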

Research

I need to establish the ‘outcome’ of the LCA under study: what is the impact score, what is the functional unit, etc.?

The bridge to LCA uncertainty specifically concentrates on LCI (life cycle inventory) uncertainty, which overlaps with model uncertainty.

It might be useful, any time I’m reading a paper, to note where it fits into the structure.

Hamming’s notion of keeping your door open, despite how it affects one’s own work, might be a path to ‘great’ work. It’s the long-run payoff that matters.