Trying not to get too carried away with the maths, and to stay orientated around what is required for uncertainty analysis.
Take the filtration process as the simplest level: it’s a function with n inputs.
The domain of this function is a set of vectors x = (x1, …, xn). The inputs can be seen as points in n-dimensional space, and the function f maps from that space to the real number line.
A function is a mapping between sets.
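To make the abstraction concrete, here is a minimal sketch in Python; the model body and the input names (flow_rate, pore_size) are hypothetical stand-ins, since I haven’t written down the real filtration function yet:

```python
# A minimal sketch: the measurement model as a plain function f: R^n -> R.
# The body and the input names are hypothetical, not the real filtration model.

def filtration(flow_rate: float, pore_size: float) -> float:
    """Maps a point in 2-dimensional input space to one point on the real line."""
    return flow_rate / pore_size  # placeholder relationship

y = filtration(2.5, 0.8)  # one vector in the domain -> one real-valued output
```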
The problem
What is the unknown? Expressing, numerically, the confidence we have in the ‘filtration’ function’s output: how close the output value is to the ‘true’ value, where the ‘true’ value is some Platonic ideal. The standards doc addresses this notion of truth in Annex D.
What are the data? The input variables and their distributions. What are the conditions?
When we calculate a value for the filtration process, there is still doubt about how well that value actually represents its ‘true’ value.
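If the standards doc is the GUM, the number it puts on that doubt is the combined standard uncertainty, from the first-order law of propagation of uncertainty (assuming independent inputs):

$$ u_c^2(y) = \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^2 u^2(x_i) $$

where y = f(x_1, …, x_n) and u(x_i) is the standard uncertainty of each input.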
How does all this relate to ‘noisy’ measurements and ‘derived distributions’?
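One way to connect them: treat each noisy measurement as a random variable, push samples through f, and the resulting output samples are the ‘derived distribution’. A minimal sketch, reusing the hypothetical filtration function above, with made-up means and standard deviations for the inputs:

```python
import random
import statistics

def filtration(flow_rate: float, pore_size: float) -> float:
    return flow_rate / pore_size  # same hypothetical model as above

# Noisy measurements: each input is a distribution, not a single number.
# The means and standard deviations here are invented for illustration.
samples = [
    filtration(random.gauss(2.5, 0.1), random.gauss(0.8, 0.05))
    for _ in range(100_000)
]

# The output samples are the ‘derived distribution’ of the model output;
# its spread is a numerical statement of the doubt described above.
mean = statistics.fmean(samples)
std = statistics.stdev(samples)
print(f"output ≈ {mean:.3f} with standard uncertainty ≈ {std:.3f}")
```

This is essentially the Monte Carlo propagation of distributions described in GUM Supplement 1 (again assuming the standards doc is the GUM).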
I think what I’m most confused by at the moment is the categorisation of errors.