Random variables have nothing to do with how probability is distributed across events; that's done in the definition of the probability space.
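
Spelling that out in standard measure-theory notation (not from any of the docs): the measure P on the probability space carries all of the probability, and a random variable only inherits a distribution from it as a pushforward.

```latex
% Probability space: P alone distributes probability over the events in \mathcal{F}.
(\Omega, \mathcal{F}, P), \qquad P : \mathcal{F} \to [0, 1]
% Random variable: just a measurable map; its distribution is inherited from P.
X : \Omega \to \mathbb{R}, \qquad P_X(B) = P\!\left(X^{-1}(B)\right) = P\{\omega \in \Omega : X(\omega) \in B\}
```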

Research

Trying to understand the standards doc methodology and answer the question: given some function, what type of uncertainty analysis must we do on it, and what justifies that choice?

I think Helton would say that the uncertainty analysis performed in the standards doc determines moments of the CDF that would, in full, constitute a ‘complete’ analysis.

Uncertainty propagation with the ‘method of moments’ (Morgan).

Reading through Morgan now, which seems like a more introductory version of the standards. Hopefully it can provide a link between all three documents.

A first-order approximation is what's done in the standards doc: effectively a Gaussian where the first moment of y is taken. The expected deviation of y from its base case is zero, so the base x's expectations can be used to estimate the expectation of the output.
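
Writing the first-order expansion out makes that explicit (notation mine; x_0 is the base case, taken equal to the input expectations):

```latex
y = f(x) \approx f(x_0) + \nabla f(x_0)^{\top}(x - x_0)
\qquad\Rightarrow\qquad
\mathbb{E}[y] \approx f(x_0) + \nabla f(x_0)^{\top}\,\mathbb{E}[x - x_0] = f(x_0),
\quad \text{since } \mathbb{E}[x - x_0] = 0.
```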

  • I don’t know where the Taylor series fits into things yet.
  • I know that what I consider a sensitivity analysis is the most basic uncertainty approximation.
  • Then we go to the standards uncertainty approximation, which tries to incorporate the uncertainty of the input to get a combined importance (see the sketch after this list).
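
A minimal sketch of that distinction, using a made-up two-input model (the function, base case and sigmas are mine, not from either doc): plain sensitivity ranks inputs by the local derivative alone, while the standards-style approximation weights each derivative by that input's uncertainty, and the weighted terms add in quadrature to a first-order standard deviation for the output.

```python
import numpy as np

# Hypothetical two-input model; a stand-in, not a function from the standards doc.
def f(x):
    return x[0] * x[1] + x[0] ** 2

x0 = np.array([2.0, 3.0])       # base-case inputs (taken as the input expectations)
sigma = np.array([0.05, 0.50])  # assumed standard deviations, independent inputs

# Local sensitivities: central-difference estimate of df/dx_i at the base case.
eps = 1e-6
sens = np.array([
    (f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
    for e in np.eye(len(x0))
])

# Plain sensitivity analysis ranks inputs by |df/dx_i| alone; the combined
# importance scales each derivative by that input's uncertainty instead.
combined = np.abs(sens) * sigma

# First-order output standard deviation for independent inputs: quadrature sum.
sigma_y = np.sqrt(np.sum(combined ** 2))

print("sensitivities      :", sens)      # [7. 2.]    -> x1 looks dominant
print("combined importance:", combined)  # [0.35 1. ] -> x2 actually dominates
print("sigma_y (1st order):", sigma_y)
```

The point of the toy numbers is that the two rankings disagree: x1 has the larger derivative, but x2 carries enough input uncertainty to dominate the output uncertainty.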

What's interesting here is whether or not one needs to understand the function y. In my case, it's feasible to understand all functions y, but what if we take the default of not wanting to understand them?

(Morgan) has a nice visual of creating a ‘decision’ tree for inputs: it's a sequential tree that represents a combination of each of the inputs, say using their middle, upper and lower bounds. Each leaf can then be assigned a probability, and the CDF of Y can be constructed from the (value, probability) pairs at the leaves. The MC method is this on steroids: we create loads of paths, and each y value is weighted by the share of paths that produced it.
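
A minimal sketch of both ideas with made-up inputs and a made-up model (none of this is Morgan's actual example): the tree enumerates a low/mid/high branch per input and assigns each leaf the product of its branch probabilities, while the Monte Carlo version samples many paths and gives every resulting y an equal weight of 1/N in the empirical CDF. For comparability the MC here resamples the same discrete branches; in practice it would sample the full input distributions.

```python
import itertools
import numpy as np

# Hypothetical model and inputs; stand-ins, not Morgan's example.
def f(x1, x2):
    return x1 * x2

# Each input discretised to (low, mid, high) values with branch probabilities.
branches = {
    "x1": ([1.0, 2.0, 3.0], [0.25, 0.50, 0.25]),
    "x2": ([4.0, 5.0, 6.0], [0.30, 0.40, 0.30]),
}

# --- Probability tree: one leaf per combination of branches. ---
leaves = []
for (v1, p1), (v2, p2) in itertools.product(
    zip(*branches["x1"]), zip(*branches["x2"])
):
    leaves.append((f(v1, v2), p1 * p2))   # (y value, leaf probability)

leaves.sort()                             # CDF of Y from the leaf (value, probability) pairs
ys, ps = zip(*leaves)
tree_cdf = np.cumsum(ps)

# --- Monte Carlo: the same construction 'on steroids'. ---
rng = np.random.default_rng(0)
n = 10_000
x1 = rng.choice(branches["x1"][0], size=n, p=branches["x1"][1])
x2 = rng.choice(branches["x2"][0], size=n, p=branches["x2"][1])
y = np.sort(f(x1, x2))
mc_cdf = np.arange(1, n + 1) / n          # every path carries weight 1/N

print(list(zip(ys, tree_cdf)))            # exact discrete CDF from the 9 leaves
print(y[n // 2], mc_cdf[n // 2])          # MC sample median sits near CDF = 0.5
```

With more inputs or continuous distributions the tree blows up combinatorially, which is where the equal-weight MC paths earn their keep.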

Morgan also shows a result that CDFs can mislead. Some good stuff on visualisation.