It must be known to us that the existence of some one sort of a thing, A, is a sign of the existence of some other sort of thing, B, either at the same time as A or at some earlier or later time, as, for example, thunder is a sign of the earlier existence of lightning. If this were not known to us, we could never extend our knowledge beyond the sphere of our private experience;

The Problems of Philosophy, Russell.

and

The fundamental principle in the analysis of propositions containing descriptions is this: Every proposition which we can understand must be composed wholly of constituents with which we are acquainted.

Not too sure what acquainted means yet.

Sense perception is ultimately an interaction between a physical object and a spectator.

This colour is not something which is inherent in the table, but something depending upon the table and the spectator and the way the light falls on the table.

He summarises the first chapter with:

what the senses immediately tell us is not the truth about the object as it is apart from us

That is, the senses only tell us about the relation between an object and our sense data.


I should maybe watch the vid on derived distributions too, to see whether it helps my thinking.

“When writing, you should be adapting to your readers’ needs.”

Mass flow

The case for the ‘mass flow’ approach would be to code the reaction mechanisms in Python.

I’ve done a bit of coding for the mass flow. I think it’s much purer to think of the fuel production as operations on a mass entity that is poked and prodded as it makes its way through the process.
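To make that concrete, here’s a rough sketch of what I mean (the species names, yields, and step functions are placeholders I’ve made up, not the real reaction mechanisms): a mass entity that each process step takes in and hands back.

```python
from dataclasses import dataclass, field


@dataclass
class Mass:
    """A mass entity: kilograms of each species, carried through the process."""
    composition: dict[str, float] = field(default_factory=dict)  # species -> kg

    def total(self) -> float:
        return sum(self.composition.values())


def dry(mass: Mass, moisture_removed: float) -> Mass:
    """Placeholder step: remove a fraction of the water content."""
    out = dict(mass.composition)
    out["H2O"] = out.get("H2O", 0.0) * (1.0 - moisture_removed)
    return Mass(out)


def react(mass: Mass, conversion: float) -> Mass:
    """Placeholder step: convert a fraction of 'feedstock' into 'fuel'."""
    out = dict(mass.composition)
    converted = out.get("feedstock", 0.0) * conversion
    out["feedstock"] = out.get("feedstock", 0.0) - converted
    out["fuel"] = out.get("fuel", 0.0) + converted
    return Mass(out)


# The mass entity gets poked and prodded as it moves through the process.
feed = Mass({"feedstock": 100.0, "H2O": 20.0})
product = react(dry(feed, moisture_removed=0.9), conversion=0.8)
print(product.composition, product.total())
```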

Uncertainty

It seems that at each step of the measurement, there’s a recursive process where we go through uncertainty quantification again. So choosing a simple example might be a better start.

If I can get an idea of the process for a simple measurement, it might be easier to recognise the recursive notion when I come across it in a more complicated measurement.
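As a toy version of that simple measurement (numbers entirely made up, and this is where the derived distributions come in): a resistance derived from a voltage and a current, where each input needs its own uncertainty quantified before the derived quantity can get one, and those inputs could themselves be derived, which is where the recursion shows up.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Toy numbers only. Each input quantity needs its own uncertainty
# quantification before the derived quantity can get one.
voltage = rng.normal(5.00, 0.02, N)    # V, with its own uncertainty budget
current = rng.normal(0.100, 0.001, N)  # A, with its own uncertainty budget

resistance = voltage / current         # derived distribution for R = V / I
print(f"R = {resistance.mean():.2f} ohm +/- {resistance.std():.2f} ohm")
```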

Here’s how I’m thinking about ideas of error:

Imagine the value of the measured quantity over infinite measurements. Random error is associated with precision: for any single measurement, how far are we from what we would expect to see if we did this same measurement an infinite number of times?

Systematic error is related to accuracy: for the mean we expect to see over those infinite measurements, how far is it from the actual value of the measurand (or its ‘true’ value)?
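A toy simulation of how I’m picturing the two (all numbers invented): the spread of repeated measurements around their own mean is the random error, and the offset of that mean from the ‘true’ value is the systematic error.

```python
import numpy as np

rng = np.random.default_rng(1)

true_value = 2.000         # the 'true' value of the measurand (toy number)
systematic_offset = 0.050  # bias: always in the same direction
random_sd = 0.020          # spread from imperfect repeatability

# Imagine repeating the same measurement many times.
measurements = true_value + systematic_offset + rng.normal(0.0, random_sd, 10_000)

print(f"mean of measurements       : {measurements.mean():.3f}")
print(f"random error (precision)   : spread about the mean ~ {measurements.std():.3f}")
print(f"systematic error (accuracy): mean - true value ~ {measurements.mean() - true_value:.3f}")
```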

Example

Expected value, mean, and variance: how do they come into play here?

We have some realized measurand, say the current in a resistor.

We imagine that we can repeat our experiment an infinite number of times.

What is the expected value?

We have an expected value.

We assign a distribution around this expected value due to random error: the variation based on the repeatability of the experiment, e.g. the calibration of the ammeter and its ability to repeatably perform a measurement.

We assign a distribution around this expected value due to systematic error: the degree to which we’re not actually measuring the current in the resistor, e.g. the current is continuous while we measure discrete values, and we have incomplete knowledge about how all the conditions might affect the current measurement.
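Putting those steps into a sketch for the current example (the distributions and numbers are placeholders I’ve made up, not a recipe): an expected value, a distribution for random error from repeatability, a distribution for systematic error from incomplete knowledge, and the combined distribution we’d end up assigning to the measurand.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

expected_current = 0.250  # A, the expected value (placeholder)

# Random error: repeatability of the ammeter, modelled as zero-mean noise.
random_error = rng.normal(0.0, 0.002, N)

# Systematic error: incomplete knowledge of the measurement conditions,
# modelled here as a uniform offset we can't pin down.
systematic_error = rng.uniform(-0.005, 0.005, N)

# The distribution we assign to the measurand combines all three.
current = expected_current + random_error + systematic_error

print(f"mean     : {current.mean():.4f} A")
print(f"std dev  : {current.std():.4f} A")
print(f"95% range: {np.percentile(current, 2.5):.4f} to {np.percentile(current, 97.5):.4f} A")
```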

The Unknown and the Unknowable

I’m getting stuck on this point: how we know that we can never know a measurand.

What an object is can only be known by proxy of the senses. Might have to be content with this for now. You can’t assign the sense data to the object. The table is not brown; it’s only perceived as brown. We can’t tell its true colour independent of our sense of it.

Is systematic error quantifiable?

As in Russell above, our sense perception is a form of reduction. Russell would go on to ask: is there a real table there at all (outside what we infer of it), and if so, what kind of object is it? I think for my stuff we assume that this quantity exists as we describe it theoretically? Or is that a sticking point for me? Are we saying that we deviate from the ‘true’ object that is current, or do we accept our best theory of current as the truth?

So is current ‘charge per unit time’, and is systematic error then how much credence we give to the possibility that this ‘theory’ could be wrong?

Systematic error is always in the same direction.

Dealing specifically with errors in measurement (not in theory).