Reality in the context of Physics

What is Truth?

There are two important aspects to truth. One aspect is the correspondence between what exists or existed in Object reality and what is perceived by an observer; the other has to do with completeness. It can be seen that absolute truth belongs only to the unobserved Object reality. The observer-fabricated Image reality is a partial truth that may also be distorted. This explanatory framework, unlike relativity alone or QM alone, both of which rely upon an observer's viewpoint to give a singular reality regarded as truth, has a home for absolute truth. That is all that the object can be, without the limitation of human perception (including the limitations of enhanced perception using tools) and without distortion. It is the source of the complete information about it that may exist in the environment, if it is able to fully interact with the surrounding pool of EM. That may be thought of as data able to produce all manifestations of it, from different distances and orientations of the observer relative to it.

Here are some of the things that I have said about truth that are relevant to this model. But first, a note on measurement in relation to the 'truthfulness' of measurement.

A note on measurement

The Object reality should not be confused with objective reality. It is perhaps unfortunate that they sound similar; I've been using the term for too long to change it now. Objective reality is multi- or inter-subjective Image reality, where many measurements or observations by one or many observers give a 'reliable' output. E.g. many measurements of a single dimension of any object with a ruler will generally be regarded as an objective, reliable measurement. (I'll come back to that idea later.)

The outcomes of measurements are on the Image reality side; they are what we see. As you have probably experienced when trying to measure the height of a liquid in a measuring flask, where the observer is situated relative to the scale can affect the outcome, as can the judgement of where on the meniscus to measure. It should always be read at the bottom of the meniscus at eye level, but the point is that it is a subjective call. As is the measurement of a quantity on an analogue weighing scale: the measurement is relative to the observer's position.
Jonathan Dickau's examples of measuring coastlines illustrate another facet of the subjectivity of measurement: the outcome depends upon the scale of the measuring device. There has to be a subjective call as to what scale of measurement is good enough. Coming back to the objective ruler measurement, the result, though objective, cannot be considered absolute reality, because the measurement may have been in inches; what if it is done in cm, or mm, or microns, or angstroms? A convenient scale can be selected if the aim is only gross comparison of something against other things measured at the same scale. But as you and others demonstrate, the greater the complexity of an object's perimeter, the greater its length if measured at an appropriately small scale. The complex Object does not have just one measurement that fully describes what it is like. Not only is the scale of measurement important but also where on the object the measurement is made, as going over or past this or that bump, and into or past this or that crevice, could make a significant difference to the outcome.
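As a rough, self-contained illustration of that scale dependence (a sketch added here, not part of the original exchange; the 'coastline', the numbers and the chord-stepping rule are invented purely for demonstration), a finely sampled irregular line can be measured with progressively shorter 'rulers':

    import math, random

    def jagged_coast(n=2000, seed=1):
        # A stand-in 'coastline': an irregular polyline sampled very finely.
        random.seed(seed)
        pts, x, y = [(0.0, 0.0)], 0.0, 0.0
        for _ in range(n):
            x += 1.0                    # steady progress along the coast
            y += random.uniform(-3, 3)  # irregular sideways wiggle
            pts.append((x, y))
        return pts

    def measured_length(pts, ruler):
        # Approximate the line with chords roughly one 'ruler' long:
        # step ahead to the first sampled point at least one ruler away
        # and add the straight-line distance covered by that step.
        total, anchor = 0.0, pts[0]
        for p in pts[1:]:
            if math.dist(anchor, p) >= ruler:
                total += math.dist(anchor, p)
                anchor = p
        return total

    coast = jagged_coast()
    for ruler in (200, 50, 10, 1):
        print(f"ruler {ruler:>3}: measured length {measured_length(coast, ruler):.0f}")
    # The finer the ruler, the more of the wiggles it follows,
    # so the measured length grows as the measurement scale shrinks.

The particular curve and stepping rule do not matter; any sufficiently crinkled boundary shows the same behaviour, which is the point being made about scale.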
We see images of things, and can have knowledge of things, because output is fabricated from sensory data input. If it is necessary to produce data (as in your proton experiment example) in order to 'see' or measure a thing, then the output is Image reality. The representation fabricated is not the Object itself. Just because a measurement is objective, we should not regard it as absolutely true, but only as representative, and relative to how the measurement was made.


Georgina Parry replied on Jul. 12, 2012 @ 14:03 GMT
Hi J.C.N. Smith, you ask what is truth? I would like to separate truth from truthfulness. I think we can have greater and lesser degrees of truthfulness but no access to absolute truth. Complete or absolute truth is not a particular viewpoint (physical perspective or subjective opinion or relative measurement). Absolute truth has correspondence to all possible measurements and physical perspectives of some thing or event, from all directions, scales, distances and times, and even different observer kinds. It is the whole elephant, seen every way, so to speak, not any part or individual glimpse. It is what is, or was, in every possible way it could be described.
[shortened]
Georgina Parry replied on Jul. 12, 2012 @ 20:33 GMT 
I still might not have made that differentiation clear. The truth is in the Object reality. It is the structures and patterns and their relationships, former iterations being the home of historical truth. It is independent of human thought about it or description of it. Any physical viewpoint, or opinion, or measurement, (only) has a relationship to the truth; that is what I meant by correspondence. It isn't the absolute truth itself. Several different observers can give their own accurate accounts, according to their perspective. All of those accounts can be different and seemingly contradictory. All can correspond to foundational truth, the source of the data they have used for their personal representation of reality. So all are truthful. There are far more perceptions and measurements that could be made than the few that are selected and from which a representation of reality is constructed. The greater the amount of reliable data that is available, and the higher the quality of the data, the greater the likelihood that there is a high level of truthfulness in the fabrication made from it.

Maximising truthfulness should be the aim. There shouldn't be a presumption that the truth itself is known on the basis of scant, partial evidence. Conspiring against that aim are two human biases: the tendency to draw strong conclusions from incomplete information, i.e. the 'what you see is all there is' bias, which was mentioned in the essay, and the tendency to form coherent causal stories from unrelated facts, i.e. the 'narrative fallacy'. These are two of many human biases that Daniel Kahneman identifies in his book 'Thinking, Fast and Slow'; see the essay references.

Georgina Parry replied on Jul. 14, 2012 @ 01:56 GMT
Dear J.C. N. Smith,

I'm glad the elephant analogy works for you. It is insufficient on its own, though, because it seems to imply that the elephant has to be detected in some way to be the truth, which is not so. It can still be truth, but unknown, even if there is no observation or description of it. Also, as I said, there are many more possible observations and measurements, giving truthful but partial information, than are actually selected; also many more different but truthful descriptions that might be found. Sorry for my earlier rambling replies; I was trying to pin it down. Yes, I think you have nicely summarized the role of science. I don't think it should ever be declaring the truth, but continually finding what is truthful and not truthful, to help piece together a better understanding of nature.

Some discussion of Truth, Measurement and Observables

Georgina Parry replied on May. 19, 2011 @ 22:11 GMT FQXi.org blogs

It might be asked where truth is in this model, when each observer produces their own observed reality. Is truth purely subjective, or is there an objective truth? The answer is that truth is not the reality that is perceived by any individual observer or group of observers, but the correspondence between Object reality, or what is (and as time passes, what was), and what is perceived via the available data in the environment. There can of course be interference, obstruction, perturbation, distortion, alteration and fabrication of the data that is available, and that will alter what is perceived. If the data has been altered or prevented from reaching the observer, then the observer will not perceive the truth. Knowing of the distortion, such as gravitational lensing, an observation closer to the truth can be uncovered.

Seeing something does not make it true just because it has been seen. Think of the illusionist at work and the lengths he goes to to limit the data available to the audience, so controlling their perception. Sometimes more data is all that is required to uncover the truth; preventing or obstructing access to that data hides the truth. Data, just because it is data, is also not necessarily true, as in closely corresponding to what was in Object reality. Think of a mirage, an aberration of the available data caused by the refraction of light, or of subjective editing and censorship, such as the careful cropping of a photograph to exclude data which would give a different opinion of the content. So the quality, reliability, completeness and origin of the data must also be considered when considering whether something observed is true.

The truth requires the faithful and complete transmission of potential sensory data from Object reality to the observer's Image reality. Without complete and faithful data transmission, the Image reality constructed is not the complete truth, but its veracity will fall somewhere along a spectrum from complete truth to complete falsehood. (Thought added 9 Aug 2012: or absence of truth.)

Georgina Parry replied on May. 24, 2011 @ 00:39 GMT FQXi.org blogs

Isn't that controversial? Doesn't anyone want to say that scientific observations are objective and therefore true? Or that space-time is the only reality, there is no other, and therefore whatever is observed is the truth? Seeing is believing? Or that quantum physics is so counter-intuitive that we should give up all ideas of truth?

Trying to think like a mathematician, here's an idea. Perhaps truth confidence limits could be assigned to observations. Truth confidence limits would be a mathematical representation of a qualitative assessment. I do not think this necessarily requires fuzzy logic mathematics, but some kind of qualitative valuation of data attached alongside the quantitative value. It is 'baggage' that the data takes everywhere with it. So if different data are mathematically processed together, each brings along its own 'baggage', which is then taken into account when the outcome is determined, and truth confidence limits could be given to the outcome.

Important factors in determining limits that spring to mind would be: the amount of data collected (important for statistical significance); the breadth of the data (important to get multiple perspectives); how much distortion, such as refraction or temporal delay, has occurred; the clarity of the data (how many gaps, or the level of interference); how much addition of data or superimposition/contamination of data from outside sources has occurred; the source of the data and its reliability (which might include experimental design); and how likely the data is to be genuine rather than fabricated. Some of these are already taken into account in experimental design, such as the number of repeats necessary for statistical significance, or the limits of accuracy of the experimental apparatus. These alone are not enough to have narrow limits put at a very high truth confidence level. New data, possibly from a different experiment giving a different perspective, can easily overturn the conclusions drawn from former experimental evidence. In many cases the values of those factors necessary to accurately determine truth confidence limits are just not known, which has to leave a large question about the truth confidence of the observation, however much we would want it to be true.
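A minimal sketch of how such 'baggage' might be carried alongside a value through a calculation (purely illustrative and not part of the original post; the class, the combination rule and the numbers are hypothetical choices):

    from dataclasses import dataclass

    @dataclass
    class Assessed:
        # A quantitative value carrying its qualitative 'baggage':
        # lower and upper truth-confidence limits, each between 0 and 1.
        value: float
        lower: float
        upper: float

        def combine(self, other, op):
            # Process two assessed values together; the baggage comes too.
            # One possible rule: the outcome can be no more trustworthy
            # than its least trustworthy input.
            return Assessed(value=op(self.value, other.value),
                            lower=min(self.lower, other.lower),
                            upper=min(self.upper, other.upper))

    # Hypothetical usage: a well-replicated but narrow-perspective measurement
    # combined with a scantier one; the outcome inherits the weaker baggage.
    a = Assessed(value=12.5, lower=0.3, upper=0.9)   # many repeats, one viewpoint
    b = Assessed(value=7.25, lower=0.1, upper=0.6)   # scant data
    total = a.combine(b, lambda x, y: x + y)
    print(total)   # -> Assessed(value=19.75, lower=0.1, upper=0.6)

Other combination rules could equally be chosen; the only point of the sketch is that the qualitative assessment travels with the number rather than being discarded.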

Consider the disappearing elephant illusion. The observation can be performed countless times by numerous observers, and the elephant disappears. Due to the number of observations made, the disappearance is statistically significant and not an artifact or rare chance occurrence. Binoculars could be used to enhance the visual accuracy of the observation above that of normal eyesight, so the possibility that the effect was due to not seeing clearly enough is also overcome. However, it is only by having numerous observers in different positions, not just the audience positions, so not replicating the former observers' situation, that the illusion can be uncovered, as then it is easy to see the elephant concealed behind the mirror. It is not the amount of data or the magnification of detail, in this case and others, that reveals the truth, but the breadth of the data: many different observer positions to ascertain the more complete truth, and not just many observers reinforcing a similar limited perspective. Without numerous widely different perspectives there would have to be a very low lower limit to truth confidence, but the number of repetitions and the accuracy of the observation would heighten the upper limit of the truth confidence. Scientific results and conclusions would then not be mistakenly equated with truth, but treated as evidence of the structure and function of the universe, with a variable qualitative assessment, truth confidence limits, assigned to them.
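A toy numerical illustration of that point, invented here (the concealment angle and the observer counts are arbitrary): repetition from one narrow arc of seats never uncovers the concealed elephant, while a few widely spread viewpoints do.

    import random

    def sees_elephant(angle_deg):
        # Toy model: a mirror conceals the elephant from any viewpoint
        # within 30 degrees of straight-on; from elsewhere it is visible.
        return abs(angle_deg) > 30

    random.seed(0)
    # Thousands of repeated observations, all from audience seats
    # (every seat lies within the concealed arc):
    audience = [sees_elephant(random.uniform(-25, 25)) for _ in range(10000)]
    # A handful of observations spread right around the stage:
    walkabout = [sees_elephant(random.uniform(-180, 180)) for _ in range(20)]

    print("audience sightings :", sum(audience), "of", len(audience))
    print("walkabout sightings:", sum(walkabout), "of", len(walkabout))
    # Repetition from one narrow perspective never reveals the concealed
    # elephant; a few widely spread viewpoints do.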

What do you think? Would such self-assessment undermine the reputation and respect of science? Or is it honesty that could be of benefit in evaluating scientific endeavor?

Georgina Parry replied on May. 24, 2011 @ 22:04 GMT FQXi.org blogs

I should perhaps just say that that is a little different from conventional statistical confidence limits, where a greater amount of data would give more results around the mean, for a normal distribution, narrowing the limits and so showing a more accurate result. We could say that with a greater amount of data the result is likely to be more accurate and therefore has greater potential truth, raising the upper truth confidence limit. Though it does not increase the lower limit of truth confidence if another factor relevant to truth confidence has not been considered.
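For comparison, a minimal sketch of the conventional statistical behaviour described above (added here for illustration only; the noise level and sample sizes are invented):

    import math, random, statistics

    # Repeated measurements of the same quantity, each with the same noise.
    random.seed(0)
    true_value = 10.0
    for n in (5, 50, 500):
        sample = [random.gauss(true_value, 1.0) for _ in range(n)]
        mean = statistics.mean(sample)
        sem = statistics.stdev(sample) / math.sqrt(n)   # standard error of the mean
        half_width = 1.96 * sem                         # approx. 95% confidence half-width
        print(f"n={n:>3}  mean={mean:.2f}  95% CI half-width={half_width:.2f}")
    # More data narrows the statistical interval around the mean (greater
    # precision, a higher upper limit of potential truthfulness), but it says
    # nothing about factors that were never varied or measured, so by itself
    # it need not raise the lower truth-confidence limit.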


On Observables
In the case of the unseen spinning, falling coin, the possible outcome states are conjoined with the substantial matter of the coin and its flux as it falls and spins. The Object reality of the coin is thus providing a real, substantial carrier wave of the proto-observables, which upon interaction with the measurement protocol gives just one definite observable, because the material-flux carrier relationship is destroyed. The coin is halted (the carrier wave ceases to exist) and the material coin is fixed in a limited state (only one surface potentially visible). On observation an observer reference frame is imposed, switching from the abstract theoretical superposition of observables to the Image reality Definite Limited Fixed State output of sensory data processing.


In the case of the electron in the double slit experiment: it can be supposed that there is also a substantial carrier wave interaction prior to the outcome observation. Prior to observation the electron is influenced by the waves produced from the vibration of the atoms of the apparatus [combined also with the effect of its own motion] in unseen Object reality. The interaction of the carrier waves with each other produces the interference pattern, and the electron's final position on the screen is affected by the environment produced by the carrier waves. This model of the double slit experiment was put forward in my FQXi essay 'What Is Reality in the Context of Physics?' by Georgina Parry (created by Georgina Woodward, Feb. 7, 2011 @ 15:58 GMT).

These models, of substantial carriers as the influential environment in which proto-observables actually exist, are the realistic counterpart to a disembodied superposition of observables in a mathematical space. The models give the environment that makes wave-function collapse intuitive and remove any requirement for many-worlds explanations, as why this outcome and not another occurred is fully explained by the absolute environment in which the observable was formed. The observables-in-superposition model is useful but unrealistic, as the outcomes do not exist until measurement; they are not free but constrained by their carrier. Probabilistic outcomes from that deterministic picture are due to not having or knowing a starting state, for a particular reference frame, of any individual proto-observable-carrier ensemble. So the outcome that will be obtained cannot be calculated with certainty.


Wave function collapse is the switch from a theoretical superposition of isolated observables (outcomes), not yet formed, to a definite, limited, fixed-state manifestation in space-time, emergent reality, as a Definite Fixed State observable is produced upon observation. What exist in Object reality, prior to measurement, are proto-observables conjoined with the carrier.

The (Definite Fixed State) observables do not exist in space-time prior to the observation. Space-time is the output of sensory data processing, the one fixed viewpoint formed from the sensory data received. The proto-observables conjoined with the material-wave carrier exist in absolute Object reality space (no singular reference frame), for which there is uni-temporal (same time everywhere) passage of time. It is interaction with the apparatus and/or measurement protocol that forms an observable from a proto-observable.

For the unseen spinning, falling coin example, the proto-heads observable can have many different orientations in absolute space that are within the repertoire allowed by the flux. It is absolute (the source reality for all reference frames); not definite, as no reference frame and no measurement have yet been applied; and not fixed, as it is in flux. But the output observable produced by the measurement protocol has only one orientation in space-time: heads up, seen by the observer. The definite, limited, fixed state has been produced by the measurement protocol; it is not representative of the absolute, actualized (substantial) proto-observable-matter-flux carrier ensemble pre-measurement. The superposition of outcomes in a 2D mathematical space is an impoverished model compared to the variation of the proto-observable during the material-flux (or wave) carrier interaction.

The hidden variables that make the outcomes deterministic rather than merely probabilistic are substantial and unseen in absolute space, the foundational space that is the source reality for (definite, limited view) space-time emergent reality. The outcomes of the many experiments remain probabilistic despite the deterministic flux of the proto-observables, because the 'starting state' of a particle or other unseen object is never known. The results thus represent variability of states, rather than uniformity, within the population.
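A toy numerical illustration of that last point, added here as a sketch (the spin rate, fall time and number of throws are arbitrary, invented values): a coin whose landing face follows deterministically from its starting phase still yields probabilistic-looking statistics when that starting phase is unknown.

    import random

    def coin_outcome(initial_phase, spin_rate=37.0, fall_time=1.2):
        # Toy deterministic coin: it spins at a fixed rate while it falls,
        # so the face showing on landing follows exactly from the start state.
        final_phase = (initial_phase + spin_rate * fall_time) % 1.0
        return "heads" if final_phase < 0.5 else "tails"

    random.seed(0)
    # The dynamics are fully deterministic, but the starting phase of any
    # particular throw is never known, so here it is drawn at random.
    outcomes = [coin_outcome(random.random()) for _ in range(100000)]
    print("heads:", outcomes.count("heads"), " tails:", outcomes.count("tails"))
    # Roughly 50/50: probabilistic-looking statistics arising from a
    # deterministic rule plus an unknown starting state.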


See for further discussion