

Submitted by Lawrence Denslow on Fri, 05/30/2014 - 08:03

Chapter I: Introduction

Introductory Observations

Each person has learned to deal with his or her world in ways that make sense to that person. The way each of us sees the world is the result of personal experience, which has had to be resolved individually and made to fit together in a personally rational manner. Our experiences are made up of the things shown to us, told to us, or read by or to us individually, as well as the things which we have done. Each of us has had to deal with other people telling us one thing and in the very next breath saying the opposite. Each has had to deal with conflicts of all sorts, all of which we have had to resolve and integrate into a Personal Conceptual Framework. Every time something new came along that did not initially fit, we determined whether it represented a conflict with previously accepted concepts or merely had to be added to the previous pattern of concepts, thereby expanding the Personal Conceptual Framework.

If a new concept conflicts with a concept previously accepted as valid, one may have difficulty. The ability to resolve such conflicts often rests on how the previous concept was integrated into the Personal Conceptual Framework. Any previously held concepts which have emotional overtones may be genuine stumbling blocks. However, those which were gained through the educational process, which is generally based on logic, can usually be resolved by spending time with the new idea and playing with its ramifications. Concepts can be categorized into various groupings such as social, political, religious, scientific, etc. Among these each person can probably identify several concepts which fall into both the emotional and the logical learning modes.

In reading and studying this introduction to the Fundamentals of Scalar Motion you may come face to face with concepts that conflict directly with previously accepted ideas. You will be faced with the necessity of examining the bases from which all of your interpretations regarding descriptions of this physical world are made. The situation being faced is well described by two ideas which apply to all categories: “The most difficult task anyone ever faces is to take an old set of data [the way one is used to interpreting his world] and interpret it from a new perspective,” i.e., using a totally different set of rules; and “One cannot confute with logic that which was not learned by logic.”

Historical Perspectives

Consider from a historical perspective of western cultures the picture that is presently held of the universe in which we live. Prior to the publication in 1543 of Nicolaus Copernicus’ De Revolutionibus, the geocentric or earth-centered universe was the only available explanation for the positional relationships among the objects observed in the heavens. Even after Copernicus published his work, the vast majority of natural philosophers (the scientists of the era) continued to hold onto the ideas of geocentrism, owing to the lack of adequate supporting evidence to the contrary and of mathematical analysis of the evidence then available.

In 1601 the mass of astronomical data, particularly on the relative positions of the planets, accumulated by Tycho Brahe (1546-1601) became the property of Johannes Kepler (1571-1630), who had been Brahe’s assistant. Kepler published the first of his works on the analysis of that and other data in 1609. With the publication of the last of his four volumes, he had shown that the data supported elliptical rather than circular orbits around the sun for the planets in a heliocentric system, as previously proposed by Copernicus. The three empirical laws which Kepler had derived placed the sun at one focus of a unique ellipse for each planet and showed that a radius vector from the sun to each planet sweeps out equal areas in equal time intervals. It was also shown that the square of the period of revolution divided by the cube of the average distance from the sun remains a constant value for all of the planets. Another way of expressing the relation between the period of revolution and the average distance of each planet from the sun is:

$\frac{P_i^2}{d_i^3} = \frac{P_o^2}{d_o^3}$
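The constancy that this relation expresses can be checked numerically. The following is a minimal sketch in Python, using approximate textbook values for the orbital periods (in years) and mean distances from the sun (in astronomical units); in these units the ratio P²/d³ comes out close to 1 for every planet:

```python
# Check Kepler's third law: P^2 / d^3 is (nearly) the same constant
# for every planet when P is in years and d is the mean distance in
# astronomical units.  The orbital values are approximate textbook figures.
planets = {
    "Mercury": (0.241, 0.387),   # (period P in years, distance d in AU)
    "Venus":   (0.615, 0.723),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.86, 5.203),
}

for name, (P, d) in planets.items():
    print(f"{name:8s} P^2/d^3 = {P**2 / d**3:.3f}")
```

With these units the common constant is 1 yr²/AU³, so any pair of planets i and o satisfies the relation above to within the precision of the data.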

Galileo Galilei (1564-1642) is recorded as the first person to have used experimental as well as philosophical evidence to decide between conflicting ideas. He used telescopic evidence of the visual phases of Venus, together with verbal discourse, to show the validity of the heliocentric concept as it contradicted the geocentric system and its multiple epicycles. Galileo was not much different from most of us today; he bowed to the pressure of the church (the establishment of the day) rather than face imprisonment and torture (the equivalent today is to lose one’s job and not have one’s work published). Fortunately, his Dialogue Concerning the Two Chief World Systems was published in Protestant Holland, and soon his ideas met with widespread acceptance, which caused the eventual death of the geocentric theory.

It was many long years after Kepler and Galileo before the majority of scientists accepted heliocentrism, since it was not until 1687 that Isaac Newton integrated most of the concepts in his Principia. Newton’s invention of the reflecting telescope was no small contribution to the advancement of experimental science, but the derivation of the three laws of motion has been shown to be an even greater contribution to theoretical science. Although he proposed no theoretical basis for why these laws should exist, the relations themselves have been the impetus for nearly all subsequent scientific work. The empirical relation for the effect recognized as gravity does not conform to his previous definition of a force and, therefore, remains only an observed relationship. Newton’s insistence that space and time were not only separate entities but were unrelated continues to limit the theoretical development of explanations for the behavior of the universe.

It is interesting to note that prior to Olaus Roemer’s determination in 1675 of the speed of light, light had been assumed to have instantaneous propagation or infinite velocity. That determination along with subsequent experimental evidence that light is not propagated instantaneously led to a reversal of viewpoint and the conclusion that there can be no instantaneous effect of any kind, which may or may not be true.

Because of the environment of people and other organisms on this planet, the concept of an “ether” in which the planets and stars “floated” was a natural development and a thoroughly accepted concept well into the early part of the 20th century. The Michelson-Morley experimental results (1887), verifying the constancy of the velocity of light regardless of direction or velocity of the emitter, tended to confound more than clarify the thinking of a majority of physicists at that time.

Einstein claimed not to have heard of their results at the time he developed his Special Theory of Relativity. In the development of the Special Theory, the medium-like properties that had previously been reserved for the “ether” said to permeate all of space were attributed to space itself. By redefining the properties of space and elevating to the level of postulates the constancy of the speed of light and the equivalence of all inertial frames of reference, the possibility of a curvature of space was introduced, which has led to speculations that can be shown to be completely unnecessary and, thereby, unwarranted. In developing the mathematics of the Special Theory, Einstein indicated that he was working from the descriptions of, and difficulties with, magnetic induction. There is no argument with the mathematics of the Special Theory, only with certain subsequent interpretations thereof.

Prior to the development of the atomic theory of matter, all solid materials were considered to be continuous substances. With the advent of the atomic theory of matter, as derived for the behavior of gases, the continuity concept was modified to become atoms in contact in the solid phase. This concept dominated the thinking of members of the scientific professions until around the turn of the last century and led Ernest Rutherford to the conclusions he drew from his now famous gold foil experiment. The idea that atoms in solids are like marbles arranged neatly in a box, and that their different sizes control the geometries in which they can be stacked for the many different crystalline and amorphous solid structures observed, is an assumption, not an experimental observation. There is a difference between the experimental observation that the atoms of the different elements seem to require varying interatomic distances in their associations with each other and the statement that atoms have different sizes; the statement is based on the assumption of geometric solidity for each atomic structure. In spite of more recent modifications, this assumption has continued to place conceptual blinders on virtually all people who have come in contact with Rutherford’s explanation of the experimental results of bombarding a thin gold foil with positively charged helium atoms, alpha particles. The interpretation of experimental results in terms of current theories implies that the measurable space between the centers of location for atoms of gold is essentially unoccupied. Whether that space is totally unoccupied, or is occupied very sparsely, or what the meaning of occupancy may be, is a matter of theoretical interpretation.

Another result of the idea of semi-solid atoms is that added heat is thought to cause the atoms to move more violently in translation and to drift far enough apart on average (linear and volume expansion) that individual atoms or molecules can slip out of their geometric positions into positions between others; melting thereby becomes thought of as a function of the entire aggregate. A similar conclusion concerning the gas phase is that atoms or molecules with sufficiently greater translational movement in random directions are able to eliminate any permanent positional relationships with their neighbors. Both of these conclusions are subject to reinterpretation by noting the difference between experimental observation and the interpretation of those observations.

As a result of the idea that atoms are in contact in the solid phase of matter, the conclusion that the mass encountered at the center of the region allotted to each atom in any crystal structure is the nucleus of the atom seems completely logical and beyond question. However, the idea that the apparently small massive effect at the center of the allotted region may be the entire atom is also a possible interpretation. But such an interpretation, at this point in scientific investigations into the nature of matter, upsets many favored ideas and is thus generally ignored. Not only would it wipe out the idea of a nuclear atom constructed from sub-atomic particles which must be arranged in some reasonable and consistent pattern in each different kind of atom, but it would demolish the necessity of having hypothetical forces to hold the particles together, as well as the necessity of proposing unobservable properties for these and other hypothetical particles. It has been mathematically shown that the experimentally observed relations among electrostatic charges are definitely inadequate to produce and maintain the proposed nuclear arrangements, thereby making it necessary to propose a theoretical nuclear force.

Since almost all of our explanations about atoms, molecules, crystals, and the like are based on the concept of a nuclear atom, a multitude of ideas would change as a result of considering the idea of extremely small atoms held apart by a heretofore unknown force. Abolishing the nuclear atom does not, however, require going through basic chemistry and physics to identify every explanation that would need to be changed. To properly consider the idea of a previously unknown force holding very small atoms apart, what should be done, and has been done, is to develop a completely self-consistent set of explanations that do not violate or contradict any verified experimental evidence. All explanations thereby become reoriented, making it possible simply to abandon all previous explanations based on the nuclear concept of atomic structure.

Laws of Behavior of Matter

We observe and interact with a world that for all practical purposes is made up of matter having energetic behavior, and we have deduced some of the laws for its behavior. The first such law was formulated by Newton in what we now call the First Law of Motion: objects stay in the same state of motion until acted on by an unbalanced force. For defining that motion, use of our everyday concept of space is adequate: space is a fixed three-dimensional framework for defining locations of objects. The velocity of an object is defined in terms of its rate of change of location in that space; i.e., the amount of change of spatial location in ratio to the corresponding amount of a progressive characteristic of that which is called time:

Equation 1: Speed

$v = \frac{\Delta s}{\Delta t}$

Even though it is observed that objects may remain in the same state of motion—i.e., they are either apparently motionless or are moving with constant velocity—they are observed to undergo changes in their state of motion when acted upon in some manner. If the rate of change of location does not remain constant, the rate at which the rate of change of location changes is called acceleration and is analyzed in the same manner.

Equation 2: Acceleration

$a = \frac{\Delta v}{\Delta t} = \frac{\Delta s / \Delta t}{\Delta t}$
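Both ratio definitions can be illustrated by finite differences over sampled positions. The following is a minimal sketch, using hypothetical positions generated from uniformly accelerated motion (s = 5t², so the acceleration should come out as a constant 10):

```python
# Velocity as delta-s / delta-t and acceleration as delta-v / delta-t,
# computed by finite differences from uniformly spaced samples.
times     = [0.0, 1.0, 2.0, 3.0, 4.0]            # seconds (uniform spacing)
positions = [5.0 * t**2 for t in times]          # metres, hypothetical data

# v = change of location per unit time, over each interval
velocities = [(s1 - s0) / (t1 - t0)
              for (t0, t1, s0, s1) in zip(times, times[1:],
                                          positions, positions[1:])]

# a = change of velocity per unit time (valid here because the
# sample spacing is uniform)
accelerations = [(v1 - v0) / (t1 - t0)
                 for (t0, t1, v0, v1) in zip(times, times[1:],
                                             velocities, velocities[1:])]

print(velocities)      # [5.0, 15.0, 25.0, 35.0]
print(accelerations)   # [10.0, 10.0, 10.0]
```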

The Second Law of Motion is usually stated as follows: the acceleration an object experiences is directly proportional to the magnitude of the applied force and inversely proportional to the mass of the object undergoing the change in its state of motion. Thus the empirical relation for acceleration is equated to the statement of the Second Law by relating the two concepts implied by the First Law of Motion in an inertial frame of reference: mass and force.

Equation 3: Force to Mass Relationship

$\frac{\Delta v}{\Delta t} = a = \frac{F}{m}$

Mass may have been initially defined in some other way, but it is now used as a measure of inertia, the magnitude of the inherent material characteristic of resisting any change in the state of motion of an object. By the use of measured values of mass obtained by comparison effects and acceleration based on accepted arbitrary definitions, the definition of force is refined by appropriate algebraic manipulation:

Equation 4: Force

$F=ma$
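As a minimal numerical sketch of this relation and its rearrangement (the mass and acceleration are hypothetical values chosen for illustration):

```python
# F = m * a and the Second Law rearrangement a = F / m.
def force(mass, acceleration):
    """Force (newtons) required to give `mass` (kg) the stated acceleration (m/s^2)."""
    return mass * acceleration

def resulting_acceleration(force, mass):
    """Acceleration (m/s^2) produced when `force` (N) acts on `mass` (kg)."""
    return force / mass

m = 2.0                      # kg, hypothetical object
a = 3.0                      # m/s^2, desired change in state of motion
F = force(m, a)              # 6.0 N
print(F, resulting_acceleration(F, m))   # 6.0 3.0
```

The two functions are simply algebraic rearrangements of one another, reflecting the text's point that force is defined through measured mass and acceleration.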

Since we are not able to show experimentally that mass is the effect of anything more fundamental, notwithstanding certain interpretations of high velocity effects, mass is assumed to be a fundamental measure of existence for anything that is to be called matter. From this assumption and the observations which led to the laws of motion, all of the relationships referred to under the heading of mechanics were derived. Position, velocity, acceleration, force, momentum, impulse, work, kinetic and potential energy, and all the other relationships of mechanics are represented either as directionless or as directed quantities, depending on the requirements of experimental observations and mathematical concepts. After gaining the mental ability to handle mathematical abstractions, a personal system of coordinates previously incorporated into the Personal Conceptual Framework is given quantification and representation in an arbitrarily chosen graphical depiction for locations of objects. Consider a system of coordinates used on a personal basis for locating objects in the personal environment: how far right or left of the XZ plane, how far above or below the XY plane, and how far in front of or behind the YZ plane is a given object located?
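The three locating questions just posed can be sketched directly in code, with each coordinate read as a signed distance from one of the coordinate planes (the sample point is hypothetical):

```python
# Coordinates as signed distances from the three coordinate planes:
#   x: in front of (+) or behind (-) the YZ plane
#   y: right of (+) or left of (-) the XZ plane
#   z: above (+) or below (-) the XY plane
from collections import namedtuple

Point = namedtuple("Point", ["x", "y", "z"])

def locate(p):
    """Answer the three locating questions for point p."""
    return {
        "in front of YZ plane": p.x,
        "right of XZ plane":    p.y,
        "above XY plane":       p.z,
    }

p = Point(x=2.0, y=-1.0, z=3.5)   # hypothetical object location
print(locate(p))                   # negative values mean behind/left/below
```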

The mathematical formulas for physical processes involve the use of units of mass, length of space, time, intensity of radiation, and strength of charge or of the field surrounding, at a specified distance, either an electric charge or a magnetic source. Molecular quantities, and sometimes temperature, are also considered to be fundamental quantities because both have to be arbitrarily defined in terms of the presence of matter. Since charge effects and magnetic effects are not observed to exist except in the presence of matter (existence otherwise is only a hypothesis), and since radiation cannot be produced except by interactions of matter, it is left to mass, space, and time to supply the actual fundamental entities of the physical universe.

Since the various entities do not carry labels which brand them as fundamental or not fundamental, any development of concepts and theory should involve an attempt to eliminate ambiguities in definitions by defining all terms and concepts sharply and explicitly, as well as to identify the most fundamental entities from which all others can be derived. All conclusions that are reached—the intermediate as well as the final results—should be capable of being verified by comparison with the findings of observation and measurement, to the extent that observational knowledge becomes available. There should be no deliberate attempt to minimize or maximize the use of mathematics. An approach intended primarily to clarify the conceptual framework would be composed mostly of words and would not require the use of complex mathematics, even though such skills would be found invaluable when it comes to clarity of thinking, and therefore would not be avoided. All of the phenomena and entities, such as light, atomic and sub-atomic particles, and other objects developed with that approach, would be built up from simple foundations and thus may require only simple mathematics to represent their quantitative aspects.

All previous theoretical approaches have been based on units of something having some kind of mass effect and none have produced the desired result of a complete general theory for the structure of the physical universe. Most people who support a relational hypothesis of space and time use the idea that observable “events” require the presence of matter. The underlying assumption for these kinds of “events” is that the matter involved must be logically prior to space and time and thereby, that the space and time by which the “events” are identified could not be more fundamental than matter. But it must be remembered that that assumption is purely hypothetical, even though we as observers of phenomenal “events” require the presence of matter from which to make our observations. The opposing concept of space and time existing prior to observable “events”, and therefore prior to matter, cannot be ruled out from a purely logical viewpoint. In this latter concept it is implicit that space and time become the cause of matter and “events”.

Motion: Definitions and Assumptions

The principal common denominator of all phenomena seems to be related in some way to motion, whether we define the commonality as motion or not. But, WHAT IS MOTION? In our present ideas there are many unspecified or hidden assumptions that we make about motion that may not be true, even though they seem to be implicit in our observations. The first hidden assumption is that space is immovable and provides a fixed reference system from which to make statements about movement and motion in general. We observe it to be that way with respect to ourselves, and therefore assume that that is a fundamental characteristic of space.

Another hidden assumption is that there must be something there that can move before there is motion. We ignore the fact that the “something” identified as moving does not enter into the mathematical relation we call motion, but only makes its appearance in the definitions of force, momentum, and energy, as well as other derived functions. We have completely ignored the mathematical implication because the only movement we can directly observe is the movement of matter. The movement of light and other radiation becomes a non-material phenomenal requirement due to the mathematics of its analysis. From an everyday, practical viewpoint, light might as well be instantaneous.

It is quite probable that the reader has never paid any real attention to the fact that all observable motions are vectorial motions, because the quantity of space is always a directed quantity, a vector. To be accurate, a true scalar quantity does not have a directional property, and ignoring the direction does not make the space involved in any measurement a true scalar quantity. Any measurement of a quantity of space automatically imputes a direction, whether that direction is used as part of the data for calculations or not. The specific direction can be ignored, if one chooses, because of the isotropy of space, but that does not eliminate the directionality of the measurement. If space were not homogeneous and isotropic, it would be very obvious that direction would always have to be included in calculations involving spatial quantities.
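The distinction drawn here, that a measured quantity of space carries a direction while its bare magnitude is a true scalar, can be sketched as follows (the displacement values are hypothetical):

```python
# A measured interval of space is a directed quantity, a vector.
# Dropping the direction leaves only a magnitude, a true scalar.
import math

def magnitude(v):
    """Scalar magnitude of a displacement vector."""
    return math.sqrt(sum(c * c for c in v))

d1 = (3.0, 4.0, 0.0)   # hypothetical displacement, metres
d2 = (0.0, 0.0, 5.0)   # a displacement in a different direction entirely

# The two vectors are distinct, but their scalar magnitudes coincide:
print(magnitude(d1), magnitude(d2))   # 5.0 5.0
assert d1 != d2
```

Once only the magnitude is kept, the directional information is irrecoverable, which is the sense in which ignoring direction does not convert the measurement itself into a scalar.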

Because of these and perhaps other hidden assumptions or general practices resulting from the homogeneous and isotropic character of space, it becomes necessary to come up with other assumptions about the universe, develop the consequences of those assumptions, and compare the new theoretical consequences with the observed facts. Since only a relatively small portion of the universe is accessible to direct and accurate observation, we cannot make completely general determinations directly. But from those which we can make, if any disagreement occurs, one or more of our assumptions is incorrect and we must go back and start over with new assumptions. We have been doing that for quite a while, so we could do it again just as easily as we could add assumptions of impotence. If there seems to be full agreement between theoretical and practical consequences, the validity of the assumptions is substantiated to a degree which depends on the number and variety of the correlations that were made.

The most important of all possible assumptions are those which scientists have accepted as conditions for becoming scientists and which are seldom even mentioned in scientific discourse. In order to make science possible it is assumed that the universe is rational, that the same physical laws apply throughout the universe (perhaps not the same mathematical form of expression but the law is still the same), and that the results of experiments are reproducible. It must also be assumed that the accepted principles of mathematics, to the extent that they are used in any development, are valid.

In the course of the development of any theory, the use of one other assumption that is far superior to any other subsequent assumption must be made. This assumption is that the relations which are found in the region accessible to observation must also hold good in regions not directly accessible for observation. This is called an extrapolation assumption and is the single most important tool that any scientist has ever had. There are many “so-called” errors or failures from the use of this extrapolation assumption, but all previous “failures” of extrapolated relations can be shown to involve undeclared and erroneous assumptions, which of course, lead to erroneous results. Such failures resulting from erroneous assumptions become completely irrelevant in judging the reliability of the extrapolation process.

The extrapolation process is of such great importance because it is not usually possible to test the consequences of a single physical hypothesis in isolation. Most of the phenomena which are used for test purposes are complex events resulting from several properties and long sequences of operations which increase the chances for error in the extrapolation. When such complex theoretical events and processes correlate exactly with observations, the probability of errors of assumption, as well as extrapolation errors, is greatly reduced.

Before making any assumptions beyond those already specified, the general nature of space and time and the relation between them must be determined through a critical examination of current observations.
