1. Special relativity
2. General relativity
3. Periodic motion
4. The spectrum of physical interactions
5. The character of electrons and of other leptons
6. The quantum ladder of composite systems
7. Individualized currents
8. Aggregates and statistics
9. Coming into being, change and decay
This essay describes the transition from nineteenth-century ‘functional’ physics, emphasizing the relevance of quantitative,
spatial, kinetic, and physical generic relations, to twentieth-century ‘structural’ physics, in which the specific kinds of physical things and events are most important. This transition is generally recognized as one
of the most revolutionary in the history of science, comparable with Darwin’s introduction of evolution in biology.
Special relativity, general relativity and the
quantum theory of motion constitute a detailed critique of classical physics, forming the start of the new physics (chapters 1-3). This became the study of specific physical interactions in the standard model (chapter 4); of physical types of electrons and
other leptons (chapter 5); of the variety of complex systems ordered on the quantum ladder (chapter 6); of the meaning of photons and other interactive particles (chapter 7); of aggregates and their statistics (chapter 8); and finally, of the significance
of processes like coming into being, change, and decay (chapter 9).
This essay describes the transition from classical to modern physics from a critical-realistic point
of view. It is realistic insofar as it assumes that physics describes a really existing world
determined by natural laws which can be known, albeit partially and tentatively. It is critical insofar as it assumes that its theories are fallible and improvable, yet intended to be approximately true. Critical realism appears to be the only philosophy able
to account for the success of science in explaining the architecture of the world, and for the phenomenal development of both modern technology and medical practice.
Symmetry and transformation are key concepts in twentieth-century physics, occurring in all its various branches. In mathematics these concepts are treated in the theory of groups,
introduced in 1831 by Évariste Galois. A group is a set of elements A, B, ... that can be combined such that each pair A, B generates a third element of the group, AB. The combination is associative: (AB)C = A(BC) = ABC for short. Any group contains
at least one element, called the identity I, such that IA = AI = A. Each element A of the group has an inverse element A’, such that AA’ = A’A = I. The elements of a group are thus strongly interconnected. Between any two elements A and B a relation can be defined as AB’, the combination of A with the inverse of B. The relation of an element A to itself is AA’ = I: A is identical with itself. Moreover, (AB’)’
= BA’: the inverse of a relation of A to B is the relation of B to A.
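These axioms are easily checked numerically. The following sketch takes as an illustrative example the integers {0, 1, 2, 3} under addition modulo 4 (a choice not made in the text) and verifies closure, associativity, identity, inverses, and the relation AB’:

```python
# An illustrative finite group: the integers {0, 1, 2, 3} under addition modulo 4.

elements = [0, 1, 2, 3]
identity = 0

def combine(a, b):
    """The group combination AB: addition modulo 4."""
    return (a + b) % 4

def inverse(a):
    """The inverse element A', such that combine(a, inverse(a)) == identity."""
    return (-a) % 4

# Closure: each pair generates an element of the group.
assert all(combine(a, b) in elements for a in elements for b in elements)

# Associativity: (AB)C == A(BC).
assert all(combine(combine(a, b), c) == combine(a, combine(b, c))
           for a in elements for b in elements for c in elements)

# Identity and inverses: IA == AI == A and AA' == A'A == I.
assert all(combine(identity, a) == a == combine(a, identity) for a in elements)
assert all(combine(a, inverse(a)) == identity for a in elements)

# The relation of A to B, defined as AB'; the relation of A to itself is I.
def relation(a, b):
    return combine(a, inverse(b))

assert all(relation(a, a) == identity for a in elements)
```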
Each group is complete. If we combine each element with one of them, say A, the identity element I is converted into A, and the inverse of A becomes I. The resulting set as a whole contains exactly the same elements as the original group. Hence, the combination of all elements
with an element A is a transformation of the group into itself. It expresses a symmetry, in which the relations between the elements are invariant: the relation between the elements CA and BA is (CA)(BA)’ =
(CA)(A’B’) = CB’, the relation between C and B.
If two groups can be projected one-to-one onto each
other, they are called isomorphic. Two groups are isomorphic if their elements can be paired such that A1B1 = C1 in the first group implies that A2B2
= C2 for the corresponding elements in the second group and conversely. This may be the case even if the combination rules in the two groups are different. The phenomenon of isomorphy means that a group is not fully determined by the axioms
alone. Besides the combination rule, at least some of the group’s elements must be specified, such that the other elements are found by applying the combination rule.
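The fact that isomorphic groups may have different combination rules can be shown concretely. The following sketch (an illustrative example, not from the text) pairs the integers {0, 1, 2, 3} under addition modulo 4 with the complex units {1, i, −1, −i} under multiplication, and checks that the pairing respects the combination:

```python
# Two isomorphic groups with different combination rules:
# {0, 1, 2, 3} under addition mod 4, and {1, i, -1, -i} under multiplication.

first = [0, 1, 2, 3]

# The pairing n -> i**n is a one-to-one projection of the first group
# onto the second.
pairing = {n: 1j ** n for n in first}

for a in first:
    for b in first:
        c = (a + b) % 4                        # A1 B1 = C1 in the first group
        # The corresponding elements must satisfy A2 B2 = C2 in the second.
        assert pairing[a] * pairing[b] == pairing[c]
```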
In 1872, Felix Klein pointed out in his ‘Erlangen Program’ the relevance of the theory of groups for geometry, which he proposed to be the study of properties invariant under transformations.
In physics, to start with, the Lorentz transformation characterizes Albert Einstein’s special theory of relativity (1905). It describes the symmetry properties of the space and time coordinates in a way radically different
from classical physics, which was much more characterized by local motion than by transformation.
Until the end of the nineteenth century, motion was considered
as change of place, with time as the independent variable. Isaac Newton thought space to be absolute, the expression of God’s omnipresence, a sensorium Dei. Newton’s contemporaries Christiaan Huygens and Gottfried Wilhelm Leibniz were
more impressed by the relativity of motion. They believed that anything only moves relative to something else, not relative to absolute space. As soon as Thomas Young, Augustin Fresnel and other physicists in the nineteenth century established that light is
a moving wave, they started the search for the ether, considered the material medium for wave motion. They identified the ether with Newton’s absolute space, now without the speculative reference to God’s omnipresence. This search had little success,
the models for the ether being inconsistent or even contrary to observed facts. In 1865, James Clerk Maxwell formulated his electromagnetic theory, connecting magnetism with electricity, and interpreting light as an electromagnetic wave motion. Although Maxwell’s
theory did not require the ether, he persisted in believing in its existence. In 1905, Albert Einstein suggested abandoning the ether.
He did not prove that it does not exist, but showed it to be superfluous. Physicists intended the ether as a material substratum for electromagnetic waves. However, in Einstein’s theory the ether would not be able to interact with anything else. Consequently,
the nineteenth-century concept of an ether lost its physical meaning.
Until Einstein, kinetic time and space were considered independent frames of reference. In 1905, Albert Einstein shook the world by proving that the kinetic order implies a relativization
of the quantitative and spatial orders. Two events being synchronous according to one observer turn out to be diachronous according to an observer moving at high speed with respect to the former one. This relativizing is unheard of in the common conception
of time, and it surprised both physicists and philosophers.
Einstein based the special theory of relativity on two postulates or requirements for the theory. The first
postulate is the principle of relativity. It requires each natural law to be formulated in the same way with respect to each inertial frame of reference. Such a frame is determined by the law of inertia, Newton’s first law of motion, which is only valid
if time and space are measured in an inertial frame. The second postulate demands that light have the same speed in every inertial system.
From these two axioms, Einstein could derive the mentioned relativization of the quantitative and spatial orders. He also showed that the units of length and of time depend on the choice of the reference system. Moving rulers are shorter and moving clocks
are slower than resting ones. In the theory of Hendrik Lorentz and others, time dilation and space contraction were explained as molecular properties of matter. Einstein explained them as kinetic effects. Only the speed of light is the same in all reference systems, acting as a unit of motion. Indeed, relativity theory often represents velocities in proportion to the speed of light.
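These kinetic effects are easily computed. The following sketch (with an illustrative speed of 0.6c) shows how a moving ruler contracts and a moving clock slows down by the same Lorentz factor:

```python
import math

# The Lorentz factor gamma governs both time dilation and length
# contraction; v is given as a fraction of the speed of light.

def gamma(v):
    """Lorentz factor 1/sqrt(1 - v^2) for a speed v in units of c."""
    return 1.0 / math.sqrt(1.0 - v * v)

v = 0.6                       # an illustrative speed: 60% of the speed of light
g = gamma(v)                  # gamma = 1.25 for v = 0.6c

moving_ruler = 1.0 / g        # a 1-unit ruler measures 0.8 units when moving
moving_clock_tick = 1.0 * g   # a 1-second tick takes 1.25 seconds when moving
```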
An inertial system is a system of reference in which Newton’s first law of motion, the
law of inertia, is valid. Unless some unbalanced force is acting on it, a body moves with constant velocity (both in magnitude and in direction) with respect to an inertial system. This is a reference system for motions; hence, it includes clocks
besides a spatial co-ordinate system. If we have one inertial system, we can find many others by shifting, rotating, reflecting, or inverting the spatial co-ordinates; or by moving the system at a constant speed; or by resetting the clock, as long as it displays
kinetic time uniformly. These operations form a mathematical group, for classical physics called the Galileo group.
Here time is treated as a variable parameter independent of the three-dimensional spatial co-ordinate system. Since Einstein proved this to be wrong, an inertial system is taken to
be four-dimensional. The corresponding group of operations transforming one inertial system into another one is called the Lorentz group.
The distinction between the classical Galileo group and the special relativistic Lorentz group concerns relatively moving systems. Both have a Euclidean subgroup of inertial systems not moving with respect to each other. The distinction concerns the combination
of motions, objectified by velocities. Restricted to one direction, in the Galileo group velocities are combined by addition (v+w), in the Lorentz group by the formula (v+w)/(1+vw/c²).
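The two combination rules can be compared directly. The following sketch (with c = 1 and illustrative velocities) shows that the Lorentz rule never yields a combined speed exceeding that of light:

```python
# One-dimensional velocity combination, with the speed of light c = 1.

def galileo(v, w):
    """Classical combination: simple addition."""
    return v + w

def lorentz(v, w):
    """Relativistic combination: (v + w) / (1 + vw/c^2), here with c = 1."""
    return (v + w) / (1.0 + v * w)

# Two velocities of 0.75c combine classically to 1.5c, but relativistically
# to 0.96c, never exceeding the speed of light.
assert galileo(0.75, 0.75) == 1.5
assert abs(lorentz(0.75, 0.75) - 0.96) < 1e-12

# Combining any velocity with a light signal (v = 1) returns the speed of light.
assert lorentz(1.0, 0.5) == 1.0
```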
In a four-dimensional inertial system, a straight line represents a uniform motion. Each point on this line represents the position (x,y,z) of the moving subject at the time t. If
the speed of light is the unit of velocity, a line at an angle of π/4 with respect to the t-axis represents the motion of a light signal. The relativistic metric concerns the spatio-temporal interval between two events. The metric
of special relativity theory is Δs² = Δx²+Δy²+Δz²−Δt² = Δr²−Δt². There are no mixed terms as in general relativity (section 2), and the interval is not necessarily infinitesimal. This metric is pseudo-Euclidean
because of the minus sign in front of Δt². If the speed of light is not taken as the unit of speed, this term becomes (cΔt)². The metric can be made apparently Euclidean by considering time an imaginary co-ordinate: Δs² = Δx²+Δy²+Δz²+(iΔt)². It is preferable to make visible that kinetic space is less symmetric
than the Euclidean four-dimensional space, for lack of symmetry between the time axis and the three spatial axes. According to the formula, Δs²
can be positive or negative, and Δs real or imaginary. Therefore, one defines the interval as the absolute value of Δs.
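The invariance of this interval under a Lorentz transformation can be verified numerically. The following sketch (with c = 1 and an illustrative pair of events) boosts the coordinate differences to a frame moving along the x-axis and checks that Δs² is unchanged:

```python
import math

# The interval s^2 = dx^2 + dy^2 + dz^2 - dt^2 (with c = 1) must be the
# same in every inertial system.

def interval_squared(dt, dx, dy, dz):
    return dx**2 + dy**2 + dz**2 - dt**2

def boost_x(dt, dx, v):
    """Lorentz boost of (dt, dx) into a frame moving at speed v along x."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (dt - v * dx), g * (dx - v * dt)

# An illustrative pair of events, separated by dt = 5 and (dx, dy, dz) = (3, 1, 2).
dt, dx, dy, dz = 5.0, 3.0, 1.0, 2.0
s2 = interval_squared(dt, dx, dy, dz)

# The same pair of events, registered in a frame moving at 0.8c.
dt2, dx2 = boost_x(dt, dx, 0.8)
s2_boosted = interval_squared(dt2, dx2, dy, dz)

assert abs(s2 - s2_boosted) < 1e-9   # the interval is invariant
```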
The combination rule in the Lorentz group is formulated such that the interval is invariant at each transformation of one inertial system into another one. Only
then, the speed of light (the unit of motion) is equal in all inertial systems. A flash of light expands spherically at the same speed in all directions, in any inertial reference system in which this phenomenon is registered. The four-dimensional whole is called the block universe or Hermann Minkowski’s space-time continuum.
The magnitude of the interval is an objective representation of the relation between two events, combining a time difference with a spatial distance. For the same pair of
events in another inertial system, both the time difference Δt and the spatial distance Δr may be different. Only the magnitude Δs of the interval is independent of the choice of the inertial system.
Whereas the Euclidean metric is always positive or zero, the pseudo-Euclidean metric, determining the interval between two events, may be negative as well. For the motion of a light signal between two points,
the interval is zero: Δs=0, for the covered distance Δr equals cΔt. If Δr=0,
the two events have the same position and the interval is a time difference (Δt). If Δt=0, the interval is a spatial distance (Δr) and the two events are simultaneous. In other cases, an interval is called space-like if the distance Δr>cΔt, or time-like if the time difference Δt>Δr/c (in absolute values). In the first case, light cannot bridge the distance within the mentioned time difference; in the
second case it can.
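This classification can be expressed as a simple decision rule. The following sketch (with c = 1 and illustrative time differences and distances) labels the interval between two events:

```python
# Classification of the interval between two events, with c = 1:
# light-like when the distance equals what light covers in the time
# difference, space-like when it exceeds it, time-like otherwise.

def classify(dt, dr):
    """dt: time difference, dr: spatial distance, both non-negative."""
    if dr == dt:
        return "light-like"
    return "space-like" if dr > dt else "time-like"

assert classify(dt=3.0, dr=3.0) == "light-like"   # a light signal: interval zero
assert classify(dt=1.0, dr=5.0) == "space-like"   # light cannot bridge the gap
assert classify(dt=5.0, dr=1.0) == "time-like"    # possibly causally related
```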
For two events having a space-like interval, an inertial system exists such that the time difference is zero (Δt=0),
hence the events are simultaneous. In another system, the time difference may be positive or negative. The distance between the two events is too large to be bridged even by a light signal, hence the two events cannot be causally related. Whether such a pair
of events is diachronous or synchronous appears to depend on the choice of the inertial system.
Other pairs of events are diachronous in every inertial system, their interval
being time-like (Δs²<0). If in a given inertial system event A occurs before event B, this is the case in any other inertial system as well. Now A may be a cause of B, anticipating
the physical relation frame. The causal relation is irreversible, the cause preceding the effect.
The formula for the relativistic metric shows that space and time are not equivalent, as is often stated. By a rotation about the z-axis, the x-axis
can be transformed into the y-axis. In contrast, no physically meaningful transformation exists from the t-axis into one of the spatial axes or conversely.
In the four-dimensional space-time continuum, the spatial and temporal co-ordinates form a vector. Other vectors are four-dimensional as well, often combining a classical three-dimensional vector with a scalar. This is meaningful if the vector field has the
same or a comparable symmetry as the space-time continuum. For instance, the linear momentum and the energy of a particle are combined into the four-dimensional momentum-energy vector (px,py,pz,E/c).
Its magnitude (the square root of px²+py²+pz²−E²/c²) has in all inertial systems the same value, and is therefore called invariant.
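The invariance of this magnitude is a known consequence of the relation E² = p²c² + m²c⁴: with c = 1, E² − p² equals the squared rest mass in every inertial system. The following sketch (with an illustrative particle of unit rest mass) checks this in the rest frame and in a moving frame:

```python
import math

# With c = 1, the magnitude of the momentum-energy four-vector reduces to
# sqrt(E^2 - p^2), which equals the particle's rest mass in every frame.

def invariant_mass(E, px, py, pz):
    return math.sqrt(E**2 - (px**2 + py**2 + pz**2))

m = 1.0                                   # an illustrative rest mass
v = 0.6                                   # speed as a fraction of c
g = 1.0 / math.sqrt(1.0 - v * v)          # Lorentz factor, 1.25 for v = 0.6

# Rest frame: p = 0, E = m.  Moving frame: p = g*m*v, E = g*m.
assert abs(invariant_mass(m, 0, 0, 0) - m) < 1e-12
assert abs(invariant_mass(g * m, g * m * v, 0, 0) - m) < 1e-12
```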
An unexpected consequence of the
symmetry of physical space and time is that the laws of conservation of energy, linear and angular momentum turn out to be derivable from the principle of relativity. Emmy Noether first showed this in 1915. Because natural laws have the same symmetry as kinetic
space, the conservation laws in classical mechanics differ from those in special relativity.
Considering the homogeneity and isotropy of a field-free space and the uniformity
of kinetic time, theoretically the principle of relativity allows of two possibilities for the transformations of inertial systems.
According to the classical Galileo group, the metric for time is independent of the metric for space. The units of length and time are invariant under all transformations. The speed of light is different in relatively moving inertial systems. In the relativistic
Lorentz group, the metrics for space and time are interwoven into the metric for the interval between two events. The units of length and time are not invariant under all transformations. Instead, the unit of velocity (the speed of light) is invariant under
all transformations. On empirical grounds, the speed of light being the same in all inertial systems, physicists accept the second possibility. Not the Galileo group but the Lorentz group turns out to represent the symmetry of the space-time continuum.
According to the principle of relativity, the natural laws can be formulated independent
of the choice of an inertial system. Albert Einstein called this a postulate, a demand imposed on a theory. In contrast, Mario Bunge calls it a norm, a ‘normative metanomological principle …’ constituting ‘…a necessary though
insufficient condition for objectivity …’ I suggest that it rests on the
irreducibility of physical interaction to spatial or kinetic relations. The principle of relativity is not merely a convention, an agreement to formulate natural laws as simple as possible. It is first of all a requirement of objectivity, to formulate the
laws such that they have the same expression in every appropriate reference system.
Yet, physicists do not always stick to the principle of relativity. When standing on a revolving merry-go-round, anyone feels an outward centrifugal force.
When trying to walk on the roundabout they experience the force called after Gaspard-Gustave Coriolis (1835) as well. These forces are not the physical cause of acceleration, but its effect. Both are ‘inertial forces’, only occurring in a reference
system accelerating with respect to the inertial systems. Because these forces do not satisfy Newton’s third law of motion (although they derive their being called ‘forces’ from the second law), they are sometimes called ‘fictitious’.
However, although the centrifugal force and the Coriolis force do not exist with respect to inertial systems, they are real, being measurable and exerting influence. In particular,
the earth is a rotating system. The centrifugal force causes the acceleration of a falling body to be larger at the earth’s poles than at the equator, partly directly, partly due to the flattening of the earth at the poles, another effect of the centrifugal
force. The Coriolis force causes the rotation of the pendulum called after Léon Foucault (1851), and it has a strong influence on the weather. The wind does not blow directly from a high- to a low-pressure area, but it is deflected by the Coriolis force
to encircle such areas.
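The size of the direct centrifugal effect on the earth can be estimated from the rotation rate and the equatorial radius. The following sketch uses approximate standard figures (illustrative, not taken from the text):

```python
import math

# The centrifugal acceleration at the equator is omega^2 * R; at the
# poles it vanishes. Figures below are approximate standard values.

omega = 2 * math.pi / 86164.0   # earth's angular velocity (sidereal day in seconds)
R = 6.378e6                     # equatorial radius in metres

a_centrifugal = omega**2 * R    # roughly 0.034 m/s^2 at the equator

# This direct effect accounts for part of the measured difference between
# g at the poles (about 9.83 m/s^2) and at the equator (about 9.78 m/s^2);
# the remainder is due to the flattening of the earth, itself a
# centrifugal effect.
```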
Another example of an inertial force occurs in a reference system having a constant acceleration with respect to inertial systems. This force experienced
in an accelerating or braking lift or train is equal to the product of the acceleration and the mass of the subject on which the force is acting. It is a universal force, influencing the motion of all subjects that we wish to refer to the accelerated system.
Often, physicists and philosophers point to that inertial force in order to argue that the choice of inertial systems is arbitrary and conventional. On this view, we prefer inertial systems merely for simplicity, because it is awkward to take these universal forces into account. A better reason to avoid such universal forces is that they do not represent subject-subject relations. Inertial forces do not satisfy Newton’s
third law, the law of equal action and reaction, for an inertial force has no reaction.
The source of the force is not another subject. A Newtonian physicist would call such a force fictitious.
The use of inertial forces is only acceptable for practical reasons. For instance, this applies to weather forecasting, because the rotation of the earth strongly influences the weather.
Another hallmark of inertial forces is to be proportional to the mass of the subject on which they act. In fact, it does not concern a force but an acceleration, i.e., the acceleration of the reference system with respect to inertial systems.
We experience and interpret it as a force, according to Isaac Newton’s second law, but it does not satisfy his third law.
2. General relativity
Gravity too happens to be proportional to the mass of the subject on which it acts. Newton’s proof that the force of gravity is proportional to the mass of both interacting bodies was based on a symmetry argument applied to his third law
of motion, the law of action and reaction: if the sun attracts the earth with a force proportional to the earth’s mass, then the earth attracts the sun with an equal force proportional to the sun’s mass.
At any place, all freely falling subjects experience the same acceleration. Hence, gravity looks like an inertial force. This inspired Albert Einstein to develop the general theory of relativity, defining the
metric of space and time such that gravity is eliminated. It leads to a curved space-time, having a strong curvature at places where - according to the classical view - the gravitational field is strong. Besides subjects having mass, massless things experience
this field as well. Even light moves according to this metric, as has been confirmed since 1919 by ingenious observations.
Yet, gravity is not an inertial force, because it satisfies
Newton’s third law. Contrary to the centrifugal and Coriolis forces, gravity expresses a mutual subject-subject relation. The presence of heavy matter determines the curvature of space-time. In classical physics, gravity was the prototype of a physical
subject-subject relation. One of the unexpected results of Isaac Newton’s Principia was that the planets attract the sun, besides the sun attracting the planets. It undermined Newton’s Copernican view that the sun is at rest at the centre
of the world if conceived as the centre of gravity of the solar system.
Principle of equivalence
Einstein observed that a gravitational field in a classical inertial frame is equivalent to an accelerating reference system without gravity, like an earth satellite. The popular argument for this principle of equivalence is that
locally one could not measure any difference. This gives occasion to four comments.
First, on a slightly larger scale the difference between a homogeneous acceleration and a non-homogeneous gravitational field is easily determined.
Even in an earth satellite, differential effects are measurable. Except for a homogeneous field, the principle of equivalence is only locally (approximately) valid.
Second, the curvature of space-time is determined by matter, hence it has a physical source. The gravity of the sun causes the deflection of starlight observed during a total eclipse.
An inertial force lacks a physical source.
Third, in non-inertial systems of reference, the law of inertia is invalid. In contrast, the general theory of relativity maintains
this law, taking into account the correct metric. A subject on which no force is acting – apart from gravity – moves uniformly with respect to the general relativistic metric. If considered from a classical inertial system, this means a curved
and accelerated motion due to gravity. The general relativistic metric does not eliminate, but incorporates gravity.
Finally, in the general relativistic space-time, the
speed of light remains the universal unit of velocity. Light moves along a ‘straight’ line (the shortest line according to Bernhard Riemann’s definition). Accelerating reference systems still give rise to inertial forces. This means that
Einstein’s original intention to prove the equivalence of all moving reference systems has failed.
Preceded by Gauss, in 1854 Bernhard Riemann formulated the general metric for an infinitesimally small distance in a multidimensional space. Riemann’s metric is dr² = gxxdx² + gyydy² + gxydxdy + gyxdydx + … Mark
the occurrence of mixed terms besides quadratic terms. In the Euclidean metric gxx=gyy=1 and gxy=gyx=0, and Δx and Δy are not necessarily infinitesimal. According to Riemann, a multiply
extended magnitude allows of various metric relations.
For a non-Euclidean space, the co-efficients in the metric depend on the position. If i and j indicate x or y, the gij’s are components of a tensor. In the two-dimensional
case gij is a second derivative (like d²r/dxdy). For a higher-dimensional space it is a partial derivative, meaning that other variables remain constant. To calculate a finite displacement requires the
application of integral calculus. The result depends on the choice of the path of integration. The distance between two points is defined as the smallest value of these paths. On the surface of a sphere, the distance between two points corresponds to the path
along a ‘great circle’ whose centre coincides with the centre of the sphere.
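The great-circle example can be computed with the standard central-angle formula. The following sketch (the coordinates and the formula's spherical parametrization are illustrative, not from the text) returns the smallest path length between two points on a sphere:

```python
import math

# Distance on a sphere: the length of the great-circle arc between two
# points, given as latitude/longitude in degrees, on a sphere of radius R.

def great_circle_distance(lat1, lon1, lat2, lon2, R=1.0):
    p1, l1, p2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
    # The central angle between the two points, from the spherical law of cosines.
    central_angle = math.acos(
        math.sin(p1) * math.sin(p2)
        + math.cos(p1) * math.cos(p2) * math.cos(l2 - l1)
    )
    return R * central_angle

# From the north pole to a point on the equator: a quarter of a great circle.
assert abs(great_circle_distance(90, 0, 0, 0) - math.pi / 2) < 1e-9
```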
Gravity and electromagnetism
The metric is determined by the structure, and in particular the symmetry, of the space, in physics called a field. The
general theory of relativity is concerned with the gravitational field. In the general theory of relativity, the co-efficients for the four-dimensional space-time manifold form a symmetrical tensor, i.e., gij=gji for each combination
of i and j. Hence, among the sixteen components of the tensor ten are independent.
An electromagnetic field is also described by a tensor having sixteen components. Its
symmetry demands that gij=-gji for each combination of i and j, hence the components of the quadratic terms are zero. This leaves six independent components, three for the electric vector and three for the magnetic pseudovector.
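The component counts follow directly from the two symmetry conditions: a symmetric 4×4 tensor is fixed by its entries on and above the diagonal, an antisymmetric one by the entries above the diagonal (its diagonal must vanish). A minimal sketch of the counting:

```python
# Independent components of a 4x4 tensor under the two symmetries:
# symmetric (g_ij = g_ji): entries with i <= j suffice  -> 10 (gravity);
# antisymmetric (g_ij = -g_ji): diagonal vanishes, i < j -> 6 (electromagnetism:
# three electric plus three magnetic components).

n = 4
symmetric = sum(1 for i in range(n) for j in range(n) if i <= j)
antisymmetric = sum(1 for i in range(n) for j in range(n) if i < j)

assert symmetric == 10
assert antisymmetric == 6
```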
Gravity having a different symmetry than electromagnetism is related to the fact that mass is definitely positive and that gravity is an attractive force. In contrast, electric charge
can be positive or negative and the electric force named after Charles-Augustin Coulomb (1785) may be attractive or repulsive. A positive charge attracts a negative one, two positive charges (as well as two negative charges) repel each other.
In general, a non-Euclidean space is less symmetrical than a Euclidean one having the same number of dimensions. Motion as well as physical interaction causes a break of symmetry in spatial relations.
The speed of light
The metrics of special and general relativity theory presuppose that light moves at a constant speed everywhere. The empirically confirmed fact that the direction of light is subject to gravity necessitates
an adaptation of the metric. In the general theory of relativity, kinetic space-time is less symmetric than in the special theory. Because gravity is quite weak compared to other interactions, this symmetry break is only observable at a large scale, at distances
where other forces do not act or are neutralized. Where gravity can be neglected, the special theory of relativity is applicable.
The general relativistic space-time is
not merely a kinetic, but foremost a physical manifold. The objection against the nineteenth-century concept of the ether was that it did not allow of interaction. This objection does not apply to the general relativistic space-time, which acts on matter and
is determined by matter.
The expanding universe
The general theory of relativity
presents testable models for the physical space-time. It leads to the insight that the physical cosmos is finite and expanding. It came into being about thirteen billion years ago, in a ‘big bang’, the popular name of the start of the expanding
universe as first proposed in 1927 by Georges Lemaître. According to the standard model to be discussed later, the fundamental forces initially formed a single universal interaction. Shortly after the big bang they fell apart by a symmetry break
into the present electromagnetic, strong and weak nuclear interactions, besides the even weaker gravity. Only then were the characters of nuclei and atoms gradually realized in the astrophysical evolution of the universe.
The model of the expanding universe was confirmed by Edwin Hubble. He proved that many objects previously thought to be clouds of dust and gas and classified as
nebulae were actually galaxies beyond the Milky Way. In 1929 he used the strong direct relation between the pulsation periods and the luminosity of Cepheid variables (discovered in 1908 by Henrietta
Swan Leavitt) for scaling galactic and extragalactic distances. Hubble provided evidence that the recessional velocity of a galaxy increases with its distance from the earth, a property now known as ‘Hubble’s law’, although it had been both
proposed and demonstrated observationally two years earlier by Georges Lemaître.
The Hubble Space Telescope, with a mirror diameter of 2.4 metres and mounted on an earth satellite (in order to eliminate atmospheric
disturbances), was launched in 1990. Initially its results were below expectations, but later maintenance made it a very successful instrument for optical research of the Milky Way and outer space.
In 2015 a gravitational wave as predicted by the general theory of relativity was detected, not by the Hubble Space Telescope however, but by very sensitive interferometers stationed on the earth. Its source was identified as
the collision of two black holes at a distance of 1.5 billion light-years from the earth. Later sources of gravitational waves turned out to be collisions of neutron stars, which could be verified by optical means.
3. Periodic motion
In ancient and medieval philosophy,
local motion was considered a kind of change. Classical mechanics emphasized uniform and accelerated motion of unchanging matter. In modern physics, the periodic motion of oscillations and waves is the main theme. In living nature and technology, rhythms play
an important part as well.
Twentieth-century physics is characterized by the theory of relativity, by the investigation of the structure of matter, and by quantum physics.
The latter differs from classical physics because of the duality of waves and particles. This was experimentally established beyond reasonable doubt, but it gave rise to much theoretical discussion and popular philosophical misunderstanding.
Section 3.1 discusses kinetic relations, specified in section 3.2 to oscillations and waves. Section 3.3 deals with the properties of wave packets with their anticipations on the
physical interaction between particles. Section 3.4 concerns the meaning of symmetrical and antisymmetrical wave functions for physical aggregates.
Like numbers and spatial
forms, periodic motions take part in our daily experience. And like irrational numbers and non-Euclidean space, some aspects of periodic phenomena collide with common sense. Section 3 aims to demonstrate that a realistic interpretation of quantum physics is
feasible and even preferable to the standard non-realistic interpretations.
3.1. Kinetic time
The uniformity of kinetic time
Kinetic time is subject to the kinetic order of uniformity and is expressed in the periodicity of the motion of celestial bodies (in particular the sun),
as well as in mechanical or electric clocks.
Like the rational and real numbers, points on a continuous line are ordered, yet no point has a unique successor. One cannot
say that a point A is directly succeeded by a point B, because there are infinitely many other points between A and B. Yet, a uniformly moving or accelerating subject is supposed to pass the points of its path successively. The succession of temporal moments cannot be reduced to quantitative and/or spatial relations.
It presupposes the numerical order of earlier and later and the spatial order of simultaneity, being diachronic and synchronic aspects of kinetic time. Zeno of Elea recognized this long before the Christian era. Nevertheless, not until the seventeenth century
was motion recognized as an independent principle of explanation. Later on, this recognition was reinforced by Albert Einstein’s theory of relativity.
The uniformity of kinetic
time seems to rest on a convention. Sometimes it is meaningful to use a clock that is
not uniform. For instance, the non-uniform physical order of radioactive decay is applied in the dating of archaeological and geological finds.
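Dating by radioactive decay follows from the decay law N = N₀·exp(−t·ln 2/T), where T is the half-life. A minimal sketch, assuming the standard half-life of carbon-14 and an illustrative measured fraction:

```python
import math

# Given the surviving fraction N/N0 of a radioactive isotope and its
# half-life, the elapsed time follows from the exponential decay law.

def age_from_fraction(fraction_remaining, half_life):
    """Elapsed time t such that N/N0 = exp(-t * ln2 / half_life)."""
    return -half_life * math.log(fraction_remaining) / math.log(2.0)

half_life_c14 = 5730.0                  # half-life of carbon-14, in years

# After one half-life, half the sample remains.
t = age_from_fraction(0.5, half_life_c14)
assert abs(t - half_life_c14) < 1e-9
```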
However, the uniformity of kinetic time together with the periodicity of many kinds of natural motion yields a kinetic norm for clocks. A norm is more than a mere agreement or convention. If applied by human beings constructing clocks, the law of
inertia becomes a norm. A clock does not function properly if it represents a uniform motion as non-uniform.
With increasing clarity, the law of inertia was formulated
by Galileo Galilei, René Descartes and others, finding its ultimate form in Isaac Newton’s first law of motion: ‘Every body continues in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state
by forces impressed upon it.’
Inertial motion is not in need of a physical cause. Classical and modern physics consider inertial motion to be a state, not a change. In this respect, modern kinematics differs from Aristotle’s view that each change,
including local motion, needs a cause. Contrary to Aristotle (the philosopher of common sense), the seventeenth-century physicists considered friction to be a force. Friction causes an actually moving subject to decelerate. In order to maintain a constant
speed, another force is needed to compensate for friction. Aristotelians did not recognize friction as a force and interpreted the compensating force as the cause of uniform motion.
Uniformity of motion means that the subject covers equal distances in equal times. But how do we know which times are equal? The diachronic order of earlier and later allows of counting hours, days, months, and years. These units do not necessarily have a fixed duration. In fact, months are not equal to each other, and a leap year has an extra day. Until the end of the Middle Ages, an hour was not defined as 1/24th of a complete day, but as the 1/12th part of the day taken from sunrise to sunset. A day in winter being shorter than in summer, the duration of an hour varied with the seasons. Only after the introduction of mechanical clocks in the fifteenth century did it become customary to relate the length of an hour to the period from noon to noon, such that all hours are (approximately) equal.
Mechanical clocks measure kinetic time. Time as measured by a clock is called uniform if the clock correctly shows that a
subject on which no net force is acting moves uniformly. This appears to be circular reasoning. On the one hand, the uniformity of motion means equal distances in equal times. On the other hand, the equality of temporal intervals is determined by a clock subject to the norm that it represents uniform motion correctly.
This circularity is unavoidable, meaning that the uniformity of kinetic time is an unprovable axiom. However, this axiom is not a convention, but an expression of a fundamental and irreducible natural law.
The uniformity of time is sometimes derived from a ceteris paribus argument. If one repeats a process at different moments under exactly equal circumstances, there is no reason to suppose that the process would proceed
differently. In particular, the duration should be the same. This reasoning is applicable to periodic motions, as in clocks. But it betrays a deterministic vision and is not applicable to stochastic processes like radioactivity. Albert Einstein observed that
the equality of covered distances provides a problem as well, because spatial relations are subject to the order of simultaneity, dependent on the state of motion of the clocks used for measuring uniform motion.
Uniformity is a law for kinetic time, not an intrinsic property of time. There is nothing like a stream of time,
flowing independently of the rest of reality. Positivist philosophers denied the ontological status of uniform time. Ernst Mach states emphatically: ‘The question of whether a motion is uniform in itself has no meaning at all. No more can we
speak of an “absolute time” (independent of any change).’ In my view,
the law of inertia determines the meaning of the uniformity of time. According to Hans Reichenbach, it is an ‘empirical fact’ that different definitions give rise to the same ‘measure of the flow of time’: natural, mechanical, electronic
or atomic clocks, the laws of mechanics, and the fact that the speed of light is the same for all observers.
‘It is obvious, of course, that this method does not enable us to discover a “true” time, but that astronomers simply determine with the aid of the laws of mechanics that particular flow of time which the laws of physics implicitly define.’ However, if ‘truth’ means law conformity, ‘true time’ is the time
subject to natural laws. It seems justified to generalize Reichenbach's 'empirical fact' into the law concerning the uniformity of kinetic time. Rudolf Carnap posits that the choice of the metric of time rests on simplicity: the formulation of natural laws is simplest if one sticks to this convention. But then it is quite remarkable that so many widely different systems conform to this human agreement. It is more relevant to observe that physicists are able to explain all kinds of periodic motions and processes based on laws presupposing the uniformity of kinetic time. Such an explanation
is completely lacking with respect to any alternative metric invented by philosophers. Time only exists in relations between events. The uniformity of kinetic time expressed by the law of inertia asserts the existence of motions being uniform with
respect to each other. Only critical realism is able to account for this state of affairs.
Both classical and relativistic mechanics use the law
of uniform motion to introduce inertial systems. An inertial system is a spatio-temporal reference system in which the law of inertia is valid. It can be used to measure accelerated motions as well. Starting with one inertial system, all others can be constructed
by using either the Galileo group or the Lorentz group, reflecting the relativity of motion. Both start from the realistic axiom that kinetic time is uniform.
The law of uniformity concerns all dimensions of kinetic space. Therefore,
it is possible to project kinetic time on a linear scale, irrespective of the number of dimensions of kinetic space. Equally interesting is that kinetic time can be projected on a circular scale, as displayed on a traditional clock. The possibility of establishing
the equality of temporal intervals is actualized in uniform circular motion, in oscillations, waves, and other periodic processes. Therefore, besides the generic aspect of uniformity, the time measured by clocks has a specific component as
well, the periodicity of any clock. Mechanical clocks depend on the regularity of a pendulum or a balance. Electronic clocks apply the periodicity of oscillations in a quartz crystal. Periodicity has always been used for the measurement of time. The days,
months, and years refer to periodic motions of celestial bodies. The modern definition of the second depends on atomic oscillations. The periodic character of clocks allows of digitizing kinetic time, each cycle being a countable unit.
The uniformity of kinetic time as a universal law for kinetic relations and the periodicity of all kinds of periodic processes reinforce each other. Without uniformity, periodicity
cannot be understood, and vice versa.
The positivist idea that the uniformity of kinetic time is no more than a convention has the rather absurd consequence that the periodicity of oscillations, waves and other natural rhythms would be a convention as well. In contrast, twentieth-century science has discovered many natural rhythms on astronomical as well as molecular, atomic, nuclear and sub-nuclear scales. Critical realism accepts these as natural phenomena.
3.2. The character of oscillations and waves
Periodicity is the distinguishing mark of each primary kinetic character with a tertiary physical characteristic. The motion of a mechanical pendulum, for instance, is primarily characterized by its periodicity, secondarily by the pendulum's length, and tertiarily
by gravitational acceleration. For such an oscillation, the period is constant if the metric for kinetic time is subject to the law of inertia. This follows from an analysis of pendulum motion. The character of a pendulum is applied in a clock. The dissipation
of energy by friction is compensated such that the clock is periodic within a specified margin.
Kepler’s laws determine the character of periodic planetary motion.
Strictly speaking, these laws only apply to a system consisting of two celestial bodies: a binary star or a star with one planet. Both Isaac Newton’s law of gravity and the general theory of relativity allow of a more refined analysis. Hence, the periodic
motions of the earth and other systems cannot be considered completely apart from physical interactions. However, in this section I shall abstract from physical interaction in order to concentrate on the primary and secondary characteristics of periodic motion.
The simplest case of a periodic motion appears to be uniform circular motion. Its velocity
has a constant magnitude whereas its direction changes constantly. Ancient and medieval philosophy considered uniform circular motion to be the most perfect, only applicable to celestial bodies. Seventeenth-century classical mechanics discovered uniform rectilinear
motion to be more fundamental, the velocity being constant in direction as well as in magnitude. Christiaan Huygens assumed that the outward centrifugal acceleration is an effect of circular motion. Robert Hooke and Isaac Newton demonstrated the inward
centripetal acceleration to be the cause needed to maintain a uniform circular motion. This force should be specified for any instance of uniform circular motion.
Besides the subject moving itself, the circular path of motion is simultaneously a kinetic object and a spatial subject. The position of the centre and the magnitude and direction of the circle's radius vector determine the spatial position of the moving subject on its
path. The radius is connected to magnitudes like orbital or angular speed, acceleration, period and phase. The phase (φ) indicates a moment in the periodic motion, the kinetic time (t) in proportion to the period (T): φ=t/T=ft
modulo 1. If considered an angle, φ=2πft modulo 2π. A phase difference of one quarter (or π/2) between two oscillations means that one oscillation reaches its maximum when the other passes its central position.
These quantitative properties allow of calculations and an objective representation of motion.
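The phase relations just described can be sketched numerically; the period and time values below are illustrative only:

```python
import math

def phase(t, T):
    """Phase as a fraction of the period T: t/T modulo 1."""
    return (t / T) % 1.0

def phase_angle(t, T):
    """Phase considered as an angle: 2*pi*t/T modulo 2*pi."""
    return (2 * math.pi * t / T) % (2 * math.pi)

# A quarter of a period after the start, the phase is 0.25, or pi/2 as an angle.
T = 2.0
assert phase(0.5, T) == 0.25
assert abs(phase_angle(0.5, T) - math.pi / 2) < 1e-12
```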
Composition of periodic motions
A uniform circular motion can be constructed as a composition
of two mutually perpendicular linear harmonic motions, having the same period and amplitude and a phase difference of one quarter. But then circular uniform motion turns out to be merely a single instance of a large class of two-dimensional harmonic motions.
A similar composition of two harmonics – having the same period but different amplitudes or a phase difference other than one quarter – does not produce a circle but an ellipse. If the force is inversely proportional to the square of the distance
(like the gravitational force of the sun exerted on a planet), the result is a periodic elliptic motion as well, but this one cannot be constructed as a combination of only two harmonic oscillations. Observe that an ellipse can be defined primarily (spatially)
as a conic section, secondarily (quantitatively) by means of a quadratic equation between the co-ordinates [e.g., (x-x0)²/a²+(y-y0)²/b²=1],
and tertiarily as a path of motion, either kinetically as a combination of periodic oscillations or physically as a planetary orbit.
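This construction is easy to check numerically. The sketch below (amplitudes, frequency and sample times are arbitrary example values) composes two mutually perpendicular harmonics with a phase difference of one quarter:

```python
import math

def compose(A, B, f, dphi, t):
    """Two mutually perpendicular harmonic motions of equal frequency f,
    amplitudes A and B, and phase difference dphi (in radians)."""
    x = A * math.cos(2 * math.pi * f * t)
    y = B * math.cos(2 * math.pi * f * t - dphi)
    return x, y

# Equal amplitudes, phase difference of one quarter (pi/2): a circle of radius A.
for t in (0.0, 0.13, 0.4, 0.77):
    x, y = compose(1.0, 1.0, 1.0, math.pi / 2, t)
    assert abs(x * x + y * y - 1.0) < 1e-12

# Unequal amplitudes: an ellipse, (x/A)^2 + (y/B)^2 = 1.
x, y = compose(2.0, 1.0, 1.0, math.pi / 2, 0.3)
assert abs((x / 2.0) ** 2 + y ** 2 - 1.0) < 1e-12
```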
We can also make a composition of two
mutually perpendicular oscillations with different periods. Now according to Jules Lissajous (1857), this constitutes a closed curve if and only if the two periods have a harmonic ratio, i.e., a rational number. If the proportion is an octave, then
the resulting figure is a lemniscate (a figure eight). The Lissajous figures derive their specific regularity from periodic motions. Clearly, the two-dimensional Lissajous motions constitute a kinetic character. This character has a primary rational variation
in the harmonic ratio of the composing oscillations, as well as a secondary variation in frequency, amplitude and phase. It is interlaced with the character of linear harmonic motion and several other characters. The structure of the path like the circle or
the lemniscate is primarily spatially and secondarily quantitatively founded. A symmetry group is interlaced with the character of each Lissajous-figure, the circle being the most symmetrical of all.
In all mentioned characters, we find a typical subject-object relation determining an ensemble of possible variations. In the structure of the circle, the circumference has a fixed proportion to the diameter. This allows of an
unbounded variation in diameter. In the character of the harmonic motion, we find the period (or its inverse, the frequency) as a typical magnitude, allowing of an unlimited variability in period as well as a bounded variation of phase. Varying the typical
harmonic ratio results in an infinite but denumerable ensemble of Lissajous-figures.
A linear harmonic oscillation is quantitatively represented by a harmonic function. This is a sine or cosine function or a complex
exponential function, being a solution of a differential equation. This equation, the law for harmonic motion, states that the acceleration a is proportional to the distance x of the subject to the centre of oscillation x0,
according to: a = d²x/dt² = -(2πf)²(x-x0), wherein the frequency f=1/T is the inverse of the period T.
The minus sign means that the acceleration is always directed to the centre.
This law concerns, for instance, mechanical as well as electronic oscillations. Primarily, a harmonic oscillation has a specific kinetic character. It is a special kind of motion, characterized by its law and its period. An oscillation is secondarily characterized by magnitudes like its amplitude and phase,
not determined by the law but by accidental initial conditions. Hence, the character of an oscillation is kinetically qualified and quantitatively founded.
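The law for harmonic motion can be verified numerically for a sample solution x(t) = x0 + A·sin(2πft); the parameter values below are arbitrary:

```python
import math

f, x0, A = 0.5, 1.0, 0.3                       # frequency, centre, amplitude (example values)
x = lambda t: x0 + A * math.sin(2 * math.pi * f * t)

def acceleration(t, h=1e-4):
    """Central-difference estimate of the second derivative d^2x/dt^2."""
    return (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2

# The law: a = -(2*pi*f)^2 * (x - x0); the minus sign directs a towards the centre.
t = 0.37
assert abs(acceleration(t) + (2 * math.pi * f) ** 2 * (x(t) - x0)) < 1e-4
```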
A harmonic oscillation can be considered the basic form of any periodic motion, including the two-dimensional periodic motions discussed above. In 1822, Joseph Fourier demonstrated that each periodic function is the sum or integral of a finite or infinite number
of harmonic functions. The decomposition of a non-harmonic periodic function into harmonics is called Fourier analysis.
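Fourier analysis can be illustrated with the textbook square wave, whose series contains only the odd harmonics; a partial sum already approximates the wave closely (a sketch, not an example from the text):

```python
import math

def square_wave_partial(t, n_terms):
    """Partial Fourier sum of a unit square wave with period 1:
    (4/pi) * [sin(2*pi*t) + sin(6*pi*t)/3 + sin(10*pi*t)/5 + ...]."""
    return 4 / math.pi * sum(
        math.sin(2 * math.pi * k * t) / k for k in range(1, 2 * n_terms, 2)
    )

# At t = 0.25 the square wave equals +1; the harmonics add up towards that value.
assert abs(square_wave_partial(0.25, 200) - 1.0) < 0.01
```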
A harmonic oscillator has a single natural frequency
determined by some specific properties of the system. This applies, for instance, to the length of a pendulum; or to the mass of a subject suspended from a spring together with its spring constant; or to the capacity and the inductance in an electric oscillator
consisting of a capacitor and a coil. This means that the kinetic character of a harmonic oscillation is interlaced with the physical character of an electric artefact.
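The natural frequencies mentioned here follow from simple formulas; a sketch with assumed numerical values:

```python
import math

def pendulum_frequency(length, g=9.81):
    """Simple pendulum (small swings): f = (1/2pi) * sqrt(g/l)."""
    return math.sqrt(g / length) / (2 * math.pi)

def spring_frequency(mass, k):
    """Mass on a spring with spring constant k: f = (1/2pi) * sqrt(k/m)."""
    return math.sqrt(k / mass) / (2 * math.pi)

def lc_frequency(L, C):
    """Electric oscillator of a coil (inductance L) and a capacitor (capacitance C):
    f = 1 / (2pi * sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

# A pendulum of about 0.994 m (the classic 'seconds pendulum') has a 2-second period:
assert abs(1.0 / pendulum_frequency(0.994) - 2.0) < 0.01
```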
Accounting for energy dissipation by adding a velocity-dependent term leads to the equation for a damped oscillator. Now the initial amplitude decreases exponentially. In the equation for a forced oscillation, an additional acceleration accounts for the action of an
external periodic force. In the case of resonance, the response is maximal. Now the frequency of the driving force is approximately equal to the natural frequency. Applying a periodic force, pulse or signal to an unknown system and measuring its response is
a widely used method of finding the system’s natural frequency, revealing its characteristic properties.
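Resonance can be sketched by scanning the steady-state amplitude of a driven, damped oscillator over the driving frequency; the damping value below is an assumption of the example:

```python
import math

def response_amplitude(f_drive, f_nat, gamma, force=1.0):
    """Steady-state amplitude of a driven damped oscillator of unit mass:
    A = F / sqrt((w0^2 - w^2)^2 + (gamma*w)^2)."""
    w, w0 = 2 * math.pi * f_drive, 2 * math.pi * f_nat
    return force / math.sqrt((w0 ** 2 - w ** 2) ** 2 + (gamma * w) ** 2)

# Scan driving frequencies around a natural frequency of 1 Hz with light damping:
freqs = [0.2 + 0.01 * i for i in range(200)]
best = max(freqs, key=lambda fd: response_amplitude(fd, 1.0, gamma=0.5))
assert abs(best - 1.0) < 0.05   # the response is maximal near the natural frequency
```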
An oscillation moving in space is called a wave. It has primarily a kinetic character, but contrary to an
oscillation it is secondarily founded in the spatial relation frame. Whereas the source of the wave determines its period, the velocity of the wave, its wavelength and its wave number express the character of the wave itself.
In an isotropic medium, the wavelength λ is the distance covered by a wave with wave velocity v in a time equal to the period T: λ=vT=v/f. The inverse of the wavelength is the wave number (the number of waves per metre), σ=1/λ=f/v. In three dimensions, the wave number is replaced by the wave vector k, which besides the number of waves
per metre also indicates the direction of the wave motion. In a non-isotropic medium, the wave velocity depends on the direction. The wave velocity has a characteristic value independent of the motion of the source. It is a property of the medium, the kinetic
space of a wave that specifically differs from the general kinetic space as described by the Galileo or Lorentz group.
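These relations are immediate to compute; the sound-wave numbers below are assumed example values:

```python
def wavelength(v, f):
    """lambda = v*T = v/f for wave velocity v and frequency f."""
    return v / f

def wave_number(v, f):
    """sigma = 1/lambda = f/v, the number of waves per metre."""
    return f / v

# Sound at roughly 343 m/s and 440 Hz (example values):
lam = wavelength(343.0, 440.0)
assert abs(lam - 0.7795) < 0.001
assert abs(wave_number(343.0, 440.0) * lam - 1.0) < 1e-12
```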
Usually, the wave velocity depends
on the frequency as well. This phenomenon is called dispersion. Only light moving in a vacuum is free of dispersion. (The medium of light in a vacuum is the electromagnetic field.) The observed frequency of a source depends on the relative motions of source, observer and medium. This is the effect named after Christian Doppler (1842).
A wave has a variability expressed by its frequency, phase, amplitude, and polarization.
Polarization concerns the direction of oscillation. A sound wave in air is longitudinal, the direction of oscillation being parallel to the direction of motion. Light is transverse, the direction of oscillation being perpendicular to the direction of motion.
Light is called unpolarized if it contains waves having all directions of polarization. Light may be partly or completely polarized. It may be linearly polarized (having a permanent direction of oscillation) or circularly polarized (the direction of oscillation
itself rotating at a frequency independent of the frequency of the wave itself).
During the motion, the wave’s amplitude may decrease. For instance, in a spherical
wave the amplitude decreases in proportion to the distance from the centre.
Waves do not interact with each other, but are subject to superposition. This is a combination of waves taking into account amplitude as well as phase. Superposition occurs
when two waves are crossing each other. Afterwards each wave proceeds as if the other had been absent. Interference is a special case of superposition. Now the waves concerned have exactly the same frequency as well as a fixed phase relation. If the
phases are equal, interference means an increase of the net amplitude. If the phases are opposite, interference may result in the mutual extinction of the waves.
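Superposition and interference can be sketched directly: two waves of the same frequency are added, taking amplitude and phase into account:

```python
import math

def superpose(t, f, a1, a2, dphi):
    """Sum of two waves of equal frequency f with amplitudes a1, a2
    and a phase difference dphi (radians)."""
    return a1 * math.sin(2 * math.pi * f * t) + a2 * math.sin(2 * math.pi * f * t + dphi)

for t in (0.0, 0.1, 0.33):
    # Equal phases: the amplitudes add up (constructive interference).
    assert abs(superpose(t, 1.0, 1.0, 1.0, 0.0) - 2.0 * math.sin(2 * math.pi * t)) < 1e-12
    # Opposite phases: equal amplitudes extinguish each other (destructive interference).
    assert abs(superpose(t, 1.0, 1.0, 1.0, math.pi)) < 1e-9
```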
Like an oscillation, each wave has a tertiary, usually physical disposition. This explains why waves and oscillations give a technical impression, because technology opens up dispositions. During the seventeenth century, the periodic character of sound was discovered in musical instruments. The relevance of oscillations and waves in nature was only fully realized at the beginning of the nineteenth century, after Thomas Young and Augustin Fresnel brought about a breakthrough in optics by discovering the wave character of light in quite technical experiments. Since the end of the same century, oscillations and waves have dominated communication and information technology.
Interlacement of oscillations and waves
It will be clear that the characters of waves and oscillations are interlaced with each other. A sound wave is caused by a loudspeaker and strikes a microphone. Such an event has a physical character and can only occur if a number of physical conditions are satisfied.
However, there is a kinetic condition as well. The frequency of the wave must be adapted to the oscillation frequency of the source or the detector. The wave and the oscillating system are correlated. This correlation concerns the property they have
in common, i.e., their periodicity, their primary kinetic qualification.
Sometimes an oscillation and a wave are directly interlaced, for instance in a violin string. Here
the oscillation corresponds to a standing wave, the result of interfering waves moving forward and backward between the two ends. The length of the string determines directly the wavelength and indirectly the frequency, dependent on the string’s physical
properties determining the wave velocity. Amplified by a sound box, this oscillation is the source of a sound wave in the surrounding air having the same frequency. In fact, all musical instruments perform according to this principle. The wave is always spatially
determined by its wavelength. The length of the string fixes the fundamental tone (the keynote or first harmonic) and its overtones. The frequency of an overtone is an integral number times the frequency of the first harmonic.
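For a string fixed at both ends, the standing-wave condition gives wavelengths λ=2L/n and hence frequencies n·v/2L; the length and wave speed below are assumed for the example:

```python
def string_frequencies(length, wave_velocity, n=4):
    """Frequencies of a string fixed at both ends: the standing wave requires
    lambda_k = 2*length/k, so f_k = k * wave_velocity / (2*length)."""
    return [k * wave_velocity / (2 * length) for k in range(1, n + 1)]

freqs = string_frequencies(0.65, 286.0)        # example: roughly a guitar string
assert abs(freqs[0] - 220.0) < 1e-9            # the keynote (first harmonic)
# Each overtone is an integral multiple of the first harmonic:
assert all(abs(fk - (k + 1) * freqs[0]) < 1e-9 for k, fk in enumerate(freqs))
```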
A wave equation represents the law for a wave, and a real or complex wave function represents an individual wave. Whereas the equation for oscillations only contains derivatives with respect to time, the wave equation also involves differentiation with respect to spatial co-ordinates. Usually a linear wave equation provides a good approximation for a wave, for example, the equations for the propagation of light. Erwin Schrödinger's non-relativistic equation (1926) and Paul Dirac's relativistic equation (1928) describe the motion of material waves.
One usually studies linear wave equations. If φ and ψ are solutions of a linear wave equation, then aφ+bψ is a solution as well, for each pair of real (or complex) numbers a and b. Hence, a linear wave equation has an infinite number of solutions, an ensemble of possibilities. Whereas the equation for an oscillation determines its frequency, a wave equation allows of a broad spectrum of frequencies. The source determines the frequency, the initial amplitude and the phase. The medium determines the wave velocity, the wavelength and the decrease of the amplitude as the wave proceeds away from the source.
Typical properties of waves
Events having their origin in relative motions may be characteristic or not. A solar or lunar eclipse depends on the relative motions of sun, moon and earth. It is accidental and probably unique that the moon
and the sun are equally large as seen from the earth, such that the moon is able to cover the sun precisely. Such an event does not correspond to a character. However, wave motion gives rise to several characteristic events satisfying specific laws.
Willebrord Snell’s law (seventeenth century, published posthumously by Christiaan Huygens, and earlier found by Thomas Harriot and independently by René Descartes) and
David Brewster’s law (1815) for the refraction and reflection of light at the boundary of two media only depend on the ratio of the wave velocities, the index of refraction. Because this index depends on the frequency, light passing a boundary usually
displays dispersion, like in a prism. Dispersion gives rise to various special natural phenomena like a rainbow or a halo, or artificial ones, like Isaac Newton’s rings.
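Snell's law itself is a one-line computation; the refractive indices below are the usual textbook values, assumed for the example:

```python
import math

def refraction_angle(theta1, n1, n2):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2), the index of refraction
    being the ratio of the wave velocities in the two media."""
    return math.asin(n1 * math.sin(theta1) / n2)

# Light entering water (n ~ 1.33) from air (n ~ 1.0) at 45 degrees:
theta2 = refraction_angle(math.radians(45.0), 1.0, 1.33)
assert abs(math.degrees(theta2) - 32.1) < 0.1
```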
If the boundary or the medium has a periodic character like the wave itself, a special form of reflection or refraction occurs if the wavelength fits the periodicity of the lattice. In optical technology, diffraction and reflection gratings are widely
applied. Each crystal lattice forms a natural three-dimensional grating for X-rays, if their wavelength corresponds to the periodicity of the crystal lattice according to Lawrence and William Bragg’s law (1913).
These are characteristic kinetic phenomena, not because they lack a physical aspect, but because they can be explained satisfactorily by a kinetic theory of wave motion.
3.3. A wave packet as an aggregate
The wave theory of light
Many sounds are signals. A signal, being a pattern of oscillations, moves as an aggregate of waves from the source to the detector. This motion has a physical aspect as well, for the transfer of a signal requires energy. But the message is written in the oscillation pattern, which becomes a signal if a human or an animal receives and recognizes it.
A signal composed from a set of periodic waves is called a wave packet. Although a wave packet
is a kinetic subject, it achieves its foremost meaning if considered interlaced with a physical subject having a wave-particle character. The wave-particle duality has turned out to be as fundamental as it is controversial. Neither experiments nor theories leave room for doubt about the existence of the wave-particle duality. However, it seems to contradict common sense, and its interpretation is the object of heated debates.
René Descartes and Christiaan Huygens assumed that space is completely filled up with matter, that space and matter coincide. They considered light to be a succession of mechanical
pulses in space. Descartes believed that light does not move, but has a tendency to move. Huygens denied that wave motion is periodic.
From the fact that planets move without friction, Isaac Newton inferred that interplanetary space is empty. He supposed that light consists of a stream of particles. In order to explain interference phenomena like the rings named after him, he ascribed to the light particles (or the medium) properties that we now consider to apply to waves.
Between 1800 and 1825, Thomas Young in England and Augustin Fresnel in France developed the wave theory of light. Common sense dictated that waves and particles exclude each other, meaning that light is either one or the other. When the wave theory turned out to explain more phenomena than the particle model, the battle was over.
Decisive was Léon Foucault’s experimental confirmation in 1854 of the wave-theoretical prediction that light has a lower speed in water than in air. Isaac Newton’s particle theory predicted the converse. Light is wave motion, as was later
confirmed by James Clerk Maxwell’s theory of electromagnetism. Nobody realized that this conclusion was a non sequitur. At most, it could be said that light has wave properties, as follows from the interference experiments of Young and Fresnel,
and that Newton’s particle theory of light was refuted.
Nineteenth-century physics discovered and investigated many other kinds of rays. Some looked like light, such as infrared and ultraviolet radiation (about 1800), radio waves (1887), X-rays
and gamma rays (1895-96). These turned out to be electromagnetic waves. Other rays consist of particles. Electrons were discovered in cathode rays (1897), in the photoelectric effect and in beta-radioactivity. Canal rays consist of ions and alpha rays of helium
nuclei. Cathode rays, canal rays and X-rays are generated in a cathode tube, a forerunner of our television tube, fluorescent lamp and computer screen.
At the end of the
nineteenth century, this gave rise to a rather neat and rationally satisfactory worldview. Nature consists partly of particles and partly of waves, or of fields in which waves are moving. This dualistic worldview assumes that something is either a particle or a wave, but never both, tertium non datur.
It makes sense to distinguish a dualism, a partition of the world into two compartments, from a
duality, a two-sidedness. The dualism of waves and particles rested on common sense, one could not imagine an alternative. However, twentieth-century physics had to abandon this dualism perforce and to replace it by the wave-particle duality.
All elementary things have both a wave and a particle character.
Almost in passing, another phenomenon, called quantization, made its appearance. It turned out that some magnitudes are not continuously variable. The mass of an atom can only have certain values. Atoms emit light at sharply defined frequencies. Electric charge is an integral multiple of the elementary charge. In 1905 Albert Einstein suggested that light consists of quanta of energy. Einstein never had problems with the duality of waves and particles, but he rejected its probability interpretation. In Niels Bohr's
atomic theory (1913), the angular momentum of an electron in its atomic orbit is an integer times Max Planck’s reduced constant.
(Planck’s reduced constant is h/2π. In Bohr’s theory the angular momentum L=nh/2π, n being the orbit’s number. For the hydrogen atom, the corresponding energy is En=E1/n2,
with E1=-13.6 eV, the energy of the first orbit.)
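The Bohr formula in the parenthesis above can be evaluated directly, for instance for the transition that produces the red Balmer line:

```python
def bohr_energy(n, E1=-13.6):
    """Energy of the n-th Bohr orbit of hydrogen: E_n = E_1 / n^2 (in eV)."""
    return E1 / n ** 2

# Transition n=3 -> n=2: the emitted photon carries the energy difference (~1.89 eV).
dE = bohr_energy(3) - bohr_energy(2)
assert abs(dE - 1.889) < 0.01
```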
Until Erwin Schrödinger and Werner Heisenberg introduced modern quantum mechanics in 1926, atomic scientists repeatedly found new quantum numbers with corresponding rules.
The interaction of field with matter
The dualism of matter and field, of particles and waves, was productive as long as its components were studied separately.
Problems arose when scientists started to work at the interaction between matter and field. The first problem concerned the specific emission and absorption of light restricted to spectral lines, characteristic for chemical elements and their compounds. Niels
Bohr tentatively solved this problem in 1913: the spectral lines correspond to transitions between stationary energy states. The second question was under which circumstances light can be in equilibrium with matter, for instance in an oven. This concerns the shape of the continuous spectrum of black-body radiation. After half a century of laborious experimental and theoretical work, this problem led to Max Planck's theory (1900) and Albert Einstein's photon hypothesis (1905). According to Planck, the interaction
between matter and light of frequency f is in need of the exchange of energy packets of E = hf (h being Planck’s constant). Einstein suggested that light itself consists of quanta of energy. Later he added that these
quanta have linear momentum as well, proportional to the wave number s=1/λ: p=E/c=hs=h/λ. The relation between energy and frequency (E=hf), applied by Bohr in his atomic theory of 1913, was experimentally confirmed by Robert Millikan in 1916, and
the relation between momentum and wave number (p=hs) in 1922 by Arthur Compton. The particle character of electromagnetic radiation is easiest to demonstrate with high-energetic photons in gamma- or X-rays. The wave character is easiest proven
with low-energetic radiation, with radio or microwaves.
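Planck's and Einstein's relations combine into E = c·p for light, since f = c/λ; a sketch with the standard constants:

```python
h = 6.626e-34          # Planck's constant (J*s)
c = 2.998e8            # speed of light (m/s)

def photon_energy(f):
    """Planck/Einstein relation: E = h*f."""
    return h * f

def photon_momentum(lam):
    """Einstein's addition: p = h*s = h/lambda."""
    return h / lam

# For light of any wavelength, E = c*p, because f = c/lambda:
lam = 500e-9           # green light (example value)
assert abs(photon_energy(c / lam) - c * photon_momentum(lam)) < 1e-30
```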
Before 1920, Planck and Einstein did not have many adherents to their views. As late as 1924, Niels Bohr, Hendrik Kramers and John Slater published a theory of electromagnetic radiation, fighting the photon hypothesis at all cost. They went as far as abandoning the laws of conservation of energy and momentum at the atomic level. That was after the publication of Arthur Compton's effect, which describes the collision of a photon with an electron as conserving energy and momentum.
Within a year, experiments by Walther Bothe and Hans Geiger proved the ‘BKS-theory’ to be wrong. In 1924 Satyendra Bose and Albert Einstein derived Max Planck’s law from the assumption that electromagnetic radiation in a cavity behaves like
an ideal gas consisting of photons.
In 1923, Louis de Broglie
published a mathematical paper about the wave-particle character of light.  Applying
the theory of relativity, he predicted that electrons too would have a wave character. The motion of a particle or energy quantum does not correspond to a single monochromatic wave but to a group of waves, a wave packet. The speed of a particle cannot be related
to the wave velocity (λ/T=f/s), being larger than the speed of light for a material particle. Instead, the particle speed corresponds
to the speed of the wave packet, the group velocity. This is the derivative of frequency with respect to wave number (df/ds) rather than their quotient. Because of the relations of Planck and Einstein, this is the derivative of energy with
respect to momentum as well (dE/dp). At most, the group velocity equals the speed of light. (The group velocity df/ds=dE/dp equals approximately Δf/Δs. E/p>c and dE/dp<c follow from the relativistic relation between energy and momentum, E=√(E0²+c²p²), where E0 is the particle's rest energy. Only if E0=0, E/p=dE/dp=c. Observe that the word 'group' for a wave packet has a different meaning than in the mathematical
theory of groups.)
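De Broglie's argument in the parenthesis can be checked numerically. In units where c = 1, the relativistic relation gives E/p > c for the wave velocity and dE/dp = c²p/E < c for the group velocity; the momentum and rest energy below are arbitrary example values:

```python
import math

c = 1.0  # units in which the speed of light equals 1

def energy(p, E0):
    """Relativistic relation between energy and momentum: E = sqrt(E0^2 + c^2 p^2)."""
    return math.sqrt(E0 ** 2 + (c * p) ** 2)

def wave_velocity(p, E0):
    """E/p, the velocity of a single monochromatic wave."""
    return energy(p, E0) / p

def group_velocity(p, E0):
    """dE/dp = c^2 p / E, the velocity of the wave packet (the particle speed)."""
    return c ** 2 * p / energy(p, E0)

p, E0 = 0.75, 1.0                          # example momentum and rest energy
assert wave_velocity(p, E0) > c            # E/p exceeds the speed of light
assert group_velocity(p, E0) < c           # dE/dp stays below it
# The product of the two velocities is exactly c^2:
assert abs(wave_velocity(p, E0) * group_velocity(p, E0) - c ** 2) < 1e-12
```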
In order to test these suggestions, physicists had to find out whether electrons show interference phenomena. Experiments by Clinton
Davisson and Lester Germer in America and by George Thomson in England (1927) proved convincingly the wave character of electrons, thirty years after George’s father Joseph Thomson established the particle character of electrons. As predicted by
Louis de Broglie, the linear momentum turned out to be proportional to the wave number. Afterwards, the wave character of atoms and nucleons was demonstrated experimentally.
From dualism to duality
We have seen that it took quite a long time before physicists
accepted the particle character of light. Likewise, the wave character of electrons was not accepted immediately, but about 1930 no doubt was left among pre-eminent physicists.
This meant the end of the wave-particle (or matter-field) dualism, which implied that all phenomena have either a wave character or a particle character, and the beginning of the recognition of wave-particle duality as a universal property of matter. In 1927, Niels Bohr called the wave and particle properties complementary.
Bohr’s principle of complementarity presupposes that quantum phenomena only occur at an atomic level, which is refuted in solid state physics. According to Bohr, a measuring system is an indivisible whole, subject to the laws of classical physics, showing
either particle or wave phenomena. In different measurement systems, these phenomena would give incompatible results. This view is out of date.
The concept of complementarity
is not well-defined. Sometimes, non-commuting operators and the corresponding variables (like position and momentum) are called ‘complementary’ as well, at least if their ‘commutator’ is a number.
An interesting aspect of a wave is that it is motion within motion: a propagating oscillation. Classical mechanics
restricted itself to the motion of unchangeable pieces of matter. For macroscopic bodies like billiard balls, bullets, cars and planets, this is a fair approximation, but for microscopic particles it is not. Even in classical physics, the idea of a point-like
particle is controversial. Both its mass density and charge density are infinite, and its intrinsic angular momentum cannot be defined.
The experimentally established
fact of photons, electrons, and other microsystems having both wave and particle properties does not fit the still popular mechanistic worldview. However, the theory of characters accounts for this fact as follows.
The character of an electron consists of an interlacement of two characters, a generic kinetic wave character and an accompanying specific particle character that is physically qualified. The specific
character (different for different physical kinds of particles) determines primarily how electrons interact with other physical subjects, and secondarily which magnitudes play a role in this interaction. These characteristics distinguish the electron from
other particles, like protons and atoms, which are spatially founded, and like photons, which have a kinetic foundation.
Interlaced with the specific character is a generic pattern
of motion having the kinetic character of a wave packet. Electrons share this generic character with all other particles. In experiments demonstrating the wave character, there is little difference between electrons, protons, neutrons, or photons.
The generic wave character has primarily a kinetic qualification and secondarily a spatial foundation. The specific physical character determines the boundary conditions and the actual shape of the wave packet. Its wavelength is proportional to its linear
momentum, its frequency to its energy. A free electron’s wave packet looks different from that of an electron bound in a hydrogen atom.
The wave character representing
the electron’s motion has a tertiary characteristic as well, anticipating physical interaction. The wave function describing the composition of the wave packet determines the probability of the electron’s performance as a particle in any kind of interaction.
A purely periodic wave is infinitely extended in both space and time. It is unfit to give an adequate description of a moving particle, which is localized in space and time. A packet of waves having various amplitudes,
frequencies, wavelengths, and phases delivers a pattern that is more or less localized. The waves are superposed such that the net amplitude is zero almost everywhere in space and time. Only in a relatively small interval (to be indicated by Δ) does the net
amplitude differ from zero.
Let us restrict the discussion to rectilinear motion of a wave packet at constant speed. Now the motion is described by four magnitudes. These
are the position (x) of the packet at a certain instant of time (t), the wave number (s) and the frequency (f).
The packet is an aggregate of
waves with frequencies varying within an interval Δf and wave numbers varying within an interval Δs. Generally, it is provable that the wave packet in the direction of motion has a minimum dimension Δx such that Δx.Δs>1.
In order to pass a certain point, the packet needs a time Δt, for which Δt.Δf>1. If we want to compress the packet (Δx and Δt small), the packet consists of a wide spectrum of waves
(Δs and Δf large). Conversely, a packet with a well-defined frequency (Δs and Δf small) is extended in time and space (Δx and Δt large). It is impossible to produce a wave packet whose frequency (or wave number) has a precise value and whose dimension is simultaneously point-like. If we make the variation Δs small, the length of the wave packet Δx is large; if we try to localize the packet, the wave number shows a large spread.
Sometimes a wave packet is longer than one might believe. A photon
emitted by an atom has a dimension of Δx=cΔt, Δt being equal to the mean duration of the atom’s metastable state before the emission. Because Δt is of the order of 10^-8 sec and c=3*10^8 m/sec, the photon’s ‘coherence length’ in the direction of motion is several metres. This is confirmed by interference experiments, in which the photon is split into two parts, to be reunited after the parts have traversed different paths. If the path difference is less than a few metres, interference will occur, but this is not the case if the path difference is much longer. The coherence length of photons in a laser beam is many kilometres, because in a laser, Δt has been made artificially long.
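The arithmetic of the coherence length can be traced in a brief sketch; the constants are rounded, and the laser lifetime is an assumed, purely illustrative value.

```python
c = 3.0e8                       # speed of light, m/s (rounded)

# Spontaneous atomic emission: mean lifetime of the metastable state
dt_atom = 1.0e-8                # s
coherence_atom = c * dt_atom    # about 3 m, of the order quoted above

# Laser light: Δt made artificially long (assumed illustrative value)
dt_laser = 1.0e-4               # s
coherence_laser = c * dt_laser  # about 30 km
```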
A physical system emits or absorbs a wave packet as a whole. During its motion, the coherence of the composing waves is not always spatial. A wave packet can split itself without losing its kinetic coherence. This coherence is expressed by phase relations,
as can be demonstrated in interference experiments as described above. In general, two different wave packets do not interfere in this way, because their phases are not correlated. This means that a wave packet maintains its kinetic identity
during its motion. The physical unity of the particle comes to the fore when it is involved in some kind of interaction, for instance if it is absorbed by an atom causing a black spot on a photographic plate or a pulse in a counter tube named after Hans Geiger
and Walther Müller (1928). Emission and absorption are physically qualified events, in which an electron or a photon acts as an indivisible whole.
The identification of a particle with a wave packet seems to be problematic for various reasons. The first problem, the possible splitting and absorption of a wave packet, is mentioned above.
Second, the wave packet of a freely moving particle always expands, because the composing waves have different velocities. (Light in vacuum is an exception.) Even
if the wave packet is initially well localized, gradually it is smeared out over an increasing part of space and time. However, the assumption that the wave function satisfies a linear wave equation is a simplification of reality. Wave motion can be non-linearly
represented by a ‘soliton’ that does not expand. Unfortunately, a non-linear wave equation is mathematically more difficult to treat than a linear one.
In 1927, Werner Heisenberg observed that the wave packet is subject to a law known as the indeterminacy relation, uncertainty relation, or Heisenberg relation. As a matter of fact, there is as little agreement about its definition as about its name.
Combining the relations Δx.Δs>1 and Δt.Δf>1 with those of Max Planck (E=hf) and Albert Einstein (p=hs)
leads to Heisenberg’s relations for a wave packet: Δx.Δp>h and Δt.ΔE>h. (The values of ‘1’ and ‘h’, respectively, in these relations indicate an order of magnitude; sometimes other values are given.) The meaning of Δx
etc. is given above. In particular, Δt is the time the wave packet needs to pass a certain point. If Δx.Δs=Δt.Δf=1, the wave packet’s speed v=Δx/Δt=Δf/Δs
is approximately the group velocity df/ds, according to Louis de Broglie (1924). This interpretation is the oldest one, for the indeterminacy relations (without Planck’s constant) were applied in communication theory (where
Δf is the bandwidth) long before the birth of quantum mechanics. It is
interesting to observe that the indeterminacy relations are not characteristic of quantum mechanics, but of wave motion. The relations are an unavoidable consequence of the wave character of particles and of signals. I shall discuss some alternative interpretations,
in particular paying attention to Heisenberg’s relation between energy and time.
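As an aside not in the original text, the minimum product can be verified numerically. For a sampled Gaussian packet, with the spreads measured as standard deviations and the angular wave number k = 2πs, the product Δx·Δk comes out at its minimum value 1/2; the cruder interval widths used above give the order-of-magnitude statement Δx.Δs > 1. All grid parameters below are illustrative choices.

```python
import numpy as np

# Sample a Gaussian wave packet on a grid (illustrative parameters)
N = 4096
x = np.linspace(-50.0, 50.0, N, endpoint=False)
dx = x[1] - x[0]
sigma = 2.0
psi = np.exp(-x**2 / (2 * sigma**2))      # Gaussian envelope

# Probability density in position space
prob_x = np.abs(psi)**2
prob_x /= prob_x.sum()

# Probability density in wave-number space, via the discrete Fourier transform
psi_k = np.fft.fftshift(np.fft.fft(psi))
k = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi
prob_k = np.abs(psi_k)**2
prob_k /= prob_k.sum()

# Standard deviations in both spaces
dx_spread = np.sqrt((prob_x * x**2).sum() - (prob_x * x).sum()**2)
dk_spread = np.sqrt((prob_k * k**2).sum() - (prob_k * k).sum()**2)

product = dx_spread * dk_spread           # approaches 1/2 for a Gaussian
```

A Gaussian is the packet of minimum product; any other envelope gives a strictly larger value.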
Energy and time
Quantum mechanics connects any variable magnitude with a Hermitean operator having eigenfunctions and eigenvalues. The eigenvalues are the possible values for the magnitude in the system concerned. In a measurement,
the square of the absolute value of the scalar product of the system’s state function with an eigenfunction of the operator is the probability that the corresponding eigenvalue will be realized.
If two operators act successively on a function, the result may depend on their order. Heisenberg’s relation Δx.Δp > h can be derived as a property of the non-commuting operators for position and linear
momentum. In fact, each pair of non-commuting operators gives rise to a similar relation. This applies, e.g., to each pair out of the three components of angular momentum. Consequently, only one component of an electron’s magnetic moment (usually along
a magnetic field) can be measured. The other two components are undetermined, as if the electron exerts a precessional motion about the direction of the magnetic field.
However, there is no operator for kinetic time. Therefore, some people deny the existence of a Heisenberg relation for time and energy.
On the other hand, the operator for energy, called Hamilton-operator or Hamiltonian after William Hamilton, is very important. Its eigenvalues are the energy levels characteristic for e.g. an atom or a molecule. Each operator commuting with the Hamiltonian
represents a ‘constant of the motion’ subject to a conservation law.
From the wave function, the probability to find a particle in a certain state can be calculated. Now the indeterminacy is a measure of the mean standard deviation, the statistical inaccuracy of a probability calculation. The indeterminacy of time can
be interpreted as the mean lifetime of a metastable state. If the lifetime is large (and the state is relatively stable), the energy of the state is well defined. The rest energy of a short-lived particle is only determined within the margin given by the
Heisenberg relation for time and energy.
This interpretation is needed to understand why an atom is able to absorb a light quantum emitted by another atom in similar circumstances.
Because the photon carries linear momentum, both atoms acquire momentum and kinetic energy. The photon’s energy would therefore fall short of the amount needed to excite the second atom. Usually this shortage is smaller than the uncertainty in the energy levels concerned. However, this
is not always the case for atomic nuclei. Unless the two nuclei are moving towards each other, the process of emission followed by absorption would be impossible. Rudolf Mössbauer discovered this consequence of Heisenberg’s relations in 1958. Since
then, the Mössbauer effect has become an effective instrument for investigating nuclear energy levels.
The position of a wave packet is measurable within a margin of Δx and its linear momentum within a margin of Δp. Both are as small as experimental circumstances permit, but their product has a minimum
value determined by Heisenberg’s relation. The accuracy of the measurement of position restricts that of momentum.
Initially the indeterminacy was interpreted as
an effect of the measurement disturbing the system. The measurement of one magnitude disturbs the system such that another magnitude cannot be measured with an unlimited accuracy. Heisenberg explained this by imagining a microscope exploiting light
to determine the position and the momentum of an electron. Later, this appeared to
be an unfortunate view. It seems better to consider Heisenberg’s relations to be the cause of the limited accuracy of measurement, rather than its effect.
The Heisenberg relation for energy and time has a comparable consequence for the measurement of energy. If a measurement has duration Δt, its accuracy cannot be better than ΔE>h/Δt.
In quantum mechanics, the law of conservation of energy takes a slightly
different form. According to the classical formulation, the energy of a closed system is constant. In this statement, time does not occur explicitly. The system is assumed to be isolated for an indefinite time, and that is questionable. Heisenberg’s
relation suggests a new formulation. For a system isolated during a time interval Δt, the energy is constant within a margin of ΔE≈h/Δt. Within this margin, the system shows spontaneous energy fluctuations,
only relevant if Δt is very small. In fact, the value of ΔE is less significant than the relative indeterminacy ΔE/E. For a macroscopic system the energy E is so much larger than ΔE
that the energy fluctuations can be neglected, and the law of conservation of energy remains valid.
According to quantum field theory, a physical vacuum is not an empty
space. Spontaneous fluctuations may occur. A fluctuation leads to the creation and annihilation of a virtual photon or a virtual pair consisting of a particle and an antiparticle, having an energy of ΔE, within the interval Δt<h/ΔE.
Meanwhile the virtual particle or pair is able to exert an interaction, e.g. a collision between two real particles. (Such virtual processes are depicted in the diagrams named after Richard Feynman.) Virtual particles are not directly observable but play
a part in several real processes.
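A back-of-the-envelope sketch, with rounded constants and purely illustrative, gives the time scale allowed for such a fluctuation:

```python
h = 6.63e-34           # Planck's constant, J·s (rounded)
E0_electron = 8.2e-14  # electron rest energy, J (about 0.511 MeV)

# Energy borrowed from the vacuum for a virtual electron-positron pair
dE = 2 * E0_electron
# The fluctuation must end within Δt < h/ΔE
dt_max = h / dE        # of the order of 4e-21 s
```

On this time scale even light travels only about a picometre, which indicates why virtual particles never appear directly in detectors.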
Amplitude and probability
The amplitude of waves in water, sound, and light corresponds to a measurable, real physical magnitude: in water the height of its surface, in sound the pressure of
air, in light the electromagnetic field strength. The energy of the wave is proportional to the square of the amplitude. This interpretation is not applicable to the waves for material particles like electrons. In this case the wave has a less concrete character;
it has no direct physical meaning. Even in mathematical terms, the wave is not real, for the wave function has a complex value.
In 1926, Max Born offered a new interpretation,
since then commonly accepted. He stated that a wave function (real or complex) is a probability
function. In a footnote added in proof, Born observed that the probability is proportional to the square of the absolute value of the wave function.
The wave function we are talking
about is prepared at an earlier interaction, for instance, the emission of the particle. It changes during its motion, and one of its possibilities is realized at the next interaction, like the particle’s absorption. The wave function expresses the transition
probability between the initial and the final state.
This probability may concern any measurable property that is variable. Hence, it does not concern natural constants like the speed of light or the charge of the electron. According to Born, the probability interpretation
bridges the apparently incompatible wave and particle aspects: ‘The true philosophical import of the statistical interpretation consists in the recognition that the wave-picture and the corpuscle-picture are not mutually exclusive, but are two complementary
ways of considering the same process’. Wave properties determine the probability
that position, momentum, etc., manifest themselves at some later moment. These are traditionally considered properties of particles, but now they appear to be propensities, to be actualized in some future interaction.
Classical mechanics used statistics as a mathematical means, assuming that the particles behave deterministically in principle. In 1926, Born’s probability interpretation put a definitive
end to mechanist determinism, which had already lost its credibility because of radioactivity. Waves and wave motion are still determined, e.g. by Schrödinger’s equation, even if no experimental method exists to determine the phase of a wave. However,
the wave function determines only the probability of future interactions. The fact that quantum physics is a stochastic theory has evoked widely differing reactions. Albert Einstein considered the theory incomplete. Max Born stressed that at least waves behave
deterministically, only their interpretation having a statistical character. Niels Bohr accepted a fundamental stochastic element in his worldview. In quantum mechanics, the particles themselves behave stochastically.
Interference of chance
Even stranger is that chance is subject to interference. In the traditional probability calculus probabilities can be added or multiplied; nobody ever imagined that probabilities could interfere. Interference of waves may result in an increase of
probability, but also in a decrease, even in the extinction of probability. Hence, besides a probability interpretation of waves, we have a wave interpretation of probability.
Outside quantum mechanics, this is still unheard of, not only in daily life and the humanities, but in sciences like biology and ethology as well. The reason is that interference
of probabilities only occurs as long as there is no physical interaction by which a chance realizes itself. Observe that an interference experiment aims at demonstrating interference. This is only possible if the interference of
waves is followed by an interaction of the particles concerned with, e.g., a screen. The absence of physical interaction is an exceptional situation. It only occurs if the system concerned has no internal interactions (or if these are frozen), as
long as it moves freely. In macroscopic bodies, interactions occur continuously and interference of probabilities does not occur. Therefore, the phenomenon of interference of chances is unknown outside quantum physics.
Probability as kinetic anticipation of physical interaction
The concept of probability or chance anticipates the physical relation frame, because only by means of a physical interaction can a chance be realized. An open-minded spectator observes an asymmetry in time.
Probability always concerns future events. It draws a boundary line between a possibility in the present and a realization in the future. For this realization, a physical interaction is needed. The wave equation and the wave function describe probabilities,
not their realization. The wave packet anticipates a physical interaction leading to the realization of a chance, but is itself a kinetic subject, not a physical subject. If the particle realizes one of its possibilities, it simultaneously destroys all alternative
possibilities. In that respect, there is no difference between quantum mechanics and classical theories of probability.
As long as the position of an electron is not determined,
its wave packet is extended in space and time. As soon as an atom absorbs the electron at a certain position, the probability of its being elsewhere collapses to zero. Theoretically, this means the projection of a state vector onto one of the eigenvectors in the Hilbert space representing all possible states of the system. ‘No other permanent or transient principle of physics has ever given rise to so many comments, criticisms, pleadings, deep remarks, and plain nonsense as the wave function collapse.’ In particular, the assumptions that probability is an expression of our limited knowledge of a system
and that the observer causes the reduction of the wave packet, have led to a number of subjectivist and solipsist interpretations of quantum physics and related problems, of which I shall only briefly discuss that of Schrödinger’s cat. This so-called
reduction of the wave packet requires a velocity far exceeding the speed of light. However, this reduction concerns the wave character, not the physical character of the particle. It does not counter the physical law that no material particle can move faster than light.
Likewise, Schrödinger’s equation describes the states of an atom or molecule and the transition probabilities between states. It does not account
for the actual transition from a state to an eigenstate, when the system experiences a measurement or another kind of interaction. According to Nancy Cartwright, ‘This transition therefore does not belong to elementary quantum dynamics. But it is meant
to express a physical interaction between the measured object and the measuring apparatus, which one would expect to be a direct consequence of dynamics’.
‘Von Neumann claimed that the reduction of the wave packet occurs when a measurement is made. But it also occurs when a quantum system is prepared in an eigenstate, when one particle scatters from another, when a radioactive nucleus disintegrates, and
in a large number of other transition processes as well … There is nothing peculiar about measurement, and there is no special role for consciousness in quantum mechanics.’
Cartwright also states: ‘… there are not two different kinds of evolution in quantum mechanics. There are evolutions that are correctly described by Schrödinger’s equation, and there are evolutions that are correctly described
by something like von Neumann’s projection postulate. But these are not different kinds in any physically relevant sense’.
However, there is a significant difference. The first concerns a reversible motion, the second an irreversible physical process: ‘Indeterministically and irreversibly, without the intervention of any external observer, a system can change its state …
When such a situation occurs, the probabilities for these transitions can be computed; it is these probabilities that serve to interpret quantum mechanics.’
From Schrödinger’s cat to decoherence
Is the problem of the reduction of the wave packet relevant for macroscopic bodies as well? Historically, this question is concentrated on the popular problem of Erwin Schrödinger’s cat, hypothetically
locked up alive in a non-transparent case. A mechanism releases a mortal poison at an unpredictable instant, for instance controlled by a radioactive process. As long as the case is not opened, one may wonder whether the cat is still alive. If quantum mechanics
is applied consistently, the state of the cat is a superposition of two eigenstates, dead and alive, respectively.
The principle of decoherence, developed at
the end of the twentieth century, may provide a satisfactory answer. For a macroscopic body, a state being a combination of eigenstates will spontaneously change very fast into an eigenstate, because of the many interactions taking place within the
macroscopic system itself. This solves the problem of Schrödinger’s cat, for each superposition of dead and alive transforms itself almost immediately into a state of dead or alive. The principle of decoherence is in some cases
provable, though it is not proved generally. Decoherence even occurs in quite small molecules. There are exceptions too, in systems without much internal energy dissipation, e.g. electromagnetic
radiation in a transparent medium and superconductors.
The principle of decoherence is part of a realistic interpretation of quantum physics. It does not idealize the ‘reduction of the wave packet’ to a projection in an abstract state space. It takes
into account the character of the macroscopic system in which a possible state is realized by means of a physical interaction.
The so-called measurement problem
The so-called measurement problem constitutes the nucleus of what is usually
called the interpretation of quantum mechanics. ‘The interpretive challenge of quantum theory is often presented in terms of the measurement problem: i.e., that the formalism itself does not specify that only one outcome happens, nor does it explain
why or how that particular outcome happens. This is the context in which it is often asserted that the theory is incomplete and is therefore in need of alteration in some way.’
It is foremost a philosophical problem, not a physical one, which is remarkable, because measurement is part of experimental physics, and the starting point of theoretical physics. After the development of quantum physics, both experimental and theoretical
physicists have investigated the relevance of symmetry, and the structure of atoms and molecules, solids and stars, and subatomic structures like nuclei and elementary particles. Apparently, this has escaped the attention of many philosophers, who are still
discussing the consequences of Heisenberg’s indeterminacy relations.
3.4. Symmetric and antisymmetric
Fermions and bosons
The concept of
probability is applicable to a single particle as well as to a homogeneous set of similar particles, a gas consisting of molecules, electrons or photons. In order to study such systems, statistical physics has, since circa 1860, developed various mathematical
methods. A distribution function points out how the energy is distributed over the particles, how many particles have a certain energy value, and how the average energy depends on temperature. In any distribution function, the temperature is an important parameter.
Classical physics assigned each particle its own state, but in quantum physics, this would lead to wrong results. It is better to determine the possible
states, and to calculate how many particles occupy a given state, without asking which particle occupies which state. It turns out that there are two entirely different cases, referring to ‘bosons’ and ‘fermions’, respectively.
In the first
case, the occupation number of particles in a well-defined state is unlimited. Bosons like photons are subject to a distribution function derived in 1924 by Satyendra Bose and published by Albert Einstein, hence called Bose-Einstein statistics. Bosons
have an integral spin, and the occupation number of each state may vary from zero to infinity. An integral spin means that the intrinsic angular momentum is an integer times Planck’s reduced constant: 0, h/2π, 2h/2π,
etc. A half-integral spin means that the intrinsic angular momentum has values like (1/2)h/2π or (3/2)h/2π.
In the second
case, each well-defined state is occupied by at most one particle, according to Wolfgang Pauli’s exclusion principle. The presence of a particle in a given state excludes the presence of another similar particle in the same state. Fermions like
electrons, protons, and neutrons have a half-integral spin. They are subject to the distribution function that Enrico Fermi and Paul Dirac derived in 1926.
In both cases,
the distribution approximates the classical Maxwell-Boltzmann distribution function, if the mean occupation of available states is much smaller than 1. This applies to molecules in a classical gas.
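A brief sketch, with parameter values of my own choosing, illustrates the three distribution functions and their agreement in the dilute limit:

```python
import math

def bose_einstein(E, mu, kT):
    """Mean occupation number for bosons."""
    return 1.0 / (math.exp((E - mu) / kT) - 1.0)

def fermi_dirac(E, mu, kT):
    """Mean occupation number for fermions (always between 0 and 1)."""
    return 1.0 / (math.exp((E - mu) / kT) + 1.0)

def maxwell_boltzmann(E, mu, kT):
    """Classical mean occupation number."""
    return math.exp(-(E - mu) / kT)

# Dilute limit: (E - mu) >> kT, so the occupation is much smaller than 1
E, mu, kT = 10.0, 0.0, 1.0
nb = bose_einstein(E, mu, kT)
nf = fermi_dirac(E, mu, kT)
nm = maxwell_boltzmann(E, mu, kT)
# All three values nearly coincide and lie far below 1
```

At E = mu the Fermi-Dirac occupation equals exactly 1/2, whereas the Bose-Einstein function diverges as E approaches mu from above, signalling the tendency of bosons to crowd into a single state.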
The distinction of fermions and bosons rests on permutation symmetry. In a finite set the elements can be ordered into a sequence and
numbered using the natural numbers as indices. For n elements, this can be done in n! = 1·2·3·…·n different ways. The n! permutations are symmetric if the elements are indistinguishable. Permutation symmetry is not
spatial but quantitative.
In a system consisting of a number of similar particles, the state of the aggregate can be decomposed into a product of separate states for each
particle apart. (It is by no means obvious that the state function of an electron or photon gas can be written as a product (or rather a sum of products) of state functions for each particle apart, but it turns out to be a quite close approximation.) A permutation
of the order of similar particles should not have consequences for the state of the aggregate as a whole. However, in quantum physics only the square of a state is relevant to probability calculations. Hence, exchanging two particles allows of two
possibilities: either the state is multiplied by +1 and does not change, or it is multiplied by –1. In both cases, a repetition of the exchange produces the original state. In the first case, the state is called symmetric with respect to a permutation,
in the second case antisymmetric.
In the antisymmetric case, if two particles would occupy the same state an exchange would simultaneously result in multiplying the state
by +1 (because nothing changes) and by –1 (because of antisymmetry), leading to a contradiction. Therefore, two particles cannot simultaneously occupy the same state. This is Wolfgang Pauli’s exclusion principle concerning fermions. No comparable
principle applies to bosons, having symmetric wave functions with respect to permutation.
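The argument can be made concrete in a few lines of code, a sketch of the standard construction rather than anything in the original text: two-particle states are built as (anti)symmetrized products, and the antisymmetric combination of two identical single-particle states vanishes identically.

```python
import numpy as np

# Two orthonormal single-particle states
phi = np.array([1.0, 0.0])
chi = np.array([0.0, 1.0])

def symmetric(a, b):
    """Boson-like product state: multiplied by +1 under exchange."""
    return np.kron(a, b) + np.kron(b, a)

def antisymmetric(a, b):
    """Fermion-like product state: multiplied by -1 under exchange."""
    return np.kron(a, b) - np.kron(b, a)

# For two different states, both combinations are nonzero
assert np.linalg.norm(symmetric(phi, chi)) > 0
assert np.linalg.norm(antisymmetric(phi, chi)) > 0

# Two fermions in the same state: the state vanishes identically,
# which is Pauli's exclusion principle
assert np.linalg.norm(antisymmetric(phi, phi)) == 0.0
```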
Both a distribution function like the Fermi-Dirac statistics and Pauli’s
exclusion principle are only applicable to a homogeneous aggregate of similar particles. In a heterogeneous aggregate like a nucleus, they must be applied to the protons and neutrons separately.
The distinction of fermions and bosons, and the exclusion principle for fermions, have a fundamental significance for the
understanding of the characters of material things containing several similar particles. To a large extent, it explains the orbital structure of atoms and the composition of nuclei from protons and neutrons.
When predicting the wave character of electrons, Louis de Broglie suggested that the stability of the electronic orbit in a hydrogen atom is explainable by assuming that the electron moves around the nucleus
as a standing wave. This implies that the circumference of the orbit is an integral number times the wavelength. From the classical theory of circular motion, he derived that the orbital angular momentum should be an integral number times Max Planck’s
reduced constant (h/2π). This is precisely the quantum condition applied by Niels Bohr in 1913 in his first atomic theory. For a uniform circular motion with radius r, the angular momentum L=rp. The linear
momentum p = h/λ according to Einstein. If the circumference 2πr = nλ, n being a positive integer, then L=nλp/2π=nh/2π. Quantum mechanics allows of the value L=0 for orbital
angular momentum. This has no analogy as a standing wave on the circumference of a circle.
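De Broglie's little derivation can be traced in a few lines; the constant is rounded, and the radius is an illustrative value of the order of the Bohr radius.

```python
import math

h = 6.626e-34   # Planck's constant, J·s

def orbital_angular_momentum(r, n):
    """Standing-wave condition on a circular orbit:
    circumference 2·pi·r = n·lambda, momentum p = h/lambda,
    angular momentum L = r·p."""
    lam = 2 * math.pi * r / n
    p = h / lam
    return r * p

r = 5.3e-11     # illustrative orbit radius, m
for n in range(1, 5):
    L = orbital_angular_momentum(r, n)
    # L comes out as n·h/(2·pi), Bohr's quantum condition
    assert math.isclose(L, n * h / (2 * math.pi))
```

Note that the radius cancels: the quantization of L follows from the standing-wave condition alone, independently of the size of the orbit.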
The atomic physicists at Copenhagen, Göttingen, and Munich
considered de Broglie’s idea rather absurd, but it received support from Albert Einstein, and it inspired Erwin Schrödinger to develop his wave equation.
In a stable system, Schrödinger’s equation is independent of time and its solutions are stationary waves, comparable to the standing waves in a violin string or an organ pipe. Only a limited number of frequencies are possible, corresponding
to the energy levels in atoms and molecules. In contrast, a time-dependent Schrödinger equation describes transitions between energy levels, giving rise to the discrete emission and absorption spectra characteristic for atoms and molecules. Although
one often speaks of the Schrödinger equation, there are many variants, one for each physical character. Each variant specifies the system’s boundary conditions and expresses the law for the possible motions of the particles concerned.
Particles in a box
In the practice of solid-state physics, the exclusion principle is more important than Schrödinger’s equation. This can be elucidated by discussing the model of particles confined to a rectangular box. Again, the wave
functions look like standing waves.
In a good approximation the valence electrons in a metal or semiconductor are not bound to individual atoms but are free to move around.
The mutual repulsive electric force of the electrons compensates for the attraction by the positive ions. The electron’s energy consists almost entirely of kinetic energy, E=p^2/2m, if p is its linear momentum
and m its mass.
Because the position of the electron is confined to the box, in Heisenberg’s relation Δx equals the length of the box (analogously
for y and z). Because Δx is relatively large, Δp is small and the momentum is well defined. Hence the momentum characterizes the state of each electron and the energy states are easy to calculate. In a three-dimensional
momentum space a state denoted by the vector p occupies a volume Δp. Momentum space is a three-dimensional diagram for the vector p’s components, p_x, p_y, and p_z. The volume of a state equals Δp = Δp_x·Δp_y·Δp_z. In the described model, the states are mostly occupied up to the energy value E_F,
the ‘Fermi energy’, determining a sphere around the origin of momentum space. Outside the sphere, most states are empty. A relatively thin skin, its thickness being proportional to the temperature, separates the occupied and empty states.
According to the exclusion principle, a low energy state is occupied by two electrons (because there are two possible spin states), whereas high-energy states are empty. In a metal,
this leads to a relatively sharp separation of occupied and empty states. The mean kinetic energy of the electrons is almost independent of temperature, and the specific heat is proportional to temperature, strikingly different from other aggregates of particles.
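The free-electron picture sketched above can be made quantitative. The following minimal sketch computes the Fermi energy of a free-electron gas, E_F = (ħ²/2m)(3π²n)^(2/3), together with the corresponding Fermi temperature; the electron density used is an assumed value roughly appropriate for copper. The result illustrates why the ‘skin’ separating occupied and empty states is thin at room temperature.

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
k_B = 1.380649e-23       # Boltzmann constant, J/K
eV = 1.602176634e-19     # joules per electronvolt

def fermi_energy(n):
    """Fermi energy of a free-electron gas with number density n (per m^3)."""
    return hbar**2 / (2 * m_e) * (3 * math.pi**2 * n) ** (2 / 3)

n_copper = 8.5e28                 # assumed valence-electron density of copper, m^-3
E_F = fermi_energy(n_copper)
T_F = E_F / k_B                   # Fermi temperature

print(f"E_F = {E_F / eV:.2f} eV")               # a few electronvolts
print(f"T_F = {T_F:.3g} K")                     # tens of thousands of kelvin
print(f"T/T_F at 300 K = {300 / T_F:.4f}")      # the 'skin' is relatively thin
```

Because the Fermi temperature is of the order of 10⁵ K, only a small fraction of the electrons near E_F can be thermally excited, which is why their specific heat is merely proportional to temperature.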
Mechanical oscillations or sound waves in a solid form wave packets. These bosons are called phonons or sound particles. Bose-Einstein statistics leads to Peter Debye’s law
for the specific heat of a solid. At low temperatures the specific heat is proportional to the third power of temperature. Except for very low temperatures, the electrons contribute far less to the specific heat of a solid than the phonons do. The number
of electrons is independent of temperature, whereas the number of phonons in a solid or photons in an oven strongly depends on temperature.
A similar situation applies
to an oven, in which electromagnetic radiation is in thermal equilibrium. According to Planck’s law of radiation, the energy of this boson gas is proportional to the fourth power of temperature. For a gas satisfying the Maxwell-Boltzmann distribution,
the energy is proportional to temperature. Those who remain within classical mechanics sometimes define temperature as a measure of the mean energy of the molecules, but what such a definition would mean for a fermion gas or a boson gas is unclear.
Hence, the difference between fermion and boson aggregates comes quite dramatically to the fore in the temperature dependence of their energy. Amazingly, the physical character
of the electrons, phonons, and photons plays a subordinate part compared to their kinetic character. Largely, the symmetry of the wave function determines the properties of an aggregate. Consequently, a neutron star has much in common with an electron
gas in a metal.
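The contrasting temperature dependences mentioned above can be checked numerically. The sketch below encodes only the proportionalities stated in the text (a linear electronic specific heat, a Debye T³ specific heat, a T⁴ radiation energy, and a T-proportional classical gas energy); the prefactors are arbitrary placeholders, not measured constants.

```python
def c_electron(T, gamma=1.0):
    """Electronic specific heat: proportional to T (arbitrary prefactor)."""
    return gamma * T

def c_debye(T, a=1.0):
    """Debye specific heat at low temperature: proportional to T**3."""
    return a * T**3

def u_planck(T, b=1.0):
    """Energy of thermal radiation: proportional to T**4 (Planck's law)."""
    return b * T**4

def u_classical(T, c=1.0):
    """Energy of a classical Maxwell-Boltzmann gas: proportional to T."""
    return c * T

# Doubling the temperature multiplies each quantity by a characteristic factor:
print(c_electron(600) / c_electron(300))    # 2   (fermion gas)
print(c_debye(600) / c_debye(300))          # 8   (phonon gas, T**3)
print(u_planck(600) / u_planck(300))        # 16  (photon gas, T**4)
print(u_classical(600) / u_classical(300))  # 2   (classical gas)
```

The characteristic doubling factors (2, 8, 16, 2) show how sharply the statistics of an aggregate comes to the fore in its temperature dependence.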
The existence of antiparticles is a consequence of a symmetry of the relativistic wave equation. The quantum mechanics of Erwin Schrödinger and Werner Heisenberg in 1926 was not relativistic, but in 1928
Paul Dirac found a relativistic formulation. From his equation follows the electron’s
half-integral angular momentum, not as a spinning motion as conceived by its discoverers, Samuel Goudsmit and George Uhlenbeck, but as a symmetry property (still called spin).
Dirac’s wave equation had an unexpected result, to wit the existence of negative energy eigenvalues for free electrons. According to relativity theory, the energy E and momentum p for a freely moving particle with rest energy
E₀ = m₀c² are related by the formula E² = E₀² + (cp)². For a given value of the linear momentum p, this equation has both
positive and negative solutions for the energy E. The positive values are minimally equal to the rest energy E₀ and the negative values are maximally −E₀. This leaves a gap of twice the rest energy, about 1 MeV
for an electron, much more than the energy of visible light, which is a few eV per photon. Classical physics could ignore negative solutions, but this is not allowed in quantum physics. Even if the energy difference between positive and negative energy levels
is large, the transition probability is not zero. In fact, each electron should spontaneously jump to a negative energy level, releasing a gamma particle having an energy of at least 1 MeV.
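The size of the gap follows directly from the relation E² = E₀² + (cp)². A minimal numerical check, using standard values of the constants:

```python
import math

c = 2.99792458e8          # speed of light, m/s
m_e = 9.1093837015e-31    # electron rest mass, kg
eV = 1.602176634e-19      # joules per electronvolt

def energy(p, m0):
    """Positive-energy branch of E**2 = E0**2 + (c*p)**2."""
    E0 = m0 * c**2
    return math.sqrt(E0**2 + (c * p)**2)

E0 = m_e * c**2
gap_MeV = 2 * E0 / eV / 1e6    # separation between +E0 and -E0

print(f"rest energy: {E0 / eV / 1e6:.3f} MeV")   # about 0.511 MeV
print(f"gap: {gap_MeV:.3f} MeV")                 # about 1.022 MeV
```

The gap of roughly 1 MeV dwarfs the few eV carried by a photon of visible light, which is why the negative levels play no part in ordinary optics.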
Dirac took recourse to Pauli’s exclusion principle. By assuming all negative energy levels to be occupied, he could explain why these are unobserved most of the time, and why many electrons have positive energy values. An electron in
one of the highest negative energy levels may jump to one of the lowest positive levels, absorbing a gamma particle having an energy of at least 1 MeV. The reverse, a jump downwards, is only possible if in the nether world of negative energy levels, at least
one level is unoccupied. Influenced by an electric or magnetic field, such a hole moves as if it were a positively charged particle. Initially, Dirac assumed protons to correspond to these holes, but it soon became clear that the rest mass of a hole should
be the same as that of an electron.
After Carl Anderson in 1932 discovered the positron, a positively charged particle having the electron’s rest mass, this particle
was identified with a hole in Dirac’s nether world. This identification took some time.
The assumption of the existence of a positive electron besides the negative one was in 1928 much more difficult to accept than in 1932. In 1928, physics acknowledged only three elementary particles, the electron, the proton and the photon. In 1930, the existence
of the neutrino was postulated and in 1932, Chadwick discovered the neutron. The completely occupied nether world of electrons is as inert as the nineteenth-century ether. It neither moves nor interacts with any other system. That is why we do not observe
it. For those who find this difficult to accept, alternative theories are available explaining the existence of antiparticles. Experiments pointed out that an electron is able to annihilate a positron, releasing at least two gamma particles. In the inertial
system in which the centre of mass for the electron-positron pair is at rest, their total momentum is zero. Because of the law of conservation of momentum, the annihilation causes the emergence of at least two photons, having opposite momentum.
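In the centre-of-mass frame the kinematics of annihilation are fixed by conservation of energy and momentum alone. The sketch below, assuming annihilation at rest into exactly two photons, shows that each photon carries the electron’s rest energy and that their momenta cancel.

```python
c = 2.99792458e8          # speed of light, m/s
m_e = 9.1093837015e-31    # electron (and positron) rest mass, kg
eV = 1.602176634e-19      # joules per electronvolt

# Electron and positron at rest: total energy 2*m_e*c**2, total momentum 0.
total_energy = 2 * m_e * c**2

# Two photons share the energy equally and fly apart in opposite directions.
E_photon = total_energy / 2           # energy per photon
p_photon = E_photon / c               # photon momentum magnitude, p = E/c
momenta = (+p_photon, -p_photon)      # opposite directions along one axis

print(f"photon energy: {E_photon / eV / 1e3:.1f} keV")   # about 511 keV
print(f"total momentum: {sum(momenta)} kg*m/s")          # 0: momentum conserved
```

A single photon could not carry away the energy while keeping the total momentum zero, which is why at least two photons must emerge.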
Meanwhile it has been established that besides electrons all particles, bosons included, have antiparticles. The photon is identical to its antiparticle. The existence of antiparticles
rests on several universally valid laws of symmetry. A particle and its antiparticle have the same mean lifetime, rest energy and spin, but opposite values for charge, baryon number, or lepton number (5.2).
However, if the antiparticles are symmetrical to particles, why are there so few? (Or why is Dirac’s nether world nearly completely occupied?) Probably, this problem can only be solved within the framework of a theory about
the early development of the cosmos.
Band theory in solid state physics
The image of an infinite set of unobservable electrons having negative energy strongly defies common sense. However, it received unsolicited support from the so-called band theory
in solid-state physics, being a refinement of the earlier discussed free-electron model. The influence of the ions is not completely compensated for by the electrons. An electric field remains having the same periodic structure as the crystal. Taking this
field into account, Rudolf Peierls developed the band model. It explains various properties of solids quite well, both quantitatively and qualitatively.
A band is a set
of neighbouring energy levels separated from other bands by an energy gap. (A band is comparable to an atomic shell but has a larger bandwidth.) It may be fully or partly occupied by electrons, or it is unoccupied. Both full and empty bands are physically
inert. In a metal, at least one band is partly occupied, partly unoccupied by electrons. An insulator has only full (i.e., entirely occupied) bands besides empty bands. The same applies to semiconductors, but now a full band is separated from an empty band
by a relatively small gap. According to Peierls in 1929, if energy is added in the form of heat or light (a phonon or a photon), an electron jumps from the lower band to the higher one, leaving a hole behind. This hole behaves like a positively charged particle.
In many respects, an electron-hole pair in a semiconductor looks like an electron-positron pair. Only the energy needed for its formation is about a million times smaller. Dirac and Heisenberg corresponded with each other about both theories, initially without
observing the analogy.
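The factor of ‘about a million’ can be illustrated by the wavelength λ = hc/E of the photon matching each gap. The band-gap value used below (1.1 eV, roughly that of silicon) is an assumed illustrative number, not taken from the text.

```python
h = 6.62607015e-34        # Planck constant, J*s
c = 2.99792458e8          # speed of light, m/s
eV = 1.602176634e-19      # joules per electronvolt

def photon_wavelength(E_eV):
    """Wavelength of a photon with energy E_eV (in electronvolts)."""
    return h * c / (E_eV * eV)

gap_semiconductor = 1.1       # eV, roughly silicon (assumed value)
gap_dirac = 1.022e6           # eV, twice the electron rest energy

ratio = gap_dirac / gap_semiconductor
print(f"semiconductor: {photon_wavelength(gap_semiconductor) * 1e9:.0f} nm")  # near infrared
print(f"Dirac gap:     {photon_wavelength(gap_dirac) * 1e12:.2f} pm")         # gamma radiation
print(f"energy ratio:  {ratio:.2e}")                                          # about a million
```

An electron-hole pair is thus created by an infrared photon, an electron-positron pair only by a gamma quantum, while the formal structure of the two processes is the same.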
Another important difference should be mentioned. The set of electron states in Dirac’s theory is an ensemble. In the class of possibilities independent of time and space, half is mostly occupied, the other half is mostly
empty. There is only one nether world of negative energy values. In contrast, the set of electrons in a semiconductor is a spatially and temporally restricted collection of electrons, in which some electron states are occupied, others unoccupied.
There are as many of these collections as there are semiconductors. To be sure, Peierls was interested in an ensemble as well. In his case, this is the ensemble of all semiconductors of a certain kind. This may be copper oxide, the standard example of a semiconductor
in his days, or silicon, the base material of modern chips. But this only confirms the distinction from Dirac’s ensemble of electrons.
Common sense did not turn out to be a reliable guide in the investigation of characters.
At the end of the nineteenth century, classical mechanics was considered the paradigm of science. Yet, even then it was clear that daily experience stood in the way of the development of electromagnetism, for instance. The many models of the ether were more
an inconvenience than a stimulus for research.
When relativity theory and quantum physics unsettled classical mechanics, this led to uncertainty about the reliability of
science. At first, the oncoming panic was warded off by the reassuring thought that the new theories were only valid in extreme situations. These situations were, for example, a very high speed, a total eclipse, or a microscopic size. However, astronomy cannot
cope without relativity theory, and chemistry fully depends on quantum physics. All macroscopic properties and phenomena of solid-state physics can only be explained in the framework of quantum physics.
Largely, daily experience rests on habituation. In hindsight, it is easy to show that classical mechanics collided with common sense in its starting phase with respect to the law of inertia. Action at a distance in Isaac Newton’s
Principia evoked the abhorrence of his contemporaries, but the nineteenth-century public did not experience any trouble with this concept. In the past, mathematical discoveries would cause heated discussions, but the rationality of irrational numbers
or the reality of non-Euclidean spaces is now accepted almost as a matter of course.
This does not mean that common sense is always wrong in scientific affairs. The irreversibility
of physical processes is part of daily experience. In the framework of the mechanist worldview of the nineteenth century, physicists and philosophers have stubbornly but in vain tried to reduce irreversible processes to reversible motion, and to save determinism.
This is also discernible in attempts to find (mostly mathematical) interpretations of quantum mechanics that allow of temporal reversibility and of determinism, such as the so-called many-worlds interpretation, and the transaction interpretation.
Since the twentieth century, mathematics, science and technology have dominated our society to such an extent that new developments are easier to integrate in our daily experience than
before. Science has taught common sense to accept that the characters of natural things and events are neither manifest nor evident. The hidden properties of matter and of living beings brought to light by the sciences are applicable in a technology that is
accessible for anyone but understood by few. This technology has led to an unprecedented prosperity. Our daily experience adapts itself easily and eagerly to this development.
4. The spectrum of physical interactions
4.1. The irreversibility of physical interaction
The relevance of a philosophical analysis of physical characters can hardly be overestimated. Yet it receives very little attention from professional philosophers of science.
The discovery of the electron in 1897 provided the study of the structure of matter with a strong impulse, both in physics and in chemistry. Our knowledge of atoms and molecules, of nuclei and sub-atomic particles,
of stars and stellar systems, dates largely from the twentieth century. The significance of electrotechnology and electronics for the present society is overwhelming.
The physical aspect of the cosmos is characterized by interactions between two or more subjects. Interaction is a relation different from the quantitative, spatial, or kinetic relations, on which it can be projected. It is subject to natural laws. Some laws are
specific, like the electromagnetic ones, determining characters of physical kinds. Some laws are general, like the laws of thermodynamics and the laws of conservation of energy, linear and angular momentum. The general laws constitute the physical-chemical
relation frame. Both for the generic and the specific laws, physics has reached a high level of unification.
Because of their relevance to the study of types of characters, this
chapter starts with an analysis of the projections of the physical relation frame onto the three preceding ones. Next, I investigate the characters of physically stable things, consecutively quantitatively, spatially, and kinetically founded. Section 7 surveys
aggregates and statistics. Finally, section 8 reviews processes of coming into being, change, and decay.
The existence of physically qualified things and events implies their interaction, the universal physical relation. If
something could not interact with anything else it would be inert. It would not exist in a physical sense, and it would have no physical place in the cosmos. Groups, spatial figures, waves and oscillations do not interact, hence are not physical unless interlaced
with physical characters. The noble gases are called inert because they hardly ever take part in chemical compounds, yet their atoms are able to collide with each other. The most inert things among subatomic particles are the neutrinos, capable of flying
through the earth with a very small probability of colliding with a nucleus or an electron. Nevertheless, neutrinos are detectable and have been detected.
Wolfgang Pauli postulated the existence of neutrinos in 1930 in order to explain the phenomenon of β-radioactivity. Neutrinos were not detected experimentally before 1956. According to a physical criterion, neutrinos exist if they demonstrably interact
with other particles. Sometimes it is said that the neutrino was ‘observed’ for the first time in 1956. To say so, one has to stretch the concept of ‘observation’ quite far. In no experiment can neutrinos be seen, heard, smelled,
tasted or felt. Even their path of motion cannot be made visible in any experiment. But in several kinds of experiment, the energy and momentum (both magnitude and direction) of individual neutrinos can be calculated from observable phenomena. For a
physicist, this provides sufficient proof for their physical existence as interacting particles.
The universality of the relation frames allows science to compare characters with each other and to determine their specific
relations. The projections of the physical relation frame onto the preceding frames allow us to measure these relations. Measurability is the base of the mathematization of the exact sciences. It allows of applying statistics and designing mathematical models
for natural and artificial systems.
The simplest case of interaction concerns two isolated systems interacting only with each other. Thermodynamics characterizes an isolated
or closed system by magnitudes like energy and entropy. The two systems have thermal, chemical, or electric potential differences, giving rise to currents creating entropy. According to the second law of thermodynamics, this interaction is irreversible. Here
‘system’ is a general expression for a bounded part of space inclusive of the enclosed matter and energy. A closed system does not exchange energy or matter with its environment. Entropy can only be defined properly if the system concerned is in
internal equilibrium and isolated from its environment.
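The irreversibility of such currents can be made explicit for the simplest case: a quantity of heat Q flowing from a hot reservoir at temperature T_h to a cold one at T_c. The entropy balance below is a textbook-style sketch with arbitrary example numbers.

```python
def entropy_created(Q, T_hot, T_cold):
    """Net entropy change (J/K) when heat Q (J) flows from T_hot to T_cold (K)."""
    dS_hot = -Q / T_hot      # hot reservoir loses entropy
    dS_cold = +Q / T_cold    # cold reservoir gains more entropy than the hot one loses
    return dS_hot + dS_cold

# Example: 100 J of heat flowing from a 400 K body to a 300 K body.
dS = entropy_created(Q=100.0, T_hot=400.0, T_cold=300.0)
print(f"entropy created: {dS:.4f} J/K")  # positive: the current is irreversible
```

Because T_cold < T_hot, the sum Q/T_cold − Q/T_hot is always positive; reversing the current would require ΔS < 0, which the second law forbids for an isolated pair of reservoirs.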
In kinematics, an interactive event may have the character of a collision, minimally leading to a change in the state of motion of the colliding subjects. Often, the internal state
of the colliding subjects changes as well. Except for the boundary case of an elastic collision, these processes are subject to the physical order of irreversibility. Frictionless motion influenced by a force is the standard example of a reversible interaction.
In fact, it is also a boundary case, for any kind of friction or energy dissipation causes motion to be irreversible.
The law of inertia expresses the independence of uniform motion from physical interaction. It confirms the existence of uniform and rectilinear motions having no physical cause. This is an abstraction, for concrete
things experiencing forces have a physical aspect as well. In reality a uniform rectilinear motion only occurs if the forces acting on the moving body balance each other.
Kinetic time is symmetric with respect to past and future. If in the description of a motion the time parameter (t) is replaced by its reverse (−t), we achieve a valid description of a possible motion. In the absence of friction or any other
kind of energy dissipation, motion is reversible. By distinguishing past and future we are able to discover cause-effect relations, assuming that an effect never precedes its cause. According to relativity theory, the order of events having a causal relation
is in all inertial systems the same, provided that time is not reversed.
In our common understanding of time, the discrimination of past and future is a matter of course, but in the philosophy of science it is problematic. The existence of irreversible processes
cannot be denied. All motions with friction are irreversible. Apparently, the absorption of light by an atom or a molecule is the reverse of emission, but Albert Einstein demonstrated that the reverse of (stimulated) absorption is stimulated emission
of light, making spontaneous emission a third process, having no reverse. This applies to radioactive processes as well. The phenomenon of decoherence makes most quantum processes irreversible.
Only wave motion subject to Erwin Schrödinger’s equation is symmetric in time. Classical mechanics usually expresses interaction by a force between two subjects, this relation being symmetric according to Newton’s third law of motion. However,
this law is only applicable to spatially separated subjects if the time needed to establish the interaction is negligible, i.e., if the action at a distance is (almost) instantaneous. Einstein made clear that interaction always needs time, hence even interaction
at a distance is asymmetric in time.
Irreversibility does not imply that the reverse process is impossible. It may be less probable, or require quite different initial
conditions. The transport of heat from a cold to a hotter body (as occurs in a refrigerator) demands different circumstances from the reverse process, which occurs spontaneously if the two bodies are not thermally isolated from each other. A short-lived point-like
source of light causes a flash expanding in space. It is not impossible but practically very difficult to reverse this wave motion, for instance applying a perfect spherical mirror with the light source at the centre. But even in this case, the reversed motion
is only possible thanks to the first motion, such that the experiment as a whole is still irreversible.
Yet, irreversibility as a temporal order is philosophically controversial, for it does not fit into the reductionist worldview influenced by nineteenth-century mechanism.
This worldview assumes each process to be reducible to motions of as such unchangeable pieces of matter, interacting through Newtonian forces. Ludwig Boltzmann attempted to bridge reversible motion and irreversible processes by means of the concepts of probability
and randomness. In order to achieve the intended results, he had to assume that the realization of chances is irreversible. According to Hans Reichenbach, ‘the direction of time is supplied by the direction of entropy, because the latter direction is
made manifest in the statistical behaviour of a large number of separate systems, generated individually in the general drive to more and more probable states.’
But he also observes: ‘The inference from time to entropy leads to the same result whether it is referred to the following or to preceding events’.
One may conclude that ‘… the one great law of irreversibility (the Second Law) cannot be explained from the reversible laws of elementary particle mechanics…’.
It is sometimes stated that all ‘basic’ laws of physics are symmetrical in time. This seems to be true as far as kinetic time is concerned, and if any law that
belies temporal symmetry (like the second law of thermodynamics, or the law for spontaneous decay) is not considered ‘basic’. Anyhow, all philosophical attempts to reduce irreversibility to the subject side of the physical aspect of reality have failed.
4.2. Projections of physical interaction
Interaction is first of all subject to general laws independent of the specific character
of the things involved. Some conservation laws are derivable from Albert Einstein’s principle of relativity, stating that the laws of physics are independent of the motion of inertial systems.
Being the physical subject-subject relation, interaction may be analysed with the help of quantitative magnitudes like energy, mass, and charge; spatial concepts like force, momentum, field strength,
and potential difference; as well as kinetic expressions like currents of heat, matter, or electricity.
Like interaction, energy, force, and current are abstract
concepts. Yet these are not merely covering concepts without physical content. They can be specified as projections of characteristic interactions like the electromagnetic one. Electric energy, gravitational force, and the flow of heat specify the abstract
concepts of energy, force, and current.
For energy to be measurable, it is relevant that one concrete form of energy is convertible into another one. For instance, a generator
transforms mechanical energy into electric energy. Similarly, a concrete force may balance another force, whereas a concrete current accompanies currents of a different kind. This means that characteristically different interactions are comparable:
they can be measured with respect to each other. The physical subject-subject relation, the interaction projected as energy, force, and current, is the foundation of the whole system of measuring, characteristic for astronomy, biology, chemistry,
physics, as well as technology. The concepts of energy, force, and current enable us to determine physical subject-subject relations objectively.
Measurement of a quantity
requires several conditions to be fulfilled. First, a unit should be available. A measurement compares a quantity with an agreed unit. Secondly, a magnitude requires a law, a metric, determining how it is to be projected on a set
of numbers, on a scale. The third requirement, the availability of a measuring instrument, cannot always be directly satisfied. A magnitude like entropy can only be calculated from measurements of other magnitudes. Fourth, therefore, there must
be a fixed relation between the various metrics and units, a metrical system. This allows of the application of measured properties in theories. Unification of units and scales such as the metric system is a necessary requirement for the communication
of both measurements and theories.
I shall discuss the concepts of energy, force, and current in some more detail. It is by no means evident that these concepts are the
most general projections of interaction. Rather, their development has been a long and tedious process, leading to a general unification of natural science, to be distinguished from a more specific unification to be discussed later on.
Since the middle of the nineteenth century, energy has been the most important quantitative expression of physical, chemical, and biotic interactions.
As such it has superseded mass, in particular since it became known that mass and energy are equivalent, according to physics’ most famous formula, E = mc². The formula means that each
amount of energy corresponds with an amount of mass and conversely. It does not mean that mass is a form of energy, or can be converted into energy, as is often misunderstood. Energy is specifiable as kinetic and potential energy, thermal energy, nuclear
energy, or chemical energy. Affirming the total energy of a closed system to be constant, the law of conservation of energy implies that one kind of energy can be converted into another one, but not mass into energy. For this reason, energy forms a universal
base for comparing various types of interaction.
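The equivalence E = mc² makes this comparison concrete. The sketch below computes the energy corresponding to one gram of mass and, conversely, the mass corresponding to one kilowatt-hour; the numbers illustrate why the mass equivalent of everyday energy exchanges goes unnoticed.

```python
c = 2.99792458e8   # speed of light, m/s

def energy_of_mass(m):
    """Energy (J) corresponding to mass m (kg), by E = m*c**2."""
    return m * c**2

def mass_of_energy(E):
    """Mass (kg) corresponding to energy E (J)."""
    return E / c**2

print(f"1 gram corresponds to {energy_of_mass(1e-3):.2e} J")   # about 9e13 J
kWh = 3.6e6  # joules in one kilowatt-hour
print(f"1 kWh corresponds to {mass_of_energy(kWh):.2e} kg")    # about 4e-11 kg
```

A kilowatt-hour of converted energy corresponds to some tens of picograms of mass, far below the precision of any balance, which is why classical physics could treat mass and energy as independently conserved.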
Before energy, mass had become a universal measure for the amount of matter, serving as a measure for gravity as well as for
the amount of heat that a subject absorbs when heated by one degree. Energy and mass are general expressions of physical interaction. This applies to entropy and related thermodynamic concepts too. In contrast, the rest energy and the rest mass of a particle
or an atom are characteristic magnitudes.
Velocity is a measure for motion, but if it concerns physically qualified things, linear momentum (quantity of motion, the product
of mass and velocity) turns out to be more significant. The same applies to angular momentum (quantity of rotation, the product of moment of inertia and angular frequency; the angular frequency equals 2π times the frequency, and the moment of inertia expresses
the distribution of a body’s matter with respect to a rotation axis). In the absence of external forces, linear and angular momentum are subject to conservation laws. Velocity, linear and angular momentum, and moment of inertia are not expressed by
a single number (a scalar) but by vectors or tensors. Relativity theory combines energy (a scalar) with linear momentum (a vector with three components) into a single vector, having four components.
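That energy and momentum combine into one four-vector can be verified numerically: under a Lorentz boost, E and p change separately, but E² − (cp)² stays equal to the squared rest energy. The sketch below uses units in which c = 1 and arbitrary sample values.

```python
import math

def boost(E, p, v):
    """Lorentz boost along the momentum axis, in units with c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    E_prime = gamma * (E - v * p)
    p_prime = gamma * (p - v * E)
    return E_prime, p_prime

m0 = 1.0                      # rest mass (arbitrary units, c = 1)
p = 0.5                       # sample momentum
E = math.sqrt(m0**2 + p**2)   # E**2 = E0**2 + p**2 with c = 1

E2, p2 = boost(E, p, v=0.6)
invariant_before = E**2 - p**2
invariant_after = E2**2 - p2**2
print(invariant_before, invariant_after)  # both equal m0**2
```

The invariant combination plays the same role for the energy-momentum four-vector as the rest length does for a spatial interval.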
According to Isaac Newton’s
third law, the mechanical force is a subject-subject relation. If A exerts a
force F on B, then B exerts a force –F on A. The minus sign indicates that the two forces being equal in magnitude have opposite directions. The third law has exerted a strong influence on
the development of physics during a quite long time. In certain circumstances, the law of conservation of linear momentum can be derived from it. However, nowadays physicists allot higher priority to this conservation law than to Newton’s third law.
In order to apply Newton’s laws when more than one force is acting, we have to consider the forces simultaneously. This does not lead to problems in the case of two forces acting on the same body. But the third law is especially important for action
at a distance, inherent in the Newtonian formulation of gravity, electricity, and magnetism. In Albert Einstein’s theory of relativity, simultaneity at a distance turns out to depend on the motion of the reference system. The laws of conservation of
linear momentum and energy turn out to be easier to amend to relativity theory than Newton’s third law. Now one describes the interaction as an exchange of energy and momentum (mediated by a field particle like a photon). This exchange requires a certain
span of time.
Newton’s second law provides the relation between force and momentum: the net force equals the change of momentum per unit of time. The law of inertia
seems to be deducible from Newton’s second law. If the force is zero, momentum and hence velocity is constant, or so it is argued. However, if the first law were not valid, there could be a different law, assuming that each body experiences a frictional
force, dependent on speed, in a direction opposite to the velocity. (In its most simple form, F = −bv, with b > 0.) Accordingly, if the total force on a body is zero, the body would be at rest. A unique
reference system would exist in which all bodies on which no forces act would be at rest. This is the nucleus of Aristotle’s mechanics, but it contradicts both the classical principle of relativity and the modern one. The principle of relativity is an
alternative expression of the law of inertia, pointing out that absolute (non-relative) uniform motion does not exist. Just like spatial position on the one hand and interaction on the other side, motion is a universal relation.
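The hypothetical Aristotelian law mentioned above can be simulated. With F = −bv as the only force, every body ends up at rest, singling out one preferred rest frame. The sketch below integrates m·dv/dt = −bv with a simple Euler step and compares the result with the exact exponential decay; the values of m, b, and v0 are arbitrary examples.

```python
import math

m, b = 1.0, 0.5           # mass and friction coefficient (arbitrary units)
v0 = 10.0                 # initial velocity
dt, steps = 0.001, 20000  # Euler integration of m*dv/dt = -b*v

v = v0
for _ in range(steps):
    v += (-b * v / m) * dt

t = dt * steps
v_exact = v0 * math.exp(-b * t / m)   # analytic solution
print(f"numerical: {v:.6f}, exact: {v_exact:.6f}")  # both near zero
```

Under such a law, uniform motion would not persist; every body would approach the unique rest frame, which is exactly what the classical and modern principles of relativity exclude.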
Besides acting on a rigid body, a force may act on a fluid, usually in the form of a pressure
(i.e., force per area). A pressure difference causes a change of volume or a current subject to Daniel Bernoulli’s law, if the fluid is incompressible. Besides, there are non-mechanical forces causing currents. A temperature gradient causes a heat current,
chemical potentials drive material flows (e.g., diffusion) and an electric potential difference directs an electric current.
To find a metric for a thermodynamic or an
electric potential is not an easy task. On the basis of an analysis of the idealized cycles devised by Sadi Carnot, William Thomson (later Lord Kelvin) established the theoretical metric for the thermodynamic temperature scale.
The practical definition of the temperature scale takes this theoretical ‘absolute’ scale as a norm. The definition of the metric of pressure is relatively easy, but finding the metric of electric potential caused almost as much trouble as the
development of the thermodynamic temperature scale.
The Newtonian force can
sometimes be written as the derivative of a potential energy (i.e., energy as a function of spatial position). Since the beginning of the nineteenth century, the concept of a force has been incorporated in the concept of a field. At first a field was considered
merely a mathematical device, until James Clerk Maxwell proved the electromagnetic field to have physical reality of its own. A field is a physical function projected on space. Usually one assumes the field to be continuous and differentiable almost everywhere.
A field may be constant or variable. There are scalar fields (like the distribution of temperature in a gas), vector fields (like the electrostatic field), and tensor fields (like the electromagnetic field). A field of force is called ‘conservative’
if the forces are derivable from a space-dependent potential energy. This applies to the classical gravitational and electrostatic fields. It does not apply to the force derived by Hendrik Antoon Lorentz, because it depends on the velocity of a charged body
with respect to a magnetic field. (The Lorentz force and Maxwell’s equations for the electromagnetic field are derivable from a gauge-invariant vector potential. ‘Gauge-invariance’ is the relativistic successor to the static concept of a potential.)
A further analysis of thermodynamics and electricity makes clear that current is a third projection, now from the physical onto the kinetic relation frame. The concept of entropy points to a general
property of currents. In each current, entropy is created, making the current irreversible. (A current in a superconductor is a boundary case. In a closed superconducting circuit without a source, an electric current may persist indefinitely, whereas a normal
current would die out very fast.)
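The contrast between a superconducting and a normal circuit can be sketched with the standard L/R decay law for a current loop, i(t) = i₀·exp(−Rt/L). With R = 0 the exponent vanishes and the current persists; with a normal resistance it dies out almost immediately. The component values below are arbitrary illustrations.

```python
import math

def current(t, i0, R, L):
    """Current in a closed loop with resistance R and inductance L at time t."""
    return i0 * math.exp(-R * t / L)

i0, L = 1.0, 1e-3     # ampere, henry (arbitrary example values)

i_normal = current(t=1.0, i0=i0, R=1.0, L=L)   # normal metal loop after 1 s
i_super = current(t=1.0, i0=i0, R=0.0, L=L)    # superconducting loop: R = 0

print(f"normal loop: {i_normal:.3e} A")        # essentially zero
print(f"superconducting loop: {i_super} A")    # unchanged
```

The dissipated energy in the normal loop appears as heat, creating entropy; the persistent supercurrent creates none, which is why it counts as a boundary case.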
In a system in which currents occur, entropy increases. Only if a system as a whole is in equilibrium, there are no net currents and the
entropy is constant. Just as several mechanical forces are able to balance each other, so are thermodynamic forces and currents. This leads to mutual relations like thermo-electricity, the phenomenon that a heat current causes an electric current (Thomas Seebeck’s
effect) or the reverse (Jean Peltier’s effect). This is applied in the thermo-electric
thermometer, measuring a temperature difference by an electric potential difference. Relations between various types of currents are subject to a symmetry relation discovered by William Thomson (Kelvin) and generalized by Lars Onsager.
The laws of thermodynamics are generally valid, independent of the specific character of a physical thing or aggregate. For a limited set of specific systems (e.g., a gas
consisting of similar molecules), statistical mechanics is able to derive the second law from mechanical interactions, starting from assumptions about their probability.
Whereas the thermodynamic law states that the entropy in a closed system is constant or increasing, the statistical law allows of fluctuations. The source of this difference is that thermodynamics supposes matter to be continuous, whereas statistical mechanics
takes into account the molecular character of matter.
There are many different interactions,
like electricity, magnetism, contact forces (e.g., friction), chemical forces (e.g., glue), or gravity. Some are reducible to others. The contact forces turn out to be of an electromagnetic nature, and chemical forces are reducible to electrical ones.
Besides the general unification discussed above allowing of the comparison of widely differing interactions, a characteristic unification can be discerned. James Clerk Maxwell’s
unification of electricity and magnetism implies that these interactions have the same character, being subject to the same specific cluster of laws and showing symmetry. The fact that they can still be distinguished points to an asymmetry, a breaking of symmetry.
The study of characteristic symmetries and symmetry breaks supplies an important tool for achieving a characteristic unification of natural forces.
Since the middle of
the twentieth century, physics has discerned four fundamental specific interactions: gravity and electromagnetic interaction, besides the strong and weak nuclear forces. Later on, the electromagnetic and weak forces were united into the electroweak interaction,
whereas the strong force is reducible to the colour force between quarks. In the near future, physicists expect to be able to unite the colour force with the electroweak interaction. The ultimate goal, the unification of all four forces, is still far away.
About 1900, the ‘electromagnetic world view’ supposed that all physical and chemical interactions could be reduced to electromagnetism.
Just like the modern standard model, it aimed at deducing the (rest-) mass of elementary particles from this supposed fundamental interaction.
These characteristic interactions are distinguished in several ways, first by the particles between which they act. Gravity acts between all particles, the colour force only between
quarks, and the strong force only between particles composed from quarks. A process involving a neutrino is weak, but the reverse is not always true.
A second distinction is their relative strength. Gravity is weakest and only plays a part because it cannot be neutralized. It manifests itself only on a macroscopic scale. The other forces are so effectively neutralized that the electrical interaction was largely unknown until
the eighteenth century, and the nuclear forces were not discovered before the twentieth century. Gravity conditions the existence of stars and systems of stars.
A third distinction is their range: gravity and electromagnetic interaction have an infinite range, whereas the other forces do not act beyond the limits of an atomic nucleus. For gravity and electricity the inverse-square law is valid (the force is inversely proportional to the square of the distance
from a point-like source). This law is classically expressed in Isaac Newton’s law of gravity and Charles Coulomb’s electrostatic law, with mass and charge, respectively, acting as a measure of the strength of the source. A comparable law does not apply
to the other forces, and the lepton and baryon numbers do not act as a measure for their sources. As a function of distance, the weak interaction decreases much faster than quadratically. The colour force is nearly constant over a short distance (of the order
of the size of a nucleus), beyond which it decreases abruptly to zero.
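Because the distance dependence cancels in the ratio, the inverse-square laws of Newton and Coulomb allow a direct comparison of the strengths of gravity and electricity. A sketch with standard constants, comparing the two forces between a pair of electrons:

```python
# Both gravity and electrostatics obey an inverse-square law; comparing them
# for two electrons shows why gravity is by far the weakest interaction.
# Constants are approximate CODATA values in SI units.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
K = 8.988e9          # Coulomb constant, N m^2 C^-2
M_E = 9.109e-31      # electron mass, kg
Q_E = 1.602e-19      # elementary charge, C

def gravitational_force(m1, m2, r):
    return G * m1 * m2 / r**2

def coulomb_force(q1, q2, r):
    return K * abs(q1 * q2) / r**2

# The distance cancels in the ratio: for two electrons the electric force
# exceeds the gravitational one by roughly 4 * 10^42, at any separation.
r = 1e-10  # arbitrary; the ratio is distance-independent
ratio = coulomb_force(Q_E, Q_E, r) / gravitational_force(M_E, M_E, r)
```

The enormous ratio illustrates the point in the text: gravity only matters on a macroscopic scale because, unlike charge, mass cannot be neutralized.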
The various interactions also differ because of the field particles involved. Each fundamental interaction
corresponds to a field in which quantized currents occur. For gravity, this is an unconfirmed hypothesis. Field particles have an integral spin and they are bosons. If the spin is even (0 or 2), it concerns an attractive force between equal particles and a repulsive force between opposite particles (if applicable). For an odd spin it is the other way around. The larger the field particle’s rest mass, the shorter is the range of the interaction. If the rest mass of the field particles is zero (as is
the case with photons and gravitons), the range is infinite. Unless mentioned otherwise, the field particles are electrically neutral.
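The inverse relation between a field particle’s rest mass and the range of the interaction it mediates can be estimated from the Heisenberg relations as range ≈ ħc divided by the rest energy. A rough sketch, with approximate rest energies:

```python
# Estimate of the range of an interaction mediated by a field particle of a
# given rest energy: range ~ hbar*c / (rest energy), with hbar*c in MeV*fm.
# The rest energies below are approximate published values.
HBAR_C = 197.33  # MeV * femtometre

def interaction_range_fm(rest_energy_mev):
    """Approximate range (in fm) of a force mediated by a field particle of
    the given rest energy; infinite for a massless field particle."""
    if rest_energy_mev == 0:
        return float('inf')
    return HBAR_C / rest_energy_mev

photon_range = interaction_range_fm(0)          # infinite: electromagnetism
pion_range = interaction_range_fm(139.6)        # ~1.4 fm: about nuclear size
w_boson_range = interaction_range_fm(80_400.0)  # far below nuclear size: weak force
```

The massless photon gives electromagnetism its infinite range; the massive W boson confines the weak interaction to well within a nucleus.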
The mean lifetime of spontaneous
decay differs widely. The stronger the interaction causing a transition, the faster the system changes. If a particle decays because of the colour force or strong force, it happens in a very short time (of the order of 10⁻²³ to 10⁻¹⁹ sec). Particles decaying due to weak interaction have a relatively long lifetime (10⁻¹² sec for a tauon up to 900 sec for a free neutron). Electromagnetic interaction is more or less in between.
4.4. The standard model
In high-energy physics, symmetry considerations and group theory play an important part in the analysis of collision processes. New properties like isospin and strangeness have led
to the introduction of groups named SU(2) and SU(3) and the discovery of at first three, later six quarks. (SU(3) means special unitary group with three variables. The particles in a representation of this group have the same spin and parity (together one
variable), but different values for strangeness and one component of isospin.)
Quantum electrodynamics reached its summit shortly after the Second World War, but theories of the other interactions proved less manageable and were developed only after 1970. Each field has a symmetry property called gauge invariance, related to the laws of conservation of electric charge, baryon number and lepton number. Symmetry is as much an empirical property
as any other one. After the discovery of antiparticles it was assumed that charge conjugation C (symmetry with respect to the interchange of a particle with its antiparticle), parity P (mirror symmetry) and time reversal T are properties
of all fundamental interactions. Since 1956, it is experimentally established that β-decay has no mirror symmetry unless combined with charge conjugation (CP). In 1964 it turned out that weak interactions are only symmetrical with respect to
the product CPT, such that even T alone is no longer universally valid.
The appropriate theory has been called the standard model since the discovery of the J/ψ particle in 1974; it successfully explains a number of properties and interactions of subatomic particles. Dating from the seventies of the twentieth century, it was tentatively confirmed in 2012 by the experimental discovery of Peter Higgs’
particle, already predicted in 1964. Tentatively: the model does not include gravity, and some recently discovered properties of neutrinos do not quite fit into it. The general theory of relativity is still at variance with quantum electrodynamics, with the
electroweak theory of Steven Weinberg and Abdus Salam, as well as with quantum chromodynamics.
5. The character of electrons
and of other leptons
Ontology, the doctrine
of on (or ontos, Greek for being), aims to answer the question of how matter is composed according to present-day insights. Since the beginning of the twentieth century, many kinds of particles received names ending with on, like
electron, proton, neutron and photon. At first sight, the relation with ontology seems to be obvious. Historically the suffix –on goes back to the electron. Whether the connection with ontology has really played a part is unclear.
The word electron comes from the Greek word for amber or fossilized resin, since antiquity known for properties that we now recognize as static electricity. George Stoney proposed an elementary amount of charge in 1874 and named it electron in 1891. Only in the twentieth
century, electron became the name of the particle identified by Thomson in 1897. Rutherford introduced the names proton and neutron in 1920 (long before the actual discovery of the neutron in 1932). Lewis baptized the photon in 1926, 21 years after Einstein
proposed its existence. Yet, not many physicists would affirm that an electron is the essence of electricity, that the proton forms the primeval matter, that the neutron and its little brother, the neutrino, have the nature of being neutral, or that in the
photon light comes into being, and in the phonon sound. In pion, muon, tauon, and kaon, on is no more than a suffix of the letters π, μ, τ and K, whereas Paul Dirac baptized fermion and boson after Enrico Fermi and Satyendra Bose. In 1833
Michael Faraday, advised by William Whewell, introduced the words ion, cation, and anion, referring to the Greek word for ‘to go’. In an electrolyte, an ion moves from or to an electrode, an anode or cathode (names proposed by Whewell as well). An intruder
is the positive electron. Meant as positon, the positron received an additional r, possibly under the influence of electron or new words like magnetron and cyclotron, which however are machines, not particles.
Only after 1925 did quantum physics and high-energy physics allow of the study of the characters of elementary physical things. Most characters were discovered after 1930. But the discovery of the electron (1897),
of the internal structure of an atom, composed from a nucleus and a number of electrons (1911) and of the photon (1905) preceded the quantum era. These are typical examples of characters founded in the quantitative, spatial, and kinetic projections of physical
interaction. Above, these projections were identified as energy, force or field, and current, respectively.
An electron is characterized by a specific amount of mass and charge and is therefore quantitatively founded. The foundation is not in the quantitative relation frame itself (because that is not physical), but in the most important
quantitative projection of the physical relation frame. This is energy, expressing the quantity of interaction. Like other particles, an electron has a typical rest energy, besides specific values for its electric charge, magnetic moment and lepton number.
As we have seen, an electron has the character of a wave packet as well, kinetically qualified and spatially founded, anticipating physical interactions. An electron has a specific
physical character and a generic kinetic character. The two characters are interlaced within the at first sight simple electron. The combined dual character is called the wave-particle duality. Electrons share it with all other elementary
particles. As a consequence of the kinetic character and the inherent Heisenberg relations, the position of an electron cannot be determined much better than within 10⁻¹⁰ m (about the size of a hydrogen atom). But the physical character implies that the electron’s collision diameter (being a measure of its physical size) is less than 10⁻¹⁷ m.
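The quoted limit of about 10⁻¹⁰ m can be checked against Heisenberg’s relation Δx·Δp ≥ ħ/2, taking for the momentum spread the order of an electron’s orbital momentum in hydrogen (an assumption made here for illustration):

```python
# Back-of-envelope check: with a momentum spread of the order of the
# electron's orbital momentum in hydrogen, Heisenberg's relation
# dx * dp >= hbar/2 bounds the position uncertainty near the atomic scale.
HBAR = 1.055e-34   # J s
M_E = 9.109e-31    # electron mass, kg
V_ORBIT = 2.19e6   # typical orbital speed in hydrogen, m/s (roughly alpha * c)

def min_position_uncertainty(dp):
    """Smallest dx compatible with Heisenberg's relation for a given dp."""
    return HBAR / (2.0 * dp)

dx = min_position_uncertainty(M_E * V_ORBIT)  # of the order of 10^-11 m
```

The result lands within an order of magnitude of the Bohr radius, consistent with the claim that a bound electron cannot be localized much better than the size of the atom.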
Except for quarks, all quantitatively founded particles are leptons, to be distinguished from field particles
and baryons. Leptons are not susceptible to the strong nuclear force or the colour force. They are subject to the weak force, sometimes to electromagnetic interaction, and like all matter to gravity. Each lepton has a positive or negative value for the lepton
number (L), whose significance appears in the occurrence or non-occurrence of certain processes. Each process is subject to the law of conservation of lepton number, i.e., the total lepton number cannot change. For instance, a neutron (L=0) does not decay
into a proton and an electron, but into a proton (L=0), an electron (L=1) and an antineutrino (L=-1). The lepton number is just as characteristic for a particle as its electric charge. For non-leptons the lepton number is 0, for leptons it is +1 or -1.
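This selection rule can be made mechanical: a process is allowed only if charge, baryon number and lepton number each balance between the initial and final particles. A minimal sketch (the bookkeeping scheme is an illustration, not a standard library):

```python
# Each particle is recorded as a triple:
# (charge in units of e, baryon number, lepton number).
PARTICLES = {
    'n':     (0, 1, 0),    # neutron
    'p':     (1, 1, 0),    # proton
    'e-':    (-1, 0, 1),   # electron
    'nu':    (0, 0, 1),    # neutrino
    'nubar': (0, 0, -1),   # antineutrino
}

def conserved(initial, final):
    """True if charge, baryon number and lepton number are all conserved."""
    totals = lambda names: tuple(sum(PARTICLES[n][i] for n in names) for i in range(3))
    return totals(initial) == totals(final)

# Neutron decay into proton + electron alone violates lepton number...
forbidden = conserved(['n'], ['p', 'e-'])          # False
# ...but with an antineutrino all three numbers balance.
allowed = conserved(['n'], ['p', 'e-', 'nubar'])   # True
```

The check reproduces the example from the text: the antineutrino’s lepton number of −1 is what makes the observed decay possible.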
Leptons satisfy a number of characteristic laws. Each particle has an electric charge being an integral multiple (positive, negative or zero) of the elementary charge. Each particle
corresponds with an antiparticle having exactly the same rest mass and lifetime, but opposite values for charge and lepton number. Having a half-integral spin, leptons are fermions satisfying the exclusion principle and the characteristic Fermi-Dirac statistics.
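The Fermi-Dirac statistics mentioned here assigns to a state of energy E the mean occupation 1/(exp((E−µ)/kT)+1), which never exceeds one, the statistical expression of the exclusion principle. A short sketch:

```python
import math

# Mean occupation of a single-particle state of energy E for fermions,
# with chemical potential mu and thermal energy kT (all in the same units).
def fermi_dirac(energy, mu, kT):
    return 1.0 / (math.exp((energy - mu) / kT) + 1.0)

# At the chemical potential the occupation is exactly 1/2; states well below
# mu are nearly full, states well above mu nearly empty, and no state is
# ever occupied by more than one fermion on average.
half = fermi_dirac(1.0, 1.0, 0.025)
low = fermi_dirac(0.5, 1.0, 0.025)   # well below mu: close to 1
high = fermi_dirac(1.5, 1.0, 0.025)  # well above mu: close to 0
```

The energy values are arbitrary illustrations; the shape of the distribution, not the numbers, carries the point.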
Three generations of leptons are known, each consisting of a negatively charged particle, a neutrino, and their antiparticles. These generations are related to similar generations
of quarks. A tauon decays spontaneously into a muon, and a muon into an electron. Both are weak processes, in which simultaneously a neutrino and an anti-neutrino are emitted.
The leptons display little diversity: their number is exactly six. Like their diversity, the variation of leptons is restricted. It only concerns their external relations: their position, their linear and angular momentum, and the orientation of their
magnetic moment or spin relative to an external magnetic field.
This description emphasizes the quantitative aspect of leptons. But leptons are first of all physically
qualified. Their specific character determines how they interact by electroweak interaction with each other and with other physical subjects, influencing their coming into being, change and decay.
Electrons are by far the most important
leptons, having the disposition to become part of systems like atoms, molecules and solids. The other leptons only play a part in high-energy processes. In order to stress the distinction between a definition and a character as a set of laws, I shall dwell
a little longer on a hundred years of development of our knowledge of the electron.
Although several scientists were involved, it is generally accepted that Joseph J. Thomson discovered the electron in 1897. He identified his cathode ray as a stream of particles and
established roughly the ratio e/m of their charge e and mass m, by measuring how an electric and/or magnetic field deflects the cathode rays. In 1899 Thomson determined the value of e separately, allowing him to
calculate the value of m. Since then, the values of m and e, which may be considered as defining the electron, have been determined with increasing precision. In particular Robert Millikan did epoch-making work between 1909 and 1916.
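The logic of Thomson’s determination can be sketched as follows: crossed electric and magnetic fields select the speed v = E/B at which the beam passes undeflected, and the radius r of the circular path in the magnetic field alone then yields e/m = v/(Br). The field and radius values below are illustrative assumptions, chosen to land near the modern value of about 1.76 × 10¹¹ C/kg:

```python
# Sketch of a Thomson-style e/m determination. E_field in V/m, B_field in
# tesla, radius in metres; all three values are illustrative, not historical.
def charge_to_mass(E_field, B_field, radius):
    v = E_field / B_field            # speed selected by the crossed fields
    return v / (B_field * radius)    # e/m from the magnetic deflection

em = charge_to_mass(E_field=2.0e4, B_field=1.0e-3, radius=0.1137)
```

With these inputs the beam speed is 2 × 10⁷ m/s and the computed ratio falls close to the accepted electron value.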
Almost simultaneously with Thomson, Hendrik Lorentz observed that the Zeeman effect (1896) could be explained by the presence in atoms of charged particles having the same value for e/m as the electron. Shortly afterwards, the particles emerging
from β-radioactivity and the photoelectric effect were identified as electrons.
The mass m depends on the electron’s speed, as was first established
experimentally by Walter Kaufmann, later theoretically by Albert Einstein. Since then, instead of the mass m the rest mass m₀ is characteristic for a particle. Between 1911 and 1913, Ernest Rutherford and Niels Bohr developed the
atomic model in which electrons move around a much more massive nucleus. The orbital angular momentum turned out to be quantized. In 1923 Louis de Broglie made clear that an electron sometimes behaves like a wave, interpreted as the bearer of probability by
Max Born in 1926. In 1925, Samuel Goudsmit and George Uhlenbeck suggested a new property, half-integral spin, connected to the electron’s intrinsic magnetic moment. In the same year, Wolfgang Pauli discovered the exclusion principle. Enrico Fermi and
Paul Dirac derived the corresponding statistics in 1926. Since then, the electron is a fermion, playing a decisive part in all properties of matter. In 1930 it became clear that in β-radioactivity besides the electron a neutrino emerges from a nucleus.
Neutrinos were later on recognized as members of the lepton family. β-radioactivity is not caused by electromagnetic interaction, but by the weak nuclear force. Electrons turned out not to be susceptible to strong nuclear forces. In 1931 the electron
got a brother, the positron or anti-electron. This affirmed that an electron has no eternal life, but may be created or annihilated together with a positron. In β-radioactivity, too, an electron emerges or disappears (in a nucleus, an electron cannot
exist as an independent particle), but apart from these processes, the electron is the most stable particle we know besides the proton. According to Paul Dirac, the positron is a hole in the nether world of an infinite number of electrons having a negative
energy. In 1953, the law of conservation of lepton number was discovered. After the Second World War, Richard Feynman, Julian Schwinger and Shin’ichiro Tomonaga developed quantum electrodynamics. This is a field theory in which the physical vacuum is
not empty, but is the stage of spontaneous creations and annihilations of virtual electron-positron pairs. Interaction with other (sometimes virtual) particles is partly responsible for the properties of each particle. A top performance is the theoretical calculation of the magnetic moment of the electron to eleven decimals, a precision only surpassed by the experimental measurement of the same quantity to twelve decimals. Moreover, the two values differ only in the eleventh decimal, within the theoretical margin of error. ‘The agreement between experiment and theory shown by these examples, the highest point in precision reached anywhere in the domain of particles and fields, ranks among the highest achievements of twentieth-century physics.’ Finally, the electron got two cousins, the muon and the tauon.
Besides these scientific developments, electronics revolutionized the world of communication, information, and control.
Since Joseph Thomson’s discovery, the concept of an electron has been changed and expanded considerably. Besides being a particle having mass and charge, it is now a wave, a top, a magnet, a fermion, half of a twin, and a lepton. Yet, few people doubt
that we are still talking about the same electron.
What the essence of an electron is appears to be a hard question, if ever posed. It may very well be a meaningless
question. But we achieve a growing insight into the laws constituting the electron’s character, determining the electron’s relations with other things and the processes in which it is involved. The electron’s charge
means that two electrons exert a force on each other according to the laws of Charles Coulomb and Hendrik Lorentz. The mass follows from the electron’s acceleration in an electric and/or magnetic field, according to James Clerk Maxwell’s laws.
The lepton number makes only sense because of the law of conservation of lepton number, allowing of some processes and prohibiting others. Electrons are fermions, satisfying the exclusion principle and the distribution law of Enrico Fermi and Paul Dirac.
The character of electrons is not logically given by a definition, but physically by a specific set of laws, which are successively discovered and systematically
connected by experimental and theoretical research.
An electron is to be
considered an individual satisfying the character described above. A much-heard objection to the assignment of individuality to electrons and other elementary particles is the impossibility of distinguishing one electron from another. Electrons are characteristically equal to each other, having much less variability than plants or animals, even less than atoms.
This objection can be traced back to the still influential worldview of mechanism.
This worldview assumed each particle to be identifiable by objective kinetic properties like its position and velocity at a certain time. Quantum physics observes that the identification of physically qualified things requires a physical interaction. In general,
this interaction influences the particle’s position and momentum. Therefore, the electron’s position and momentum cannot be determined with unlimited accuracy, as follows from Werner Heisenberg’s relations. This means that identification
in a mechanistic sense is not always possible. Yet, in an interaction such as a measurement, an electron manifests itself as an individual.
If an electron is part of an
atom, it can be identified by its state, because the exclusion principle precludes two electrons from occupying the same state. The two electrons in the helium atom exchange their states continuously without changing the state of the atom as a whole. But
it cannot be doubted that at any moment there are two electrons, each with its own mass, charge and magnetic moment. For instance, in the calculation of the energy levels the mutual repulsion of the two electrons plays an important part.
The individual existence of a bound electron depends on the binding energy being much smaller than its rest energy. Binding energy equals the energy needed to liberate an electron
from an atom. It varies from a few eV (the outer electrons) to several tens of keV (the inner electrons in a heavy element like uranium). The electron’s rest mass is about 0.5 MeV, much larger than its binding energy in a hydrogen atom (13.6 eV). To
keep an electron as an independent particle in a nucleus would require a binding energy of more than 100 MeV, much more than the electron’s rest energy of 0.5 MeV. For this reason, physicists argue that electrons in a nucleus cannot exist as
independent, individual particles, like they are in an atom’s shell.
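The criterion used here, a binding energy much smaller than the rest energy, can be written as a simple ratio test. The threshold of one tenth is a qualitative assumption of this sketch, not a value from the text:

```python
# Crude criterion for whether a bound particle keeps its individual
# existence: its binding energy must be well below its rest energy.
ELECTRON_REST_MEV = 0.511

def exists_independently(binding_energy_mev, rest_energy_mev, threshold=0.1):
    """True if the binding energy is small compared with the rest energy."""
    return binding_energy_mev < threshold * rest_energy_mev

# An electron in hydrogen (13.6 eV binding) easily passes the test...
in_atom = exists_independently(13.6e-6, ELECTRON_REST_MEV)
# ...but an electron confined to a nucleus would need > 100 MeV of binding.
in_nucleus = exists_independently(100.0, ELECTRON_REST_MEV)
```

The same test, applied below to nucleons (binding about 8 MeV against a rest energy of almost 1000 MeV), comes out the other way.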
In contrast, protons and neutrons in a nucleus satisfy the criterion that an independent particle
has a rest energy substantially larger than the binding energy. Their binding energy is about 8 MeV, their rest energy is almost 1000 MeV. A nucleus is capable of emitting an electron (this is β-radioactivity). The electron’s existence starts at
the emission and eventually ends at the absorption by a nucleus. Because of the law of conservation of lepton number, the emission of an electron is accompanied by the emission of an anti-neutrino, and at the absorption of an electron a neutrino is emitted.
This would not be the case if the electron could exist as an independent particle in the nucleus. Neutrinos are stable, their rest mass is zero or very small, and they are only susceptible to weak interaction. Neutrinos and antineutrinos differ by their parity, the one being left-handed, and the other right-handed. (This symmetry distinction is only possible for particles having zero rest mass. If neutrinos have a rest mass different from zero, as some experiments suggest, the theory has to be adapted with respect to parity.) That the three neutrinos differ from each other is established by processes in which they are or are not involved, but in what respect they differ is less clear. For some time, physicists expected the existence of a fourth generation, but the standard model restricts itself to three, because astrophysical cosmology implies the existence of at most three different types of neutrinos with their antiparticles.
More than as free particles, the electrons display their characteristic properties as components of atoms, molecules and solids, as well as in
processes. The half-integral spin of electrons was discovered in the investigation of atomic spectra. The electron’s fermion character largely determines the shell structure of atoms. In 1930, Wolfgang Pauli suggested the existence of the neutrino because
of the character of β-radioactivity. The lepton number was discovered by an analysis of specific nuclear reactions.
Electrons have the affinity or propensity of functioning
as a component of atoms and molecules because electrons share electromagnetic interaction with nuclei. Protons and electrons have equal but opposite charges, allowing of the formation of neutral atoms, molecules and solids. Electric neutrality is of tremendous
importance for the stability of these systems. This tertiary characteristic determines the meaning of electrons in the cosmos.
6. The quantum ladder
of composite systems
An important spatial manifestation of interaction is the force between two spatially separated bodies. An atom or molecule having a spatially
founded character consists of a number of nuclei and electrons kept together by the electromagnetic force. More generally, any interaction is spatially projected on a field.
Sometimes a field can be described as the spatial derivative of the potential energy. A set of particles constitutes a stable system if the potential energy has an appropriate shape, characteristic for the spatially founded structure. In a spatially
founded structure, the relative spatial positions of the components are characteristic, even if their relative motions are taken into account. Atoms have a spherical symmetry restricting the motions of the electrons. In a molecule, the atoms or ions have characteristic
relative positions, often with a specific symmetry. In each spatially founded character a number of quantitatively founded characters are interlaced.
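The remark that a field is sometimes the spatial derivative of a potential energy, and that stability corresponds to an appropriate shape of that potential, can be illustrated numerically. A sketch with a harmonic potential, chosen purely for illustration:

```python
# For a one-dimensional potential V(x), the force is F = -dV/dx, and a
# stable configuration sits at a minimum of V, where the force vanishes.
def force(V, x, h=1e-6):
    """Central-difference estimate of F = -dV/dx."""
    return -(V(x + h) - V(x - h)) / (2.0 * h)

# Assumed example potential: harmonic, V = 0.5 * k * x^2, giving the
# restoring force F = -k * x.
k = 2.0
V = lambda x: 0.5 * k * x**2

f_at_1 = force(V, 1.0)    # close to -k: pulls the particle back
f_at_0 = force(V, 0.0)    # zero at the equilibrium position
```

Any potential with a minimum behaves this way near equilibrium, which is why so many spatially founded structures are stable against small displacements.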
It is a remarkable fact that in an atom the nucleus acts like a quantitatively founded character, whereas the nucleus itself is a spatial configuration of protons and neutrons
kept together by forces. The nucleus itself has a spatially founded character, but in the atom it has the disposition to act as a whole, characterized by its mass, charge and magnetic moment. Similarly, a molecule or a crystal is a system consisting of a number
of atoms or ions and electrons, all acting like quantitatively founded particles. Externally, the nucleus in an atom and the atoms or ions in a molecule act as quantitatively founded wholes, as units, while preserving their own internal spatially founded structure.
However, an atom bound in a molecule is not completely the same as a free atom. In contrast to a nucleus, a free atom is electrically neutral and it has a spherical symmetry. Consequently,
it cannot easily interact with other atoms or molecules, except in collisions. In order to become a part of a molecule, an atom has to open up its tertiary character. This can be done in various ways. The atom may absorb or eject an electron, becoming an ion.
A common salt molecule does not consist of a neutral sodium atom and a neutral chlorine atom, but of a positive sodium ion and a negative chlorine ion, attracting each other by the Coulomb force. This is called heteropolar or ionic bonding. Any change of the
spherical symmetry of the atom’s electron cloud leads to the relatively weak Van der Waals interaction. A very strong bond results if two atoms share an electron pair. This homopolar or covalent bond occurs in diatomic molecules like hydrogen, oxygen
and nitrogen, in diamond and in many carbon compounds. Finally, especially in organic chemistry, the hydrogen bond is important. It means the sharing of a proton by two atom groups.
The possibility of being bound into a larger configuration is a very significant tertiary characteristic of many physically qualified systems, determining their meaning in the cosmos.
The first stable system studied by physics is the solar system, in the seventeenth century investigated by Johannes Kepler, Galileo Galilei, Christiaan
Huygens, and Isaac Newton. The law of gravity, mechanical laws of motion, and conservation laws determine the character of planetary motion. The solar system is not unique, there are more stars with planets, and the same character applies to a planet with
its moons, or to a double star. Any model of the system presupposes its isolation from the rest of the world, which is only approximately the case. This approximation is pretty good for the solar system, less good for the system of the sun and each planet
apart, and pretty bad for the system of earth and moon.
Spatially founded physical characters display a large disparity.
Various specific subtypes appear. According to the standard model, these characters form a hierarchy, called the quantum ladder.
At the first rung there are six (or eighteen, see below) different quarks with their antiquarks, grouped into three generations related to those of the leptons, as follows from analogous processes.
Like a lepton, a quark is quantitatively founded, it has no structure. But a quark cannot exist as a free particle. Quarks are confined as a duo in a meson (e.g., a pion) or as a trio in a baryon (e.g., a proton or a neutron) or
an antibaryon. Confinement is a tertiary characteristic, but it does not stand apart from the secondary characteristics of quarks, their quantitative properties. Whereas quarks have a charge of ±1/3 or ±2/3 times the elementary charge, their combinations satisfy the law that the electric charge of a free particle can only be an integral multiple of the elementary charge. Likewise, in confinement the sum of the baryon numbers (+1/3 for quarks, −1/3 for antiquarks) always yields an integral number. For a meson this
number is 0, for a baryon it is +1, for an antibaryon it is -1.
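These combination laws can be verified by enumeration: although a single quark carries fractional charge and fractional baryon number, every quark-antiquark duo and every three-quark trio comes out integral. A sketch using exact fractions:

```python
from fractions import Fraction
from itertools import product

# Quark charges in units of the elementary charge: up-type +2/3, down-type
# -1/3; an antiquark carries the opposite charge. Baryon number is +1/3 per
# quark, so a three-quark trio automatically has baryon number 1.
UP_TYPE, DOWN_TYPE = Fraction(2, 3), Fraction(-1, 3)
QUARKS = [UP_TYPE, DOWN_TYPE]

def is_integral(x):
    return Fraction(x).denominator == 1

# Mesons: a quark plus an antiquark (charge of the antiquark is negated).
mesons_ok = all(is_integral(q + (-aq)) for q, aq in product(QUARKS, repeat=2))

# Baryons: any three quarks.
baryons_ok = all(is_integral(a + b + c) for a, b, c in product(QUARKS, repeat=3))

# Examples from the text: proton (uud) has charge +1, neutron (udd) charge 0.
proton_charge = UP_TYPE + UP_TYPE + DOWN_TYPE
neutron_charge = UP_TYPE + DOWN_TYPE + DOWN_TYPE
```

The enumeration covers only charge and baryon number; the colour restriction discussed below cuts the allowed combinations down further.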
Between quarks the colour force is acting, mediated by gluons. The colour force has no effect on leptons
and is related to the strong force between baryons. In a meson the colour force between two quarks hardly depends on their mutual distance, meaning that they cannot be torn apart. If a meson breaks apart, the result is not two separate quarks but two quark-antiquark pairs. Quarks are fermions, they satisfy the exclusion principle. In a meson or baryon, two identical quarks cannot occupy the same state. But an omega particle (sss) consists
of three strange quarks having the same spin. This is possible because each quark exists in three variants, indicated by a ‘colour’, besides the six ‘flavours’. For the antiquarks three complementary colours are available. The metaphor
of ‘colour’ is chosen because the colours are able to neutralize each other, like ordinary colours can be combined to produce white. This can be done in two ways, in a duo by adding a colour to its anticolour, or in a trio by adding three different
colours or anticolours. The law that mesons and baryons must be colourless yields an additional restriction on the number of possible combinations of quarks. A white particle is neutral with respect to the colour force, like an uncharged particle is neutral
with respect to the Coulomb force. Nevertheless, an electrically neutral particle may exert electromagnetic interaction because of its magnetic moment. This applies e.g. to a neutron, but not to a neutrino. Similarly, by the exchange of mesons, the colour
force manifests itself as the strong nuclear force acting between baryons, even if baryons are ‘white’. Two quarks interact by exchanging gluons, thereby changing colour.
The twentieth-century standard model has no solution to a number of problems. Why only three generations? If all matter above the level of hadrons consists of particles from the first generation, what is the tertiary disposition of the particles
of the second and third generation? Should the particles of the second and third generation be considered excited states of those of the first generation? Why does each generation consist of two quarks and two leptons (with corresponding antiparticles)? What
is the origin of the mass differences between various leptons and quarks?
The last question might be the only one to receive an answer in the twenty-first century, when
the existence of Peter Higgs’ particle and its mass were experimentally established (2012). For the other problems, at the end of the twentieth century no experiment is proposed providing sufficient information to suggest a solution.
The second level of the hierarchy consists of hadrons: baryons having half-integral spin and mesons having integral spin. Although
the combination of quarks is subject to severe restrictions, there are quite a few different hadrons. A proton consists of two up and one down quark (uud), and a neutron is composed of one up and two down quarks (udd). These two nucleons are the lightest baryons,
all others being called hyperons. A pion consists of dd̄ or uū (charge 0), dū (−e) or ud̄ (+e). As a free particle, only the proton is stable, whereas the neutron is stable within a nucleus. A free neutron decays into a proton, an electron
and an antineutrino. The law of conservation of baryon number is responsible for the stability of the proton, being the baryon with the lowest rest energy. All other hadrons have a very short mean lifetime, a free neutron having the longest (900 sec). Their
diversity is much larger than that of leptons and of quarks. Based on symmetry relations, group theory orders the hadrons into multiplets, e.g. octets of mesons and octets or decuplets of baryons.
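The charge bookkeeping behind these quark combinations can be sketched in a few lines. This is an illustrative sketch: the `~` notation for antiquarks and the helper function are my choices, not notation from the text.

```python
# Sketch: electric charge of a few hadrons from their quark content.
# Quark charges are in units of the elementary charge e; a trailing '~'
# marks an antiquark, which carries the opposite charge.
from fractions import Fraction

QUARK_CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def charge(quarks):
    """Total electric charge (in units of e) of a quark combination."""
    total = Fraction(0)
    for q in quarks:
        if q.endswith("~"):              # antiquark: opposite charge
            total -= QUARK_CHARGE[q[0]]
        else:
            total += QUARK_CHARGE[q]
    return total

proton  = charge(["u", "u", "d"])   # uud
neutron = charge(["u", "d", "d"])   # udd
pi_plus = charge(["u", "d~"])       # u with anti-d
```

Note that the charges always come out integral, in agreement with the conservation laws restricting electric charge to integral values for observable particles.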
For a large part, the interaction of hadrons consists of rearranging quarks accompanied by the creation and annihilation of quark-antiquark pairs and lepton-antilepton pairs. The general laws of conservation of energy, linear and angular momentum, the specific
laws of conservation of electric charge, lepton number and baryon number, and the laws restricting electric charge and baryon number to integral values, characterize the possible processes between hadrons in a quantitative sense. Besides, the fields described
by quantum electrodynamics and quantum chromodynamics characterize these processes in a spatial sense, and the exchange of field particles in a kinetic way.
Atomic nuclei constitute the third layer in the hierarchy. With the exception of hydrogen, each nucleus consists of protons and neutrons, determining together the coherence, binding energy, stability, and lifetime
of the nucleus. The mass of the nucleus is the sum of the masses of the nucleons less the mass equivalent to the binding energy. Decisive is the balance of the repulsive electric force between the protons and the attractive strong nuclear
force binding the nucleons independent of their electric charge. In heavy nuclei, the surplus of neutrons compensates for the mutual repulsion of the protons. To a large extent, the exclusion principle applied to neutrons and protons separately determines
the stability of the nucleus and its internal energy states.
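The mass-defect calculation described above can be made concrete for the helium-4 nucleus. This is a sketch; the mass values are rounded textbook figures.

```python
# Sketch: the mass of a nucleus is the sum of the nucleon masses minus the
# mass equivalent of the binding energy. Masses in unified atomic mass
# units (u); 1 u corresponds to 931.494 MeV.
M_PROTON  = 1.007276   # u
M_NEUTRON = 1.008665   # u
M_HE4     = 4.001506   # u (helium-4 nucleus: atomic mass minus electrons)

mass_defect    = 2 * M_PROTON + 2 * M_NEUTRON - M_HE4   # in u
binding_energy = mass_defect * 931.494                   # in MeV
# about 28.3 MeV, roughly 7.1 MeV per nucleon
```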
The nuclear force is negligible for the external functioning of a nucleus in an atom or molecule. Only the
mass of the nucleus, its electric charge and its magnetic moment are relevant for its external relations. Apart from the magnetic moment, this leaves a twofold diversity in nuclei.
The first diversity concerns the number of protons. In a neutral atom it equals the number of electrons determining the atom’s chemical propensities. The nuclear charge together with the exclusion principle dominates the energy states of the electrons, hence
the position of the atom in the periodic system of elements.
The second diversity concerns the number of neutrons in the nucleus. Atoms having the same number of protons
but differing in neutron number are called isotopes, because they have the same position (topos) in the periodic system. They have similar chemical propensities.
The diversity of atomic nuclei is represented in a two-dimensional diagram, a configuration space. The horizontal axis represents the number of protons (Z = atomic number), the vertical axis the number of neutrons (N). In this diagram the isotopes (same Z, different
N) are positioned above each other. The configuration space is mostly empty, because only a restricted number of combinations of Z and N lead to stable or metastable (radioactive) nuclei. The periodic system of elements is a two-dimensional diagram as well.
Dmitri Mendeleev ordered the elements in a sequence according to a secondary property (the atomic mass) and below each other according to tertiary propensities (the affinity of atoms to form molecules, in particular compounds with hydrogen and oxygen). Later
on, the atomic mass was replaced by the atomic number Z. However, quantum physics made clear that the atomic chemical properties are not due to the nuclei, but to the electrons subject to the exclusion principle. The vertical ordering in the periodic system
concerns the configuration of the electronic shells. In particular the electrons in the outer shells determine the tertiary chemical propensities.
This is not an ordering
according to a definition in terms of necessary and sufficient properties distinguishing one element from the other, but according to their characters. The properties do not define a character, as essentialism assumes, but the character (a set of laws) determines
the properties and propensities of the atoms.
In the hierarchical order,
we find globally an increase of spatial dimensions, diversity of characters and variation within a character, alongside a decrease of the binding energy per particle and of the significance of the strong and weak nuclear forces. For the characters
of atoms, molecules, and crystals, only the electromagnetic interaction is relevant.
The internal variation of
a spatially founded character is very large. Quantum physics describes the internal states with the help of David Hilbert’s space, having the eigenvectors of William Hamilton’s operator as a base. A Hilbert space describes the ensemble of possibilities
(in particular the energy eigenvalues) determined by the system’s character. In turn, the atom or molecule’s character itself is represented by Edwin Schrödinger’s time-independent equation. This equation is exactly solvable only in
the case of two interacting particles, like the hydrogen atom, the helium ion, the lithium ion, and positronium. (Positronium is a short-lived composite of an electron and a positron, the only spatially founded structure entirely consisting of leptons.) In
other cases, the equation serves as a starting point for approximate solutions, usually only manageable with the help of a computer.
The hierarchical connection implies
that the spatially founded characters are successively interlaced, for example nucleons in a nucleus, or the nucleus in an atom, or atoms in a molecule. Besides, these characters are interlaced with kinetically, spatially, and quantitatively qualified characters,
and often with biotically qualified characters as well.
The characters described
depend strongly on a number of natural constants, whose values can be established only experimentally, not theoretically. Among others, this concerns the gravitational constant G, the speed of light c, Planck’s constant h
and the elementary electric charge e, or combinations like the fine structure constant (2πe²/hc = 1/137.036) and the mass ratio of the proton and the electron (1836.15). If the constants of nature were slightly different,
both nuclear properties and chemical properties would change drastically.
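As an illustration, the fine structure constant can be recomputed from the SI values of these constants. This is a sketch; in Gaussian units the constant reads 2πe²/hc, in SI units e²/4πε₀ħc, and both give the same dimensionless number.

```python
import math

# Sketch: the fine structure constant from SI values of the constants
# named above (rounded CODATA-style values).
e    = 1.602176634e-19    # elementary charge, C
h    = 6.62607015e-34     # Planck's constant, J s
c    = 2.99792458e8       # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

hbar  = h / (2 * math.pi)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
inverse = 1 / alpha        # about 137.036, dimensionless
```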
The quantum ladder is of a physical and chemical nature. As an ordering principle, the ladder has a few flaws from a logical point of view. For instance, the proton occurs on three
different levels, as a baryon, as a nucleus, and as an ion. The atoms of the noble gases are their molecules as well. This is irrelevant for their character. The character of a proton consists of the specific laws to which it is subjected. The classification
of baryons, nuclei or ions is not a characterization, and a proton is not ‘essentially’ a baryon and ‘accidentally’ a nucleus or an ion.
The number of molecular characters is enormous and no universal classification of molecules exists. In particular the characters in which carbon is an important element show a large diversity.
The molecular formula indicates the number of atoms of each element in a molecule. Besides, the characteristic spatial structure of a molecule determines its chemical properties. The composition
of a methane molecule is given by the formula CH4, but it is no less significant that the methane molecule has the symmetrical shape of a regular tetrahedron, with the carbon atom at the centre and the four hydrogen atoms at the vertices. The V-like
shape of a water molecule (the three atoms do not lie on a straight line, but form a characteristic angle of 105°) causes the molecule to have a permanent electric dipole moment, explaining many of the exceptional properties of water. Isomers are
materials having the same molecular formula but different spatial orderings, hence different chemical properties. Like the symmetry between a left and a right glove, the spatial symmetry property of mirroring leads to the distinction of dextro- and laevo-molecules.
The symmetry characteristic for the generic (physical) character is an emergent property, in general irreducible to the characters of the composing systems. Conversely, the original
symmetry of the composing systems is broken. In methane, the outer shells of the carbon atom have exchanged their spherical symmetry for the tetrahedron symmetry of the molecule. Symmetry break also occurs in fields. The symmetry of strong nuclear interaction
is broken by electroweak interaction. For the strong interaction, the proton and the neutron are symmetrical particles having the same rest energy, but the electroweak interaction causes the neutron to have a slightly larger rest energy and to be metastable
as a free particle.
From quantum field theory, in principle it should be possible to derive successively the emergent properties of particles and their spatially founded
composites. This is the synthetic, reductionist or fundamentalist trend, constructing complicated structures from simpler ones. It cannot explain symmetry breaks.
For practical reasons too, a synthetic approach is usually impossible. The alternative is the analytical or holistic method, in which the symmetry break is explained from the empirically established symmetry of the original character. Symmetries and other
structural properties are usually a posteriori explained, and hardly ever a priori derived. However, analysis and synthesis are not contrary but complementary methods.
Climbing the quantum ladder, complexity seems to increase. On second thoughts, complexity is not a clear concept. An atom would be more
complex than a nucleus and a molecule even more. However, in the character of a hydrogen atom or a hydrogen molecule, weak and strong interactions are negligible, and the complex spatially founded nuclear structure is reduced to the far simpler quantitatively
founded character of a particle having mass, charge, and magnetic moment. Moreover, a uranium nucleus consisting of 92 protons and 146 neutrons has a much more complicated character than a hydrogen molecule consisting of two protons and two electrons, having
a position two levels higher on the quantum ladder.
Viewed from inside, a system is more complex than viewed from outside. An atom consists of a nucleus and a number of electrons, grouped into
shells. If a shell is completely filled in conformity with the exclusion principle, it is chemically inert, serving mostly to reduce the effective nuclear charge. A small number of electrons in partially occupied shells determines the atom’s chemical
propensities. Consequently, an atom of a noble gas, having only completely occupied shells, is less complicated than an atom having one or two electrons less. The complexity of molecules increases if the number of atoms increases. But some very large organic
molecules consist of a repetition of similar atomic groups and are not particularly complex.
In fact, there does not exist an unequivocal criterion for complexity.
An important property of hierarchically ordered characters is that for the explanation
of a character it is sufficient to descend to the next lower level. For the understanding of molecules, a chemist needs the atomic theory, but he does not need to know much about nuclear physics. A molecular biologist is acquainted with the chemical molecular
theory, but his knowledge of atomic theory may be rather superficial. This is possible because of the phenomenon that a physical character interlaced in another one both keeps its properties and hides them.
Each system derives its stability from an internal equilibrium that is hardly observable from without. The nuclear forces do not range outside the nucleus. Strong electric forces bind an atom or a molecule, but as a whole it is
electrically neutral. The strong internal equilibrium and the weak remaining external action are together characteristic for a stable physical system. If a system exerts a force on another one, it experiences an equal external force. This external force should
be much smaller than the internal forces keeping the system intact, otherwise it will be torn apart. In a collision between two molecules, the external interaction may be strong enough to disturb the internal equilibrium, such that the molecules fall apart.
Possibly, a new molecule with a different character emerges. Because the mean collision energy is proportional to the temperature, the stability of molecules and crystals depends on this parameter. In the sun’s atmosphere no molecules exist and in its
centre no atoms occur. In an even more extreme environment like a neutron star, not even nuclei can exist.
Hence, a stable physical or chemical system is relatively inactive. It looks like
an isolated system. This is radically different from plants and animals that can never be isolated from their environment. The internal physical equilibrium of a plant or an animal is maintained by metabolism, the continuous flow of energy and matter through the organism.
7. Individualized currents
I consider the primarily physical character of a photon to be secondarily kinetically founded. A photon is a field particle in the
electromagnetic interaction, transporting energy, linear and angular momentum from one spatially founded system to another. Besides photons, nuclear physics recognizes gluons being field particles for the colour force, mesons for the strong nuclear force,
and three types of vector bosons for the weak interaction. The existence of the graviton, the field particle for gravity, has not been experimentally confirmed. All these interaction particles have an integral spin and are bosons; hence they are not subject
to the exclusion principle. Field particles are not quantitatively or spatially founded things, but individualized characteristic currents, hence kinetically founded ‘quasiparticles’. Bosons carry forces, whereas fermions feel
or experience forces.
By absorbing a photon, an atom comes into an excited state, i.e. a metastable state at a higher energy than the ground state. Whereas an
atom in its ground state can be considered an isolated system, an excited atom is always surrounded by the electromagnetic field.
A photon is a wave packet; like an electron,
it has a dual character. Yet there is a difference. Whereas the electron’s motion has a wave character, a photon is a current in an electromagnetic field, a current being a kinetic projection of physical interaction. With respect to
electrons, the wave motion only determines the probability of what will happen in a future interaction. In a photon, besides determining a similar probability, the wave consists of periodically changing electric and magnetic fields. A real particle’s
wave motion lacks a substratum, there is no characteristic medium in which it moves, and its velocity is variable. Moving quasiparticles have a substratum, and their wave velocity is a property of the medium. The medium for light in empty space is the electromagnetic
field, all photons having the same speed independent of any reference system.
An inorganic solid consists of crystals, sometimes microscopically small. Amorphous solid matter does not exist or is very rare. The ground state of a crystal is the hypothetical state at zero temperature. At higher temperatures, each solid is in an excited state,
determined by the presence of quasiparticles.
The crystal symmetry, adequately described by the theory of groups, has two or three levels. First, each crystal is composed
of space filling unit cells. All unit cells of a crystal are equal to each other, containing the same number of atoms, ions or molecules in the same configuration. A characteristic lattice point indicates the position of a unit cell. The lattice points constitute
a Bravais lattice (called after Auguste Bravais), representing the crystal’s translation symmetry. Only fourteen types of Bravais lattices are mathematically possible and realized in nature. Each lattice allows of some variation, for instance with respect
to the mutual distance of the lattice points, as is seen when the crystal expands on heating. Because each crystal is finite, the translation symmetry is restricted and the surface structure of a crystal may be quite different from the crystal structure.
Second, the unit cell has a symmetry of its own, superposed on the translation symmetry of the Bravais lattice. The cell may be symmetrical with respect to reflection, rotation or
inversion. The combined symmetry determines how the crystal scatters X-rays or neutrons, presenting a means to investigate the crystalline structure empirically. Hence, the long-distance spatial order of a crystal evokes a long-time kinetic
order of specific waves.
Third, in some materials we find an additional ordering, for instance that of the magnetic moments of electrons or atoms in a ferromagnet. Like
the first one, this is a long-distance ordering. It involves an interaction that is not restricted to nearest neighbours. It may extend over many millions of atomic distances.
The atoms in a crystal oscillate around their equilibrium positions. These elastic oscillations are transferred from one atom to the next like a sound wave, and because the crystal has a finite volume, this is a stationary wave, a collective oscillation.
The crystal as a whole is in an elastic oscillation, having a kinetically founded character. These waves have a broad spectrum of frequencies and wavelengths, being sampled into wave packets. In analogy with light, these field particles are called sound quanta or phonons.
Like the electrons in a metal, the phonons act like particles in a box. Otherwise they differ widely. The number of electrons is constant, but the
number of phonons increases strongly at increasing temperature. Like all quasiparticles, the phonons are bosons, not being subject to the exclusion principle. The mean kinetic energy of the electrons hardly depends on temperature, and their specific heat is
only measurable at a low temperature. In contrast, the mean kinetic energy of phonons strongly depends on temperature, and the phonon gas dominates the specific heat of solids. At a low temperature this increases proportional to T³ to become
constant at a higher temperature. Peter Debye’s theory (originally 1912, later adapted) explains this from the wave and boson character of phonons and the periodic character of the crystalline structure.
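Debye's low-temperature result can be checked numerically. This is a sketch under stated assumptions: the midpoint-rule integrator, step count, and chosen Debye temperature are illustrative choices of mine, not from the text.

```python
import math

# Sketch of Debye's result: at low temperatures the phonon specific heat
# of a crystal grows as T^3. Up to an overall constant,
#   C(T) ~ T^3 * I(theta/T),  I(y) = integral 0..y of x^4 e^x/(e^x-1)^2 dx,
# and I(y) approaches a constant (4*pi^4/15) for large y, i.e. low T.

def debye_integral(y, steps=20000):
    """Midpoint-rule evaluation of the Debye integral up to y."""
    upper = min(y, 60.0)   # integrand decays like x^4 e^(-x); tail negligible
    dx = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        ex = math.exp(x)
        total += x**4 * ex / (ex - 1.0)**2 * dx
    return total

def heat_capacity(t, theta=300.0):
    """Phonon heat capacity, up to an overall constant (theta assumed)."""
    return t**3 * debye_integral(theta / t)

# doubling a low temperature multiplies the specific heat by about 2^3 = 8
ratio = heat_capacity(6.0) / heat_capacity(3.0)
```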
In a solid or liquid, besides phonons many other quantized excitations occur, corresponding, for instance, with magnetization waves or spin waves. The interactions of quasiparticles and electrons cause the photoelectric
effect and transport phenomena like electric resistance and thermo-electricity.
The specific properties of some superconductors can be described with the help of quasiparticles. (This applies to the superconducting metals and alloys known before 1986. For the ceramic superconductors, discovered since 1986, this explanation is not
sufficient.) In a superconductor two electrons constitute a pair called after Leon Cooper. This is a pair of electrons in a bound state, such that both the total linear momentum and the total angular momentum are zero. The two electrons are not necessarily
close to each other. Superconductivity is a phenomenon with many variants, and the theory is far from complete.
Superconductivity is a collective phenomenon in which the
wave functions of several particles are macroscopically coherent. There is no internal dissipation of energy. It appears that on a macroscopic scale the existence of kinetically founded characters is only possible if there is no decoherence. Therefore, kinetically
founded physical characters on a macroscopic scale are quite exceptional.
8. Aggregates and statistics
We have now discussed three types of physically qualified characters, respectively quantitatively,
spatially, and kinetically based, but this does not exhaust the theory of matter. The inorganic sciences acknowledge many kinds of mixtures, aggregates, alloys or solutions. In nature, these are more abundant than pure matter. Often, the possibility to form
a mixture is restricted and some substances do not mix at all. In order to form a stable aggregate, the components must be tuned to each other. Typical for an aggregate is that the characteristic magnitudes (like pressure, volume and temperature for a gas)
are variable within a considerable margin, even if there is a lawful connection between these magnitudes.
Continuous variability provides quantum physics with a criterion
to distinguish a composite thing (with a character of its own) from an aggregate. Consider the interaction between an electron and a proton. In the most extreme case this leads to the absorption of the electron and the transformation of the proton into a neutron
(releasing a neutrino). At a lower energy, the interaction may lead to a bound state having the character of a hydrogen atom if the total energy (kinetic and potential) is negative. Finally, if the total energy is positive, we have an unbound state, an aggregate.
In the bound state the energy can only have discrete values, it is quantized, whereas in the unbound state the energy is continuously variable.
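The hydrogen case can be made concrete with the familiar formula for its discrete levels. This is a sketch; the rounded Rydberg value of 13.6 eV is a standard figure I supply, not one quoted in the text.

```python
# Sketch: in the bound electron-proton state (the hydrogen atom) the energy
# is quantized, E_n = -13.6 eV / n^2; any positive total energy belongs to
# the continuously variable, unbound continuum.
RYDBERG = 13.6   # eV, approximate

def bound_energy(n):
    """Energy of the n-th bound state of hydrogen, in eV."""
    return -RYDBERG / n**2

levels = [bound_energy(n) for n in (1, 2, 3)]   # discrete, all negative
```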
Hence, if the rest energy
has a characteristic value and internal energy states are lacking, we have an elementary particle (a lepton or a quark). If there are internal discrete energy states we have a composite character, whereas we have an aggregate if the internal energy is continuously variable.
With aggregates it is easier to abstract from specific properties
than in the case of the characters of composite systems discussed above. Studying the properties of macroscopic physical bodies, thermodynamics starts from four general laws, for historical reasons numbered 0 to 3 and written with capitals.
The Zeroth Law states that two or more bodies (or parts of a single body) can be in mutual equilibrium. In equilibrium, the temperature of the interacting bodies is the same, and in a body as a whole
the temperature is uniform. Depending on the nature of the interaction, this applies to other intensive magnitudes as well, for instance the pressure of a gas, or the electric or chemical potential. In this context bodies are not necessarily spatially separated.
The thermodynamic laws apply to the components of a mixture as well. Equilibrium is an equivalence relation. An intensive magnitude like temperature is an equilibrium parameter, to be distinguished from an extensive magnitude like energy, which is additive.
If two unequal bodies are in thermal equilibrium with each other, their temperature is the same, but their energy is different and the total energy is the sum of the energies of the two bodies apart. An additive magnitude refers to the quantitative relation
frame, whereas an equilibrium parameter is a projection on the spatial frame.
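The distinction drawn here can be illustrated with two bodies brought into thermal contact: the temperatures (intensive) become equal, while the energies (extensive) simply add. The heat capacities are hypothetical round numbers chosen for the sketch.

```python
# Sketch: thermal equilibrium of two bodies with constant heat capacities.
C1, T1 = 2.0, 350.0   # body 1: heat capacity (J/K), initial temperature (K)
C2, T2 = 6.0, 290.0   # body 2

# energy conservation fixes the common final temperature
T_final = (C1 * T1 + C2 * T2) / (C1 + C2)

E_before = C1 * T1 + C2 * T2
E_after  = (C1 + C2) * T_final   # total energy is additive and conserved
```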
According to the First Law of thermodynamics, the total energy is constant, if the interacting
bodies are isolated from the rest of the world. The thermodynamic law of conservation of energy forbids all processes in which energy would be created or annihilated. The First Law does not follow from the fact that energy is additive. Volume, entropy, and
the mass of each chemical component are additive as well, but not always constant in an interaction.
The Second Law states that interacting systems proceed towards an equilibrium
state. The entropy decreases if a body loses energy and increases if a body gains energy, but always in such a way that the total entropy increases as long as equilibrium is not reached. Based on this law only entropy differences can be calculated.
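The entropy balance of a simple heat exchange illustrates the Second Law. This is a sketch with illustrative numbers, assuming the transferred heat is small enough that both temperatures stay constant.

```python
# Sketch: a small amount of heat Q flows from a hot body to a cold one.
# The hot body loses entropy Q/T_hot, the cold body gains Q/T_cold;
# because T_cold < T_hot, the total entropy increases.
Q      = 100.0    # J, heat transferred
T_hot  = 400.0    # K
T_cold = 300.0    # K

dS_hot   = -Q / T_hot          # entropy decrease of the hot body
dS_cold  = +Q / T_cold         # entropy increase of the cold body
dS_total = dS_hot + dS_cold    # positive as long as T_cold < T_hot
```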
According to the Third Law the absolute zero of temperature cannot be reached. At this temperature all systems would have the same entropy, to be considered the zero point on the entropy scale.
From these axioms other laws are derivable, such as Josiah Gibbs’s phase rule (see below). As long as the interacting systems are not in equilibrium,
the gradient of each equilibrium parameter acts as the driving force for the corresponding current causing equilibrium. A temperature gradient drives a heat current, a potential difference drives an electric current, and a chemical potential difference drives
a material current. Any current (except a superconducting flow) creates entropy.
The thermodynamic axioms describe the natural laws correctly in the case of interacting
systems being close to equilibrium. Otherwise, the currents are turbulent and a concept like entropy cannot be defined. Another restriction follows from the individuality of the particles composing the system. In the equilibrium state, the entropy is not exactly
constant, but it fluctuates spontaneously around the equilibrium value. Quantum physics shows energy to be subject to Werner Heisenberg’s relations. In fact, the classical thermodynamic axioms refer to a continuum, not to the actually coarse matter.
Thermodynamics is a general theory of matter, whereas statistical physics studies matter starting from the specific properties of the particles composing a system. This means that thermodynamics and statistical physics complement each other.
An equilibrium state is sometimes called an ‘attractor’, attracting a system from any instable state toward a stable state. Occasionally, a system has several attractors, now called
local equilibrium states. If there is a strong energy barrier between the local equilibrium states, it is accidental which state is realized. By an external influence, a sudden and apparently drastic transition may occur from one attractor to another
one. In quantum physics a similar phenomenon is called ‘tunneling’, to which I shall return later.
a. A homogeneous set of particles having the same character may be considered a quantitatively founded aggregate, if the set does not constitute a structural whole with a spatially founded character of its own (like the
electrons in an atom). In a gas the particles are not bound to each other. Usually, an external force or a container is needed to keep the particles together. In a fluid, the surface tension is a connective force that does not give rise to a characteristic
whole. The composing particles’ structural similarity is a condition for the applicability of statistics. Therefore I call a homogeneous aggregate quantitatively founded.
It is not sufficient to know that the particles are structurally similar. At least it should be specified whether the particles are fermions or bosons. Consider, for instance, liquid helium, having two varieties. In the most common isotope, a helium
nucleus is composed of two protons and two neutrons. The net spin is zero, hence the nucleus is a boson. In a less common isotope, the helium nucleus has only one neutron besides two protons. Now the nucleus’ net spin is ½ and it is a fermion.
This distinction (having no chemical consequences) accounts for the strongly diverging physical properties of the two fluids.
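The fermion-or-boson character of a composite follows from a simple counting rule: an even number of spin-½ constituents gives integral net spin (a boson), an odd number gives half-integral spin (a fermion). The helper function below is an illustrative sketch of that rule applied to nuclei.

```python
# Sketch: each nucleon (proton or neutron) is a fermion with spin 1/2, so
# the parity of the nucleon count decides whether a nucleus is a boson
# (integral net spin) or a fermion (half-integral net spin).
def nucleus_is_boson(protons, neutrons):
    """True if the nucleus has an even number of nucleons, i.e. is a boson."""
    return (protons + neutrons) % 2 == 0

he4_boson = nucleus_is_boson(2, 2)   # helium-4 nucleus: 4 nucleons, boson
he3_boson = nucleus_is_boson(2, 1)   # helium-3 nucleus: 3 nucleons, fermion
```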
Each homogeneous gas is subjected to a specific
law, called the statistics or distribution function. It determines how the particles are distributed over the available states, taking into account parameters like volume, temperature, and total energy. The distribution function does not specify which
states are available. Before the statistics is applicable, the energy of each state must be calculated separately.
The Fermi-Dirac statistics based on Wolfgang Pauli’s
exclusion principle applies to all homogeneous aggregates of fermions, i.e., particles having half-integral spin. For field particles and other particles having an integral spin, the Bose-Einstein statistics applies, without an exclusion principle. If the
mean occupation number of available energy states is low, both statistics may be approximated by the classical Maxwell-Boltzmann distribution function. Except at very low temperatures, this applies to every dilute gas consisting of similar atoms or molecules.
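The three distribution functions named above, and the classical limit at low occupation, can be sketched directly. The dimensionless variable x stands for (ε − μ)/kT; the chosen value x = 5 is an illustrative point in the dilute regime.

```python
import math

# Sketch: mean occupation number of a state at energy eps, written as a
# function of x = (eps - mu) / kT.
def fermi_dirac(x):
    return 1.0 / (math.exp(x) + 1.0)

def bose_einstein(x):
    return 1.0 / (math.exp(x) - 1.0)   # requires x > 0

def maxwell_boltzmann(x):
    return math.exp(-x)

# at low mean occupation (large x) both quantum statistics approach the
# classical Maxwell-Boltzmann distribution
x = 5.0
fd, be, mb = fermi_dirac(x), bose_einstein(x), maxwell_boltzmann(x)
```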
The law of Robert Boyle and Louis Gay-Lussac follows from this statistics. It determines the relation between volume, pressure and temperature for a dilute gas, if the interaction between the molecules is restricted to elastic collisions and if the molecular
dimensions are negligible. Without these two restrictions, the state equation of Johannes Van der Waals counts as a good approximation. Contrary to the law of Boyle and Gay-Lussac, Van der Waals’ equation contains two constants characteristic for the
gas concerned. It describes the condensation of a gas to a fluid as well as the phenomena occurring at the critical point, the highest temperature at which the substance is liquid.
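The difference between the two state equations can be sketched numerically for one mole of carbon dioxide. The constants a and b are the two characteristic constants mentioned above, quoted as approximate literature values; the chosen state point is illustrative.

```python
# Sketch: ideal-gas law of Boyle and Gay-Lussac versus Van der Waals'
# equation, (p + a*n^2/V^2)(V - n*b) = n*R*T, for carbon dioxide.
R = 8.314        # gas constant, J/(mol K)
a = 0.364        # Pa m^6/mol^2: mutual attraction of the molecules (approx.)
b = 4.27e-5      # m^3/mol: finite molecular volume (approx.)

n, T, V = 1.0, 300.0, 1.0e-3     # mol, K, m^3

p_ideal = n * R * T / V
p_vdw   = n * R * T / (V - n * b) - a * n**2 / V**2
# at this density the attractive correction dominates, so p_vdw < p_ideal
```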
b. It is not possible to apply statistics directly to a mixture of subjects having different characters. Sometimes, it can be done with respect to
the components of a mixture apart. For a mixture of gases like air, the pressure exerted by the mixture equals the sum of the partial pressures exerted by each component apart in the same volume at the same temperature (John Dalton’s law). The chemical
potential is a parameter distinguishing the components of a heterogeneous mixture.
I consider a heterogeneous mixture like a solution to have a spatial foundation, because
the solvent is the physical environment of the dissolved substance. Solubility is a characteristic disposition of a substance dependent on the character of the solvent as the potential environment.
Stable characters in one environment may be unstable in another one. Common salt molecules dissolved in water fall apart into sodium and chlorine ions. In the environment of water, the dielectric constant is much higher than in air.
Now Charles Coulomb’s force between the ions is proportionally smaller, too small to keep the ions together. (A more detailed explanation depends on the property of a water molecule to have a permanent electric dipole moment. Each sodium or chlorine
ion is surrounded by a number of water molecules, decreasing their net electric charge. This causes the binding energy to be less than the mean kinetic energy of the molecules.)
The composition of a mixture, like the number of grams of dissolved substance in one litre of water, is accidental. It is not determined by any character but by its history. This does not mean that two substances can be mixed in any proportion whatsoever.
However, within certain limits dependent on the temperature and the characters of the substances concerned, the proportion is almost continuously variable.
c. Even if a system only consists of particles of the same character, it may not appear homogeneous: it may exist in two or more different ‘phases’ simultaneously,
for example, the solid, liquid, and vaporous states. A glass of water with melting ice is in internal equilibrium at 0 °C. If heat is supplied, the temperature remains the same until all ice is melted. Only chemically pure substances have a characteristic
melting point. In contrast, a heterogeneous mixture has a melting trajectory, meaning that during the melting process, the temperature increases. A similar characteristic transition temperature applies to other phase transitions in a homogeneous substance,
like vaporizing, the transition from a paramagnetic to a ferromagnetic state, or the transition from a normal to a superconducting state. Addition of heat or change of external pressure shifts the equilibrium. A condition for equilibrium is that the particles
concerned move continuously from one phase to the other. Therefore I call it a homogeneous kinetically founded aggregate.
An important example of a heterogeneous
kinetic equilibrium concerns chemical reactions. Water consists mostly of water molecules, but a small part (10⁻⁷ at 25 °C) is dissociated into positive H⁺ ions and negative OH⁻ ions. In the equilibrium state, equal amounts of molecules
are dissociated and associated. By adding other substances (acids or bases), the equilibrium is shifted.
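The shift of this equilibrium can be sketched with the ion product of water, which is constant at a given temperature (about 10⁻¹⁴ mol²/L² at 25 °C, a standard figure I supply here). Raising the H⁺ concentration with an acid then forces the OH⁻ concentration down.

```python
import math

# Sketch: dissociation equilibrium of water via the ion product
# [H+][OH-] = K_W, constant at a given temperature.
K_W = 1.0e-14    # mol^2/L^2 at 25 degrees C, approximate

# pure water: equal concentrations of H+ and OH-
h_pure  = math.sqrt(K_W)          # 1e-7 mol/L, as quoted in the text
ph_pure = -math.log10(h_pure)     # pH 7

# after adding an acid so that [H+] = 1e-3 mol/L, the equilibrium shifts
h_acid  = 1.0e-3
oh_acid = K_W / h_acid            # 1e-11 mol/L
```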
Both phase transitions and chemical reactions are subject
to characteristic laws and to general thermodynamic laws, for instance Josiah Gibbs’s phase rule.
9. Coming into being,
change and decay
I call an event physically qualified if it is primarily characterized by an interaction between two or more subjects. A process is a characteristic set of events, partly simultaneous, partly successive. Therefore,
physically qualified events and processes often occur in an aggregate, sometimes under strictly determined circumstances, such as the temperature. In a mixture, physical, chemical and astrophysical reactions lead to the realization of characters. Whereas
in physical things properties like stability and lifetime are most relevant, physical and chemical processes concern the coming into being, change and decay of those things.
In each characteristic event a thing changes of character (it emerges or decays) or
of state (preserving its identity). With respect to the thing’s character considered as a law, the first case concerns a subjective event (because the subject changes). The second case concerns an objective event (for the objective
state changes). Both have secondary characteristics. I shall briefly mention some examples.
Annihilation or creation of particles is a subjective
numerically founded event. Like any other event, it is subject to conservation laws. An electron and a positron emerge simultaneously from the collision of a γ-photon with some other particle, if the photon’s energy is at least twice the electron’s
rest energy. The presence of another particle, like an atomic nucleus, is required in order to satisfy the law of conservation of linear momentum. For the same reason, at least two photons emerge when an electron and a positron destroy each other.
By emitting or absorbing a photon, a nucleus, atom or molecule changes its state. This is a spatially founded objective transformation. In contrast,
in a nuclear or chemical reaction one or more characters are transformed, constituting a subjective spatially founded event. In α- or β-radioactivity, a nucleus subjectively changes its character; in γ-activity it only objectively changes its state.
An elastic collision is an event in which the kinetic state of a particle is changed without consequences for its character or its internal state. Hence,
this concerns an objective kinetically founded event. In a non-elastic collision a subjective change of character or an objective change of state occurs. Quantum physics describes such events with the help of operators determining the transition probability.
A process is an aggregate of events. In a homogeneous aggregate, phase transitions may occur. In a heterogeneous aggregate chemical reactions occur. Both
are kinetically founded. This also applies to transport phenomena like electric, thermal or material currents, thermo-electric phenomena, osmosis and diffusion.
Conservation laws are ‘constraints’ restricting the possibility of processes. For instance, a process
in which the total electric charge would change is impossible. In atomic and nuclear physics, transitions are known to be forbidden or improbable because of selection rules for quantum numbers characterizing the states concerned.
Physicists and chemists take for granted that each process that is not forbidden is possible and therefore experimentally realizable. In fact, several laws of conservation
like those of lepton number and baryon number were discovered because certain reactions turned out to be impossible. Conversely, in 1930 Wolfgang Pauli postulated the existence of neutrinos, because otherwise the laws of conservation of energy and momentum
would not apply to β-radioactivity. Experimentally, the existence of neutrinos was not confirmed until 1956.
In common parlance, a collision is a rather dramatic event, but in physics and chemistry a collision is just an interaction between two or more subjects moving towards each
other, starting from a large distance, where their interaction is negligible. In classical mechanics, this interaction means an attractive or repelling force. In modern physics, it implies the exchange of real or virtual particles like photons.
In each collision, at least the state of motion of the interacting particles changes. If that is all, we speak of an elastic collision, in which
only the distribution of kinetic energy, linear and angular momentum over the colliding particles changes. A photon can collide elastically with an electron (Arthur Compton’s effect), but an electron cannot absorb a photon. Only a composite thing like
a nucleus or an atom is able to absorb a particle.
Collisions are used to investigate the character of the particles concerned. A famous example
is the scattering of α-particles (helium nuclei) by gold atoms (1911). For the physical process, it is sufficient to assume that the particles have mass and charge and are point-like. It does not matter whether the particles are positively or negatively charged.
The character of this collision is statistically expressed in a mathematical formula derived by Ernest Rutherford. The fact that the experimental results (by Hans Geiger and Ernest Marsden) agreed with the formula indicated that the nucleus is much smaller
than the atom, and that the mass of the atom is almost completely concentrated in the nucleus. A slight deviation between the experimental results and the theoretical formula allowed an estimate of the size of the nucleus, its diameter being about 10⁴
times smaller than the atom’s. The dimension of a microscopic invisible particle is calculable from similar collision processes, and is therefore called its collision diameter. Its value depends on the projectiles used. The collision diameter of a proton
differs if determined from collisions with electrons or neutrons.
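The formula Rutherford derived is the differential cross-section for Coulomb scattering, dσ/dΩ = (z₁z₂e²/16πε₀E)² / sin⁴(θ/2). The sketch below evaluates it with standard constants; the function name and the 5 MeV example are illustrative choices, not taken from the text:

```python
import math

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C

def rutherford_cross_section(z1: int, z2: int, e_kin_joule: float,
                             theta_rad: float) -> float:
    """dσ/dΩ in m²/sr for point charges z1·e and z2·e:
    (z1*z2*e² / (16π·ε0·E))² / sin⁴(θ/2)."""
    a = z1 * z2 * E_CHARGE**2 / (16 * math.pi * EPS0 * e_kin_joule)
    return a**2 / math.sin(theta_rad / 2)**4

# A 5 MeV alpha particle (z = 2) on gold (Z = 79): the 1/sin⁴(θ/2) factor
# makes back-scattering rare but not impossible, as Geiger and Marsden found.
e_kin = 5e6 * E_CHARGE
forward = rutherford_cross_section(2, 79, e_kin, math.radians(10))
backward = rutherford_cross_section(2, 79, e_kin, math.radians(150))
assert forward > backward   # scattering falls off steeply with angle
```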
In a non-elastic collision the internal structure of one or more colliding subjects changes in some respect. With billiard balls only the temperature increases, kinetic energy being transformed into heat, causing
the motion to decelerate.
In a non-elastic collision between atoms or molecules, the state of at least one of them changes into an excited
state, sooner or later followed by the emission of a photon. This is an objective characteristic process.
The character of the colliding subjects
may change subjectively as well, for instance, if an atom loses an electron and becomes an ion, or if a molecule is dissociated or associated.
Collisions as a means to investigate the characters of subatomic particles have become a sophisticated art in high-energy physics.
Spontaneous decay first became known at the end of the nineteenth century from radioactive processes. It involves strong, weak or electromagnetic interactions,
respectively in α-, β-, and γ-radiation. The decay law of Ernest Rutherford and Frederick Soddy (1902) approximately represents the character of a single radioactive process. This statistical law is only explainable by assuming that each atom
decays independently of all other atoms. It is a random process. Besides, radioactivity is almost independent of circumstances like temperature, pressure and the chemical compound in which the radioactive atom is bound. Such decay processes occur in nuclei
and sub-atomic particles, as well as in atoms and molecules being in a metastable state. The decay time is the mean duration of existence of the system or the state.
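The Rutherford–Soddy law says that a fixed fraction of the atoms decays per unit of time, independently of all the others, so the surviving fraction falls exponentially: N(t)/N₀ = exp(−λt), with λ = ln 2 / T½. A minimal sketch:

```python
import math

def surviving_fraction(t: float, half_life: float) -> float:
    """Rutherford-Soddy decay law: N(t)/N0 = exp(-λt), λ = ln 2 / T½."""
    return math.exp(-math.log(2) * t / half_life)

# After one half-life, half of the atoms survive; after two, a quarter.
assert abs(surviving_fraction(1.0, 1.0) - 0.5) < 1e-12
assert abs(surviving_fraction(2.0, 1.0) - 0.25) < 1e-12

# The decay time (mean lifetime) is T½ / ln 2, about 1.44 half-lives.
mean_life = 1.0 / math.log(2)
```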
Besides spontaneous ones, stimulated transformations occur. Albert Einstein first investigated this phenomenon in 1916, with respect to transitions between two energy levels of an atom or molecule, emitting or absorbing a photon.
He found that (stimulated) absorption and stimulated emission are equally probable, whereas spontaneous emission has a different probability.
In stimulated emission, an incoming photon causes the emission of another photon such that there are two photons after the event, mutually coherent, i.e., having the same phase and frequency. Stimulated emission plays an important part in lasers and masers,
in which coherent light and microwave radiation, respectively, are produced. Absorption is always stimulated.
Stimulated emission is symmetrical with
stimulated absorption, but spontaneous emission is asymmetric and irreversible.
A stable system or a stable state may be separated from other systems or states by an energy barrier. It may be imagined that a particle is confined in an energy well, for instance an α-particle
in a nucleus. According to classical mechanics, such a barrier is insurmountable if it has a larger value than the kinetic energy of the particle in the well, but quantum physics proves that there is some probability that the particle leaves the well. This
is called ‘tunneling’, for it looks like the particle digging a tunnel through the energy mountain.
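The probability of leaking through the barrier can be estimated with the standard WKB approximation for a rectangular barrier, T ≈ exp(−2κL) with κ = √(2m(V − E))/ħ. The electron example below is an illustrative choice of numbers, not taken from the text:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J·s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joule per electronvolt

def tunnel_probability(mass: float, barrier_ev: float, energy_ev: float,
                       width_m: float) -> float:
    """WKB estimate for a rectangular barrier: T ≈ exp(-2κL),
    κ = sqrt(2m(V - E)) / ħ. Classically T would be exactly zero."""
    kappa = math.sqrt(2 * mass * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# An electron with 1 eV facing a 2 eV barrier 0.5 nm wide still leaks through
# with a small but non-zero probability:
t = tunnel_probability(M_E, 2.0, 1.0, 0.5e-9)
assert 0.0 < t < 1.0
```

A wider or higher barrier suppresses the probability exponentially, which is why macroscopic tunneling is never observed.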
Consider a chemical reaction
in which two molecules A and B associate to AB and conversely, AB dissociates into A and B. The energy of AB is lower than the energy of A+B apart, the difference being the binding
energy. A barrier called the activation energy separates the two states. In an equilibrium situation, the binding energy and the temperature determine the proportion of the numbers of molecules (N_A·N_B/N_AB). It is independent
of the activation energy. At a low temperature, if the total number of A’s equals the total number of B’s, only molecules AB will be present. In an equilibrium situation at increasing temperatures, the number of molecules
A and B increases, and that of AB decreases. In contrast, the speed of the reaction depends on the activation energy (and again on temperature). Whereas the binding energy is a characteristic magnitude for AB, the
activation energy partly depends on the environment. In particular the presence of a catalyst may lower the activation energy and stimulate tunneling, increasing the speed of the reaction.
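The contrast drawn here — equilibrium governed by the binding energy, reaction speed by the activation energy — can be made concrete with two textbook formulas: a Boltzmann factor for the equilibrium proportion and the Arrhenius law for the rate. The numbers below are illustrative assumptions:

```python
import math

K_B = 8.617333262e-5   # Boltzmann constant in eV/K

def equilibrium_factor(binding_ev: float, temp_k: float) -> float:
    """Boltzmann factor governing N_A·N_B/N_AB: it depends on the binding
    energy and the temperature, never on the activation energy."""
    return math.exp(-binding_ev / (K_B * temp_k))

def arrhenius_rate(prefactor: float, activation_ev: float,
                   temp_k: float) -> float:
    """Arrhenius law: the reaction speed does depend on the activation
    energy; a catalyst lowers it and so speeds up the reaction."""
    return prefactor * math.exp(-activation_ev / (K_B * temp_k))

# Lowering the activation energy (catalysis) speeds the reaction up...
assert arrhenius_rate(1e13, 0.5, 300.0) > arrhenius_rate(1e13, 1.0, 300.0)
# ...but leaves the equilibrium factor untouched: it never enters it.
# Raising the temperature shifts the equilibrium toward dissociation:
assert equilibrium_factor(1.0, 600.0) > equilibrium_factor(1.0, 300.0)
```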
The possibility of overcoming energy barriers explains the possibility of transitions from one stable system to another. It is the basis of theories about radioactivity and other spontaneous transitions, chemical
reaction kinetics, the emergence of chemical elements and of phase transitions, without affecting theories explaining the existence of stable or quasi-stable systems.
In such transition processes the characters do not change, but a system may change character. The laws do not change, but their subjects do.
The chemical elements have arisen in a chain of nuclear processes, to be distinguished as fusion and fission. The
chain starts with the fusion of hydrogen nuclei (protons) into helium nuclei, which are so stable that in many stars the next steps do not occur. Further processes lead to the formation of all known natural isotopes up to uranium. Besides helium with 4 nucleons,
beryllium (8), carbon (12), oxygen (16), and iron (56) are relatively stable. In all these cases, both the number of protons and the number of neutrons is even.
The elements only arise in specific circumstances. In particular, the temperature and the density are relevant. The transition from hydrogen to helium occurs at 10 to 15 million kelvin and at a density of 0.1 kg/cm³. The transition
of helium into carbon, oxygen and neon occurs at 100 to 300 million kelvin and 100 kg/cm³.
Only after considerable cooling do these nuclei combine with electrons to form the atoms and molecules found on the earth.
Once upon a time
the chemical elements were absent. This does not mean that the laws determining the existence of the elements did not apply. The laws constituting the characters of stable and metastable isotopes are universally valid, independent of time and place. But the
realization of the characters into actual individual nuclei does not depend on the characters only, but on circumstances like temperature as well. On the other hand, the available subjects and their relations determine these circumstances. Like initial and
boundary conditions, characters are conditions for the existence of individual nuclei. Mutatis mutandis, this applies to electrons, atoms and molecules as well.
In the preceding sections, I discussed quantitative, spatial and kinetic characters. About the corresponding subjects,
like groups of numbers, spatial figures or wave packets, it cannot be said that they come into being or decay, except in relation to physical subjects. Only interacting things emerge and disappear. Therefore there is no quantitative, spatial or kinetic evolution
comparable to the astrophysical one, even if the latter is expressed in numerical proportions, spatial relations and characteristic rhythms.
Although stars have a lifetime far exceeding the human scale, it is difficult to consider them stable. Each star is a reactor in which processes continuously take place. Stars are subject to evolution. There are young and old stars, each with their own character. Novae
and supernovae, neutron stars and pulsars represent various phases in the evolution of a star. The simplest stellar object may be the black hole, behaving like a thermodynamic black body subject to the laws of thermodynamics. These
processes play a part in the theory of astrophysical evolution, strongly connected to the standard model. It correctly explains the relative abundance of the chemical elements.
Since the start of the development of the physical cosmos, about thirteen billion years ago, it has expanded. As a result all galaxies move away from each other: the larger the distance, the higher their speed. Because light needs time to travel, the picture
we get from galaxies far away concerns states from eras long past. The most remote systems are at the spatio-temporal horizon of the physical cosmos. In this case, astronomers observe events that occurred shortly after the big bang, the start of the
development of the physical cosmos. Its real start remains forever behind the horizon of our experience. Astrophysicists
are aware that their theories based on observations may approach the big bang without ever reaching it. The astrophysical theory describes what has happened since the beginning - not the start itself - according to laws discovered in our era. The extrapolation
towards the past is based on the supposition that these laws are universally valid and constant. This agrees with the realistic view that the cosmos can only be investigated from within. It is not uncommon to consider our universe as one realized possibility
taken from an ensemble of possible worlds. However, there is no way to investigate these
alternative worlds empirically.
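The distance–speed relation mentioned above is Hubble’s law, v = H₀·d. A minimal sketch; the value of H₀ is an assumed round number (observations give roughly 70 km/s per megaparsec):

```python
# Hubble's law: recession speed is proportional to distance.
H0 = 70.0  # km/s per megaparsec -- an assumed illustrative value

def recession_speed(distance_mpc: float) -> float:
    """The farther away a galaxy, the faster it recedes: v = H0 * d."""
    return H0 * distance_mpc

assert recession_speed(100.0) == 7000.0          # km/s
assert recession_speed(200.0) > recession_speed(100.0)
```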