5.1. The unification of physical interactions
The discovery of the electron in 1897 provided the study of the structure of matter with a strong
impulse, both in physics and in chemistry. Our knowledge of atoms and molecules, of nuclei and sub-atomic particles, of stars and stellar systems, dates largely from the twentieth century. The significance of electrotechnology and electronics for the present
society can hardly be overestimated. A philosophical analysis of physical characters is the aim of chapter 5.
The physical aspect of the cosmos is characterized by interactions between
two or more subjects. Interaction is a relation different from the quantitative, spatial, or kinetic relations, on which it can be projected. It is subject to natural laws. Some laws are specific, like the electromagnetic ones, determining characters of physical
kinds. Some laws are general, like the laws of thermodynamics and the laws of conservation of energy, linear and angular momentum. General laws constitute the physical-chemical relation frame; specific laws determine physical characters. Both for the general
and the specific laws, physics has reached a high level of unification.
Because of their relevance to the study of types of characters, this chapter starts with an analysis of the projections
of the physical relation frame onto the three preceding ones (5.1). Next, I investigate the characters of physically stable things, consecutively quantitatively, spatially, and kinetically founded (5.2-5.4). In section 5.5, I survey aggregates and statistics.
Finally, in section 5.6 I shall review processes of coming into being, change, and decay.
The existence of physically qualified things and events implies their interaction, the universal physical relation. If something could not interact with anything else, it would be inert. It would not exist in a physical sense, and it would have no physical place in the cosmos. The noble gases are called inert because they hardly
ever take part in chemical compounds, yet their atoms are able to collide with each other. The most inert things among subatomic particles are the neutrinos, capable of flying through the earth with a very small probability of colliding with a nucleus
or an electron. Nevertheless, neutrinos are detectable and have been detected.
The universality of the relation frames allows science to compare characters with each other and to determine their specific relations. The projections of the physical relation frame onto the
preceding frames allow us to measure these relations. Measurability is the basis of the mathematization of the exact sciences. It allows of applying statistics and designing mathematical models for natural and artificial systems.
The simplest case of interaction concerns two isolated systems interacting only with each other. Thermodynamics characterizes an isolated or closed system by magnitudes like energy and entropy.
The two systems have thermal, chemical, or electric potential differences, giving rise to currents creating entropy. According to the second law of thermodynamics, this interaction is irreversible.
In kinematics, an interactive event may have the character of a collision, minimally leading to a change in the state of motion of the colliding subjects. Often, the internal state of the colliding subjects changes as well. Except for the boundary case
of an elastic collision, these processes are subject to the physical order of irreversibility. Frictionless motion influenced by a force is the standard example of a reversible interaction. In fact, it is also a boundary case, for any kind of friction or energy
dissipation causes motion to be irreversible.
The law of inertia (4.1) expresses the independence of
uniform motion from physical interaction. It confirms the existence of uniform and rectilinear motions having no physical cause. This is an abstraction, for concrete things experiencing forces have a physical aspect as well. In reality a uniform rectilinear
motion only occurs if the forces acting on the moving body balance each other.
Kinetic time is symmetric with respect to past and future. If in the description of a motion the time parameter
(t) is replaced by its reverse (–t), we achieve a valid description of a possible motion. In the absence of friction or any other kind of energy dissipation, motion is reversible. By distinguishing past and future we are able to discover
cause-effect relations, assuming that an effect never precedes its cause. According to relativity theory, the order of events having a causal relation is in all inertial systems the same, provided that the time parameter is not reversed (3.3).
In our common understanding of time, the discrimination of past and future is a matter of course,
but in the philosophy of science it is problematic. The existence of irreversible processes cannot be denied. All motions with friction are irreversible. Apparently, the absorption of light by an atom or a molecule is the reverse of emission, but Albert Einstein
demonstrated that the reverse of (stimulated) absorption is stimulated emission of light, making spontaneous emission a third process, having no reverse (5.6). This applies to radioactive processes as well. The phenomenon of decoherence (4.3)
makes most quantum processes irreversible. Only
wave motion subject to Schrödinger’s equation is symmetric in time. Classical mechanics usually expresses interaction by a force between two subjects, this relation being symmetric according to Newton’s third law of motion. However, this law
is only applicable to spatially separated subjects if the time needed to establish the interaction is negligible, i.e., if the action at a distance is (almost) instantaneous. Einstein made clear that interaction always requires time, hence even interaction
at a distance is asymmetric in time.
Irreversibility does not imply that the reverse process is impossible. It may be less probable, or require quite different initial conditions.
The transport of heat from a cold to a hotter body (as occurs in a refrigerator) demands different circumstances from the reverse process, which occurs spontaneously if the two bodies are not thermally isolated from each other. A short-lived point-like source
of light causes a flash expanding in space. It is not impossible but practically very difficult to reverse this wave motion, for instance applying a perfect spherical mirror with the light source at the centre. But even in this case, the reversed motion is
only possible thanks to the first motion, such that the experiment as a whole is still irreversible.
Yet, irreversibility as a temporal order is philosophically controversial, for it
does not fit into the reductionist worldview influenced by nineteenth-century mechanism.
This worldview assumes each process to be reducible to motions of pieces of matter that are themselves unchangeable, interacting through Newtonian forces. Ludwig Boltzmann attempted to bridge reversible motion and irreversible processes by means of the concepts of probability
and randomness. In order to achieve the intended results, he had to assume that the realization of chances is irreversible.
Moreover, it is stated that all ‘basic’ laws of physics are symmetrical in time. This seems to be true as far as kinetic time is concerned, and if any law that belies temporal symmetry (like the second law of thermodynamics, or the law
for spontaneous decay) is not considered ‘basic’. Anyhow, all attempts to reduce irreversibility to the subject side of the physical aspect of reality have failed.
Interaction is first of all subject to general laws independent of the specific character of the things involved. Some conservation laws are derivable from Einstein’s principle of relativity,
stating that the laws of physics are independent of the motion of inertial systems.
Being the physical subject-subject relation, interaction may be analysed with the help of
quantitative magnitudes like energy, mass, and charge; spatial concepts like force, momentum, field strength, and potential difference; as well as kinetic expressions like currents of heat, matter, or electricity.
Like interaction, energy, force, and current are abstract concepts. Yet these are not merely covering concepts without physical content. They can be specified as projections of characteristic interactions like the electromagnetic
one. Electric energy, gravitational force, and the flow of heat specify the abstract concepts of energy, force, and current.
For energy to be measurable, it is relevant that one concrete
form of energy is convertible into another one. For instance, a generator transforms mechanical energy into electric energy. Similarly, a concrete force may balance another force, whereas a concrete current accompanies currents of a different kind. This means
that characteristically different interactions are comparable: they can be measured with respect to each other. The physical subject-subject relation, the interaction projected as energy, force, and current, is the foundation of the whole
system of measuring, characteristic for astronomy, biology, chemistry, physics, as well as technology. The concepts of energy, force, and current enable us to determine physical subject-subject relations objectively.
Measurement of a quantity requires several conditions to be fulfilled. First, a unit should be available. A measurement compares a quantity with an agreed unit. Secondly, a magnitude requires a law, a metric, determining
how a magnitude is to be projected on a set of numbers, on a scale (3.1). The third requirement, the availability of a measuring instrument, cannot always be directly satisfied. A magnitude like entropy can only be calculated from measurements
of other magnitudes. Fourth, therefore, there must be a fixed relation between the various metrics and units, a metrical system. This allows of the application of measured properties in theories. Unification of units and scales is a necessary requirement
for the communication of both measurements and theories.
I shall discuss the concepts of energy, force, and current in some more detail. It is by no means evident that these concepts are the most general projections of interaction. Rather, their development
has been a long and tedious process, leading to a general unification of natural science, to be distinguished from a more specific unification to be discussed later on.
a. Since the middle of the nineteenth century, energy has been the most important quantitative expression of physical, chemical, and biotic interactions.
As such it has superseded mass, in particular since it is known that mass and energy are equivalent, according to physics’ most famous (but often misinterpreted)
formula, E = mc². Energy is specifiable as kinetic and potential energy, thermal energy, nuclear energy, or chemical energy. Affirming the total energy of a closed system to be constant, the law of conservation of energy implies
that one kind of energy can be converted into another one. For this reason, energy forms a universal basis for comparing various types of interaction.
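The equivalence of mass and energy can be illustrated numerically. The following sketch (the constants are approximate CODATA figures; the variable names are mine) computes the rest energy of an electron from E = mc²:

```python
# Rest energy of an electron via E = m * c**2.
# Constants are approximate CODATA values; precision here is illustrative.
M_ELECTRON = 9.109e-31   # electron rest mass in kg
C = 2.998e8              # speed of light in m/s
EV = 1.602e-19           # one electronvolt in joules

rest_energy_joule = M_ELECTRON * C**2
rest_energy_mev = rest_energy_joule / EV / 1e6

print(f"{rest_energy_mev:.3f} MeV")  # about 0.511 MeV
```

The result, about 0.511 MeV, is the characteristic rest energy mentioned in section 5.2.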
Before energy, mass became a universal measure for the amount of matter,
serving as a measure for gravity as well as for the amount of heat that a subject absorbs when heated by one degree. Energy and mass are general expressions of physical interaction. This applies to entropy and related thermodynamic concepts too. In contrast,
the rest energy and the rest mass of a particle or an atom are characteristic magnitudes.
Velocity is a measure for motion, but if it concerns physically qualified things, linear momentum
(quantity of motion, the product of mass and velocity) turns out to be more significant. The same applies to angular momentum (quantity of rotation, the product of moment of inertia and angular frequency).
In the absence of external forces, linear and angular momentum are subject to conservation laws. Velocity, linear and angular momentum, and moment of inertia are not expressed by a single number (a scalar) but by vectors or tensors. Relativity theory combines
energy (a scalar) with linear momentum (a vector with three components) into a single vector, having four components (3.3).
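The relativistic combination of energy and momentum can be illustrated by the invariant of this four-vector: E² − (pc)² equals (m₀c²)² in every inertial frame. A minimal sketch in natural units (c = 1; the function name is mine):

```python
import math

# Energy-momentum four-vector invariant:
# E**2 - (p*c)**2 = (m0*c**2)**2 in every inertial frame.
C = 1.0  # natural units: energies, momenta and masses share one unit

def energy(m0, p):
    """Total energy of a free particle with rest mass m0 and momentum p."""
    return math.sqrt((m0 * C**2)**2 + (p * C)**2)

m0 = 0.511                     # electron rest mass in MeV (natural units)
for p in (0.0, 0.3, 2.5):      # three different momenta (different frames)
    e = energy(m0, p)
    invariant = math.sqrt(e**2 - (p * C)**2)
    assert abs(invariant - m0) < 1e-12   # the rest mass is frame-independent
```

Whatever the momentum, the invariant recovers the rest mass, which is why the rest mass (not the velocity-dependent mass) characterizes a particle.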
b. According to Newton’s third law, the mechanical force is a subject-subject relation.
If A exerts a force F on B, then B exerts a force –F on A. The minus sign indicates that the two forces, equal in magnitude, have opposite directions. The third law has exerted
a strong influence on the development of physics for a long time. In certain circumstances, the law of conservation of linear momentum can be derived from it. However, nowadays physicists allot higher priority to the conservation law than to Newton’s
third law. In order to apply Newton’s laws when more than one force is acting, we have to consider the forces simultaneously. This does not lead to problems in the case of two forces acting on the same body. But the third law is especially important
for action at a distance, inherent in the Newtonian formulation of gravity, electricity, and magnetism. In Einstein’s theory of relativity, simultaneity at a distance turns out to depend on the motion of the reference system. The laws of conservation
of linear momentum and energy turn out to be easier to adapt to relativity theory than Newton’s third law. Now one describes the interaction as an exchange of energy and momentum (mediated by a field particle like a photon). This exchange requires a
certain span of time.
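As an illustration of the priority of the conservation laws, the following sketch checks that a one-dimensional elastic collision (the boundary case mentioned earlier) conserves both linear momentum and kinetic energy. The formulas are the standard textbook ones; the numerical values are arbitrary:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities after a one-dimensional elastic collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0   # illustrative masses and velocities
u1, u2 = elastic_collision_1d(m1, v1, m2, v2)

# Momentum and kinetic energy are both conserved (elastic boundary case).
assert abs((m1 * v1 + m2 * v2) - (m1 * u1 + m2 * u2)) < 1e-12
assert abs((m1 * v1**2 + m2 * v2**2) - (m1 * u1**2 + m2 * u2**2)) < 1e-12
```

In an inelastic collision only the first assertion would hold; part of the kinetic energy is then converted into internal energy, which is what makes the process irreversible.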
Newton’s second law provides the relation between force and momentum: the net force equals the change of momentum per unit of time. The law of inertia seems
to be deducible from Newton’s second law. If the force is zero, momentum and hence velocity are constant, or so it is argued. However, if the first law were not valid, there could be a different law, assuming that each body experiences a frictional
force, dependent on speed, in a direction opposite to the velocity. (In its simplest form, F = –bv, b > 0.) Accordingly, if the total force on a body is zero, the body would be at rest. A unique
reference system would exist in which all bodies on which no forces act would be at rest. This would agree with Aristotle’s mechanics, but it contradicts both the classical principle of relativity and the modern one. The principle of relativity is an
alternative expression of the law of inertia, pointing out that absolute (non-relative) uniform motion does not exist. Just like spatial position on the one hand and interaction on the other, motion is a universal relation.
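The hypothetical friction law discussed above can be made concrete in a small simulation. Under F = –bv every body coasts to rest, singling out a unique rest frame, contrary to the principle of relativity. The coefficients below are illustrative:

```python
# Hypothetical Aristotelian-style law F = -b*v, integrated with a simple
# Euler step: every body coasts to rest, singling out one rest frame.
b, m = 0.5, 1.0      # friction coefficient and mass (illustrative values)
v, dt = 10.0, 0.01   # initial velocity and time step

for _ in range(5000):
    v += (-b * v / m) * dt   # Newton's second law with F = -b*v

assert abs(v) < 1e-3   # the body has (numerically) come to rest
```

In Newtonian mechanics, by contrast, a body on which the net force vanishes keeps whatever velocity it has, in any inertial frame.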
Besides acting on a rigid body, a force may act on a fluid, usually in the form of a pressure (i.e., force per area). A pressure difference causes a change of volume or a current subject to Bernoulli’s law, if
the fluid is incompressible. Besides, there are non-mechanical forces causing currents. A temperature gradient causes a heat current, chemical potentials drive material flows (e.g., diffusion) and an electric potential difference directs an electric current.
To find a metric for a thermodynamic or an electric potential is not an easy task. On the basis of an analysis of idealized Carnot-cycles, William Thomson (later Lord Kelvin) established the theoretical
metric for the thermodynamic temperature scale.
The practical definition of the temperature scale takes this theoretical scale as a norm.
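Kelvin’s thermodynamic metric can be illustrated by the Carnot efficiency it entails: for a reversible cycle between two reservoirs, Q_hot/Q_cold = T_hot/T_cold, so the heat ratio itself defines the temperature scale, and the maximum efficiency depends on the temperatures alone. A minimal sketch (the function name is mine):

```python
# Kelvin's thermodynamic temperature: for a reversible Carnot cycle
# Q_hot / Q_cold = T_hot / T_cold, so heat ratios *define* the scale.
def carnot_efficiency(t_hot, t_cold):
    """Maximum efficiency of a heat engine between two reservoirs (kelvin)."""
    return 1.0 - t_cold / t_hot

eta = carnot_efficiency(373.15, 273.15)   # boiling vs. freezing water
assert 0.26 < eta < 0.27                  # roughly 27 per cent at best
```

No real engine between these reservoirs can do better, which is what makes the scale independent of any particular thermometric substance.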
The Newtonian force can sometimes be written as the derivative of a potential energy (i.e.,
energy as a function of spatial position). Since the beginning of the nineteenth century, the concept of a force has been incorporated in the concept of a field. At first a field was considered merely a mathematical device, until Maxwell proved the electromagnetic
field to have reality of its own. A field is a physical function projected on space. Usually one assumes the field to be continuous and differentiable almost everywhere. A field may be constant or variable. There are scalar fields (like the distribution of
temperature in a gas), vector fields (like the electrostatic field) and tensor fields (like the electromagnetic field). A field of force is called ‘conservative’ if the forces are derivable from a space-dependent potential energy. This applies
to the classical gravitational and electrostatic fields. It does not apply to the Lorentz force, because it depends on the velocity of a charged body with respect to a magnetic field. The Lorentz force and Maxwell’s equations for the electromagnetic
field are derivable from a gauge-invariant vector potential. ‘Gauge-invariance’ is the relativistic successor to the static concept of a conservative field.
c. A further analysis of thermodynamics and electricity makes clear that current is a third projection, now from the physical onto the kinetic relation frame. The concept of entropy
points to a general property of currents. In each current, entropy is created, making the current irreversible.
In a system in which currents occur, entropy increases. Only if a system as a whole is in equilibrium, there are no net currents and the entropy is constant. Just as several mechanical forces can balance each other, so can thermodynamic forces and currents.
This leads to mutual relations like thermo-electricity.
The laws of thermodynamics are generally valid, independent of the specific character of a physical thing or aggregate. For a limited set of specific systems (e.g., a gas consisting of
similar molecules), statistical mechanics is able to derive the second law from mechanical interactions, starting from assumptions about their probability.
Whereas the thermodynamic law states that the entropy in a closed system is constant or increasing, the statistical law allows of fluctuations. The source of this difference is that thermodynamics supposes matter to be continuous, whereas statistical mechanics
takes into account the molecular character of matter.
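The difference between the thermodynamic and the statistical formulation can be illustrated by a toy model in the spirit of the Ehrenfest urn scheme (my construction, not the author's): particles hop at random between two boxes, and the occupation fluctuates around equilibrium instead of settling at a fixed value:

```python
import random

# Ehrenfest-style two-box toy model: N particles hop at random between
# two boxes. The occupation fluctuates around N/2, as statistical
# mechanics predicts, instead of sitting at a fixed equilibrium value.
random.seed(42)
N, steps = 100, 20000
left = N                      # start far from equilibrium: all on the left

history = []
for _ in range(steps):
    if random.randrange(N) < left:
        left -= 1             # a randomly chosen particle moves left -> right
    else:
        left += 1             # ... or right -> left
    history.append(left)

tail = history[steps // 2:]   # discard the relaxation phase
mean = sum(tail) / len(tail)
assert 40 < mean < 60         # the occupation fluctuates around N/2 = 50
assert max(tail) != min(tail) # but never settles exactly
```

The molecular picture thus reproduces the thermodynamic tendency toward equilibrium on average, while allowing the fluctuations that a strictly continuous theory excludes.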
There are many different interactions, like electricity,
magnetism, contact forces (e.g., friction), chemical forces (e.g., glue), or gravity. Some are reducible to others. The contact forces turn out to be of an electromagnetic nature, and chemical forces are reducible to electrical ones.
Besides the general unification discussed above allowing of the comparison of widely differing interactions, a characteristic unification can be discerned. Maxwell’s unification of electricity and magnetism implies
these interactions to have the same character, being subject to the same specific cluster of laws and showing symmetry. The fact that they can still be distinguished points to an asymmetry, a break of symmetry. The study of characteristic symmetries and symmetry
breaking supplies an important tool for achieving a characteristic unification of natural forces.
Since the middle of the twentieth century, physics has discerned four fundamental specific
interactions: gravity and electromagnetic interaction, besides the strong and weak nuclear forces. Later on, the electromagnetic and weak forces were united into the electroweak interaction, whereas the strong force is reducible to the colour force
between quarks. In the near future, physicists expect to be able to unite the colour force with the electroweak interaction. The ultimate goal, the unification of all four forces, is still far away.
These characteristic interactions are distinguished in several ways, first by the particles between which they act. Gravity acts between all particles, the colour force only between quarks, and
the strong force only between particles composed of quarks. A process involving a neutrino is weak, but the reverse is not always true.
Another difference is their relative strength.
Gravity is weakest and only plays a part because it cannot be neutralized. It manifests itself only on a macroscopic scale. The other forces are so effectively neutralized that the electrical interaction was largely unknown until the eighteenth century, and
the nuclear forces were not discovered before the twentieth century. Gravity conditions the existence of stars and systems of stars.
Next, gravity and electromagnetic interaction have
an infinite range, the other forces do not act beyond the limits of an atomic nucleus. For gravity and electricity the inverse-square law is valid (the force is inversely proportional to the square of the distance from a point-like source). This law is classically
expressed in Newton’s law of gravity and Coulomb’s electrostatic law, with mass and charge, respectively, acting as a measure of the strength of the source. A comparable law does not apply to the other forces, and the lepton and baryon numbers do not
act as a measure for their sources. As a function of distance, the weak interaction decreases much faster than quadratically. The colour force is nearly constant over a short distance (of the order of the size of a nucleus), beyond which it decreases abruptly.
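The inverse-square law shared by gravity and electrostatics is easy to state in code. In the sketch below, `strength` bundles the constants and the masses or charges of the interacting subjects (the function name is mine):

```python
# Inverse-square law shared by Newtonian gravity and Coulomb's law:
# doubling the distance to a point source quarters the force.
def inverse_square_force(strength, r):
    """Force from a point source; 'strength' bundles constants and charges/masses."""
    return strength / r**2

f1 = inverse_square_force(1.0, 1.0)
f2 = inverse_square_force(1.0, 2.0)
assert abs(f2 - f1 / 4) < 1e-12   # F(2r) = F(r) / 4
```

Neither the weak nor the colour force follows any such simple function of distance, which is one way the four interactions are distinguished.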
The various interactions also differ because of the field particles involved. Each fundamental interaction corresponds to a field in which quantized currents occur. For gravity,
this is an unconfirmed hypothesis. Field particles have an integral spin and they are bosons (3.2, 4.4). If the spin is even (0 or 2), it concerns an attractive force between equal particles and a repulsive force between opposite particles (if applicable).
For an odd spin it is the other way around. The larger the field particle’s rest mass, the shorter is the range of the interaction. If the rest mass of the field particles is zero (as is the case with photons and gravitons), the range is infinite.
Unless mentioned otherwise, the field particles are electrically neutral.
The mean lifetime of spontaneous decay differs widely. The stronger the interaction causing a transition, the
faster the system changes. If a particle decays because of the colour force or strong force, it happens in a very short time (of the order of 10⁻²³ to 10⁻¹⁹ sec). Particles decaying due to weak interaction have a relatively long lifetime
(10⁻¹² sec for a tauon up to 900 sec for a free neutron). Electromagnetic interaction is more or less in between.
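The role of the mean lifetime can be illustrated with the exponential decay law N(t) = N₀·e^(−t/τ), which holds for every interaction strength; only the size of τ distinguishes strong, electromagnetic, and weak decays. A sketch using the free neutron's lifetime of about 900 sec mentioned above:

```python
import math

# Spontaneous decay: the surviving fraction is exp(-t / tau), where tau
# is the mean lifetime. The half-life equals tau * ln 2 for any
# interaction; only the size of tau differs between the forces.
def surviving_fraction(t, tau):
    return math.exp(-t / tau)

tau_neutron = 900.0                      # free-neutron lifetime from the text, in s
half_life = tau_neutron * math.log(2)    # about 624 s
assert abs(surviving_fraction(half_life, tau_neutron) - 0.5) < 1e-12
```

For a strong decay with τ of order 10⁻²³ s the same formula applies; the enormous difference in τ is what marks the interaction as strong rather than weak.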
In high-energy physics, symmetry considerations and group theory play an important part in the analysis of collision processes. New properties like isospin and strangeness have led to the introduction of groups named SU(2) and SU(3) and the discovery
of at first three, later six quarks. Quantum
electrodynamics reached its summit shortly after the Second World War, but the theories of the other interactions, developed only after 1970, are less manageable. Now each field has a symmetry property called gauge invariance, related to the laws of conservation of electric
charge, baryon number and lepton number. The
appropriate theory is the standard model, which since the discovery of the J/ψ particle in 1974 has successfully explained a number of properties and interactions of subatomic particles. However, the general theory of relativity is still at variance with quantum
electrodynamics, with the electroweak theory of Weinberg and Salam, as well as with quantum chromodynamics.
These fundamental interactions are specifications of the abstract concept of interaction being the universal physical and chemical relation. Their laws, like those of Maxwell for electromagnetism,
form a specific set, which may be considered a character. But this character does not determine a class of things or events, but a class of relations.
5.2. The character of electrons
Ontology, the doctrine of on (or ontos, Greek for being), aims to answer the question of how matter is composed according to present-day insights. Since the beginning
of the twentieth century, many kinds of particles received names ending with on, like electron, proton, neutron and photon. At first sight, the relation with ontology seems to be obvious.
Yet, not many physicists would affirm that an electron is the essence of electricity, that the proton forms the primeval matter, that the neutron and its little brother, the neutrino, have the nature of being neutral, or that in the photon light comes into
being, and in the phonon sound. In pion, muon, tauon, and kaon, on is no more than a suffix of the letters π, μ, τ and K, whereas Paul Dirac baptized fermion and boson after Enrico Fermi and Satyendra Bose. In 1833 Michael Faraday, advised
by William Whewell, introduced the words ion, kation, and anion, referring to the Greek word for to go. In an electrolyte, an ion moves from or to an electrode, an anode or cathode (names proposed by Whewell as well). An intruder is the positive electron.
Meant as positon, the positron received an additional r, possibly under the influence of electron or new words like magnetron and cyclotron, which however are machines, not particles.
After 1925, quantum physics and high-energy physics allowed of the study of the characters of elementary physical things. Most characters were discovered after 1930. But the discovery of the electron (1897), of the internal structure of an atom, composed
of a nucleus and a number of electrons (1911), and of the photon (1905) preceded the quantum era. These are typical examples of characters founded in the quantitative, spatial, and kinetic projections of physical interaction. In section 5.1, these projections
were pointed out to be energy, force or field, and current.
An electron is characterized by a specific
amount of mass and charge and is therefore quantitatively founded. The foundation is not in the quantitative relation frame itself (because that is not physical), but in the most important quantitative projection of the physical relation frame. This is energy,
expressing the quantity of interaction. Like other particles, an electron has a typical rest energy, besides specific values for its electric charge, magnetic moment and lepton number.
In chapter 4, I argued that an electron has the character of a wave packet as well, kinetically qualified and spatially founded, anticipating physical interactions. An electron has a specific physical character and a generic kinetic character.
The two characters are interlaced within the at first sight simple electron. The combined dual character is called the wave-particle duality. Electrons share it with all other elementary particles. As a consequence of the kinetic character and the
inherent Heisenberg relations, the position of an electron cannot be determined much better than within 10⁻¹⁰ m (about the size of a hydrogen atom). But the physical character implies that the electron’s collision diameter (being a measure
of its physical size) is less than 10⁻¹⁷ m.
Except for quarks, all quantitatively founded particles are leptons, to be distinguished from field particles and baryons
(5.3, 5.4). Leptons are not susceptible to the strong nuclear force or the colour force. They are subject to the weak force, sometimes to electromagnetic interaction, and like all matter to gravity. Each lepton has a positive or negative value for the lepton
number (L), whose significance appears in the occurrence or non-occurrence of certain processes. Each process is subject to the law of conservation of lepton number, i.e., the total lepton number cannot change. For instance, a neutron (L=0) does not decay
into a proton and an electron, but into a proton (L=0), an electron (L=1) and an antineutrino (L=-1). The lepton number is just as characteristic for a particle as its electric charge. For non-leptons the lepton number is 0, for leptons it is +1 or -1.
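The bookkeeping behind such selection rules can be sketched as a small check (the particle labels and function name are mine):

```python
# Lepton-number bookkeeping for neutron beta decay, following the text:
# n -> p + e- + anti-nu_e. L is +1 for leptons, -1 for antileptons, 0 otherwise.
LEPTON_NUMBER = {"n": 0, "p": 0, "e-": 1, "nu_e": 1, "anti-nu_e": -1}

def lepton_number_conserved(before, after):
    total = lambda names: sum(LEPTON_NUMBER[x] for x in names)
    return total(before) == total(after)

assert lepton_number_conserved(["n"], ["p", "e-", "anti-nu_e"])   # allowed decay
assert not lepton_number_conserved(["n"], ["p", "e-"])            # forbidden decay
```

The second assertion shows why a neutron cannot decay into a proton and an electron alone: the antineutrino is needed to keep the total lepton number at zero.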
Leptons satisfy a number of characteristic laws. Each particle has an electric charge being an integral multiple (positive, negative or zero) of the elementary charge. Each particle corresponds
with an antiparticle having exactly the same rest mass and lifetime, but opposite values for charge and lepton number. Having a half-integral spin, leptons are fermions satisfying the exclusion principle and the characteristic Fermi-Dirac statistics (4.3).
Three generations of leptons are known, each consisting of a negatively charged particle, a neutrino, and their antiparticles. These generations are related to similar
generations of quarks (5.3). A tauon decays spontaneously into a muon, and a muon into an electron. Both are weak processes, in which simultaneously a neutrino and an anti-neutrino are emitted.
The leptons display little diversity: their number is exactly 12. Like their diversity, the variation of leptons is restricted. It only concerns their external relations: their position, their linear and angular momentum, and the orientation of their
magnetic moment or spin relative to an external magnetic field.
This description emphasizes the quantitative aspect of leptons. But leptons are first of all physically qualified. Their
specific character determines how they interact by electroweak interaction with each other and with other physical subjects, influencing their coming into being, change and decay.
Electrons are by far the most important leptons, having the disposition to become part of systems like atoms, molecules and solids. The other leptons only play a part in high-energy processes.
In order to stress the distinction between a definition and a character as a set of laws, I shall dwell a little longer on a hundred years of development of our knowledge of the electron.
Although more scientists were involved, it is generally accepted that Joseph J. Thomson in 1897 discovered the electron. He identified his cathode ray as a stream of particles and established roughly
the ratio e/m of their charge e and mass m, by measuring how an electric and/or magnetic field deflects the cathode rays. In 1899 Thomson determined the value of e separately, allowing him to calculate the value
of m. Since then, the values of m and e, which may be considered as defining the electron, have been determined with increasing precision. In particular Robert Millikan did epoch-making work, between 1909 and 1916. Almost simultaneously
with Thomson, Hendrik Antoon Lorentz observed that the Zeeman effect (1896) could be explained by the presence in atoms of charged particles having the same value for e/m as the electron. Shortly afterwards, the particles emerging from β-radioactivity
and the photoelectric effect were identified as electrons.
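Thomson's method can be illustrated with the circular deflection of an electron in a uniform magnetic field, where the radius of the orbit gives e/m = v/(Br). The numbers below are illustrative and chosen to be mutually consistent with the accepted value:

```python
# Thomson-style e/m estimate: in a uniform magnetic field B, a particle
# of speed v moving on a circle of radius r satisfies e/m = v / (B * r).
v = 2.0e7      # speed in m/s (a typical cathode-ray value)
B = 1.0e-3     # magnetic field in tesla
r = v / (B * 1.7588e11)   # radius a real electron would follow, in m

e_over_m = v / (B * r)
assert abs(e_over_m - 1.7588e11) < 1e7   # the accepted ~1.76e11 C/kg
```

Measuring r, v and B separately fixes only the ratio e/m; a second, independent measurement of e (as Millikan later supplied) is needed to obtain the mass m itself.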
The mass m depends on the electron’s speed, as was first established experimentally by Walter Kaufmann, later
theoretically by Albert Einstein. Since then, instead of the mass m the rest mass m₀ is characteristic for a particle. Between 1911 and 1913, Ernest Rutherford and Niels Bohr developed the atomic model in which electrons move around
a much more massive nucleus. The orbital angular momentum turned out to be quantized. In 1923 Louis de Broglie made clear that an electron sometimes behaves like a wave, interpreted as the bearer of probability by Max Born in 1926 (4.3). In 1925, Samuel Goudsmit
and George Uhlenbeck suggested a new property, half-integral spin, connected to the electron’s intrinsic magnetic moment. In the same year, Wolfgang Pauli discovered the exclusion principle. Enrico Fermi and Paul Dirac derived the corresponding statistics
in 1926. Since then, the electron is a fermion, playing a decisive part in all properties of matter (4.3, 5.3, 5.5). In 1930 it became clear that in β-radioactivity besides the electron a neutrino emerges from a nucleus. Neutrinos were later on
recognized as members of the lepton family characterized by the electroweak interaction. β-radioactivity is not caused by electromagnetic interaction, but by the weak nuclear force. Electrons turned out not to be susceptible to strong nuclear forces.
In 1931 the electron got a brother, the positron or anti-electron. This affirmed that an electron has no eternal life, but may be created or annihilated together with a positron. In β-radioactivity, too, an electron emerges or disappears (in a nucleus,
an electron cannot exist as an independent particle), but apart from these processes, the electron is the most stable particle we know besides the proton. According to Dirac, the positron is a hole in the nether world of an infinite number of electrons having
a negative energy (4.3). After the Second World War, Richard Feynman, Julian Schwinger, and Sin-Itiro Tomonaga developed quantum electrodynamics. This is a field theory in which the physical vacuum is not empty, but is the stage of spontaneous creations and
annihilations of virtual electron-positron pairs. Interaction with other (sometimes virtual) particles is partly responsible for the properties of each particle. A showpiece is the theoretical calculation of the magnetic moment of the electron to eleven decimal places, a precision surpassed only by the experimental measurement of the same quantity to twelve decimal places. Moreover, the two values differ only in the eleventh decimal, within the theoretical margin of error.
Finally, the electron got two cousins, the muon and the tauon.
Besides these scientific developments, electronics revolutionized the world of communication, information, and control.
Since Thomson's discovery, the concept of an electron has changed and expanded considerably. Besides being a particle having mass and charge, it is now a wave, a spinning top, a magnet, and a
fermion, half of a twin, and a lepton. Yet, few people doubt that we are still talking about the same electron.
What the essence of an electron is appears to be a hard question,
if ever posed. It may very well be a meaningless question. But we achieve a growing insight into the laws constituting the electron’s character, determining the electron’s relations with other things and the processes
in which it is involved. The electron's charge means that two electrons exert a force on each other according to the laws of Coulomb and Lorentz. The mass follows from the electron's acceleration in an electric and/or magnetic field, according to Maxwell's laws. The lepton number only makes sense because of the law of conservation of lepton number, allowing some processes and prohibiting others. Electrons are fermions, satisfying the exclusion principle and the Fermi-Dirac distribution law.
The character of electrons is not logically given by a definition, but physically by a specific set of laws, which are successively discovered and systematically
connected by experimental and theoretical research.
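The way conservation laws "allow some processes and prohibit others" can be illustrated with a small bookkeeping sketch. The table of quantum numbers (charge in units of e, lepton number) uses standard values, but the function names below are hypothetical helpers for illustration, not taken from any physics library.

```python
# Illustrative bookkeeping for conservation laws; PARTICLES, conserved and
# allowed are hypothetical names invented for this sketch.

PARTICLES = {
    "n":       {"charge": 0,  "lepton": 0},   # neutron
    "p":       {"charge": +1, "lepton": 0},   # proton
    "e-":      {"charge": -1, "lepton": +1},  # electron
    "nu":      {"charge": 0,  "lepton": +1},  # neutrino
    "anti-nu": {"charge": 0,  "lepton": -1},  # antineutrino
}

def conserved(initial, final, quantity):
    """True if the summed quantum number is equal on both sides."""
    total = lambda side: sum(PARTICLES[p][quantity] for p in side)
    return total(initial) == total(final)

def allowed(initial, final):
    """A process must conserve every listed quantity to be allowed."""
    return all(conserved(initial, final, q) for q in ("charge", "lepton"))

# Beta decay with an antineutrino conserves both charge and lepton number...
print(allowed(["n"], ["p", "e-", "anti-nu"]))  # True
# ...but without the antineutrino, lepton number conservation prohibits it.
print(allowed(["n"], ["p", "e-"]))             # False
```

The same filter extends naturally to baryon number and other conserved quantities mentioned later in the chapter.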
An electron is to be considered an individual satisfying the character described above. A frequently heard objection to the assignment of individuality to electrons and other elementary particles is the impossibility of distinguishing one electron from another. Electrons are characteristically equal to each other, having much less variability than plants or animals, even less than atoms.
This objection can be traced back to the still influential worldview of mechanism. This worldview assumed each particle
to be identifiable by objective kinetic properties like its position and velocity at a certain time. Quantum physics observes that the identification of physically qualified things requires a physical interaction. In general, this interaction influences the
particle’s position and momentum (4.3). Therefore, the electron’s position and momentum cannot be determined with unlimited accuracy, as follows from Heisenberg’s relations. This means that identification in a mechanistic sense is not always
possible. Yet, in an interaction such as a measurement, an electron manifests itself as an individual.
If an electron is part of an atom, it can be identified by its state, because the exclusion principle precludes that two electrons would occupy the same state. The two electrons in the helium atom
exchange their states continuously without changing the state of the atom as a whole. But it cannot be doubted that at any moment there are two electrons, each with its own mass, charge and magnetic moment. For instance, in the calculation of the
energy levels the mutual repulsion of the two electrons plays an important part.
The individual existence of a bound electron depends on the binding energy being much smaller than its
rest energy. The binding energy is the energy needed to liberate an electron from an atom. It varies from a few eV (for the outer electrons) to several tens of keV (for the inner electrons in a heavy element like uranium). The electron's rest energy is about 0.5 MeV,
much larger than its binding energy in an atom (13.6 eV).
To keep an electron as an independent particle in a nucleus would require a binding energy of more than 100 MeV, much more than the electron's rest energy of 0.5 MeV. For this reason, physicists argue that electrons in a nucleus cannot exist
as independent, individual particles, like they are in an atom’s shell.
In contrast, protons and neutrons in a nucleus satisfy the criterion that an independent particle has a rest energy substantially larger than its binding energy. Their binding energy is about 8 MeV, their rest energy almost 1000 MeV. A nucleus is capable of emitting an electron (this is β-radioactivity). The electron's existence starts at the emission
and eventually ends at the absorption by a nucleus. Because of the law of conservation of lepton number, the emission of an electron is accompanied by the emission of an anti-neutrino, and at the absorption of an electron a neutrino is emitted. This would not be the case if the electron could exist
as an independent particle in the nucleus.
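The criterion of the last two paragraphs, that a bound particle exists independently only if its binding energy is much smaller than its rest energy, can be put into a few lines of arithmetic. The numbers are those quoted in the text; the bare comparison below is only a crude stand-in for "much smaller than".

```python
# Rough numerical sketch of the stability criterion; all energies in eV,
# using the figures quoted in the text.

cases = {
    "electron in an atom":                 (13.6, 0.5e6),
    "electron inside a nucleus (hypoth.)": (100e6, 0.5e6),
    "nucleon in a nucleus":                (8e6, 1000e6),
}

for name, (e_bind, e_rest) in cases.items():
    ok = e_bind < e_rest  # crude stand-in for "much smaller than"
    print(f"{name}: binding/rest = {e_bind / e_rest:.1e} -> "
          f"{'independent' if ok else 'not independent'}")
```

The ratio spans seven orders of magnitude between an electron in an atom and the hypothetical electron confined in a nucleus, which is why the qualitative criterion is so sharp.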
Electrons display their characteristic properties more as components of atoms, molecules, and solids, and in processes, than as free particles. The half-integral spin of electrons was discovered in the investigation of atomic spectra. The electron's fermion character largely determines the shell structure of atoms. In 1930, Wolfgang Pauli suggested the existence of neutrinos because of the character of β-radioactivity. The lepton number was discovered by an analysis of specific nuclear reactions.
Electrons have the affinity or propensity of functioning as components of atoms and molecules because electrons share the electromagnetic interaction with nuclei. Protons and electrons have equal but opposite charges, allowing the formation of neutral atoms, molecules, and solids. Electric neutrality is of tremendous
importance for the stability of these systems. This tertiary characteristic determines the meaning of electrons in the cosmos.
5.3. The quantum ladder
An important spatial manifestation of interaction is the force between two spatially separated bodies. An atom or molecule having a spatially founded character consists of a number of
nuclei and electrons kept together by the electromagnetic force. More generally, any interaction is spatially projected on a field.
Sometimes a field can be described as the
spatial derivative of the potential energy. A set of particles constitutes a stable system if the potential energy has an appropriate shape, characteristic for the spatially founded structure. In a spatially founded structure, the relative spatial positions
of the components are characteristic, even if their relative motions are taken into account. Atoms have a spherical symmetry restricting the motions of the electrons. In a molecule, the atoms or ions have characteristic relative positions, often with a specific
symmetry. In each spatially founded character a number of quantitatively founded characters are interlaced.
It is a remarkable fact that in an atom the nucleus acts like a quantitatively founded character, whereas the nucleus itself is a spatial configuration of protons and neutrons kept together by forces. The nucleus itself has a spatially founded
character, but in the atom it has the disposition to act as a whole, characterized by its mass, charge and magnetic moment. Similarly, a molecule or a crystal is a system consisting of a number of atoms or ions and electrons, all acting like quantitatively
founded particles. Externally, the nucleus in an atom and the atoms or ions in a molecule act as a quantitatively founded whole, as a unit, while preserving their own internal spatially founded structure.
However, an atom bound in a molecule is not completely the same as a free atom. In contrast to a nucleus, a free atom is electrically neutral and it has a spherical symmetry. Consequently, it cannot easily interact with other atoms or molecules, except
in collisions. In order to become a part of a molecule, an atom has to open up its tertiary character. This can be done in various ways. The atom may absorb or eject an electron, becoming an ion. A common salt molecule does not consist of a neutral sodium
atom and a neutral chlorine atom, but of a positive sodium ion and a negative chlorine ion, attracting each other by the Coulomb force. This is called heteropolar or ionic bonding. Any change of the spherical symmetry of the atom's electron cloud leads to the relatively weak Van der Waals interaction. A very strong bond results if two atoms share an electron pair. This homopolar or covalent bond occurs in diatomic molecules like hydrogen, oxygen, and nitrogen, in diamond, and in many carbon compounds. Finally,
especially in organic chemistry, the hydrogen bond is important. It means the sharing of a proton by two atom groups.
The possibility of being bound into a larger configuration is a
very significant tertiary characteristic of many physically qualified systems, determining their meaning in the cosmos.
The first stable system studied by physics is the solar system, investigated in the seventeenth century by Kepler, Galileo, Huygens, and Newton. The law of gravity, the mechanical laws of motion, and conservation laws determine the character of planetary
motion. The solar system is not unique, there are more stars with planets, and the same character applies to a planet with its moons, or to a double star. Any model of the system presupposes its isolation from the rest of the world, which is the case only
approximately. This approximation is pretty good for the solar system, less good for the system of the sun and each planet apart, and pretty bad for the system of earth and moon.
Spatially founded physical characters display a large disparity, and various specific subtypes appear. According to the standard model (5.1), these characters form a hierarchy, called the quantum ladder. At the first rung there are six (or eighteen, see below) different quarks with their antiquarks, grouped into three generations related to those of the leptons, as follows from analogous processes.
Like a lepton, a quark is quantitatively founded: it has no structure. But a quark cannot exist as a free particle. Quarks are confined as a duo in a meson (e.g., a pion) or as a trio in a baryon (e.g., a proton or a neutron) or an antibaryon.
Confinement is a tertiary characteristic, but it does not stand apart from the secondary characteristics of quarks, their quantitative properties. Whereas quarks have a charge of 1/3 or 2/3 times the elementary charge, their combinations satisfy the law that
the electric charge of a free particle can only be an integral multiple of the elementary charge. Likewise, in confinement the sum of the baryon numbers (±1/3 for quarks and antiquarks) always yields an integral number. For a meson this number is 0,
for a baryon it is +1, for an antibaryon it is -1.
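These integrality rules can be checked directly from the standard quark quantum numbers. The sketch below uses exact fractions; the helper names are illustrative, not from any library.

```python
from fractions import Fraction

# Charges (in units of e) and baryon numbers of the up and down quarks;
# duo and trio combinations come out integral, single quarks do not.

F = Fraction
u = (F(2, 3), F(1, 3))    # up quark: (charge, baryon number)
d = (F(-1, 3), F(1, 3))   # down quark

def anti(q):
    """Antiquark: both quantum numbers change sign."""
    return (-q[0], -q[1])

def totals(combo):
    """Summed charge and baryon number of a combination."""
    return (sum(q[0] for q in combo), sum(q[1] for q in combo))

proton = totals([u, u, d])        # uud
neutron = totals([u, d, d])       # udd
pion_plus = totals([u, anti(d)])  # u plus anti-d

print(proton)     # charge +1, baryon number +1
print(neutron)    # charge 0, baryon number +1
print(pion_plus)  # charge +1, baryon number 0
# A single quark, by contrast, has fractional values:
print(totals([u]))  # charge 2/3, baryon number 1/3
```

Confinement thus shows up in the arithmetic: only the permitted duos and trios yield the integral values that free particles must have.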
Between quarks the colour force is acting, mediated by gluons. The colour force has no effect on leptons and is related to the strong
force between baryons. In a meson the colour force between two quarks hardly depends on their mutual distance, meaning that they cannot be torn apart. If a meson breaks apart, the result is not two separate quarks but two quark-antiquark pairs.
Quarks are fermions, they satisfy the exclusion principle. In a meson or baryon, two identical quarks cannot occupy the same state. But an omega particle (sss) consists of three strange quarks
having the same spin. This is possible because each of the six quark ‘flavours’ exists in three variants, indicated by a ‘colour’. For the antiquarks three complementary colours are available. The metaphor of ‘colour’
is chosen because the colours are able to neutralize each other, like ordinary colours can be combined to produce white. This can be done in two ways, in a duo by adding a colour to its anticolour, or in a trio by adding three different colours or anticolours.
The law that mesons and baryons must be colourless yields an additional restriction on the number of possible combinations of quarks. A white particle is neutral with respect to the colour force, like an uncharged particle is neutral with respect to the Coulomb
force. Nevertheless, an electrically neutral particle may exert electromagnetic interaction because of its magnetic moment. This applies e.g. to a neutron, but not to a neutrino. Similarly, by the exchange of mesons, the colour force manifests itself as the
strong nuclear force acting between baryons, even if the baryons are ‘white’. Two quarks interact by exchanging gluons, thereby changing colour.
The twentieth-century standard
model has no solution to a number of problems. Why only three generations? If all matter above the level of hadrons consists of particles from the first generation, what is the tertiary disposition of the particles of the second and third generation? Should
the particles of the second and third generation be considered excited states of those of the first generation? Why does each generation consist of two quarks and two leptons (with corresponding antiparticles)? What is the origin of the mass differences between
various leptons and quarks?
The last question might be the only one to receive an answer in the twenty-first century, when the existence of the Higgs particle and its mass were experimentally established (2012). For the other problems, at the end of the twentieth century no experiment had been proposed providing sufficient information to suggest a solution.
The second level of the hierarchy consists of hadrons: baryons, having half-integral spin, and mesons, having integral spin. Although the combination of quarks is subject to severe restrictions, there
are quite a few different hadrons. A proton consists of two up quarks and one down quark (uud), and a neutron is composed of one up and two down quarks (udd). These two nucleons are the lightest baryons, all others being called hyperons. A pion consists of a quark-antiquark pair: dd̄ or uū (charge 0), dū (−e), or ud̄ (+e). As a
free particle, only the proton is stable, whereas the neutron is stable within a nucleus.
All other hadrons have a very short mean lifetime, a free neutron having the longest (900 sec). Their diversity is much larger than that of leptons and of quarks. Based on symmetry relations, group theory orders the hadrons into sets of e.g. eight baryons
or ten mesons.
For a large part, the interaction of hadrons consists of rearranging quarks accompanied by the creation and annihilation of quark-antiquark pairs and lepton-antilepton
pairs. The general laws of conservation of energy, linear and angular momentum, the specific laws of conservation of electric charge, lepton number and baryon number, and the laws restricting electric charge and baryon number to integral values, characterize
the possible processes between hadrons in a quantitative sense. Besides, the fields described by quantum electrodynamics and quantum chromodynamics characterize these processes in a spatial sense, and the exchange of field particles in a kinetic way.
Atomic nuclei constitute the third layer in the hierarchy. With the exception of hydrogen, each nucleus consists
of protons and neutrons, which together determine the coherence, binding energy, stability, and lifetime of the nucleus. The mass of the nucleus is the sum of the masses of the nucleons less the mass equivalent to the binding energy. Decisive is the balance between the repulsive electric force between the protons and the attractive strong nuclear force, which binds the nucleons independently of their electric charge. In heavy nuclei, the surplus of neutrons compensates for the mutual repulsion of the protons.
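The statement that the nuclear mass equals the sum of the nucleon masses less the mass equivalent of the binding energy can be illustrated for helium-4. The rest energies below are standard rounded values quoted from memory; treat them as approximate.

```python
# Mass defect of the helium-4 nucleus (two protons, two neutrons).
# Rest energies in MeV; approximate standard values.

m_proton  = 938.272   # MeV
m_neutron = 939.565   # MeV
m_helium4 = 3727.379  # MeV (the bare nucleus, not the neutral atom)

mass_sum = 2 * m_proton + 2 * m_neutron
binding_energy = mass_sum - m_helium4  # the "missing" mass, as energy

print(f"sum of nucleon masses: {mass_sum:.3f} MeV")
print(f"binding energy:        {binding_energy:.1f} MeV (about 28 MeV)")
print(f"per nucleon:           {binding_energy / 4:.2f} MeV")
```

The roughly 7 MeV of binding energy per nucleon is indeed small compared with the nucleon rest energy of almost 1000 MeV, consistent with the criterion for independent existence discussed in section 5.2.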
To a large extent, the exclusion principle applied to neutrons and protons separately determines the stability of the nucleus and its internal energy states.
The nuclear force is negligible
for the external functioning of a nucleus in an atom or molecule. Only the mass of the nucleus, its electric charge, and its magnetic moment are relevant. Omitting the latter, we recognize two kinds of diversity in nuclei.
The first diversity concerns the number of protons. In a neutral atom it equals the number of electrons determining the atom’s chemical propensities. The nuclear charge together with the exclusion principle dominates the energy
states of the electrons, hence the position of the atom in the periodic system of elements.
The second diversity concerns the number of neutrons in the nucleus. Atoms having the same
number of protons but differing in neutron number are called isotopes, because they have the same position (topos) in the periodic system. They have similar chemical propensities.
The diversity of atomic nuclei is represented in a two-dimensional diagram, a configuration space. The horizontal axis represents the number of protons (Z = atomic number), the vertical axis the number of neutrons (N). In this diagram the isotopes (same
Z, different N) are positioned above each other. The configuration space is mostly empty, because only a restricted number of combinations of Z and N lead to stable or metastable (radioactive) nuclei. The periodic system of elements is a two-dimensional diagram
as well. Dmitri Mendeleev ordered the elements in a sequence according to a secondary property (the atomic mass), and below each other according to tertiary propensities (the affinity of atoms to form molecules, in particular compounds with hydrogen and oxygen).
Later on, the atomic mass was replaced by the atomic number Z. However, quantum physics made clear that the atomic chemical properties are not due to the nuclei, but to the electrons subject to the exclusion principle. The vertical ordering in the periodic
system concerns the configuration of the electronic shells. In particular the electrons in the outer shells determine the tertiary chemical propensities.
This is not an ordering according
to a definition in terms of necessary and sufficient properties distinguishing one element from the other, but according to their characters. The properties do not define a character, as essentialism assumes, but the character (a set of laws) determines the
properties and propensities of the atoms.
In the hierarchical order, we globally find an increase of spatial dimensions, of the diversity of characters, and of the variation within a character, as well as a decrease of the binding energy per particle and of the significance of the strong and weak nuclear forces. For the characters of atoms, molecules, and crystals, only
the electromagnetic interaction is relevant.
The internal variation of a spatially founded character is very large. Quantum physics describes the internal states with the help of a Hilbert
space, having the eigenvectors of the Hamiltonian operator as a base (2.3). A Hilbert space describes the ensemble of possibilities (in particular the energy eigenvalues) determined by the system’s character. In turn, the atom or molecule’s character
itself is represented by Schrödinger’s equation.
This equation is exactly solvable only in the case of two interacting particles, like the hydrogen atom, the helium ion, the lithium ion, and positronium.
In other cases, the equation serves as a starting point for approximate solutions, usually only manageable with the help of a computer.
The hierarchical connection implies that the spatially
founded characters are successively interlaced, for example nucleons in a nucleus, or the nucleus in an atom, or atoms in a molecule. Besides, these characters are interlaced with kinetically, spatially, and quantitatively qualified characters, and often with
biotically qualified characters as well.
The characters described depend strongly on a number of natural constants, whose values can be established only experimentally, not theoretically. Among others, this concerns the gravitational constant G, the speed of light c, Planck's constant h, and the elementary electric charge e, or combinations like the fine structure constant (2πe²/hc ≈ 1/137.036) and the mass ratio of the proton and the electron (about 1836.15). If the constants of nature were slightly different, both nuclear properties and chemical properties would change drastically.
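The quoted value of the fine structure constant can be reproduced from the SI values of the constants the text lists; the SI expression e²/(4πε₀ħc) is equivalent to the Gaussian form 2πe²/hc.

```python
import math

# Fine structure constant from SI values of the fundamental constants.

e    = 1.602176634e-19   # elementary charge (C, exact since 2019)
h    = 6.62607015e-34    # Planck's constant (J s, exact since 2019)
c    = 299792458         # speed of light (m/s, exact)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)

hbar = h / (2 * math.pi)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(f"alpha     = {alpha:.9f}")
print(f"1 / alpha = {1 / alpha:.3f}")  # approximately 137.036
```

That a dimensionless number like 1/137.036 cannot be derived from theory, only measured, is exactly the point the paragraph makes.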
The quantum ladder is of a physical and chemical nature. As an ordering principle, the ladder has a few flaws from a logical point of view. For instance, the proton occurs on three different levels,
as a baryon, as a nucleus, and as an ion. The atoms of the noble gases are their molecules as well. This is irrelevant for their character. The character of a proton consists of the specific laws to which it is subjected. The classification of baryons, nuclei
or ions is not a characterization, and a proton is not ‘essentially’ a baryon and ‘accidentally’ a nucleus or an ion.
The number of molecular characters is enormous and no universal classification of molecules exists. In particular the characters in which carbon is an important element show a large diversity.
The molecular formula indicates the number of atoms of each element in a molecule. Besides, the characteristic spatial structure of a molecule determines its chemical properties. The composition of a methane molecule
is given by the formula CH4, but it is no less significant that the methane molecule has the symmetrical shape of a regular tetrahedron, with the carbon atom at the centre and the four hydrogen atoms at the vertices. The V-like shape of a water molecule (the three atoms do not lie on a straight line, but form a characteristic angle of 105°) causes the molecule to have a permanent electric dipole moment, explaining many of the exceptional properties of water. Isomers are materials having
the same molecular formula but different spatial orderings, hence different chemical properties. Like the symmetry between a left and a right glove, the spatial symmetry property of mirroring leads to the distinction of dextro- and laevo-molecules.
The symmetry characteristic for the generic (physical) character is an emergent property, in general irreducible to the characters of the composing systems. Conversely, the original symmetry of
the composing systems is broken. In methane, the outer shells of the carbon atom have exchanged their spherical symmetry for the tetrahedral symmetry of the molecule. Symmetry breaking also occurs in fields.
From quantum field theory, in principle it should be possible to derive successively the emergent properties of particles and their spatially founded composites. This is the synthetic, reductionist or fundamentalist trend, constructing complicated structures
from simpler ones. It cannot explain symmetry breaking.
For practical reasons too, a synthetic approach is usually impossible. The alternative is the analytical or holistic method, in which the symmetry breaking is explained from the empirically established symmetry of the original character. Symmetries and other
structural properties are usually a posteriori explained, and hardly ever a priori derived. However, analysis and synthesis are not contrary but complementary methods.
Climbing the quantum ladder, complexity seems to increase. On second thoughts, complexity is not a clear concept. An atom would be more complex than a nucleus and a molecule even more.
However, in the character of a hydrogen atom or a hydrogen molecule, weak and strong interactions are negligible, and the complex spatially founded nuclear structure is reduced to the far simpler quantitatively founded character of a particle having mass,
charge, and magnetic moment. Moreover, a uranium nucleus consisting of 92 protons and 146 neutrons has a much more complicated character than a hydrogen molecule consisting of two protons and two electrons, which has a position two levels higher on the quantum ladder.
Inward, a system is more complex than outward. An atom consists of a nucleus and a number of electrons, grouped into shells. If a shell is completely filled in conformity with
the exclusion principle, it is chemically inert, serving mostly to reduce the effective nuclear charge. A small number of electrons in partially occupied shells determines the atom’s chemical propensities. Consequently, an atom of a noble gas, having
only completely occupied shells, is less complicated than an atom having one or two electrons less. The complexity of molecules increases if the number of atoms increases. But some very large organic molecules consist of a repetition of similar atomic groups
and are not particularly complex.
In fact, there does not exist an unequivocal criterion for complexity.
An important property of hierarchically ordered characters is that for the explanation of a character it is sufficient to descend to the next lower level.
For the understanding of molecules, a chemist needs the atomic theory, but he does not need to know much about nuclear physics. A molecular biologist is acquainted with the chemical molecular theory, but his knowledge of atomic theory may be rather superficial.
This is possible because of the phenomenon that a physical character interlaced in another one both keeps its properties and hides them.
Each system derives its stability from an internal
equilibrium that is hardly observable from without. The nuclear forces do not range outside the nucleus. Strong electric forces bind an atom or a molecule, but as a whole it is electrically neutral. The strong internal equilibrium and the weak remaining external
action are together characteristic for a stable physical system. If a system exerts a force on another one, it experiences an equal external force. This external force should be much smaller than the internal forces keeping the system intact, otherwise it
will be torn apart. In a collision between two molecules, the external interaction may be strong enough to disturb the internal equilibrium, such that the molecules fall apart. Possibly a new molecule with a different character emerges. Because the mean
collision energy is proportional to the temperature, the stability of molecules and crystals depends on this parameter. In the sun's atmosphere no molecules exist, and in its centre no atoms occur. In a very hot star like a neutron star, even nuclei cannot exist.
Hence, a stable physical or chemical system is relatively inactive. It looks like an isolated system. This is radically different from plants and animals that can never be isolated
from their environment. The internal equilibrium of a plant or an animal is maintained by metabolism, the continuous flow of energy and matter through the organism.
I consider the primarily physical character of a photon to be secondarily kinetically founded. A photon is a field particle in the electromagnetic interaction,
transporting energy, linear and angular momentum from one spatially founded system to another. Besides photons, nuclear physics recognizes gluons being field particles for the colour force, mesons for the strong nuclear force, and three types of vector bosons
for the weak interaction (5.1). The existence of the graviton, the field particle for gravity, has not been experimentally confirmed. All these interaction particles have integral spin and are bosons. Hence, they are not subject to the exclusion principle.
Field particles are not quantitatively or spatially founded things, but individualized characteristic currents, hence kinetically founded ‘quasiparticles’. Bosons carry forces, whereas fermions feel forces.
By absorbing a photon, an atom comes into an excited state, i.e. a metastable state at a higher energy than the ground state. Whereas an atom in its ground state can be considered an isolated system, an excited atom
is always surrounded by the electromagnetic field.
A photon is a wave packet, like an electron it has a dual character. Yet there is a difference. Whereas the electron’s motion
has a wave character, a photon is a current in an electromagnetic field, a current being a kinetic projection of physical interaction. With respect to electrons, the wave motion only determines the probability of what will happen in a future interaction.
In a photon, besides determining a similar probability, the wave consists of periodically changing electric and magnetic fields. A real particle’s wave motion lacks a substratum, there is no characteristic medium in which it moves, and its velocity is
variable. Moving quasiparticles have a substratum, and their wave velocity is a property of the medium. The medium for light in empty space is the electromagnetic field, all photons having the same speed independent of any reference system.
Each inorganic solid consists of crystals, sometimes microscopically small. Amorphous solid matter does not exist or is very rare. The
ground state of a crystal is the hypothetical state at zero temperature. At higher temperatures, each solid is in an excited state, determined by the presence of quasiparticles.
The crystal symmetry, adequately described by the theory of groups, has two or three levels. First, each crystal is composed of space-filling unit cells. All unit cells of a crystal are equal to each other, containing the same number of atoms, ions, or molecules
in the same configuration. A characteristic lattice point indicates the position of a unit cell. The lattice points constitute a Bravais lattice, representing the crystal’s translation symmetry. Only fourteen types of Bravais lattices are mathematically
possible and realized in nature. Each lattice allows some variation, for instance with respect to the mutual distance of the lattice points, as is seen when the crystal expands on heating. Because each crystal is finite, the translation symmetry is restricted,
and the surface structure of a crystal may be quite different from the crystal structure.
Second, the unit cell has a symmetry of its own, superposed on the translation symmetry of the
Bravais lattice. The cell may be symmetrical with respect to reflection, rotation, or inversion. The combined symmetry determines how the crystal scatters X-rays or neutrons, presenting a means to investigate the crystalline structure empirically. Hence, the long-distance spatial order of a crystal evokes a long-time kinetic order of specific waves.
Third, in some materials we find an additional ordering, for instance that
of the magnetic moments of electrons or atoms in a ferromagnet. Like the first one, this is a long-distance ordering. It involves an interaction that is not restricted to nearest neighbours. It may extend over many millions of atomic distances.
The atoms in a crystal oscillate around their equilibrium positions.
These elastic oscillations are transferred from one atom to the next like a sound wave, and because the crystal has a finite volume, this is a stationary wave, a collective oscillation. The crystal as a whole is in an elastic oscillation, having a kinetically
founded character. These waves have a broad spectrum of frequencies and wavelengths, bundled into wave packets. In analogy with light, these field particles are called sound quanta or phonons.
Like the electrons in a metal, the phonons act like particles in a box (4.4). In other respects they differ widely. The number of electrons is constant, but the number of phonons increases strongly with increasing temperature. Like all quasiparticles,
the phonons are bosons, not being subject to the exclusion principle. The mean kinetic energy of the electrons hardly depends on temperature, and their specific heat is only measurable at a low temperature. In contrast, the mean kinetic energy of phonons strongly
depends on temperature, and the phonon gas dominates the specific heat of solids. At low temperature this increases proportional to T³, becoming constant at a higher temperature. Peter Debye's theory (originally 1912, later adapted) explains this from the wave and boson character of phonons and the periodic character of the crystalline structure.
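Debye's result can be sketched numerically: integrating the Debye heat-capacity integral (here per particle, in units of Boltzmann's constant, with a simple midpoint rule) reproduces the T³ behaviour at low temperature and the constant Dulong-Petit value 3k at high temperature. The function below is a minimal sketch under these conventions, not a production implementation.

```python
import math

def debye_c(t):
    """Heat capacity per particle, in units of k, in the Debye model.

    t is the reduced temperature T / theta_D (theta_D = Debye temperature).
    C/k = 9 t^3 * integral_0^{1/t} x^4 e^x / (e^x - 1)^2 dx,
    evaluated here with a midpoint rule.
    """
    upper = 1.0 / t
    n = 10000
    dx = upper / n
    integral = 0.0
    for i in range(1, n + 1):
        x = (i - 0.5) * dx
        integral += x**4 * math.exp(x) / (math.exp(x) - 1)**2 * dx
    return 9 * t**3 * integral

# Low temperature: doubling T multiplies C by about 2^3 = 8 (the T^3 law).
print(f"C(0.04)/C(0.02) = {debye_c(0.04) / debye_c(0.02):.2f}")
# High temperature: C approaches the Dulong-Petit value 3k per particle.
print(f"C at T >> theta_D = {debye_c(20.0):.2f}")
```

The crossover between the two regimes happens near the Debye temperature, which is a material-specific parameter fitted to experiment.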
In a solid or liquid, besides phonons many other quantized excitations
occur, corresponding, for instance, with magnetization waves or spin waves. The interactions of quasiparticles and electrons cause the photoelectric effect and transport phenomena like electric resistance and thermo-electricity.
The specific properties of some superconductors can be described with the help of quasiparticles.
In a superconductor two electrons constitute a Cooper pair. This is a pair of electrons in a bound state, such that both the total linear momentum and the total angular momentum are zero. The two electrons are not necessarily close to each other. Superconductivity
is a phenomenon with many variants, and the theory is far from complete.
Superconductivity is a collective phenomenon in which the wave functions of several particles are macroscopically
coherent. There is no internal dissipation
of energy. It appears that on a macroscopic scale the existence of kinetically founded characters is only possible if there is no decoherence (4.3). Therefore, kinetically founded physical characters on a macroscopic scale are quite exceptional.
5.5. Aggregates and statistics
We have now discussed three types of physically qualified
characters, but this does not exhaust the treatment of matter. The inorganic sciences acknowledge many kinds of mixtures, aggregates, alloys or solutions. In nature, these are more abundant than pure matter. Often, the possibility to form a mixture is restricted
and some substances do not mix at all. In order to form a stable aggregate, the components must be tuned to each other. Typical for an aggregate is that the characteristic magnitudes (like pressure, volume and temperature for a gas) are variable within a considerable
margin, even if there is a lawful connection between these magnitudes.
Continuous variability provides quantum physics with a criterion to distinguish a composite thing (with a character
of its own) from an aggregate. Consider the interaction between an electron and a proton. In the most extreme case this leads to the absorption of the electron and the transformation of the proton into a neutron (releasing a neutrino). At a lower energy, the
interaction may lead to a bound state having the character of a hydrogen atom if the total energy (kinetic and potential) is negative.
Finally, if the total energy is positive, we have an unbound state, an aggregate. In the bound state the energy can only have discrete values, it is quantized, whereas in the unbound state the energy is continuously variable.
Hence, if the rest energy has a characteristic value and internal energy states are lacking, we have an elementary particle (a lepton or a quark). If there are internal discrete energy states we have a composite character,
whereas we have an aggregate if the internal energy is continuously variable.
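The criterion can be illustrated with the textbook Bohr formula for hydrogen, in which the bound states form a discrete ladder of negative energies, whereas any positive energy is allowed (the 13.6 eV ionization energy is the standard value):

```python
# Bohr formula for the bound states of hydrogen: E_n = −13.6 eV / n².
RYDBERG_EV = 13.6   # ionization energy of hydrogen in eV

def bound_level(n):
    """Energy (in eV) of the n-th bound state; only these discrete
    negative values occur, while any positive energy is allowed."""
    return -RYDBERG_EV / n**2

print([bound_level(n) for n in range(1, 5)])   # [-13.6, -3.4, -1.511..., -0.85]
```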
With aggregates it is easier
to abstract from specific properties than in the case of the characters of composite systems discussed in section 5.3. Studying the properties of macroscopic physical bodies, thermodynamics starts from four general laws, for historical reasons numbered 0 to 3.
The zeroth law states that two or more bodies (or parts of a single body) can be in mutual equilibrium. In that case the temperature of the interacting bodies is the same, and in a body as
a whole the temperature is uniform. Depending on the nature of the interaction, this applies to other intensive magnitudes as well, for instance the pressure of a gas, or the electric or chemical potential. In this context bodies are not necessarily spatially
separated. The thermodynamic laws apply to the components of a mixture as well. Equilibrium is an equivalence relation (2.1). An intensive magnitude like temperature is an equilibrium parameter, to be distinguished from an extensive magnitude like energy,
which is additive. If two unequal bodies are in thermal equilibrium with each other, their temperature is the same, but their energy is different and the total energy is the sum of the energies of the two bodies apart. An additive magnitude refers to the quantitative
relation frame, whereas an equilibrium parameter is a projection on the spatial frame.
According to the first law of thermodynamics, the total energy is constant, if the interacting
bodies are isolated from the rest of the world. The thermodynamic law of conservation of energy forbids all processes in which energy would be created or annihilated. The first law does not follow from the fact that energy is additive. Volume, entropy, and
the mass of each chemical component are additive as well, but not always constant in an interaction.
The second law states that interacting systems proceed towards an equilibrium state.
The entropy decreases if a body loses energy and increases if a body gains energy, but always in such a way that the total entropy increases as long as equilibrium is not reached. Based on this law only entropy differences can be calculated.
According to the third law the absolute zero of temperature cannot be reached. At this temperature all systems would have the same entropy, to be considered the zero point on the entropy scale.
From these axioms other laws are derivable, such as Gibbs’s phase rule (see below). As long as the interacting systems are not in equilibrium, the gradient of each equilibrium parameter acts as the driving force for the corresponding current causing
equilibrium. A temperature gradient drives a heat current, a potential difference drives an electric current, and a chemical potential difference drives a material current. Any current (except a superconducting flow) creates entropy.
The thermodynamic axioms describe the natural laws correctly in the case of interacting systems being close to equilibrium. Otherwise, the currents are turbulent and a concept like entropy cannot be defined. Another
restriction follows from the individuality of the particles composing the system. In the equilibrium state, the entropy is not exactly constant, but it fluctuates spontaneously around the equilibrium value. Quantum physics shows energy to be subject to a Heisenberg-relation
(4.3). In fact, the classical thermodynamic axioms refer to a continuum, not to the actually coarse matter. Thermodynamics is a general theory of matter, whereas statistical physics studies matter starting from the specific properties of the particles composing
a system. This means that thermodynamics and statistical physics complement each other.
An equilibrium state is sometimes called an ‘attractor’, attracting a system from
any unstable state toward a stable state. Occasionally, a system has several attractors, now called local equilibrium states. If there is a strong energy barrier between the local equilibrium states, it is accidental which state is realized. By an
external influence, a sudden and apparently drastic transition may occur from one attractor to another one. In quantum physics a similar phenomenon is called ‘tunneling’, to which I shall return in section 5.6.
a. A homogeneous set of particles having the same character may be considered a quantitatively founded aggregate, if the set
does not constitute a structural whole with a spatially founded character of its own (like the electrons in an atom). In a gas the particles are not bound to each other. Usually, an external force or a container is needed to keep the particles together. In
a fluid, the surface tension is a connective force that does not give rise to a characteristic whole. The composing particles’ structural similarity is a condition for the applicability of statistics. Therefore I call a homogeneous aggregate quantitatively founded.
It is not sufficient to know that the particles are structurally similar. At least it should be specified whether the particles are fermions or bosons (4.4). Consider, for instance,
liquid helium, having two varieties. In the most common isotope, a helium nucleus is composed of two protons and two neutrons. The net spin is zero, hence the nucleus is a boson. In a less common isotope, the helium nucleus has only one neutron besides two
protons. Now the nucleus’ net spin is ½ and it is a fermion. This distinction (having no chemical consequences) accounts for the strongly diverging physical properties of the two fluids.
Each homogeneous gas is subject to a specific law, called the statistics or distribution function. It determines how the particles are distributed over the available states, taking into account parameters like volume, temperature, and total
energy. The distribution function does not specify which states are available. Before the statistics is applicable, the energy of each state must be calculated separately.
The Fermi-Dirac statistics based on Pauli’s exclusion principle applies to all homogeneous aggregates of fermions, i.e., particles having half-integral spin. For field particles and other particles having an integral spin, the Bose-Einstein statistics applies, without
an exclusion principle. If the mean occupation number of available energy states is low, both statistics may be approximated by the classical Maxwell-Boltzmann distribution function. Except at very low temperatures, this applies to every dilute gas consisting
of similar atoms or molecules. The law of Boyle and Gay-Lussac follows from this statistics. It determines the relation between volume, pressure and temperature for a dilute gas, if the interaction between the molecules is restricted to elastic collisions
and if the molecular dimensions are negligible. Without these two restrictions, the state equation of Van der Waals counts as a good approximation. Contrary to the law of Boyle and Gay-Lussac, the Van der Waals equation contains two constants characteristic
for the gas concerned. It describes the condensation of a gas to a fluid as well as the phenomena occurring at the critical point, the highest temperature at which the substance is liquid.
b. It is not possible to apply statistics directly to a mixture of subjects having different characters. Sometimes, it can be done with respect to the components
of a mixture apart. For a mixture of gases like air, the pressure exerted by the mixture equals the sum of the partial pressures exerted by each component apart in the same volume at the same temperature (Dalton’s law). The chemical potential is a parameter
distinguishing the components of a heterogeneous mixture.
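Dalton’s law can be sketched in a few lines; the composition figures for dry air are rounded and merely illustrative:

```python
# Dalton's law: the pressure of a gas mixture is the sum of the partial
# pressures each component would exert alone in the same volume at the
# same temperature. Rounded figures for dry air (illustrative):
partial_atm = {"N2": 0.78, "O2": 0.21, "Ar": 0.01}
total = sum(partial_atm.values())
print(total)   # ≈ 1.0 atm
```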
I consider a heterogeneous mixture like a solution to have a spatial foundation, because the solvent is the physical environment
of the dissolved substance. Solubility is a characteristic disposition of a substance dependent on the character of the solvent as the potential environment.
Stable characters in one
environment may be unstable in another one. Common salt molecules dissolved in water fall apart into sodium and chlorine ions. In the environment of water, the dielectric constant is much higher than in air. Consequently, the Coulomb force between the ions is proportionally
smaller, too small to keep the ions together.
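A rough calculation illustrates the effect of the dielectric environment; the ion spacing used here is an illustrative figure:

```python
import math

# Coulomb force between two elementary charges a distance r apart in a
# medium with relative dielectric constant eps_r:
#   F = e² / (4π·ε0·eps_r·r²)
E_CHARGE = 1.602e-19   # C
EPS_0 = 8.854e-12      # F/m

def coulomb_force(r, eps_r):
    return E_CHARGE**2 / (4.0 * math.pi * EPS_0 * eps_r * r**2)

r = 2.8e-10            # roughly the Na-Cl ion spacing in m (illustrative)
in_vacuum = coulomb_force(r, 1.0)
in_water = coulomb_force(r, 80.0)    # for water eps_r ≈ 80
print(in_vacuum / in_water)          # 80: the attraction is 80 times weaker in water
```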
The composition of a mixture, the number of grams of solved substance in one litre water, is accidental. It is not determined by any character but by its history. This does not mean that two substances
can be mixed in any proportion whatsoever. However, within certain limits dependent on the temperature and the characters of the substances concerned, the proportion is almost continuously variable.
c. Even if a system only consists of particles of the same character, it may not appear homogeneous. It may exist in two or more different ‘phases’ simultaneously,
for example, the solid, liquid, and vaporous states. A glass of water with melting ice is in internal equilibrium at 0 °C. If heat is supplied, the temperature remains the same until all ice is melted. Only chemically pure substances have a characteristic
melting point. In contrast, a heterogeneous mixture has a melting trajectory, meaning that during the melting process, the temperature increases. A similar characteristic transition temperature applies to other phase transitions in a homogeneous substance,
like vaporizing, the transition from a paramagnetic to a ferromagnetic state, or the transition from a normal to a superconducting state. Addition of heat or change of external pressure shifts the equilibrium. A condition for equilibrium is that the particles
concerned move continuously from one phase to the other. Therefore I call it a homogeneous kinetically founded aggregate.
An important example of a heterogeneous kinetic
equilibrium concerns chemical reactions. Water consists mostly of water molecules, but a small part (10⁻⁷ at 25 °C) is dissociated into positive H-ions and negative OH-ions. In the equilibrium state, equal amounts of molecules are dissociated
and associated. By adding other substances (acids or bases), the equilibrium is shifted.
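The shifting equilibrium can be sketched with the ion product of water, using the standard textbook value Kw ≈ 10⁻¹⁴ (mol/L)² at 25 °C:

```python
# Ion product of water at 25 °C: [H+]·[OH−] = Kw ≈ 1e-14 (mol/L)².
KW = 1e-14

# In pure water both ion concentrations are equal:
h_pure = KW ** 0.5
print(h_pure)            # ≈ 1e-7 mol/L, i.e. pH 7

# Adding an acid raises [H+]; the product stays at Kw, so [OH−] drops:
h_acid = 1e-3            # illustrative acid concentration in mol/L
oh = KW / h_acid
print(oh)                # ≈ 1e-11 mol/L
```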
Both phase transitions and chemical reactions are subject to characteristic laws and to general thermodynamic laws, for instance Gibbs’s phase rule.
5.6. Coming into being, change and decay
I call an event physically qualified if it is primarily
characterized by an interaction between two or more subjects. A process is a characteristic set of events, partly simultaneously, partly successively. Therefore, physically qualified events and processes often occur in an aggregate, sometimes under strictly
determined circumstances, among which the temperature. In a mixture, physical, chemical and astrophysical reactions lead to the realization of characters. Whereas in physical things properties like stability and life time are most relevant, physical
and chemical processes concern the coming into being, change and decay of those things.
In each characteristic event a thing changes its character (it emerges or decays) or its state (preserving its identity).
With respect to the thing’s character considered as a law, the first case concerns a subjective event (because the subject changes). The second case concerns an objective event (for the objective state changes). Both have secondary
characteristics. I shall briefly mention some examples.
Annihilation or creation of particles is a subjective numerically founded event. Like any other event, it is subject to conservation
laws. An electron and a positron emerge simultaneously from the collision of a γ-particle with some other particle, if the photon’s energy is at least twice the electron’s rest energy. The presence of another particle, like an atomic nucleus,
is required in order to satisfy the law of conservation of linear momentum. For the same reason, at least two photons emerge when an electron and a positron destroy each other.
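The energy balance can be made explicit; the electron rest energy of 0.511 MeV is the standard value:

```python
# Conservation of energy at pair creation: the photon needs at least
# twice the electron rest energy.
ELECTRON_REST_MEV = 0.511
threshold = 2 * ELECTRON_REST_MEV
print(threshold)   # 1.022 MeV
# Conversely, annihilation of an electron-positron pair at rest yields
# two photons of 0.511 MeV each, emitted in opposite directions so that
# the total linear momentum remains zero.
```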
By emitting or absorbing a photon, a nucleus, atom or molecule changes its state. This is a spatially founded objective transformation. In contrast, in a nuclear or chemical reaction one or more characters are transformed, constituting a subjective spatially
founded event. In α- or β-radioactivity, a nucleus subjectively changes its character; in γ-activity it only changes its state objectively.
An elastic collision is an event in which
the kinetic state of a particle is changed without consequences for its character or its internal state. Hence, this concerns an objective kinetically founded event. In a non-elastic collision a subjective change of character or an objective change of state
occurs. Quantum physics describes such events with the help of operators determining the transition probability.
A process is an aggregate of events. In a homogeneous aggregate, phase
transitions may occur. In a heterogeneous aggregate chemical reactions occur (5.5). Both are kinetically founded. This also applies to transport phenomena like electric, thermal or material currents, thermo-electric phenomena, osmosis and diffusion.
Conservation laws are ‘constraints’ restricting the possibility of processes. For instance, a process
in which the total electric charge would change is impossible. In atomic and nuclear physics, transitions are known to be forbidden or improbable because of selection rules for quantum numbers characterizing the states concerned.
Physicists and chemists take for granted that each process that is not forbidden is possible and therefore experimentally realizable. In fact, several laws of conservation like those of lepton number and baryon number
were discovered because certain reactions turned out to be impossible. Conversely, in 1930 Pauli postulated the existence of neutrinos, because otherwise the laws of conservation of energy and momentum would not apply to β-radioactivity. Experimentally,
the existence of neutrinos was not confirmed until 1956.
In common parlance, a collision is a rather
dramatic event, but in physics and chemistry a collision is just an interaction between two or more subjects moving towards each other, starting from a large distance, where their interaction is negligible. In classical mechanics, this interaction means an
attractive or repelling force. In modern physics, it implies the exchange of real or virtual particles like photons.
In each collision, at least the state of motion of the interacting
particles changes. If that is all, we speak of an elastic collision, in which only the distribution of kinetic energy, linear and angular momentum over the colliding particles changes. A photon can collide elastically with an electron (this is the
Compton effect), but an electron cannot absorb a photon. Only a composite thing like a nucleus or an atom is able to absorb a particle.
Collisions are used to investigate the character
of the particles concerned. A famous example is the scattering of a-particles by gold atoms (1911). For the physical process, it is sufficient to assume that the particles have mass and charge and are point-like. It does not matter whether the particles are
positively or negatively charged. The character of this collision is statistically expressed in a mathematical formula derived by Ernest Rutherford. The fact that the experimental results (by Hans Geiger and Ernest Marsden) agreed with the formula indicated
that the nucleus is much smaller than the atom, and that the mass of the atom is almost completely concentrated in the nucleus. A slight deviation between the experimental results and the theoretical formula allowed an estimate of the size of the nucleus,
its diameter being about 10⁴ times smaller than the atom’s. The dimension of a microscopic, invisible particle is calculable from similar collision processes, and is therefore called its collision diameter. Its value depends on the projectiles
used. The collision diameter of a proton differs if determined from collisions with electrons or neutrons.
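The angular dependence of Rutherford’s formula can be sketched as follows; the prefactor with the charges and the kinetic energy is omitted, so only the characteristic 1/sin⁴(θ/2) behaviour is shown:

```python
import math

def rutherford_angular(theta_deg):
    """Angular dependence of Rutherford's scattering formula,
    dσ/dΩ ∝ 1/sin⁴(θ/2); the prefactor is omitted here."""
    theta = math.radians(theta_deg)
    return 1.0 / math.sin(theta / 2.0) ** 4

# Scattering is strongly concentrated at small angles, yet the formula
# predicts a small but non-zero rate even for backscattering — the
# observation that pointed to a small, massive nucleus.
print(rutherford_angular(10.0) / rutherford_angular(90.0))   # ≈ 4300
```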
In a non-elastic collision the internal structure of one or more colliding subjects changes in some respect. With billiard balls only the temperature increases, kinetic energy being transformed into heat, causing the motion to decelerate.
In a non-elastic collision between atoms or molecules, the state of at least one of them changes into an excited state, sooner or later followed by the emission of a photon. This is an objective characteristic process.
The character of the colliding subjects may change subjectively as well, for instance, if an atom loses an electron and becomes an ion, or if a molecule is dissociated or associated.
Collisions as a means to investigate the characters of subatomic particles have become a sophisticated art in high-energy physics.
Spontaneous decay first became known at the end of the nineteenth century from radioactive processes. It involves strong, weak or electromagnetic interactions, respectively
in α-, β-, and γ-radiation. The decay law of Rutherford and Soddy (1902) approximately gives the character of a single radioactive process.
This statistical law is only explainable by assuming that each atom decays independently of all other atoms. It is a random process. Besides, radioactivity is almost independent of circumstances like temperature, pressure and the chemical compound in which
the radioactive atom is bound. Such decay processes occur in nuclei and sub-atomic particles, as well as in atoms and molecules being in a metastable state. The decay time is the mean duration of existence of the system or the state.
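The statistical character of the decay law can be illustrated with a minimal Monte-Carlo sketch; all numbers are illustrative:

```python
import math
import random

# Each atom decays independently with the same probability per time
# step, regardless of its history. The surviving fraction then follows
# the decay law N(t) = N0·exp(−t/τ).
random.seed(1)
n = n0 = 10_000
steps = 500
p = 1.0 / steps            # decay probability per atom per step; total time = τ
for _ in range(steps):
    n = sum(1 for _ in range(n) if random.random() >= p)   # count survivors
print(n / n0, math.exp(-1.0))   # both ≈ 0.37
```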
Besides spontaneous ones, stimulated transformations occur. Einstein first investigated this phenomenon in 1916, with respect to transitions between two energy levels of an atom or molecule, emitting
or absorbing a photon. He found that (stimulated) absorption and stimulated emission are equally probable, whereas spontaneous emission has a different probability.
Stimulated emission is symmetrical with stimulated absorption, but spontaneous emission is asymmetric and irreversible.
A stable system or a stable state may be separated from other systems or states by an energy barrier. It may be imagined that a particle is confined in an energy well, for instance an α-particle in a nucleus. According to classical
mechanics, such a barrier is insurmountable if it has a larger value than the kinetic energy of the particle in the well, but quantum physics proves that there is some probability that the particle leaves the well. This is called ‘tunneling’, for
it looks like the particle digging a tunnel through the energy mountain.
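The probability of tunneling can be estimated with the standard textbook formula for a rectangular barrier; the numbers below are illustrative, not taken from any particular system:

```python
import math

# Rectangular barrier of height V and width L: the transmission
# probability of a particle with E < V falls off roughly as
#   T ≈ exp(−2κL), with κ = sqrt(2m(V − E))/ħ.
HBAR = 1.055e-34    # J·s
M_E = 9.109e-31     # electron mass in kg
EV = 1.602e-19      # joule per electronvolt

def tunneling_probability(V_eV, E_eV, L_m):
    kappa = math.sqrt(2.0 * M_E * (V_eV - E_eV) * EV) / HBAR
    return math.exp(-2.0 * kappa * L_m)

# An electron facing a 1 eV barrier (illustrative numbers):
print(tunneling_probability(1.0, 0.5, 1e-10))   # ≈ 0.48 for a 0.1 nm barrier
print(tunneling_probability(1.0, 0.5, 1e-9))    # ≈ 7e-4 for a 1 nm barrier
```

The exponential sensitivity to the barrier width explains why tunneling is prominent on the atomic scale and negligible for macroscopic bodies.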
Consider a chemical reaction in which two molecules A and B associate to AB and conversely,
AB dissociates into A and B. The energy of AB is lower than the energy of A+B apart, the difference being the binding energy. A barrier called the activation energy separates the two states. In an equilibrium
situation, the binding energy and the temperature determine the proportion of the numbers of molecules (N(A)·N(B)/N(AB)). This proportion is independent of the activation energy. At a low temperature, if the total number of A’s
equals the total number of B’s, only molecules AB will be present. In an equilibrium situation at increasing temperatures, the number of molecules A and B increases, and that of AB decreases. In contrast,
the speed of the reaction depends on the activation energy (and again on temperature). Whereas the binding energy is a characteristic magnitude for AB, the activation energy partly depends on the environment. In particular the presence of
a catalyst may lower the activation energy and stimulate tunneling, increasing the speed of the reaction.
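The different roles of binding energy and activation energy can be sketched schematically with Boltzmann factors; the functions below are qualitative illustrations, not a quantitative theory of reaction kinetics:

```python
import math

# Energies are expressed in units of kT; all numbers are illustrative.
def equilibrium_ratio(binding_over_kT):
    """Proportion N(A)·N(B)/N(AB), up to a prefactor: deep binding
    means that almost all molecules are bound as AB."""
    return math.exp(-binding_over_kT)

def reaction_rate(activation_over_kT):
    """Arrhenius factor: the rate at which equilibrium is approached."""
    return math.exp(-activation_over_kT)

print(equilibrium_ratio(10.0))   # ≈ 4.5e-5: nearly everything bound as AB
print(reaction_rate(20.0))       # without a catalyst: extremely slow
print(reaction_rate(5.0))        # a catalyst lowers the barrier: much faster
# The catalyst changes the speed of the reaction, not the equilibrium.
```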
The possibility to overcome energy barriers explains the possibility of transitions
from one stable system to another one. It is the basis of theories about radioactivity and other spontaneous transitions, chemical reaction kinetics, the emergence of chemical elements and of phase transitions, without affecting theories explaining the existence
of stable or quasi-stable systems.
In such transition processes the characters do not change, but a system may change of character. The laws do not change, but their subjects do.
The chemical elements have arisen in a chain of nuclear processes, to be distinguished as fusion and fission. The
chain starts with the fusion of hydrogen nuclei (protons) into helium nuclei, which are so stable that in many stars the next steps do not occur. Further processes lead to the formation of all known natural isotopes up to uranium. Besides helium with 4 nucleons,
beryllium (8), carbon (12), oxygen (16) and iron (56) are relatively stable. In all these cases, both the number of protons and the number of neutrons is even.
The elements only arise
in specific circumstances. In particular, the temperature and the density are relevant. The transition from hydrogen to helium occurs at 10 to 15 million kelvin and at a density of 0.1 kg/cm³. The transition of helium into carbon, oxygen and neon
occurs at 100 to 300 million kelvin and 100 kg/cm³.
Only after a considerable cooling down, these nuclei form with electrons the atoms and molecules to be found on the earth.
Once upon a time the chemical elements were absent. This does
not mean that the laws determining the existence of the elements did not apply. The laws constituting the characters of stable and metastable isotopes are universally valid, independent of time and place. But the realization of the characters into actual individual
nuclei does not depend on the characters only, but on circumstances like temperature as well. On the other hand, the available subjects and their relations determine these circumstances. Like initial and boundary conditions, characters are conditions for the
existence of individual nuclei. Mutatis mutandis, this applies to electrons, atoms and molecules as well.
In the preceding chapters, I discussed quantitative, spatial and kinetic characters. About the corresponding subjects, like groups of numbers, spatial figures or wave packets, it cannot be said that they come into being or decay, except in relation to physical
subjects. Only interacting things emerge and disappear. Therefore there is no quantitative, spatial or kinetic evolution comparable to the astrophysical one, even if the latter is expressed in numerical proportions, spatial relations and characteristic rhythms.
Although stars have a lifetime far exceeding the human scale, it is difficult to consider them stable. Each star is a reactor in which processes take place continuously. Stars are subject to evolution.
There are young and old stars, each with their own character. Novae and supernovae, neutron stars and pulsars represent various phases in the evolution of a star. The simplest stellar object may be the black hole, behaving like a thermodynamic black body subject
to the laws of thermodynamics.
These processes play a part in the theory about the astrophysical evolution, strongly connected to the standard model discussed in section 5.1. It correctly explains the relative abundance of the
chemical elements. After the start of the development
of the physical cosmos, about thirteen billion years ago, it has expanded. As a result all galaxies move away from each other, the larger the distance, the higher their speed. Because light needs time to travel, the picture we get from galaxies far away concerns
states from eras long past. The most remote systems are at the spatio-temporal horizon of the physical cosmos. In this case, astronomers observe events that occurred shortly after the big bang, the start of the astrophysical evolution.
Its real start remains forever behind the horizon of our experience. Astrophysicists are aware that their theories based on observations may approach the big bang without ever reaching it. The astrophysical
theory describes what has happened since the beginning - not the start itself - according to laws discovered in our era. The extrapolation towards the past is based on the supposition that these laws are universally valid and constant. This agrees with the
realistic view that the cosmos can only be investigated from within. It is not uncommon to consider our universe as one realized possibility taken from an ensemble of possible worlds.
However, there is no way to investigate these alternative worlds empirically.
Groups, spatial figures, waves and oscillations do not interact, hence are not physical unless interlaced with physical characters.
Wolfgang Pauli postulated the existence of neutrinos in 1930 in order to explain the phenomenon of β-radioactivity. Neutrinos were not detected experimentally before 1956. According to a physical criterion, neutrinos exist if they demonstrably
interact with other particles. Sometimes it is said that the neutrino was ‘observed’ for the first time in 1956, but then one has to stretch the concept of ‘observation’ quite far. In no experiment can neutrinos be seen, heard,
smelled, tasted or felt. Even their path of motion cannot be made visible in any experiment. But in several kinds of experiment, the energy and momentum (both magnitude and direction) of individual neutrinos can be calculated from observable phenomena.
For a physicist, this provides sufficient proof for their existence.
‘System’ is a general expression for a bounded part of space inclusive of the enclosed matter and energy. A closed system does not exchange energy or matter with its environment. Entropy can only be defined properly if the system is in internal equilibrium.
Omnès 1994, 193-198, 315-319.
Dijksterhuis 1950; Reichenbach 1956; Gold (ed.) 1967; Grünbaum 1973; 1974; Sklar 1974, chapter V; Sklar 1993; Prigogine 1980; Coveney, Highfield 1990.
Compare Reichenbach 1956, 135: ‘The direction of time is supplied by the direction of entropy, because the latter direction is made manifest in the statistical behaviour of a large number of separate systems, generated individually in the general drive
to more and more probable states.’ But on p. 115 Reichenbach observes: ‘The inference from time to entropy leads to the same result whether it is referred to the following or to preceding events’. Putnam 1975, 88 concludes that ‘…
the one great law of irreversibility (the Second Law) cannot be explained from the reversible laws of elementary particle mechanics…’.
The international physical community, organized in the Conférence Générale des Poids et Mesures, designed the metric system of units and scales. The basic magnitudes and units of the Système International (SI) are:
length (metre), mass (kilogram), kinetic time (second), electric current (ampère), temperature (kelvin), amount of matter (mol) and luminosity (candela). All other units are derived from these. Theoretically, a different base could have been chosen,
e.g. electric charge or potential difference instead of current. The choice is made especially with regard to the possibility to establish the unit and metric concerned with great precision. Physicists and astronomers do not always stick to these
agreements, using the speed of light, the light year or the charge of the electron as alternatives to the standard units.
von Laue 1949; Jammer 1961; Elkana 1974a; Harman 1982.
The formula means that mass and energy are equivalent, that each amount of energy corresponds with an amount of mass and conversely. It does not mean that mass is a form of energy, or can be converted into energy.
Because energy is not easy to measure, its metric and unit (joule) are derived from those of mass, length and time: 1 J = 1 kg·m²/s², or alternatively from electric current, potential difference and time: 1 J = 1 A·V·s.
For the amount of matter, moles are used as well. A mole is the quantity of matter containing as many elementary particles (i.e., atoms, molecules, ions, electrons etc.) as there are atoms in 0.012 kg of carbon-12.
Angular frequency equals 2π times the frequency. The moment of inertia expresses the distribution of matter in a body with respect to a rotation axis.
 About the history of the concept of force,
see Jammer 1957. On Newton’s views, see Cohen, Smith (eds.) 2002.
Morse 1964, 53-58; Callen 1960, 79-81; Stafleu 1980, 70-73. The definition of the metric of pressure is relatively easy, but finding the metric of electric potential caused almost as much trouble as the development of the thermodynamic temperature scale.
A current in a superconductor is a boundary case. In a closed superconducting circuit without a source, an electric current may persist indefinitely, whereas a normal current would die out very fast.
Thermo-electricity is the phenomenon that a heat current causes an electric current (Seebeck effect) or the reverse (Peltier effect), see Callen 1960, 293-308. This is applied in the thermo-electric thermometer, measuring a temperature difference by an electric potential difference.
Relations between various types of currents are subject to a symmetry relation discovered by William Kelvin and generalized by Lars Onsager, see Morse 1964, 106-118; Callen 1960, 288-292; Prigogine 1980, 84-88.
1993, chapters 5-7.
About 1900, the electromagnetic worldview supposed that all physical and chemical interactions could be reduced to electromagnetism, see McCormmach 1970a; Kragh 1999, chapter 8. Just like the modern unification program, it aimed at deducing the (rest-) mass
of elementary particles from the fundamental interaction, see Jammer 1961, chapter 11.
SU(3) means special unitary group with three variables. The particles in a representation of this group have the same spin and parity (together one variable), but different values for strangeness and one component of isospin.
Symmetry is as much an empirical property as any other one. After the discovery of antiparticles it was assumed that charge conjugation C (symmetry with respect to the interchange of a particle with its antiparticle), parity P (mirror symmetry),
and time reversal T are properties of all fundamental interactions. Since 1956 it has been experimentally established that β-decay has no mirror symmetry unless combined with charge conjugation (CP). In 1964 it turned out that weak
interactions are only symmetrical with respect to the product CPT, such that even T alone is no longer universally valid.
Pickering 1984, chapters 9-11; Pais 1986, 603-611. The J/ψ particle established the existence of charm as the fourth flavour of quarks in 1974. In 1977 the fifth quark was found (bottom), in 1978 the tauon, in 1995 the sixth quark (top). In order
to explain the mass of field particles and other particles, the standard model needs the Higgs particle in the Higgs field (named after Peter Higgs), which was found experimentally in 2012. In the standard model, some constants of nature serve as a datum
for the theory. Their values do not follow from the theory, but have to be established by experiments. New theories, replacing point-like particles by strings and postulating a ‘supersymmetry’ between fermions and bosons, have so far not led to
empirically confirmable results, see e.g. ’t Hooft 1992. Some other unsolved problems will be mentioned below.
Historically the suffix -on goes back to the electron. Whether the connection with ontology has really played a part is unclear; see Walker, Slack 1970, who do not mention Faraday's ion. The word electron comes from the Greek word for amber or
fossilized resin, known since antiquity for properties that we now recognize as static electricity. From 1874, George Stoney used the word electron for the elementary amount of charge. Only in the twentieth century did electron become the name of the particle
identified by Joseph Thomson in 1897. Ernest Rutherford introduced the names proton and neutron in 1920 (long before the actual discovery of the neutron in 1932). Gilbert Lewis baptized the photon in 1926, 21 years after Albert Einstein proposed its existence.
See Millikan 1917; Anderson 1964; Thomson 1964; Pais 1986; Galison 1987; Kragh 1990; 1999.
Pickering 1984, 67; Pais 1986, 466: ‘The agreement between experiment and theory shown by these examples, the highest point in precision reached anywhere in the domain of particles and fields, ranks among the highest achievements of twentieth-century
In a collision between two electrons, the assumption that they do or do not keep their identity leads to different predictions for the result. Experimentally, it turns out that they do not maintain their identity.
1 MeV is one million electronvolts. 1 eV equals the energy that a particle carrying the elementary charge gains by traversing an electric potential difference of 1 volt.
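The definition amounts to a simple conversion; the function name is illustrative, while the value of the elementary charge is the exact SI value.

```python
# 1 eV = elementary charge * 1 volt, expressed in joules.
e = 1.602176634e-19  # elementary charge in coulombs (exact since 2019)

def ev_to_joule(energy_ev):
    """Convert an energy from electronvolts to joules."""
    return energy_ev * e

print(ev_to_joule(1.0))  # 1 eV  = 1.602...e-19 J
print(ev_to_joule(1e6))  # 1 MeV = 1.602...e-13 J
```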
Neutrinos are stable, their rest mass
is zero or very small, and they are only susceptible to weak interaction. Neutrinos and antineutrinos differ by their parity, the one being left-handed and the other right-handed. (This distinction is only possible for particles having zero
rest mass. If neutrinos have a rest mass different from zero, as some recent experiments suggest, the theory has to be adapted with respect to parity.) That the three neutrinos differ from each other is established by processes in which they are or are not
involved, but in what respect they differ is less clear. For some time, physicists expected the existence of a fourth generation, but the standard model restricts itself to three, because astrophysical cosmology implies the existence of at most three different
types of neutrinos with their antiparticles.
From scattering experiments of electrons at a high energy, it follows that a proton as well as a neutron has three hard kernels, each with an electric charge of (1/3)e or (2/3)e. Like electrons in an atom, quarks may have an orbital angular
momentum besides their spin angular momentum, such that mesons and baryons may have a spin larger than 3/2.
A free neutron decays into a proton, an electron and an antineutrino. The law of conservation of baryon number is responsible for the stability of the proton, being the baryon with the lowest rest energy. The assumption that this law is not absolutely valid,
the proton having a decay time of the order of 10³¹ years, has not been confirmed experimentally.
This is the so-called time-independent Schrödinger equation, determining stationary states and energy levels.
Positronium is a short-lived composite of an electron and a positron, the only spatially founded structure entirely consisting of leptons.
See Barrow, Tipler 1986, 5, 252-254.
The symmetry of strong nuclear interaction is broken by electroweak interaction. For the strong interaction, the proton and the neutron are symmetrical particles having the same rest energy, but the electroweak interaction causes the neutron to have a slightly
larger rest energy and to be metastable as a free particle.
1998, 288: ‘The unifying symmetry Weinberg seems to propose as a picture of the world as it is can, if true, be neither universal nor complete.’
 In the theory of evolution too, the idea of increasing
complexity is widely used but hard to define and to apply in practice, see McShea 1991.
Even in the ground state at zero temperature the atoms oscillate, but this does not give rise to a wave motion.
This applies to the superconducting metals and alloys known before 1986. For the ceramic superconductors, discovered since 1986, this explanation is not sufficient.
 This phenomenon is called Bose-condensation. A similar
situation occurs in liquid helium below 2.1 K.
The zero point of energy is the potential energy at a large mutual distance.
A small increase of entropy (ΔS) is equal to the corresponding increase of energy (ΔE) divided by the temperature (T): ΔS = ΔE/T, if other extensive magnitudes like volume are kept constant. If two bodies
at different temperatures make thermal contact, one body loses as much energy as the other gains. Hence, the entropy loss of the hot body is smaller than the entropy gain of the cold body, and the total entropy increases.
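The argument of the note can be sketched numerically; the function name and the sample values (100 J flowing from a body at 400 K to one at 300 K) are illustrative assumptions.

```python
# Net entropy change when an amount of energy dE flows from a hot body
# (temperature T_hot) to a cold body (T_cold): the hot body loses dE/T_hot
# of entropy, the cold body gains dE/T_cold, and the total increases.
def total_entropy_change(dE, T_hot, T_cold):
    """Total entropy change in J/K for heat dE flowing hot -> cold."""
    return dE / T_cold - dE / T_hot

dS = total_entropy_change(100.0, 400.0, 300.0)
print(dS)  # positive: 100/300 - 100/400, about 0.083 J/K
```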
A more detailed explanation depends on the property of a water molecule to have a permanent electric dipole moment (5.3). Each sodium or chlorine ion is surrounded by a number of water molecules, decreasing their net electric charge. This causes the binding
energy to be less than the mean kinetic energy of the molecules.
The negative logarithm (base 10) of the molar concentration of protons is called the pH value. For pure water at 25 °C, pH = 7, meaning that one in half a billion molecules is ionized. A water molecule may lose or gain a proton.
Most H⁺ ions are coupled to a water molecule to form H₃O⁺ (hydronium).
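The definition of pH and the "one in half a billion" estimate can be checked in a few lines; the function name and the molar concentration of pure water (about 55.5 mol/L) are stated here as assumptions.

```python
import math

# pH as the negative base-10 logarithm of the molar proton concentration.
def pH(proton_concentration_mol_per_l):
    return -math.log10(proton_concentration_mol_per_l)

print(pH(1e-7))  # pure water at 25 degrees C: pH = 7.0

# Fraction of ionized molecules: [H+] divided by the concentration
# of water itself (roughly 55.5 mol/L), about one in half a billion.
print(1e-7 / 55.5)  # roughly 1.8e-9
```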
Callen 1960, 206-207. The number of degrees of freedom f is defined as the number of variables (temperature, pressure, and concentration) that can be chosen freely to describe the state of a chemical component. The number of components is r,
and between the components c different chemical reactions are possible. The number of different phases is m. Now Gibbs's phase rule is f = (r + 2) − m − c. For the equilibrium of ice, water, and its vapour
r = 1, m = 3, c = 0, hence f = 0. This means that this equilibrium can exist at only one value for temperature and pressure, the so-called triple point (temperature 273.16 K = 0.01 °C, pressure 611.2 Pa).
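Gibbs's phase rule as stated in the note translates directly into a one-line function; the function name is illustrative.

```python
# Gibbs's phase rule: f = (r + 2) - m - c, with r components, m phases,
# and c independent chemical reactions between the components.
def degrees_of_freedom(r, m, c=0):
    return (r + 2) - m - c

# Ice, water and vapour in equilibrium (r=1, m=3, c=0): the triple point,
# with no freedom to vary temperature or pressure.
print(degrees_of_freedom(1, 3))  # 0
# A single phase of pure water (r=1, m=1): both T and p may vary.
print(degrees_of_freedom(1, 1))  # 2
```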
As far as change seems to presuppose motion, only physical events and processes should be called real changes. But each motion means a change of position, and transformations are changes of form.
The law of decay
is given by the exponential function N(t) = N(t₀)·exp(−(t − t₀)/τ). Herein N(t) is the number of radioactive particles at time t, and τ is the characteristic
decay time. The better known half-life equals τ·ln 2 = 0.693τ. This formula is an approximation because N is not a continuous variable but a natural number. Like all statistical laws, the decay law is only applicable to a homogeneous
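The decay law and the relation between τ and the half-life can be verified numerically; the function name and the sample values (τ = 10 in arbitrary units, 1000 initial particles) are illustrative assumptions.

```python
import math

# Exponential decay law: N(t) = N(t0) * exp(-(t - t0) / tau).
def remaining(N0, t, t0, tau):
    """Expected number of surviving radioactive particles at time t."""
    return N0 * math.exp(-(t - t0) / tau)

tau = 10.0                     # characteristic decay time, arbitrary units
half_life = tau * math.log(2)  # tau * ln 2, about 0.693 * tau
# After one half-life, half of the particles remain on average:
print(remaining(1000.0, half_life, 0.0, tau))  # 500.0
```

That N is really a natural number is why the formula is only a statistical approximation: for small N the actual count fluctuates around this expectation.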
Einstein 1916. In stimulated emission, an incoming photon causes the emission of another photon such that there are two photons after the event, mutually coherent, i.e., having the same phase and frequency. Stimulated emission plays an important part
in lasers and masers, in which coherent light and microwave radiation, respectively, are produced. Absorption is always stimulated.
Hawking 1988, chapters 6-7.
Mason 1991, chapter 4.
Barrow, Tipler 1986, 6-9.