Revised translation of chapters 1-5 of

Een wereld vol relaties,

Karakter en zin van natuurlijke dingen en processen

Amsterdam 2002: Buijten & Schipperheijn.

(A world full of relations, www.scribd.com, 2010)

 

 

© 2014, 2018 M.D.Stafleu

Weeshuislaan 31

3701 JV Zeist, Netherlands

m.d.stafleu@freeler.nl

 

 


 

 

Contents

 

Preface

 

1. Theory of characters

2. Sets

3. Symmetry

4. Periodic motion

5. Physical characters

 

Conclusion

 

Cited literature

 


 

 

Preface

 

Both Theory and experiment, Christian philosophy of science in a historical context (2016), and Nature and freedom, Philosophy of nature, Natural theology, Enlightenment and Romanticism (2018) are concerned with classical physics, and therefore naturally end with the emergence of quantum physics. The present treatise deals with twentieth-century physics and its mathematical foundations. Though it may be considered a sequel to Theory and experiment, it is an independent work. It starts from my interpretation of the Christian philosophy of the cosmonomic idea, developed by Herman Dooyeweerd and Dirk Vollenhoven simultaneously with quantum physics, which I have applied to the physical sciences for more than forty years.

In this philosophy natural laws play an important part, both for relations and for structures. General laws govern mutually irreducible relation frames or modal aspects, whereas characters as sets of specific laws determine classes of individual things and events.

According to this philosophy of dynamic development, physics is concerned with the physical relation frame and three preceding mathematical frames. That physics is founded in mathematics means that it projects its relations on mathematical ones, applying mathematical theories to develop its own. Therefore I shall pay ample attention to these relation frames and their mutual projections. It also means that physical structures, like those of atoms or solids, are interlaced with mathematical structures like groups. The characters of physical structures are even founded in the mathematical relation frames, such that the mathematical foundations of quantum physics involve a lot more than the usual interpretations can dream of.

After the introductory chapter 1, chapter 2 discusses numerical characters, in particular groups, expressing symmetry relations and transitions. The concept of probability will be introduced as a mathematical measure of possibilities, anticipating their realization by some kind of physical interaction. In chapter 3 spatial characters like vectors and the symmetry properties of vector spaces will be studied. Chapter 4 investigates kinetic characters with special attention to oscillations and waves, the motion of wave packets including the indeterminacy relations, and the relevance of symmetric and antisymmetric wave functions. Finally, chapter 5 is an extensive discussion of the physical characters explored during the twentieth century, constantly referring back to the preceding relation frames, and consequently to the preceding chapters. Aggregates and statistics will also be reviewed, as well as the problems of coming into being, change, and decay.

This treatise does not rely on highly sophisticated mathematics to arrive at another interpretation of quantum physics. Rather it is a philosophical analysis of physics and its mathematical foundations, emphasizing the experimental basis of the physical sciences.

 

 


 

 

 

 

Chapter 1

 

Theory of characters

 

 

  

 

 

 

 


 

1.1. What is a character? 

1.2. The first four relation frames

1.3. Types of natural characters

1.4. Interlacement of characters

 


 

 

1.1. What is a character?

 

This treatise is a philosophical discussion of twentieth-century mathematics and physics, stressing the investigation of structures, rather than general laws. After the introductory chapter 1 elaborating the concept of a character, chapters 2-5 will analyze natural characters as discovered and developed mostly during the twentieth century. The study of characters constitutes the main part of modern science, which (together with technology) has become one of the dominant forces in the world’s culture. Astrophysical evolution too is a typical twentieth-century problem: a well-developed theory hardly existed before 1930.

For the generic nature of things, processes and all that, no generally accepted word seems to exist. Mathematicians, physicists and chemists are concerned with structures and symmetries. Biologists deal with the design of an organism. By characters, they mean the traits that organisms have in common. Zoologists study the body plan of animals, and ethologists speak of patterns of behaviour or programs. Sociologists discuss systems in associations and society. As a common denominator, I adopt the word character for a generic set of laws and/or normative principles and norms characterizing similar things, events, relations, acts, artefacts, or associations. Having several meanings, the term character may give rise to misunderstandings, but the introduction of an entirely new word has disadvantages of its own. I prefer to add a new (though related) meaning to the existing word character.

As a specific set of laws, a character determines the characteristic features of a specific thing, the conditions for its existence and its possible variations, its coming into being, development and perishing. Many kinds of events and processes have a character of their own. Usually one would define such a character by pointing out its essential properties. I shall not pursue this path. I introduce a natural character not as the essence of things or the nature of processes, but as a set of laws determining their internal and external relations. For instance, one can only establish the nature of a living being by looking for its relations to other living beings, to non-living things, and to many kinds of processes.

The Christian philosophy of dynamic development attempts to order the enormous diversity of characters with the help of sixteen relation frames.[1] Six natural frames precede ten interhuman relationships. Quantitative and spatial relations, relative motions, physical or chemical interactions, genetic kinship, and informative connections turn out to be subjected to sets of general natural laws. These sets of general laws must be distinguished from the sets of specific laws constituting natural characters.

Each character will be primarily characterized by one of these relation frames. Mathematics studies quantitative, spatial and kinetic characters, applied to physical characters. A molecule like DNA has primarily a physical character. Its secondary characteristic is its specific spatial shape, that of a double helix. Its biotic function is a tertiary characteristic, its disposition to play a part in biotic processes. The primary, secondary, and tertiary functioning according to the sixteen relation frames gives rise to a philosophical typology of characters.[2]

Characters are never independent of each other. They are not autonomous. Each character is interlaced with other characters in a specific way. Mathematical symmetries play an important part in physical, chemical, and biotic processes. The character of an atomic nucleus is interlaced with that of electrons into the character of an atom. Characters of molecules are intertwined in the structure of living cells. In particular, it is relevant that the characters of more or less stable things are interlaced with characters of events and processes. Modern science is more concerned with changing than with stable systems. In particular but by no means exclusively, this applies to the astrophysical evolution.

This treatise takes a natural character to be a set of natural laws, determining a class of individuals and sometimes an ensemble of possible variations.[3] Individuals may be things, plants or animals, events or processes, including numbers, spatial figures and signals. Characters may also concern relations. I shall call the scientific description of a character a model, like the atomic model. Section 1.1 reconnoitres the concept of a character. Is it a structure? Why a set of laws? What is meant by a natural law? What is a class? What is an ensemble of possible variations? How does a character differ from a model? 

 

In the history of science, a shift is observable from the search for universal laws, via structural laws, toward characters, determining processes besides structures. Even the investigation of structures is less ancient than might be expected. Largely, it dates from the nineteenth century. In mathematics, it resulted in the theory of groups (2.3), later to play an important part in physics and chemistry. Before the twentieth century, scientists were more interested in observable and measurable properties of materials than in their structure. Initially, the concept of a structure was used as an explanans, as an explanation of properties. Later on, structure as explanandum, as an object of research, came to the fore. During the nineteenth century, the atomic theory functioned to explain the properties of chemical compounds and gases. In the twentieth century, atomic research was directed to the structure and functioning of the atoms themselves. Of course, people have always investigated the design of plants and animals. Yet, as an independent discipline, biology did not establish itself until the first half of the nineteenth century. Ethology, the science of animal behaviour, only emerged in the twentieth century.

Mainstream philosophy does not pay much attention to characters.[4] Philosophy of science is mostly concerned with epistemological problems (for instance, the meaning of models), and with the general foundations of science. A systematic philosophical analysis of characters as defined above is wanting. This is strange, for characters form the most important subject matter of twentieth-century research, in mathematics as well as in science.

 

In common language, a structure is the manner in which a building or organism or other complete whole is constructed. This concept is much more restrictive than the concept of a set of laws distinguishing a specific thing from things of a different nature, for this distinction is not merely spatial. Some things like electrons have a character but not a structure. On the other hand, a solid like ice displays several crystalline structures. During its lifetime, an animal may change its structure drastically, according to its invariable character. Similarly, the specific character of a star remains the same, whereas its structure changes considerably during its existence.

The character of a thing determines under which circumstances it has a certain spatial structure. Contrary to a structure, a character does not merely express a thing’s spatial composition, but also its properties and its propensities; how it functions; how it comes into being, changes and perishes; its mean lifetime and its dependence on various circumstances. Moreover, the concept of a character is applicable to the nature of events, processes and relations, whereas structure is not. An event like the lighting of a match lacks a structure, even in common language, but it has a specific character.

A character is definitely more than a structure. Molecules differ because of their structure but even more because of their chemical properties, which belong to their character no less than their structure. Often the structure of a molecule determines its properties, but properties depend on circumstances as well. A material may be combustible above a certain temperature and incombustible otherwise. The structure of a material may depend on circumstances. The character of water implies that it is a solid below 0 °C, has no structure above 100 °C, and in between has the structure of a liquid.

The structure of a living being depends on its age and sometimes on its sex. Distinguishing between character and structure allows us to say that the character of a living being determines the development of its structure, or to say that the sexes differ structurally. The character is the same for both and does not change.

 

A law of nature does not necessarily have a mathematical format, and it is not necessarily fundamental. In the most general sense, I consider each natural regularity to be a law of nature.[5] The hereditary and species-specific behaviour of drakes during courting has a fixed pattern, recognizable for all ducks. As a law this pattern belongs to the character of the birds concerned. Not all natural laws take part in a character. The laws constituting a relation frame are not specific, but generally valid. For instance, it is a general law that the mass of a physical thing is equivalent to its energy, whereas it is a specific law that each electron has a rest mass equal to 9.109 × 10⁻³¹ kg.
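As a worked illustration of this distinction (an addition of mine, using standard values): the specific law fixing the electron’s rest mass combines with the general law of mass-energy equivalence to yield the electron’s rest energy,

\[
E = mc^{2} = (9.109 \times 10^{-31}\,\text{kg}) \times (2.998 \times 10^{8}\,\text{m/s})^{2} \approx 8.19 \times 10^{-14}\,\text{J} \approx 0.511\,\text{MeV}.
\]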

A specific law often occurs in more than one character. All electrons have the same rest mass, electric charge, magnetic moment and lepton number, according to four natural laws. Positrons have the same rest mass and magnetic moment but a different charge and lepton number. Electrons and neutrinos have the same lepton number but different rest mass, charge and magnetic moment. Electrons, positrons and neutrinos are fermions, but so are all particles that are not bosons. Therefore, it is never a single law but always a specific set of laws characterizing things or events of a certain kind.

In no way should one conceive of these sets as logical definitions. It is very well possible to define an electron by some properties like its mass and charge. However, such a definition says very little about the natural laws constituting its character. Besides the electron’s mass and charge, these laws concern its spin, magnetic moment, and lepton number as well. From the definition it does not follow that the electron is a fermion; that it has an antiparticle; that an electron can annihilate a positron; or that both belong to the first of three families of leptons and quarks. The laws constituting the character of electrons do not follow from a definition, but were discovered during a century of experimental and theoretical research. We can never be sure of knowing the character of a thing or event completely. In fact, our knowledge of most characters is rather incomplete, even if it is possible to define them adequately.

Because a character is a set of laws, it is often possible to distinguish families of characters. The characters of leptons and quarks are grouped into three ‘generations’. Chemists recognize noble gases and halogens, acids, and bases. From a chemical point of view, all oxygen atoms have the same character, but nuclear physicists discern several isotopes of oxygen, each with its own character. In section 5.3, I shall discuss a hierarchy of physical and chemical characters. With respect to quantitative and spatial characters too, we shall meet the concept of a hierarchy of characters (2.2, 3.1).

 

This treatise defines a natural character to be a set of unchangeable natural laws, specifically valid for a class of individuals. The individual things, events or processes concerned are subject to the character. Therefore, I shall call them subjects. Individual things or processes cannot be taken apart from the laws valid for them. Conversely, a law expresses itself only in its subjects. Reality is two-sided, having a law-side and a subject-and-object-side. Like two sides of a coin, they cannot be separated from each other.

A class is not a collection. It is not restricted to a certain number, to a limited place or to an interval of time.[6] A class is no more temporal than the natural laws constituting the class. However, the individual things or events being elements of the class are by no means a-temporal. Each actual collection of similar individuals is a temporal subset of the class. For instance, it may serve as a sample for scientific research. If the sample consists of a single individual, it is an exemplar or specimen of the class. Individual things and events are intrinsically temporal, being unavoidably limited in number, space and time. Their character conditions the existence of the individuals in their temporal circumstances.

The character class, the class corresponding to a character, is complete. This means that each individual satisfying the laws of a character is an element of its character class.

 

A character allows of a certain margin of variation. It provides room for the individuality of the things or events subject to the character. The margin of individual variation is relatively small for spatial and kinetic characters, larger for physically qualified ones, and even more for plants and animals. In order to specify this kind of variation, I borrow the concept of an ensemble from statistical mechanics.[7]

The number of possibilities may be restricted. Besides the character, circumstances dependent on space and time may determine the ensemble of possibilities. Hence, an ensemble is not always a class. It is not always sensible to distinguish a class of individuals from an ensemble of possibilities. Sometimes each individual corresponds exactly with one possibility, such that the ensemble coincides with the character class (2.1, 3.2).

The concept of an ensemble is especially relevant when statistics is applicable, distinguishing a possibility from its realization. This is only meaningful with respect to characters that are physically characterized, whether primary, secondary, or tertiary (1.3). The relative frequency by which possibilities are realized is called their probability. This is a mathematical concept anticipating some kind of physical interaction. The theory of probability (2.4) is extremely important for the study of characters. Sometimes, it is possible to design a theory for an ensemble and to calculate theoretical probabilities. More often, probabilities can only be determined in an empirical way.
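The following minimal Python sketch (my own illustration; the six-sided die and the sample sizes are arbitrary assumptions, not examples from the original) shows probability as the relative frequency by which a possibility is realized, determined in an empirical way:

    import random

    def relative_frequency(possibility, trials):
        """Estimate a probability empirically as the relative frequency
        with which a possibility (here: throwing a six) is realized."""
        hits = sum(1 for _ in range(trials) if random.randint(1, 6) == possibility)
        return hits / trials

    # The theoretical probability of a six is 1/6, about 0.167; the
    # empirical estimate approaches it as the number of realizations grows.
    for n in (100, 10_000, 1_000_000):
        print(n, relative_frequency(6, n))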

We can now point out the difference between things and events. We speak of a thing if it has objective possibilities. We speak of an event or a process if a possibility is realized. A process is a complex of events. A thing is a characteristic unity: it has structural coherence and it maintains its identity during its motion. It has a certain stability and duration of existence. It comes into being, it changes, it generates other things, it influences its environment and it decays. An event is transitive and implies transformation and transport, generation, growth and behaviour.

Because the realization of a possibility always involves physical interactions, there are no quantitatively, spatially or kinetically qualified events. Even motion does not constitute an event but a relation.

 

The description of a character is usually called a model, although the word model has several other meanings in science. A model represents our knowledge of a character, sometimes our assumed knowledge. Often, a model is a simplified representation of the character, sufficient to solve a particular problem. The solution of the problem may lead us to construct a new model in order to solve other problems and to increase our knowledge of the character.

Sometimes, a model is considered a description. However, in science a model is always a theory, a deductively connected set of propositions, including some law statements. A law statement is a human formulation of a natural law. Newton’s formulation of the law of gravity was different both from Galileo’s before him and Einstein’s after him.

Distinguishing a natural law from a law statement, and a character from a model, means that the philosophy expounded in this treatise includes a critical-realistic worldview. A realist assumes that characters and other natural laws are part of reality, independent of human experience. On the other hand, scientists formulate law statements and construct models for the benefit of their research. Models are invented, characters are discovered. The natural laws constituting a character are not separated from concrete reality but are intrinsically connected to it. Characters can only be discovered by investigating the individual things and processes concerned. This critical-realistic view confirms the empirical method of science.[8] It accords with the Christian view that natural laws are given by God and can be discovered by humans.

 

 


 

 

1.2. The first four relation frames

 

It is not easy to tell what makes a certain subject a unity, a totality, or a well-distinguished individual.[9] In our daily experience, it is clear that a plant or an animal is such an individual. Through natural experience, we know of the unity of a thing that comes into being and perishes, that maintains its identity while changing and is recognizable as an individual. However, for scientific purposes natural experience is not a reliable source of information. Common sense is not documented, and it cannot be legitimated in a scientifically justifiable way. Since the nineteenth century, science has discovered an increasing number of characters and corresponding individuals unknown to everyday experience, such as atoms and molecules, cells and neutron stars.

According to Immanuel Kant, a Ding an sich (a thing in itself) is unknowable, and I concur with this view. Our experience and knowledge of things and events follows from the relations they have with other things and events. Therefore, this treatise is not only concerned with characters of natural things and events, but with their relations as well. I shall distinguish subject-subject relations from subject-object relations (see below). I shall give some examples presently.

According to present-day science, reality is entirely relational. Nothing exists in itself. Things and events only exist in relation to other things and events. In order to map the cosmos, it is useful to have a lattice, a reference system, by which anything can be localized and identified. As co-ordinate systems, the time measured by clocks and the calendar have no meaning apart from the things and events which they connect; even time and space cannot be separated from reality. However, there are more relations than the spatio-temporal ones. Natural science is concerned with quantitative, spatial, kinetic, physical, biotic, and psychic relations. The humanities and social sciences investigate ten more relation frames, called ‘normative’, because these are not governed by natural laws, but by normative principles, which people in the course of their history have elaborated into a large variety of norms.

In this section, I introduce four sets of natural laws called relation frames, each governing a type of general, non-specific relations between individuals. In section 1.3, I shall argue that these relation frames provide each character with a primary, secondary and/or tertiary characteristic.

Each relation frame makes one think of a temporal order, at least if we interpret cosmic time in a wider sense than the usual kinetic time as measured by a clock.[10] This order is the law side of a relation frame. The corresponding relations constitute the subject side.

 

a. First, putting things or events in a sequence we find a serial order. We express this order by numbering the members of the sequence. The numerical order gives rise to numerical differences and ratios, being quantitative subject-subject relations. The subjects of the laws belonging to the first relation frame are numbers: natural numbers and integers, fractions or rational numbers, and real numbers, all ordered on the same scale of increasing magnitude. Numbers are subject to laws of addition and multiplication. Everything in reality has a quantitative aspect. If we express some relation quantitatively, we aim at an exact and objective representation. The numerical relation frame is a condition for the existence of the next frames.

 

b. The second relation frame concerns spatial ordering. The relative position of two figures is the universal spatial relation between any two subjects, the spatial subject-subject relation. Whereas the serial order is one-dimensional, the spatial order consists of several mutually independent dimensions. In each dimension the positions are serially ordered and numbered. Relative to each of these dimensions, there are many equivalent positions. Independence and equivalence are spatial key concepts, just like the relation of a whole and its parts. The spatial relation frame returns in wave motion as a medium; in physical interactions as a field; in ecology as the environment; and in animal psychology as observation space, such as an animal’s field of vision.

 

c. The third relation frame records how things are moving and when events occur. Relative motion is a subject-subject relation. Motion presupposes the serial order (the diachronic order of earlier and later) and the order of equivalence (the synchronic order of simultaneity or co-existence), and it adds a new order, the uniform succession of temporal instants. Although a point on a continuous line has no unique successor, we nevertheless assume that a moving subject runs over the points of its path successively. Hence, relative motion is an intersubjective relation, irreducible to the preceding two. Because kinetic time is uniform, we are able to establish the proportion of different temporal intervals between events and the periodicity of oscillations, waves and many other rhythms. The law of uniformity concerns all kinds of relatively moving systems, including clocks. Therefore, it is possible to project kinetic time on a linear scale, independent of the number of dimensions of kinetic space.

 

d. Contrary to kinetic time, the physical or chemical ordering of events is marked by irreversibility.[11] Different events are physically related if one is the cause of the other, and this relation is irreversible. All physical and chemical things influence each other by some kind of interaction, by exchanging energy or matter, or by exerting a force on each other. Each physical or chemical process consists of interactions. Therefore, I consider the interaction between two things to be the universal physical subject-subject relation. Interaction presupposes the relation frames of quantity, space and motion.[12] Interaction is subjected to laws. Some laws are specific, such as those of electromagnetic interaction, and determine characters. Other laws are general, such as the laws of thermodynamics and the laws of conservation of energy, linear and angular momentum. The general laws constitute the physical and chemical relation frame.

 

The relation frames are not independent of each other. The subject-subject relations of one relation frame can be projected onto those of another one. Numbers represent spatial positions, and motions are measured by comparing distances covered in equal intervals. For the theory of characters, this turns out to be extremely important.

These projections are expressed as subject-object relations. In a nomic context, something is a subject if directly subjected to a given law, whereas an object is indirectly, via a subject, involved with that law. The same thing or event may be a subject with respect to a certain law, and an object with respect to another law.

Each concrete individual thing or event is a subject or an object in each relation frame. For instance, a tree is a subject to quantitative, spatial, kinetic, physical, and biotic laws. But a tree is an object with respect to the laws for the behaviour of animals. A tree is observable by animals and human beings, not by itself or by other trees. In the normative relation frames, only human beings and their associations can be subjects.

We speak of a subject-object relation if the object has a function with respect to the subject. We find subject-object relations in all relation frames, except the first one.

 

a. In the first relation frame, only numerical subject-subject relations occur. These cannot be projected on an earlier frame. Besides numbers, quantitative properties and relations of things and events, which are projections on the numerical frame, are directly subject to numerical laws. At first sight, the lack of a quantitative subject-object relation may look strange, because a representation in numbers is usually considered the highest form of objectivity. However, this would concern a state of affairs that is not primarily quantitative, but physical, psychic, or economical, objectified into quantities. Physical properties, for instance, are projected on a numerical scale. Indeed, numbers and numerical relations are objects in all relation frames following the quantitative one. We shall see that the real numbers have an indispensable function in determining spatial and other ratios.

 

b. Objects are first found with respect to spatial laws. Spatial figures are mutually related by their relative position. A co-ordinate system is a spatial figure. It functions as an intermediary for the establishment of subject-subject relations, like relative position, or the relation of a whole to its parts. In order to be able to calculate with co-ordinates, one needs a metric, a law stating how to determine the distance between two points. For a Euclidean space, the metric is based on Pythagoras’ law. Non-Euclidean metrics exist as well. The quantification of kinetic and physical relations too requires a metric. In the next chapters, we shall pay much attention to the metric as a law for subject-object relations.
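The Euclidean metric can be made explicit. As a minimal sketch (my formulation of the standard formulas, with the Minkowski metric of special relativity as a non-Euclidean example of my own choosing): for two points with co-ordinates (x₁, y₁) and (x₂, y₂),

\[
d = \sqrt{(x_{2} - x_{1})^{2} + (y_{2} - y_{1})^{2}},
\qquad
s^{2} = c^{2}(t_{2} - t_{1})^{2} - (x_{2} - x_{1})^{2},
\]

where the first expression is Pythagoras’ law, and the second replaces it by a different prescription for the ‘distance’ between two events.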

The distance between two points is an objective determination of the relative position between two spatial subjects. We may consider a point as being a subject to spatial laws. Usually, a point represents an extended figure in its relation to another figure, for example, the point of intersection for two lines. The distance of their centres objectifies the relative position of two circles. Distance is not a spatial subject, yet it is a spatial concept. We express a distance as a number by determining its ratio to the unit of length. The distance between two circles is so many metres. In the concept of distance, a spatial relation appears to be projected on a numerical one. The same applies to other spatial magnitudes like length, area, volume or angle. These quantitative relations between spatial figures we call spatial objects, being subject to numerical laws, and being involved in spatial laws through the spatial figures concerned. They have an important function in the determination of spatial relations. The distance between two circles is a spatial property, but a distance itself has no position, whereas it is meaningful to say that one distance is larger than the other one.

The relation of a spatial figure to its magnitude is a subject-object relation too. In contrast, the relation of a spatial whole to its parts is not a subject-object relation, but a subject-subject relation. A segment of a circle is subject to the same spatial laws as the circle itself. The relation of a circle with one of its segments is objectively expressed by the ratio of their areas. Not having an area, a radius is not a part of a circle. It determines the magnitude of the circle in an objective way. Having length, a radius has a one-dimensional subject-subject relation to the circumference of the circle.

 

c. Spatial and numerical relations are conditional for motion. Kinetic time projects motion on a series of numbers represented by a clock. The path of a moving subject is a spatial projection, a kinetic object as well as a spatial subject.[13] In principle, each spatial figure can be a path of motion, functioning as an object with respect to kinetic laws. Conversely, a motion can generate a figure. Combining two mutually perpendicular harmonic oscillations generates a Lissajous-figure (circle, ellipse, lemniscate, etc.). A combination of a circular motion with a perpendicular linear motion generates a spiral.
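The generation of figures by combined motions can be made concrete in a few lines. The following minimal Python sketch (my own illustration; the frequencies and phase are arbitrary choices) computes points of a Lissajous figure from two perpendicular harmonic oscillations; equal frequencies with a phase difference of 90 degrees yield a circle, and a frequency ratio of 1:2 a figure of eight:

    import math

    def lissajous(freq_x, freq_y, phase, steps=1000):
        """Combine two perpendicular harmonic oscillations:
        x(t) = sin(freq_x*t + phase), y(t) = sin(freq_y*t)."""
        points = []
        for i in range(steps):
            t = 2 * math.pi * i / steps
            points.append((math.sin(freq_x * t + phase), math.sin(freq_y * t)))
        return points

    circle = lissajous(1, 1, math.pi / 2)  # x = cos t, y = sin t: a circle
    eight = lissajous(1, 2, math.pi / 2)   # 1:2 ratio: a figure of eight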

Usually a path of motion is thought to be a line. This abstraction is useful if a representative point functions to give the instantaneous position of the moving subject. In atomic physics, this turns out to be impossible. Instead, a wave packet objectifies the motion of a particle (4.2).

The ratio of the covered distance and the corresponding time interval is called the subject’s speed. This measure is only sufficient if the motion is uniform and rectilinear, subject to the law of inertia, Newton’s first law of motion. This is the only movement existing apart from the physical relation frame. Every other kind of motion demands a physical force, proportional to the subject’s acceleration, according to Newton’s second law. The kinetic concept of acceleration clearly anticipates the physical relation frame.
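In formulas (my summary of the standard law statements referred to, not a quotation): uniform rectilinear motion keeps the velocity constant, whereas any other motion requires a force proportional to the acceleration,

\[
v = \frac{\Delta s}{\Delta t} = \text{constant} \quad \text{(first law)},
\qquad
F = ma = m\,\frac{\Delta v}{\Delta t} \quad \text{(second law)}.
\]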

Like the path of motion, velocity and acceleration are kinetic objects having the function to quantify motion. Being vectors, they are subject to spatial laws, and having magnitudes, they are subject to numerical laws as well. Hence, the relations of a moving thing to its path of motion, as well as its velocity and its acceleration, are kinetic subject-object relations.

 

d. Physical relations like energy, mass and charge can be objectified as quantities just as we do with spatial and kinetic relations. Some vectors like force, momentum, or electric field strength are subject to spatial laws as well. Being vectors, they have both magnitude and direction.

A physical magnitude is a proportion, a relation to a unit. For instance, a potential difference is 220 volts. During the nineteenth century, the metric system was introduced, mainly for economic reasons, as a condition for making objective comparisons. It improves the objective communication of measurement results in science, the exchange of parts in machines, and the honesty of merchants.

For the use of a metric, it must be certain that numerical addition and multiplication correspond with non-numerical operations. For instance, forces are additive like vectors only if applied to the same thing. Newton’s third law is about two forces (equal but with contrary directions), but adding them is a blunder, because they do not act on the same subject.
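A minimal Python sketch (my own illustration, with made-up numbers) of this condition: forces acting on the same subject add like vectors, whereas a third-law pair acts on two different subjects and must not be added:

    def add_forces(*forces):
        """Vector sum of forces - only meaningful if all of them
        act on the same subject."""
        return tuple(sum(components) for components in zip(*forces))

    # Two forces on the same body: vector addition is legitimate.
    net = add_forces((3.0, 0.0), (0.0, 4.0))  # (3.0, 4.0): magnitude 5 N

    # An action-reaction pair, (10, 0) on body A and (-10, 0) on body B,
    # acts on different subjects; summing it to (0, 0) would be the
    # blunder mentioned above.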

In an experiment or a theory, some magnitudes are kept constant; others change due to physical interactions between physical subjects. Change is a physical or chemical process. The object of change is the state of a physical system (the subject), the state being the summary of all magnitudes objectively characterizing the subject.[14]
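Footnote [14] states the equation of state of an ideal gas. As a worked example (an addition of mine, using standard values): one mole of an ideal gas at T = 273 K in a volume of 22.4 litres exerts a pressure of

\[
p = \frac{nRT}{V} = \frac{(1\,\text{mol})\,(8.314\,\text{J}\,\text{mol}^{-1}\,\text{K}^{-1})\,(273\,\text{K})}{0.0224\,\text{m}^{3}} \approx 1.01 \times 10^{5}\,\text{Pa},
\]

about one atmosphere; p, V and T together objectively characterize the state of this physical subject.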

 


 

 

1.3. Types of natural characters

 

In this section, I elaborate the Christian philosophical typology of characters. I shall test and apply it in the following chapters. Each character has a primary, a secondary and a tertiary characteristic. In section 1.4, we shall see that characters are mutually interlaced. Whereas the relation frames discussed in section 1.2 are linearly ordered, the characters form a network. However, their typology depends on the linear ordering of the relation frames. In sections 1.3 and 1.4, I discuss this typology in broad lines. I shall elaborate it in the subsequent chapters. Then it should become clear whether this typology leads to a better understanding of natural and other characters, their coherence and their meaning.

 

Each character is primarily characterized by one of the relation frames, called the qualifying frame. Periodic motion primarily characterizes a rhythm. Interaction qualifies physical things and events. Plants are primarily characterized by genetic relations, and animals by informed behaviour. In each character, these relations are specific; for instance, a physical interaction may be electromagnetic.

For natural characters, the qualifying relation frame is the last one in which the thing concerned acts as a subject; in subsequent relation frames it is an object.[15] A physically qualified thing like a molecule can only be an object in the relation frames succeeding the physical frame. However, a bird’s nest is not a subject with respect to biotic or psychic laws either. It is not a living being, it has no ancestry or progeny, and it does not display behaviour. The bird’s nest is at most a subject to physical laws, whereas through the pair of birds, it is an object with respect to the biotic and psychic laws governing the birds’ behaviour. The birds construct the nest with a clear purpose, and it has a function in the birds’ reproduction. Therefore, the physical relation frame does not qualify it primarily as a physical subject. Rather, the psychic relation frame qualifies the bird’s nest primarily as a psychic object.

In principle, each relation frame qualifies a number of characters. According to a traditional viewpoint, there are only three natural kingdoms: the kingdom of minerals, the kingdom of plants, and the kingdom of animals.[16] However, I believe there are characters qualified by the quantitative, spatial or kinetic relation frame as well (chapters 2-4).[17] For instance, a triangle has a spatially qualified structure, whereas waves and oscillations have a kinetic character.

 

Unless it is quantitatively qualified, I shall characterize the character of an individual thing or event secondarily by the projection of the qualifying relation frame on a preceding one, called the founding frame.[18] As many secondary types correspond with each primary type as relation frames precede the qualifying frame. For physically qualified characters, this means three secondary types of characters, respectively founded in projections on the numerical, the spatial or the kinetic relation frame. For instance, an electron is secondarily characterized by quantitative properties like charge, rest mass and magnetic moment, each having a strictly determined value. These properties characterize electrons and distinguish them from other particles like muons (5.2). The founding relation frame is just as typical for a character as its qualifying frame is.

However, mass, charge and magnetic moment are physical magnitudes, determining how and to what extent an electron can interact with other things. For a physically qualified character, its quantitative foundation is physical as well. Hence, the secondary characteristic does not concern the preceding relation frame itself, but a projection of the qualifying frame onto the founding one. In the subsequent chapters, I shall pay much attention to the secondary qualities of characters.

 

Sometimes, the characters of two individuals are tuned to each other such that they can be interlaced. The tertiary characteristic of a thing or event means that as an object it may become a part of another thing or a process. An electron, for instance, is not an iron atom but has the disposition to have a function in an iron atom. Hence, the tertiary characteristic concerns a specific subject-object relation. Earlier I observed that the nomological distinction between a subject and an object refers to some law. Each iron atom is directly subject to the character of iron atoms. On the other hand, its electrons and the iron nucleus are subjected to the set of laws for an iron atom only as objects. Besides their primary and secondary characteristics, nuclei and electrons have a tertiary characteristic. It is their disposition, tendency or affinity to become part of an atom. They are tuned to the character of the atoms to which they may belong.

A second example concerns the molecules playing a part in a living cell, in particular DNA and RNA molecules. Their primary characteristic is physical, for the so-called biomolecules are qualified by interaction. Their foundation is spatial, and the discovery of the double helix structure of DNA molecules by Francis Crick and James Watson (1953) is rightly considered a big step towards the understanding of the functioning of living cells. Nevertheless, much more interesting is the part these molecules play in biotic processes, like the organized assemblage of macromolecules, the fission of cells, and the development of a multicellular plant. That is their disposition, their tertiary characteristic.

Whereas a foundation refers to an earlier relation frame, a disposition often anticipates a later one, either later than the qualifying relation frame, or later than the founding one. The spatial and physical structure of a bird’s nest anticipates the psychically qualified behaviour of the birds using it. The quantitatively founded character of electrons anticipates the spatially founded characters of atoms. If a character qualified by one relation frame is interlaced with a character qualified by a later frame, the former character has an objective function in the latter one. This is, for instance, the case with the character of (physically qualified) molecules like DNA in the character of (biotically qualified) living cells.

Whereas the primary and secondary characteristics concern properties, the tertiary characteristic is a propensity. A certain water molecule may or may not have an actual function in a plant, but it always has the potential to perform such a function.

 

Many a thing or process that we experience as an individual unit turns out to be an aggregate of individuals. I shall call an individual thing an aggregate if it lacks a characteristic unity. Examples are a pebble, a wood, or a herd of goats. A process is an aggregate as well. It is a chain of connected events. For a physicist or a chemist, a plant is an aggregate of widely differing molecules, but for a biologist, a plant is a characteristic whole. An aggregate consists of at least two individual things, but not every set is an aggregate. The components should show some kind of coherence.

To establish whether something is an individual or an aggregate is not an easy matter. It requires knowledge of the character that determines its individuality. It appears to be important to distinguish between homogeneous and heterogeneous aggregates. A homogeneous aggregate is a coherent collection of similar individuals, for instance a wave packet conducting the motion of a photon or an electron; or a gas consisting of similar molecules; or a population of plants or animals of the same species. A heterogeneous aggregate consists of a coherent collection of dissimilar individuals, for instance a gaseous mixture like air, or an ecosystem in which plants and animals of various species live together.

 

 


 

 

1.4. Interlacement of characters

 

Even apart from the existence of aggregates, an individual never satisfies the simple character type described in section 1.3. Because of its tertiary characteristic, each character is interlaced with other characters. On the one hand, character interlacement is a relation of dependence, insofar as the leading character cannot exist without the characters interlaced in or with it. The character of a molecule exists thanks to the characters of its atoms. On the other hand, character interlacement rests on the disposition of a thing or event to become a part of a larger whole. If it actualizes its disposition, it largely retains its primary and secondary character. Sometimes characters are so strongly interlaced that one had better speak of a ‘dual character’, as, e.g., for the wave-particle duality (4.3).

I shall discern several types of character interlacement.

 

In the first type of interlacement, the whole has a qualifying relation frame different from those of the characters interlaced in the whole. In chapters 4 and 5 we shall meet this phenomenon in the wave-particle duality, where the particle character is physically qualified (particles interact with each other, which waves do not) and the wave character is primarily kinetic. As a measure of probability, the wave character anticipates physical interactions.

A second example is the physically qualified character of a DNA molecule being interlaced with the biotic character of a living cell. The molecule is physically qualified, the cell biotically. Their characters cannot be understood apart from each other. The cell is a biotic subject, the DNA-molecule a biotic object, the carrier of the genome, i.e., the ordered set of genes. A cell without DNA cannot exist, whereas DNA without a cell has no biotic function. The cell and the DNA molecule are mutually interlaced in a characteristic subject-object relation.

We find this type of interlacement in processes as well. For instance, the character of each biotic process is intertwined with that of a biochemical process. The behaviour of animals is interlaced with the characters of processes in their nervous system.[19]

 

The second type of interlacement occurs if characters having the same qualifying relation frame but different foundations form a single whole.

For example, the character of an atom is interlaced with the characters of its nucleus and electrons. All these characters are physically qualified. The electron’s character is quantitatively founded, whereas the character of the nucleus is spatially founded like that of the atom. However, in the structure of the atom, the nucleus acts like a unit having a specific charge and mass, as if it were quantitatively founded, like the electrons. The (in this sense) quantitatively founded character of the nucleus and that of the electrons anticipate the spatially founded character of the atom. The nucleus and the electrons have a characteristic subject-subject relation, interacting with each other. Nevertheless, they do not interact with the atom of which they are a part, for they have a subject-object relation with the atom, and interaction is a subject-subject relation.

 

In the third type of interlacement of characters, there is no anticipation of one relation frame to another. For instance, in the interlacement of atomic groups into molecules all characters are physically qualified and spatially founded. For another example, the character of a plant is interlaced with those of its organs like roots and leaves, tissues and cells. Each has its own biotic character, interlaced with that of the plant as a whole. We find a comparable hierarchy of characters in two-, three- or more-dimensional spatial figures. A square is a two-dimensional subject having an objective function as the side of a cube.

Characters of processes are interlaced with the characters of the things involved. Individual things come into existence, change and perish in specific processes. Complex molecules come into existence by chemical processes between simpler molecules. A cell owes its existence to the never-ending process called metabolism: respiration, photosynthesis, transport of water, acquisition of food, and secretion of waste, dependent on the character of the cell.

Usually processes occur on the substrate of things, and many thing-like characters depend on processes. Quantum physics proves that even the most elementary particles are continuously created and annihilated. The question which comes first, the thing or the process, has no better answer than that of the chicken and the egg. There is only one cosmos, in which processes and things occur, generating each other and having strongly interlaced characters.

 

When a character is interlaced with another one, its properties change without disappearing entirely. If an atom becomes part of a molecule, its character remains largely the same, even if its distribution of charge is marginally adapted.

It is interesting that molecules have properties that the composing atoms do not have. A water molecule has properties which are absent in the molecules or atoms of hydrogen or oxygen. Water vapour is a substance completely different from a mixture of hydrogen and oxygen. This universally occurring phenomenon is called emergence.[20] It plays a part in discussions between reductionists and holists, not only in biology or in anthropology.[21]

Emergence is expressed in the symmetry of a system, for instance. A free atom has the symmetry of a sphere, but this is no longer the case with an atom being a part of a molecule. The atom adapts its symmetry to that of the molecule by lowering its spherical symmetry. The symmetry of the molecule is not reducible to that of the composing atoms. Symmetries (not only spatial ones) and symmetry breaks play an important part in physics and chemistry. ‘Constraints’ like initial and boundary conditions are possible causes of a symmetry break.

 

Scientific classification is different from the typology of characters based on universal relation frames. Classification means the formation of sets of characters based on specific similarities and differences. This is possible because each character is a set of laws, which it partly shares with other characters. A set of characters is determined by having some specific laws in common. An example of a specific classification is the biological taxonomy of living beings according to species, genera, etc. Other examples are the classification of chemical elements in the periodic system, of elementary particles in generations of leptons and quarks, and of solids according to their crystalline structure (5.3, 5.4).

Because specific classifications rest on specific laws, the chemical classification of the elements is hardly comparable to the biological classification of species. The general typology of characters developed in this treatise is applicable to widely different branches of natural science and may therefore lead to a deepened understanding of characters. Moreover, the typology provides insight into the coherence and the meaning of characters.

 

Each individual thing is either a subject or an object with respect to any relation frame in a way determined by its primary, secondary, and tertiary characteristics. Individual things and events present themselves in their relations to other things and events, allowing us to establish their identity.

The meaning of a thing or event can only be found in its connection with other things and events, and with the laws valid for them. In addition, the meaning of a character comes to the fore only if we take into account its interlacements with other characters. For instance, it is possible to restrict a discussion of water to its physical and chemical properties. Its meaning, however, will only become clear if we include in the discussion that water is a component of many other materials. Water plays a part in all kinds of biotic processes, and it appeases the thirst of animals and humans. Water has a symbolic function in our language and in many religions. The study of the character of water is not complete if restricted to the physical and chemical properties. It is only complete if we consider the characteristic dispositions of water as well.[22]

Likewise, the meaning of individual things and events is only clear in their lawful relations with other individuals. These relations we have subsumed in relation frames, which are of profound significance for the typology of characters. We find the meaning of the cosmos in the coherence of relation frames and of characters, and in particular in the religious concentration of humankind on the origin of the creation, as we have seen before.

 

The theory developed in this treatise rests on the presupposition that a character as a set of laws determines the specific nature of things or processes. Such a set leaves room for individual variation. Hence, the theory is not deterministic. Reality has both a law side and a subject side that cannot be separated. Both are always present. In each thing and each process, we find lawfulness besides individuality.

The theory of characters is not essentialist either.[23] The primary characteristic of each character is not determined by a property of the thing or process itself. Rather, its relations with other things or processes, subject to the laws of a relation frame, are primarily characteristic of a character. Besides, the secondary and tertiary characteristics concern relations subject to general and specific laws as well. In particular the tertiary characteristic, the way by which a character is interlaced with other characters, provides meaning to the things and processes concerned. Essentialism seeks the meaning (the essence) of characters in the things and events themselves, attempting to catch them into definitions. In a relational philosophy, definitions do not have a high priority.

Next, the theory of characters is not reductionistic. This statement may be somewhat too strong, for there is little objection to raise against ‘constitutive reductionism’. This conception states that all matter consists of the same atoms or sub-atomic particles, and that physical and chemical laws act on all integration levels.[24] The theory of characters supposes that the laws for physical and chemical relations cannot be reduced to laws for quantitative, spatial, and kinetic relations.[25] It asserts the existence of laws for biotic and psychic relations transcending the physical and chemical laws. It is at variance with a stronger form of reductionism, presupposing that living organisms only differ from molecules by a larger degree of complexity,[26] whether or not supplemented by the phenomena of supervenience and emergence.[27] I believe that the phenomenon of character interlacement gives a better representation of reality.

Finally, the theory of characters cannot be argued on a priori grounds. As an empirical theory, it should be justified a posteriori, by investigating whether it agrees with scientific results. This we shall do in the chapters to come.



[1] For an introduction see Stafleu 2017, The open future, Contours of a Christian philosophy of dynamic development, www.mdstafleu.nl.

[2] Stafleu 2018a, Encyclopedia of relations and characters. I. Natural laws. II. Normative principles, www.mdstafleu.nl.

[3] The concept of a character corresponds to the law side of Dooyeweerd’s ‘structure of individuality’. However, a definition of ‘structure of individuality’ comparable to mine of a character cannot be found in Dooyeweerd’s works.

[4] Sklar 1993, 3: ‘… little attention of a systematic and rigorous sort has been paid by the philosophical community to the foundational issues to which even our present, only partially formulated, theory of the constitution of matter gives rise.’

[5] Achinstein 1971, chapter 1. For the sake of convenience, I shall include mathematical laws like that of Pythagoras among natural laws. In chapters 1 and 2, I shall discuss mathematical characters and relations only as far as they are relevant to science. Laws are also known as axiom, characteristic, constant, design, equation, metric, pattern, phenomenon, postulate, prescription, principle, prohibition, property, proposition, relation, rule, symmetry, theorem or thesis. Hence, I understand the concept of ‘natural law’ much wider than usual, see Stafleu 2016, 8.6.

[6] I shall consider a class to be unbounded in number, space and time. A collection is bounded in number, space and/or time.

[7] Tolman 1938, 43: An ensemble of systems is ‘… a collection of systems of the same structure as the one of actual interest but distributed over a range of different possible states.’ The concept of an ensemble was introduced circa 1900 by J.W. Gibbs. In physics, it is sometimes possible to project an ensemble onto an abstract state space.

[8] For a review of critical realism (Karl Popper, Mario Bunge, Hilary Putnam and others), see Niiniluoto 1999 or Psillos 1999.

[9] Often, one calls a thing an ‘entity’, meaning ‘essential existence’, i.e., the existence of a thing apart from its properties and other relations. I want to make clear that nothing can exist without its relations, and I shall criticize essentialism, in which ‘entity’ is a key word. Hence, I prefer the neutral word ‘thing’. Sometimes one calls events and processes ‘phenomena’. However, I consider a phenomenon not to be an individual but a character. I distinguish the timeless phenomenon of the rainbow from the temporally and spatially determined occurrence of a rainbow as an individual event.

[10] It is impossible to apply a character to the cosmos, because no class (of cosmoses) or ensemble (of possibilities), hence no variety, corresponds to the would-be character of the cosmos. The cosmos as an ordered universe is not an individual but a totality, characterized by the mutual interlacements of all kinds of characters. See Dooyeweerd 1953-1958, III, 627-634.

[11] The irreversibility of physical processes presupposes the reversibility of all natural laws with respect to kinetic time, but cannot be derived from that, compare Mehlberg 1971, 28-29, 45-46, 48; Sklar 1993, chapter 10. Gold, in Gold (ed.) 1967, 184, observes: ‘It is a remarkable fact that … the laws of physics have turned out to possess symmetry and the boundary conditions seem to have turned up in such a way as not to produce symmetry. It seems to me that the world has thus supplied us with a reason for making this distinction. It seems to have arisen naturally in the description of physics. So I think maybe it is basic.’

[12] The mechanical conservation laws are related to Einstein’s principle of relativity, stating that the laws of physics can be formulated independent of the motion of the inertial frames.

[13] Motion is relative. If a car moves with respect to a road, the road moves with respect to the car. In this sense, a road is a kinetic subject. However, by the ‘path of motion’ we usually intend to present motion objectively in a spatial way, as the distance covered since the start of the motion.

[14] For a gas, the state is characterized by magnitudes like the amount of gas (number of moles, n), the pressure p, the volume V, and the temperature T. The state of a gas is subject to a law called the equation of state. For an ideal gas it is the law of Boyle and Gay-Lussac: pV = nRT (R is the gas constant, having the same value for all gases).

[15] Dooyeweerd NC I, 108; III, 56, 58, 106-109.

[16] Dooyeweerd NC III, 79, 83. According to Dooyeweerd NC III, 83, structures qualified by the same modal aspect have the same ‘radical type’, and all things and events having structures of the same radical type form a ‘kingdom’. However, the word kingdom is not common in physics or chemistry, whereas biologists distinguish six kingdoms of living beings.

[17] Stafleu 1985.

[18] Dooyeweerd 1953-1958, III, 143, 266. Numerical relations do not allow of projections on a preceding relation frame, and quantitative characters only apply to relations (2.3).

[19] This looks like supervenience, see Charles, Lennon (eds.) 1992, 14-18. The idea of supervenience, usually applied to the relation of mind and matter, says that phenomena on a higher level are not always reducible to accompanying phenomena on a lower level. It is supposed that material states and processes invariantly lead to the same mental ones, but the reverse is not necessarily the case. A mental process may correspond with various material processes. Character interlacement implies much more than supervenience, which in fact is no more than a reductionist subterfuge.

[20] The theory of emergence states that at a higher level new properties emerge that do not occur at a lower level: the whole is more than the sum of its parts, see Popper 1972, 242-244, 289-295; 1974, 142; Popper, Eccles 1977, 14-31; Mayr 1982, 63-64. Following Dobzhansky, Stebbins 1982, 161-167 speaks of ‘transcendence’: ‘In living systems, organization is more important than substance. Newly organized arrangements of pre-existing molecules, cells, or tissues can give rise to emergent or transcendent properties that often become the most important attributes of the system’ (ibid. 167). Besides the emergence of the first living beings and of humanity, Stebbins mentions the following examples: the first occurrence of eukaryotes, of multicellular animals, of invertebrates and vertebrates, of warm-blooded birds and mammals, of the higher plants and of flowering plants. According to Stebbins, reductionism and holism are contrary approximations in the study of living beings, with equal and complementary values.

[21] In physics, the planned construction of the superconducting supercollider (SSC) around 1990 gave rise to fierce discussions. Supporters (among whom Weinberg) assumed that the understanding of elementary particles would lead to the explanation of all material phenomena. Opponents (like Anderson) stated that solid state physics, e.g., owes very little to a deeper insight into sub-atomic processes. See Anderson 1995; Weinberg 1995; Kevles 1997; Cat 1998.

[22] Dooyeweerd 1953-1958, III, 107: ‘Nowhere else is the intrinsic untenability of the distinction between meaning and reality so conclusively in evidence as in things whose structure is objectively qualified.’

[23] Essentialism means the hypostatization of being (Latin: esse), contrary to the view that the meaning of anything follows from its relations to everything else. According to Dooyeweerd, the ‘meaning nucleus’ and its ‘analogies’ with other aspects determine the meaning of each modal aspect. However, this incurs the risk of an essentialist interpretation, as if the meaning nucleus together with the analogies determines the ‘essence’ of the modal aspect concerned. In my view, the meaning of anything is determined by its relations to everything else, not merely by the universal relations as grouped into the relation frames, but by the mutual interlacements of the characters as well.

[24] Mayr 1982, 60: ‘Constitutive reductionism … asserts that the material composition of organisms is exactly the same as found in the inorganic world. Furthermore, it posits that none of the events and processes encountered in the world of living organisms is in any conflict with the physical or chemical phenomena at the level of atoms and organisms. These claims are accepted by modern biologists. The difference between inorganic matter and living organisms does not consist in the substance of which they are composed but in the organization of biological systems.’ Ernst Mayr rejects every other kind of reductionism. ‘Reduction is at best a vacuous, but more often a thoroughly misleading and futile, approach.’ (ibid. 63).

[25] However, we have observed already that physical and chemical relations can be projected onto quantitative, spatial and kinetic relations. This explains the success of ‘methodical reductionism’.

[26] Dawkins 1986, 13 calls his view ‘hierarchical reductionism’, that ‘… explains a complex entity at any particular level in the hierarchy of organization, in terms of entities only one level down the hierarchy; entities which, themselves, are likely to be complex enough to need further reducing to their own component parts; and so on. It goes without saying - … - that the kinds of explanations which are suitable at high levels in the hierarchy are quite different from the kinds of explanations which are suitable at lower levels.’ Richard Dawkins rejects the kind of reductionism ‘… that tries to explain complicated things directly in terms of the smallest parts, even, in some extreme versions of the myth, as the sum of the parts…’ (ibid.).

[27] Papineau 1993, 10: ‘Supervenience on the physical means that two systems cannot differ chemically, or biologically, or psychologically, or whatever, without differing physically; or, to put it the other way round, if two systems are physically identical, then they must also be chemically identical, biologically identical, psychologically identical, and so on.’ This does not imply reductionism, as Papineau himself illustrates in his chapter 2. See e.g., ibid. 44: ‘…I don’t in fact think that psychological categories are reducible to physical ones.’ According to David Papineau, in particular natural selection implies that biology and psychology are not reducible to physics, contrary to chemistry and meteorology (ibid. 47, see also Plotkin 1994, 52, 55; Sober 1993, 73-77). But elsewhere (ibid. 122) Papineau writes: ‘Everybody now agrees that the difference between living and non-living systems is simply having a certain kind of physical organization (roughly, we would now say, the kind of physical organization which fosters survival and reproduction)’, without realizing that this does not concern a physical but a biotic ordering, and that survival and reproduction are no more physical concepts than natural selection is.


Chapter 2

Sets

2.1. Sets and natural numbers

2.2. Extension of the quantitative relation frame

2.3. Groups as characters

2.4. Ensemble and probability


2.1. Sets and natural numbers

 

Plato and Aristotle introduced the traditional view that mathematics is concerned with numbers and with space. Since the end of the nineteenth century, many people thought that the theory of sets would provide mathematics with its foundations.[1] Since the middle of the twentieth century, the emphasis has been more on structures and relations.[2]

In chapter 1, I defined a natural character as a set of mathematical and natural laws, determining a class of individuals and an ensemble of possible variations. Because classes, ensembles, and aggregates are sets, it is appropriate to pay attention to the theory of sets.

In sections 2.1-2.2 it will appear that each set involves at least two relation frames, traditionally called the quantitative and the spatial frames. The elements of a quantitative or discrete set can be counted, whereas the parts of a spatial or continuous set can be measured. Section 2.3 discusses some quantitatively qualified characters, in particular groups. Section 2.4 relates the concept of an ensemble to that of probability.

Numbers constitute the relation frame for all sets and their relations. A set consists of a number of elements, varying from zero to infinity, whether denumerable or not, but there are sets of numbers as well. Which came first, the natural number or the set? Just as in the case of the chicken and the egg, an empiricist may wonder whether this is a meaningful question. We have only one reality available, to be studied from within. In the cosmos, we find chickens as well as eggs, sets as well as numbers. Of course, we have to start our investigations somewhere, but the choice of the starting point is relatively arbitrary. Rejecting the view that mathematics is part of logic (4.6), I shall treat sets and numbers in an empirical way, as mathematical phenomena occurring in the cosmos.

At first sight, the concept of a set is rather trivial, in particular if the number of elements is finite. Then the set is denumerable and countable; we can number and count the elements. It becomes more intricate if the number of elements is not finite yet denumerable (e.g., the set of integers), or infinite and non-denumerable (e.g., the set of real numbers). Let us start with finite sets.

 

Sets concern all kinds of elements; hence they are closer to concrete reality than numbers. (As a human act, the collecting of fruits and the like is one of the oldest means of providing food.) Quantity or amount is a universal aspect of sets. It is an abstraction like the other five natural relation frames announced in section 1.3. For instance, by isolating the natural numbers we abstract from the equivalence relation.[3]

Two sets A and B are numerically equivalent if their elements can be paired one by one, such that each element of A is uniquely combined with an element of B and conversely. All sets being numerically equivalent to a given finite set A constitute the equivalence class [n] of A. One element of this class is the set of natural numbers from 1 to n. All sets numerically equivalent to A have the same number of elements n. I consider the cardinal number n to be a discoverable property (e.g., by counting or calculating) of each set that is an element of the equivalence class [n]. The numbers 1…n function as ordinal numbers or indices to put the elements of the set into a sequence, to number and to count them. It is a law of arithmetic that in whatever order the elements of a finite set are counted, their number will always be the same.
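
For illustration, a minimal sketch in Python (the set contents are invented): two finite sets are numerically equivalent precisely if a one-to-one pairing of their elements exists, which for finite sets amounts to having the same cardinal number.

    # Numerical equivalence of finite sets: a one-to-one pairing exists
    # exactly if both sets have the same number of elements.
    A = {"apple", "pear", "plum"}   # an arbitrary finite set
    B = {1, 2, 3}                   # an element of the same equivalence class [3]

    def numerically_equivalent(a, b):
        # For finite sets, a bijection exists iff the cardinal numbers agree.
        return len(a) == len(b)

    pairing = dict(zip(sorted(A), sorted(B)))  # one possible one-to-one pairing
    print(numerically_equivalent(A, B))        # True
    print(pairing)                             # {'apple': 1, 'pear': 2, 'plum': 3}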

Sometimes the elements of an infinite set can also be numbered. Then we say that the set is infinite yet denumerable. The set of even numbers, e.g., is both infinite and denumerable. As a set of indices, the natural numbers constitute a universal relation frame for each denumerable set. However, the set of natural numbers is a character class as well. It is relevant to distinguish relation frames from characters, but they are not separable.

 

Peano’s axioms formulate the laws for the sequence N of the natural numbers. The axioms apply the concepts of sequence, successor, and first number, but not the concept of equivalence. According to Giuseppe Peano, the concept of a successor is characteristic for the natural numbers:

 

1. N contains a natural number, indicated by 0.[4]

2. Each natural number a is uniquely joined by a natural number a+, the successor of a.[5]

3. There is no natural number a such that a+ = 0.

4. From a+ = b+ follows a = b.

5. If a subset M of N contains the element 0, and besides each element a its successor a+ as well, then M = N.[6]

 

The transitive relation ‘larger than’ is now applicable to the natural numbers. For each a, a+>a. If a>b and b>c, then a>c, for each trio a, b, c.
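
As a sketch (Python, purely illustrative), addition can indeed be defined from the successor operation alone, in line with Peano’s axioms:

    # Peano-style arithmetic: the natural numbers are generated from 0 by
    # the successor operation; addition is defined recursively from it.
    def successor(a):
        return a + 1                 # stands in for the abstract successor a+

    def add(a, b):
        # a + 0 = a;  a + (b+) = (a + b)+
        return a if b == 0 else successor(add(a, b - 1))

    print(add(2, 3))                 # 5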

The natural numbers constitute a character class. Their character, expressed by Peano’s axioms, is primarily quantitatively characterized. It has no secondary foundation for lack of a relation frame preceding the quantitative one.[7] As a tertiary characteristic, the set of natural numbers has the disposition to expand itself into other sets of numbers (2.2).

The laws of addition, multiplication, and raising to powers are derivable from Peano’s axioms.[8] The class of natural numbers is complete with respect to these operations.[9] If a and b are natural numbers, then a+b, a·b, and a^b are natural numbers as well. This does not always apply to subtraction, division, or taking roots, and the laws for these inverse operations do not belong to the character of natural numbers.

Using the two ordering relations discussed, ‘larger than’ and ‘numerical equivalence’, we can order all denumerable sets. All sets having n elements are put together in the equivalence class [n], whereas the equivalence classes themselves are ordered into a sequence. The sets in the equivalence class [n] have no more in common than the number n of their elements.

 

The set of natural numbers is the oldest and best-known set of numbers. Yet it is still subject to active mathematical research, resulting in newly discovered regularities.[10]

Some theorems relate to prime numbers. Euclid proved that the number of primes is unlimited. An arithmetical law says that each natural number is the product of a unique set of primes. Several other theorems concerning primes have been proved or conjectured.[11]

In many ways, the set of primes is notoriously irregular. There is no law to generate them. If one wants to find all prime numbers less than an arbitrarily chosen number n, this is only possible with the help of an empirical elimination procedure, known as Eratosthenes’ sieve.[12]
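
The sieve itself takes only a few lines; a sketch in Python:

    # Eratosthenes' sieve: find all primes below n by elimination, since
    # no law is available that generates the primes directly.
    def sieve(n):
        candidate = [True] * n
        candidate[0:2] = [False, False]              # 0 and 1 are not prime
        for p in range(2, int(n ** 0.5) + 1):
            if candidate[p]:
                for multiple in range(p * p, n, p):
                    candidate[multiple] = False      # eliminate multiples of p
        return [p for p in range(n) if candidate[p]]

    print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]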

 

The relation of a set to its elements is a numerical law-subject relation, for a set is a number of elements. By contrast, the relation of a set to its subsets is a whole-part relation that can be projected on a spatial figure having parts. A subset is not an element of the set, not even a subset having only one element.[13] A set may be a member of another set. For instance, the numerical equivalence class [n] is a set of sets.[14] However, the set of all subsets of a given set A should not be confused with the set A itself.[15]

Overlapping sets have one or more elements in common. The intersection A∩B of two sets is the set of all elements that A and B have in common. The empty set or zero set ∅ is the intersection of two sets having no elements in common. Hence, there is only one zero set. It is a subset of all sets.[16] If a set is considered a subset of itself, each set has trivially two subsets. (An exception is the zero set, having only itself as a subset.)

The union A∪B of two sets looks more like a spatial than a numerical operation. Only if two sets have no elements in common is the total number of elements of their union equal to the sum of the numbers of elements of the two sets taken separately. Otherwise, the sum is less.[17]

Hence, even for denumerable sets the numerical relation frame is not sufficient. At least a projection on the spatial relation frame is needed. This is even more true for non-denumerable sets (2.2).

 

Some sets are really spatial, like the set of points in a plane contained within a closed curve. As its magnitude, one does not consider the number of points in the set, but the area enclosed by the curve. The set has an infinite number of elements, but a finite measure. A measure is a magnitude referring to but not reducible to the numerical relation frame. It is a number with a unit, a proportion.

This measure does not deliver a numerical relation between a set and its elements. It is not a measure of the number of elements in the set. A measure is a quantitative relation between sets, e.g., between a set and its subsets. If two plane spatial figures do not overlap but have a boundary in common, the intersection of the two point sets is not zero, but its measure is zero. The area of the common boundary is zero. In general, only subsets having the same dimension as the set itself have a non-zero measure. We shall see in section 2.2 that all numbers (including the natural ones) determine relations between sets. Only the natural numbers relate countable sets with their elements as well.

Integral calculus is a means to determine the measure of a spatial figure, its length, area or volume. In section 2.4, we discuss probability being a measure of subsets of an ensemble.

For each determination of a measure, each measurement, real numbers are needed. That is remarkable, for an actual measurement can only yield a rational number (2.2).

 

The number 2 is natural, but it is an integer, a fraction, a real number, and a complex number as well. Precisely formulated: the number 2 is an element of the sets of natural numbers, integers, fractions, real and complex numbers. This leads to the conjecture that we should not conceive of the character of natural numbers as determining a class of things, but rather a class of relations. The natural numbers constitute a universal relation frame for all denumerable sets. Peano’s formulation characterizes the natural numbers by a sequence, which is a relation as well. We shall see that the integers, the rational, real, and complex numbers are definable as relations. In that case, it is not strange that the number 2 answers to different types of relations. A quantitative character determines a set of numbers, and a number may belong to several sets, each with its own character. The number 2 is a knot of relations, which is characteristic for a ‘thing’. On the other hand, it responds to various characters, and that is not very ‘thing-like’.

However, it is not fruitful to quarrel extensively about the question of whether a number is essentially a thing or a relation. Anyway, numbers are individual subjects to quantitative laws.


2.2. Extension of the quantitative relation frame

 

The natural numbers satisfy laws for addition, multiplication, and taking powers, by which each pair of numbers generates another natural number. The inverse operations, subtraction, division, and taking roots, are not always feasible within the set of natural numbers. Therefore, mathematics completes the set of natural numbers into the set of integers and the set of rational numbers. Put otherwise, the set of natural numbers has the disposition of generating the sets of integral numbers and of rational numbers. There remain holes in the set of rational numbers: there are still magnitudes (like the ratio of the diagonal of a square to one of its sides) which cannot be expressed in rational numbers. These holes are to be filled up by the irrational numbers. The various number sets constitute a hierarchy, consisting of the sets of, respectively, natural, integral, rational, real, and complex numbers. Each of these sets has a separate character. A natural number belongs to each of these sets. A negative integer belongs to all sets except that of the natural numbers. A fraction like ½ belongs to each set except the first two.

Before discussing the character of integral, rational, real, and complex numbers, I mention some properties.

 

Each integer is the difference between two natural numbers.[18] Several pairs may have the same difference. Hence, each integral number corresponds to the equivalence class of all pairs of natural numbers having the same difference. Likewise, each rational number corresponds to the equivalence class of all pairs of integers having the same proportion. If we do not want to relapse into an infinite regress, we had better not identify (in the way of an essentialist definition) an integer or a rational number with an equivalence class. The meaning of a number depends on its relation to all other numbers and the disposition of numbers to generate other numbers.[19]

The laws for addition, subtraction, multiplication, and division are now valid for the whole domain of rational numbers, including the natural and integral numbers.[20] After the recognition of the natural numbers as a set of indices, the introduction of negative and rational numbers means a further abstraction with respect to the concept of a set. A set cannot have a negative number of elements, and halving a set is not always possible. The integral and rational numbers are not numbers of sets, but quantitative relations between sets. They are applicable to other domains as well, for instance to the division of an apple. The universal applicability of the quantitative relation frame requires the extension of the set of natural numbers.

Meanwhile, two properties of natural numbers have been lost. Neither the integral nor the rational numbers have a first one, though the number 0 remains exceptional in various ways. Moreover, a rational number has no unique successor. Instead of succession, characteristic for the natural and integral numbers, rational numbers are subject to the order of increasing magnitude. This corresponds to the quantitative subject-subject relations (difference and proportion): if a > b then a−b > 0, and if moreover b > 0 then a/b > 1. For each pair of rational numbers, it is clear which one is the largest, and for each trio, it is clear which one is between the other two.

The classes of natural numbers, integers and rational numbers each correspond to a character of their own. These characters are primarily qualified by quantitative laws and lack a secondary characteristic. We shall see that the character of the rational numbers has the (tertiary) disposition to function as the metric for the set of real numbers.

 

The road from the natural numbers to the real ones proceeds via the rational numbers. A set is denumerable if its elements can be put in a sequence. Cantor demonstrated that all denumerable infinite sets are numerically equivalent, such that they can be projected on the set of natural numbers. Therefore, he accorded them the same cardinal number, called ℵ0, aleph-zero, after the first letter of the Hebrew alphabet. Cantor assumed this ‘transfinite’ number to be the first in a sequence, ℵ0, ℵ1, ℵ2, …, where each next set is defined as the ‘power set’ of its predecessor, i.e., the set of all its subsets.

The rational numbers are denumerable, at least if put in a somewhat artificial order. The infinite sequence 1/1; 1/2, 2/1; 1/3, 2/3, 3/1, 3/2; 1/4, 2/4, 3/4, 4/1, 4/2, 4/3; 1/5, … including all positive fractions is denumerable. In this order it has the cardinal number ℵ0. However, this sequence is not ordered according to increasing magnitude.
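
A sketch in Python of such an artificial order: enumerating the positive fractions by increasing sum of numerator and denominator shows their denumerability (here duplicates like 2/4 = 1/2 are skipped):

    from fractions import Fraction

    # Enumerate the positive rationals by increasing numerator+denominator;
    # in this artificial order they can be counted one by one.
    def rationals():
        seen = set()
        total = 2
        while True:
            for numerator in range(1, total):
                q = Fraction(numerator, total - numerator)
                if q not in seen:          # skip duplicates such as 2/4
                    seen.add(q)
                    yield q
            total += 1

    gen = rationals()
    print([str(next(gen)) for _ in range(8)])
    # ['1', '1/2', '2', '1/3', '3', '1/4', '2/3', '3/2']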

In their natural (quantitative) order of increasing magnitude, the fractions lie close to each other, forming a dense set. This means that no rational number has a unique successor. Between each pair of rational numbers a and b there are infinitely many others.[21] In their natural order, the rational numbers cannot be counted, although they can be in a different order. Contrary to a finite set, whether an infinite set can be counted may depend on the order of its elements.

Though the set of fractions in their natural order is dense, it is still possible to put other numbers between them. These are the irrational numbers, like √2 and π. According to tradition, Pythagoras or one of his disciples discovered that he could not express the ratio of the diagonal and the side of a square by a fraction of natural numbers. Observe the ambiguity of the word ‘rational’ in this context, meaning ‘proportional’ as well as ‘reasonable’. The Pythagoreans considered something reasonably understandable if they could express it as a proportion. They were deeply shocked by their discovery that the ratio of a diagonal to the side of a square is not rational. The set of all rational and irrational numbers, called the set of real numbers, turns out to be non-denumerable. I shall argue presently that the set of real numbers is continuous, meaning that no holes are left to be filled.

Only in the nineteenth century did the distinction between a dense and a continuous set become clear.[22] Before that, continuity was often defined as infinite divisibility, and not only of space. For ages, people discussed whether matter is continuous or atomic. Could one go on dividing matter, or does it consist of indivisible atoms? In this case, tertium non datur is invalid. There is a third possibility, generally overlooked, namely that matter is dense.

Even the division of space can be interpreted in two ways. The first was applied by Zeno when he divided a line segment by halving it, then halving each part, etc. This is a quantitative way of division, not leading to continuity but to density. Each part has a rational proportion to the original line segment. Another way of dividing a line is by intersecting it by one or more other lines. Now it is not difficult to imagine situations in which the proportion of two line segments is irrational. (For instance, think of the diagonal of a square.) This spatial division shows the existence of points on the line that quantitative division cannot reach.

 

In 1892, Cantor proved by his famous diagonal method that the set of real numbers is not denumerable. Cantor indicated the infinite amount of real numbers by the cardinal number C. He posed the problem of whether C equals ℵ1, the transfinite number succeeding ℵ0. At the end of the twentieth century, this problem was still unsolved. Maybe it is not solvable: Kurt Gödel and Paul Cohen proved that this ‘continuum hypothesis’ can neither be refuted nor proved from the standard axioms of set theory.

A theorem states that each irrational number is the limit of an infinite sequence or series[23] of rational numbers, e.g., an infinite decimal fraction. This seems to prove that the set of real numbers can be reduced to the set of rational numbers, as the rational numbers are reducible to the natural ones, but that is questionable. Finding these limits cannot be done in a countable way, one limit after another: that would only yield a denumerable (even if infinite) amount of real numbers.[24] To arrive at the set of all real numbers requires a non-denumerable procedure. But then we would use a property of the real numbers (not shared by the rational numbers) to make the reduction possible, and this results in circular reasoning.

 

Suppose we want to number the points on a straight or curved line, would the set of rational numbers be sufficient? Clearly not, because of the existence of spatial proportions like that between the diagonal and the side of a square, or between the circumference and the diameter of a circle. Conversely, is it possible to project the set of rational numbers on a straight line? The answer is positive, but then many holes are left. By plugging the holes, we get the real numbers, in the following empirical way.[25]

Consider a continuous line segment AB. We want to mark the position of each point by a number giving the distance to one of the ends.[26] These numbers include the set of infinite decimal fractions that Cantor proved to be non-denumerable. Hence, the set of points on AB is not denumerable. If we mark the point A by 0 and B by 1, each point of AB gets a number between 0 and 1. This is possible in many ways, but one of them is highly significant, because we can use the rational numbers to introduce a metric. We assign the number 0.5 to the point halfway between A and B, and analogously for each rational number between 0 and 1. (This is possible in a denumerable procedure.) Now we define the real numbers between 0 and 1 to be the numbers corresponding one-to-one to the points on AB. These include the rational numbers between 0 and 1, as well as numbers like π/4 and other limits of infinite sequences or series. The irrational numbers are surrounded by rational numbers (forming a dense set), providing the metric for the set of real numbers between 0 and 1.
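
A sketch in Python of how the dense set of rational numbers supplies this metric: an irrational point such as √2 is enclosed in ever narrower rational intervals by repeated halving.

    from fractions import Fraction

    # Approximate the irrational square root of 2 by nested rational
    # intervals; every bound produced remains a rational number.
    low, high = Fraction(1), Fraction(2)
    for _ in range(20):
        mid = (low + high) / 2
        if mid * mid < 2:
            low = mid
        else:
            high = mid
    print(float(low), float(high))   # a narrow rational interval around 1.41421...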

A set is called continuous if its elements correspond one-to-one to the points on a line segment.[27] On the one hand, the continuity of the set of real numbers anticipates the continuity of the set of points on a line. On the other hand, it allows of the possibility to project spatial relations on the quantitative relation frame.

 

The set of real numbers is continuous because it does not contain any holes, contrary to the dense set of rational numbers. The above-mentioned procedures to divide a segment of a line, or to project the real numbers between 0 and 1 on a line segment, justify the following statement. Divide the ordered set of numbers into two subsets A and B, such that each element of A is smaller than each element of B. Then there is an element x of A or of B that is larger than all other elements of A and smaller than all other elements of B. This is called Dedekind’s cut. The boundary element x can be rational or irrational. This means that the set of real numbers is complete with respect to the order of increasing magnitude: there are no holes left.

The set of real numbers constitutes the quantitative relation frame for spatial relations. Spatial concepts like distance, length, area and angle are projections on sets of numbers. To express spatial relations as magnitudes requires real numbers. Besides spatial relations, kinetic, physical and chemical magnitudes are expressed in real numbers. This is remarkable, considering the practice of measuring. Each measurement is inaccurate to a certain extent. Therefore, a measurement never yields anything but a rational number. Moreover, computers rely on rational numbers. Hence, the use of real numbers has a theoretical background. The assumption that a magnitude is continuously variable is not empirically testable.


2.3. Groups as characters

 

Mathematics knows several structures that may be considered quantitative characters. Among these, the character of mathematical groups expressing symmetries is of special interest to natural science.

A group is a set of elements that can be combined such that each pair generates a third element. In the world of numbers, such combinations are addition or multiplication. Because of the mutual coherence of the elements, a group may be considered an aggregate. The phenomenon of isomorphy allows of the projection of physical states of affairs on mathematical ones.

In 1831 Évariste Galois introduced the concept of a group in mathematics as a set of elements satisfying the following four axioms.[28]

1. A combination procedure exists, such that each pair of elements A and B unambiguously generates a new element AB of the group.[29]

2. The combination is associative, i.e., (AB)C = A(BC), to be written as ABC.

3. The group contains an element I, the identity element, such that for each element A of the group, AI = IA = A.

4. Each element A of the group has an inverse element A’, such that A’A = AA’ = I.

 

It can be proved that each group has only one identity element, that each element has only one inverse element, and that I’ = I. Each group has at least one element, I. (Hence, the zero set is not a group.) If a subset of the group is a group itself with the same combination rule, then both groups share the identity element.

It is clear that the elements of a group are mutually strongly connected. They have a relation determined by the group’s character, to be defined as AB’, the combination of A with the inverse of B. The relation of an element A to itself is AA’ = I, A is identical with itself. Moreover, (AB’)’ = BA’, the inverse of a relation of A to B is the relation of B to A.

Each group is complete. If we combine each element with a given element A, the identity element I is converted into A, and the inverse of A becomes I. The new set as a whole contains exactly the same elements as the original group. Hence, the combination of all elements with an element A is a transition of the group into itself. It expresses a symmetry, in which the relations between the elements are invariant.[30]

If two groups can be projected one-to-one onto each other such that their combination rules correspond, they are called isomorphic.[31] The phenomenon of isomorphy means that the character of a group is not fully determined by the axioms alone. Besides the combination rule, at least some of the group’s elements must be specified, such that the other elements are found by applying the combination rule.

Isomorphy allows of the projection of one group onto the other one. It leads to the interlacement of various characters, as we shall see in the next few chapters. Hence, isomorphy is a tertiary property of groups, a disposition.

The elements of a group may be numbers, or number vectors, or functions of numbers, or operators transforming one function into another one. Let us first cast a glance at some number groups.

 

The first examples of groups we find in sets of numbers. Adding or multiplying two numbers yields a third number. With respect to addition, 0 is the identity element, for a+0=0+a=a for any number a. Besides 0, it is sufficient to introduce the number 1 in order to generate the whole group of integral numbers: 1+1=2, 1+2=3, etc. The inverse of an integer a is −a, for a+(−a)=0. The relation of a and b is the difference a−b. Instead of beginning with 1, we could also start with 2 or with 3, generating the groups of even numbers, of multiples of three, etc. Each of these subgroups is complete and isomorphic with the full group of integers.

The rational, real, and complex numbers, too, each form a complete addition group, but the natural numbers do not constitute a group. The natural numbers form a class with a quantitatively qualified character, expressed by Peano’s axioms (2.1) or an alternative formulation. However, this character does not include the laws for subtraction and division, because the set of natural numbers is not complete with respect to these operations.

The mentioned groups are infinite, but there are finite groups of numbers as well. The four integral numbers 0, 1, 2, and 3 form a group with the combination rule of ‘adding modulo 4’.[32] If the sum of two elements would exceed 3, we subtract 4 (hence 3+2=1, and 4=0). If the difference would be less than 0, we add 4 (hence 2−3=3). This group is isomorphic to the rotation group representing the symmetry of a square. Likewise, the infinite but bounded set of real numbers between 0 and 2π, constituting the addition group modulo 2π, is isomorphic to the rotation group of a circle.
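
A sketch in Python of this finite group, checking the four axioms directly for the combination rule of adding modulo 4:

    from itertools import product

    # The group {0, 1, 2, 3} under addition modulo 4, isomorphic to the
    # rotation group of a square (rotations over 0, 90, 180, 270 degrees).
    G = [0, 1, 2, 3]
    def combine(a, b):
        return (a + b) % 4

    closed = all(combine(a, b) in G for a, b in product(G, G))
    associative = all(combine(combine(a, b), c) == combine(a, combine(b, c))
                      for a, b, c in product(G, G, G))
    has_identity = all(combine(a, 0) == combine(0, a) == a for a in G)
    has_inverses = all(any(combine(a, b) == 0 for b in G) for a in G)
    print(closed, associative, has_identity, has_inverses)  # True True True True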

In the multiplication of numbers, 1 is the identity element. For each number a, 1.a = a.1 = a. The inverse of multiplication is division, 1/a being the inverse of a. The relation between a and b is their proportion a/b. Introducing the positive integers as elements, we generate the group of positive rational numbers. The full set of rational numbers is not a group with multiplication as a combination rule, because division by 0 is excluded, hence 0 would be an element without an inverse. Likewise, the set of positive real numbers is a multiplication group, but the set of all real numbers is not.

 

Addition and multiplication are connected by the distributive law: (a+b)c = ac+bc. Some addition and multiplication groups are combined into a structure called a field, having two combination rules. Three such number fields with an infinite number of elements occur in the hierarchy discussed here, having respectively the rational, real, and complex numbers as elements.[33] Because division by zero is excepted, I do not consider a field a character, but an interlacement of two characters.

For a given positive real number a, all numbers a^n form a multiplication group, if the variable exponent n is an element of the set of integral, rational, real, or complex numbers. The character of this group depends on the fact that the integral, rational, real, or complex numbers each form an addition group. The combination of two elements of the power group, the product of two powers, arises from the addition of the exponents: a^n·a^m = a^(n+m). The identity element of this multiplication group is a^0 = 1, and the inverse of a^n is a^-n. The group is isomorphic with the addition group of integral, rational, real, or complex numbers.

 

Each addition group, multiplication group, and power group is a character class. Their characters are primarily numerically qualified. They have no secondary foundation, and their tertiary disposition is to be found in many interlacements with spatial, kinetic, physical, and chemical characters (chapters 3-5).

Sometimes, a variable spatial, kinetic, or physical property or relation turns out to have the character of a group, isomorphic to a group of numbers. If that magnitude may be positive as well as negative (e.g., electric charge), this is an addition group. If only positive values are allowed (e.g., length or mass), it is a multiplication group. In other cases, the property or relation is projected on a vector group (e.g., velocity or force). If a property or relation is isomorphic to a group of numbers, it is called measurable.[34] Since antiquity, the importance of measurement has been expressed in the name geometry for the science of space. The law expressing the measurability of a property or relation is called its metric. Measurable magnitudes isomorphic to a number group allow us to perform calculations, which is the basis of the mathematization of science.

Measurability is not trivial. A physical magnitude is only measurable if a physical combination procedure is available, which can be projected on a quantitative one. To establish whether this is the case requires experimental and theoretical research.[35]

 

Relativity theory demonstrates that a kinematic or physical combination rule in a group cannot always be projected on addition or multiplication. In the case of one-dimensional motion, the combination rule for two velocities v and w is not v+w (as in classical kinematics), but (v+w)/(1+vw/c²), where c is the speed of light. For small velocities, the denominator is about 1, and the classical formula is approximated. The meaning of this formula becomes clear by taking v or w equal to c: if w=c, the combination of v and w equals c. A combination of velocities smaller than that of light never yields a velocity exceeding the speed of light. The formula also expresses the fact that the speed of light has the same value with respect to each moving system.[36] (This, of course, was the starting point for the formula’s derivation.) The elements of the group are all velocities whose magnitude is at most the speed of light.
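
A sketch in Python of this combination rule, showing that the speed of light acts as an upper bound and that the classical rule v+w returns for small velocities:

    # Relativistic combination of two one-dimensional velocities:
    # (v + w) / (1 + v*w/c^2) never exceeds the speed of light.
    c = 299_792_458.0                     # speed of light in m/s

    def combine(v, w):
        return (v + w) / (1 + v * w / c ** 2)

    print(combine(0.9 * c, 0.9 * c) / c)  # about 0.994, still below 1
    print(combine(0.9 * c, c) / c)        # exactly 1.0: light keeps its speed
    print(combine(10.0, 20.0))            # about 30.0: the classical limit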

 

Vectors play an important part in mathematics and in physics. With all kinds of vectors, like position, displacement, velocity, force, and electric field strength, the numerical vector character is interlaced. Spatial, kinetic and physical vectors are isomorphic with number vectors.[37]

A number vector r=(x,y,z,…) is an ordered set of n real numbers, called the components of the vector. Number vectors are subject to laws for addition and subtraction, applying these operations to each component separately.[38] The set of all number vectors with the same number of components is an addition group, the zero vector 0=(0,0,0,…) being its identity element. Each vector multiplied by a real number yields a new vector within the group.[39] However, division by zero being excluded, this does not define a combination procedure for a group.

Besides the zero vector as the identity element, the set contains unit vectors. In a unit vector, one component is equal to 1, the others are equal to 0. Any vector can be written as a linear combination of the unit vectors.[40] The set of unit vectors constitutes the base of the set of vectors. For number vectors, the base is unique,[41] but in other cases, a group of vectors may have various bases. For spatial vectors, e.g., each co-ordinate system represents another base.

The scalar product of two number vectors can be used to determine relations between vectors.[42] If the scalar product is zero we call the vectors orthogonal, anticipating the spatial property of mutually perpendicular vectors. For instance, the unit vectors are mutually orthogonal. This multiplication of vectors is not a combination rule for groups, because the product is not a vector.[43]
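
A sketch in Python of number vectors as ordered tuples: componentwise addition satisfies the group axioms, and the scalar product detects orthogonality.

    # Number vectors: componentwise addition forms a group; the scalar
    # product of two orthogonal vectors is zero.
    def add(u, v):
        return tuple(a + b for a, b in zip(u, v))

    def scalar_product(u, v):
        return sum(a * b for a, b in zip(u, v))

    e1, e2 = (1, 0, 0), (0, 1, 0)            # two unit vectors
    print(add(e1, e2))                       # (1, 1, 0)
    print(add(e1, tuple(-a for a in e1)))    # (0, 0, 0): the identity element
    print(scalar_product(e1, e2))            # 0: unit vectors are orthogonal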

Apart from being real, the components of number vectors may be rational or complex, or even functions of numbers. These anticipate spatial vectors representing relative positions. An important difference is that spatial vectors are in need of a co-ordinate system, with an arbitrary choice of origin and unit vectors (3.1). Hence, number vectors are not identical with spatial vectors determining positions or displacements. A fortiori, this applies to kinetic or physical vectors, representing velocities or forces. Rather, the character of number vectors has the disposition to become interlaced with the characters of spatial, kinetic, or physical vectors.

 

A special case is the set of complex numbers, two-component vectors with a specific arithmetic. Also written as c=a+bi, a complex number c=(a,b) is a two-dimensional number vector having real components a and b. The complex numbers for which b=0 have the same properties as real numbers, hence for convenience one writes a=(a,0). This makes the set of real numbers a subset of the set of complex numbers. The unit vectors are 1=(1,0) and i=(0,1), the imaginary unit. The complex numbers form an addition group.[44]

Complex numbers have the unique property that their multiplication yields a complex number. This is not the case for other number vectors.[45] The inverse operation also gives a complex number, but division by zero being excluded, this does not result in a group. As observed, the set of complex numbers is a field, an interlacement of two characters, subject to two combination rules.

Unlike the real numbers, the complex numbers cannot be projected on a line in an unambiguous order of increasing magnitude, because different complex numbers may have the same magnitude. However, they can be projected on a two-dimensional ‘complex plane’. The addition group of complex numbers is isomorphic to the addition group of two-dimensional spatial vectors.[46]
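
A sketch in Python of complex numbers as two-component vectors, with the specific multiplication rule that distinguishes them from other number vectors:

    # Complex numbers as pairs (a, b) = a + bi: addition works as for any
    # two-dimensional vector, but multiplication yields a pair again.
    def c_add(u, v):
        return (u[0] + v[0], u[1] + v[1])

    def c_mul(u, v):
        a, b = u
        p, q = v
        return (a * p - b * q, a * q + b * p)

    i = (0, 1)                        # the imaginary unit
    print(c_mul(i, i))                # (-1, 0): i squared equals -1
    print(c_add((2, 3), (1, -1)))     # (3, 2)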

It is interesting that some theorems about real numbers can only be proved by considering them a subset of the set of complex numbers. The characters of real and complex numbers are strongly interlaced.

 

Mathematical functions may also have a character, a specific set of laws. A function is a prescription mapping a set of numbers [x] onto another set [y], such that to every x only one y corresponds, y=f(x).[47] In a picture in which [x] is represented on the horizontal axis and [y] on the vertical axis, a graph represents the function spatially.

If the set [x] is finite, then [y] is finite as well, and the prescription may be a table. More interesting are functions for which [x] is a non-denumerable set of real or complex numbers within a certain interval. A function may be continuous or discontinuous. An example of a discontinuous function is the step function: y=0 if x<a and y=1 if x>a.[48]

Many a characteristic function defined by a specific lawful connection between two sets [y] and [x] is interlaced with spatial, kinetic, or physical characters. For instance, the quadratic function y=ax²+bx+c is interlaced with the spatial character of a parabola and with motion in free fall.[49] And the exponential function is interlaced with periodic motions and various physical processes.[50]

 

Besides the above-mentioned number vectors, mathematics knows of vectors whose components are functions. Now a vector is an ordered set of n functions. (The dimension n may be finite or infinite, denumerable or non-denumerable.) This is only possible if the scalar product φ·ψ is defined, including the magnitude of φ (the square root of φ·φ), and if an orthonormal base of n unit functions φ1, φ2, … exists.[51] A function is an element of a complete addition group of functions if it is a linear combination of a set of basic functions.[52]

The basic functions being orthonormal, the group of functions is isomorphic with the group of number vectors having the same number of dimensions.

A function projects the elements of a number set onto another number set. Because many functions exist, sets of functions can be constructed. These too may be projected on each other, and such a projection is called an operator. Although the idea of an operator was developed and is mostly applied in quantum physics, it is a mathematical concept. An operator A converts a function into another one, ψ(x)=Aφ(x). This has the profile of an event. Having a quantitative character, a transition made by an operator is interlaced with the character of events qualified by a later relation frame. A spatial operation may be a translation or a rotation. A change of state is an example of a physical event. Quantum physics projects a physical change of state on the mathematical transition of a function by means of an operator.

If the converted function is proportional to the original one (Aφ=aφ, such that a is a real number), we call φ an eigenfunction (proper function) of A, and a the corresponding eigenvalue (proper value). Trivial examples are the identity operator, for which any function is an eigenfunction (the eigenvalue being 1), or the operator multiplying a function by a real number (being its eigenvalue).

An operation playing an important part in kinematics, physics, and chemistry is differentiating a function. (The reverse operation is called integrating.) By differentiating, a function is converted into its derivative. In mechanics, the derivative of the position function indicates the velocity of a moving body. Its acceleration is found by calculating the derivative of the velocity function.

For the operator (d/dx), the real exponential function φ=b·exp(ax) is an eigenfunction, for (d/dx)φ=ab·exp(ax)=aφ. The eigenvalue is the exponent a. The imaginary exponential function ψ=b·exp(iat) is an eigenfunction of the operator (1/i)(d/dt), in quantum physics called the Hamilton operator or Hamiltonian (after William R. Hamilton). Again, the eigenvalue is the exponent a.[53]
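
A sketch in Python (using the sympy library for symbolic calculation) confirming both eigenvalue equations:

    import sympy as sp

    # Check that b*exp(a*x) is an eigenfunction of d/dx with eigenvalue a,
    # and b*exp(i*a*t) an eigenfunction of (1/i)(d/dt) with eigenvalue a.
    x, t, a, b = sp.symbols('x t a b')
    phi = b * sp.exp(a * x)
    psi = b * sp.exp(sp.I * a * t)
    print(sp.simplify(sp.diff(phi, x) - a * phi))         # 0
    print(sp.simplify(sp.diff(psi, t) / sp.I - a * psi))  # 0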

 

Quantum physics calls a linear set of functions with complex components a Hilbert space, after David Hilbert.[54] This group is a representation of the ensemble of possible states of a physical system.

Consider an operator projecting a group onto itself. The operator A converts an element φ of the group into another element Aφ of the same group. Such an operator is called linear if for all elements of the group A(φ+ψ)=Aφ+Aψ. If its eigenfunctions constitute an orthonormal basis for the group or a subgroup, the operator is called hermitean, after the mathematician Charles Hermite. The operation represented by a hermitean operator H is not a combination procedure for a group, but it projects a function on the eigenfunctions of H.

Besides hermitean operators, quantum physics applies unitary operators, which form a group representing the symmetry properties of Hilbert space.[55]


2.4. Ensemble and probability

 

In our daily life as well as in science, we experience a thing first of all as a unit having specific properties. We know that an atom has the spatially founded character of a nucleus surrounded by a cloud of electrons. However, we also know it as a unit with a specific mass and chemical properties. A character determines a class of similar things. There are many hydrogen atoms having the same characteristic properties, even if displaying individual differences.

The arithmetic of characteristically equal individuals has a specific application in statistics. Statistics makes sense if it concerns the mutual variations of similar individuals. Statistics is only applicable to a specific set of individuals, a subset of a character class, a sample representative of the ensemble of possible variations. Both theoretically and empirically, we can apply statistics to the casting of dice, supposing all dice to have the same cubic symmetry, and assuming that the casting procedure is arbitrary.

I call an ensemble the set of all possible variations allowed by a character. Just like other sets, an ensemble has subsets, and sometimes the measure of a subset represents the relative probability of the possibilities. The concept of probability only makes sense if it concerns possibilities that can be realized by some physical interaction. Therefore, probability is a mathematical concept anticipating the physical one. I shall present a short summary of the theory of probability.[56]

 

Consider the subsets A, B, … of the non-empty ensemble E of possibilities. Now A∪B is the union of A and B, the subset of all elements belonging to A, to B, or to both. The intersection A∩B is the subset of all elements belonging to A as well as to B. If A∩B = ∅ (the empty set) we call A and B disjunct; they have no elements in common. If A is a subset of B (A⊂B), then A∪B=B and A∩B=A. Clearly, A∩E=A.

Formally, probability is defined as a quantitative measure p(A) for any subset A⊂E.[57]

 

1. Probability is a non-negative measure: p(A)≥0.

2. Probability is normalized: p(E)=1.

3. Probability is an additive function for disjunct subsets of E: if A∩B=∅, then p(A∪B)=p(A)+p(B).

 

Starting from this definition, several theorems can be derived.[58]

The conditional probability, the chance of having A if B is given (with p(B)≠0), is defined as p(A/B)=p(A∩B)/p(B). Because p(A)=p(A/E), each probability is conditional. If A and B exclude each other, being disjunct (A∩B=∅), the conditional probability is zero: p(A/B)=p(B/A)=0.[59]

A and B are called statistically independent if p(A/B)=p(A) and p(B/A)=p(B). Then p(A∩B)=p(A)p(B): for statistically independent subsets the chance of the combination is the product of their separate chances. Note the distinction between disjunct and statistically independent subsets. In the first case probabilities are added, in the second case multiplied.

If an ensemble consists of n mutually statistically independent subsets, it can be projected onto an n-dimensional space. For instance, the possible outcomes of casting two dice simultaneously are represented on a 6x6 diagram.[60]
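
A sketch in Python of this ensemble: all 36 equally probable outcomes of casting two dice, with the first and the second die statistically independent:

    from itertools import product
    from fractions import Fraction

    # The ensemble of casting two dice: 36 equally probable possibilities.
    E = list(product(range(1, 7), range(1, 7)))
    def p(subset):
        return Fraction(len(subset), len(E))

    A = [e for e in E if e[0] == 6]          # the first die shows 6
    B = [e for e in E if e[1] == 6]          # the second die shows 6
    A_and_B = [e for e in A if e in B]       # the intersection of A and B
    print(p(A), p(B), p(A_and_B))            # 1/6 1/6 1/36
    print(p(A_and_B) == p(A) * p(B))         # True: statistically independent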

Finally, consider a set of disjunct subsets X⊂E, such that their union equals E. Now the probability p(X) is a function over the subsets X of E. We call p(X) the probability distribution over the subsets X of E. Consider an arbitrary function ψ(X) defined on this set. The average value of the function, also called its expectation value, is the sum over all X of the product ψ(X)p(X), if the number of disjunct subsets is denumerable (otherwise it is the integral).[61] In this sum, the probability expresses the ‘weight’ of each subset X.

This is called the ensemble average of the property. In statistical mechanics, it is an interesting question whether this average is equal to the time average of the same property for a single system during a long time interval. This so-called ergodic problem is only solved for some very special cases, sometimes with a positive, sometimes with a negative result.[62] Besides the average of a property, it is often important to know how sharply peaked its probability distribution is. The ‘standard deviation’, the root of the average squared difference from the average, is a measure of this peak.[63]

The formal theory is applicable to specific cases only if the value p(A) can be theoretically or empirically established for the subsets AÌE. Often this is only a posteriori possible by performing measurements with the help of a representative sample. Sometimes, symmetries allow of postulating an a priori probability distribution. Games of chance are the simplest, oldest, and best-known examples.

Although the above-summarized theory is not only relatively simple but almost universally valid as well,[64] its application strongly depends on the situation. With respect to thing-like characters, the laws constituting the character determine the probability of possible variations. Another important field of application is formed by aggregates, for instance studied by statistical mechanics. For systems in or near equilibrium impressive results have been achieved, but for non-equilibrium situations (hence, for events and processes), the application of probability turns out to be fraught with problems.

 

Based on the characteristic similarity of the individuals concerned, statistical research is of eminent importance in all sciences. It is a means to research the character of individuals whose similarity is recognized or conjectured. It is also a means to study the properties of a homogeneous aggregate containing a multitude of individuals of the same character.

As early as 1860, James Clerk Maxwell applied statistics to an ideal gas, consisting of N molecules, each having mass m, in a container with volume V.[65] He neglected the molecules’ dimensions and mutual interactions. The vector r gives the position of a molecule, and the vector v represents its velocity. Maxwell assumed the probability for positions, p1(r), to be independent of the probability for velocities, p2(v).[66]

In equilibrium, the molecules are uniformly distributed over the available volume, hence the chance to find a molecule in a volume element dr=dx·dy·dz equals p1(r)dr=(1/V)dr.[67] Maxwell based the velocity distribution on two kinds of symmetry. First, he assumed that the direction of motion is isotropic. This means that p2(v) only depends on the magnitude of the molecular speed.[68] Secondly, Maxwell assumed that the components of the velocity (vx,vy,vz) are statistically independent. Only the exponential function satisfies these two requirements.[69]

By calculating the pressure P exerted by the molecules on the walls of the container, and comparing the result with the law of Boyle and Gay-Lussac, Maxwell found that the exponent depends on temperature.[70] Only in the twentieth century did experiments confirm Maxwell’s theoretical distribution function. The expression ½m(vx²+vy²+vz²) is recognizable as the kinetic energy of a molecule. The mean kinetic energy turns out to be equal to (3/2)kT. For all molecules together the energy is (3/2)NkT; hence, the specific heat is (3/2)Nk. This result was disputed in Maxwell’s days, but it was later experimentally confirmed for mono-atomic gases.[71]
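
A numerical sketch in Python (with arbitrary illustrative values, taking k = 1): sampling the velocity components from Maxwell’s distribution reproduces the mean kinetic energy (3/2)kT.

    import random, statistics

    # In Maxwell's distribution each velocity component is normally
    # distributed with variance kT/m; the mean kinetic energy is (3/2)kT.
    k, T, m, N = 1.0, 300.0, 1.0, 100_000
    sigma = (k * T / m) ** 0.5
    energies = [0.5 * m * (random.gauss(0, sigma) ** 2 +
                           random.gauss(0, sigma) ** 2 +
                           random.gauss(0, sigma) ** 2) for _ in range(N)]
    print(statistics.mean(energies))         # close to (3/2)kT = 450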

Ludwig Boltzmann generalized Maxwell’s distribution, by allowing other forms of energy besides kinetic energy. The Maxwell-Boltzmann distribution[72] turns out to be widely valid. The probabilities or relative occupation numbers of two atomic, molecular, or nuclear states having energies E1 and E2 have a proportion according to the so-called Boltzmann factor, determined by the difference between E1 and E2.[73] This means that a state having a high energy has a low probability.

The weakness of Maxwell’s theory was neglecting the mutual interaction of the molecules, for without interaction equilibrium cannot be reached. Boltzmann corrected this by assuming that the molecules collide continuously with each other, exchanging energy. He arrived at the same result.

Maxwell and Boltzmann considered one system consisting of a large number of molecules, whereas Josiah W. Gibbs studied an ensemble of a large number of similar systems. Assuming that all microstates are equally probable, the probability of a macrostate can be calculated by determining the number of corresponding microstates. The logarithm of this number is proportional to the entropy of a macrostate.[74]

 

Both in classical and in quantum statistics a character as a set of laws determines the ensemble of possibilities and the distribution of probabilities. It allows of individuality, the subject side of a character. Positivist philosophers defined probability as the limit of a frequency in an unlimited sequence of individual cases.[75] In this way, they tried to reduce the concept of probability to the subject side. Of course, the empirical measurement of a probability often has the form of a frequency determination. Each law statement demands testing, and that is only possible by taking a sample.[76] However, this does not justify the elimination of the law-side from probability theory.

An example of a frequency definition of probability is found in the study of radioactivity. A radioactive atom decays independently of other atoms, even if they belong to the same sample. During the course of time, the initial number of radioactive atoms (No) in a sample decreases exponentially to Nt at time t.[77] Many scientists are content with this practical definition. However, a sample is a collection limited in time and space; it is not an ensemble of possibilities.

There are two limiting cases. In the one case, we extend the phenomenon of radioactivity to all similar atoms, increasing No and Nt indefinitely in order to get a theoretical ensemble. The ensemble has two possibilities, the initial state and the final state, and their distribution in the ensemble at time t after to can be calculated.[78] In the other limiting case we take No=1. Now exp(-(t-to)/τ) is the chance that a single atom decays only after t-to seconds. This quotient depends on a time difference, not on a temporal instant. As long as the atom remains in its initial state, the probability of decay to the final state is unchanged.
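
Both limiting cases can be illustrated numerically. In the sketch below (the characteristic time τ and the sample size are hypothetical values), the frequency found in a finite sample approximates the law-side probability without coinciding with it:

```python
import math, random

tau = 2.0  # hypothetical characteristic time (arbitrary units)

def survival_probability(dt):
    """Chance that a single atom has not yet decayed after a time difference dt."""
    return math.exp(-dt / tau)

random.seed(1)
N0, t = 100_000, 1.5
survivors = sum(random.random() < survival_probability(t) for _ in range(N0))
print(survivors / N0)            # frequency in a finite sample
print(survival_probability(t))   # the law-side probability, approximately 0.472
```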

Both limiting cases are theoretical. An ensemble is no more experimentally determinable than an individual chance. Only a collection of atoms can be subjected to experimental research. It makes no sense to consider one limiting case to be more fundamental than the other one. The first case concerns the law side, the second case the subject-side of the same phenomenon of radioactivity.

 

Statistics is not only applicable in the investigation of the ensemble of possibilities of a character. If two characters are interlaced, their ensembles are related as well. Sometimes, a one-to-one relation between the elements of both ensembles exists. Now the realization of a possibility in one ensemble reduces the number of possibilities in the other ensemble to one. In other cases, several possibilities remain, with different probabilities.

Character interlacements are not always obvious. In a complex system, it is seldom easy to establish relations between structures, events and processes. Statistical research into correlations is a much-applied expedient.



[1] For instance, Ernst Zermelo in 1908, quoted by Quine 1963, 4: ‘Set theory is that branch of mathematics whose task is to investigate mathematically the fundamental notions of ‘number’, ‘order’, and ‘function’ taking them in their pristine, simple form, and to develop thereby the logical foundations of all of arithmetic and analysis.’ See also Putnam 1975, chapter 2.

[2] Shapiro 1997, 98: ‘Mathematics is the deductive study of structures’.

[3] Equivalence is reflexive (A ≡ A), symmetric (if A ≡ B, then B ≡ A), and transitive (if A ≡ B and B ≡ C, then A ≡ C). On the other hand, numbers are subject to the order of increasing magnitude. This sequential order is exclusive (either a > b, or b > a), asymmetric (if a > b, then b < a), not reflexive (a is not larger or smaller than a), but it is transitive (if a > b and b > c, then a > c). For numbers, the equivalence relation reduces to equality: a = a; if a = b then b = a; if a = b and b = c then a = c. Usually equivalence is different from equality, however.

[4] Peano took 1 to be the first natural number. Nowadays one usually starts with 0, to indicate the number of elements of the empty set.

[5] In the decimal system 0+ = 1, 1+ = 2, 2+ = 3, etc., in the binary system 0+ = 1, 1+ = 10, 10+ = 11, 11+ = 100, etc. From axiom 2 it follows that N has no last number.

[6] The fifth axiom states that the set of natural numbers is unique. The sequence of even numbers satisfies the first four axioms but not the fifth one. On the axioms rests the method of proof by complete induction (4.1): if P(n) is a proposition defined for each natural number n ≥ a, and P(a) is true, and P(n+) is true if P(n) is true, then P(n) is true for any n ≥ a.

[7] Because the first relation frame does not have objects, it makes no sense to introduce an ensemble of possibilities besides any numerical character class.

[8] Quine 1963, 107-116.

[9] In 1931, Kurt Gödel (see Gödel 1962) proved that any system of axioms for the natural numbers allows of unprovable statements. This means that Peano’s axiom system is not logically complete.

[10] Putnam 1975, xi: ‘… the differences between mathematics and empirical science have been vastly exaggerated.’ Barrow 1992, 137: ‘Even arithmetic contains randomness. Some of its truths can only be ascertained by experimental investigation. Seen in this light it begins to resemble an experimental science.’ See Shapiro 1997, 109-112; Brown 1999, 182-191.

[11] Christian Goldbach’s conjecture, saying that each even number can be written as the sum of two primes in at least one way, dates from 1742, but at the end of the twentieth century it was neither proved nor disproved.

[12] From the set of natural numbers 1 to n, starting from 3 the sieve eliminates all even numbers, all multiples of three, all multiples of five except 5 (the multiples of four and six have already been eliminated), all numbers divisible by 7 except 7 itself, etc., until one reaches the first number larger than √n. Then all primes smaller than n remain on the sieve. For very large prime numbers, this method consumes so much time that the resolution of a very large number into its factors is used as a key in cryptography. There are many more sequences of natural numbers subject to a characteristic law or prescription. An example is the sequence of Fibonacci (Leonardo of Pisa, circa 1200). Starting from the numbers 1 and 2, each member is the sum of the two preceding ones: 1, 2, 3, 5, 8, 13, … This sequence plays a part in the description of several natural processes and structures, see Amundson 1994, 102-106.
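
As an illustration, the sieve might be coded as follows (a minimal sketch; the function name is mine):

```python
def sieve(n):
    """Return all primes up to n by Eratosthenes' sieve."""
    on_sieve = [True] * (n + 1)
    on_sieve[0:2] = [False, False]   # 0 and 1 are not primes
    p = 2
    while p * p <= n:                # stop at the first number larger than the square root of n
        if on_sieve[p]:
            for m in range(p * p, n + 1, p):
                on_sieve[m] = False  # eliminate the multiples of p
        p += 1
    return [i for i, prime in enumerate(on_sieve) if prime]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```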

[13] Quine 1963, 30-32 assumes there is no objection to considering an individual to be a class with only one element, but I think that such an equivocation is liable to lead to misunderstandings.

[14] A well-known paradox arises if a set itself satisfies its prescription, being an instance of self-reference. The standard example is the set of all sets that do not contain themselves as an element. According to Brown 1999, 19, 22-23 restricting the prescription to the elements of the set may preclude such a paradox. This means that a set cannot be a member of itself, not even if the elements are sets themselves.

[15] The number of subsets is always larger than the number of elements, a set of n elements having 2ⁿ subsets. A set contains an infinite number of elements if it is numerically equivalent to one of its subsets. For instance, the set of natural numbers is numerically equivalent to the set of even numbers and is therefore infinite.

[16] This is a consequence of the axiom stating that two sets are identical if they have the same elements.

[17] If n(A) is the number of elements of A, then n(A∪B) = n(A) + n(B) - n(A∩B).

[18] Starting from its element 0, the set of integral numbers can also be defined by stating that each element a has a unique successor a+ as well as a unique predecessor a-, if (a+)- = a, see Quine 1963, 101.

[19] Cassirer 1910, 49.

[20] It can be proved that the sum, the difference, the product and the quotient of two rational numbers (excluding division by 0) always gives a rational number. Hence, the set of rational numbers is complete or closed with respect to these operations.

[21] If a < b then a < a+c(b-a) < b, for each rational value of c with 0 < c < 1.

[22] Grünbaum 1968, 13.

[23] A sequence is an ordered set of numbers (a, b, c, …). Sometimes an infinite sequence has a limit, for instance, the sequence 1/2, 1/4, 1/8, … converges to 0. A series is the sum of a set of numbers (a+b+c+…). An infinite series too may have a limit. For instance, the series 1/2+1/4+1/8+… converges to 1.

[24] By multiplying a single irrational number like π with all rational numbers, one already finds an infinite, even dense, yet denumerable subset of the set of real numbers. Also the introduction of real numbers by means of ‘Cauchy sequences’ only results in a denumerable subset of real numbers.

[25] This procedure differs from the standard treatment of real numbers, see e.g. Quine 1963, chapter VI.

[26] According to the axiom of Cantor-Dedekind, there is a one-to-one relation between the points on a line and the real numbers.

[27] It is not difficult to prove that the points on two different line segments correspond one-to-one to each other.

[28] In physics, groups were first applied in relativity theory, and since 1925 in quantum physics and solid state physics. Not to everyone’s delight, however, see e.g. Slater 1975, 60-62: about the ‘Gruppenpest’: ‘… it was obvious that a great many other physicists were as disgusted as I had been with the group-theoretical approach to the problem.’

[29] A group is called Abelian (after Niels H. Abel) or commutative if for each A and B, AB = BA. This is by no means always the case.

[30] The relation between the elements CA and BA is (CA)(BA)′ = (CA)(A′B′) = CB′, the relation between C and B.

[31] Two groups are isomorphic if their elements can be paired such that A1B1 = C1 in the first group implies that A2B2 = C2 for the corresponding elements in the second group and conversely. This may be the case even if the combination rules in the two groups are different.

[32] Two numbers are ‘congruent modulo x’ if their difference is an integral multiple of x.

[33] There are finite fields as well.

[34] Stafleu 1980, chapter 3. Isomorphy is not trivial. Sometimes one has to be content with a weaker projection, called homomorphy. An example is Mohs’ scale, indicating the relative hardness of minerals by numbers between 0 and 10: if A is harder than B, A gets a higher numeral. It makes no sense to add or to multiply these ordinal numbers.

[35] During the nineteenth century, the establishment of the metric for intensive properties like thermodynamic temperature or electric potential cost a lot of labour, see Stafleu 1980, chapter 3. Almost all properties used in science have a well-defined metric. In psychology and the humanities this is far less the case.

[36] In the Lorentz-group the speed of light is the unit of speed (c=1), having the same value in all inertial frames (3.3).

[37] Besides, mathematics acknowledges tensors, matrices and other structures, which will hardly be discussed in this treatise.

[38] For example, the difference between two vectors is Δr = r2 - r1 = (x2-x1, y2-y1, z2-z1, …).

[39] If c is an ordinary number, b=ca=c(a1,a2,a3, …)=(ca1,ca2,ca3, …).

[40] For each number vector, a=(a1,a2,a3, …)=a1(1,0,0, …)+a2(0,1,0, …)+a3(0,0,1, …)+ ...

[41] With the help of functions, other orthonormal bases for number vectors can be constructed.

[42] The scalar product of the vectors a and b is: a·b = a1b1 + a2b2 + a3b3 + … The square root of the scalar product of a vector with itself (a·a = a1² + a2² + a3² + …) determines the magnitude of a. Each component of the vector a is equal to its scalar product with the corresponding unit vector, e.g.: a1 = a·(1,0,0, …). Analogous to the spatial case, this is called the projection of a on a unit vector.

[43] The vector product is an anti-symmetric tensor, having n² components, of which ½(n-1)n components are independent. In a three-dimensional space, this yields exactly three independent components. Hence a vector product looks like a vector (only in three dimensions). However, it is a pseudovector. At perpendicular reflection, a real vector reverses its direction, whereas the direction of a pseudovector is not changed.

[44] The vector c* = (a, -b) is called the complex conjugate of c = (a, b). The magnitude of c is the square root of cc* = (a, b)(a, -b) = a² + b², and is a real number. The complex numbers form an addition group with the combination rule: (a, b) + (c, d) = (a+c, b+d). The identity element is 0 = (0,0), and -(a,b) = (-a,-b) is the inverse of (a,b).

[45] The product of the complex numbers (a,b) and (c,d) is (a,b)(c,d) = (ac-bd, bc+ad), which is again a complex number. Clearly, i² = (0,1)(0,1) = (-1,0) = -1.

[46] If we call φ the angle with the positive real axis (for which a > 0, b = 0), then a complex number having magnitude c can be written as an imaginary exponential function c·exp(iφ) = c(cos φ + i sin φ). The product of two complex numbers is now cd·exp(i(φ+ψ)) and their quotient is (c/d)·exp(i(φ-ψ)). In the complex plane, the unit circle around the origin represents the set of numbers exp(iφ). Multiplication of a complex number by exp(iφ) corresponds to a rotation about the angle φ.
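
The combination rules of notes 44-46 are easily checked mechanically; a minimal sketch (the helper names are mine):

```python
import cmath

# Complex numbers as ordered pairs (a, b), with the combination rules given above.
def c_add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def c_mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c - b * d, b * c + a * d)

i = (0.0, 1.0)
print(c_mul(i, i))               # (-1.0, 0.0), i.e. i*i = -1
print(cmath.exp(1j * cmath.pi))  # approximately -1: exp(i*pi) lies on the unit circle
```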

[47] A function may depend on several variables, e.g. the components of a vector. A function is a relation between the elements of two or more sets, e.g. number sets. This relation is not always symmetrical. To each element of the first set [x] corresponds only one element of the second set [y]. Conversely, each element of [y] may correspond to zero, one or more elements of [x]. If the functional relation between [x] and [y] is symmetrical, the function is called one-to-one. This is important in particular in the case of a projection of a set onto itself. Sometimes such a projection is called a rotation.

[48] Here [x] is the set of all real numbers, and [y] is a subset of this set. The derivative of the step function is the characteristic delta function. The delta function equals zero for all values of x, except for x = a. For x = a, the delta function is not defined. The integral of the delta function is 1. An approximate representation of the delta function is a rectangle having height h and width 1/h. If h increases indefinitely, 1/h decreases, but the integral (the rectangle’s area) remains equal to 1. The well-known Gauss function approximates the delta function equally well.
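
A small numerical check (a sketch with arbitrarily chosen widths) shows that the integral of the Gauss function remains 1 while its peak narrows towards the delta function:

```python
import math

def gauss(x, s):
    """Normalised Gauss function of width s, centred at 0."""
    return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

for s in (1.0, 0.1, 0.01):
    dx = s / 100              # integration step, small compared with the width
    total = sum(gauss(k * dx, s) for k in range(-1000, 1001)) * dx
    print(s, round(total, 6)) # the integral stays 1 as the peak narrows
```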

[49] Spatially defined, a parabola is a conic section. Of course, it can also be defined as the projection of the mentioned quadratic function. Contrary to laws, definitions are not very important.

[50] The exponential function with a real exponent (exp(at)) indicates positive or negative growth. If it has an imaginary exponent, the exponential function (exp(iat)) is periodic (i.e., exp(iat) = exp(i(at + n·2π)) for each integral value of n), hence its character is interlaced with those of periodic motions like rotations, oscillations and waves.

[51] ‘Orthonormal’ means that fi·fj = δij: the scalar product of each pair of basis functions equals 1 if i = j, and 0 if i ≠ j.

[52] An n-dimensional linear combination of n basis functions is: f = c1f1 + c2f2 + … + cnfn. In a complex function set, the components c1, c2, c3, … are complex as well. The number of dimensions may be finite, denumerably infinite, or non-denumerable. In the latter case, the sum is an integral.

[53] If ABf = BAf for each f, A and B are called commutative. If two operators commute, they have the same eigenfunctions, but usually different eigenvalues.

[54] The quantum mechanical state space is named after David Hilbert, but it was invented by John von Neumann in 1927.

[55] To each operator A, an operator A⁺ is conjugated such that the scalar product ψ·Aφ = A⁺ψ·φ. For a Hermitean operator H, H⁺ = H, hence φ·Hψ = Hφ·ψ. For a unitary operator U, UU⁺ = I, the identity operator. Hence, Uψ·Uφ = φ·ψ = (ψ·φ)*. This means that the probability of a state or a transition, being determined by a scalar product, is invariant under a unitary operation. Unitary operators are especially fit to describe symmetries and invariances.

[56] See Stafleu 1980, chapter 8. I discuss probability only in an ontological context, not in the epistemological meaning of the probability of a statement. Ontologically, probability does not refer to a lack of knowledge, but to the variation allowed by a character.

[57] Observe that the theory ascribes a probability to the subsets, not to the elements of a set.

[58] For instance: p(∅) = 0; 0 ≤ p(A) ≤ 1; p(A∪B) = p(A) + p(B) - p(A∩B). If A and B are disjoint (A∩B = ∅), the probability is additive: p(A∪B) = p(A) + p(B).

[59] If A is a subset of B (A⊂B), then: p(A∪B) = p(B); p(A∩B) = p(A); p(A/B) = p(A)/p(B).
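
These rules can be verified for a finite ensemble, for instance one throw of a die; in the following sketch the subsets A, B and C are arbitrary examples of my own choosing:

```python
from fractions import Fraction

E = set(range(1, 7))   # ensemble: the six outcomes of a die

def p(S):
    """Uniform probability of a subset S of the ensemble E."""
    return Fraction(len(S & E), len(E))

A, B = {2, 4, 6}, {4, 5, 6}
assert p(A | B) == p(A) + p(B) - p(A & B)      # inclusion-exclusion
C = {4, 6}                                      # C is a subset of A
assert p(C | A) == p(A) and p(C & A) == p(C)
print(p(C) / p(A))                              # conditional probability p(C/A) = 2/3
```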

[60] Genetics calls this a Punnett-square, after Reginald G. Punnett (1905). If E is a square with unit area, p(A) is the area of a part of the square. Hence, so far the theory is not intrinsically a probability theory.

[61] In the form of a formula: <y(X)> = Σ y(X)p(X), the sum being taken over the ensemble E.

[62] Tolman 1938, 65-70; Khinchin 1949, Ch. III; Reichenbach 1956, 78-81; Prigogine 1980, 33-42, 64-65; Sklar 1993, 164-194.

[63] This is defined as <y(X)-<y(X)>>.

[64] Quantum physics allows of interference of states, influencing probability in a way excluded by classical probability theory (4.3).

[65] Maxwell 1890, I, 377-409; Born 1949, 50ff; Achinstein 1991, 171-206.

[66] This means: p(r, v)=p1(r)p2(v).

[67] Observe that p1(r) as well as p2(v) is a probability density.

[68] In this case, it makes no mathematical difference to replace the speed by its square, hence p2(v) = p2(vx²+vy²+vz²).

[69] p2(v) = p2(vx²+vy²+vz²) = px(vx)·py(vy)·pz(vz) = a·exp(-½mb(vx²+vy²+vz²)).

[70] From the law of Boyle and Gay-Lussac (PV = NkT, where T is the temperature and k is Boltzmann’s constant), it follows that b = N/PV = 1/kT. The value of a follows from normalisation, i.e. the requirement that the total probability equals 1.

[71] When Maxwell published his theory, it was not generally accepted that most known gases (hydrogen, oxygen, or nitrogen) consist of diatomic molecules. These gases have a different specific heat than monatomic gases like mercury vapour and the later discovered noble gases like helium and argon. Boltzmann explained this difference by observing that diatomic molecules have rotational and vibrational kinetic energy besides translational kinetic energy. An exact explanation became available only after the development of quantum physics.

[72] p(r,v) = p1(r)p2(v) = (a/V)·exp(-E/kT).

[73] The Boltzmann factor is: p(E1)/p(E2) = [exp(-E1/kT)]/[exp(-E2/kT)] = exp(-(E1-E2)/kT).

[74] Rudolf Clausius and Ludwig Boltzmann aimed to reduce the irreversibility expressed by the second law of thermodynamics to the reversible laws of mechanics. To what extent they succeeded is still a matter of dispute. In any case, it could not be done without having recourse to probability laws, see Bellone 1980, 91. Boltzmann demonstrated that the equilibrium state of a gas has a much larger probability than a non-equilibrium state. He assumed that any system moves from a state with a low probability to a state with a larger one as a matter of course. This means that the irreversibility of the realization of possibilities is presupposed. In quantum mechanics, the combination of reversible equations of motion with probability leads to irreversible processes as well, see Belinfante 1975, chapter 2.

[75] Von Mises 1939, 163-176, Reichenbach 1956, 96ff, Hempel 1965, 387, and initially Popper 1959, chapter VIII. Later Karl Popper defended the ‘propensity-interpretation’ of probability: we have to ‘… interpret these weights of the possibilities (or of the possible cases) as measures of the propensity, or tendency, of a possibility to realize itself upon repetition’, Popper 1967, 32. Popper 1983, 286: A propensity is a physical disposition or tendency ‘… to bring about the possible state of affairs … to realize what is possible ... the relative strength of a tendency or propensity of this kind expresses itself in the relative frequency with which it succeeds in realizing the possibility in question.’ See Settle 1974 discussing Popper’s views; Margenau 1950, chapter 13; Nagel 1939, 23; Sklar 1993, 90-127. Besides subjectivist views, the frequency interpretation and the propensity interpretation, Sklar distinguishes ‘“probability” as a theoretical term’ (ibid. 102-108). ‘… the meaning of probability attributions would be the rules of inference that take us upward from assertions about observed frequencies and proportions to assertions of probabilities over kinds in the world, and downward from such assertions about probabilities to expectations about frequencies and proportions in observed samples. These rules of “inverse” and “direct” inference are the fundamental components of theories of statistical inference.’ (ibid. 103). This comes close to my interpretation of probability determined by a character.

[76] Cp. Tolman 1938, 59: This hypothesis must be regarded ‘as a postulate which can be ultimately justified only by the correspondence between the conclusions which it permits and the regularities in the behaviour of actual systems which are empirically found.’ This applies to all suppositions founding calculations of probabilities.

[77] When at a time to, No radioactive atoms of the same kind are left in a sample, the expected number of remaining atoms at time t equals: Nt = No·exp(-(t-to)/τ), such that Nt/No = exp(-(t-to)/τ). The characteristic constant τ is proportional to the well-known half-life. The law of decay is theoretically derivable from quantum field theory. This results in a slight deviation from the exponential function, too small to be experimentally verifiable, see Cartwright 1983, 118.

[78] Namely as the proportion exp(-(t-to)/τ) = [exp(-t/τ)]/[exp(-to/τ)].

Chapter 3

 

Symmetry

3.1. Spatial magnitudes and vectors

3.2. Character, transformation and symmetry of spatial figures

3.3. Non-Euclidean space-time in the theory of relativity

3.1. Spatial magnitudes and vectors

 

The second relation frame for characters concerns their spatial relations. In 1899, David Hilbert formulated his foundations of geometry as relations between points, straight lines and planes, without defining these.[1] Gottlob Frege thought that Hilbert referred to known subjects, but Hilbert denied this. He was only concerned with the relations between things, leaving aside their nature. According to Paul Bernays, geometry is not concerned with the nature of things, but with ‘a system of conditions for what might be called a relational structure’.[2] Inevitably, this view influenced the later structuralist emphasis on structures.[3]

Topological, projective, and affine geometries are no more metric than the theory of graphs.[4] They deal with spatial relations without considering the quantitative relation frame. I shall not discuss these non-metric geometries. The nineteenth- and twentieth-century views about metric spaces and mathematical structures turn out to be very important to modern physics.

This chapter is mainly concerned with the possibility of projecting a relation frame on a preceding one, and its relevance to characters. Section 3.1 discusses spatial magnitudes and vectors. The metric of space, being the law for the spatial relation frame, turns out to rest on symmetry properties. Symmetry plays an important part in the character and transformation of spatial figures, the subject matter of section 3.2. Finally, section 3.3 deals with the metric of non-Euclidean kinetic space-time according to the theory of relativity.

Mathematics studies inter alia spatially qualified characters. Because these are interlaced with kinetic, physical, or biotic characters, spatial characters are equally important to science. This also applies to spatial relations concerning the position and posture of one figure with respect to another one. A characteristic point, like the centre of a circle or a triangle, represents the position of a figure objectively. The distance between these characteristic points objectifies the relative position of the circle and the triangle. It remains to stipulate the posture of the circle and the triangle, for instance with respect to the line connecting the two characteristic points. A co-ordinate system is an expedient to establish spatial positions by means of numbers.

 

Spatial relations are rendered quantitatively by means of magnitudes like distance, length, area, volume, and angle. These objective properties of spatial subjects and their relations refer directly (as a subject) to numerical laws and indirectly (as an object) to spatial laws.

Science and technology prefer to define magnitudes that satisfy quantitative laws.[5] If we want to make calculations with a spatial magnitude, we have to project it on a suitable set of numbers (integral, rational, or real), such that spatial operations are isomorphic to arithmetical operations like addition or multiplication. This is only possible if a metric is available, a law to find magnitudes and their combinations.

For many magnitudes, the isomorphic projection on a group turns out to be possible. For magnitudes having only positive values (e.g., length, area or volume), a multiplication group is suitable. For magnitudes having both positive and negative values (e.g., position), a combined addition and multiplication group is feasible. For a continuously variable magnitude, this concerns a group of real numbers. For a discrete magnitude like electric charge, the addition group of integers may be preferred. It would express the fact that charge is an integral multiple of the electron’s charge, functioning as a unit.

Every metric needs an arbitrarily chosen unit. Each magnitude has its own metric, but various metrics are interconnected. The metrics for area and volume are reducible to the metric for length. The metric for speed is composed from the metrics of length and time. Connected metrics form a metric system.

If a metric system is available, the government or the scientific community may decide to prescribe a metric to become a norm, for the benefit of technology, traffic and commerce. Processing and communicating of experimental and theoretical results requires the use of a metric system.

 

A point has no dimensions, and could have been considered a spatial object if extension were essential for spatial subjects. However, a relation frame is not characterized by any essence like continuous extension, but by laws for relations. Two points are spatially related by having a relative distance. The argument ‘a point has no extension, hence it is not a subject’ is reminiscent of Aristotle and his adherents. They abhorred nothingness, including the vacuum and the number zero as a natural number. Roman numerals do not include a zero, and Europeans did not recognize it until the end of the Middle Ages. Galileo Galilei taught his Aristotelian contemporaries that there is no fundamental difference between a state of rest (the speed equals zero) and a state of motion (the speed is not zero).[6]

It is correct that the property length does not apply to a point, any more than area can be ascribed to a line, or volume to a triangle. The difference between two line segments is a segment having a certain length. The difference between two equal segments is a segment with zero length, but a zero segment is not a point. A line is a set having points as its elements, and each segment of the line is a subset. A subset with zero elements or only one element is still a subset, not an element. A segment has length, being zero if the segment contains only one point. A point has no length, not even zero length. Dimensionality implies that a part of a spatial figure has the same dimension as the figure itself. A three-dimensional figure has only three-dimensional parts. We can neither divide a line into points, nor a circle into its diameters. A spatial relation of a whole and its parts is not a subject-object relation, but a subject-subject relation.[7]

Whether a point is a subject or an object depends on the nomic (nomos is Greek for law) context, on the laws we are considering. The relative position of the ends of a line segment determines in one context a subject-subject relation (to wit, the distance between two points), in another context a subject-object relation (the objective length of the segment). Likewise, the sides of a triangle, having length but not area, determine subjectively the triangle’s circumference, and objectively its area.

 

The sequence of numbers can be projected on a line, ordering its points numerically. To order all points on a line or line segment, the natural, integral or even rational numbers are not sufficient. It requires the complete set of real numbers (2.2). The spatial order of equivalence or co-existence presents itself to full advantage only in a more-dimensional space. In a three-dimensional space, all points in a plane perpendicular to the x-axis correspond simultaneously to a single point on that axis. With respect to the numerical order on the x-axis, these points are equivalent. To lay down the position of a point completely, we need several numbers (x,y,z,…) simultaneously, as many as the number of dimensions. Such an ordered set of numbers constitutes a number vector (2.3).

For the character of a spatial figure too, the number of dimensions is a dominant characteristic. The number of dimensions belongs to the laws constituting the character. A plane figure has length and width. A three-dimensional figure has length, width and height as mutually independent measures. The character of a two-dimensional figure like a triangle may be interlaced with the character of a three-dimensional figure like a tetrahedron. Hence, dimensionality leads to a hierarchy of spatial figures. At the base of the hierarchy, we find one-dimensional spatial vectors.

 

Contrary to a number vector, a spatial vector is localized and oriented in a metrical space. Localization and orientation are spatial concepts, irreducible to numerical ones. A spatial vector marks the relative position of two points. By means of vectors, each point is connected to all other points in space. Vectors having one point in common form an addition group. After the choice of a unit of length, this group is isomorphic to the group of number vectors having the same dimension. Besides spatial addition, a scalar product is defined (2.3).[8] The group’s identity element is the vector with zero length. Its base is a set of orthonormal vectors, i.e., the mutually perpendicular unit vectors having a common origin. Each vector starting from that origin is a linear combination of the unit vectors. So far, there is not much difference with the number vectors.

However, whereas the base of a group of number vectors is rather unique, in a group of spatial vectors the base can be chosen arbitrarily. For instance, one can rotate a spatial base about the origin. It is both localized and oriented. The set of all bases with a common origin is a rotation group. The set of all bases having the same orientation but different origins is a translation group. It is isomorphic both to the addition group of spatial vectors having the same origin and to the addition group of number vectors.

 

Euclidean space is homogeneous (similar at all positions) and isotropic (similar in all directions). Combining spatial translations, rotations, reflections with respect to a line or a plane, and inversions with respect to a point leads to the Euclidean group. It reflects the symmetry of Euclidean space. Symmetry points to a transformation keeping certain relations invariant.[9] At each operation of the Euclidean group, several quantities and relations remain invariant, for instance the distance between two points, the angle between two lines, the shape and the area of a triangle, and the scalar product of two vectors.
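
As an illustration of such invariance, the sketch below (with arbitrarily chosen vectors and angle of rotation) checks that a rotation about the origin, one operation of the Euclidean group, leaves the scalar product of two vectors unchanged:

```python
import math

def rotate(v, angle):
    """Rotate a two-dimensional vector about the origin."""
    x, y = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

a, b = (3.0, 0.0), (1.0, 2.0)
ra, rb = rotate(a, 0.7), rotate(b, 0.7)
print(dot(a, b), dot(ra, rb))  # both 3.0: the scalar product is invariant
```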

Besides a relative position, a spatial vector represents a displacement, the result of a motion. This is a disposition, a tertiary characteristic of spatial vectors.

 

Each base in each point of space defines a co-ordinate system. In a Euclidean space, this is usually a Cartesian system of mutually perpendicular axes. Partly, the choice of the co-ordinate system is arbitrary. We are free to choose rectangular, oblique or polar axes.[10] If we have a reference system, we can replace it by translation, rotation, mirroring or a combination of these. A co-ordinate system has to satisfy certain rules.

 

1. The number of axes and unit vectors equals the number of dimensions. With fewer co-ordinates, the system is underdetermined, with more it is overdetermined.

2. The unit vectors are mutually independent. Two vectors are mutually dependent if they have the same direction. An arbitrary vector is a linear combination of the unit vectors, and is said to depend on them.[11]

3. Replacing a co-ordinate system should not affect the spatial relations between the subjects in the space. In particular the distance between two points should have the same value in all co-ordinate systems. This rule warrants the objectivity of the co-ordinate systems.[12]

4. The choice of a unit of length is arbitrary, but should have the same value in all co-ordinate systems, as well as along all co-ordinate axes. That may seem obvious, but for a long time at sea, the units used for depth and height were different from those for horizontal dimensions and distances.

5. For calculating the distance between two points we need a law, called the spatial metric, see below.

6. The co-ordinate system should reflect the symmetry of the space. For a Euclidean space, a Cartesian co-ordinate system satisfies this requirement. Giving preference to one point, e.g. the source of an electric field, breaks the Euclidean symmetry. In that case, scientists often prefer a co-ordinate system that expresses the spherical symmetry of the field. In the presence of a homogeneous gravitational field, physicists usually choose one of the co-ordinate axes in the direction of the field. If the space is non-Euclidean, like the earth’s surface, a Cartesian co-ordinate system is quite useless.

 

The fact that we are free to choose a co-ordinate system has generated the assumption that this choice rests on a convention, an agreement to keep life simple.[13] However, both the fact that a group of co-ordinate systems reflects the symmetry of the space and the requirement of objectivity make clear that these rules are normative. It is not imperative to follow these rules, but we ought to choose a system that reflects spatial relations objectively.

 

The metric depends on the symmetry of space. In a Euclidean space, Pythagoras’ law determines the metric.[14] Since the beginning of the nineteenth century, mathematics acknowledges non-Euclidean spaces as well.[15] (Long before, it was known that on a sphere the Euclidean metric is only applicable to distances small compared with the radius.) Preceded by Carl Friedrich Gauss, in 1854 Bernhard Riemann formulated the general metric for an infinitesimally small distance in a multidimensional space.[16]

For a non-Euclidean space, the coefficients in the metric depend on the position.[17] To calculate a finite displacement requires the application of integral calculus. The result depends on the choice of the path of integration. The distance between two points is the smallest of these path lengths. On the surface of a sphere, the distance between two points corresponds to the path along a great circle, whose centre coincides with the centre of the sphere.
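
For illustration, the sketch below computes such a distance from the standard great-circle formula (the chosen points are arbitrary); note that the arc along the great circle is longer than the Euclidean chord through the sphere:

```python
import math

def great_circle(lat1, lon1, lat2, lon2, R=1.0):
    """Distance along a great circle on a sphere of radius R (angles in radians)."""
    cos_angle = (math.sin(lat1) * math.sin(lat2)
                 + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
    return R * math.acos(cos_angle)

# A quarter of the equator, versus the straight chord through the sphere:
print(great_circle(0.0, 0.0, 0.0, math.pi / 2))  # pi/2, approximately 1.571
print(math.sqrt(2.0))                            # chord: approximately 1.414
```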

The metric is determined by the structure and possibly the symmetry of the space. This space has the disposition to be interlaced with the character of kinetic space or with the physical character of a field. A well-known example is the general theory of relativity, being the relativistic theory of the gravitational field.[18]

In general, a non-Euclidean space is less symmetrical than a Euclidean one having the same number of dimensions. Motion as well as physical interaction breaks the symmetry of spatial relations.

 


 

 

3.2. Character, transformation

and symmetry of spatial figures

 

This section discusses the shape of a spatial figure as an elementary example of a character. A spatial character has both a primary and a secondary characteristic. The tertiary characteristic plays an increasingly complex part in the path of a specific motion, the shape of a crystal, the morphology of a plant or the body structure of an animal. Besides, even the simplest figures display a spatial interlacement of their characters.

 

A spatial figure has the profile of a thing-like subject. Its shape determines its character. Consider a simple plane triangle in a Euclidean space.[19] The character of a triangle constitutes a set of widely different triangles, having different angles, linear dimensions, and relative positions.[20] We distinguish this set easily from related sets of, e.g., squares, ellipses, or pyramids. Clearly, the triangle’s character is primarily spatially characterized and secondarily quantitatively founded. Thirdly, a triangle has the disposition to have an objective function in a three- or more-dimensional figure.

A triangle is a two-dimensional spatial thing, directly subject to spatial laws. The triangle is bounded by its sides and angular points, which have no two-dimensional extension but determine the triangle’s objective magnitude. Quantitatively, we determine the triangle by the number of its angular points and sides, the magnitude of its angles, the length of its sides and its area.

With respect to the character of a triangle, its sides and angular points are objects, even if they are in another context subjects (3.1). Their character has the disposition to become interlaced with that of the triangle.

A triangle has a structure or character because its objective measures are bound, satisfying restricting laws or constraints. Partly this is a matter of definition, a triangle having three sides and three angular points. This definition is not entirely free, for a ‘biangle’ as a two-dimensional figure does not exist, and a quadrangle may be considered a combination of two triangles. However, there are other lawlike relations not implied by the definition, for instance the law that the sum of the three angles equals π, the sum of two right angles. This is a specific law, only valid for plane triangles.

A triangle is a whole with parts. As observed, the relation of a whole and its parts is not to be confused with a subject-object relation. It makes no sense to consider the sides and the angular points as parts of the triangle. With respect to a triangle, the whole-part relation has no structural meaning. In contrast, a polygon is a combination of triangles being parts of the polygon. Therefore, a polygon has not much more structure than it derives from its component triangles. The law that the sum of the angles of a polygon having n sides equals (n-2)π is reducible to the corresponding law for triangles.

 

Two individual triangles can be distinguished in three ways, by their relative position, their relative magnitude, and their different shape. I shall consider two mirrored triangles to be alike.

Relative position is not relevant for the character of a triangle. We could just as well consider its relative position with respect to a circle or to a point as to another triangle. Relative position is the universal spatial subject-subject relation. It allows of the identification of any individual subject. Often, the position of a triangle will be objectified, e.g. by specifying the positions of the angular points with respect to a co-ordinate system.

Next, triangles having the same shape can be distinguished by their magnitude. This leads to the secondary variation in the quantitative foundation of the character.

Finally, two triangles may have different shapes, one being equilateral, the other right-angled, for example. This leads to the primary variation in the spatial qualification of the triangle’s character. Triangles are spatially similar if they have equal angles. Their corresponding sides then have an equal ratio, being proportional to the sines of the opposite angles.

For any polygon, the triangle can be considered the primitive form. It displays a primary spatial variability in its shape and a secondary quantitative variability in its magnitude. Another primitive form is the ellipse, with the circle as a specific variation.

There are irregular shapes as well, not subject to a specific law. These forms have a secondary variability in their quantitative foundation, but lack a lawlike primary variation regarding the qualifying relation frame.

 

Just as two triangles can differ in three respects, a triangle can be changed in three ways: by displacement (translation, rotation and/or mirroring), by making it larger or smaller, or by changing its shape, i.e. by transformation. A transformation means that the triangle becomes a triangle with different angles, or that it gets an entirely different shape. Displacement, enlargement or diminishment and transformation are spatial expressions anticipating actual events.

An operator (2.3) describes a characteristic transformation, if co-ordinates and functions represent the position and the shape of the figure. The character of a spatial transformation preserving the shape of the figure is interlaced with the character of an operator having eigenfunctions and eigenvalues.

 

All displacements of a triangle in a plane form a group isomorphic to the addition group of two-dimensional vectors. All rotations, reflections and their combinations constitute groups as well. Enlargements of a given triangle form a group isomorphic to the multiplication group of positive real numbers. (A subgroup is isomorphic to the multiplication group of positive rational numbers).

A separate class of spatial figures is called symmetric, e.g., equilateral and isosceles triangles. Symmetry is a property related to a spatial transformation such that the figure remains the same in various respects. Without changing, an equilateral triangle can be reflected in three ways and rotated about two angles. An isosceles triangle has only one similar operation, reflection, and is therefore less symmetric. A circle is very symmetric, because an infinite number of rotations and reflections transform it into itself.

The theory of groups renders good services to the study of these symmetries (2.3).[21] Consider the group consisting of only three elements, I, A and B, such that AB=I, AA=B, BB=A. This is very abstract and only becomes transparent if an interpretation of the elements is given. This could be the rotation symmetry of an equilateral triangle, A being an anti-clockwise rotation of 2π/3, B of 4π/3. The inverse is the same rotation clockwise. The combination AB is the rotation B followed by A, giving I, the identity. Clearly, the character of this group has the disposition of being interlaced with the character of the equilateral triangle. However, this triangle has more symmetry, such as reflections with respect to a perpendicular. This yields three more elements for the symmetry group, now consisting of six elements. The rotation group I, A, B is a subgroup, isomorphic to the group consisting of the numbers 0, 1 and 2 added modulo 3 (2.3). The group is not only interlaced with the character of an equilateral triangle, but with many other spatial figures having a threefold symmetry, as well as with the group of permutations of three objects.[22] In turn, the character of an equilateral triangle is interlaced with that of a regular tetrahedron. The symmetry group of this triangle is a subgroup of the symmetry group of the tetrahedron.
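
The isomorphism with addition modulo 3 can be checked directly; a minimal sketch:

```python
# The rotation group {I, A, B}, represented by the numbers 0, 1, 2 added modulo 3.
elements = {"I": 0, "A": 1, "B": 2}

def combine(x, y):
    """Combination of two rotations, as addition of angles modulo a full turn."""
    return (elements[x] + elements[y]) % 3

assert combine("A", "B") == elements["I"]  # AB = I
assert combine("A", "A") == elements["B"]  # AA = B
assert combine("B", "B") == elements["A"]  # BB = A
print("group table verified")
```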

A group expresses spatial similarity as well. The combination procedure consists of the multiplication of all linear dimensions with the same positive real or rational number, leaving the shape invariant. The numerical multiplication group of either rational or real positive numbers is interlaced with a spatial multiplication group concerning the secondary foundation of figures.

The translation operator, representing a displacement by a vector,[23] is an element of various groups, e.g. the Euclidean group mentioned before. Solid-state physics applies translation groups to describe the regularity of crystals. This implies an interlacement of the quantitative character of a group with the spatial character of a lattice and with the physical character of a crystal. The translation group for this lattice is an addition group for spatial vectors. It is isomorphic to a discrete group of number vectors, whose components are not real or rational but integral. The crystal’s character has the disposition to be interlaced with the kinetic wave character of the X-rays diffracted by the crystal. Hence, this kind of diffraction is only possible for a discrete set of wave lengths.

 

The question of whether figures and kinetic subjects are real usually receives a negative answer.[24] The view that only physical things are real is a common form of physicalism.

First, this is the view of natural experience, which appears to accept only tangible matters as real. Nevertheless, without the help of any theory, everybody recognizes typical shapes like circles, triangles or cubes. This applies to typical motions like walking, jumping, rolling or gliding as well.

Second, reality is sometimes coupled to observability. Now shapes are very well observable, albeit that we always need a physical substrate for any actual observation. Moreover, it would be an impoverishment if we would restrict our experience to what is directly observable. Human imagination is capable of representing many things that are not directly observable. For instance, we are capable of interpreting drawings of two-dimensional figures as three-dimensional objects. Although a movie consists of a sequence of static pictures, we see people moving. We can even see things that have no material existence, like a rainbow.

Third, I observe that the view that shapes are not real is strongly influenced by Plato, Aristotle, and their medieval commentators. According to Plato, spatial forms are invisible, but more real than observable phenomena. In contrast, Aristotle held that forms determine the nature of the things, having a material basis as well. Moreover, the realization of an actual thing requires an operative cause. Hence, according to Aristotle, all actually existing things have a physical character. 

In opposition, I maintain that everything in the cosmos is real that answers to the laws of the cosmos. Then numbers, groups, spatial figures and motions are no less real than atoms and stars.

But are these natural structures? It cannot be denied that the concepts of a circle or a triangle were developed in the course of history, in human cultural activity. Yet I consider them to be natural characters, whose existence humanity has discovered, just as it discovered the characters of atoms and molecules.

Reality as a theoretical concept implies that the temporal horizon is much wider than the horizon of our individual experience, and in particular much wider than the horizon of natural experience. By scientific research, we enlarge our horizon, discovering characters that are hidden from natural experience. Nevertheless, such characters are no less real than those known to natural experience.

 

We call the kinetic space for waves a medium (and sometimes a field), and we call the physical space for specific interactions a field. For the study of physical interactions, spatial symmetries are very important. For instance, in classical physics this is the case with respect to gravity (Newton’s law), the electrostatic force (Coulomb’s law) and the magnetostatic force. Each of these forces is subject to an ‘inverse square law’. This law expresses the isotropy of physical space. In all directions, the field is equally strong at equal distances from a point-like source, and the field strength is inversely proportional to the square of the distance. About 1830, Carl Friedrich Gauss developed a method allowing one to calculate the field strength of combinations of point-like sources. He introduced the concept of ‘flux’ through a surface, popularly expressed as the number of field lines passing through the surface.[25] Gauss proved that the flux through a closed surface around one or more point-like sources is proportional to the total strength of the sources, independent of the shape of that surface and the position of the sources.[26] This symmetry property has some important consequences.

Outside the sphere, a homogeneous spherical charge or mass causes a field that is equal to that of a point-like source concentrated in the centre of the sphere. Within the sphere, the field is proportional to the distance from the centre. Starting from the centre, the field initially increases linearly, but outside the sphere it decreases quadratically. For gravity, Isaac Newton had derived this result by other means.
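
A sketch of this consequence of Gauss’s law, in units chosen such that the constant of proportionality equals 1 (radius and source strength are arbitrary):

```python
def field_of_sphere(r, R=1.0, Q=1.0):
    """Field strength at distance r from the centre of a homogeneous sphere
    of radius R and total source strength Q (proportionality constant 1)."""
    if r >= R:
        return Q / r**2    # outside: as if the source were concentrated in the centre
    return Q * r / R**3    # inside: proportional to the distance from the centre

for r in (0.25, 0.5, 1.0, 2.0):
    print(r, field_of_sphere(r))  # increases linearly, then decreases quadratically
```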

For magnetic interaction, physicists find empirically that the flux through a closed surface is always zero. This means that within the surface there are as many positive as negative magnetic poles. Magnetism only occurs in the form of dipoles or multipoles. There is no law excluding the existence of magnetic monopoles, but experimental physics has never found any.

In the electrical case, the combination of Gauss’s law with the existence of conductors leads to the conclusion that in a conductor carrying no electric current the electric field is zero. All net charge is located on the surface, and the resulting electric field outside the conductor is perpendicular to the surface. Therefore, inside a hollow conductor the electric field is zero, unless there is a net charge in the cavity. Experimentally, this has been tested with great accuracy. Because this result depends on the inverse square law, it has been established that the exponent in Coulomb’s law differs less than 10⁻²⁰ from 2. If there is a net charge in the cavity, there is as much charge (with reversed sign) on the inside surface of the conductor. It is distributed such that in the conductor itself the field is zero. If the net charge on the conductor is zero, the charge at the outside surface equals the charge in the cavity. By connecting it with the ‘earth’, the outside can be discharged. Now outside the conductor the electric field is zero, and the charge within the cavity is undetectable. Conversely, a space surrounded by a conductor is screened from external electric fields.

Gauss’s law depicts a purely spatial symmetry and is therefore only applicable in static or quasi-static situations. James Clerk Maxwell combined Gauss’s law for electricity and magnetism with Ampère’s law and Faraday’s law for changing fields. As a consequence, Maxwell found the laws for the electromagnetic field. These laws are not static, but relativistically covariant, as Albert Einstein established.

 

Spin is a well-known property of physical particles. It derives its name from the assumption, now considered naive, that a particle spins around its axis. If the particle is subject to electromagnetic interaction, a magnetic moment accompanies the spin, even if the particle is not charged. A neutron has a magnetic moment, whereas a neutrino has not. Spin is an expression of the particle’s rotation symmetry, and is similar to the angular momentum of an electron in its orbit in an atom. A pion has zero spin and transforms under rotation like a scalar. The spin of a photon is 1 and it transforms like a vector. The hypothetical graviton’s spin is twice as large, behaving as a tensor under rotation. These particles, called bosons, have symmetrical wave functions. Having a half-integral spin (as is the case with, e.g., an electron or a proton), a fermion’s wave function is antisymmetric. It changes sign after a rotation of 2π (4.4). This phenomenon is unknown in classical physics.

 

 


 

 

3.3. Non-Euclidean space-time

in the theory of relativity

 

Until the end of the nineteenth century, motion was considered as change of place, with time as the independent variable. Isaac Newton thought space to be absolute, the expression of God’s omnipresence, a sensorium Dei. Newton’s contemporaries Christiaan Huygens and Gottfried Wilhelm Leibniz were more impressed by the relativity of motion. They believed that anything moves only relative to something else, not relative to absolute space. As soon as Thomas Young, Augustin Fresnel and other physicists in the nineteenth century established that light is a moving wave, they started the search for the ether, the material medium for wave motion. They identified the ether with Newton’s absolute space, now without the speculative reference to God’s omnipresence. This search had little success, the models for the ether being inconsistent or contrary to observed facts. In 1865, James Clerk Maxwell formulated his electromagnetic theory, connecting magnetism with electricity, and interpreting light as an electromagnetic wave motion. Although Maxwell’s theory did not require the ether, he persisted in believing in its existence. In 1905, Albert Einstein suggested abandoning the ether.[27] He did not prove that it does not exist, but showed it to be superfluous. Physicists had intended the ether as a material substratum for electromagnetic waves. However, in Einstein’s theory it would not be able to interact with anything else. Consequently, the ether lost its physical meaning.[28]

Until Einstein, kinetic time and space were considered independent frames of reference. In 1905, Albert Einstein shook the world by proving that the kinetic order implies a relativization of the quantitative and spatial orders. Two events that are synchronous according to one observer turn out to be diachronous according to an observer moving at high speed with respect to the former. This relativization is unheard of in the common conception of time, and it surprised both physicists and philosophers.

Einstein based the special theory of relativity on two postulates or requirements for the theory. The first postulate is the principle of relativity. It requires each natural law to be formulated in the same way with respect to each inertial frame of reference. The second postulate demands that light have the same speed in every inertial system. From these two axioms, Einstein could derive the mentioned relativization of the quantitative and spatial orders. He also showed that the units of length and of time depend on the choice of the reference system. Moving rulers are shorter and moving clocks are slower than resting ones.[29] Only the speed of light is the same in all reference systems, acting as a unit of motion. Indeed, relativity theory often represents velocities in proportion to the speed of light.

 

An inertial system is a system of reference in which Newton’s first law of motion, the law of inertia, is valid. Unless some unbalanced force is acting on it, a body moves with constant velocity (both in magnitude and in direction) with respect to an inertial system. This is a reference system for motions; hence, it includes clocks besides a spatial co-ordinate system. If we have one inertial system, we can find many others by shifting, rotating, reflecting, or inversing the spatial co-ordinates; or by moving the system at a constant speed; or by resetting the clock, as long as it displays kinetic time uniformly (4.1). These operations form a group, in classical physics called the Galileo group. Here time is treated as a variable parameter independent of the three-dimensional spatial co-ordinate system. Since Einstein proved this to be wrong, an inertial system is taken to be four-dimensional. The corresponding group of operations transforming one inertial system into another one is called the Lorentz group.[30] The distinction between the classical Galileo group and the special relativistic Lorentz group concerns relatively moving systems. Both have an Euclidean subgroup of inertial systems not moving with respect to each other.[31]

In a four-dimensional inertial system, a straight line represents a uniform motion. Each point on this line represents the position (x,y,z) of the moving subject at the time t. If the speed of light is the unit of velocity, a line at an angle of π/4 with respect to the t-axis represents the motion of a light signal. The relativistic metric concerns the spatio-temporal interval between two events.[32] The combination rule in the Lorentz group is formulated such that the interval is invariant at each transformation of one inertial system into another one. Only then is the speed of light (the unit of motion) equal in all inertial systems. A flash of light expands spherically at the same speed in all directions, in any inertial reference system in which this phenomenon is registered. This system is called the block universe or Hermann Minkowski’s space-time continuum.[33]

The magnitude of the interval is an objective representation of the relation between two events, combining a time difference with a spatial distance. For the same pair of events in another inertial system, both the time difference Δt and the spatial distance Δr may be different. Only the magnitude Δs of the interval is independent of the choice of the inertial system.
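
For illustration, the sketch below applies a Lorentz transformation (with c = 1 and an arbitrarily chosen relative speed) to a pair of events and checks that the interval is the same in both inertial systems:

```python
import math

def boost(t, x, v):
    """Lorentz transformation along the x-axis with relative speed v, where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

dt, dx = 1.0, 2.0                      # a space-like pair of events (dx > dt)
dt2, dx2 = boost(dt, dx, 0.6)
print(dx**2 - dt**2, dx2**2 - dt2**2)  # both 3.0: the interval is invariant
print(dt2)                             # negative: the time order of the pair is reversed
```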

 

Whereas the Euclidean metric is always positive or zero, the pseudo-Euclidean metric determining the interval between two events may be negative as well. For the motion of a light signal between two points, the interval is zero.[34] In other cases, an interval is called space-like if the distance Δr > cΔt, or time-like if the time difference Δt > Δr/c (in absolute values). In the first case, light cannot bridge the distance within the mentioned time difference, in the second case it can.

For two events having a space-like interval, an inertial system exists such that the time difference is zero (Δt=0), hence the events are simultaneous. In another system, the time difference may be positive or negative. The distance between the two events is too large to be bridged even by a light signal, hence the two events cannot be causally related. Whether such a pair of events is diachronous or synchronous appears to depend on the choice of the inertial system.

Other pairs of events are diachronous in every inertial system, their interval being time-like (Δs²<0). If in a given inertial system event A occurs before event B, this is the case in any other inertial system as well. Now A may be a cause of B, anticipating the physical relation frame. The causal relation is irreversible, the cause preceding the effect.[35]

The formula for the relativistic metric shows that space and time are not equivalent, as is often stated. By a rotation about the z-axis, the x-axis can be transformed into the y-axis. In contrast, no transformation exists from the t-axis into one of the spatial axes or conversely.

In the four-dimensional space-time continuum, the spatial and temporal co-ordinates form a vector. Other vectors are four-dimensional as well, often by combining a classical three-dimensional vector with a scalar. This is meaningful if the vector field has the same or a comparable symmetry as the space-time continuum.[36]

 

An unexpected consequence of the symmetry of physical space and time is that the laws of conservation of energy, linear and angular momentum turn out to be derivable from the principle of relativity. Emmy Noether first showed this in 1918. Because natural laws have the same symmetry as kinetic space, the conservation laws in classical mechanics differ from those in special relativity.

Considering the homogeneity and isotropy of a field-free space and the uniformity of kinetic time, theoretically the principle of relativity allows of two possibilities for the transformations of inertial systems.[37] According to the classical Galileo group, the metric for time is independent of the metric for space. The units of length and time are invariant under all transformations. The speed of light is different in relatively moving inertial systems. In the relativistic Lorentz group, the metrics for space and time are interwoven into the metric for the interval between two events. The units of length and time are not invariant under all transformations. Instead, the unit of velocity (the speed of light) is invariant under all transformations. On empirical grounds, the speed of light being the same in all inertial systems, physicists accept the second possibility. Not the Galileo group but the Lorentz group turns out to be interlaced with kinetic space-time.
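
The difference between the two possibilities can be made concrete with the combination rule of note 31. A minimal Python sketch, with units in which c=1 and hypothetical speeds:

def galileo(v, w):
    return v + w                      # classical addition of velocities

def lorentz(v, w):
    return (v + w) / (1.0 + v * w)    # (v+w)/(1+vw/c^2) with c = 1

v, w = 0.8, 0.8
print(galileo(v, w))    # 1.6: exceeds the speed of light
print(lorentz(v, w))    # about 0.976: stays below the speed of light
print(lorentz(v, 1.0))  # 1.0: any speed combined with c returns c, as required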

 

According to the principle of relativity, the natural laws can be formulated independent of the choice of an inertial system. Einstein called this a postulate, a demand imposed on a theory. In contrast, I call it a norm,[38] resting on the irreducibility of physical interaction to spatial or kinetic relations. The principle of relativity is not merely a convention, an agreement to formulate natural laws as simply as possible. It is first of all a requirement of objectivity, to formulate the laws such that they have the same expression in every appropriate reference system.

Yet, physicists do not always stick to the principle of relativity. When standing on a revolving merry-go-round, anyone feels an outward centrifugal force. When trying to walk on the roundabout he or she experiences the Coriolis force as well. These forces are not the physical cause of acceleration, but its effect. Both are inertial forces, only occurring in a reference system accelerating with respect to the inertial systems.

Although the centrifugal force and the Coriolis force do not exist with respect to inertial systems, they are real, being measurable and exerting influence. In particular, the earth is a rotating system. The centrifugal force causes the acceleration of a falling body to be larger at the poles than at the equator.[39] The Coriolis force causes the rotation of Foucault’s pendulum, and it has a strong influence on the weather. The wind does not blow directly from a high- to a low-pressure area, but it is deflected by the Coriolis force to encircle such areas.
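
The order of magnitude of the centrifugal effect is easily estimated. The following sketch computes the centrifugal acceleration ω²R at the equator, with rounded values for the earth’s angular speed and equatorial radius:

import math

omega = 2 * math.pi / 86164   # the earth's angular speed in rad/s (sidereal day)
R = 6.378e6                   # equatorial radius in metres (rounded)
print(omega ** 2 * R)         # about 0.034 m/s^2, roughly 0.3% of g = 9.81 m/s^2

The remaining part of the observed difference between the poles and the equator is due to the flattening of the earth, as note 39 remarks.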

Another example of an inertial force occurs in a reference system having a constant acceleration with respect to inertial systems. This force experienced in an accelerating or braking lift or train is equal to the product of the acceleration and the mass of the subject on which the force is acting. It is a universal force, influencing the motion of all subjects that we wish to refer to the accelerated system of reference.

Often, physicists and philosophers point to such inertial forces in order to argue that the choice of inertial systems is arbitrary and conventional. On this view, we prefer inertial systems merely for simplicity, because it is awkward to take these universal forces into account. A better reason to avoid such universal forces is that they do not represent subject-subject relations. Inertial forces do not satisfy Newton’s third law, the law of equal action and reaction, for an inertial force has no reaction.[40] The source of the force is not another subject. A Newtonian physicist would call such a force fictitious.[41] The use of inertial forces is only acceptable for practical reasons. For instance, this applies to weather forecasting, because the rotation of the earth strongly influences the weather.

Another hallmark of inertial forces is that they are proportional to the mass of the subject on which they act. In fact, an inertial force is not really a force but an acceleration, i.e., the acceleration of the reference system with respect to inertial systems. We interpret it as a force, according to Newton’s second law.

 

Gravity too happens to be proportional to the mass of the subject on which it acts. At any place, all freely falling subjects experience the same acceleration. Hence, gravity looks like an inertial force. This inspired Einstein to develop the general theory of relativity, defining the metric of space and time such that gravity is eliminated. It leads to a curved space-time, having a strong curvature at places where - according to the classical view - the gravitational field is strong. Besides subjects having mass, massless things experience this field as well. Even light moves according to this metric, as confirmed by ingenious observations.

Yet, gravity is not an inertial force. Contrary to the centrifugal and Coriolis forces, gravity expresses a subject-subject relation. The presence of heavy matter determines the curvature of space-time. In classical physics, gravity was the prototype of a physical subject-subject relation. One of the unexpected results of Newton’s Principia was that the planets attract the sun, besides the sun attracting the planets. It undermined Newton’s Copernican view that the sun is at rest at the centre of the world.[42]

Einstein observed that a gravitational field in a classical inertial frame is equivalent to an accelerating reference system without gravity, like an earth satellite. The popular argument for this principle of equivalence is that locally one could not measure any difference.[43] I would like to make four comments.

First, on a slightly larger scale the difference between a homogeneous acceleration and a non-homogeneous gravitational field is easily determined.[44] Even in an earth satellite, differential effects are measurable. Except for a homogeneous field, the principle of equivalence is only locally valid.[45]

Second, the curvature of space-time is determined by matter, hence it has a physical source. The gravity of the sun causes the deflection of starlight observed during a total eclipse. An inertial force lacks a physical source.

Third, in non-inertial systems of reference, the law of inertia is invalid. In contrast, the general theory of relativity maintains this law, taking into account the correct metric. A subject on which no force is acting – apart from gravity – moves uniformly with respect to the general relativistic metric. If considered from a classical inertial system, this means a curved and accelerated motion due to gravity. The general relativistic metric eliminates (or rather, incorporates) gravity.

Finally, in the general relativistic space-time, the speed of light remains the universal unit of velocity. Light moves along a ‘straight’ line (the shortest line according to Riemann’s definition). Accelerating reference systems still give rise to inertial forces.[46]

The metrics of special and general relativity theory presuppose that light moves at a constant speed everywhere. The empirically confirmed fact that light is subject to gravity necessitates an adaptation of the metric. In the general theory of relativity, kinetic space-time is less symmetric than in the special theory. Because gravity is quite weak compared to other interactions, this symmetry break is only observable at a large scale, at distances where other forces do not act or are neutralized. Where gravity can be neglected, the special theory of relativity is applicable.

The general relativistic space-time is not merely a kinetic, but foremost a physical manifold. The objection against the nineteenth-century ether was that it did not allow of interaction. This objection does not apply to the general relativistic space-time, which acts on matter and is determined by matter.[47]

The general theory of relativity presents models for the physical space-time, which are testable. It leads to the insight that the physical cosmos is finite and expanding. It came into being about thirteen billion years ago, in a ‘big bang’. According to the standard model to be discussed in section 5.1, the fundamental forces initially formed a single universal interaction. Shortly after the big bang they fell apart by a symmetry break into the present electromagnetic, strong and weak nuclear interactions, besides the even weaker gravity. Only then were the characters to be discussed in the next two chapters gradually realized in the astrophysical evolution of the universe.



[1] ‘Projective geometry’ has been developed since the beginning of the nineteenth century as a generalization of Euclidean geometry.

[2] Shapiro 1997, 158; Torretti 1999, 408-410.

[3] e.g. Bourbaki, pseudonym for a group of French mathematicians. See Barrow 1992, 129-134; Shapiro 1997, chapter 5.

[4] A ‘graph’ is a two- or more-dimensional discrete set of points connected by line segments.

[5] This is not the case with all applications of numbers. House numbers project a spatial order on a numerical one, but hardly allow of calculations. Lacking a metric, neither Mohs’ scale of hardness nor Richter’s scale for earthquakes leads to calculations.

[6] Galileo 1632, 20-22.

[7] In a quantitative sense a triangle as well as a line segment is a set of points, and the side of a triangle is a subset of the triangle. But in a spatial sense, the side is not a part of the triangle.

[8] In an Euclidean space, the scalar product of two vectors a and b equals a.b=ab cos α. Herein a=√(a.a) is the length of a and α is the angle between a and b. If two vectors are perpendicular to each other, their scalar product is zero.

[9] Van Fraassen 1989, 262.

[10] Polar co-ordinates do not determine the position of a point by its projections on two or more axes, but by the distance r to the origin and by one or more angles. For example, think of the geographical determination of positions on the surface of the earth.

[11] In two dimensions, a=(a1,a2)=a1(1,0)+a2(0,1).

[12] In a co-ordinate transformation, a magnitude that remains equal to itself is called ‘invariant’. This applies e.g. to the magnitude of a vector and the angle between two vectors. ‘Covariant’ magnitudes change in analogy to the co-ordinates.

[13] See e.g. Grünbaum 1973, chapter 1; Sklar 1974, 88-146.

[14] If the co-ordinates of two points are given by (x1,y1,z1) and (x2,y2,z2), and if we call Δx=x2-x1 etc., then the distance Δr is the square root of Δr²=Δx²+Δy²+Δz². This is the Euclidean metric.

[15] Non-Euclidean geometries were discovered independently by Nicolai Lobachevski (first publication, 1829-30), János Bolyai and Carl Friedrich Gauss, later supplemented by Felix Klein. The significant step is to omit Euclid’s fifth postulate, corresponding to the axiom that one and only one line parallel to a given line can be drawn through a point outside that line.

[16] Riemann’s metric is dr²=gxxdx²+gyydy²+gxydxdy+gyxdydx+… Note the occurrence of mixed terms besides quadratic terms. In the Euclidean metric gxx=gyy=1, gxy=gyx=0, and Δx and Δy are not necessarily infinitesimal. See Jammer 1954, 150-166; Sklar 1974, 13-54. According to Riemann, a multiply extended magnitude allows of various metric relations, meaning that the theorems of geometry cannot be reduced to quantitative ones, see Torretti 1999, 157.

[17] If i and j indicate x or y, the gij’s are components of a tensor. In the two-dimensional case gij is a second derivative (like d²r/dxdy). For a more-dimensional space it is a partial derivative, meaning that other variables remain constant.

[18] In the general theory of relativity, the co-efficients for the four-dimensional space-time manifold form a symmetrical tensor, i.e., gij=gji for each combination of i and j. Hence, among the sixteen components of the tensor ten are independent. An electromagnetic field is also described by a tensor having sixteen components. Its symmetry demands that gij=-gji for each combination of i and j, hence the components of the quadratic terms are zero. This leaves six independent components, three for the electric vector and three for the magnetic pseudovector. Gravity having a different symmetry than electromagnetism is related to the fact that mass is definitely positive and that gravity is an attractive force. In contrast, electric charge can be positive or negative and the electric Coulomb force may be attractive or repulsive. A positive charge attracts a negative one, two positive charges (as well as two negative charges) repel each other.

[19] In a non-Euclidean space two figures only have the same shape if they have the same magnitude as well, see Torretti 1999, 149. Similarity (to be distinguished from congruence or displacement symmetry) is a characteristic of an Euclidean space. Many regular figures like squares or cubes only exist in an Euclidean space.

[20] Because each triangle belonging to the character class is a possible triangle as well, the ensemble coincides with the character class.

[21] In 1872, Felix Klein in his ‘Erlangen Program’ pointed out the relevance of the theory of groups for geometry, considered to be the study of properties invariant under transformations, see Torretti 1999, 155.

[22] A permutation is a change in the order of a sequence; e.g., BAC is a permutation of ABC. A set of n objects allows of n! = 1·2·3·…·n permutations.

[23] The translation about a vector a is formally represented by T(a)r=r+a.

[24] Even in Protestant philosophy. Dooyeweerd 1953-1958, III, 99: ‘No single real thing or event is typically qualified or founded in an original mathematical aspect.’ Hart 1984, 156: ‘If anything is to be actually real in the world of empirical existence, it must ultimately be founded in physical reality.’ Ibid. 263: ‘Existence is ordered so as to build on physical foundations.’

[25] An infinitesimal surface is defined as a vector a by its magnitude and the direction perpendicular to the surface. The flux is the scalar product of a with the field strength E at the same location and is maximal if a is parallel to E, minimal if their directions are opposite. If a ⊥ E the flux is zero. For a finite surface one finds the flux by integration.

[26] The proportionality factor depends on the force law and is different in the three mentioned cases.

[27] Einstein 1905.

[28] The cosmic electromagnetic background radiation discovered by Arno Penzias and Robert Wilson in 1964 may be considered to be a kind of ether.

[29] In the theory of Lorentz and others, time dilation and space contraction were explained as molecular properties of matter. Einstein explained them as kinetic effects.

[30] Sometimes called the Poincaré group, of which the Lorentz group (now without spatial and temporal translations) is a subgroup.

[31] The distinction concerns the combination of motions, objectified by velocities. Restricted to one direction, in the Galileo group velocities are combined by addition (v+w), in the Lorentz group by the formula (v+w)/(1+vw/c²), see section 2.3. The name ‘Galileo group’ dates from the twentieth century.

[32] The metric of special relativity theory is Δs²=Δx²+Δy²+Δz²-Δt²=Δr²-Δt². There are no mixed terms, and the interval is not necessarily infinitesimal. This metric is pseudo-Euclidean because of the minus sign in front of Δt². If the speed of light is not taken as the unit of speed, this term becomes c²Δt². The metric can be made apparently Euclidean by considering time an imaginary co-ordinate: Δs²=Δx²+Δy²+Δz²+(iΔt)². It is preferable to make visible that kinetic space is less symmetric than the Euclidean four-dimensional space, for lack of symmetry between the time axis and the three spatial axes. According to the formula, Δs² can be positive or negative, and Δs real or imaginary. Therefore, one defines the interval as the absolute value of Δs.

[33] Minkowski 1908.

[34] For a light signal, Δs=0, for the covered distance Δr equals cΔt. If Δr=0, the two events have the same position and the interval is a time difference (Δt). If Δt=0, the interval is a spatial distance (Δr) and the two events are simultaneous.

[35] Bunge 1967a, 206: ‘… the space of events, in which the future-directed [electromagnetic] signals exist, is not given for all eternity but is born together with happenings, and it has the arrow of time built into it.’

[36] For instance, the linear momentum and the energy of a particle are combined into the four-dimensional momentum-energy vector (px,py,pz,E/c). Its magnitude (the square root of px²+py²+pz²-E²/c²) has in all inertial systems the same value. The theory of relativity distinguishes invariant, covariant and contravariant magnitudes, vectors etc.

[37] Rindler 1969, 24, 51-53.

[38] Bunge 1967a, 213, 214: ‘The principle … is a normative metanomological principle …’, ‘… it constitutes a necessary though insufficient condition for objectivity …’

[39] Partly directly, partly due to the flattening of the earth at the poles, another effect of the centrifugal force.

[40] French 1965, 494. Sometimes one calls an inertial force a reaction force, and then there is no action.

[41] The inertial forces give rise to so many misunderstandings that W.F. Osgood (quoted by French 1965, 511) sighs: ‘There is no answer to these people. Some of them are good citizens. They vote the ticket of the party that is responsible for the prosperity of the country; they belong to the only true church; they subscribe to the Red Cross drive – but they have no place in the Temple of Science; they profane it.’

[42] Newton 1687, 419.

[43] Bunge 1967a, 207-210.

[44] Rindler 1969, 19; Sklar 1974, 70.

[45] Bunge 1967a, 210-212.

[46] This means that Einstein’s original intention to prove the equivalence of all moving reference systems has failed.

[47] Rindler 1969, 242.

 

 


 

 

 

 

Chapter 4

 

Periodic motion

 

  

 

 

 

 

 


 

 

4.1. Motion as a relation frame

4.2. The character of oscillations and waves

4.3. A wave packet as an aggregate

4.4. Symmetric and antisymmetric wave functions

 


 

 

 

4.1. Motion as a relation frame

 

Chapter 4 investigates characters primarily qualified by kinetic relations, avoiding high-level mathematics in favour of philosophical reflections.

In ancient and medieval philosophy, local motion was a kind of change. Classical mechanics emphasized uniform and accelerated motion of unchanging matter. In modern physics, the periodic motion of oscillations and waves is the main theme. In living nature and technology, rhythms play an important part as well.

Twentieth-century physics is characterized by the theory of relativity (chapter 3), by the investigation of the structure of matter (chapter 5), and by quantum physics. The latter is dominated by the duality of waves and particles. In section 4.1, I discuss the kinetic relation frame, and in section 4.2, the kinetic character of oscillations and waves. Section 4.3 deals with the character of a wave packet with its anticipations on physical interaction. Section 4.4 concerns the meaning of symmetrical and antisymmetrical wave functions for physical aggregates.

Kinetically qualified characters are founded in the quantitative or the spatial relation frame and are interlaced with physical characters. Like numbers and spatial forms, periodic motions take part in our daily experience. And like irrational numbers and non-Euclidean space, some aspects of periodic phenomena collide with common sense. Chapter 4 aims to demonstrate that a realistic interpretation of quantum physics is feasible and even preferable to the standard non-realistic interpretations. This requires insight into the phenomenon of character interlacement.

In section 1.2, I proposed relative motion to be the third general type of relations between individual things and processes. Kinetic time is subject to the kinetic order of uniformity and is expressed in the periodicity of mechanical or electric clocks. Before starting the investigation of kinetic characters, I discuss some general features of kinetic time.

 

Like the rational and real numbers, points on a continuous line are ordered, yet no point has a unique successor (2.2). One cannot say that a point A is directly succeeded by a point B, because there are infinitely many other points between A and B. Yet, a uniformly moving or accelerating subject passes the points of its path successively.[1] The succession of temporal moments cannot be reduced to quantitative and/or spatial relations. It presupposes the numerical order of earlier and later and the spatial order of simultaneity, being diachronic and synchronic aspects of kinetic time. Zeno recognized this long before the Christian era. Nevertheless, until the seventeenth century, motion was not recognized as an independent principle of explanation.[2] Later on, this principle was reinforced by Albert Einstein’s theory of relativity (3.3).

 

The uniformity of kinetic time seems to rest on a convention.[3] Sometimes it is even meaningful to construct a clock that is not uniform. For instance, the physical order of radioactive decay as applied in the dating of archaeological and geological finds is logarithmic rather than linear.[4] However, the uniformity of kinetic time together with the periodicity of many kinds of natural motion yields a kinetic norm for clocks. A norm is more than a mere agreement or convention. If applied by human beings constructing clocks, the law of inertia becomes a norm. A clock does not function properly if it represents a uniform motion as non-uniform.

With increasing clarity, the law of inertia was formulated by Galileo Galilei, René Descartes and others, finding its ultimate form in Isaac Newton’s first law of motion.[5] Inertial motion is not in need of a physical cause. Classical and modern physics consider inertial motion to be a state, not a change. In this respect, modern kinematics differs from that of Aristotle, who assumed that each change needs a cause, including local motion. Contrary to Aristotle (being the philosopher of common sense), the seventeenth-century physicists considered friction to be a force. Friction causes an actually moving subject to decelerate. In order to maintain a constant speed, another force is needed to compensate for friction. Aristotelians did not recognize friction as a force and interpreted the compensating force as the cause of uniform motion.

Uniformity of motion means that the subject covers equal distances in equal times. But how do we know which times are equal? The diachronous order of earlier and later allows of counting hours, days, months, and years. These units do not necessarily have a fixed duration. In fact, months are not equal to each other, and a leap year has an extra day. Until the end of the Middle Ages, an hour was not defined as 1/24th of a complete day, but as the 1/12th part of a day taken from sunrise to sunset. A day in winter being shorter than in summer, the duration of an hour varied annually. Only after the introduction of mechanical clocks in the fifteenth century did it become customary to relate the length of an hour to the period from noon to noon, such that all hours are equal.

Mechanical clocks measure kinetic time. Time as measured by a clock is called uniform if the clock correctly shows that a subject on which no net force is acting moves uniformly.[6] This appears to be circular reasoning. On the one hand, the uniformity of motion means equal distances in equal times. On the other hand, the equality of temporal intervals is determined by a clock subject to the norm that it represents uniform motion correctly.[7] This circularity is unavoidable, meaning that the uniformity of kinetic time is an unprovable axiom. However, this axiom is not a convention, but an expression of a fundamental and irreducible law.

 

Uniformity is a law for kinetic time, not an intrinsic property of time. There is nothing like a stream of time, flowing independently of the rest of reality.[8] Time only exists in relations between events. The uniformity of kinetic time expressed by the law of inertia asserts the existence of motions being uniform with respect to each other.

Both classical and relativistic mechanics use this law to introduce inertial systems. An inertial system is a spatio-temporal reference system in which the law of inertia is valid. It can be used to measure accelerated motions as well. Starting with one inertial system, all others can be constructed by using either the Galileo group or the Lorentz group, reflecting the relativity of motion (3.3). Both start from the axiom that kinetic time is uniform.

 

The law of uniformity concerns all dimensions of kinetic space. Therefore, it is possible to project kinetic time on a linear scale, irrespective of the number of dimensions of kinetic space. Equally interesting is that kinetic time can be projected on a circular scale, as displayed on a traditional clock. The possibility of establishing the equality of temporal intervals is actualized in uniform circular motion, in oscillations, waves, and other periodic processes. Therefore, besides the general aspect of uniformity, the time measured by clocks has a characteristic component as well, the periodic character of any clock.[9] Mechanical clocks depend on the regularity of a pendulum or a balance. Electronic clocks apply the periodicity of oscillations in a quartz crystal.  Periodicity has always been used for the measurement of time. The days, months, and years refer to periodic motions of celestial bodies. The modern definition of the second depends on atomic oscillations.[10] The periodic character of clocks allows of digitalizing kinetic time, each cycle being a unit, whereas the cycles are countable.

The uniformity of kinetic time as a universal law for kinetic relations and the periodicity of all kinds of periodic processes reinforce each other. Without uniformity, periodicity cannot be understood, and vice versa.

The idea that the uniformity of kinetic time is a convention has the rather absurd consequence that the periodicity of oscillations, waves and other natural rhythms would be a convention as well.

 


 

 

4.2. The character of oscillations and waves

 

Periodicity is the distinguishing mark of each primary kinetic character with a tertiary physical characteristic. The motion of a mechanical pendulum, for instance, is primarily characterized by its periodicity and tertiarily by gravitational acceleration. For such an oscillation, the period is constant if the metric for kinetic time is subject to the law of inertia. This follows from an analysis of pendulum motion. The character of a pendulum is applied in a clock. The dissipation of energy by friction is compensated such that the clock is periodic within a specified margin.

Kepler’s laws determine the character of periodic planetary motion. Strictly speaking, these laws only apply to a system consisting of two subjects, a star with one planet or binary stars. Both Newton’s law of gravity and the general theory of relativity allow of a more refined analysis. Hence, the periodic motions of the earth and other systems cannot be considered completely apart from physical interactions. However, in this section I shall abstract from physical interaction in order to concentrate on the primary and secondary characteristics of periodic motion.

 

The simplest case of a periodic motion appears to be uniform circular motion. Its velocity has a constant magnitude whereas its direction changes constantly. Ancient and medieval philosophy considered uniform circular motion to be the most perfect, only applicable to celestial bodies. Seventeenth-century classical mechanics discovered uniform rectilinear motion to be more fundamental, the velocity being constant in direction as well as in magnitude. Christiaan Huygens assumed that the outward centrifugal acceleration is an effect of circular motion. Robert Hooke and Isaac Newton demonstrated the inward centripetal acceleration to be the cause needed to maintain a uniform circular motion.

Not moving itself, the circular path of motion is simultaneously a kinetic object and a spatial subject. The position of the centre and the magnitude and direction of the circle’s radius vector determine the spatial position of the moving subject on its path. The radius is connected to magnitudes like orbital or angular speed, acceleration, period and phase.[11] These quantitative properties allow of calculations and an objective representation of motion.

A uniform circular motion can be constructed as a composition of two mutually perpendicular linear harmonic motions, having the same period and amplitude and a phase difference of one quarter. But then uniform circular motion turns out to be merely a single instance of a large class of two-dimensional harmonic motions. A similar composition of two harmonics – having the same period but different amplitudes or a phase difference other than one quarter – does not produce a circle but an ellipse.[12] We can also make a composition of two mutually perpendicular oscillations with different periods. The result is a Lissajous figure (so called after Jules A. Lissajous), closed if and only if the two periods have a harmonic ratio, i.e., if their proportion is a rational number. Only then is the path of motion a closed curve. If the proportion is an octave (1:2), the resulting figure is a lemniscate (a figure eight). The Lissajous figures derive their specific regularity from periodic motions. Clearly, the two-dimensional Lissajous motions constitute a kinetic character. This character has a primary rational variation in the harmonic ratio of the composing oscillations, as well as a secondary variation in frequency, amplitude and phase. It is interlaced with the character of linear harmonic motion and several other characters. The structure of the path, like the circle or the lemniscate, is primarily spatial and secondarily quantitatively founded. A symmetry group is interlaced with the character of each Lissajous figure, the circle being the most symmetrical of all.
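
The construction of these figures is elementary. A minimal Python sketch, with hypothetical amplitudes and phases, samples the composition of two perpendicular harmonics; a 1:1 ratio with a phase difference of a quarter period yields a circle, an octave yields a figure eight:

import math

def lissajous(ratio_x, ratio_y, phase, n=12, amplitude=1.0):
    # sample x(t) = A sin(ratio_x*t + phase), y(t) = A sin(ratio_y*t)
    points = []
    for i in range(n):
        t = 2 * math.pi * i / n
        points.append((amplitude * math.sin(ratio_x * t + phase),
                       amplitude * math.sin(ratio_y * t)))
    return points

print(lissajous(1, 1, math.pi / 2))   # quarter-period phase difference: a circle
print(lissajous(1, 2, 0.0))           # octave (1:2): a lemniscate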

In all mentioned characters, we find a typical subject-object relation determining an ensemble of possible variations. In the structure of the circle, the circumference has a fixed proportion to the diameter. This allows of an unbounded variation in diameter. In the character of the harmonic motion, we find the period (or its inverse, the frequency) as a typical magnitude, allowing of an unlimited variability in period as well as a bounded variation of phase. Varying the typical harmonic ratio results in an infinite but denumerable ensemble of Lissajous-figures.

 

A linear harmonic oscillation is quantitatively represented by a harmonic function. This is a sine or cosine function or a complex exponential function, being a solution of a differential equation.[13] This equation, the law for harmonic motion, concerns mechanical or electronic oscillations, for instance. Primarily, a harmonic oscillation has a specific kinetic character. It is a special kind of motion, characterized by its law and its period. An oscillation is secondarily characterized by magnitudes like its amplitude and phase, not determined by the law but by accidental initial conditions. Hence, the character of an oscillation is kinetically qualified and quantitatively founded.
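
As a sketch of this law, the following Python fragment checks numerically, by a finite-difference approximation, that a harmonic function x(t) = A cos(ωt + φ) with hypothetical amplitude, angular frequency and phase satisfies the equation d²x/dt² = -ω²x:

import math

A, w, phi = 2.0, 3.0, 0.5   # hypothetical amplitude, angular frequency, phase

def x(t):
    return A * math.cos(w * t + phi)

t, h = 1.0, 1e-4
d2x = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2   # numerical second derivative
print(d2x)              # approximately ...
print(-w ** 2 * x(t))   # ... equal to -w^2 x(t): the law for harmonic motion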

The harmonic oscillation can be considered the basic form of any periodic motion, including the two-dimensional periodic motions discussed above. In 1822, Joseph Fourier demonstrated that each periodic function is the sum or integral of a finite or infinite number of harmonic functions. The decomposition of a non-harmonic periodic function into harmonics is called Fourier analysis.
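
A standard textbook example may serve as a sketch. A square wave alternating between +1 and -1 is the sum of its odd harmonics, (4/π)·Σ sin(nt)/n for odd n; the Python fragment below shows the partial sums approaching the value 1 at t = π/2:

import math

def square_wave_partial_sum(t, n_terms):
    # sum the first n_terms odd harmonics of a square wave
    total = 0.0
    for k in range(n_terms):
        n = 2 * k + 1
        total += (4 / math.pi) * math.sin(n * t) / n
    return total

t = math.pi / 2                       # the square wave equals 1 at this point
for n_terms in (1, 5, 50):
    print(n_terms, square_wave_partial_sum(t, n_terms))
# 1.273..., 1.063..., 0.994...: Fourier synthesis converges to the square wave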

A harmonic oscillator has a single natural frequency determined by some specific properties of the system. This applies, for instance, to the length of a pendulum; or to the mass of a subject suspended from a spring together with its spring constant; or to the capacity and the inductance in an electric oscillator consisting of a capacitor and a coil. This means that the kinetic character of a harmonic oscillation is interlaced with the physical character of an artefact.

Accounting for energy dissipation by adding a velocity-dependent term leads to the equation for a damped oscillator. Now the initial amplitude decreases exponentially. In the equation for a forced oscillation, an additional acceleration accounts for the action of an external periodic force. In the case of resonance, the response is maximal. Now the frequency of the driving force is approximately equal to the natural frequency. Applying a periodic force, pulse or signal to an unknown system and measuring its response is a widely used method of finding the system’s natural frequency, revealing its characteristic properties.

 

An oscillation moving in space is called a wave. It has primarily a kinetic character, but contrary to an oscillation it is secondarily founded in the spatial relation frame. Whereas the source of the wave determines its period, the velocity of the wave, its wavelength and its wave number express the character of the wave itself.[14] The wave velocity has a characteristic value independent of the motion of the source. It is a property of the medium, the kinetic space of a wave, which specifically differs from the general kinetic space as described by the Galileo or Lorentz group.[15]

A wave has a variability expressed by its frequency, phase, amplitude, and polarization.[16] During the motion, the amplitude may decrease. For instance, in a spherical wave the amplitude decreases in proportion to the distance from the centre.

Waves do not interact with each other, but are subject to superposition. This is a combination of waves taking into account amplitude as well as phase. Superposition occurs when two waves are crossing each other. Afterwards each wave proceeds as if the other had been absent. Interference is a special case of superposition. Now the waves concerned have exactly the same frequency as well as a fixed phase relation. If the phases are equal, interference means an increase of the net amplitude. If the phases are opposite, interference may result in the mutual extinction of the waves.
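
A two-line sketch suffices to illustrate this. Superposing two hypothetical waves of equal amplitude and frequency in Python:

import math

def superpose(t, phase_difference, amplitude=1.0, f=1.0):
    # add two waves of equal frequency, differing only in phase
    w = 2 * math.pi * f
    return (amplitude * math.sin(w * t)
            + amplitude * math.sin(w * t + phase_difference))

t = 0.2
print(superpose(t, 0.0))       # equal phases: the amplitude is doubled
print(superpose(t, math.pi))   # opposite phases: the waves extinguish each other (practically 0)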

Just like an oscillation, each wave has a tertiary, usually physical disposition. This explains why waves and oscillations often make a technical impression: technology opens up dispositions. During the seventeenth century, the periodic character of sound was discovered in musical instruments. The relevance of oscillations and waves in nature was only fully realized at the beginning of the nineteenth century. This happened after Thomas Young and Augustin Fresnel brought about a breakthrough in optics by discovering the wave character of light in quite technical experiments. Since the end of the same century, oscillations and waves have dominated communication and information technology.

 

It will be clear that the characters of waves and oscillations are interlaced with each other. A sound wave is caused by a loudspeaker and strikes a microphone. Such an event has a physical character and can only occur if a number of physical conditions are satisfied. However, there is a kinetic condition as well. The frequency of the wave must be adapted to the oscillation frequency of the source or the detector. The wave and the oscillating system are correlated. This correlation concerns the property they have in common, i.e., their periodicity, their primary kinetic qualification.

Sometimes an oscillation and a wave are directly interlaced, for instance in a violin string. Here the oscillation corresponds to a standing wave, the result of interfering waves moving forward and backward between the two ends. The length of the string directly determines the wavelength and indirectly the frequency, dependent on the string’s physical properties determining the wave velocity. Amplified by a sound box, this oscillation is the source of a sound wave in the surrounding air having the same frequency. In fact, all musical instruments perform according to this principle. The wave is always spatially determined by its wavelength. The length of the string fixes the fundamental tone (the keynote or first harmonic) and its overtones. The frequency of an overtone is an integral multiple of the frequency of the first harmonic.
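
In a sketch with assumed values, the standing-wave condition yields the harmonic series directly: a string of length L fixed at both ends carries wavelengths 2L/n, hence frequencies fn = n·v/2L:

L = 0.65        # length of the string in metres (assumed)
v = 428.0       # wave velocity in the string in m/s (assumed)
for n in range(1, 5):
    print(n, n * v / (2 * L))
# about 329, 658, 988, 1317 Hz: each overtone is an integral multiple of the keynote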

 

A wave equation represents the law for a wave, and a real or complex wave function represents an individual wave. Whereas the equation for oscillations only contains derivatives with respect to time, the wave equation also involves differentiation with respect to spatial co-ordinates. Usually a linear wave equation provides a good approximation for a wave, for example, the equations for the propagation of light, the Schrödinger equation, and the Dirac equation.[17] If φ and ψ are solutions of a linear wave equation, then αφ+βψ is a solution as well, for each pair of real (or complex) numbers α and β. Hence, a linear wave equation has an infinite number of solutions, an ensemble of possibilities. Whereas the equation for an oscillation determines its frequency, a wave equation allows of a broad spectrum of frequencies. The source determines the frequency, the initial amplitude and the phase. The medium determines the wave velocity, the wavelength and the decrease of the amplitude when the wave proceeds away from the source.

 

Events having their origin in relative motions may be characteristic or not. A solar or lunar eclipse depends on the relative motions of sun, moon and earth. It is accidental and probably unique that the moon and the sun appear equally large as seen from the earth, such that the moon is able to cover the sun precisely. Such an event does not correspond to a character. However, wave motion gives rise to several characteristic events satisfying specific laws.

Snell’s and Brewster’s laws for the refraction and reflection of light at the boundary of two media only depend on the ratio of the wave velocities, the index of refraction. Because this index depends on the frequency, light passing a boundary usually displays dispersion, like in a prism. Dispersion gives rise to various special natural phenomena like a rainbow or a halo, or artificial ones, like Newton’s rings.
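
As a sketch, Snell’s law sin i = n sin r, with the index of refraction n as the ratio of the wave velocities, shows how a frequency-dependent index produces dispersion; the indices used below for two colours in glass are merely illustrative values:

import math

def refraction_angle(incidence_deg, n):
    # Snell's law: sin(incidence) = n * sin(refraction)
    return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / n))

print(refraction_angle(45, 1.51))   # red light in glass (assumed index): about 27.9 degrees
print(refraction_angle(45, 1.53))   # violet light (assumed index): about 27.5 degrees, bent more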

If the boundary or the medium has a periodic character like the wave itself, a special form of reflection or refraction occurs if the wavelength fits the periodicity of the lattice. In optical technology, diffraction and reflection gratings are widely applied. Each crystal lattice forms a natural three-dimensional grating for X-rays, if their wavelength corresponds to the periodicity of the crystal lattice according to Bragg’s law.

These are characteristic kinetic phenomena, not because they lack a physical aspect, but because they can be explained satisfactorily by a kinetic theory of wave motion.

 

 


 

 

4.3. A wave packet as an aggregate

 

A signal like a sound is a pattern of oscillations moving as an aggregate of waves from the source to the detector. This motion has a physical aspect as well, for the transfer of a signal requires energy. But the message is written in the oscillation pattern, being a signal if a human or an animal receives and recognizes it.

A signal composed from a set of periodic waves is called a wave packet. Although a wave packet is a kinetic subject, it achieves its foremost meaning if considered interlaced with a physical subject like an electron having a wave-particle character. The wave-particle duality has turned out to be equally fundamental and controversial. Neither experiments nor theories leave room for doubt about the existence of the wave-particle duality. However, it seems to contradict common sense, and its interpretation is the object of hot debates.

 

René Descartes and Christiaan Huygens assumed that space is completely filled up with matter, that space and matter coincide. They considered light to be a succession of mechanical pulses in space.[18] From the fact that planets move without friction, Isaac Newton inferred that interplanetary space is empty. He supposed that light consists of a stream of particles. In order to explain interference phenomena like the rings named after him, he ascribed the light particles (or the medium) properties that we now consider to apply to waves.[19]

Between 1800 and 1825, Thomas Young in England and Augustin Fresnel in France developed the wave theory of light. Common sense dictated waves and particles to exclude each other, meaning that light is either one or the other. When the wave theory turned out to explain more phenomena than the particle model, the battle was over.[20] Light is wave motion, as was later confirmed by James Clerk Maxwell’s theory of electromagnetism. Nobody realized that this conclusion was a non sequitur. At most, it could be said that light has wave properties, as follows from the interference experiments of Young and Fresnel, and that Newton’s particle theory of light was refuted.[21]

Nineteenth-century physics discovered and investigated many other rays. Some looked like light, such as infrared and ultraviolet radiation (about 1800), radio waves (1887), X-rays and gamma rays (1895-96). These turned out to be electromagnetic waves. Other rays consist of particles. Electrons were discovered in cathode rays (1897), in the photoelectric effect and in beta-radioactivity. Canal rays consist of ions and alpha rays of helium nuclei.[22]

At the end of the nineteenth century, this gave rise to a rather neat and rationally satisfactory worldview. Nature consists partly of particles, partly of waves, or of fields in which waves are moving. This dualistic worldview assumes that something is either a particle or a wave, but never both, tertium non datur.

It makes sense to distinguish a dualism, a partition of the world into two compartments, from a duality, a two-sidedness. The dualism of waves and particles rested on common sense; one could not imagine an alternative. However, twentieth-century physics had to abandon this dualism perforce and to replace it by the wave-particle duality. All elementary things have both a wave and a particle character.

 

Almost in passing, another phenomenon, called quantization, made its appearance. It turned out that some magnitudes are not continuously variable. The mass of an atom can only have certain values. Atoms emit light at sharply defined frequencies. Electric charge is an integral multiple of the elementary charge. In 1905 Albert Einstein suggested that light consists of quanta of energy.[23] In Niels Bohr’s atomic theory (1913), the angular momentum of an electron in its atomic orbit is an integer times Planck’s reduced constant.[24] Until Erwin Schrödinger and Werner Heisenberg introduced modern quantum mechanics in 1926, atomic scientists repeatedly found new quantum numbers with corresponding rules.

The dualism of matter and field, of particles and waves, was productive as long as its components were studied separately. Problems arose when scientists started to work at the interaction between matter and field. The first problem concerned the specific emission and absorption of light restricted to spectral lines, characteristic for chemical elements and their compounds. Niels Bohr tentatively solved this problem in 1913. The spectral lines correspond to transitions between stationary energy states. The second question was under which circumstances light can be in equilibrium with matter, for instance in an oven. This concerns the shape of the continuous spectrum of black-body radiation. After a half century of laborious experimental and theoretical work, this problem led to Max Planck’s theory (1900) and Albert Einstein’s photon hypothesis (1905). According to Planck, the interaction between matter and light of frequency f requires the exchange of energy packets of E=hf (h being Planck’s constant). Einstein suggested that light itself consists of quanta of energy. Later he added that these quanta have linear momentum as well, proportional to the wave number, p=E/c=hσ=h/λ. The relation between energy and frequency (E=hf), applied by Bohr in his atomic theory of 1913, was experimentally confirmed by Robert Millikan in 1916, and the relation between momentum and wave number (p=hσ) in 1922 by Arthur Compton.[25]

Until 1920, Planck and Einstein did not have many adherents to their views. As late as 1924, Bohr, Kramers and Slater published a theory of electromagnetic radiation, fighting the photon hypothesis at all cost.[26] They went as far as abandoning the laws of conservation of energy and momentum at the atomic level. That was after the publication of the Compton effect, describing the collision of a gamma quantum with an electron as conserving energy and momentum. Within a year, experiments by Bothe and Geiger proved the ‘BKS-theory’ to be wrong. In 1924 Bose and Einstein derived Planck’s law from the assumption that electromagnetic radiation in a cavity behaves like an ideal gas consisting of photons.

In 1923, Louis de Broglie published a mathematical paper about the wave-particle character of light.[27] Applying the theory of relativity, he predicted that electrons too would have a wave character. The motion of a particle or energy quantum does not correspond to a single monochromatic wave but to a group of waves, a wave packet. The speed of a particle cannot be related to the wave velocity (λ/T=f/σ), which for a material particle is larger than the speed of light. Instead, the particle speed corresponds to the speed of the wave packet, the group velocity. This is the derivative of frequency with respect to wave number (df/dσ) rather than their quotient. Because of the relations of Planck and Einstein, this is the derivative of energy with respect to momentum as well (dE/dp). At most, the group velocity equals the speed of light.[28]
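
De Broglie’s argument can be sketched numerically. For a free relativistic particle E = √((pc)² + (mc²)²); in units with c=1 and with hypothetical values for momentum and mass, the quotient f/σ = E/p exceeds the speed of light, whereas the derivative dE/dp stays below it:

import math

def energy(p, m):
    return math.sqrt(p * p + m * m)   # relativistic energy, units with c = 1

p, m, h = 1.0, 1.0, 1e-6
E = energy(p, m)
phase_velocity = E / p                # the wave velocity f/sigma: about 1.41 > c
group_velocity = (energy(p + h, m) - energy(p - h, m)) / (2 * h)   # dE/dp: about 0.71 < c
print(phase_velocity, group_velocity, phase_velocity * group_velocity)   # product = c^2 = 1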

In order to test these suggestions, physicists had to find out whether electrons show interference phenomena. Experiments by Clinton Davisson and Lester Germer in America and by George P. Thomson in England (1927) convincingly proved the wave character of electrons, thirty years after Thomson’s father Joseph J. Thomson established the particle character of electrons. As predicted by Louis de Broglie, the linear momentum turned out to be proportional to the wave number. Afterwards the wave character of atoms and nucleons was demonstrated experimentally.

We have seen that it took quite a long time before physicists accepted the particle character of light. Likewise, the wave character of electrons was not accepted immediately, but about 1930 no doubt was left among pre-eminent physicists.

This meant the end of the wave-particle (or matter-field) dualism, implying all phenomena to have either a wave character or a particle character, and the beginning of wave-particle duality being a universal property of matter. In 1927, Niels Bohr called the wave and particle properties complementary.[29]

 

An interesting aspect of a wave is that it concerns a movement in motion, a propagating oscillation. Classical mechanics restricted itself to the motion of unchangeable pieces of matter. For macroscopic bodies like billiard balls, bullets, cars and planets, this is a fair approximation, but for microscopic particles it is not.[30] The experimentally established fact of photons, electrons, and other microsystems having both wave and particle properties does not fit the still popular mechanistic worldview. However, the theory of characters accounts for this fact as follows.

The character of an electron consists of an interlacement of two characters, a generic kinetic wave character and an accompanying specific particle character that is physically qualified. The specific character (different for different physical kinds of particles) determines primarily how e.g. electrons interact with other physical subjects, and secondarily which magnitudes play a role in this interaction. These characteristics distinguish the electron from other particles, like protons and atoms being spatially founded, and like photons having a kinetic foundation (5.2-5.4).

Interlaced with the specific character is a pattern of motion having the kinetic character of a wave packet. Electrons share this generic character with all other particles. In experiments demonstrating the wave character, there is little difference between electrons, protons, neutrons, or photons. The generic wave character has primarily a kinetic qualification and secondarily a spatial foundation (4.2). The specific physical character determines the boundary conditions and the actual shape of the wave packet. Its wavelength is proportional to its linear momentum, its frequency to its energy. A free electron’s wave packet looks different from that of an electron bound in a hydrogen atom.

The wave character representing the electron’s motion has a tertiary characteristic as well, anticipating physical interaction. The wave function describing the composition of the wave packet determines the probability of the electron’s performance as a particle in any kind of interaction.

 

A purely periodic wave is infinitely extended in both space and time. It is unfit to give an adequate description of a moving particle, being localized in space and time. A packet of waves having various amplitudes, frequencies, wavelengths, and phases delivers a pattern that is more or less localized. The waves are superposed such that the net amplitude is zero almost everywhere in space and time. Only in a relatively small interval (to be indicated by Δ) the net amplitude differs from zero.

Let us restrict the discussion to rectilinear motion of a wave packet at constant speed. Now the motion is described by four magnitudes. These are the position (x) of the packet at a certain instant of time (t), the wave number (σ) and the frequency (f).

The packet is an aggregate of waves with frequencies varying within an interval Δf and wave numbers varying within an interval Δσ. Generally, it is provable that the wave packet in the direction of motion has a minimum dimension Δx such that Δx.Δσ>1. In order to pass a certain point, the packet needs a time Δt, for which Δt.Δf>1. If we want to compress the packet (Δx and Δt small), the packet consists of a wide spectrum of waves (Δσ and Δf large). Conversely, a packet with a well defined frequency (Δσ and Δf small) is extended in time and space (Δx and Δt large). It is impossible to produce a wave packet whose frequency (or wave number) has a precise value, and whose dimension is point-like simultaneously. If we make the variation Δσ small, the length of the wave packet Δx is large. Or we try to localize the packet, but then the wave number shows a large variation.
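
This reciprocity can be sketched with a Gaussian wave packet. Compressing the packet in space broadens its spectrum of wave numbers, such that the product of the two widths remains constant, of the order of one; the grid and the widths below are hypothetical values:

import numpy as np

N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
for width in (0.5, 2.0, 8.0):                  # spatial widths to try
    psi = np.exp(-x ** 2 / (2 * width ** 2))   # Gaussian packet of the given width
    spectrum = np.abs(np.fft.fft(psi)) ** 2    # intensity of the composing waves
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N) # their (angular) wave numbers
    dk = np.sqrt(np.sum(k ** 2 * spectrum) / np.sum(spectrum))
    print(width, dk, width * dk)               # the product stays at about 0.71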

Sometimes a wave packet is longer than one might believe. A photon emitted by an atom has a dimension of Δx=cΔt, Δt being equal to the mean duration of the atom’s metastable state before the emission. Because Δt is of the order of 10⁻⁸ sec and c=3×10⁸ m/sec, the photon’s ‘coherence length’ in the direction of motion is several metres. This is confirmed by interference experiments, in which the photon is split into two parts, to be reunited after the parts have traversed different paths. If the path difference is less than a few metres, interference will occur, but this is not the case if the path difference is much longer. The coherence length of photons in a laser ray is many kilometres long, because in a laser, Δt has been made artificially long.

An oscillating system emits or absorbs a wave packet as a whole. During its motion, the coherence of the composing waves is not always spatial. A wave packet can split itself without losing its kinetic coherence. This coherence is expressed by phase relations, as can be demonstrated in interference experiments as described above. In general, two different wave packets do not interfere in this way, because their phases are not correlated. This means that a wave packet maintains its kinetic identity during its motion. The physical unity of the particle comes to the fore when it is involved in some kind of interaction, for instance if it is absorbed by an atom causing a black spot on a photographic plate or a pulse in a Geiger-Müller counter. Emission and absorption are physically qualified events, in which an electron or a photon acts as an indivisible whole.

 

The identification of a particle with a wave packet seems to be problematic for various reasons. The first problem, the possible splitting and absorption of a wave packet, is mentioned above.

Second, the wave packet of a freely moving particle always expands, because the composing waves have different velocities.[31] Even if the wave packet is initially well localized, gradually it is smeared out over an increasing part of space and time. However, the assumption that the wave function satisfies a linear wave equation is a simplification of reality. Wave motion can be non-linearly represented by a ‘soliton’ that does not expand. Unfortunately, a non-linear wave equation is mathematically more difficult to treat than a linear one.

Third, in 1927 Werner Heisenberg observed that the wave packet is subject to a law known as indeterminacy relation, uncertainty relation or Heisenberg relation. As a matter of fact, there is as little agreement about its definition as about its name.

Combining the relations Δx.Δσ>1 and Δt.Δf>1 with those of Planck (E=hf) and Einstein (p=hσ) leads to Heisenberg’s relations for a wave packet:[32] Δx.Δp>h and Δt.ΔE>h. The meaning of Δx etc. is given above. In particular, Δt is the time the wave packet needs to pass a certain point.[33] This interpretation is the oldest one, for the indeterminacy relations – without Planck’s constant – were applied in communication theory long before the birth of quantum mechanics.[34] It is interesting to observe that the indeterminacy relations are not characteristic for quantum mechanics, but for wave motion. The relations are an unavoidable consequence of the wave character of particles and of signals. I shall discuss some alternative interpretations, in particular paying attention to the Heisenberg relation between energy and time.[35]

 

Quantum mechanics connects any variable magnitude with a Hermitean operator having eigenfunctions and eigenvalues (2.3). The eigenvalues are the possible values for the magnitude in the system concerned. In a measurement, the squared absolute value of the scalar product of the system’s state function with an eigenfunction of the operator is the probability that the corresponding eigenvalue will be realized.

If two operators act successively on a function, the result may depend on their order. The Heisenberg relation Δx.Δp>h can be derived as a property of the non-commuting operators for position and linear momentum. In fact, each pair of non-commuting operators gives rise to a similar relation. This applies, e.g., to each pair out of the three components of angular momentum.[36] Consequently, only one component of an electron’s magnetic moment (usually along a magnetic field) can be measured. The other two components are undetermined, as if the electron performs a precessional motion about the direction of the magnetic field.
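
A concrete sketch: for the spin of an electron, the operators for the components of angular momentum are (ħ/2) times the Pauli matrices, and two of them do not commute. The following Python fragment (with ħ set to 1) verifies that the commutator of Sx and Sy equals iħSz, hence is not zero:

import numpy as np

hbar = 1.0                                       # work in units with hbar = 1
sx = 0.5 * hbar * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * hbar * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * hbar * np.array([[1, 0], [0, -1]], dtype=complex)

commutator = sx @ sy - sy @ sx                   # [Sx, Sy]
print(np.allclose(commutator, 1j * hbar * sz))   # True: the components do not commute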

Remarkably, there is no operator for kinetic time. Therefore, some people deny the existence of a Heisenberg relation for time and energy.[37] On the other hand, the operator for energy, called Hamilton-operator or Hamiltonian, is very important. Its eigenvalues are the energy levels characteristic of, e.g., an atom or a molecule. Each operator commuting with the Hamiltonian represents a ‘constant of the motion’ subject to a conservation law.[38]

 

From the wave function, the probability to find a particle in a certain state can be calculated. Now the indeterminacy is a measure of the standard deviation, the statistical inaccuracy of a probability calculation. The indeterminacy of time can be interpreted as the mean lifetime of a metastable state. If the lifetime is large (and the state is relatively stable), the energy of the state is well defined. The rest energy of a short-lived particle is only determined within the margin given by the Heisenberg relation for time and energy.

This interpretation is needed to understand why an atom is able to absorb a light quantum emitted by another atom in similar circumstances. Because the photon carries linear momentum, both atoms receive momentum and kinetic energy. The photon’s energy would therefore fall short of the energy needed to excite the second atom. Usually this shortage is smaller than the uncertainty in the energy levels concerned. However, this is not always the case for atomic nuclei. Unless the two nuclei are moving towards each other, the process of emission followed by absorption would be impossible. Rudolf Mössbauer discovered this consequence of the Heisenberg relations in 1958. Since then, the Mössbauer effect has become an effective instrument for investigating nuclear energy levels.
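
A rough calculation (in Python, assuming textbook values for the 14.4 keV transition of iron-57) shows why free nuclei cannot reabsorb each other’s gamma quanta, whereas atoms can absorb each other’s light:

# Recoil energy of a free Fe-57 nucleus emitting a 14.4 keV gamma quantum,
# compared with the level width following from the Heisenberg relation.
E_gamma = 14.4e3      # eV, energy of the gamma quantum
M_c2 = 57 * 931.5e6   # eV, approximate rest energy of the Fe-57 nucleus
E_recoil = E_gamma**2 / (2 * M_c2)   # about 2e-3 eV

hbar = 6.582e-16      # eV·s, Planck's reduced constant
tau = 1.4e-7          # s, approximate mean lifetime of the excited nuclear state
width = hbar / tau    # about 5e-9 eV
print(E_recoil, width)
# The recoil shortage exceeds the level width by five orders of magnitude,
# so emission followed by absorption is blocked for free nuclei.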

 

The position of a wave packet is measurable within a margin of Δx and its linear momentum within a margin of Δp. Both are as small as experimental circumstances permit, but their product has a minimum value determined by Heisenberg’s relation. The accuracy of the measurement of position restricts that of momentum.

Initially the indeterminacy was interpreted as an effect of the measurement disturbing the system. The measurement of one magnitude disturbs the system such that another magnitude cannot be measured with unlimited accuracy. Heisenberg explained this by imagining a microscope exploiting light to determine the position and the momentum of an electron.[39] Later, this turned out to be an unfortunate view. It seems better to consider the Heisenberg relations to be the cause of the limited accuracy of measurement, rather than its effect.

The Heisenberg relation for energy and time has a comparable consequence for the measurement of energy. If a measurement has duration Δt, its accuracy cannot be better than ΔE>h/Δt.

 

In quantum mechanics, the law of conservation of energy takes a slightly different form. According to the classical formulation, the energy of a closed system is constant. In this statement, time does not occur explicitly. The system is assumed to be isolated for an indefinite time, and that is questionable. Heisenberg’s relation suggests a new formulation. For a system isolated during a time interval Δt, the energy is constant within a margin of ΔE≈h/Δt. Within this margin, the system shows spontaneous energy fluctuations, only relevant if Δt is very small.[40]

According to quantum field theory, a physical vacuum is not an empty space. Spontaneous fluctuations may occur. A fluctuation leads to the creation and annihilation of a virtual photon or a virtual pair consisting of a particle and an antiparticle, having an energy of ΔE, within the interval Δt<h/ΔE. Meanwhile the virtual particle or pair is able to mediate an interaction, e.g. in a collision between two real particles.[41] Virtual particles are not directly observable but play a part in several real processes.
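
As an illustration (in Python; the electron’s rest energy of 0.511 MeV is a standard value), the maximal lifetime of a virtual electron-positron pair follows directly from Δt<h/ΔE:

h = 6.626e-34                   # J·s, Planck's constant
dE = 2 * 0.511e6 * 1.602e-19    # J, minimal energy of an electron-positron pair
dt = h / dE
print(dt)                       # about 4e-21 s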

 

The amplitude of waves in water, sound, and light corresponds to a measurable, physically real magnitude. In water this is the height of its surface, in sound the pressure of the air, in light the electromagnetic field strength. The energy of the wave is proportional to the square of the amplitude. This interpretation is not applicable to the waves of material particles like electrons. In this case the wave has a less concrete character; it has no direct physical meaning. Even in mathematical terms, the wave is not real, for the wave function is complex-valued.

In 1926, Max Born offered a new interpretation, since then commonly accepted.[42] He stated that a wave function (real or complex) is a probability function. In a footnote added in proof, Born observed that the probability is proportional to the square of the wave function.[43]

The wave function we are talking about is prepared at an earlier interaction, for instance, the emission of the particle. It changes during its motion, and one of its possibilities is realized at the next interaction, like the particle’s absorption. The wave function expresses the transition probability between the initial and the final state.[44]

This probability may concern any measurable property that is variable. Hence, it does not concern natural constants like the speed of light or the charge of the electron. According to Born, the probability interpretation bridges the apparently incompatible wave and particle aspects.[45] Wave properties determine the probability of position, momentum, etc., traditionally considered properties of particles.

Classical mechanics used statistics as a mathematical means, assuming that the particles behave deterministically in principle. In 1926, Born’s probability interpretation put a definitive end to mechanist determinism, which had already lost its credibility because of radioactivity. Waves and wave motion are still determined, e.g. by Schrödinger’s equation, even if no experimental method exists to determine the phase of a wave. However, the wave function determines only the probability of future interactions.[46] In quantum mechanics, the particles themselves behave stochastically.

Even more strange is that chance is subject to interference. In the traditional probability calculus (2.4) probabilities can be added or multiplied. Nobody ever imagined that probabilities could interfere. Interference of waves may result in an increase of probability, but in a decrease as well, even in the extinction of probability. Hence, besides a probability interpretation of waves, we have a wave interpretation of probability.[47]
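
A minimal sketch (in Python) shows the difference between adding probabilities and adding amplitudes; the phase difference of the two routes is an arbitrary choice:

import numpy as np

# Two routes to the same outcome, each with probability 0.5:
a1 = np.sqrt(0.5) * np.exp(1j * 0.0)     # amplitude of the first route
a2 = np.sqrt(0.5) * np.exp(1j * np.pi)   # amplitude of the second route, opposite phase

p_classical = abs(a1)**2 + abs(a2)**2    # 1.0: probabilities simply add
p_quantum = abs(a1 + a2)**2              # 0.0: the amplitudes extinguish each other
print(p_classical, p_quantum)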

Outside quantum mechanics, this is still unheard of, not only in daily life and the humanities, but in sciences like biology and ethology as well. The reason is that interference of probabilities only occurs as long as there is no physical interaction by which a chance realizes itself.[48] The absence of physical interaction is an exceptional situation. It only occurs if the system concerned has no internal interactions (or if these are frozen), as long as it moves freely. In macroscopic bodies, interactions occur continuously and interference of probabilities does not occur. Therefore, the phenomenon of interference of chances is unknown outside quantum physics.[49]

 

The concept of probability or chance anticipates the physical relation frame, because a chance can only be realized by means of a physical interaction. An open-minded spectator observes an asymmetry in time. Probability always concerns future events. It draws a boundary line between a possibility in the present and a realization in the future. For this realization, a physical interaction is needed. The wave equation and the wave function describe probabilities, not their realization. The wave packet anticipates a physical interaction leading to the realization of a chance, but is itself a kinetic subject, not a physical subject. If the particle realizes one of its possibilities, it simultaneously destroys all alternative possibilities. In that respect, there is no difference between quantum mechanics and classical theories of probability.

As long as the position of an electron is not determined, its wave packet is extended in space and time. As soon as an atom absorbs the electron at a certain position, the probability to be elsewhere collapses to zero.[50] This so-called reduction of the wave packet requires a velocity far exceeding the speed of light. However, this reduction concerns the wave character, not the physical character of the particle. It does not counter the physical law that no material particle can move faster than light.

Likewise, Schrödinger’s equation describes the states of an atom or molecule and the transition probabilities between states. It does not account for the actual transition from a state to an eigenstate, when the system experiences a measurement or another kind of interaction.[51]

Is the problem of the reduction of the wave packet relevant for macroscopic bodies as well? Historically, this question is concentrated in the problem of Schrödinger’s cat, locked up alive in a non-transparent case. A mechanism releases a mortal poison at an unpredictable instant, for instance controlled by a radioactive process. As long as the case is not opened, one may wonder whether the cat is still alive. If quantum mechanics is applied consistently, the state of the cat is a superposition of two eigenstates, dead and alive, respectively.

The principle of decoherence, discovered at the end of the twentieth century, provides a satisfactory answer. For a macroscopic body, a state being a combination of eigenstates will spontaneously change very fast into an eigenstate, because of the many interactions taking place in the macroscopic system itself. This solves the problem of Schrödinger’s cat, for each superposition of dead and alive transforms itself almost immediately into a state of dead or alive.[52] The principle of decoherence is part of a realistic interpretation of quantum physics. It does not idealize the ‘reduction of the wave packet’ to a projection in an abstract state space. It takes into account the character of the macroscopic system in which a possible state is realized by means of a physical interaction.

 

The so-called measurement problem constitutes the nucleus of what is usually called the interpretation of quantum mechanics.[53] It is foremost a philosophical problem, not a physical one, which is remarkable, because measurement is part of experimental physics, and the starting point of theoretical physics. After the development of quantum physics, both experimental and theoretical physicists have investigated the relevance of symmetry, and the structure of atoms and molecules, solids and stars, and subatomic structures like nuclei and elementary particles. Apparently, this has escaped the attention of many philosophers, who are still discussing the consequences of Heisenberg’s indeterminacy relations.  

 


 

 

4.4. Symmetric and antisymmetric wave functions

 

The concept of probability is applicable to a single particle as well as to a homogeneous set of similar particles, a gas consisting of molecules, electrons or photons. In order to study such systems, statistical physics has developed various mathematical methods since circa 1860. A distribution function points out how the energy is distributed over the particles, how many particles have a certain energy value, and how the average energy depends on temperature. In any distribution function, the temperature is an important equilibrium parameter.

Classical physics assigned each particle its own state, but in quantum physics, this would lead to wrong results. It is better to specify the possible states, and to calculate how many particles occupy a given state, without questioning which particle occupies which state. It turns out that there are two entirely different cases.[54]

In the first case, the occupation number of particles in a well-defined state is unlimited. Bosons like photons are subject to a distribution function derived in 1924 by Satyendra Bose and published by Albert Einstein, hence called Bose-Einstein statistics. Bosons have an integral spin.[55] The occupation number of each state may vary from zero to infinity.

In the other case, each well-defined state is occupied by at most one particle, according to Wolfgang Pauli’s exclusion principle. The presence of a particle in a given state excludes the presence of another similar particle in the same state. Fermions like electrons, protons, and neutrons have a half-integral spin. They are subject to the distribution function that Enrico Fermi and Paul Dirac derived in 1926.

In both cases, the distribution approximates the classical Maxwell-Boltzmann distribution function, if the mean occupation of available states is much smaller than 1. This applies to molecules in a classical gas (2.4).
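
The following sketch (in Python) compares the mean occupation numbers of the three distributions as a function of x=(E–μ)/kT, confirming the classical limit for dilute occupation:

import numpy as np

def maxwell_boltzmann(x):   # classical distribution
    return np.exp(-x)

def bose_einstein(x):       # bosons; x must be positive
    return 1.0 / np.expm1(x)

def fermi_dirac(x):         # fermions: at most one particle per state
    return 1.0 / (np.exp(x) + 1.0)

for x in (0.1, 1.0, 5.0, 10.0):
    print(x, maxwell_boltzmann(x), bose_einstein(x), fermi_dirac(x))
# For x >> 1 the three occupation numbers coincide: the classical limit.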

 

The distinction of fermions and bosons rests on permutation symmetry. In a finite set the elements can be ordered into a sequence and numbered using the natural numbers as indices. For n elements, this can be done in n!=1·2·3·…·n different ways. The n! permutations are symmetric if the elements are indistinguishable. Permutation symmetry is not spatial but quantitative.

In a system consisting of a number of similar particles, the state of the aggregate can be decomposed into a product of separate states for each particle apart.[56] A permutation in the serial order of similar particles should not have consequences for the state of the aggregate as a whole. However, in quantum physics only the square of a state is relevant to probability calculations. Hence, exchanging two particles allows of two possibilities: either the state is multiplied by +1 and does not change, or it is multiplied by –1. In both cases, a repetition of the exchange produces the original state. In the first case, the state is called symmetric with respect to a permutation, in the second case antisymmetric.

In the antisymmetric case, if two particles occupied the same state, an exchange would simultaneously result in multiplying the state by +1 (because nothing changes) and by –1 (because of antisymmetry), leading to a contradiction. Therefore, two particles cannot simultaneously occupy the same state. This is the exclusion principle concerning fermions. No comparable principle applies to bosons, having symmetric wave functions with respect to permutation.
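
The argument can be made concrete in a small sketch (in Python; the two single-particle functions are arbitrary illustrations):

import numpy as np

x = np.linspace(-1.0, 1.0, 50)
phi_a = np.exp(-x**2)          # an arbitrary single-particle state
phi_b = x * np.exp(-x**2)      # another arbitrary single-particle state

def antisym(f, g):
    # Antisymmetric two-particle state: psi(x1,x2) = f(x1)g(x2) - g(x1)f(x2)
    return np.outer(f, g) - np.outer(g, f)

psi = antisym(phi_a, phi_b)
print(np.allclose(psi.T, -psi))               # True: exchange multiplies the state by -1
print(np.allclose(antisym(phi_a, phi_a), 0))  # True: two fermions cannot share one state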

Both a distribution function like the Fermi-Dirac statistics and Pauli’s exclusion principle are only applicable to a homogeneous aggregate of similar particles. In a heterogeneous aggregate like a nucleus, they must be applied to the protons and neutrons separately. 

 

The distinction of fermions and bosons, and the exclusion principle for fermions, have a fundamental significance for the understanding of the characters of material things containing several similar particles. To a large extent, it explains the orbital structure of atoms and the composition of nuclei from protons and neutrons.

When predicting the wave character of electrons, Louis de Broglie suggested that the stability of the electronic orbit in a hydrogen atom is explainable by assuming that the electron moves around the nucleus as a standing wave. This implies that the circumference of the orbit is an integral number times the wavelength. From the classical theory of circular motion, he derived that the orbital angular momentum should be an integral number times Planck’s reduced constant (h/2π). This is precisely the quantum condition applied by Niels Bohr in 1913 in his first atomic theory.[57]

The atomic physicists at Copenhagen, Göttingen, and Munich considered this idea rather absurd, but it received support from Albert Einstein, and it inspired Erwin Schrödinger to develop his wave equation.[58] In a stable system, Schrödinger’s equation is independent of time and its solutions are stationary waves, comparable to the standing waves in a violin string or an organ pipe. Only a limited number of frequencies are possible, corresponding to the energy levels in atoms and molecules.[59] Although one often speaks of the Schrödinger equation, there are many variants, one for each physical character. Each variant specifies the system’s boundary conditions and expresses the law for the possible motions of the particles concerned.

 

In the practice of solid-state physics, the exclusion principle is more important than the Schrödinger equation. This can be elucidated by discussing the model of particles confined to a rectangular box. Again, the wave functions look like standing waves.

In a good approximation the valence electrons in a metal or semiconductor are not bound to individual atoms but are free to move around. The mutual repulsive electric force of the electrons compensates for the attraction by the positive ions. The electron’s energy consists almost entirely of kinetic energy, E=p²/2m, if p is its linear momentum and m its mass.

Because the position of the electron is confined to the box, in the Heisenberg relation Δx equals the length of the box (analogously for y and z). Because Δx is relatively large, Δp is small and the momentum is well defined. Hence the momentum characterizes the state of each electron and the energy states are easy to calculate. In a three-dimensional momentum space a state denoted by the vector p occupies a volume Δp.[60] According to the exclusion principle, a low energy state is occupied by two electrons (because there are two possible spin states), whereas high-energy states are empty. In a metal, this leads to a relatively sharp separation of occupied and empty states. The mean kinetic energy of the electrons is almost independent of temperature, and the specific heat is proportional to temperature, strikingly different from other aggregates of particles.
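
A minimal calculation (in Python; the electron density is a textbook value for copper) gives the order of magnitude of this sharp separation, the Fermi energy:

import numpy as np

hbar = 1.0546e-34   # J·s, Planck's reduced constant
m_e = 9.109e-31     # kg, electron mass
n = 8.5e28          # per cubic metre, density of valence electrons in copper

# Free-electron result: E_F = (hbar**2 / 2m) * (3 * pi**2 * n)**(2/3)
E_F = hbar**2 / (2 * m_e) * (3 * np.pi**2 * n)**(2.0 / 3.0)
print(E_F / 1.602e-19)   # about 7 eV, much larger than kT (about 0.025 eV) at room temperature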

Mechanical oscillations or sound waves in a solid form wave packets. These bosons are called phonons or sound particles. Bose-Einstein statistics leads to Peter Debije’s law for the specific heat of a solid. At low temperatures the specific heat is proportional to the third power of temperature.[61] A similar situation applies to an oven, in which electromagnetic radiation is in thermal equilibrium. According to Planck’s law of radiation, the energy of this boson gas is proportional to the fourth power of temperature.[62] Hence, the difference between fermion and boson aggregates comes quite dramatically to the fore in the temperature dependence of their energy. Amazingly, the physical character of the electrons, phonons, and photons plays a subordinate part compared to their kinetic character. Largely, the symmetry of the wave function determines the properties of an aggregate. Consequently, a neutron star has much in common with an electron gas in a metal.

 

The existence of antiparticles is a consequence of a symmetry of the relativistic wave equation. The quantum mechanics of Erwin Schrödinger and Werner Heisenberg in 1926 was not relativistic, but about 1927 Paul Dirac found a relativistic formulation.[63] From his equation follows the electron’s half-integral angular momentum, not as a spinning motion as conceived by its discoverers, Samuel Goudsmit and George Uhlenbeck, but as a symmetry property (still called spin).

Dirac’s wave equation had an unexpected result, to wit the existence of negative energy eigenvalues for free electrons. According to relativity theory, the energy E and momentum p of a freely moving particle with rest energy Eo=moc² are related by the formula E²=Eo²+(cp)². For a given value of the linear momentum p, this equation has both positive and negative solutions for the energy E. The positive values are minimally equal to the rest energy Eo and the negative values are maximally –Eo. This leaves a gap of twice the rest energy, about 1 MeV for an electron.[64] Classical physics could ignore negative solutions, but this is not allowed in quantum physics. Even if the energy difference between positive and negative energy levels is large, the transition probability is not zero. In fact, each electron should spontaneously jump to a negative energy level, releasing a gamma particle having an energy of at least 1 MeV.
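
In numbers (in Python; the electron’s rest energy of 0.511 MeV is a standard value), the two branches and the gap look as follows:

import numpy as np

E0 = 0.511   # MeV, rest energy of the electron
cp = np.linspace(0.0, 2.0, 5)         # MeV, a range of momentum values
E_positive = np.sqrt(E0**2 + cp**2)   # minimally +E0
E_negative = -E_positive              # maximally -E0
print(E_positive[0] - E_negative[0])  # the gap at cp=0: 2*E0, about 1 MeV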

Dirac had recourse to Pauli’s exclusion principle. By assuming all negative energy levels to be occupied, he could explain why these are unobserved most of the time, and why many electrons have positive energy values. An electron in one of the highest negative energy levels may jump to one of the lowest positive levels, absorbing a gamma particle having an energy of at least 1 MeV. The reverse, a jump downwards, is only possible if in the nether world of negative energy levels at least one level is unoccupied. Influenced by an electric or magnetic field, such a hole moves as if it were a positively charged particle. Initially, Dirac assumed protons to correspond to these holes, but it soon became clear that the rest mass of a hole should be the same as that of an electron.

After Carl Anderson in 1932 discovered the positron, a positively charged particle having the electron’s rest mass, this particle was identified with a hole in Dirac’s nether world.[65] Experiments pointed out that an electron is able to annihilate a positron, releasing at least two gamma particles.[66]

Meanwhile it has been established that not only electrons but all particles, bosons included, have antiparticles. Only the photon is identical to its antiparticle. The existence of antiparticles rests on several universally valid laws of symmetry. A particle and its antiparticle have the same mean lifetime, rest energy and spin, but opposite values for charge, baryon number, or lepton number (5.2).

However, if the antiparticles are symmetrical to particles, why are there so few? (Or why is Dirac’s nether world nearly completely occupied?) Probably, this problem can only be solved within the framework of a theory about the early development of the cosmos.

 

The image of an infinite set of unobservable electrons having negative energy strongly defeats common sense. However, it received unsolicited support from the so-called band theory in solid-state physics, a refinement of the earlier discussed free-electron model. The influence of the ions is not completely compensated for by the electrons. An electric field remains, having the same periodic structure as the crystal. Taking this field into account, Rudolf Peierls developed the band model. It explains various properties of solids quite well, both quantitatively and qualitatively.

A band is a set of neighbouring energy levels separated from other bands by an energy gap.[67] It may be fully or partly occupied by electrons, or it is unoccupied. Both full and empty bands are physically inert. In a metal, at least one band is partly occupied, partly unoccupied by electrons. An insulator has only full (i.e., entirely occupied) bands besides empty bands. The same applies to semiconductors, but now a full band is separated from an empty band by a relatively small gap. According to Rudolf Peierls in 1929, if energy is added in the form of heat or light (a phonon or a photon), an electron jumps from the lower band to the higher one, leaving a hole behind. This hole behaves like a positively charged particle. In many respects, an electron-hole pair in a semiconductor looks like an electron-positron pair. Only the energy needed for its formation is about a million times smaller.[68]

Another important difference should be mentioned. The set of electron states in Dirac’s theory is an ensemble. In the class of possibilities independent of time and space, half is mostly occupied, the other half is mostly empty. There is only one nether world of negative energy values. In contrast, the set of electrons in a semiconductor is a spatially and temporally restricted collection of electrons, in which some electron states are occupied, others unoccupied. There are as many of these collections as there are semiconductors. To be sure, Peierls was interested in an ensemble as well. In his case, this is the ensemble of all semiconductors of a certain kind. This may be copper oxide, the standard example of a semiconductor in his days, or silicon, the base material of modern chips. But this only confirms the distinction from Dirac’s ensemble of electrons.

 

Common sense did not turn out to be a reliable guide in the investigation of characters. At the end of the nineteenth century, classical mechanics was considered the paradigm of science. Yet, even then it was clear that daily experience stood in the way of the development of electromagnetism, for instance. The many models of the ether were more an inconvenience than a stimulus for research.

When relativity theory and quantum physics unsettled classical mechanics, this led to uncertainty about the reliability of science. At first, the oncoming panic was warded off by the reassuring thought that the new theories were only valid in extreme situations. These situations were, for example, a very high speed, a total eclipse, or a microscopic size. However, astronomy cannot cope without relativity theory, and chemistry fully depends on quantum physics. All macroscopic properties and phenomena of solid-state physics can only be explained in the framework of quantum physics.

Largely, daily experience rests on habituation. In hindsight, it is easy to show that classical mechanics collided with common sense in its starting phase, as we have seen with respect to the law of inertia (4.1). Action at a distance in Newton’s Principia evoked the abhorrence of his contemporaries, but the nineteenth-century public did not experience any trouble with this concept. In the past, mathematical discoveries would cause heated discussions, but the rationality of irrational numbers or the reality of non-Euclidean spaces is now accepted almost as a matter of course.

This does not mean that common sense is always wrong in scientific affairs. The irreversibility of physical processes is part of daily experience. In the framework of the mechanist worldview of the nineteenth century, physicists and philosophers have stubbornly but in vain tried to reduce irreversible processes to reversible motion, and to save determinism. This is also discernible in attempts to find (mostly mathematical) interpretations of quantum mechanics that allow of temporal reversibility and of determinism.[69]

Since the twentieth century, mathematics, science and technology dominate our society to such an extent, that new developments are easier to integrate in our daily experience than before. Science has taught common sense to accept that the characters of natural things and events are neither manifest nor evident. The hidden properties of matter and of living beings brought to light by the sciences are applicable in a technology that is accessible for anyone but understood by few. This technology has led to an unprecedented prosperity. Our daily experience adapts itself easily and eagerly to this development.

 



[1] Lucas 1973, 29.

[2] Stafleu 1987, 61.

[3] Reichenbach 1957, 116-119; Grünbaum 1968, 19, 70; 1973, 22.

[4] Cf. Grünbaum 1973, 22-23.

[5] Newton 1687, 13: ‘Every body continues in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed upon it.’

[6] Margenau 1950, 139.

[7] Maxwell 1877, 29; Cassirer 1921, 364. The uniformity of time is sometimes derived from a ceteris paribus argument. If one repeats a process at different moments under exactly equal circumstances, there is no reason to suppose that the process would proceed differently. In particular the duration should be the same. This reasoning is applicable to periodic motions, like in clocks. But it betrays a deterministic vision and is not applicable to stochastic processes like radioactivity. Einstein observed that the equality of covered distances provides a problem as well, because spatial relations are subject to the order of simultaneity, dependent on the state of motion of the clocks used for measuring uniform motion.

[8] Mach 1883, 217, observes: ‘Die Frage, ob eine Bewegung an sich gleichförmig sei, hat gar keinen Sinn. Ebensowenig können wir von einer “absoluten Zeit” (unabhängig von jeder Veränderung) sprechen.’ [‘The question of whether a motion is uniform in itself has no meaning at all. No more can we speak of an “absolute time” (independent of any change).’] In my view, the law of inertia determines the meaning of the uniformity of time. According to Reichenbach 1957, 117 it is an ‘empirical fact’ that different definitions give rise to the same ‘measure of the flow of time’: natural, mechanical, electronic or atomic clocks, the laws of mechanics, and the fact that the speed of light is the same for all observers. On the next page, Reichenbach says: ‘It is obvious, of course, that this method does not enable us to discover a “true” time, but that astronomers simply determine with the aid of the laws of mechanics that particular flow of time which the laws of physics implicitly define.’ However, if ‘truth’ means law conformity, ‘true time’ is the time subject to natural laws. It seems justified to generalize Reichenbach’s ‘empirical fact’, to become the law concerning the uniformity of kinetic time. Carnap 1966, chapter 8 posits that the choice of the metric of time rests on simplicity: the formulation of natural laws is simplest if one sticks to this convention. But then it is quite remarkable that so many widely different systems conform to this human agreement. More relevant is to observe that physicists are able to explain all kinds of periodic motions and processes based on laws that presuppose the uniformity of kinetic time. Such an explanation is completely lacking with respect to any alternative metric invented by philosophers.

[9] Periodicity is not merely a kinetic property, but a spatial one as well, as in crystals. We shall see that this gives rise to an interlacement of kinetic and spatial characters.

[10] A second is the duration of 9,192,631,770 periods of the radiation arising from the transition between two hyperfine levels of the atom caesium 133. This number gives an impression of the accuracy in measuring the frequency of electromagnetic microwaves.

[11] The phase (φ) indicates a moment in the periodic motion, the kinetic time (t) in proportion to the period (T): φ=t/T=ft modulo 1. If considered an angle, φ=2πft modulo 2π. A phase difference of ¼ between two oscillations means that one oscillation reaches its maximum when the other passes its central position.

[12] If the force is inversely proportional to the square of the distance (like the gravitational force of the sun exerted on a planet), the result is a periodic elliptic motion as well, but this one cannot be constructed as a combination of only two harmonic oscillations. Observe that an ellipse can be defined primarily (spatially) as a conic section, secondarily (quantitatively) by means of a quadratic equation between the co-ordinates [e.g., (x–x0)²/a²+(y–y0)²/b²=1], and tertiarily as a path of motion, either kinetically as a combination of periodic oscillations or physically as a planetary orbit.

[13] This equation, the law for harmonic motion, states that the acceleration a is proportional to the distance x of the subject to the centre of oscillation x0, according to: a = d²x/dt² = –(2πf)²(x–x0), wherein the frequency f=1/T is the inverse of the period T. The minus sign means that the acceleration is always directed to the centre.

[14] In an isotropic medium, the wavelength λ is the distance covered by a wave with wave velocity v in a time equal to the period T: λ=vT=v/f. The inverse of the wavelength is the wave number (the number of waves per metre), σ=1/λ=f/v. In three dimensions, the wave number is replaced by the wave vector k, which besides the number of waves per metre also indicates the direction of the wave motion. In a non-isotropic medium, the wave velocity depends on the direction.

[15] Usually, the wave velocity depends on the frequency as well. This phenomenon is called dispersion. Only light moving in a vacuum is free of dispersion. (The medium of light in vacuum is the electromagnetic field.) The observed frequency of a source depends on the relative motions of source, observer and medium. This is called the Doppler effect after Christian A. Doppler.

[16] Polarization concerns the direction of oscillation. A sound wave in air is longitudinal, the direction of oscillation being parallel to the direction of motion. Light is transversal, the direction of oscillation being perpendicular to the direction of motion. Light is called unpolarized if it contains waves having all directions of polarization. Light may be partly or completely polarized. It may be linearly polarized (having a permanent direction of oscillation) or circularly polarized (the direction of oscillation itself rotating at a frequency independent of the frequency of the wave itself).

[17] The non-relativistic Schrödinger equation and the relativistic Dirac equation describe the motion of material waves.

[18] Descartes believed that light does not move, but has a tendency to move. Huygens 1690, 15 denied that wave motion is periodical, see Sabra 1967, 212.

[19] Newton 1704, 278-282; Sabra 1967, chapter 13.

[20] Achinstein 1991, 24. Decisive was Foucault’s experimental confirmation in 1854 of the wave-theoretical prediction that light has a lower speed in water than in air. Newton’s particle theory predicted the converse.

[21] See Hanson 1963, 13; Jammer 1966, 31.

[22] Cathode rays, canal rays and X-rays are generated in a cathode tube, a forerunner of our television tube, fluorescent lamp and computer screen.

[23] Einstein never had problems with the duality of waves and particles, but he rejected its probability interpretation, see e.g. Klein 1964, Pais 1982, part IV.

[24] Pais 1991, 150. Planck’s reduced constant is h/2π. In Bohr’s theory the angular momentum L=nh/2π, n being the orbit’s number. For the hydrogen atom, the corresponding energy is En=E1/n², with E1=–13.6 eV, the energy of the first orbit.

[25] The particle character of electromagnetic radiation is easiest to demonstrate with high-energetic photons in gamma- or X-rays. The wave character is easiest proven with low-energetic radiation, with radio or microwaves.

[26] Bohr, Kramers, Slater 1924; see Slater 1975, 11; Pais 1982, chapter 22; 1991, 232-239.

[27] Darrigol 1986.

[28] The group velocity df/dσ=dE/dp equals approximately Δf/Δσ. E/p>c and dE/dp<c follow from the relativistic relation between energy and momentum, E=√(Eo²+c²p²), where Eo is the particle’s rest energy. Only if Eo=0, E/p=dE/dp=c. Observe that the word ‘group’ for a wave packet has a different meaning than in the mathematical theory of groups.

[29] Bohr 1934, chapter 2; Bohr 1949; Meyer-Abich 1965; Jammer 1966, chapter 7; 1974, chapter 4; Pais 1991, 309-316, 425-436. Bohr’s principle of complementarity presupposes that quantum phenomena only occur at an atomic level, which is refuted in solid state physics. According to Bohr, a measuring system is an indivisible whole, subject to the laws of classical physics, showing either particle or wave phenomena. In different measurement systems, these phenomena would give incompatible results. This view is out of date. [Sometimes, non-commuting operators and the corresponding variables (like position and momentum) are called ‘complementary’ as well, at least if their commutator is a number.]

[30] Even in classical physics, the idea of a point-like particle is controversial. Both its mass density and charge density are infinite, and its intrinsic angular momentum cannot be defined.

[31] Light in vacuum is an exception.

[32] The values of ‘1’ respectively ‘h’ in the mentioned relations indicate an order of magnitude. Sometimes other values are given, e.g. h/4π instead of h, see Messiah 1961, 133.

[33] If Δx·Δσ=Δt·Δf=1, the wave packet’s speed v=Δx/Δt=Δf/Δσ is approximately the group velocity df/dσ, according to De Broglie.

[34] In communication technology, Δf is the bandwidth, see Bunge 1967a, 265. Bunge denies that wave-particle duality exists in quantum mechanics, see ibid. 266, 291. In his formulation, the single concept of a quanton replaces the concepts of wave and particle. However, this masks the fact that in the quanton a physical and a kinetic character are interlaced.

[35] See e.g. Margenau 1950, chapter 18; Messiah 1961, 129-149; Jammer 1966, chapter 7; Jammer 1974, chapter 3; Omnès 1994, chapter 2.

[36] From the commutation properties of the operators referring to the components of angular momentum for an electron (having rotational symmetry), one derives the integral eigenvalues for the orbital angular momentum as well as the half-integral eigenvalues for the intrinsic angular momentum or spin, see Messiah 1961, 523-536.

[37] Bunge 1967a, 248, 267. 

[38] I leave here aside the important distinction between a time dependent and a time independent Hamiltonian, the former describing transition processes, the latter stationary states.

[39] Heisenberg 1930, 21-23.

[40] In fact, the value of ΔE is less significant than the relative indeterminacy ΔE/E. For a macroscopic system the energy E is so much larger than ΔE that the energy fluctuations can be neglected, and the law of conservation of energy remains valid.

[41] Such virtual processes are depicted in the so-called Feynman-diagrams.

[42] Jammer 1974, 38-44.

[43] The probability to find a particle in the volume element between r and r+dr is ψ(r)ψ*(r)dr, hence the product ψ(r)ψ*(r) is a probability density.

[44] Cp. Cartwright 1983, 179. Of course, the probability is not given by a single wave function, but by a wave packet. If this consists of a set of orthogonal eigenvectors, a matrix represents the transition probability.

[45] ‘The true philosophical import of the statistical interpretation consists in the recognition that the wave-picture and the corpuscle-picture are not mutually exclusive, but are two complementary ways of considering the same process’, M. Born, Atomic physics (1944), quoted by Bastin (ed.) 1971, 5.

[46] The fact that quantum physics is a stochastic theory has evoked widely differing reactions. Einstein considered the theory incomplete. Born stressed that at least waves behave deterministically, only its interpretation having a statistical character. Bohr accepted a fundamental stochastic element in his world-view.

[47] Heisenberg 1958, 25.

[48] Observe that an interference-experiment aims at demonstrating interference. This is only possible if the interference of waves is followed by an interaction of the particles concerned with, e.g., a screen.

[49] For the relevance of interactions for the interpretation of quantum physics, see Healey 1989.

[50] Theoretically, this means the projection of a state vector on one of the eigenvectors of Hilbert space, representing all possible states of the system. Omnès 1994, 509: ‘No other permanent or transient principle of physics has ever given rise to so many comments, criticisms, pleadings, deep remarks, and plain nonsense as the wave function collapse.’ In particular, the assumptions that probability is an expression of our limited knowledge of a system and that the observer causes the reduction of the wave packet, have led to a number of subjectivist and solipsist interpretations of quantum physics and related problems, of which I shall only briefly discuss that of Schrödinger’s cat.

[51] Omnès 1994, 84: ‘This transition therefore does not belong to elementary quantum dynamics. But it is meant to express a physical interaction between the measured object and the measuring apparatus, which one would expect to be a direct consequence of dynamics.’ Cartwright 1983, 195: ‘Von Neumann claimed that the reduction of the wave packet occurs when a measurement is made. But it also occurs when a quantum system is prepared in an eigenstate, when one particle scatters from another, when a radioactive nucleus disintegrates, and in a large number of other transition processes as well … There is nothing peculiar about measurement, and there is no special role for consciousness in quantum mechanics.’ But contrary to Cartwright (198) stating: ‘… there are not two different kinds of evolution in quantum mechanics. There are evolutions that are correctly described by the Schrödinger equation, and there are evolutions that are correctly described by something like von Neumann’s projection postulate. But these are not different kinds in any physically relevant sense’, I believe that there is a difference. The first concerns a reversible motion, the second an irreversible physical process, cp. Cartwright 1983, 179: ‘Indeterministically and irreversibly, without the intervention of any external observer, a system can change its state … When such a situation occurs, the probabilities for these transitions can be computed; it is these probabilities that serve to interpret quantum mechanics.’

[52] The principle of decoherence is in some cases provable, but is not proved generally, see Omnès 1994, chapter 7, 484-488; Torretti 1999, 364-367. Decoherence even occurs in quite small molecules, see Omnès 1994, 299-302. There are exceptions too, in systems without much internal energy dissipation, e.g. electromagnetic radiation in a transparent medium and superconductors (5.4), see Omnès 1994, 269.

[53] Kastner 2013, 202: ‘The interpretive challenge of quantum theory is often presented in terms of the measurement problem: i.e., that the formalism itself does not specify that only one outcome happens, nor does it explain why or how that particular outcome happens. This is the context in which it is often asserted that the theory is incomplete and is therefore in need of alteration in some way.’

[54] Jammer 1966, 338-345.

[55] An integral spin means that the intrinsic angular momentum is an integer times Planck’s reduced constant, 0, h/2π, 2h/2π, etc. A half-integral spin means that the intrinsic angular momentum has values like (1/2)h/2π, (3/2)h/2π. I shall not discuss the connection of integral spin with bosons and half-integral spin with fermions.

[56] It is by no means obvious that the state function of an electron or photon gas can be written as a product (or rather a sum of products) of state functions for each particle apart, but it turns out to be a quite close approximation.

[57] For a uniform circular motion with radius r, the angular momentum L=rp. The linear momentum p=h/λ according to Einstein. If the circumference 2πr=nλ, n being a positive integer, then L=nλp/2π=nh/2π. Quantum mechanics allows of the value L=0 for orbital angular momentum. This has no analogy as a standing wave on the circumference of a circle.

[58] Klein 1964; Raman, Forman 1969.

[59] A time-dependent Schrödinger equation describes transitions between energy levels, giving rise to the discrete emission and absorption spectra characteristic for atoms and molecules.

[60] Momentum space is a three-dimensional diagram for the vector p’s components, px, py, and pz. The volume of a state equals Δpx·Δpy·Δpz. In the described model, the states are mostly occupied up to the energy value EF, the ‘Fermi-energy’, determining a sphere around the origin of momentum space. Outside the sphere, most states are empty. A relatively thin skin, its thickness being proportional to the temperature, separates the occupied and empty states.

[61] Except for very low temperatures, the electrons contribute far less to the specific heat of a solid than the phonons do. The number of electrons is independent of temperature, whereas the number of phonons in a solid or photons in an oven strongly depends on temperature.

[62] For a gas satisfying the Maxwell-Boltzmann distribution, the energy is proportional to temperature. Some people who got stuck in classical mechanics define temperature as a measure of the mean energy of molecules. Which meaning such a definition should have for a fermion gas or boson gas is unclear.

[63] Kragh 1990, chapter 3, 5.

[64] 1 MeV (a much used unit of energy) is one million electronvolt, much more than the energy of visible light, being about 5 eV per photon.

[65] This identification took some time, see Hanson 1963, chapter IX. The assumption of the existence of a positive electron besides the negative one was in 1928 much more difficult to accept than in 1932. In 1928, physics acknowledged only three elementary particles, the electron, the proton and the photon. In 1930, the existence of the neutrino was postulated and in 1932, Chadwick discovered the neutron. The completely occupied nether world of electrons is as inert as the nineteenth-century ether. It neither moves nor interacts with any other system. That is why we do not observe it. For those who find this difficult to accept, alternative theories are available explaining the existence of antiparticles.

[66] In the inertial system in which the centre of mass for the electron-positron pair is at rest, their total momentum is zero. Because of the law of conservation of momentum, the annihilation causes the emergence of at least two photons, having opposite momentum.

[67] A band is comparable to an atomic shell but has a larger bandwidth.

[68] Dirac and Heisenberg corresponded with each other about both theories, initially without observing the analogy, see Kragh 1990, 104-105.

[69] I am referring here to the so-called many-worlds interpretation, and to the transaction interpretation.

 


 

 

 

 

Chapter 5

 

Physical characters

 

  

 

 

 

 

 

 


 

 

5.1. The unification of physical interactions

5.2. The character of electrons

5.3. The quantum ladder

5.4. Individualized currents

5.5. Aggregates and statistics

5.6. Coming into being, change and decay

 


 

5.1. The unification of physical interactions

 

The discovery of the electron in 1897 provided the study of the structure of matter with a strong impetus, both in physics and in chemistry. Our knowledge of atoms and molecules, of nuclei and sub-atomic particles, of stars and stellar systems, dates largely from the twentieth century. The significance of electrotechnology and electronics for present-day society can hardly be overestimated. A philosophical analysis of physical characters is the aim of chapter 5.

The physical aspect of the cosmos is characterized by interactions between two or more subjects. Interaction is a relation different from the quantitative, spatial, or kinetic relations, on which it can be projected. It is subject to natural laws. Some laws are specific, like the electromagnetic ones, determining characters of physical kinds. Some laws are general, like the laws of thermodynamics and the laws of conservation of energy, linear and angular momentum. General laws constitute the physical-chemical relation frame, specific laws determine physical characters. Both for the general and the specific laws, physics has reached a high level of unification.

Because of their relevance to the study of types of characters, this chapter starts with an analysis of the projections of the physical relation frame onto the three preceding ones (5.1). Next, I investigate the characters of physically stable things, consecutively quantitatively, spatially, and kinetically founded (5.2-5.4). In section 5.5, I survey aggregates and statistics. Finally, in section 5.6 I shall review processes of coming into being, change, and decay.

 

The existence of physically qualified things and events implies their interaction, the universal physical relation. If something could not interact with anything else it would be inert. It would not exist in a physical sense, and it would have no physical place in the cosmos.[1] The noble gases are called inert because they hardly ever take part in chemical compounds, yet their atoms are able to collide with each other. The most inert things among subatomic particles are the neutrinos, capable of flying through the earth with a very small probability of colliding with a nucleus or an electron. Nevertheless, neutrinos are detectable and have been detected.[2]

The universality of the relation frames allows science to compare characters with each other and to determine their specific relations. The projections of the physical relation frame onto the preceding frames allow us to measure these relations. Measurability is the basis of the mathematization of the exact sciences. It allows of applying statistics and designing mathematical models for natural and artificial systems.

The simplest case of interaction concerns two isolated systems interacting only with each other. Thermodynamics characterizes an isolated or closed system by magnitudes like energy and entropy.[3] The two systems have thermal, chemical, or electric potential differences, giving rise to currents creating entropy. According to the second law of thermodynamics, this interaction is irreversible.

In kinematics, an interactive event may have the character of a collision, minimally leading to a change in the state of motion of the colliding subjects. Often, the internal state of the colliding subjects changes as well. Except for the boundary case of an elastic collision, these processes are subject to the physical order of irreversibility. Frictionless motion influenced by a force is the standard example of a reversible interaction. In fact, it is also a boundary case, for any kind of friction or energy dissipation causes motion to be irreversible.

 

The law of inertia (4.1) expresses the independence of uniform motion from physical interaction. It confirms the existence of uniform and rectilinear motions having no physical cause. This is an abstraction, for concrete things experiencing forces have a physical aspect as well. In reality a uniform rectilinear motion only occurs if the forces acting on the moving body balance each other.

Kinetic time is symmetric with respect to past and future. If in the description of a motion the time parameter (t) is replaced by its reverse (–t), we obtain a valid description of a possible motion. In the absence of friction or any other kind of energy dissipation, motion is reversible. By distinguishing past and future we are able to discover cause-effect relations, assuming that an effect never precedes its cause. According to relativity theory, the order of events having a causal relation is the same in all inertial systems, provided that the time parameter is not reversed (3.3).

In our common understanding of time, the discrimination of past and future is a matter of course,[4] but in the philosophy of science it is problematic. The existence of irreversible processes cannot be denied. All motions with friction are irreversible. Apparently, the absorption of light by an atom or a molecule is the reverse of emission, but Albert Einstein demonstrated that the reverse of (stimulated) absorption is stimulated emission of light, making spontaneous emission a third process, having no reverse (5.6). This applies to radioactive processes as well. The phenomenon of decoherence (4.3) makes most quantum processes irreversible.[5] Only wave motion subject to Schrödinger’s equation is symmetric in time. Classical mechanics usually expresses interaction by a force between two subjects, this relation being symmetric according to Newton’s third law of motion. However, this law is only applicable to spatially separated subjects if the time needed to establish the interaction is negligible, i.e., if the action at a distance is (almost) instantaneous. Einstein made clear that interaction always requires time, hence even interaction at a distance is asymmetric in time.

Irreversibility does not imply that the reverse process is impossible. It may be less probable, or require quite different initial conditions. The transport of heat from a cold to a hotter body (as occurs in a refrigerator) demands different circumstances from the reverse process, which occurs spontaneously if the two bodies are not thermally isolated from each other. A short-lived point-like source of light causes a flash expanding in space. It is not impossible but practically very difficult to reverse this wave motion, for instance by applying a perfect spherical mirror with the light source at the centre. But even in this case, the reversed motion is only possible thanks to the first motion, such that the experiment as a whole is still irreversible.

Yet, irreversibility as a temporal order is philosophically controversial, for it does not fit into the reductionist worldview influenced by nineteenth-century mechanism.[6] This worldview assumes each process to be reducible to motions of as such unchangeable pieces of matter, interacting through Newtonian forces. Ludwig Boltzmann attempted to bridge reversible motion and irreversible processes by means of the concepts of probability and randomness. In order to achieve the intended results, he had to assume that the realization of chances is irreversible.[7] Moreover, it is stated that all ‘basic’ laws of physics are symmetrical in time. This seems to be true as far as kinetic time is concerned, and if any law that belies temporal symmetry (like the second law of thermodynamics, or the law for spontaneous decay) is not considered ‘basic’. Anyhow, all attempts to reduce irreversibility to the subject side of the physical aspect of reality have failed.

 

Interaction is first of all subject to general laws independent of the specific character of the things involved. Some conservation laws are derivable from Einstein’s principle of relativity, stating that the laws of physics are independent of the motion of inertial systems.

Being the physical subject-subject relation, interaction may be analysed with the help of quantitative magnitudes like energy, mass, and charge; spatial concepts like force, momentum, field strength, and potential difference; as well as kinetic expressions like currents of heat, matter, or electricity.

Like interaction, energy, force, and current are abstract concepts. Yet these are not merely covering concepts without physical content. They can be specified as projections of characteristic interactions like the electromagnetic one. Electric energy, gravitational force, and the flow of heat specify the abstract concepts of energy, force, and current.

For energy to be measurable, it is relevant that one concrete form of energy is convertible into another one. For instance, a generator transforms mechanical energy into electric energy. Similarly, a concrete force may balance another force, whereas a concrete current accompanies currents of a different kind. This means that characteristically different interactions are comparable, they can be measured with respect to each other. The physical subject-subject relation, the interaction projected as energy, force, and current, is the foundation of the whole system of measuring, characteristic for astronomy, biology, chemistry, physics, as well as technology. The concepts of energy, force, and current enable us to determine physical subject-subject relations objectively.

Measurement of a quantity requires several conditions to be fulfilled. First, a unit should be available. A measurement compares a quantity with an agreed unit. Secondly, a magnitude requires a law, a metric, determining how a magnitude is to be projected on a set of numbers, on a scale (3.1). The third requirement, being the availability of a measuring instrument, cannot always be directly satisfied. A magnitude like entropy can only be calculated from measurements of other magnitudes. Fourth, therefore, there must be a fixed relation between the various metrics and units, a metrical system. This allows of the application of measured properties in theories. Unification of units and scales is a necessary requirement for the communication of both measurements and theories.[8]

I shall discuss the concepts of energy, force, and current in some more detail. It is by no means evident that these concepts are the most general projections of interaction. Rather, their development has been a long and tedious process, leading to a general unification of natural science, to be distinguished from a more specific unification to be discussed later on.

 

a. Since the middle of the nineteenth century, energy is the most important quantitative expression of physical, chemical, and biotic interactions.[9] As such it has superseded mass, in particular since it is known that mass and energy are equivalent, according to physics’ most famous (but often misinterpreted[10]) formula, E=mc². Energy is specifiable as kinetic and potential energy, thermal energy, nuclear energy, or chemical energy. Affirming the total energy of a closed system to be constant, the law of conservation of energy implies that one kind of energy can be converted into another one. For this reason, energy forms a universal base for comparing various types of interaction.[11]

Before energy took this role, mass had become a universal measure for the amount of matter,[12] serving as a measure of gravity as well as of the amount of heat that a subject absorbs when heated by one degree. Energy and mass are general expressions of physical interaction. This applies to entropy and related thermodynamic concepts too. In contrast, the rest energy and the rest mass of a particle or an atom are characteristic magnitudes.

Velocity is a measure for motion, but if it concerns physically qualified things, linear momentum (quantity of motion, the product of mass and velocity) turns out to be more significant. The same applies to angular momentum (quantity of rotation, the product of moment of inertia and angular velocity).[13] In the absence of external forces, linear and angular momentum are subject to conservation laws. Velocity, linear and angular momentum, and moment of inertia are not expressed by a single number (a scalar) but by vectors or tensors. Relativity theory combines energy (a scalar) with linear momentum (a vector with three components) into a single vector having four components (3.3).
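
A minimal sketch of this four-component combination: the quantity E² − |p|²c² is the squared rest energy and comes out the same for every inertial observer (natural units and sample values chosen here only for illustration):

```python
import math

# The energy-momentum four-vector (E/c, px, py, pz) has the invariant
# E**2 - (p*c)**2 = (m*c**2)**2: every inertial observer computes the
# same rest energy. Natural units (c = 1), sample values in MeV.
c = 1.0
m_c2 = 0.511                         # rest energy of an electron, MeV
p = 1.0                              # sample momentum * c, MeV
E = math.sqrt(m_c2**2 + (p * c)**2)  # total energy of the moving particle

invariant = math.sqrt(E**2 - (p * c)**2)
print(f"recovered rest energy: {invariant:.3f} MeV")  # 0.511 again
```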

 

b. According to Newton’s third law, the mechanical force is a subject-subject relation.[14] If A exerts a force F on B, then B exerts a force −F on A. The minus sign indicates that the two forces, being equal in magnitude, have opposite directions. For a long time, the third law exerted a strong influence on the development of physics. In certain circumstances, the law of conservation of linear momentum can be derived from it. However, nowadays physicists allot higher priority to the conservation law than to Newton’s third law. In order to apply Newton’s laws when more than one force is acting, we have to consider the forces simultaneously. This does not lead to problems in the case of two forces acting on the same body. But the third law is especially important for action at a distance, inherent in the Newtonian formulation of gravity, electricity, and magnetism. In Einstein’s theory of relativity, simultaneity at a distance turns out to depend on the motion of the reference system. The laws of conservation of linear momentum and energy turn out to be easier to adapt to relativity theory than Newton’s third law. Accordingly, one now describes the interaction as an exchange of energy and momentum (mediated by a field particle like a photon). This exchange requires a certain span of time.

Newton’s second law provides the relation between force and momentum: the net force equals the change of momentum per unit of time. The law of inertia seems to be deducible from Newton’s second law. If the force is zero, momentum and hence velocity is constant, or so it is argued. However, if the first law were not valid, there could be a different law, assuming that each body experiences a frictional force, dependent on speed, in a direction opposite to the velocity (in its simplest form F = −bv, with b > 0). Accordingly, if the total force on a body were zero, the body would be at rest. A unique reference system would exist in which all bodies on which no forces act would be at rest. This would agree with Aristotle’s mechanics, but it contradicts both the classical principle of relativity and the modern one. The principle of relativity is an alternative expression of the law of inertia, pointing out that absolute (non-relative) uniform motion does not exist. Just like spatial position on the one hand and interaction on the other, motion is a universal relation.
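
The contrast can be made explicit numerically. Under the hypothetical friction law just described, a force-free body does not keep its velocity but creeps to rest, as a simple Euler integration shows (all parameter values are arbitrary):

```python
# Euler integration of m * dv/dt = -b * v, the hypothetical friction law.
# Contrary to the law of inertia, every force-free body would creep to
# rest in one unique reference system. Parameter values are arbitrary.
m, b = 1.0, 0.5
v, dt = 10.0, 0.01
for step in range(3001):
    if step % 1000 == 0:
        print(f"t = {step * dt:5.1f}  v = {v:.6f}")
    v += (-b / m) * v * dt
# the velocity decays exponentially toward zero: Aristotelian rest,
# not Newtonian uniform motion
```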

Besides being applicable to a rigid body, a force may be applied to a fluid, usually in the form of a pressure (i.e., force per unit area). A pressure difference causes a change of volume or, if the fluid is incompressible, a current subject to Bernoulli’s law. Besides, there are non-mechanical forces causing currents. A temperature gradient causes a heat current, chemical potentials drive material flows (e.g., diffusion), and an electric potential difference drives an electric current.

To find a metric for a thermodynamic or an electric potential is not an easy task. On the basis of an analysis of idealized Carnot cycles, William Thomson (later Lord Kelvin) established the theoretical metric for the thermodynamic temperature scale.[15] The practical definition of the temperature scale takes this theoretical scale as a norm.
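
A sketch of Thomson’s metric in its usual textbook form: for a reversible Carnot cycle the exchanged heats satisfy Q_hot/Q_cold = T_hot/T_cold, whatever the working substance, so measured heats fix temperature ratios, and one agreed fixed point fixes the whole scale (the sample heats below are arbitrary):

```python
# Thomson's metric: for a reversible Carnot cycle Q_hot / Q_cold equals
# T_hot / T_cold, independent of any thermometric substance. Measuring
# heats fixes temperature ratios; one fixed point (the triple point of
# water, 273.16 K) then fixes the scale. Sample heats are arbitrary.
Q_hot, Q_cold = 1000.0, 750.0   # heats exchanged per cycle, J
T_cold = 273.16                 # agreed fixed point, K

T_hot = T_cold * Q_hot / Q_cold
print(f"T_hot = {T_hot:.2f} K, Carnot efficiency = {1 - T_cold / T_hot:.2%}")
```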

The Newtonian force can sometimes be written as the derivative of a potential energy (i.e., energy as a function of spatial position). Since the beginning of the nineteenth century, the concept of a force has been incorporated in the concept of a field. At first a field was considered merely a mathematical device, until Maxwell proved the electromagnetic field to have a reality of its own. A field is a physical function projected on space. Usually one assumes the field to be continuous and differentiable almost everywhere. A field may be constant or variable. There are scalar fields (like the distribution of temperature in a gas), vector fields (like the electrostatic field), and tensor fields (like the electromagnetic field). A field of force is called ‘conservative’ if the forces are derivable from a space-dependent potential energy. This applies to the classical gravitational and electrostatic fields. It does not apply to the Lorentz force, because it depends on the velocity of a charged body with respect to a magnetic field. The Lorentz force and Maxwell’s equations for the electromagnetic field are derivable from a gauge-invariant vector potential. ‘Gauge invariance’ is the relativistic successor to the static concept of a conservative field.
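
A numerical sketch of a conservative field: the force is recovered as minus the spatial derivative of a potential energy, here an attractive Coulomb-like 1/x potential in arbitrary units (the finite-difference step is only a numerical device):

```python
# A conservative force as minus the derivative of a potential energy,
# F(x) = -dU/dx, illustrated with an attractive 1/x potential
# (Coulomb-like, arbitrary units) and a central finite difference.
def U(x):
    return -1.0 / x

def force(x, h=1e-6):
    return -(U(x + h) - U(x - h)) / (2 * h)

for x in (0.5, 1.0, 2.0):
    print(f"x = {x}:  F = {force(x):+.4f}  (exact: {-1.0 / x**2:+.4f})")
```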

 

c. A further analysis of thermodynamics and electricity makes clear that current is a third projection, now from the physical onto the kinetic relation frame. The concept of entropy points to a general property of currents. In each current, entropy is created, making the current irreversible.[16] In a system in which currents occur, entropy increases. Only if a system as a whole is in equilibrium are there no net currents and the entropy is constant. Just as several mechanical forces may balance each other, so may thermodynamic forces and currents. This leads to mutual relations like thermo-electricity.[17]

The laws of thermodynamics are generally valid, independent of the specific character of a physical thing or aggregate. For a limited set of specific systems (e.g., a gas consisting of similar molecules), statistical mechanics is able to derive the second law from mechanical interactions, starting from assumptions about their probability.[18] Whereas the thermodynamic law states that the entropy in a closed system is constant or increasing, the statistical law allows of fluctuations. The source of this difference is that thermodynamics supposes matter to be continuous, whereas statistical mechanics takes into account the molecular character of matter.

 

There are many different interactions, like electricity, magnetism, contact forces (e.g., friction), chemical forces (e.g., glue), or gravity. Some are reducible to others. The contact forces turn out to be of an electromagnetic nature, and chemical forces are reducible to electrical ones.

Besides the general unification discussed above allowing of the comparison of widely differing interactions, a characteristic unification can be discerned. Maxwell’s unification of electricity and magnetism implies these interactions to have the same character, being subject to the same specific cluster of laws and showing symmetry. The fact that they can still be distinguished points to an asymmetry, a break of symmetry. The study of characteristic symmetries and symmetry breaks supplies an important tool for achieving a characteristic unification of natural forces.

Since the middle of the twentieth century, physics discerns four fundamental specific interactions. These are gravity and electromagnetic interaction besides the strong and weak nuclear forces. Later on, the electromagnetic and weak forces were united into the electroweak interaction, whereas the strong force is reducible to the colour force between quarks. In the near future, physicists expect to be able to unite the colour force with the electroweak interaction. The ultimate goal, the unification of all four forces is still far away.[19]

These characteristic interactions are distinguished in several ways, first by the particles between which they act. Gravity acts between all particles, the colour force only between quarks, and the strong force only between particles composed from quarks. A process involving a neutrino is weak, but the reverse is not always true.

Another difference is their relative strength. Gravity is weakest and only plays a part because it cannot be neutralized. It manifests itself only on a macroscopic scale. The other forces are so effectively neutralized, that the electrical interaction was largely unknown until the eighteenth century, and the nuclear forces were not discovered before the twentieth century. Gravity conditions the existence of stars and systems of stars.

Next, gravity and electromagnetic interaction have an infinite range; the other forces do not act beyond the limits of an atomic nucleus. For gravity and electricity the inverse-square law is valid (the force is inversely proportional to the square of the distance from a point-like source). This law is classically expressed in Newton’s law of gravity and Coulomb’s electrostatic law, with mass and charge, respectively, acting as a measure of the strength of the source. A comparable law does not apply to the other forces, and the lepton and baryon numbers do not act as a measure for their sources. As a function of distance, the weak interaction decreases much faster than quadratically. The colour force is nearly constant over a short distance (of the order of the size of a nucleus), beyond which it decreases abruptly to zero.
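
Because both laws have the same inverse-square form, the relative strength of the electric and gravitational interactions between two electrons is independent of their distance, as a one-line computation shows (rounded SI constants):

```python
# Both Coulomb's and Newton's law are inverse-square laws, so for two
# electrons the ratio of the two forces is independent of distance.
# Rounded SI constants.
G   = 6.674e-11   # gravitational constant, m**3 kg**-1 s**-2
k_e = 8.988e9     # Coulomb constant, N m**2 C**-2
m_e = 9.109e-31   # electron mass, kg
e   = 1.602e-19   # elementary charge, C

ratio = (k_e * e**2) / (G * m_e**2)
print(f"F_electric / F_gravity = {ratio:.2e}")  # about 4e42
```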

The various interactions also differ because of the field particles involved. Each fundamental interaction corresponds to a field in which quantized currents occur. For gravity, this is an unconfirmed hypothesis. Field particles have an integral spin and they are bosons (3.2, 4.4). If the spin is even (0 or 2), the force is attractive between equal particles and repulsive between opposite particles (if applicable). For an odd spin it is the other way around. The larger the field particle’s rest mass, the shorter the range of the interaction. If the rest mass of the field particles is zero (as is the case with photons and gravitons), the range is infinite. Unless mentioned otherwise, the field particles are electrically neutral.
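
The relation between rest mass and range can be estimated with Yukawa’s well-known argument: a field particle of rest energy E₀ gives an interaction range of roughly ħc/E₀. A sketch, using rounded particle data:

```python
# Yukawa's estimate: an interaction carried by a field particle of rest
# energy E0 has a range of roughly hbar*c / E0; for E0 -> 0 (photon,
# graviton) the range becomes infinite. Rounded particle data.
hbar_c = 197.327  # hbar * c in MeV * fm (1 fm = 1e-15 m)

for name, E0 in [("pion (strong force)", 139.6),
                 ("W boson (weak force)", 80379.0)]:
    print(f"{name}: range ~ {hbar_c / E0:.4f} fm")
# pion: ~1.4 fm, the size of a nucleus; W boson: ~0.0025 fm, hence the
# extremely short range of the weak interaction
```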

The mean lifetime of spontaneous decay differs widely. The stronger the interaction causing a transition, the faster the system changes. If a particle decays because of the colour force or the strong force, this happens in a very short time (of the order of 10⁻²³ to 10⁻¹⁹ s). Particles decaying due to the weak interaction have a relatively long lifetime (10⁻¹² s for a tauon up to 900 s for a free neutron). Electromagnetic interaction lies more or less in between.

 

In high-energy physics, symmetry considerations and group theory play an important part in the analysis of collision processes. New properties like isospin and strangeness have led to the introduction of groups named SU(2) and SU(3) and the discovery of at first three, later six quarks.[20] Quantum electrodynamics reached its summit shortly after the Second World War, but the other interactions proved less manageable, their theories being developed only after 1970. Each field now has a symmetry property called gauge invariance, related to the laws of conservation of electric charge, baryon number, and lepton number.[21] The appropriate theory is the standard model, which since the discovery of the J/ψ particle in 1974 has successfully explained a number of properties and interactions of subatomic particles. However, the general theory of relativity is still at variance with quantum electrodynamics, with the electroweak theory of Weinberg and Salam, as well as with quantum chromodynamics.[22]

These fundamental interactions are specifications of the abstract concept of interaction being the universal physical and chemical relation. Their laws, like those of Maxwell for electromagnetism, form a specific set, which may be considered a character. But this character does not determine a class of things or events, but a class of relations.

 


 

 

5.2. The character of electrons

 

Ontology, the doctrine of on (or ontos, Greek for being), aims to answer the question of how matter is composed according to present-day insights. Since the beginning of the twentieth century, many kinds of particles received names ending in on, like electron, proton, neutron, and photon. At first sight, the relation with ontology seems obvious.[23] Yet, not many physicists would affirm that an electron is the essence of electricity, that the proton forms the primeval matter, that the neutron and its little brother, the neutrino, have the nature of being neutral, or that in the photon light comes into being, and in the phonon sound. In pion, muon, tauon, and kaon, on is no more than a suffix to the letters π, μ, τ, and K, whereas Paul Dirac baptized fermion and boson after Enrico Fermi and Satyendra Bose. In 1833 Michael Faraday, advised by William Whewell, introduced the words ion, cation, and anion, referring to the Greek word for to go. In an electrolyte, an ion moves from or to an electrode, an anode or cathode (names proposed by Whewell as well). An intruder is the positive electron. Meant as positon, the positron received an additional r, possibly under the influence of electron or new words like magnetron and cyclotron, which however are machines, not particles.

Only after 1925 did quantum physics and high-energy physics allow of the study of the characters of elementary physical things. Most characters have been discovered after 1930. But the discovery of the electron (1897), of the photon (1905), and of the internal structure of the atom, composed of a nucleus and a number of electrons (1911), preceded the quantum era. These are typical examples of characters founded in the quantitative, spatial, and kinetic projections of physical interaction. In section 5.1, these projections were pointed out to be energy, force or field, and current.

 

An electron is characterized by a specific amount of mass and charge and is therefore quantitatively founded. The foundation is not in the quantitative relation frame itself (because that is not physical), but in the most important quantitative projection of the physical relation frame. This is energy, expressing the quantity of interaction. Like other particles, an electron has a typical rest energy, besides specific values for its electric charge, magnetic moment and lepton number.

In chapter 4, I argued that an electron has the character of a wave packet as well, kinetically qualified and spatially founded, anticipating physical interactions. An electron has a specific physical character and a generic kinetic character. The two characters are interlaced within the at first sight simple electron. The combined dual character is called the wave-particle duality. Electrons share it with all other elementary particles. As a consequence of the kinetic character and the inherent Heisenberg relations, the position of an electron cannot be determined much better than within 10⁻¹⁰ m (about the size of a hydrogen atom). But the physical character implies that the electron’s collision diameter (being a measure of its physical size) is less than 10⁻¹⁷ m.

Except for quarks, all quantitatively founded particles are leptons, to be distinguished from field particles and baryons (5.3, 5.4). Leptons are not susceptible to the strong nuclear force or the colour force. They are subject to the weak force, sometimes to electromagnetic interaction, and like all matter to gravity. Each lepton has a positive or negative value for the lepton number (L), whose significance appears in the occurrence or non-occurrence of certain processes. Each process is subject to the law of conservation of lepton number, i.e., the total lepton number cannot change. For instance, a neutron (L = 0) does not decay into a proton and an electron, but into a proton (L = 0), an electron (L = 1), and an antineutrino (L = −1). The lepton number is just as characteristic for a particle as its electric charge. For non-leptons the lepton number is 0, for leptons it is +1 or −1.
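
Such conservation laws amount to simple bookkeeping, which can be sketched mechanically; the small particle table below lists only the electric charge (in units of e) and the lepton number, as quoted in the text:

```python
# Bookkeeping for conservation laws: each particle carries an electric
# charge Q (in units of e) and a lepton number L; a process is allowed
# only if both totals are unchanged.
Q_L = {"n": (0, 0), "p": (1, 0), "e-": (-1, 1), "anti-nu": (0, -1)}

def conserved(initial, final):
    def total(particles, i):
        return sum(Q_L[p][i] for p in particles)
    return all(total(initial, i) == total(final, i) for i in (0, 1))

print(conserved(["n"], ["p", "e-", "anti-nu"]))  # True: observed beta decay
print(conserved(["n"], ["p", "e-"]))             # False: L would jump from 0 to 1
```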

Leptons satisfy a number of characteristic laws. Each particle has an electric charge being an integral multiple (positive, negative or zero) of the elementary charge. Each particle corresponds with an antiparticle having exactly the same rest mass and lifetime, but opposite values for charge and lepton number. Having a half-integral spin, leptons are fermions satisfying the exclusion principle and the characteristic Fermi-Dirac statistics (4.3, 5.5).

Three generations of leptons are known, each consisting of a negatively charged particle, a neutrino, and their antiparticles. These generations are related to similar generations of quarks (5.3). A tauon decays spontaneously into a muon, and a muon into an electron. Both are weak processes, in which simultaneously a neutrino and an anti-neutrino are emitted.

The leptons display little diversity: their number is exactly 12. Like their diversity, the variation of leptons is restricted. It only concerns their external relations: their position, their linear and angular momentum, and the orientation of their magnetic moment or spin relative to an external magnetic field.

This description emphasizes the quantitative aspect of leptons. But leptons are first of all physically qualified. Their specific character determines how they interact by electroweak interaction with each other and with other physical subjects, influencing their coming into being, change and decay.

 

Electrons are by far the most important leptons, having the disposition to become part of systems like atoms, molecules, and solids. The other leptons only play a part in high-energy processes. In order to stress the distinction between a definition and a character as a set of laws, I shall dwell a little longer on a hundred years of development of our knowledge of the electron.[24]

Although several scientists were involved, it is generally accepted that Joseph J. Thomson discovered the electron in 1897. He identified his cathode ray as a stream of particles and roughly established the ratio e/m of their charge e and mass m by measuring how an electric and/or magnetic field deflects the rays. In 1899 Thomson determined the value of e separately, allowing him to calculate the value of m. Since then, the values of m and e, which may be considered as defining the electron, have been determined with increasing precision. In particular Robert Millikan did epoch-making work between 1909 and 1916. Almost simultaneously with Thomson, Hendrik Antoon Lorentz observed that the Zeeman effect (1896) could be explained by the presence in atoms of charged particles having the same value for e/m as the electron. Shortly afterwards, the particles emerging from β-radioactivity and the photoelectric effect were identified as electrons.

The mass m depends on the electron’s speed, as was first established experimentally by Walter Kaufmann, later theoretically by Albert Einstein. Since then, instead of the mass m, the rest mass m₀ is characteristic for a particle. Between 1911 and 1913, Ernest Rutherford and Niels Bohr developed the atomic model in which electrons move around a much more massive nucleus. The orbital angular momentum turned out to be quantized. In 1923 Louis de Broglie made clear that an electron sometimes behaves like a wave, interpreted as the bearer of probability by Max Born in 1926 (4.3). In 1925, Samuel Goudsmit and George Uhlenbeck suggested a new property, half-integral spin, connected to the electron’s intrinsic magnetic moment. In the same year, Wolfgang Pauli discovered the exclusion principle. Enrico Fermi and Paul Dirac derived the corresponding statistics in 1926. Since then, the electron is a fermion, playing a decisive part in all properties of matter (4.3, 5.3, 5.5). In 1930 it became clear that in β-radioactivity a neutrino emerges from the nucleus besides the electron. Neutrinos were later recognized as members of the lepton family characterized by the electroweak interaction. β-radioactivity is not caused by electromagnetic interaction, but by the weak nuclear force. Electrons turned out not to be susceptible to strong nuclear forces. In 1931 the electron got a brother, the positron or anti-electron. This affirmed that an electron has no eternal life, but may be created or annihilated together with a positron. In β-radioactivity, too, an electron emerges or disappears (in a nucleus, an electron cannot exist as an independent particle), but apart from these processes, the electron is the most stable particle we know besides the proton. According to Dirac, the positron is a hole in the nether world of an infinite number of electrons having a negative energy (4.3). After the Second World War, Richard Feynman, Julian Schwinger, and Sin-Itiro Tomonaga developed quantum electrodynamics. This is a field theory in which the physical vacuum is not empty, but is the stage of spontaneous creations and annihilations of virtual electron-positron pairs. Interaction with other (sometimes virtual) particles is partly responsible for the properties of each particle. The theoretical calculation of the electron’s magnetic moment to eleven decimal places counts as a top performance, its precision only surpassed by the experimental measurement of the same quantity to twelve decimal places. Moreover, the two values differ only in the eleventh decimal, within the theoretical margin of error.[25] Finally, the electron got two cousins, the muon and the tauon.

Besides these scientific developments, electronics revolutionized the world of communication, information, and control.

Since Thomson’s discovery, the concept of an electron has changed and expanded considerably. Besides being a particle having mass and charge, it is now a wave, a spinning top, a magnet, a fermion, half of a twin, and a lepton. Yet, few people doubt that we are still talking about the same electron.

What the essence of an electron is appears to be a hard question, if ever posed. It may very well be a meaningless question. But we achieve a growing insight into the laws constituting the electron’s character, determining the electron’s relations with other things and the processes in which it is involved. The electron’s charge means that two electrons exert a force on each other according to the laws of Coulomb and Lorentz. The mass follows from the electron’s acceleration in an electric and/or magnetic field, according to Maxwell’s laws. The lepton number only makes sense because of the law of conservation of lepton number, allowing of some processes and prohibiting others. Electrons are fermions, satisfying the exclusion principle and the distribution law of Fermi and Dirac.

The character of electrons is not logically given by a definition, but physically by a specific set of laws, which are successively discovered and systematically connected by experimental and theoretical research.

 

An electron is to be considered an individual satisfying the character described above. A much-heard objection to the assignment of individuality to electrons and other elementary particles is the impossibility of distinguishing one electron from another. Electrons are characteristically equal to each other, having much less variability than plants or animals, even less than atoms.

This objection can be traced back to the still influential worldview of mechanism. This worldview assumed each particle to be identifiable by objective kinetic properties like its position and velocity at a certain time. Quantum physics observes that the identification of physically qualified things requires a physical interaction. In general, this interaction influences the particle’s position and momentum (4.3). Therefore, the electron’s position and momentum cannot be determined with unlimited accuracy, as follows from Heisenberg’s relations. This means that identification in a mechanistic sense is not always possible. Yet, in an interaction such as a measurement, an electron manifests itself as an individual.[26]

If an electron is part of an atom, it can be identified by its state, because the exclusion principle precludes that two electrons would occupy the same state. The two electrons in the helium atom exchange their states continuously without changing the state of the atom as a whole. But it cannot be doubted that at any moment there are two electrons, each with its own mass, charge and magnetic moment. For instance, in the calculation of the energy levels the mutual repulsion of the two electrons plays an important part.

The individual existence of a bound electron depends on the binding energy being much smaller than its rest energy. Binding energy is the energy needed to liberate an electron from an atom. It varies from a few eV (the outer electrons) to several tens of keV (the inner electrons in a heavy element like uranium). The electron’s rest energy is about 0.5 MeV, much larger than its binding energy in a hydrogen atom (13.6 eV).[27] To keep an electron as an independent particle in a nucleus would require a binding energy of more than 100 MeV, much more than the electron’s rest energy of 0.5 MeV. For this reason, physicists argue that electrons cannot exist in a nucleus as independent, individual particles, as they do in an atom’s shell.
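
The criterion reduces to comparing two energies. A sketch with the values quoted in the text (the inner-shell figure is an order-of-magnitude assumption):

```python
# An electron keeps its individuality where its binding energy is small
# compared with its rest energy of 0.511 MeV. Binding energies in eV;
# the inner-shell figure is an order-of-magnitude assumption.
rest = 0.511e6  # electron rest energy, eV
cases = {
    "hydrogen atom":                 13.6,
    "inner shell, heavy element":    1e5,   # several tens of keV
    "hypothetical nuclear electron": 1e8,   # > 100 MeV would be required
}
for name, binding in cases.items():
    ok = binding < rest
    print(f"{name}: binding/rest = {binding / rest:.1e} -> "
          f"{'independent particle' if ok else 'cannot exist as such'}")
```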

In contrast, protons and neutrons in a nucleus satisfy the criterion that an independent particle has a rest energy substantially larger than its binding energy. Their binding energy is about 8 MeV, their rest energy almost 1000 MeV. A nucleus is capable of emitting an electron (this is β-radioactivity). The electron’s existence starts at the emission and may end at the absorption by a nucleus. Because of the law of conservation of lepton number, the emission of an electron is accompanied by the emission of an antineutrino, and at the absorption of an electron a neutrino is emitted.[28] This would not be the case if the electron could exist as an independent particle in the nucleus.

 

Electrons display their characteristic properties more as components of atoms, molecules, and solids, and in processes, than as free particles. The half-integral spin of electrons was discovered in the investigation of atomic spectra. The electron’s fermion character largely determines the shell structure of atoms. In 1930, Wolfgang Pauli suggested the existence of neutrinos because of the character of β-radioactivity. The lepton number was discovered by an analysis of specific nuclear reactions.

Electrons have the affinity or propensity to function as components of atoms and molecules because electrons share electromagnetic interaction with nuclei. Protons and electrons have charges equal in magnitude but opposite in sign, allowing of the formation of neutral atoms, molecules, and solids. Electric neutrality is of tremendous importance for the stability of these systems. This tertiary characteristic determines the meaning of electrons in the cosmos.

 


 

 

5.3. The quantum ladder

 

An important spatial manifestation of interaction is the force between two spatially separated bodies. An atom or molecule having a spatially founded character consists of a number of nuclei and electrons kept together by the electromagnetic force. More generally, any interaction is spatially projected on a field.

Sometimes a field can be described as the spatial derivative of the potential energy. A set of particles constitutes a stable system if the potential energy has an appropriate shape, characteristic for the spatially founded structure. In a spatially founded structure, the relative spatial positions of the components are characteristic, even if their relative motions are taken into account. Atoms have a spherical symmetry restricting the motions of the electrons. In a molecule, the atoms or ions have characteristic relative positions, often with a specific symmetry. In each spatially founded character a number of quantitatively founded characters are interlaced.

 

It is a remarkable fact that in an atom the nucleus acts like a quantitatively founded character, whereas the nucleus itself is a spatial configuration of protons and neutrons kept together by forces. The nucleus itself has a spatially founded character, but in the atom it has the disposition to act as a whole, characterized by its mass, charge and magnetic moment. Similarly, a molecule or a crystal is a system consisting of a number of atoms or ions and electrons, all acting like quantitatively founded particles. Externally, the nucleus in an atom and the atoms or ions in a molecule act as a quantitatively founded whole, as a unit, while preserving their own internal spatially founded structure.

However, an atom bound in a molecule is not completely the same as a free atom. In contrast to a nucleus, a free atom is electrically neutral and it has a spherical symmetry. Consequently, it cannot easily interact with other atoms or molecules, except in collisions. In order to become a part of a molecule, an atom has to open up its tertiary character. This can be done in various ways. The atom may absorb or eject an electron, becoming an ion. A common salt molecule does not consist of a neutral sodium atom and a neutral chlorine atom, but of a positive sodium ion and a negative chlorine ion, attracting each other by the Coulomb force. This is called heteropolar or ionic bonding. Any change of the spherical symmetry of the atom’s electron cloud leads to the relatively weak Van der Waals interaction. A very strong bond results if two atoms share an electron pair. This homopolar or covalent bond occurs in diatomic molecules like hydrogen, oxygen, and nitrogen, in diamond, and in many carbon compounds. Finally, especially in organic chemistry, the hydrogen bond is important. It means the sharing of a proton by two atom groups.

The possibility of being bound into a larger configuration is a very significant tertiary characteristic of many physically qualified systems, determining their meaning in the cosmos.

 

The first stable system studied by physics is the solar system, investigated in the seventeenth century by Kepler, Galileo, Huygens, and Newton. The law of gravity, mechanical laws of motion, and conservation laws determine the character of planetary motion. The solar system is not unique: there are more stars with planets, and the same character applies to a planet with its moons, or to a double star. Any model of the system presupposes its isolation from the rest of the world, which is the case only approximately. This approximation is pretty good for the solar system, less good for the system of the sun and each planet apart, and pretty bad for the system of earth and moon.

 

Spatially founded physical characters display a large disparity. Various specific subtypes appear. According to the standard model (5.1), these characters form a hierarchy, called the quantum ladder.[29] At the first rung there are six (or eighteen, see below) different quarks, with the antiquarks grouped into three generations related to those of leptons, as follows from analogous processes.

Like a lepton, a quark is quantitatively founded: it has no structure. But a quark cannot exist as a free particle. Quarks are confined as a duo in a meson (e.g., a pion) or as a trio in a baryon (e.g., a proton or a neutron) or an antibaryon.[30] Confinement is a tertiary characteristic, but it does not stand apart from the secondary characteristics of quarks, their quantitative properties. Whereas quarks have a charge of −1/3 or +2/3 times the elementary charge (and antiquarks the opposite values), their combinations satisfy the law that the electric charge of a free particle can only be an integral multiple of the elementary charge. Likewise, in confinement the sum of the baryon numbers (+1/3 for quarks, −1/3 for antiquarks) always yields an integral number. For a meson this number is 0, for a baryon it is +1, for an antibaryon it is −1.
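
This restriction can be verified mechanically by summing charges and baryon numbers over duos and trios, using the standard quark quantum numbers:

```python
from fractions import Fraction as F

# Electric charge Q (units of e) and baryon number B for up and down
# quarks and their antiquarks; confinement permits only combinations
# whose totals are integral.
quarks = {"u": (F(2, 3), F(1, 3)),    "d": (F(-1, 3), F(1, 3)),
          "~u": (F(-2, 3), F(-1, 3)), "~d": (F(1, 3), F(-1, 3))}

for name, combo in [("proton (uud)", ["u", "u", "d"]),
                    ("neutron (udd)", ["u", "d", "d"]),
                    ("pi+ (u anti-d)", ["u", "~d"])]:
    Q = sum(quarks[q][0] for q in combo)
    B = sum(quarks[q][1] for q in combo)
    print(f"{name}: Q = {Q}, B = {B}")
# proton: Q=1, B=1; neutron: Q=0, B=1; pion: Q=1, B=0 -- all integral
```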

Between quarks the colour force is acting, mediated by gluons. The colour force has no effect on leptons and is related to the strong force between baryons. In a meson the colour force between two quarks hardly depends on their mutual distance, meaning that they cannot be torn apart. If a meson breaks apart, the result is not two separate quarks but two quark-antiquark pairs.

Quarks are fermions, satisfying the exclusion principle. In a meson or baryon, two identical quarks cannot occupy the same state. But an omega particle (sss) consists of three strange quarks having the same spin. This is possible because each quark exists in three variants, indicated by a ‘colour’, besides its ‘flavour’ (one of six). For the antiquarks three complementary colours are available. The metaphor of ‘colour’ is chosen because the colours are able to neutralize each other, like ordinary colours can be combined to produce white. This can be done in two ways: in a duo by adding a colour to its anticolour, or in a trio by adding three different colours or anticolours. The law that mesons and baryons must be colourless yields an additional restriction on the number of possible combinations of quarks. A white particle is neutral with respect to the colour force, like an uncharged particle is neutral with respect to the Coulomb force. Nevertheless, an electrically neutral particle may exert electromagnetic interaction because of its magnetic moment. This applies, e.g., to a neutron, but not to a neutrino. Similarly, by the exchange of mesons, the colour force manifests itself as the strong nuclear force acting between baryons, even if baryons are ‘white’. Two quarks interact by exchanging gluons, thereby changing colour.

The twentieth-century standard model has no solution to a number of problems. Why only three generations? If all matter above the level of hadrons consists of particles from the first generation, what is the tertiary disposition of the particles of the second and third generation? Should the particles of the second and third generation be considered excited states of those of the first generation? Why does each generation consist of two quarks and two leptons (with corresponding antiparticles)? What is the origin of the mass differences between various leptons and quarks?

The last question may be the only one to have received an answer in the twenty-first century, when the existence of the Higgs particle and its mass were experimentally established (2012). For the other problems, at the end of the twentieth century no experiment was proposed providing sufficient information to suggest a solution.

 

The second level of the hierarchy consists of hadrons: baryons having half-integral spin and mesons having integral spin. Although the combination of quarks is subject to severe restrictions, there are quite a few different hadrons. A proton consists of two up quarks and one down quark (uud), and a neutron is composed of one up and two down quarks (udd). These two nucleons are the lightest baryons, all others being called hyperons. A pion consists of a quark and an antiquark: ud̄ (charge +e), dū (charge −e), or a superposition of uū and dd̄ (charge 0). As a free particle, only the proton is stable, whereas the neutron is stable within a nucleus.[31] All other hadrons have a very short mean lifetime, a free neutron having the longest (900 s). Their diversity is much larger than that of leptons and of quarks. Based on symmetry relations, group theory orders the hadrons into multiplets of, for example, eight or ten particles.

For a large part, the interaction of hadrons consists of rearranging quarks accompanied by the creation and annihilation of quark-antiquark pairs and lepton-antilepton pairs. The general laws of conservation of energy, linear and angular momentum, the specific laws of conservation of electric charge, lepton number and baryon number, and the laws restricting electric charge and baryon number to integral values, characterize the possible processes between hadrons in a quantitative sense. Besides, the fields described by quantum electrodynamics and quantum chromodynamics characterize these processes in a spatial sense, and the exchange of field particles in a kinetic way.

 

Atomic nuclei constitute the third layer in the hierarchy. With the exception of hydrogen, each nucleus consists of protons and neutrons, determining together the coherence, binding energy, stability, and lifetime of the nucleus. The mass of the nucleus is the sum of the masses of the nucleons less the mass equivalent to the binding energy. Decisive is the balance of the repulsive electric force between the protons and the attractive strong nuclear force binding the nucleons independent of their electric charge. In heavy nuclei, the surplus of neutrons compensates for the mutual repulsion of the protons. To a large extent, the exclusion principle applied to neutrons and protons separately determines the stability of the nucleus and its internal energy states.

The nuclear force is negligible for the external functioning of a nucleus in an atom or molecule. Only the mass of the nucleus, its electric charge, and its magnetic moment are relevant. Omitting the latter, we recognize two sources of diversity in nuclei.

The first diversity concerns the number of protons. In a neutral atom it equals the number of electrons determining the atom’s chemical propensities. The nuclear charge together with the exclusion principle dominates the energy states of the electrons, hence the position of the atom in the periodic system of elements.

The second diversity concerns the number of neutrons in the nucleus. Atoms having the same number of protons but differing in neutron number are called isotopes, because they have the same position (topos) in the periodic system. They have similar chemical propensities.

The diversity of atomic nuclei is represented in a two-dimensional diagram, a configuration space. The horizontal axis represents the number of protons (Z = atomic number), the vertical axis the number of neutrons (N). In this diagram the isotopes (same Z, different N) are positioned above each other. The configuration space is mostly empty, because only a restricted number of combinations of Z and N lead to stable or metastable (radioactive) nuclei. The periodic system of elements is a two-dimensional diagram as well. Dmitri Mendeleev ordered the elements in a sequence according to a secondary property (the atomic mass) and below each other according to tertiary propensities (the affinity of atoms to form molecules, in particular compounds with hydrogen and oxygen). Later on, the atomic mass was replaced by the atomic number Z. However, quantum physics made clear that the atomic chemical properties are not due to the nuclei, but to the electrons subject to the exclusion principle. The vertical ordering in the periodic system concerns the configuration of the electronic shells. In particular the electrons in the outer shells determine the tertiary chemical propensities.

This is not an ordering according to a definition in terms of necessary and sufficient properties distinguishing one element from the other, but according to their characters. The properties do not define a character, as essentialism assumes, but the character (a set of laws) determines the properties and propensities of the atoms.

 

In the hierarchical order, we find globally an increase of spatial dimensions, diversity of characters and variation within a character, besides a decrease of the binding energy per particle and the significance of strong and weak nuclear forces. For the characters of atoms, molecules, and crystals, only the electromagnetic interaction is relevant.

The internal variation of a spatially founded character is very large. Quantum physics describes the internal states with the help of a Hilbert space, having the eigenvectors of the Hamiltonian operator as a basis (2.3). A Hilbert space describes the ensemble of possibilities (in particular the energy eigenvalues) determined by the system’s character. In turn, the atom or molecule’s character itself is represented by Schrödinger’s equation.[32] This equation is exactly solvable only in the case of two interacting particles, like the hydrogen atom, the helium ion, the lithium ion, and positronium.[33] In other cases, the equation serves as a starting point for approximate solutions, usually only manageable with the help of a computer.

The hierarchical connection implies that the spatially founded characters are successively interlaced, for example nucleons in a nucleus, or the nucleus in an atom, or atoms in a molecule. Besides, these characters are interlaced with kinetically, spatially, and quantitatively qualified characters, and often with biotically qualified characters as well.

The characters described depend strongly on a number of natural constants, whose values can be established only experimentally, not theoretically. Among others, this concerns the gravitational constant G, the speed of light c, Planck’s constant h, and the elementary electric charge e, or combinations like the fine structure constant (2πe²/hc = 1/137.036) and the mass ratio of the proton and the electron (1836.104). If the constants of nature were slightly different, both nuclear properties and chemical properties would change drastically.[34]
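
The fine structure constant can be recomputed from the measured constants; in SI units the Gaussian expression 2πe²/hc of the text becomes e²/4πε₀ħc (rounded CODATA values):

```python
import math

# The fine structure constant as a dimensionless combination of measured
# constants: in SI units alpha = e**2 / (4*pi*eps0*hbar*c), equal to the
# Gaussian-units expression 2*pi*e**2/(h*c) used in the text.
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878e-12     # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.8f} = 1/{1 / alpha:.3f}")  # 1/137.036
```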

The quantum ladder is of a physical and chemical nature. As an ordering principle, the ladder has a few flaws from a logical point of view. For instance, the proton occurs on three different levels, as a baryon, as a nucleus, and as an ion. The atoms of the noble gases are their molecules as well. This is irrelevant for their character. The character of a proton consists of the specific laws to which it is subjected. The classification of baryons, nuclei or ions is not a characterization, and a proton is not ‘essentially’ a baryon and ‘accidentally’ a nucleus or an ion.

 

The number of molecular characters is enormous and no universal classification of molecules exists. In particular the characters in which carbon is an important element show a large diversity.

The molecular formula indicates the number of atoms of each element in a molecule. Besides, the characteristic spatial structure of a molecule determines its chemical properties. The composition of a methane molecule is given by the formula CH4, but it is no less significant that the methane molecule has the symmetrical shape of a regular tetrahedron, with the carbon atom at the centre and the four hydrogen atoms at the vertices. The V-like shape of a water molecule (the three atoms do not lie on a straight line, but form a characteristic angle of 105°) causes the molecule to have a permanent electric dipole moment, explaining many of the exceptional properties of water. Isomers are materials having the same molecular formula but different spatial orderings, hence different chemical properties. Like the symmetry between a left and a right glove, the spatial symmetry property of mirroring leads to the distinction of dextro- and laevo-molecules.

The symmetry characteristic for the generic (physical) character is an emergent property, in general irreducible to the characters of the composing systems. Conversely, the original symmetry of the composing systems is broken. In methane, the outer shells of the carbon atom have exchanged their spherical symmetry for the tetrahedron symmetry of the molecule. Symmetry breaking also occurs in fields.[35] In principle, it should be possible to derive from quantum field theory successively the emergent properties of particles and their spatially founded composites. This is the synthetic, reductionist, or fundamentalist trend, constructing complicated structures from simpler ones. It cannot explain symmetry breaks.[36] For practical reasons too, a synthetic approach is usually impossible. The alternative is the analytical or holistic method, in which the symmetry break is explained from the empirically established symmetry of the original character. Symmetries and other structural properties are usually explained a posteriori, and hardly ever derived a priori. However, analysis and synthesis are not contrary but complementary methods.

 

Climbing the quantum ladder, complexity seems to increase. On second thoughts, complexity is not a clear concept. An atom would be more complex than a nucleus and a molecule even more. However, in the character of a hydrogen atom or a hydrogen molecule, weak and strong interactions are negligible, and the complex spatially founded nuclear structure is reduced to the far simpler quantitatively founded character of a particle having mass, charge, and magnetic moment. Moreover, a uranium nucleus consisting of 92 protons and 146 neutrons has a much more complicated character than a hydrogen molecule consisting of two protons and two electrons, having a position two levels higher on the quantum ladder.

Viewed from inside, a system is more complex than viewed from outside. An atom consists of a nucleus and a number of electrons, grouped into shells. If a shell is completely filled in conformity with the exclusion principle, it is chemically inert, serving mostly to reduce the effective nuclear charge. A small number of electrons in partially occupied shells determines the atom’s chemical propensities. Consequently, an atom of a noble gas, having only completely occupied shells, is less complicated than an atom having one or two electrons less. The complexity of molecules increases if the number of atoms increases. But some very large organic molecules consist of a repetition of similar atomic groups and are not particularly complex.

In fact, there does not exist an unequivocal criterion for complexity.[37]

 

An important property of hierarchically ordered characters is that for the explanation of a character it is sufficient to descend to the next lower level. For the understanding of molecules, a chemist needs the atomic theory, but he does not need to know much about nuclear physics. A molecular biologist is acquainted with the chemical molecular theory, but his knowledge of atomic theory may be rather superficial. This is possible because of the phenomenon that a physical character interlaced in another one both keeps its properties and hides them.

Each system derives its stability from an internal equilibrium that is hardly observable from without. The nuclear forces do not reach outside the nucleus. Strong electric forces bind an atom or a molecule, but as a whole it is electrically neutral. The strong internal equilibrium and the weak remaining external action are together characteristic for a stable physical system. If a system exerts a force on another one, it experiences an equal external force. This external force should be much smaller than the internal forces keeping the system intact, otherwise it will be torn apart. In a collision between two molecules, the external interaction may be strong enough to disturb the internal equilibrium, such that the molecules fall apart. Possibly, a new molecule with a different character emerges. Because the mean collision energy is proportional to the temperature, the stability of molecules and crystals depends on this parameter. In the sun’s atmosphere no molecules exist and in its centre no atoms occur. In a very hot star like a neutron star, even nuclei cannot exist.

Hence, a stable physical or chemical system is relatively inactive. It looks like an isolated system. This is radically different from plants and animals that can never be isolated from their environment. The internal equilibrium of a plant or an animal is maintained by metabolism, the continuous flow of energy and matter through the organism.

 


 

 

5.4. Individualized currents

 

I consider the primarily physical character of a photon to be secondarily kinetically founded. A photon is a field particle in the electromagnetic interaction, transporting energy, linear and angular momentum from one spatially founded system to another. Besides photons, nuclear physics recognizes gluons as field particles for the colour force, mesons for the strong nuclear force, and three types of vector bosons for the weak interaction (5.1). The existence of the graviton, the field particle for gravity, has not been experimentally confirmed. All these interaction particles have an integral spin and are bosons. Hence, they are not subject to the exclusion principle. Field particles are not quantitatively or spatially founded things, but individualized characteristic currents, hence kinetically founded ‘quasiparticles’. Bosons carry forces, whereas fermions feel forces.

By absorbing a photon, an atom comes into an excited state, i.e. a metastable state at a higher energy than the ground state. Whereas an atom in its ground state can be considered an isolated system, an excited atom is always surrounded by the electromagnetic field.

A photon is a wave packet; like an electron, it has a dual character. Yet there is a difference. Whereas the electron’s motion has a wave character, a photon is a current in an electromagnetic field, a current being a kinetic projection of physical interaction. With respect to electrons, the wave motion only determines the probability of what will happen in a future interaction. In a photon, besides determining a similar probability, the wave consists of periodically changing electric and magnetic fields. A real particle’s wave motion lacks a substratum: there is no characteristic medium in which it moves, and its velocity is variable. Moving quasiparticles have a substratum, and their wave velocity is a property of the medium. The medium for light in empty space is the electromagnetic field, all photons having the same speed independent of any reference system.

 

Each inorganic solid consists of crystals, sometimes microscopically small. Amorphous solid matter does not exist or is very rare. The ground state of a crystal is the hypothetical state at zero temperature. At higher temperatures, each solid is in an excited state, determined by the presence of quasiparticles.

The crystal symmetry, adequately described by the theory of groups, has two or three levels. First, each crystal is composed of space filling unit cells. All unit cells of a crystal are equal to each other, containing the same number of atoms, ions or molecules in the same configuration. A characteristic lattice point indicates the position of a unit cell. The lattice points constitute a Bravais lattice, representing the crystal’s translation symmetry. Only fourteen types of Bravais lattices are mathematically possible and realized in nature. Each lattice allows of some variation, for instance with respect to the mutual distance of the lattice points, as is seen when the crystal expands on heating. Because each crystal is finite, the translation symmetry is restricted and the surface structure of a crystal may be quite different from the crystal structure.

Second, the unit cell has a symmetry of its own, superposed on the translation symmetry of the Bravais lattice. The cell may be symmetrical with respect to reflection, rotation or inversion. The combined symmetry determines how the crystal scatters X-rays or neutrons, presenting a means to investigate the crystalline structure empirically. Hence, the long distance spatial order of a crystal evokes a long time kinetic order of specific waves.

Third, in some materials we find an additional ordering, for instance that of the magnetic moments of electrons or atoms in a ferromagnet. Like the first one, this is a long-distance ordering. It involves an interaction that is not restricted to nearest neighbours. It may extend over many millions of atomic distances.

The atoms in a crystal oscillate around their equilibrium positions.[38] These elastic oscillations are transferred from one atom to the next like a sound wave, and because the crystal has a finite volume, this is a stationary wave, a collective oscillation. The crystal as a whole is in an elastic oscillation, having a kinetically founded character. These waves have a broad spectrum of frequencies and wavelengths, bundled into wave packets. In analogy with light, these field particles are called sound quanta or phonons.

Like the electrons in a metal, the phonons act like particles in a box (4.4). Otherwise they differ widely. The number of electrons is constant, but the number of phonons increases strongly with increasing temperature. Like all quasiparticles, the phonons are bosons, not subject to the exclusion principle. The mean kinetic energy of the electrons hardly depends on temperature, and their specific heat is only measurable at a low temperature. In contrast, the mean kinetic energy of phonons strongly depends on temperature, and the phonon gas dominates the specific heat of solids. At low temperatures this specific heat increases proportionally to T³, becoming constant at higher temperatures. Peter Debye’s theory (originally 1912, later adapted) explains this from the wave and boson character of phonons and the periodic character of the crystalline structure.
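
Debye’s result can be reproduced by numerically evaluating the standard Debye integral for the reduced specific heat of the phonon gas; a sketch (the midpoint rule and step count are numerical conveniences):

```python
import math

# Debye's specific heat of the phonon gas, in units of the classical
# value 3Nk: C/(3Nk) = 3*t**3 * integral_0^(1/t) x**4*e**x/(e**x - 1)**2 dx,
# with t = T/theta_D the reduced temperature. Midpoint rule, n = 2000.
def debye_c(t, n=2000):
    h = (1.0 / t) / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x**4 * math.exp(x) / math.expm1(x) ** 2
    return 3 * t**3 * total * h

for t in (0.05, 0.1, 0.5, 2.0):
    print(f"T/theta_D = {t}: C/(3Nk) = {debye_c(t):.4f}")
# the low-temperature values scale as t**3; at high temperature the
# specific heat approaches the constant Dulong-Petit value C = 3Nk
```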

In a solid or liquid, besides phonons many other quantized excitations occur, corresponding, for instance, with magnetization waves or spin waves. The interactions of quasiparticles and electrons cause the photoelectric effect and transport phenomena like electric resistance and thermo-electricity.

 

The specific properties of some superconductors can be described with the help of quasiparticles.[39] In a superconductor two electrons constitute a Cooper pair. This is a pair of electrons in a bound state, such that both the total linear momentum and the total angular momentum are zero. The two electrons are not necessarily close to each other. Superconductivity is a phenomenon with many variants, and the theory is far from complete.

Superconductivity is a collective phenomenon in which the wave functions of several particles are macroscopically coherent.[40] There is no internal dissipation of energy. It appears that on a macroscopic scale the existence of kinetically founded characters is only possible if there is no decoherence (4.3). Therefore, kinetically founded physical characters on a macroscopic scale are quite exceptional.

 


 

 

5.5. Aggregates and statistics

 

We have now discussed three types of physically qualified characters, but this does not exhaust the treatment of matter. The inorganic sciences acknowledge many kinds of mixtures, aggregates, alloys or solutions. In nature, these are more abundant than pure matter. Often, the possibility to form a mixture is restricted and some substances do not mix at all. In order to form a stable aggregate, the components must be tuned to each other. Typical for an aggregate is that the characteristic magnitudes (like pressure, volume and temperature for a gas) are variable within a considerable margin, even if there is a lawful connection between these magnitudes.

Continuous variability provides quantum physics with a criterion to distinguish a composite thing (with a character of its own) from an aggregate. Consider the interaction between an electron and a proton. In the most extreme case this leads to the absorption of the electron and the transformation of the proton into a neutron (releasing a neutrino). At a lower energy, the interaction may lead to a bound state having the character of a hydrogen atom if the total energy (kinetic and potential) is negative.[41] Finally, if the total energy is positive, we have an unbound state, an aggregate. In the bound state the energy can only have discrete values, it is quantized, whereas in the unbound state the energy is continuously variable.
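
For the electron-proton pair this criterion takes a familiar quantitative form: bound states have the discrete energies Eₙ = −13.6 eV/n², whereas an unbound pair may take any positive energy. A minimal sketch:

```python
# Bound electron-proton states (a hydrogen atom) have the quantized
# energies E_n = -13.6 eV / n**2; an unbound pair, an aggregate, may
# have any positive energy. Discreteness marks a character of its own.
RYDBERG = 13.6  # eV

levels = [-RYDBERG / n**2 for n in range(1, 6)]
print("bound states (eV):", [round(E, 2) for E in levels])
print("unbound states: any energy > 0 (continuous)")
```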

Hence, if the rest energy has a characteristic value and internal energy states are lacking, we have an elementary particle (a lepton or a quark). If there are internal discrete energy states we have a composite character, whereas we have an aggregate if the internal energy is continuously variable.

 

With aggregates it is easier to abstract from specific properties than in the case of the characters of composite systems discussed in section 5.3. Studying the properties of macroscopic physical bodies, thermodynamics starts from four general laws, for historical reasons numbered 0 to 3.

The zeroth law states that two or more bodies (or parts of a single body) can be in mutual equilibrium. In equilibrium, the temperature of the interacting bodies is the same, and within a body the temperature is uniform. Depending on the nature of the interaction, this applies to other intensive magnitudes as well, for instance the pressure of a gas, or the electric or chemical potential. In this context bodies are not necessarily spatially separated. The thermodynamic laws apply to the components of a mixture as well. Equilibrium is an equivalence relation (2.1). An intensive magnitude like temperature is an equilibrium parameter, to be distinguished from an extensive magnitude like energy, which is additive. If two unequal bodies are in thermal equilibrium with each other, their temperature is the same, but their energy is different and the total energy is the sum of the energies of the two bodies apart. An additive magnitude refers to the quantitative relation frame, whereas an equilibrium parameter is a projection on the spatial frame.

According to the first law of thermodynamics, the total energy is constant if the interacting bodies are isolated from the rest of the world. The thermodynamic law of conservation of energy forbids all processes in which energy would be created or annihilated. The first law does not follow from the fact that energy is additive. Volume, entropy, and the mass of each chemical component are additive as well, but not always constant in an interaction.

The second law states that interacting systems proceed towards an equilibrium state. The entropy decreases if a body loses energy and increases if a body gains energy, but always in such a way that the total entropy increases as long as equilibrium is not reached. Based on this law only entropy differences can be calculated.[42]
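
A minimal sketch of this entropy balance, assuming two bodies at fixed temperatures exchanging a quantity of heat small enough to leave those temperatures unchanged (the relation ΔS = ΔE/T is taken from note 42; the numbers are illustrative):

# Heat Q flows from a hot body to a cold body; per note 42, dS = dE/T.
# Temperatures in kelvin, heat in joules; the values are illustrative.
T_hot, T_cold = 400.0, 300.0
Q = 1.0

dS_hot = -Q / T_hot      # the hot body loses energy: its entropy decreases
dS_cold = Q / T_cold     # the cold body gains energy: its entropy increases
dS_total = dS_hot + dS_cold

print(f"hot body : {dS_hot:+.5f} J/K")
print(f"cold body: {dS_cold:+.5f} J/K")
print(f"total    : {dS_total:+.5f} J/K")   # positive, as the second law demands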

According to the third law the absolute zero of temperature cannot be reached. At this temperature all systems would have the same entropy, to be considered the zero point on the entropy scale.

From these axioms other laws are derivable, such as Gibbs’s phase rule (see below). As long as the interacting systems are not in equilibrium, the gradient of each equilibrium parameter acts as the driving force for the corresponding current causing equilibrium. A temperature gradient drives a heat current, a potential difference drives an electric current, and a chemical potential difference drives a material current. Any current (except a superconducting flow) creates entropy.

The thermodynamic axioms describe the natural laws correctly if the interacting systems are close to equilibrium. Otherwise, the currents are turbulent and a concept like entropy cannot be defined. Another restriction follows from the individuality of the particles composing the system. In the equilibrium state, the entropy is not exactly constant, but fluctuates spontaneously around the equilibrium value. Quantum physics shows energy to be subject to a Heisenberg relation (4.3). In fact, the classical thermodynamic axioms refer to a continuum, not to actual coarse-grained matter. Thermodynamics is a general theory of matter, whereas statistical physics studies matter starting from the specific properties of the particles composing a system. This means that thermodynamics and statistical physics complement each other.

An equilibrium state is sometimes called an ‘attractor’, attracting a system from any unstable state toward a stable state. Occasionally, a system has several attractors, now called local equilibrium states. If there is a strong energy barrier between the local equilibrium states, it is accidental which state is realized. By an external influence, a sudden and apparently drastic transition may occur from one attractor to another. In quantum physics a similar phenomenon is called ‘tunneling’, to which I shall return in section 5.6.

 

a. A homogeneous set of particles having the same character may be considered a quantitatively founded aggregate, if the set does not constitute a structural whole with a spatially founded character of its own (like the electrons in an atom). In a gas the particles are not bound to each other. Usually, an external force or a container is needed to keep the particles together. In a fluid, the surface tension is a connective force that does not give rise to a characteristic whole. The composing particles’ structural similarity is a condition for the applicability of statistics. Therefore I call a homogeneous aggregate quantitatively founded.

It is not sufficient to know that the particles are structurally similar. At least it should be specified whether the particles are fermions or bosons (4.4). Consider, for instance, liquid helium, having two varieties. In the most common isotope, a helium nucleus is composed of two protons and two neutrons. The net spin is zero, hence the nucleus is a boson. In a less common isotope, the helium nucleus has only one neutron besides two protons. Now the nucleus’ net spin is ½ and it is a fermion. This distinction (having no chemical consequences) accounts for the strongly diverging physical properties of the two fluids.

Each homogeneous gas is subject to a specific law, called the statistics or distribution function. It determines how the particles are distributed over the available states, taking into account parameters like volume, temperature, and total energy. The distribution function does not specify which states are available. Before the statistics is applicable, the energy of each state must be calculated separately.

The Fermi-Dirac statistics based on Pauli’s exclusion principle applies to all homogeneous aggregates of fermions, i.e., particles having half-integral spin. For field particles and other particles having an integral spin, the Bose-Einstein statistics applies, without an exclusion principle. If the mean occupation number of available energy states is low, both statistics may be approximated by the classical Maxwell-Boltzmann distribution function. Except at very low temperatures, this applies to every dilute gas consisting of similar atoms or molecules. The law of Boyle and Gay-Lussac follows from this statistics. It determines the relation between volume, pressure and temperature for a dilute gas, if the interaction between the molecules is restricted to elastic collisions and if the molecular dimensions are negligible. Without these two restrictions, the equation of state of Van der Waals counts as a good approximation. Contrary to the law of Boyle and Gay-Lussac, the Van der Waals equation contains two constants characteristic of the gas concerned. It describes the condensation of a gas to a fluid as well as the phenomena occurring at the critical point, the highest temperature at which the substance is liquid.
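
As the following sketch shows, the three distribution functions differ only by a term in the denominator (here x = (E - μ)/kT, the energy of a state measured from the chemical potential in thermal units; the formulas are the standard ones):

from math import exp

def fermi_dirac(x: float) -> float:
    """Mean occupation of a state for fermions; x = (E - mu)/kT."""
    return 1.0 / (exp(x) + 1.0)

def bose_einstein(x: float) -> float:
    """Mean occupation for bosons; requires x > 0."""
    return 1.0 / (exp(x) - 1.0)

def maxwell_boltzmann(x: float) -> float:
    """Classical approximation, valid when occupations are low."""
    return exp(-x)

# For (E - mu) well above kT the three functions nearly coincide,
# which is the dilute-gas regime mentioned above.
for x in (0.5, 2.0, 5.0):
    print(f"x = {x}: FD = {fermi_dirac(x):.4f}, "
          f"BE = {bose_einstein(x):.4f}, MB = {maxwell_boltzmann(x):.4f}")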

 

b. It is not possible to apply statistics directly to a mixture of subjects having different characters. Sometimes it can be done for the components of the mixture taken separately. For a mixture of gases like air, the pressure exerted by the mixture equals the sum of the partial pressures exerted by each component separately in the same volume at the same temperature (Dalton’s law). The chemical potential is a parameter distinguishing the components of a heterogeneous mixture.

I consider a heterogeneous mixture like a solution to have a spatial foundation, because the solvent is the physical environment of the dissolved substance. Solubility is a characteristic disposition of a substance dependent on the character of the solvent as the potential environment.

Stable characters in one environment may be unstable in another one. Common salt molecules dissolved in water fall apart into sodium and chlorine ions. In the environment of water, the dielectric constant is much higher than in air. Hence the Coulomb force between the ions is proportionally smaller, too small to keep the ions together.[43]

The composition of a mixture, the number of grams of dissolved substance in one litre of water, is accidental. It is not determined by any character but by its history. This does not mean that two substances can be mixed in any proportion whatsoever. However, within certain limits dependent on the temperature and the characters of the substances concerned, the proportion is almost continuously variable.

 

c. Even if a system consists only of particles of the same character, it may not appear homogeneous. It may exist in two or more different ‘phases’ simultaneously, for example, the solid, liquid, and vaporous states. A glass of water with melting ice is in internal equilibrium at 0 °C. If heat is supplied, the temperature remains the same until all ice is melted. Only chemically pure substances have a characteristic melting point. In contrast, a heterogeneous mixture has a melting trajectory, meaning that during the melting process the temperature increases. A similar characteristic transition temperature applies to other phase transitions in a homogeneous substance, like vaporizing, the transition from a paramagnetic to a ferromagnetic state, or the transition from a normal to a superconducting state. Addition of heat or change of external pressure shifts the equilibrium. A condition for equilibrium is that the particles concerned move continuously from one phase to the other. Therefore I call it a homogeneous kinetically founded aggregate.

An important example of a heterogeneous kinetic equilibrium concerns chemical reactions. Water consists mostly of water molecules, but a small part (10⁻⁷ at 25 °C) is dissociated into positive H⁺ ions and negative OH⁻ ions. In the equilibrium state, equal amounts of molecules are dissociated and associated. By adding other substances (acids or bases), the equilibrium is shifted.[44]
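
A minimal check of the figure quoted here, using the pH definition of note 44 (the molar density of water, about 55.5 mol/l, is a standard value not taken from this treatise):

from math import log10

molar_H = 1e-7         # mol/l of H+ ions in pure water at 25 °C (note 44)
pH = -log10(molar_H)
print(pH)              # 7.0

# Water itself is about 55.5 mol/l, so the ionized fraction is roughly
print(molar_H / 55.5)  # about 1.8e-9, one molecule in half a billion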

Both phase transitions and chemical reactions are subject to characteristic laws and to general thermodynamic laws, for instance Gibbs’s phase rule.[45]
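
As a small illustration, Gibbs’s phase rule f = (r + 2) - m - c from note 45 can be evaluated directly:

def degrees_of_freedom(r: int, m: int, c: int = 0) -> int:
    """Gibbs's phase rule f = (r + 2) - m - c (see note 45):
    r components, m phases, c independent chemical reactions."""
    return (r + 2) - m - c

# Ice, water and vapour in equilibrium: f = 0, the triple point,
# existing at a single temperature and pressure only.
print(degrees_of_freedom(r=1, m=3))   # 0

# Liquid water with its vapour: f = 1; along the vapour-pressure
# curve one variable (say the temperature) may be chosen freely.
print(degrees_of_freedom(r=1, m=2))   # 1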

 


 

 

5.6. Coming into being, change and decay

 

I call an event physically qualified if it is primarily characterized by an interaction between two or more subjects. A process is a characteristic set of events, partly simultaneous, partly successive. Physically qualified events and processes often occur in an aggregate, sometimes under strictly determined circumstances, among them the temperature. In a mixture, physical, chemical and astrophysical reactions lead to the realization of characters. Whereas in physical things properties like stability and lifetime are most relevant, physical and chemical processes concern the coming into being, change and decay of those things.[46]

 

In each characteristic event a thing changes character (it emerges or decays) or changes state (preserving its identity). With respect to the thing’s character considered as a law, the first case concerns a subjective event (because the subject changes). The second case concerns an objective event (for the objective state changes). Both have secondary characteristics. I shall briefly mention some examples.

Annihilation or creation of particles is a subjective numerically founded event. Like any other event, it is subject to conservation laws. An electron and a positron emerge simultaneously from the collision of a γ-particle (a high-energy photon) with some other particle, if the photon’s energy is at least twice the electron’s rest energy. The presence of another particle, like an atomic nucleus, is required in order to satisfy the law of conservation of linear momentum. For the same reason, at least two photons emerge when an electron and a positron annihilate each other.
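
In numbers: the electron’s rest energy m_ec² is about 0.511 MeV (a standard value, not taken from this treatise), so pair creation requires a photon energy E ≥ 2 × 0.511 MeV ≈ 1.022 MeV.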

By emitting or absorbing a photon, a nucleus, atom or molecule changes its state. This is a spatially founded objective transformation. In contrast, in a nuclear or chemical reaction one or more characters are transformed, constituting a subjective spatially founded event. In α- or β-radioactivity a nucleus subjectively changes its character; in γ-activity it only changes its state objectively.

An elastic collision is an event in which the kinetic state of a particle is changed without consequences for its character or its internal state. Hence, this concerns an objective kinetically founded event. In an inelastic collision a subjective change of character or an objective change of state occurs. Quantum physics describes such events with the help of operators determining the transition probability.

A process is an aggregate of events. In a homogeneous aggregate, phase transitions may occur. In a heterogeneous aggregate chemical reactions occur (5.5). Both are kinetically founded. This also applies to transport phenomena like electric, thermal or material currents, thermo-electric phenomena, osmosis and diffusion.

 

Conservation laws are ‘constraints’ restricting the possibility of processes. For instance, a process in which the total electric charge would change is impossible. In atomic and nuclear physics, transitions are known to be forbidden or improbable because of selection rules for quantum numbers characterizing the states concerned.

Physicists and chemists take for granted that each process that is not forbidden is possible and therefore experimentally realizable. In fact, several conservation laws, like those of lepton number and baryon number, were discovered because certain reactions turned out to be impossible. Conversely, in 1930 Pauli postulated the existence of neutrinos, because otherwise the laws of conservation of energy and momentum would not apply to β-radioactivity. Experimentally, the existence of neutrinos was not confirmed until 1956.

 

In common parlance, a collision is a rather dramatic event, but in physics and chemistry a collision is just an interaction between two or more subjects moving towards each other, starting from a large distance at which their interaction is negligible. In classical mechanics, this interaction means an attractive or repulsive force. In modern physics, it implies the exchange of real or virtual particles like photons.

In each collision, at least the state of motion of the interacting particles changes. If that is all, we speak of an elastic collision, in which only the distribution of kinetic energy, linear and angular momentum over the colliding particles changes. A photon can collide elastically with an electron (this is the Compton effect), but an electron cannot absorb a photon. Only a composite thing like a nucleus or an atom is able to absorb a particle.

Collisions are used to investigate the character of the particles concerned. A famous example is the scattering of α-particles by gold atoms (1911). For the physical process, it is sufficient to assume that the particles have mass and charge and are point-like. It does not matter whether the particles are positively or negatively charged. The character of this collision is statistically expressed in a mathematical formula derived by Ernest Rutherford. The fact that the experimental results (by Hans Geiger and Ernest Marsden) agreed with the formula indicated that the nucleus is much smaller than the atom, and that the mass of the atom is almost completely concentrated in the nucleus. A slight deviation between the experimental results and the theoretical formula allowed an estimate of the size of the nucleus, its diameter being about 10⁴ times smaller than the atom’s. The size of a microscopically invisible particle is calculable from similar collision processes, and is therefore called its collision diameter. Its value depends on the projectiles used. The collision diameter of a proton differs if determined from collisions with electrons or neutrons.
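
For reference, the standard modern form of Rutherford’s scattering formula (not spelled out in the text above) is dσ/dΩ = (Z₁Z₂e²/(16πε₀E))²/sin⁴(θ/2), where E is the kinetic energy of the projectile and θ the scattering angle; the steep decrease with increasing θ is what Geiger and Marsden verified.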

 

In an inelastic collision the internal structure of one or more colliding subjects changes in some respect. With billiard balls only the temperature increases, kinetic energy being transformed into heat, causing the motion to decelerate.

In an inelastic collision between atoms or molecules, the state of at least one of them changes into an excited state, sooner or later followed by the emission of a photon. This is an objective characteristic process.

The character of the colliding subjects may change subjectively as well, for instance, if an atom loses an electron and becomes an ion, or if a molecule is dissociated or associated.

Collisions as a means to investigate the characters of subatomic particles have become a sophisticated art in high-energy physics.

 

Spontaneous decay first became known at the end of the nineteenth century from radioactive processes. It involves strong, weak or electromagnetic interactions, respectively in α-, β-, and γ-radiation. The decay law of Rutherford and Soddy (1902) approximately gives the character of a single radioactive process.[47] This statistical law is only explainable by assuming that each atom decays independently of all other atoms. It is a random process. Besides, radioactivity is almost independent of circumstances like temperature, pressure and the chemical compound in which the radioactive atom is bound. Such decay processes occur in nuclei and sub-atomic particles, as well as in atoms and molecules in a metastable state. The decay time is the mean duration of existence of the system or the state.
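
A minimal sketch of this decay law (the formula is quoted in note 47; the lifetime and particle numbers are illustrative):

from math import exp, log

def remaining(N0: float, t: float, tau: float) -> float:
    """Expected number of surviving particles, N(t) = N0*exp(-t/tau) (note 47)."""
    return N0 * exp(-t / tau)

tau = 10.0                 # characteristic decay time, arbitrary units
half_life = tau * log(2)   # about 0.693 * tau

N0 = 1_000_000.0
for t in (0.0, half_life, 2 * half_life):
    print(f"t = {t:6.2f}: N = {remaining(N0, t, tau):10.0f}")
# After each half-life the expected number halves: 1000000, 500000, 250000.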

Besides spontaneous ones, stimulated transformations occur. Einstein first investigated this phenomenon in 1916, with respect to transitions between two energy levels of an atom or molecule, emitting or absorbing a photon. He found that (stimulated) absorption and stimulated emission are equally probable, whereas spontaneous emission has a different probability.[48] Stimulated emission is symmetrical with stimulated absorption, but spontaneous emission is asymmetric and irreversible. 

 

A stable system or a stable state may be separated from other systems or states by an energy barrier. It may be imagined that a particle is confined in an energy well, for instance an α-particle in a nucleus. According to classical mechanics, such a barrier is insurmountable if it is higher than the kinetic energy of the particle in the well, but quantum physics proves that there is some probability that the particle leaves the well. This is called ‘tunneling’, as if the particle digs a tunnel through the energy mountain.

Consider a chemical reaction in which two molecules A and B associate into AB and, conversely, AB dissociates into A and B. The energy of AB is lower than the energy of A and B apart, the difference being the binding energy. A barrier called the activation energy separates the two states. In an equilibrium situation, the binding energy and the temperature determine the proportion of the numbers of molecules (N_A·N_B/N_AB). It is independent of the activation energy. At a low temperature, if the total number of A’s equals the total number of B’s, only molecules AB will be present. In an equilibrium situation at increasing temperatures, the number of molecules A and B increases, and that of AB decreases. In contrast, the speed of the reaction depends on the activation energy (and again on temperature). Whereas the binding energy is a characteristic magnitude for AB, the activation energy partly depends on the environment. In particular the presence of a catalyst may lower the activation energy and stimulate tunneling, increasing the speed of the reaction.
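
The two distinct roles of these energies can be sketched with the standard Boltzmann and Arrhenius exponentials; all prefactors are set to one here, so the numbers are schematic only:

from math import exp

k_B = 8.617e-5    # Boltzmann constant in eV/K

def equilibrium_ratio(E_bind: float, T: float) -> float:
    """Schematic ratio N_A*N_B/N_AB, governed by the binding energy."""
    return exp(-E_bind / (k_B * T))

def reaction_rate(E_act: float, T: float) -> float:
    """Schematic Arrhenius rate, governed by the activation energy."""
    return exp(-E_act / (k_B * T))

E_bind, E_act = 1.0, 1.5     # electronvolts, illustrative values
for T in (300.0, 600.0, 1200.0):
    print(f"T = {T:6.0f} K: ratio = {equilibrium_ratio(E_bind, T):.2e}, "
          f"rate = {reaction_rate(E_act, T):.2e}")
# Raising T increases dissociation; a catalyst lowers E_act and speeds
# up the reaction without shifting the equilibrium ratio.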

The possibility to overcome energy barriers explains the possibility of transitions from one stable system to another one. It is the basis of theories about radioactivity and other spontaneous transitions, chemical reaction kinetics, the emergence of chemical elements and of phase transitions, without affecting theories explaining the existence of stable or quasi-stable systems.

In such transition processes the characters do not change, but a system may change its character. The laws do not change, but their subjects do.

 

The chemical elements have arisen in a chain of nuclear processes, to be distinguished as fusion and fission. The chain starts with the fusion of hydrogen nuclei (protons) into helium nuclei, which are so stable that in many stars the next steps do not occur. Further processes lead to the formation of all known natural isotopes up to uranium. Besides helium with 4 nucleons, beryllium (8), carbon (12), oxygen (16) and iron (56) are relatively stable. In all these cases, both the number of protons and the number of neutrons is even.

The elements only arise in specific circumstances. In particular, the temperature and the density are relevant. The transition from hydrogen to helium occurs at 10 to 15 million kelvin and at a density of 0.1 kg/cm³. The transition of helium into carbon, oxygen and neon occurs at 100 to 300 million kelvin and 100 kg/cm³.[49] Only after considerable cooling do these nuclei combine with electrons into the atoms and molecules to be found on the earth.

Once upon a time the chemical elements were absent. This does not mean that the laws determining the existence of the elements did not apply. The laws constituting the characters of stable and metastable isotopes are universally valid, independent of time and place. But the realization of the characters into actual individual nuclei does not depend on the characters only, but on circumstances like temperature as well. On the other hand, the available subjects and their relations determine these circumstances. Like initial and boundary conditions, characters are conditions for the existence of individual nuclei. Mutatis mutandis, this applies to electrons, atoms and molecules as well.

 

In the preceding chapters, I discussed quantitative, spatial and kinetic characters. About the corresponding subjects, like groups of numbers, spatial figures or wave packets, it cannot be said that they come into being or decay, except in relation to physical subjects. Only interacting things emerge and disappear. Therefore there is no quantitative, spatial or kinetic evolution comparable to the astrophysical one, even if the latter is expressed in numerical proportions, spatial relations and characteristic rhythms.

Although stars have a lifetime far exceeding the human scale, it is difficult to consider them stable. Each star is a reactor in which processes take place continuously. Stars are subject to evolution. There are young and old stars, each with their own character. Novae and supernovae, neutron stars and pulsars represent various phases in the evolution of a star. The simplest stellar object may be the black hole, behaving like a thermodynamic black body subject to the laws of thermodynamics.[50]

These processes play a part in the theory of astrophysical evolution, strongly connected to the standard model discussed in section 5.1. It correctly explains the relative abundance of the chemical elements.[51] Since its start about thirteen billion years ago, the physical cosmos has expanded. As a result all galaxies move away from each other, the larger the distance, the higher their speed. Because light needs time to travel, the picture we get from faraway galaxies concerns states from eras long past. The most remote systems are at the spatio-temporal horizon of the physical cosmos. In this case, astronomers observe events that occurred shortly after the big bang, the start of the astrophysical evolution.

Its real start remains forever behind the horizon of our experience. Astrophysicists are aware that their theories based on observations may approach the big bang without ever reaching it. The astrophysical theory describes what has happened since the beginning - not the start itself - according to laws discovered in our era. The extrapolation towards the past is based on the supposition that these laws are universally valid and constant. This agrees with the realistic view that the cosmos can only be investigated from within. It is not uncommon to consider our universe as one realized possibility taken from an ensemble of possible worlds.[52] However, there is no way to investigate these alternative worlds empirically.



[1] Groups, spatial figures, waves and oscillations do not interact, hence are not physical unless interlaced with physical characters.

[2] Wolfgang Pauli postulated the existence of neutrinos in 1930 in order to explain the phenomenon of β-radioactivity. Neutrinos were not detected experimentally before 1956. According to a physical criterion, neutrinos exist if they demonstrably interact with other particles. Sometimes it is said that the neutrino was ‘observed’ for the first time in 1956. For that, one has to stretch the concept of ‘observation’ quite far. In no experiment can neutrinos be seen, heard, smelled, tasted or felt. Even their path of motion cannot be made visible in any experiment. But in several kinds of experiment, the energy and momentum (both magnitude and direction) of individual neutrinos can be calculated from observable phenomena. For a physicist, this provides sufficient proof of their existence.

[3] ‘System’ is a general expression for a bounded part of space inclusive of the enclosed matter and energy. A closed system does not exchange energy or matter with its environment. Entropy can only be defined properly if the system is in internal equilibrium.

[4] Lucas 1973, 43-56.

[5] Omnès 1994, 193-198, 315-319.

[6] Dijksterhuis 1950; Reichenbach 1956; Gold (ed.) 1967; Grünbaum 1973; 1974; Sklar 1974, chapter V; Sklar 1993; Prigogine 1980; Coveney, Highfield 1990.

[7] Compare Reichenbach 1956, 135: ‘The direction of time is supplied by the direction of entropy, because the latter direction is made manifest in the statistical behaviour of a large number of separate systems, generated individually in the general drive to more and more probable states.’ But on p. 115 Reichenbach observes: ‘The inference from time to entropy leads to the same result whether it is referred to the following or to preceding events’. Putnam 1975, 88 concludes that ‘… the one great law of irreversibility (the Second Law) cannot be explained from the reversible laws of elementary particle mechanics…’.

[8] The international physical community, organized in the Conférence Générale des Poids et Mesures, designed the metric system of units and scales. The basic magnitudes and units of the Système International (SI) are: length (metre), mass (kilogram), kinetic time (second), electric current (ampère), temperature (kelvin), amount of matter (mol) and luminosity (candela). All other units are derived from these. Theoretically, a different base could have been chosen, e.g. electric charge or potential difference instead of current. The choice is made especially with regard to the possibility of establishing the unit and metric concerned with great precision. Physicists and astronomers do not always stick to these agreements, using the speed of light, the light year or the charge of the electron as alternatives to the standard units.

[9] von Laue 1949; Jammer 1961; Elkana 1974a; Harman 1982.

[10] The formula means that mass and energy are equivalent, that each amount of energy corresponds with an amount of mass and conversely. It does not mean that mass is a form of energy, or can be converted into energy.

[11] Because energy is not easy to measure, its metric and unit (joule) are derived from those of mass, length and time: 1 J = 1 kg·m²/s², or alternatively from electric current, potential difference and time: 1 J = 1 A·V·s.

[12] For the amount of matter, moles are used as well. A mole is the quantity of matter containing as many elementary particles (i.e., atoms, molecules, ions, electrons etc.) as there are atoms in 0.012 kg of carbon-12.

[13] Angular frequency equals 2π times the frequency. The moment of inertia expresses the distribution of a body’s matter with respect to a rotation axis.

[14] About the history of the concept of force, see Jammer 1957. On Newton’s views, see Cohen, Smith (eds.) 2002.

[15] Morse 1964, 53-58; Callen 1960, 79-81; Stafleu 1980, 70-73. The definition of the metric of pressure is relatively easy, but finding the metric of electric potential caused almost as much trouble as the development of the thermodynamic temperature scale.

[16] A current in a superconductor is a boundary case. In a closed superconducting circuit without a source, an electric current may persist indefinitely, whereas a normal current would die out very fast.

[17] Thermo-electricity is the phenomenon that a heat current causes an electric current (Seebeck effect) or conversely (Peltier effect), see Callen 1960, 293-308. This is applied in the thermo-electric thermometer, measuring a temperature difference by an electric potential difference. Relations between various types of currents are subject to a symmetry relation discovered by William Kelvin and generalized by Lars Onsager, see Morse 1964, 106-118; Callen 1960, 288-292; Prigogine 1980, 84-88.

[18] Sklar 1993, chapters 5-7.

[19] About 1900, the electromagnetic worldview supposed that all physical and chemical interactions could be reduced to electromagnetism, see McCormmach 1970a; Kragh 1999, chapter 8. Just like the modern unification program, it aimed at deducing the (rest-) mass of elementary particles from the fundamental interaction, see Jammer 1961, chapter 11.

[20] SU(3) means special unitary group with three variables. The particles in a representation of this group have the same spin and parity (together one variable), but different values for strangeness and one component of isospin.

[21] Symmetry is as much an empirical property as any other one. After the discovery of antiparticles it was assumed that charge conjugation C (symmetry with respect to the interchange of a particle with its antiparticle), parity P (mirror symmetry) and time reversal T are properties of all fundamental interactions. Since 1956 it has been experimentally established that β-decay has no mirror symmetry unless combined with charge conjugation (CP). In 1964 it turned out that weak interactions are only symmetrical with respect to the product CPT, such that even T alone is no longer universally valid.

[22] Pickering 1984, chapters 9-11; Pais 1986, 603-611. The J/ψ particle established the existence of charm as the fourth flavour of quarks in 1974. In 1977 the fifth quark was found (bottom), in 1978 the tauon, in 1995 the sixth quark (top). In order to explain the mass of field particles and other particles, the standard model needs the Higgs particle in the Higgs field (named after Peter Higgs), which was found experimentally in 2012. In the standard model, some constants of nature serve as a datum for the theory. Their values do not follow from the theory, but have to be established by experiments. New theories, replacing point-like particles by strings and postulating a ‘supersymmetry’ between fermions and bosons, have so far not led to empirically confirmable results, see e.g. ’t Hooft 1992. Some other unsolved problems will be mentioned below.

[23] Historically the suffix –on goes back to the electron. Whether the connection with ontology has really played a part is unclear. See Walker, Slack 1970, who do not mention Faraday’s ion. The word electron comes from the Greek word for amber or fossilized resin, since antiquity known for its properties that we now recognize as static electricity. From 1874, George Stoney used the word electron for the elementary amount of charge. Only in the twentieth century, electron became the name of the particle identified by Joseph Thomson in 1897. Ernest Rutherford introduced the names proton and neutron in 1920 (long before the actual discovery of the neutron in 1932). Gilbert Lewis baptized the photon in 1926, 21 years after Albert Einstein proposed its existence.

[24] See Millikan 1917; Anderson 1964; Thomson 1964; Pais 1986; Galison 1987; Kragh 1990; 1999.

[25] Pickering 1984, 67; Pais 1986, 466: ‘The agreement between experiment and theory shown by these examples, the highest point in precision reached anywhere in the domain of particles and fields, ranks among the highest achievements of twentieth-century physics.’

[26] In a collision between two electrons, the assumption that they do or do not keep their identity leads to different predictions for the result. Experimentally, it turns out that they do not maintain their identity.

[27] 1 MeV is 1 million electronvolt. 1 eV equals the energy that a particle having the elementary charge gains by proceeding through an electric potential difference of 1 Volt.

[28] Neutrinos are stable, their rest mass is zero or very small, and they are only susceptible to weak interaction. Neutrinos and anti-neutrinos differ by their parity, the one being left-handed, the other right-handed. (This distinction is only possible for particles having zero rest mass. If neutrinos have a rest mass different from zero, as some recent experiments suggest, the theory has to be adapted with respect to parity.) That the three neutrinos differ from each other is established by processes in which they are or are not involved, but in what respect they differ is less clear. For some time, physicists expected the existence of a fourth generation, but the standard model restricts itself to three, because astrophysical cosmology implies the existence of at most three different types of neutrinos with their antiparticles.

[29] Weisskopf 1972, 41-51.

[30] From scattering experiments with electrons at high energy, it follows that a proton as well as a neutron has three hard kernels, each with an electric charge of (1/3)e or (2/3)e. Like electrons in an atom, quarks may have an orbital angular momentum besides their spin angular momentum, such that mesons and baryons may have a spin larger than 3/2.

[31] A free neutron decays into a proton, an electron and an antineutrino. The law of conservation of baryon number is responsible for the stability of the proton, being the baryon with the lowest rest energy. The assumption that this law is not absolutely valid, the proton having a decay time of the order of 10³¹ years, is not confirmed experimentally.

[32] This is the so-called time-independent Schrödinger equation, determining stationary states and energy levels.

[33] Positronium is a short-lived composite of an electron and a positron, the only spatially founded structure consisting entirely of leptons.

[34] See Barrow, Tipler 1986, 5, 252-254.

[35] The symmetry of strong nuclear interaction is broken by electroweak interaction. For the strong interaction, the proton and the neutron are symmetrical particles having the same rest energy, but the electroweak interaction causes the neutron to have a slightly larger rest energy and to be metastable as a free particle.

[36] Cat 1998, 288: ‘The unifying symmetry Weinberg seems to propose as a picture of the world as it is can, if true, be neither universal nor complete.’

[37] In the theory of evolution too, the idea of increasing complexity is widely used but hard to define and to apply in practice, see McShea 1991.

[38] Even in the ground state at zero temperature the atoms oscillate, but this does not give rise to a wave motion.

[39] This applies to the superconducting metals and alloys known before 1986. For the ceramic superconductors, discovered since 1986, this explanation is not sufficient.

[40] This phenomenon is called Bose condensation. A similar situation occurs in liquid helium below 2.1 K.

[41] The zero point of energy is the potential energy at a large mutual distance.

[42] A small increase of entropy (ΔS) is equal to the corresponding increase of energy (ΔE) divided by the temperature (T): ΔS = ΔE/T, if other extensive magnitudes like volume are kept constant. If two bodies at different temperatures make thermal contact, one body loses as much energy as the other gains. Hence, the entropy loss of the hot body is smaller than the entropy gain of the cold body, and the total entropy increases.

[43] A more detailed explanation depends on the property of a water molecule to have a permanent electric dipole moment (5.3). Each sodium or chlorine ion is surrounded by a number of water molecules, decreasing their net electric charge. This causes the binding energy to be less than the mean kinetic energy of the molecules.

[44] The negative logarithm (base 10) of the molar concentration of protons is called the pH value. For pure water at 25 °C, pH = 7, meaning that about one in half a billion molecules is ionized. A water molecule may lose or gain a proton. Most H⁺ ions are coupled to a water molecule to become H₃O⁺ (hydronium).

[45] Callen 1960, 206-207. The number of degrees of freedom f is defined as the number of variables (temperature, pressure, and concentration) that can be chosen freely to describe the state of a chemical component. The number of components is r, and between the components c different chemical reactions are possible. The number of different phases is m. Now Gibbs’s phase rule is f = (r + 2) - m - c. For the equilibrium of ice, water, and its vapour r = 1, m = 3, c = 0, hence f = 0. This means that this equilibrium can exist at only one value for temperature and pressure, the so-called triple point (temperature 273.16 K = 0.01 °C, pressure 611.2 Pa).

[46] As far as change seems to presuppose motion, only physical events and processes should be called real changes. But each motion means a change of position, and transformations are changes of form.

[47] The law of decay is given by the exponential function N(t) = N(t₀)·exp(-(t-t₀)/τ). Here N(t) is the number of radioactive particles at time t, and τ is the characteristic decay time. The better known half-life equals τ·ln 2 ≈ 0.693τ. This formula is an approximation, because N is not a continuous variable but a natural number. Like all statistical laws, the decay law is only applicable to a homogeneous aggregate.

[48] Einstein 1916. In stimulated emission, an incoming photon causes the emission of another photon, such that there are two photons after the event, mutually coherent, i.e., having the same phase and frequency. Stimulated emission plays an important part in lasers and masers, which produce coherent light and microwave radiation, respectively. Absorption is always stimulated.

[49] Mason 1991, 50.

[50] Hawking 1988, chapter 6, 7.

[51] Mason 1991, chapter 4.

[52] Barrow, Tipler 1986, 6-9.

 

 


 

Conclusion

 

This treatise investigates the mathematical foundations of quantum physics from the perspective of the philosophy of dynamic development. In contrast to the usual approach, it does not focus on a high-level mathematical theory having little or no connection to experimental science, but emphasizes the physical content. By including the emergence of new structures, it is moreover open to the propensity of physics to act as the foundation of biology. This will be developed in a separate dissertation.

 

 


 

Cited literature

 

Achinstein, P. 1971, Law and explanation, Oxford.

Achinstein, P. 1991, Particles and waves, Historical essays in the philosophy of science, Oxford.

Allen, K. 1995, ‘A revolution to measure: The political economy of the metric system in France’, in Wise (ed.) 1995: 39-71.

Anderson, D.L. 1964, The discovery of the electron, Princeton NJ.

Anderson, P. 1995, ‘Historical overview of the twentieth century in physics’, in Brown, Pais, Pippard (eds.) 1995: 2017-2032.

 

Barrow, J.D. 1990, Theories of everything, Oxford (Theorieën over alles, Amsterdam 1992).

Barrow, J.D. 1992, Pi in the sky, Counting, thinking and being, Oxford; London 1993.

Barrow, J.D., Tipler, F.J. 1986, The anthropic cosmological principle, Oxford.

Bastin, T. (ed.) 1971, Quantum theory and beyond, Cambridge.

Belinfante, F.J. 1975,  Measurements and time reversal in objective quantum theory, Oxford.

Bellone, E. 1980, A world on paper, Studies on the second scientific revolution, Cambridge Mass. 1982.

Bohr, N. 1934, Atomic theory and the description of nature, Cambridge 1961 (Atoomtheorie en natuurbeschrijving, Utrecht 1966).

Bohr, N. 1949, ‘Discussion with Einstein on epistemological problems in atomic physics’, in Schilpp (ed.) 1949, 199-241.

Bohr, N., Kramers, H.A., Slater, J.C. 1924, ‘The quantum theory of radiation’, Philosophical Magazine 47: 785-802; reprinted in van der Waerden (ed.) 1967: 159-176.

Born, M. 1949, Natural philosophy of cause and chance, New York 1964.

Bots, J. 1972, Tussen Descartes en Darwin, geloof en natuurwetenschap in de 18de eeuw in Nederland, Assen.

Braithwaite, R.B. 1953, Scientific explanation, Cambridge.

Brody, B.A. (ed.) 1970, Readings in the philosophy of science, Englewood Cliffs, NJ.

Brooke, J.H. 1992, ‘Natural law in the natural sciences’,  Science and Christian Beliefs, 4: 83-103.

Brown, J.R. 1999, Philosophy of mathematics, London.

Brown, L.M., Pais, A., Pippard B. (eds.) 1995, Twentieth century physics, 3 vols., Bristol, New York.

Bunge, M. (ed.) 1967, Quantum theory and reality, Berlin.

Bunge, M. 1967a, Foundations of physics, Berlin.

Bunge, M. 1967b, Scientific research, I: The search for system, II: The search for truth, Berlin.

 

Callen, H.B. 1960, Thermodynamics, New York.

Caneva, K.L. 2005, ‘’Discovery’ as a site for the collective construction of scientific knowledge’, Historical Studies in the Physical Sciences 35: 175-291.

Carnap, R. 1966, Philosophical foundations of physics, New York.

Carroll, J.W. 1994, Laws of nature, Cambridge.

Cartwright, N. 1983, How the laws of physics lie, Oxford.

Cassirer, E. 1910, Substance and function; Einstein's theory of relativity, Chicago 1923; New York 1953 (Substanzbegriff und Funktionsbegriff, 1910; Zur Einstein'schen Relativitätstheorie, 1921).

Cat, J. 1998, ‘The physicists’ debates on unification in physics at the end of the 20th century’, Historical Studies in the Physical and Biological Sciences 28: 253-299.

Charles, D., Lennon, K. (eds.) 1992, Reduction, explanation, and realism, Oxford.

Clarke, D.M. 2006, Descartes, A biography, Cambridge.

Clouser, R.A. 1991a, The myth of religious neutrality, An essay on the hidden role of religious belief in theories, Notre Dame. (Second revised edition 2005).

Cohen, H.F. 2010, How modern science came into the world. Four civilizations, one 17th century breakthrough, Amsterdam 2012.

Cohen, I.B. 2002, ‘Newton’s concepts of force and mass’, in: Cohen, Smith (eds.) 2002, 57-84.

Cohen, I.B. (ed.) 1958, Isaac Newton’s papers and letters on natural philosophy, Cambridge Mass.

Cohen, I.B., Smith G.E. (eds.) 2002, The Cambridge companion to Newton, Cambridge.

Coveney, P., Highfield R. 1990, The arrow of time, London.

Crease, R.P. 1993, The play of nature, experimentation as performance, Bloomington.

 

Dampier, W.C. 1929, A history of science, Cambridge 1966 (fourth edition).

Darrigol, O. 1986, ‘The origin of quantified matter waves’, Historical Studies in the Physical and Biological Sciences 16: 197-253.

Dengerink, J.D. 1986, De zin van de werkelijkheid, Amsterdam.

Dijksterhuis, E.J. 1950, De mechanisering van het wereldbeeld, Amsterdam (The mechanization of the world picture, Oxford 1961).

Disalle, R. 2002, ‘Newton’s philosophical analysis of space and time’, in: Cohen, Smith (eds.) 2002, 33-56.

Dooyeweerd, H. 1935-36, De wijsbegeerte der wetsidee, 3 delen, Amsterdam.

Dooyeweerd, H. 1953-1958, A new critique of theoretical thought, 4 vols. (revised translation of Dooyeweerd 1935-36), Amsterdam.

 

Einstein, A. 1905, ‘On the electrodynamics of moving bodies’, ‘Does the inertia of a body depend upon its energy-content?’ in Einstein et al. 1922: 35-65, 67-71 (translation of ‘Zur Elektrodynamik bewegter Körper’; ‘Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?’, Annalen der Physik 17: 891-921; 18: 639-641).

Einstein, A. 1916, ‘On the quantum theory of radiation’, in van der Waerden (ed.) 1967: 63-77 (translation of ‘Zur Quantentheorie der Strahlung’, Physikalische Zeitschrift 18, 1917, 121).

Einstein, A. et al. 1923, The principle of relati­vity, London 1923 (Das Relativitätsprinzip, 1922).

Elkana, Y. 1974, The discovery of the conservation of energy, Cambridge, Mass.

 

Feyerabend, P.K. 1970, ‘How to be a good empiricist’, in Brody (ed.) 1970: 319-342.

Fraassen, B. van 1989, Laws and symmetry, Oxford.

Freeman, E., Sellars, W. (eds.) 1971, Basic issues in the philosophy of time, La Salle, Ill.

French, A.P. 1965, Newtonian mechanics, New York 1971.

Frey, G. 1958, Gesetz und Entwicklung in der Natur, Hamburg.

 

Galilei, G. 1632, Dialogue concerning the two chief world systems, Berkeley (1953) 1974.

Galison, P. 1987, How experiments end, Chicago.

Gaukroger, S. 2006, The emergence of a scientific culture, Science and the shaping of modernity 1210-1685, Oxford.

Gaukroger, S. 2010,  The collapse of mechanism and the rise of sensibility, Science and the shape of modernity 1680-1760, Oxford.

Gehlen, A. 1940, Der Mensch, Seine Natur und seine Stellung in der Welt (4. verbesserte Auflage 1950), Bonn.

Giere, R.N. 1988, Explaining science, Chicago.

Gödel, K. 1962, On formally undecidable propositions of principia mathematica and related systems, Edinburgh.

Gold, T. (ed.) 1967, The nature of time, Ithaca NY.

Griffioen, S., Balk, B.M. (eds.) 1995, Christian philosophy  at the close of the twentieth century, Kampen.

Grünbaum, A. 1968, Geometry and chronometry in philosophical perspective, Minneapolis.

Grünbaum, A. 1973, Philosophical problems of space and time, New York 1963; second, enlarged edition, Dordrecht 1974.

Grünbaum, A. 1974, ‘Popper’s view on the arrow of time’, in: Schilpp (ed.) 1974: 775-797.

 

Hanson, N.R. 1958, Patterns of discovery, Cambridge.

Hanson, N.R. 1963, The concept of the positron, Cambridge.

Harman, P.M. 1982, Energy, force and matter, Cambridge.

Harper, W. 2002, ‘Newton’s argument for universal gravitation’, in: Cohen, Smith (eds.) 2002, 174-201.

Hart, H. 1984, Understanding our world, Lanham.

Hawking, S. 1988, Het heelal, Amsterdam (A brief history of time, New York 1988).

Healey, R. 1989, The philosophy of quantum mechanics, An interactive interpretation, Cambridge.

Heisenberg, W. 1930, The physical principles of the quantum theory, Chicago 1930; New York 1949 (Die physikalischen Prinzipien der Quantentheorie, Leipzig 1930).

Heisenberg, W. 1958, Physics and philosophy, London (Physik und Philosophie, Stuttgart 1959; Frankfurt-am-Main 1970).

Hempel, C.G. 1965, Aspects of scientific explanation, New York.

Hesse, M.B. 1974, The structure of scientific inference, London.

Hooft, G.’t 1992, De bouwstenen van de schepping, Amsterdam (In search of the ultimate building blocks, Cambridge 1997).

Howell, R.W., Bradley, W.J. (eds.) 2001, Mathematics in a postmodern age, A Christian perspective, Grand Rapids, Mich.

Howson, C. (ed.) 1976, Method and appraisal in the physical sciences, Cambridge.

Huygens, C. 1690, Traité de la lumière, Brussel 1967 (Treatise on light, New York 1962; Verhandeling over het licht, Utrecht 1990).

 

Jammer, M. 1954, Concepts of space, Cambridge, Mass.; New York 1960 (enlarged).

Jammer, M. 1957, Concepts of force, Cambridge, Mass.

Jammer, M. 1961, Concepts of mass, Cambridge, Mass.

Jammer, M. 1966, The conceptual development of quantum mechanics, New York.

Jammer, M. 1974, The philosophy of quantum mechanics, New York.

 

Kastner, R. E. 2013, The transactional interpretation of quantum mechanics, The reality of possibility, Cambridge.

Katzir, S. 2004, ‘The emergence of the principle of symmetry in physics’, Historical Studies in the Physical Sciences 35, 35-65.

Kevles, D.J. 1997, ‘Big science and big politics in the United States: reflections on the death of the SSC and the life of the human genome project’, Historical Studies in the Physical and Biological Sciences 27: 269-298.

Khinchin, A.I. 1949, Mathematical foundations of statistical mechanics, New York.

Klein, M.J. 1964, ‘Einstein and the wave-particle duality’, The Natural Philosopher 3, 1-49.

Kolakowski, L. 1966, Die Philosophie des Positivismus, München (Positivist philosophy, Harmondsworth 1972).

Kragh, H. 1990, Dirac: A scientific biography, Cambridge.

Kragh, H. 1999, Quantum generations, A history of physics in the twentieth century, Princeton.

Kuhn, T.S. 1962, The structure of scientific revolutions, second edition 1970 (including ‘Postscript-1969’), Chicago.

Kuhn, T.S. 1978, Black body theory and the quantum discontinuity, Oxford.

 

Lakatos, I. 1970, ‘Falsification and the methodology of scientific research programmes’, in Lakatos, Musgrave 1970: 91-196; reprinted in Lakatos 1978, chapter 1: 8-101.

Lakatos, I. 1978, The methodology of scientific research programmes, Cambridge.

Lakatos, I., Musgrave, A. (eds.) 1970, Criticism and the growth of knowledge, Cambridge.

Laudan, L. 1977, Progress and its problems, Towards a theory of scientific growth, Berkeley.

Laue, M. von 1949, ‘Inertia and energy’, in Schilpp (ed.) 1949: 501-533.

Lindberg, D.C., Numbers, R.L. (eds.) 1986, God and nature, Historical essays on the encounter between Christianity and science, Berkeley.

Lucas, J.R. 1973, A treatise on time and space, London 1976.

 

Mach, E. 1883, Die Mechanik, historisch-kritisch dargestellt, Darmstadt 1973 (The science of mechanics, La Salle, Ill. 1960).

Margenau, H. 1950, The nature of physical reality, New York.

Mason, S.F. 1991, Chemical evolution, Origin of the elements, molecules, and living systems, Oxford.

Maxwell, J.C. 1860, ‘Illustrations of the dynamical theory of gases’, in Maxwell 1890, I: 377-409.

Maxwell, J.C. 1877, Matter and motion, Cambridge; London 1920; New York 1952.

Maxwell, J.C. 1890, The scientific papers of James Clerk Maxwell (W.D.Niven, ed.), 2 vols. bound in one, New York 1965.

McCormmach, R. 1970, ‘H.A. Lorentz and the electromagnetic view of nature’, Isis 61, 459-497.

McIntire, C.T. (ed.) 1985, The legacy of Herman Dooyeweerd, Lanham.

Mehlberg, H. 1971, ‘Philosophical aspects of physical time’, in Freeman, Sellars (eds.) 1971: 16-60.

Messiah, A. 1961-62, Quantum mechanics, Amsterdam (Mécanique quantique, Paris 1958).

Meyer-Abich, K.M. 1965, Korrespondenz, Individualität und Komplementarität, Wiesbaden.

Millikan, R.A. 1917, The electron, Chicago 1963.

Minkowski, H. 1908, ‘Space and time’, in Einstein et al. 1922: 73-91 (translation of Raum und Zeit, address delivered in 1908).

Mises, R. von 1939, Positivism, Cambridge, Mass. 1951; New York 1968 (Kleines Lehrbuch des Positivismus, 1939).

Morse, P.M. 1964, Thermal physics, New York.

 

Nagel, E. 1939, Principles of the theory of probability, Chicago 1969.

Nagel, E. 1961, The structure of science, New York.

Newton, I. 1687, Sir Isaac Newton's Mathematical principles of natural philosophy, Berkeley 1971 (Philosophiae naturalis principia mathematica 1687, translated by A. Motte 1729, revised by F. Cajori 1934).

Newton, I. 1704, Opticks, New York 1952.

Niiniluoto, I. 1999, Critical scientific realism, Oxford.

 

Omnès, R. 1994, The interpretation of quantum mechanics, Princeton.

 

Pais, A. 1982, Subtle is the Lord, The science and the life of Albert Einstein, Oxford.

Pais, A. 1986, Inward bound, of matter and forces in the physical world, Oxford.

Pais, A. 1991, Niels Bohr’s times, in physics, philosophy, and polity, Oxford.

Papineau, D. 1993, Philosophical naturalism, Oxford.

Pickering, A. 1984, Constructing quarks, A sociological history of particle physics, Chicago.

Popper, K.R. 1959, The logic of scientific discovery, London (revised 1960, 1968; original: Logik der Forschung, Wien 1934).

Popper, K.R. 1963, Conjectures and refutations, London 1976 (De groei van kennis, Meppel 1978).

Popper, K.R. 1967, ‘Quantum theory without “The Observer”’, in Bunge (ed.) 1967: 7-44.

Popper, K.R. 1972, Objective knowledge, Oxford 1974.

Popper, K.R. 1974, ‘Autobiography’, in Schilpp (ed.) 1974, 1-181.

Popper, K.R. 1983, Realism and the aim of science, London.

Psillos, S. 1999, Scientific realism, How science tracks truth, London.

Putnam, H. 1975, Mathematics, matter and method, Cambridge 1979 (with additional chapter).

 

Quine, W.V.O. 1963, Set theory and its logic, Cambridge Mass. 1971.

 

Raman, V.V., Forman, P. 1969, ‘Why was it Schrödinger who developed De Broglie's ideas?’, Historical Studies in the Physical Sciences 1: 291-314.

Reichenbach, H. 1956, The direction of time, Berkeley 1971.

Reichenbach, H. 1957, The philosophy of space and time, New York (original: Philosophie der Raum-Zeit-Lehre, 1927).

Rindler, W. 1969, Essential relativity, New York 1977 (revised).

 

Sabra, A.I. 1967, Theories of light, from Descartes to Newton, Cambridge 1981.

Schilpp, P.A. (ed.) 1949, Albert Einstein, Philosopher-scientist, New York 1959.

Schilpp, P.A. (ed.) 1974, The philosophy of K.R. Popper, La Salle, Ill.

Seth, S. 2004, ‘Quantum theory and the electromagnetic world-view’, Historical Studies in the Physical Sciences 35, 67-93.

Settle, T. 1974, ‘Induction and probability unfused’, in Schilpp (ed.) 1974: 697-749.

Shapiro, S. 1997, Philosophy of mathematics, Structure and ontology, Oxford.

Sklar, L. 1974, Space, time and spacetime, Berkeley.

Sklar, L. 1993, Chance, Philosophical issues in the foundations of statistical mechanics, Cambridge.

Slater, J.C. 1975, Solid state and molecular theory, a scientific biography, New York.

Smith, G.E. 2002, ‘The methodology of the Principia’, in: Cohen, Smith (eds.) 2002, 138-173.

Sneed, J.D. 1971, The logical structure of mathematical physics, second revised edition 1979, Dordrecht.

Stafleu, M.D. 1966, ‘Quantumfysica en wijsbegeerte der wetsidee’, Philosophia Reformata 31: 126-156.

Stafleu, M.D. 1968, ‘Individualiteit in de fysica’, in: D.M. Bakker e.a. 1968, Reflexies, opstellen aangeboden aan Prof. Dr. J.P.A. Mekkes, ter gelegenheid van zijn zeventigste verjaardag, Amsterdam, 287-305.

Stafleu, M.D. 1970, ‘Analysis of time in modern physics’, Philosophia Reformata 35: 1-24, 119-131.

Stafleu, M.D. 1972, ‘Metric and measurement in physics’, Philosophia Reformata 37: 42-57.

Stafleu, M.D. 1980, Time and again, A systematic analysis of the foundations of physics, Toronto; Bloemfontein. www.mdstafleu.nl.

Stafleu, M.D. 1985, ‘Spatial things and kinematic events (On the reality of mathematically qualified structures of individuality)’, Philosophia Reformata 50: 9-20.

Stafleu, M.D. 1987, Theories at work, On the structure and functioning of theories in science, in particular during the Copernican revolution, Lanham, New York, London.

Stafleu, M.D. 1989, De verborgen structuur, Wijsgerige beschouwingen over natuurlijke structuren en hun samenhang, Amsterdam.

Stafleu, M.D. 1994, ‘De structuur der materie in de wijsbegeerte van de wetsidee’, in: H.G. Geertsema e.a. (red.), Herman Dooyeweerd 1894-1977, Breedte en actualiteit van zijn filosofie, Kampen, 114-142.

Stafleu, M.D. 1995, ‘The cosmochronological idea in natural science’, in Griffioen, Balk (eds.) 1995: 93-111.

Stafleu, M.D. 1996, ‘Filosofie van de natuurwetenschap’, in: R. van Woudenberg (red.) 1996: 177-202, Amsterdam.

Stafleu, M.D. 1998, Experimentele filosofie, Geschiedenis van de natuurkunde vanuit een wijsgerig perspectief, Amsterdam (Stafleu 2006, part III).

Stafleu, M.D. 1999, ‘The idea of natural law’, Philosophia Reformata 64: 88-104.

Stafleu, M.D. 2002, Een wereld vol relaties, Karakter en zin van natuurlijke dingen en processen, Amsterdam (translation: Stafleu 2006, part IV; Stafleu 2010).

Stafleu, M.D. 2006, Relations and characters in Protestant philosophy, www.allofliferedeemed.co.uk/stafleu.htm.

Stafleu, M.D. 2010, A world full of relations, www.scribd.com/doc/29057727 (pdf, translation of Stafleu 2002).

Stafleu, M.D. 2016, Theory and experiment, Christian philosophy of science in a historical context, revised edition of Stafleu 1987 combined with Stafleu 1998, www.mdstafleu.nl.

Stafleu, M.D. 2017, The open future, Contours of a Christian philosophy of dynamic development, www.mdstafleu.nl.

Stafleu, M.D. 2018a, Encyclopedia of relations and characters. I. Natural laws. II. Normative principles, www.mdstafleu.nl.

Stafleu, M.D. 2018b, Nature and freedom, Philosophy of nature, Natural theology, Enlightenment and Romanticism, www.mdstafleu.nl.

Strauss, D.F.M. 2009, Philosophy, Discipline of the disciplines, Grand Rapids MI.

Suppe, F. (ed.) 1977, The structure of scientific theories (first edition 1973, enlarged 1977), Urbana.

Suppe, F. 1977, ‘The search for philosophical understanding of scientific theories’, in Suppe (ed.) 1977: 3-241; ‘Afterword-1977’, ibid.: 617-730.

Swartz, N. 1985, The concept of physical law, Cambridge.

 

Thomson, G.P. 1964, J.J. Thomson and the Cavendish laboratory in his day, London (J.J. Thomson, Discoverer of the electron, New York 1966).

Tolman, R.C. 1938, The principles of statistical mechanics, London 1959.

Torretti, R. 1999, The philosophy of physics, Cambridge.

Toulmin, S. 1972, Human understanding, Princeton.

Toulmin, S., Goodfield, J. 1965, The discovery of time, London; Harmondsworth 1967.

 

Verburg, M.E. 1989, Herman Dooyeweerd, Leven en werk van een Nederlands christen-wijsgeer, Baarn.

 

Waerden, B.L.van der (ed.) 1967, Sources of quantum mechanics, Amsterdam; New York 1968.

Walker, C.T., Slack, G.A. 1970, ‘Who named the –on’s?’, American Journal of Physics 38: 1380-1389.

Weinberg, S. 1995, ‘Nature itself’, in Brown, Pais, Pippard (eds.) 1995: 2033-2040.

Weiner, C.(ed.) 1977, History of 20th-century physics, New York.

Weisskopf, V.F. 1972, Physics in the twentieth century: Selected essays, Cambridge, Mass.

Wertheim, M. 1995, Pythagoras’ trousers, God, physics and the gender wars, New York.

Weyl, H. 1928, The theory of groups and quantum mechanics, New York no date (Gruppentheorie und Quantenmechanik, second revised edition 1930).

Wise, M.N. (ed.) 1995, The values of precision, Princeton.

Wolterstorff, N. 1976,  De rede binnen de grenzen van de religie, Amsterdam 1993 (Reason within the bounds of religion, Grand Rapids 1976).

Woudenberg, R. van 1992, Gelovend denken, Inleiding tot een christelijke filosofie, Amsterdam.

Woudenberg, R. van (red.) 1996, Kennis en werkelijkheid, Tweede inleiding tot een christelijke filosofie, Amsterdam.