Dynamic development in


the history of science








© 2022 (revised edition of Nature and Freedom): M.D. Stafleu










1. Francis Bacon’s experimental history

1.1. The meaning of the ideas of nature and of freedom

1.2. Experimental history

1.3. Medieval development of technology


2. Galileo as assayer

2.1. Galileo’s enlightened philosophy

2.2. Galileo and the Jesuits

2.3. The double truth


3. Mechanical philosophy in the Dutch Republic

3.1. René Descartes: founder of modernism

3.2. Descartes’ natural philosophy

3.3. Christiaan Huygens’ moderate mechanicism

3.4. The radical mechanicism of Benedict Spinoza


4. Dynamical philosophy

4.1. Isaac Newton’s two faces: Principia and Opticks

4.2. Johann Kepler

4.3. Matter and force

4.4. Measurement of time and motion


5. Experimental philosophy

5.1. Methodical isolation

5.2. Newton’s synthesis

5.3. Moderate Enlightenment

5.4. Blaise Pascal


6. Laws of nature

6.1. The Renaissance search for order

6.2. Experimental philosophy discovers natural laws a posteriori

6.3. Variable and invariable properties

6.4. Laws and causality

6.5. Ernst Mach’s instrumentalist view of natural laws


7. Knowledge of natural laws

7.1. Formulating natural laws

7.2. Scientific knowledge of natural laws

7.3. Induction and deduction as complementary heuristics

7.4. Rules of reasoning in Newton’s heuristics

7.5. Successive approximation

7.6. The myth of linear progress


8. The search for structure

8.1. Successive views on particles and elements

8.2. Enlightened chemistry: compounds

8.3. John Dalton’s structural atomism

8.4. The reality of atoms and molecules

8.5. The hidden structure of matter


9. Randomness

9.1. Natural laws in modern physics

9.2. Mechanical determinism

9.3. Random processes

9.4. Philosophical and theological objections

9.5. Laws for random events

9.6. Lawfulness in biology


10. The Romantic turn

10.1. What is Romanticism?

10.2. Gottfried Leibniz

10.3. From pietism to evangelicalism

10.4. Immanuel Kant

10.5. The unity of all natural forces

10.6. German Naturphilosophie

10.7. Unification

10.8. Energeticism and positivism


11. Naturalism

11.1. Radical enlightenment

11.2. Moderate Enlightenment and Contra-Enlightenment

11.3. Physico-theology

11.4. The uniformity of natural laws

11.5. Biblical exegesis

11.6. Enlightened biology

11.7. Evolutionism


12. Relativism

12.1. Romanticism and history

12.2. Public facts

12.3. Crisis and revolution

12.4. The crisis of 1910

12.5. Postmodern relativism


13. Values and norms

13.1. Animal behaviour and human activity

13.2. Philosophical ethics

13.3. Immanuel Kant’s transition from naturalism to moralism

13.4. Faith and religion


14. Critical realism

14.1. Karl Popper’s critical realism

14.2. The critical-realistic ethos of the scientific community


15. Christian critical-realistic philosophy of science

15.1. Discovery of the law

15.2. Dynamic coherence

15.3. Abundancy of kinds

15.4. Evolution of natural characters

15.5. Natural behaviour and normative acts

15.6. Values and norms for human acts and relations

15.7. Growing technical ability

15.8. Lingual and logical artefacts

15.9. Increasing socialization

15.10. The state and the public domain




Index of cited works

Index of historical persons








This book investigates several ‘driving forces’ in the history of science. One of these is the Enlightenment motive of nature and freedom. The age of Enlightenment is often restricted to the eighteenth century, but the spirit of the Enlightenment is a hundred years older and still very much alive.[1] The expression nature and freedom indicates a continuing controversy within Western philosophy since the seventeenth century. It oscillates between two poles: on the one hand, the scientific control of nature; on the other hand, the freedom of the human personality, the alleged autonomy of humankind. It is the tension between the lawfulness of nature, to which humans also belong, and human freedom transcending it. This book discusses this theme from the viewpoints of natural philosophy and of natural theology, that is, theology basing itself on natural thought independent of divine revelation. From the sixteenth century onward it was almost inevitable that the findings of science would collide with theology based on revelation.

Until the end of the eighteenth century, philosophers were not distinguished from scientists, but they distanced themselves from theology, the most important stronghold of the Counter-Enlightenment. The Renaissance, Enlightenment, and Romantic philosophers shared an aversion to Aristotle. Therefore they experienced much resistance from conservative philosophers and especially from scholastic theologians, who accepted the accommodation of Aristotelian philosophy to Christian faith, as proposed by Thomas Aquinas. This included Aristotle’s view that natural philosophy or metaphysics is concerned, among other things, with the nature of God and with various arguments in proofs of God’s existence. In this conflict Copernicanism acted as a shibboleth: with few exceptions, Enlightenment philosophers were Copernicans, and Counter-Enlightenment philosophers and theologians were not. In this conflict, the emphasis gradually shifted from planetary science to the theory of biological evolution, from Copernicus to Darwin.

Nature and freedom starts with Francis Bacon, long considered the figurehead of the Enlightenment, though later less appreciated. Simultaneously with Bacon, the Enlightenment made a false start in Italy with the views of Galileo Galilei, successfully opposed by the Jesuits. In the Dutch Republic the rationalist Enlightenment flowered with René Descartes and Baruch Spinoza, followed by the empiricism of Robert Boyle, Isaac Newton, and John Locke. Meanwhile the modern idea of natural law was developed. During the eighteenth century the emphasis in science shifted from generic to structural laws in chemistry. Simultaneously naturalism emerged, with evolutionism as its most radical variant. In Immanuel Kant’s thought the primacy of the command of nature shifted to the primacy of morality. At the end of the eighteenth century the Enlightenment’s science ideal was replaced by the Romantic ideal of personality. The increasing emphasis on historical, cultural, and social influences on science led to postmodern relativism, which together with evolutionism dominates present-day natural philosophy.

Nature and freedom is not only the leading theme of the Enlightenment and Romanticism. It also refers to the emancipation of those who were later called scientists. For their investigation of nature, they were no longer willing to accept the rulings of theologians and philosophers, but found the touchstone of their freely developed and argued theories in their own observations, measurements, and experiments. Since the end of the twentieth century, their critical realism has been the most important moderate alternative to both evolutionism and relativism, the radical opposites in the dialectic of nature and freedom.

[1] Pagden 2013; Pinker 2018. About the history of the Enlightenment and Romanticism, see Israel 2001, 2006, 2011; Gaukroger 2006, 2010, 2016; Cohen 2016.



 Chapter 1



Francis Bacon’s

experimental history




1.1. The meaning of the ideas

 of nature and of freedom




The early Enlightenment philosophy of nature started with Francis Bacon’s experimental history, soon followed by Galileo Galilei’s and René Descartes’ mechanicism. Later Enlightenment philosophers recognized their debt to Bacon and Descartes alike, but Galileo’s contribution to the early Enlightenment is generally underestimated if not overlooked by philosophers and historians – probably because his early attempt at reforming philosophy was nipped in the bud by conservative Italian Aristotelians.


At the beginning of the eighteenth century mechanicism lost its appeal, though during the nineteenth it revived, thanks to Immanuel Kant, who adapted it to rising Newtonianism. It gave way on the one hand to the radical, rationalistic Enlightenment inspired by Benedict Spinoza, and on the other hand to Isaac Newton’s dynamic and experimental philosophy, related to John Locke’s moderate, empiricist Enlightenment. Until the turn to Romanticism the Counter-Enlightenment was primarily scholastic Aristotelian, in philosophy and especially in theology.


The Enlightenment was preceded by the Renaissance, the early modern period. Scientists like Johannes Kepler, Galileo Galilei, Simon Stevin, and William Gilbert crossed the Rubicon, whereas Francis Bacon and René Descartes were Enlightenment philosophers from the start. One constant feature of both the Renaissance and the Enlightenment is their rejection of Aristotelianism as interpreted by Thomas Aquinas, which dominated European thought between the thirteenth and seventeenth centuries. Until the rise of scholasticism (the medieval method of critical thought by dialectical reasoning), Western Christian theology and philosophy were especially influenced by Augustine’s Neo-Platonism.


Only in the thirteenth century, after his works were translated into Latin, mainly from Islamic sources, did Aristotle become the most discussed philosopher of medieval Europe, especially at the university of Paris. In 1277 Bishop Stephen Tempier of Paris condemned 219 controversial philosophical and theological theses, such as Aristotle’s thesis that the world is eternal. This action induced fourteenth-century scholars like Jean Buridan and Nicole Oresme to act carefully in order not to provoke the church. Yet as precursors of Galileo they succeeded in developing a view of mechanics that would later undermine the Aristotelian world view.


The work of scholars like Thomas Aquinas made Aristotle’s rationalistic natural philosophy the reasoned foundation of theology. Natural philosophy or metaphysics contained the reasoning about the nature of God, several kinds of proofs of God’s existence, and apologetic treatises against Jews, Muslims, heretics, agnostics, and atheists. Philosophers and theologians realised that reasoning about God, defined as the immaterial personal first cause of the world,[1] could not lead to knowledge of God as the father of Jesus Christ. For this they appealed to Biblical revelation, in agreement with the then generally accepted view that knowledge of God has two sources, the book of nature and the book of God. The renunciation of Aristotelianism by the Renaissance and the Enlightenment had great consequences for theology. Contrary to the intention of the church reformers Martin Luther and John Calvin, Protestant theologians remained as faithful to scholasticism as their Catholic colleagues, colliding with new philosophical views.


As far as the Enlightenment can be characterized by the contrast between human freedom or autonomy on the one hand and the domination of nature on the other, it should be realized that the meaning of the ideas of both nature and freedom had shifted considerably since the Renaissance. Following Aristotle, scholastic thought discussed nature as the proper kind of everything existing, including humanity and the divine. Beginning with Francis Bacon, Enlightenment philosophy considered nature an entity distinguished from humanity and from God. Nature became the realm of natural things and events, of minerals, plants, and animals. In the natural philosophy of the Enlightenment humankind took position over and against nature, until Darwin included humanity in nature again. Francis Bacon stressed that the human dominion over nature is two-sided:[2] ‘Nature to be commanded must be obeyed.’[3]


Scholastic natural theology deliberated about the nature of God, His existence, and His attributes. Rationalistic philosophers argued that natural things, being entirely dependent on God, are completely passive. In physico-theology, experimental philosophy (chapter 5) distanced itself from this view in favour of the lawfulness of nature as a new proof of God’s existence and benevolence (chapter 6).




1.2. Experimental history




When the statesman Francis Bacon, Lord Verulam, died in 1626, Galileo Galilei and Isaac Beeckman were at their zenith, whereas René Descartes had just started his career. Bacon did not contribute significantly to science, but he became an influential philosopher, albeit only after his death. He did not adhere to mechanicist views, but preferred an empirical theory of discovery, called experimental history, with practical know-how, the knowledge derived from the crafts, as a unique source of science.


Bacon criticized the view of scientific method as exposed in Aristotle’s Organon. In Novum organum scientiarum (New instrument of science, 1620), he discussed four kinds of prejudices or bad habits of thought, which he called idols, occurring especially in scholastic natural philosophy.[4]


Bacon is considered an empiricist and an inductivist, in contrast to the mechanists, who were rationalistic deductivists, but he was not an experimental philosopher, as William Gilbert was and especially Isaac Newton would become. Instead he introduced a different form of natural philosophy, called ‘experimental history, the history of the arts and of nature as changed and altered by man’.[5] This concerns the study of experimental methods developed over the centuries in the arts and crafts, in the medical profession, and in alchemical laboratories. Consequently, in contrast to mechanicism, his philosophy stressed not motion but matter as its main theme.


Recently, the relevance of medieval alchemy for the rise of seventeenth-century science has been emphasized more than before.[6] Treatises written in this extra-scholastic tradition outside the universities contain recipes for the production of chemical substances. Bacon’s approach is related to natural history, the empirical investigation of plants and animals. However, his attention was directed not at nature as a source of knowledge, but at the technical know-how applied in many practices, in particular medical practice. Robert Boyle and Herman Boerhaave were inspired by Bacon’s experimental history, which dominated chemistry (but not physics) until the end of the eighteenth century.


Because he stressed the practical value of science, Bacon was critical of the theories developed in Nicholas Copernicus’ De revolutionibus (1543) and in William Gilbert’s De magnete (1600). He proposed the foundation of an international society of scholars to investigate nature. After Bacon’s death his views became quite popular all over Europe. At their foundation both the Royal Society in London (1662) and the Académie Royale des Sciences in Paris (1666) subscribed to his philosophy. In his introduction to the Encyclopédie ou dictionnaire raisonné des sciences, des arts et des métiers (28 volumes, 1751-1772), the pinnacle of eighteenth-century Enlightenment philosophy, Jean d’Alembert announced that this work would be written according to Francis Bacon’s ideas as interpreted by John Locke, although the later parts of the Encyclopédie would steer a different, much more radical course. In accordance with Bacon’s program of experimental history, the Encyclopédie was as much concerned with the arts and crafts as with the sciences. The taxonomic structure of both was inspired by Bacon’s Advancement of learning (1605).


Bacon’s views remained influential well into the nineteenth century, when he inspired William Whewell to write History of the inductive sciences (1837) and The philosophy of the inductive sciences, founded upon their history (1840).[7] However, inductivism became suspect in the eyes of twentieth-century deductivist philosophers like Karl Popper and his disciples (14.1).




1.3. Medieval development of technology




Francis Bacon supposed that science can only proceed from the practical knowledge, the know-how, available in the crafts and arts, in alchemy, and in medical practice. The question may arise whether such knowledge was really sufficiently available in his day. Indeed, the rise of science during the Renaissance and the Enlightenment was preceded and facilitated by the development of Western technology during the Middle Ages. The usual underestimation of medieval technology, including alchemy, is no doubt connected to the disdainful view of Renaissance philosophers toward the ‘dark’ Middle Ages; of the literate toward handiwork; and of Enlightenment philosophers toward the obscure practices of alchemical magicians and old-fashioned medical doctors.


In fact the start of Western technology and the break with developments outside Europe took place much earlier: in the eleventh century an important agrarian reform and its innovations brought a formerly unknown prosperity, witness the building of the glorious Gothic cathedrals and cloisters.[8] The inventions of paper (less expensive than papyrus or parchment) and book printing (movable type, circa 1450; block printing is much older) were far more peaceful and no less important than the armaments applied during the crusades, the Hundred Years’ War, and the religious wars. Without printed books the rise of natural science in the seventeenth century cannot be explained. Watermills met the demand for labour saving; at the end of the eleventh century more than 6,000 mills were present in England, as inventoried in the famous Domesday Book, a tax register (1086). The rudder, the compass, and other improvements in shipbuilding (with sailing ships instead of the hated galleys) allowed the emergence of commerce around the North Sea and the Baltic. Inventions like dykes, windmills, the curing of herring, and superior shipbuilding laid the foundation of the prosperity of the Low Countries in the fifteenth and sixteenth centuries and the emergence of the Dutch Republic.[9]


Many of these and other inventions, though they changed ordinary life radically, were already known in antiquity or were imported from outside Europe, where they were often treated as toys or curiosities. Apparently, only the Christian culture of Western Europe was able to make inventions practically applicable. In the twelfth century the Byzantine, Arabic, Indian, and Chinese civilizations were more advanced than the European one. In the thirteenth century the first four stagnated, whereas European culture overtook them, with technological progress as an important, perhaps decisive factor.[10]


In all sections of the population, the widely applied technology required a conscious and constant willingness to maintain and improve existing apparatus and to learn about it. This fostered a critical and inquisitive mindset. In this way, late-medieval technology contributed to the emergence of modern science in the seventeenth century, after people had liberated themselves from Aristotelian and other bookish views.[11] In contrast, medieval natural philosophy contributed next to nothing to technology or to science, as Bacon did not cease to emphasize. Nevertheless, science writers still tend to underestimate the relevance of technology for experiments and instrumental observation compared to the formation of theories.


Even before Bacon published his views, Galileo Galilei was inspired by Italian shipbuilding, architecture, and musical theory. Besides Italian artist-engineers like Leonardo da Vinci, Filippo Brunelleschi, and Michelangelo Buonarroti, in the Netherlands Simon Stevin, Cornelis Drebbel, Willebrord Snellius, Isaac Beeckman, Antoni van Leeuwenhoek, and Jan Swammerdam were raised in the crafts.[12] René Descartes and Christiaan Huygens maintained close contacts with instrument makers. Before them, Georgius Agricola’s De re metallica (1556) described the practice of mining, and Andreas Vesalius’ De humani corporis fabrica (1543) on the anatomy of the human body made Claudius Galenus’ views obsolete.


After Bacon, scientific research became more and more dependent on specially designed apparatus. Astronomical observations made with telescopes provided the ammunition for Galileo’s attack on Aristotelian philosophy and laid the foundation of Newton’s dynamic philosophy and theory of gravity. Biological research, promoted by the invention of the microscope, revealed many secrets of nature. The investigation of the void would have been impossible without Evangelista Torricelli’s tube and Robert Boyle’s air pump. The development of optics required the production of lenses and prisms. The skilful use of these appliances led to a new view of nature, aptly called experimental philosophy (chapter 5).


During the Renaissance and the Enlightenment, well until the middle of the nineteenth century, science remained indebted to technology. The development of the steam engine did not owe much to science, but it stimulated the development of thermodynamics in the nineteenth century. For their progress, experimental scientists were, are, and will always be strongly dependent on technical appliances.


Only after physics and chemistry shifted the focus of their research from the general laws of mechanics to the specific laws for electricity, magnetism, atoms, and molecules were these sciences able to promote the technical development of plastics, electric technology, electronics, and informatics. Technology accompanied by scientific research was first applied in the nineteenth-century chemical industry, electric technology, and electronics. Since then it has expanded to every kind of industry, making the experimental sciences dependent not only on technical appliances but also on technologists.


Therefore, Francis Bacon was right to call ‘experimental history, the history of the arts, and of nature as changed and altered by man’ the condition for the growth of natural science. Thereby he undermined the primacy of Greek theory over instrumental observation, measurement, and experiment.




[1] Rutten, de Ridder 2015, 13.

[2] Gaukroger 2001, chapter 6.

[3] Bacon 1620, 39, Aphorism 3.

[4] Bacon 1620, Book I, Aphorisms XXXIX-XLIV; Gaukroger 2001, 121-127.

[5] Dijksterhuis 1950, 442 (IV: 191); Gaukroger 2001, 7; Klein, Lefèvre 2007, 23.

[6] Yates 1964, 1972.

[7] Blake et al. 1960, chapter 9; Cohen 1994, 27-39.

[8] White 1962; 1978; Duby 1976; Eamon 1994; Landes 1998, chapter 4.

[9] De Vries, van der Woude 1995; Israel 1995.

[10] Landes 1998, chapter 3; Bala 2006.

[11] Dijksterhuis 1950; Hooykaas 1972; Landes 1983; 1998; Cohen 1994; Gaukroger 2006.

[12] Romein, Romein 1938-1940, 178-205, 451-469; Dijksterhuis 1950, 358-368 (IV, IIA-B).






 Chapter 2

Galileo as assayer


2.1. Galileo’s enlightened philosophy


Parallel to Francis Bacon’s experimental history, the Enlightenment started with mechanicism or mechanical philosophy, as it was later called.[1] Usually René Descartes is considered its founder after he published Discours de la méthode in 1637, but he was preceded by Galileo Galilei in 1623, as well as by Isaac Beeckman,[2] whose work, however, was not published until long after his death. This chapter describes how Galileo, as a member of the Accademia dei Lincei, almost succeeded in starting the Enlightenment by publishing Il saggiatore (The assayer, 1623) and Dialogo (1632). However, he was confronted with Aristotelian opposition, which nipped in the bud his attempt to introduce a mechanicist philosophy, at least in Southern Europe, for his views found fertile soil in the Northern Netherlands (chapter 3), where his Discorsi on mechanics was published in 1638. Meanwhile the conservative philosophers and theologians returned to the medieval practice of the double truth (2.3), confirming the dominance of Aristotelian philosophy in the practice of natural science.

The nucleus of the mechanist philosophy is its view of matter and motion. A new theory of matter and motion could only have a chance of success after Aristotle’s cosmology was abandoned. The absolute separation of the perfect celestial bodies from the imperfect terrestrial realm formed the heart of Aristotelian cosmology, such that Galileo considered it necessary to devote the whole First Day of his Dialogue to a devastating criticism of this distinction. In both realms he could now apply the same principles of explanation. Yet his views did not entirely come out of the blue. The discovery that the diagonal of a unit square cannot be expressed as a ratio of integers confronted the Pythagoreans (circa 500 BC) with the irreducibility of spatial to quantitative relations.[3] Next, Zeno of Elea (circa 450 BC) stumbled on the irreducibility of motion to quantitative and spatial relations in his analysis of some famous paradoxes. Galileo too considered motion to be sui generis, not to be explained, but to be used in explanations.[4] Explanation of motion by motion is a short expression for the acceptance of one or two principles of motion in order to explain other motions. It is not necessary to explain the primary or natural motions themselves, but these must be expressed in a mathematically simple and correct form, as motion at a constant speed or at a constant acceleration. The principle of inertia may be simply formulated as follows: a body to which no external force is applied moves of itself – it moves because it moves.

The principle of relativity, which Galileo was the first to investigate, implies that inert motion is a relation, irreducible to quantitative, spatial, and physical relations, although this was not immediately clear to everyone. Even the mechanicists often conceived of motion as a property of moving bodies, not as a relation between them. Because he stuck to the finiteness of the cosmos, Galileo still believed that non-accelerated motion could only be circular: a body moving without external impediments horizontally around the earth or any other celestial body, or turning around its own axis. Soon after his death his disciples corrected this into rectilinear uniform motion. Galileo introduced the uniformly accelerated motion of fall as a second and independent explanation, which Isaac Newton reduced to the physical action of a force. Aristotle too distinguished two natural motions: the circular motion of celestial bodies around the centre of the earth, and the motion towards the centre or away from it, for heavy and light bodies respectively. Galileo, however, recognized neither the distinction of terrestrial and celestial motions, nor that of heavy and light bodies. Moreover, he posited that circular motion occurs not only around the earth, but around any celestial body. Since Newton, the rotation of a body around its own axis is no longer considered an example of inert motion.

Galileo started his career as a Neo-Platonic Renaissance philosopher, but in Il saggiatore he presented the program of the emerging mechanicist philosophy, to reduce all physical phenomena to matter, quantity, shape, and motion: ‘whenever I conceive any material or corporeal substance, I immediately feel the need to think of it as bounded, and as having this or that shape; as being large or small in relation to other things, and in some specific place at any given time; as being in motion or at rest; as touching or not touching some other body; and as being one in number, or few, or many. From these conditions I cannot separate such a substance by any stretch of my imagination.’[5]

This became the nucleus of mechanicism. For instance, Galileo explained heat as motion of material particles: ‘I do not believe that in addition to shape, number, motion, penetration, and touch there is any other quality in fire corresponding to “heat”.’[6]

Following Giovanni Benedetti he explained sound as being caused by the periodic motion of a string, ‘the waves which are produced by the vibrations of a sonorous body, which spread through the air, bringing to the tympanum of the ear a stimulus which the mind translates into sound.’[7] Galileo distinguished objective from subjective properties, or primary from secondary qualities, as he called them: ‘To excite in us tastes, odours, and sounds I believe that nothing is required in external bodies except shapes, numbers, and slow or rapid movements. I think that if ears, tongues and noses were removed, shapes and numbers and motions would remain, but not odours or tastes or sounds.’[8]

For human experience this means the separation of the human subject, the internal experiencing mind, from the natural object, the external experienced nature. The dialectic of the free human intellect opposed to a nature determined by natural laws, later expanded by René Descartes, would become a hallmark of Enlightenment philosophy.[9]


2.2. Galileo and the Jesuits


From 1611 Galileo was a prominent member of the Accademia dei Lincei, the academy of the (sharp-sighted) lynxes, a small group of scientific and literary Roman intellectuals founded in 1603, guided by prince Federico Cesi and initially supported by pope Urban VIII. Cesi professed to be engaged in the destruction of the principal doctrines of the then dominant philosophy, the doctrine of ‘il maestro di color che sanno’[10] (the master of those who know, i.e. Aristotle, according to Dante). The members of the academy, called the Lincei, took an active part in the publication of Galileo’s Letters on the sunspots (including an appendix describing new observations of Jupiter’s satellites[11]), The assayer, and the Dialogue. As a centre of enlightened views the academy opposed the Collegio Romano of the Jesuits, who defended the Aristotelian philosophy as interpreted by Thomas Aquinas at all costs. In 1611 and 1613 the Jesuits were ordered by their general to ‘fall into line and present a common front behind Aquinas in theology and Aristotle in philosophy’.[12] After Cesi’s death in 1630 the pope changed his mind and the Jesuits got the upper hand, as Galileo would experience from 1633 till his death in 1642.

The central issue in the debate between the Lincei and the Jesuits became Copernicanism. During the sixteenth century no more than ten professional astronomers accepted Copernicanism, and no Catholic church authority or theologian took exception to it. (Only Martin Luther and some Lutherans openly rejected a sun-centred cosmology, whereas the Lutheran Johannes Kepler was its most prominent adherent.) This changed shortly after 1610, when Galileo openly approved of Nicholas Copernicus’ views of the moving earth (1543). Sidereus nuncius (message of the stars, or star messenger, 1610) describes Galileo’s first observations with a telescope, invented shortly before at Middelburg in the Netherlands. Galileo discovered mountains on the moon and Jupiter’s four satellites, confirming Copernicanism. In the same year he moved from Padua to Florence to become Philosopher and Mathematician to the court of the Grand Duke of Tuscany. There he discovered the phases of Venus (comparable to the phases of the moon), proving that Venus turns around the sun, not around the earth. In 1611 he visited Rome, where he was honourably received by pope Paul V and by the Jesuit cardinal Robert Bellarmine, as well as by the leading astronomer Christophorus Clavius and other Jesuits at the Collegio Romano.[13] Although the Jesuits confirmed and admired Galileo’s telescopic discoveries, they rejected the idea of the moving earth. They preferred Tycho Brahe’s system (1588), a compromise in which the sun and the moon turn around the stationary earth, while the other planets turn around the sun; it left aside the daily motion of the earth. The Jesuits established that Tycho’s system described Galileo’s discoveries as well as the Copernican system did. It could not explain Kepler’s planetary laws (1609), which, however, were ignored by both sides in the debate between Galileo and the Jesuits.


Growing conflict

Only after this triumphal tour to Rome did Galileo come into conflict with conservative scholars, though initially not on the issue of the moving earth, but on floating bodies. Galileo presented a theory deviating from Aristotelian views, but sustained by experiments.[14]

In 1615 Paolo Foscarini’s letter about ‘the Pythagorean and Copernican opinion concerning the mobility of the earth and the stability of the sun’ made some conservative Dominican scholars wonder, for the first time in history, whether the hypothesis of the moving earth was in conflict with biblical texts.

Shortly afterwards, Bellarmine wrote a friendly letter to Foscarini, equally intended for Galileo, expressing his instrumentalist opinion that the earth’s motion might be discussed as a logical possibility in a scientific debate. However, in view of the Bible and the teachings of the church fathers, nobody should hold this hypothesis to be true as long as there was no conclusive proof of the earth’s motion.[15]

In 1615, at the court of the Grand Duke of Tuscany, a discussion took place between several scholars about the relation between Copernicanism and the church’s doctrines, in particular about Galileo’s views on this subject. Not having attended this discussion, Galileo found it necessary to respond by means of an open letter to the Grand Duchess Christina, the Grand Duke’s mother.[16] Galileo wrote that the Bible does not intend to make statements about nature, but to relate religious truths. He believed ‘that the intention of the Holy Ghost is to teach us how one goes to heaven, not how heaven goes.’[17] What the Bible incidentally says about nature is accommodated to the comprehension of common readers, and therefore has less authority than statements based on sensory experience and reasoning. In this way, Galileo claimed the priority of natural science over theology concerning the study of nature. He even suggested that in case of conflict the theologians should carry the burden of proof: they would have to prove the scientists wrong.

At the end of the letter Galileo presented his own interpretation of Joshua’s miracle, halting the sun’s motion so that Israel’s battle against the Amorites could proceed.[18] Galileo argued that this passage can only be understood by accepting his theory that the earth’s motion is caused by the sun’s rotation around its axis. He implied that if the sun stands still (‘in mid heaven’, that is, ceases to rotate), the earth’s daily motion also ceases.[19]


First condemnation by the inquisition

In the Middle Ages the inquisition, an ecclesiastical court for the suppression of heresy, was a task of the Dominican order under supervision of the local bishop. The papal inquisition, the Holy Office, was established by pope Paul III in 1542, shortly before the Council of Trent; the Congregation of the Index followed in 1571. Both consisted of cardinals with a staff of theologians. The papal inquisition operated only in Italy (but not in Venice); Spain and Portugal had inquisitions of their own, in which heretics, Jews and Muslims were the targets.

Galileo’s and Foscarini’s letters elicited a relatively mild indictment. The Holy Office considered two Copernican theses: that the sun is unmoving at the centre of the universe; and that the earth is not at the centre of the universe, but moves both as a whole and with a diurnal motion. Without mentioning Galileo, the Inquisition gave as its opinion (not as a binding conclusion) that in a philosophical sense both theses are foolish and absurd. In a theological sense the first is formally heretical, and the second is at least erroneous in faith.[20] Shortly afterwards, the Congregation of the Index suspended circulation of Copernicus’ De revolutionibus until a few corrections were issued in 1620.[21] Simultaneously some other books were prohibited, but none of them written by Galileo.

Cardinal Bellarmine was instructed to serve Galileo a warning, which he delivered orally. It is not known what exactly happened on this occasion.[22] An unsigned report survives, according to which Galileo was admonished not to teach the earth’s motion, but Galileo denied having seen this report before his trial in 1633. Instead he received a letter from Bellarmine, confirming that Galileo had not been condemned, but advising him not to teach Copernicus’ theory as being true.[23] Galileo took this as permission to discuss it as a hypothesis, but also as an interdiction to connect Copernicanism with the Bible. He scrupulously stuck to this, for instance in Il Saggiatore. In his Dialogue Galileo only once refers to the Bible, in order to criticize an author who used biblical arguments ‘in a scandalous way’.[24] Bellarmine’s letter to Galileo was initially not known to the pope or to the Inquisition, until Galileo himself showed it, to his own disadvantage.


Il Saggiatore (1623) and Dialogo (1632)

After this episode, Galileo kept quiet for some time. In a heated discussion with the prominent Jesuit mathematician, astronomer, and architect Orazio Grassi, lasting from 1618 to 1623 and concerning the appearance of three comets in 1618, Galileo initially hid behind his former disciple Mario Guiducci, and moreover took pains not to defend the Copernican doctrine.[25] Only in Il Saggiatore did he openly attack and ridicule the Jesuits’ adherence to Tycho Brahe’s cosmological system.[26] Ten years before, Galileo had already annoyed the Jesuits in a long struggle with Christoph Scheiner about the observation and interpretation of the sunspots.[27]

In the same year, 1623, his Florentine friend cardinal Maffeo Barberini became pope Urban VIII, and Galileo decided to dedicate Il Saggiatore to him. The pope encouraged Galileo to compose a dialogue on the systems of Ptolemy and Copernicus. The book received the title Dialogo sopra i due massimi sistemi del mondo (Dialogue on the two most important systems of the world), though Galileo had intended to call it Dialogue on the tides. It is a theatrical discussion taking place at Venice over four days. The fourth day treats the theory of ebb and flood, which according to Galileo definitively proved that the earth performs a double motion. The censors, however, would not allow the intended title.

The pope insisted that Galileo conclude his book in an instrumentalist way. Galileo put the papal statement in the mouth of the Aristotelian Simplicio: ‘I do not therefore consider them true and conclusive; indeed, keeping always before my mind’s eye a most solid doctrine that I once heard from a most eminent and learned person, and before which one must fall silent, I know that if asked whether God in his infinite power and wisdom could have conferred upon the watery element its observed reciprocating motion using some other means than moving its containing vessels, both of you would reply that He could have, and that He would have known how to do this in many ways which are unthinkable to our minds. From this I forthwith conclude that, this being so, it would be excessive boldness for anyone to limit and restrict the Divine power and wisdom to some particular fancy of his own.’[28]

By putting this statement in the mouth of Simplicio, Galileo unintentionally offended the pope, as was duly stressed by the Jesuits advising him. Representing the standard Aristotelian views criticised by Galileo, Simplicio was named after the sixth-century philosopher Simplicius, but the name was easily associated with a simpleton, and in the Dialogue Simplicio was always wrong.

Galileo pretended to keep his agreement to present an impartial discussion of the two theories, but he admitted: ‘I have taken the Copernican side in the discourse, proceeding as with a pure mathematical hypothesis and striving by every artifice to represent it as superior to supposing the earth motionless – not, indeed, absolutely, but as against the arguments of some professed Peripatetics.’[29] Yet the book’s critique was directed not at Ptolemy, but at Aristotle. Once more, Galileo insulted the Jesuits by completely ignoring Tycho Brahe’s system, their favourite compromise.


Second condemnation by the inquisition (1633)

Despite the approbation by the Roman and Florentine censors, all three of them Florentine Dominicans and friends of Galileo’s, the book was prohibited immediately after the first copies reached Rome. Galileo was summoned before the papal Inquisition, which had enough reasons to condemn him, but remarkably few juridically valid ones.[30] After all, the book had passed the censors. The background and the contents of Galileo’s first encounter with the Inquisition in 1616 were obscure, because, except for Galileo, all people involved had died. However, the documents interpreted as admonishing Galileo not to teach Copernicanism in any way convinced the pope that Galileo, by not telling him about their existence, had betrayed his trust.

Three of the ten cardinals constituting the court finally refused to sign the verdict, but the critical political situation of the Pontifical State in the Thirty Years’ War required pope Urban VIII to take a firm stand. Moreover, he was personally hurt, and ill-advised by some Jesuits involved in the above-mentioned struggle with the enlightened Accademia dei Lincei. Galileo received an unexpectedly harsh punishment. It consisted of the public recantation of his Copernican views; the interdiction of his book; the prohibition to publish anything new; and lifelong imprisonment, soon changed into confinement to his own home.

One might easily get the impression that the Galileo affair was mainly a matter of injured personalities, but much more was at stake. It became part of the smouldering conflict between traditional scholastic Aristotelianism and the emerging new philosophy, later to be called the Enlightenment. In this conflict Copernicanism acted as a dividing shibboleth. With few exceptions, Enlightenment philosophers were Copernicans, and counter-Enlightenment philosophers and theologians were not. The propagation of Copernicanism is one of Galileo’s lasting contributions to the age of reason.


Galileo heretic?

Because of his supposed atomistic views, Galileo was also suspected of heresy, which he denied so vehemently that the charge was dropped from the accusation.[31] This assumed heresy concerned the dogma of transubstantiation, established by the Council of Trent (1545-1563) after a centuries-long discussion: the change of bread and wine into the body and blood of Jesus Christ after the consecration by a Catholic priest. In line with Aristotelian hylemorphism (the view that any substance is a union of matter and form) as interpreted by Thomas Aquinas, this meant that the substance or essence of bread and wine changed miraculously into the substance of Jesus’ body and blood, without changing the outward appearance of bread and wine. Martin Luther proposed that after the consecration both substances were simultaneously present, which he called consubstantiation. In contrast, John Calvin considered the presence of Christ in the sacrament of the Last Supper a matter of faith, signifying and sealing God’s covenant with his people, and not in need of a natural theological explanation:

‘Now, should any one ask me as to the mode, I will not be ashamed to confess that it is too high a mystery either for my mind to comprehend or my words to express; and to speak more plainly, I rather feel than understand it.’[32]

Calvin’s views found a place in the Consensus Tigurensis (Zürich agreement, 1549), sooner or later uniting all Protestants on the Last Supper except the Lutherans.

Galileo and Descartes were both corpuscularians assuming that matter consisted of particles. However, they were neither atomists in the classical sense, believing that atoms are unchangeable and indivisible, moving in a void, nor in the eighteenth-century sense of accepting that atoms have specific properties besides their shape, extension, and impenetrability (chapter 8). The atomism they were accused of concerned the nominalist view that ‘tastes, odours, colours, and so on are no more than mere names so far as the object in which we place them is concerned, and that they reside only in the consciousness’.[33]

According to the Jesuits this contradicted the Tridentine dogma of transubstantiation, in which the ‘accidents’, the sensible properties of bread and wine, remain unchanged, whereas their substance does not.[34] Therefore, after 1633 Galileo took care to dissociate himself from his corpuscularian views as expressed in Il Saggiatore, although this book played no part in his condemnation, and it was never put on the Index of prohibited books; after all, it was dedicated to pope Urban VIII, who had lauded it after its appearance.


Discorsi (1638)

After the trial of 1633, Galileo concentrated on writing and publishing his final work, Discorsi e dimostrazioni matematiche, intorno à due nuoue scienze (Discourses and mathematical demonstrations concerning two new sciences), surreptitiously published at Leiden in 1638. In this book Galileo laid the foundation of a new mechanics. As the science of motion, largely developed when he lived at Padua (1592-1610), it must be distinguished from mechanicism, the philosophy making the science of mechanics the core of the explanation of natural phenomena.

Galileo recognized two fundamental or natural motions: uniform circular motion (at constant speed), and the uniformly accelerated motion of free fall (at constant acceleration). Both occur without external cause, and both are idealized states. On the third day of the Discorsi, entitled ‘Change of position – De motu locali’, he introduced them carefully by the axiomatic method, starting with the words: ‘My purpose is to set forth a very new science dealing with a very ancient subject. There is, in nature, perhaps nothing older than motion, concerning which the books written by philosophers are neither few nor small; nevertheless I have discovered by experiment some properties of it which are worth knowing and which have not hitherto been either observed or demonstrated.’[35]

Galileo’s experiments were usually thought experiments. Only his law of fall was actually found experimentally, with marbles rolling down an inclined plane.[36] Galileo used the concept of force only in static situations of equilibrium. He never related force to motion, not even to gravity, as Johann Kepler and Isaac Newton would do (chapter 4).
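The content of the law of fall can be illustrated with a minimal modern sketch (the formula, variable names, and use of a computer are of course mine, not Galileo’s): for motion from rest under constant acceleration the distance covered grows with the square of the elapsed time, so the distances traversed in successive equal time intervals stand in the ratio 1 : 3 : 5 : 7, the odd-number rule that Galileo verified on the inclined plane.

```python
# Sketch of Galileo's law of fall: starting from rest under constant
# acceleration a, the distance covered after time t is d = a * t^2 / 2,
# so the distances covered in successive equal time intervals stand in
# the odd-number ratio 1 : 3 : 5 : 7.

def distance(a: float, t: float) -> float:
    """Distance covered from rest under constant acceleration a after time t."""
    return 0.5 * a * t ** 2

a = 2.0  # chosen so that d = t^2 exactly; any positive value yields the same ratios
positions = [distance(a, t) for t in range(5)]          # marks at t = 0, 1, 2, 3, 4
intervals = [later - earlier for earlier, later in zip(positions, positions[1:])]
ratios = [interval / intervals[0] for interval in intervals]
print(ratios)  # → [1.0, 3.0, 5.0, 7.0]
```

The square law and the odd-number rule are equivalent; on an inclined plane the acceleration is smaller than in free fall, but the ratios are the same, which is what made the slowed-down experiment decisive.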



For the development of science, Galileo’s condemnation did not have many negative results. As a matter of course, Protestants did not care about the views of the Inquisition, and even Catholic scientists often paid little attention to clerical statements.[37] His mechanics soon became incorporated in Enlightenment philosophy.

The consequences were severest for the Jesuits, the most faithful sons of the church. Until 1824, when the ban on Copernicus’ and Galileo’s works was lifted, the Jesuit schools had to teach Tycho Brahe’s system, which nobody apart from the Jesuits took seriously. Blaise Pascal, a Catholic but an adversary of the Jesuits, wrote in his Provincial letters: ‘In vain have you obtained a decree of Rome against Galileo, for this will not prove that the earth stands still. If reliable observations were available that it turns around, then all people together could not prevent it from rotating, nor prevent themselves from rotating with it.’[38]

Most scientists soon converted to the Copernican views. Outside science, theologians in particular, both Catholic and Protestant, remained averse to Copernicanism well into the eighteenth century. In 1633 the Catholic church acquired the image of an enemy of science. In the age of Enlightenment this picture was strongly reinforced, both by anticlerical currents and by the clerical opposition against Darwinism. However, this image is wrong and undeserved.[39] Since the beginning of the Middle Ages the Christian church has more often promoted than opposed science, and one or two counterexamples like the Galileo affair are more than compensated by the predominantly positive attitude of leaders of the church with respect to science and learning. The most severe opposition to modern views did not originate from the church as an institution, but from conservative scientists, philosophers and theologians.

However, even at the end of the twentieth century, the Vatican failed in an attempt to rehabilitate Galileo unambiguously.[40] Considering itself infallible in matters of faith, the church could not afford to admit having made a mistake.


2.3. The double truth


In a historical perspective, the Galileo affair did not only concern the question of the truth about the terrestrial motion, but first of all a question of authority. Arguing that the motion of the earth could be derived by natural means, from perception and reasoning, Galileo demanded the right for scientists to decide for themselves by what methods to arrive at the truth about nature. They were no longer prepared to subject themselves to a theory sanctioned by the church, such as one derived from Plato or Aristotle, but wanted to devise their own theories subject to another authority, to wit instrumental observation, measurement, and experiment.

The Inquisition argued that the immobility of the earth was a matter of faith, in which only the church could exert authority. Since Galileo, scientists have rejected the priority of theology in the study of nature. In due course they also came into conflict with philosophers, with the same result. This implied the rejection of the instrumentalist practice of the double truth, suggested by Andreas Osiander (in an anonymous preface to Copernicus’ De revolutionibus), Robert Bellarmine, and pope Urban VIII, as well as by remarkably many modern philosophers of science.

As a true Catholic instrumentalist, Pierre Duhem still defended it even in the twentieth century: ‘Despite Kepler and Galileo, we believe today, with Osiander and Bellarmine, that the hypotheses of physics are mere mathematical contrivances devised for the purpose of saving the phenomena. But thanks to Kepler and Galileo, we now require that they save all the phenomena of the inanimate universe together.’[41]


Athens and Alexandria

This practice started with the distinction between Athenian physics and Alexandrian astronomy. Plato argued that the perfect celestial bodies can only move in circles around the earth as their centre, at constant speed. Eudoxus thought that they were attached to transparent crystalline spheres, a model Aristotle elaborated even further. It worked very well for the daily motion of the large majority of celestial bodies, the ‘fixed stars’ on the outermost sphere, but the observation of the seven planets or wandering stars presented many problems. A celestial body fixed to a sphere turning around the earth should always appear equally bright, which Mars, for instance, does not. Moreover, five of the seven planets occasionally reverse their path in a retrograde motion (when Mars, Jupiter and Saturn are at their brightest). This occurs whenever their position is opposite to the sun’s, a property Copernicus used to demonstrate that the apparent retrograde motion of the planets is in fact a projection (a parallax) of the real annual motion of the earth. It enabled him to calculate the relative distances of these five planets to the sun. In order to explain why the fixed stars do not show any parallax, he had to assume that their distance is much larger than that of Saturn, which was confirmed by measurements about three centuries later.
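How relative distances follow from observed angles is easiest to see for an inner planet. The following sketch is a modern reconstruction, not Copernicus’ own computation, and the 46° figure is a rounded modern value for Venus’ greatest elongation: at that moment the line of sight from the earth is tangent to Venus’ orbit, so the sun-Venus-earth triangle has a right angle at Venus.

```python
import math

# At greatest elongation E, the earth-Venus line is tangent to Venus' orbit,
# giving a right angle at Venus in the sun-Venus-earth triangle. Hence Venus'
# distance from the sun, in units of the earth-sun distance, equals sin(E).
greatest_elongation_deg = 46.0  # roughly Venus' maximum angular distance from the sun

r_venus = math.sin(math.radians(greatest_elongation_deg))
print(round(r_venus, 2))  # → 0.72 (modern value: about 0.72 astronomical units)
```

No telescope is needed for this argument, only an angle measurement and the heliocentric assumption; that is why Copernicus could already order the planets by distance in 1543.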

Claudius Ptolemaeus’ planetary model (circa 150), building on insights of earlier mathematicians from Alexandria, was intended to describe the observed motions of the wandering stars mathematically. It did not consist of homocentric spheres, but of heterocentric circles. He calculated the motion of each planet separately. His mathematical models were not intended to explain the so-called inequalities (deviations from uniform circular motion), but to calculate them, in order to allow astrological predictions, which during the Renaissance were increasingly applied in medicine. In contrast to Aristotle’s realistic physics, mathematical astronomy was always interpreted in an instrumentalist sense. The many circles and epicycles were not understood realistically; they were not considered to represent the real state of affairs. A faithful disciple of Aristotle, the twelfth-century Arab philosopher Averroes commented: ‘The Ptolemaic astronomy is nothing so far as existence is concerned; but it is convenient for computing the non-existent.’[42]

The Athenian philosophers preferred the system of crystalline spheres with the earth at their centre. Belonging to physics, the homocentric spherical system of Eudoxus and Aristotle was considered by many to be a true and sufficient explanation of the cosmos. Ptolemy’s system of heterocentric circles, being a part of mathematical astronomy, was merely a useful instrument to make calculations, ‘to save the phenomena’.[43] Aristotelian physics stated essential truths about celestial motion, whereas Ptolemy’s astronomy better fitted observations. Most of the time the two theories could peacefully coexist, but occasionally, conflicts between enthusiastic partisans of the two truths could not fail to occur.[44]


The rise of Aristotle in Western Europe

In the twelfth and thirteenth centuries, translations into Latin of the works of Aristotle, Ptolemy, and others became available in Europe, together with Arab comments. These manuscripts were eagerly studied at the universities, but they contained many views contradicting Christian doctrines, giving rise to conflicts with the church. For instance, Aristotle taught the cosmos to be eternal and unchangeable, which clearly contradicts the Christian idea of creation. For this reason, during the early Middle Ages Aristotle was less popular than Plato, who in his dialogue Timaeus introduced the Demiurge, a divine craftsman creating the visible world according to eternal ideas.[45]

Only in the thirteenth century did Aristotle become the most important philosopher, in particular at the university of Paris. The work of scholars like Thomas Aquinas led to a synthesis of official theology with Aristotelian philosophy, including its physics. Since then, philosophy and physics as part of natural theology were taught in the theological faculty of the medieval universities, whereas astronomy, as one of the seven artes liberales, belonged to the preparatory faculty of arts. The students and masters of liberal arts were free to discuss their views on natural affairs, provided they did not pretend these to be true. Jean Buridan and Nicole Oresme in the fourteenth century and Nicholas of Cusa in the fifteenth discussed Aristotle’s On the heavens, contemplating the logical possibility of a daily motion of the earth.[46] They never considered the annual motion of the earth around the sun.[47] Because this is the most important feature of Copernicus’ theory, it is not tenable to consider, for example, Oresme a precursor of Copernicus.

When the question arose whether the earth really moves, Buridan wrote: ‘I do not say this affirmatively, but I shall ask the lords theologians to teach me how they think that these things happen.’ Oresme doubted the distinction between celestial and terrestrial matter, and presented many arguments in favour of the daily motion of the earth. Nevertheless, in the end he rejected its reality: ‘And yet all people, myself included, believe that the heavens move, and not the earth: Thou hast fixed the earth immovable and firm.’ (Psalm 93:1)[48]

In general, in their comments on Aristotle’s works, the medieval scholastics did not question Aristotle’s views, but they investigated his proofs. Thus, Oresme argued that Aristotle’s proof of the immobility of the earth is logically wanting, but he did not really doubt it. The earth’s motion, being contrary to Aristotelian cosmology and biblical texts, was considered at most as an astronomical possibility, but never as a physical reality.

The clerical practice of the double truth provided the medieval scholars with a margin within which they were free to investigate and discuss anything, if only they ultimately submitted themselves to the authority of the church.[49] But at the close of the Middle Ages, the authority of the church waned. With the Renaissance and the Reformation people demanded the right for themselves to decide what is true or false. In science, evidence obtained by observation, measurement and experiment became more important than the authority of Aristotle. Even so, before Christopher Columbus in 1492 demonstrated that classical geography was hopelessly out of date, the primary objective of Renaissance intellectuals was to recover the lost culture of the past.[50]


Tycho Brahe and Galileo Galilei

In the sixteenth century, Tycho Brahe’s observations undermined the generally accepted theory of the heavens.[51] First he showed that the bright new star (stella nova, actually a supernova, an exploding star) of 1572 occurred far beyond the sphere of the moon, thereby discrediting the conviction that the starry heaven is perfect and therefore unalterable. Five years later he proved that comets, too, are beyond the moon, having quite non-circular orbits. This refuted Aristotle’s view that a comet is a sublunar fiery atmospheric phenomenon (because comets come into existence and disappear), as well as the Platonic doctrine that celestial bodies necessarily move uniformly in a circular orbit. Moreover, Tycho shed doubt on the reality of Eudoxus’ solid crystalline spheres, which the comets appeared to cross without any hindrance. The final blow to the realistic theory of perfect celestial bodies and their motion was delivered by Galileo’s discoveries of the mountains on the moon, of Jupiter’s satellites, and of the phases of Venus, as well as by the investigation of the sunspots. Whereas Galileo defended the Copernican model, he rejected its instrumentalist interpretation.

When Galileo criticized Aristotle’s realistic physics and, based on experience, claimed the truth for his Copernican view of the motion of the earth, he refused to take recourse to the double truth, and he ascribed the pope’s pronouncement to the Aristotelian Simplicio. Thereby he collided with natural theology in a way the church could not tolerate.


Johannes Kepler

Johannes Kepler, too, rejected the practice of the double truth. Kepler broke away from both Claudius Ptolemy’s and Nicholas Copernicus’ fundamental idea of uniform circular motion, the Platonic dogma about the motion of celestial bodies. Kepler’s first two laws, published in Astronomia nova (1609), proclaimed that planetary orbits are not circular but elliptic, and that planetary motion is not uniform, its speed varying according to the area law. He did not base these laws on a rational analysis but on careful observations made by Tycho Brahe, supported by mathematical calculations. It is therefore not surprising that mechanists like Galileo Galilei, René Descartes,[52] and Christiaan Huygens rejected his results and held to uniform circular motion. Descartes admitted that the planetary orbits are not circular, but like Galileo he did not accept Kepler’s laws, because these did not fit the mechanist programme of explaining motion by motion.
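In modern notation (not Kepler’s own), the two laws of Astronomia nova can be stated compactly:

```latex
% First law: the orbit is an ellipse with the sun at one focus.
% In polar coordinates (r, \theta) centred on the sun:
r(\theta) = \frac{a(1 - e^2)}{1 + e\cos\theta}, \qquad 0 \le e < 1,
% where a is the semi-major axis and e the eccentricity; e = 0 recovers
% the uniform circular motion that Galileo, Descartes, and Huygens retained.

% Second law (the area law): the radius vector from the sun to the planet
% sweeps out equal areas in equal times:
\frac{\mathrm{d}A}{\mathrm{d}t} = \tfrac{1}{2}\, r^2 \frac{\mathrm{d}\theta}{\mathrm{d}t} = \text{constant}.
```

The second law quantifies the non-uniformity: the planet moves fastest near perihelion, where r is smallest, which is exactly the deviation from uniform circular motion that demanded a dynamical, not merely kinematical, explanation.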

Kepler realized that if planetary motion deviates from uniform circular motion it needs a dynamic explanation. He was the first to assume that the sun is the physical cause of planetary motion. Kepler’s main work was significantly entitled Astronomia nova seu physica coelestis (New astronomy or celestial physics), an implicit rejection of the practice of double truth.


René Descartes and Isaac Newton

Only Descartes still made use of the double truth, to hide his true feelings about Copernicanism from the church. He distinguished metaphysical truth from scientific hypotheses that may be false, but are useful to save the phenomena. Isaac Newton’s sharp statement ‘hypotheses non fingo’ (I do not feign hypotheses) was directed against this practice. In his Queries, added to Opticks (1704) and extended in later editions, he proposed several hypotheses himself, but with an altogether different intention, namely to suggest further investigation.

This did not prevent later enlightened positivists from propagating instrumentalism whenever new scientific insights (such as the existence of atoms) did not fit their prejudices.


[1] Wootton 2015, 433-434, 441.

[2] Van Berkel 1983; Gaukroger 1995, chapter 3; Wootton 2015, 363-364.

[3] Stafleu 2016, 3.1.

[4] Stafleu 2016, 3.3.

[5] Galileo 1623, 274.

[6] Galileo 1623, 277-278.

[7] Galileo 1638, 98-99; see Drake 1970, chapter 2.

[8] Galileo 1623, 276-277. Compare Plato, Timaeus, 1186-1192.

[9] Wootton 2010, 167; Dooyeweerd 1953-1958, I, part II, 169-495: ‘The development of the basic antinomy in the cosmonomic idea of humanistic immanence-philosophy’.

[10] Wootton 2010, 33.

[11] Drake 1990, chapter 10.

[12] Moss 1993, 125.

[13] Drake (ed.) 1957, 75.

[14] Drake (ed.) 1957, 79-81.

[15] Drake (ed.) 1957, 162-164.

[16] Galileo 1615 was not printed before 1636; Drake (ed.) 1957, 145-171; Moss 1993, 190-211.

[17] Galileo 1615, 186; Kepler 1609, 29-33; Drake (ed.) 1957, 169, 181-184.

[18] Joshua, chapter 10.

[19] Galileo 1615, 212-215.

[20] Finocchiaro 1989, 146; Duhem 1908, 95-96.

[21] Finocchiaro 1989, 30, 148-150.

[22] Drake 1978, 252-256; Finocchiaro 1989, 147-148.

[23] Finocchiaro 1989, 153. In 1633,

[24] Galileo 1632, 357-358.

[25] Drake 1978, 264-288.

[26] Galileo 1623; Drake 1990, chapter 12.

[27] Galileo 1613.

[28] Galileo 1632, 464; Finocchiaro 1980, 8-12.

[29] Galileo 1632, 5-6.

[30] de Santillana 1955; Drake 1978, 341-352; Finocchiaro 1989; 2005; McMullin (ed.) 2005.  For the (partial) text of the indictment, see note to page 103 of Galileo 1632, and for the text of Galileo’s abjuration, see Drake’s introduction to Galileo 1632, xxiv-xxv; Shea 1986, 131.

[31] Redondi 1983; Moss 1993, 253-259; Wootton 2010, 168.

[32] Calvin 1559, book IV, chapter 17, section 32.

[33] Galileo 1623, 274.

[34] Redondi 1983, chapter 7.

[35] Galileo 1638, 153.

[36] Drake 1978, 84-104.

[37] Koyré 1961, 471; Ashworth 1986.

[38] Pascal 1663, 467 (my translation).

[39] See Lindberg, Numbers (eds.) 1986.

[40] Finocchiaro 2005; McMullin (ed.) 2005; Heilbron 2010, 362-365.

[41] Duhem 1908, 117.

[42] Koestler 1959, 209; Rosen 1984, chapter 3.

[43] Dijksterhuis 1950, 57-66.

[44] Dijksterhuis 1950, 230-237 (II: 141-148); Duhem 1908, chapters 2-4.

[45] Plato, Timaeus, 1161 ff.

[46] Dijksterhuis 1950, 237-241, 254-256; Hooykaas 1971, 75-79; Kuhn 1957, 114-122; Toulmin, Goodfield 1961, 165-169; Grant 2001, 200-201.  

[47] Dijksterhuis 1950, 237-241, 254-256 (II: 149-151, II: 12-13); Hooykaas 1971, 75-79; Kuhn 1957, 114-122; Toulmin, Goodfield 1961, 165-169; Grant 2001, 200-201.

[48] For both quotes in this paragraph, see Hooykaas 1971, 77-79. Oresme refers to Psalm 93:1.

[49] Dijksterhuis 1950, 185, 186 (II:100).

[50] Wootton 2015, 73.

[51] Wootton 2015, 187-194.

[52] Descartes 1647, 117.






Chapter 3

Mechanical philosophy in the Dutch Republic




3.1. René Descartes: founder of modernism




During its Golden Age the Dutch Republic was not only the richest and most powerful country of Europe; a constitutional state with the largest civil liberty then possible; a centre of science and of an innovative art of painting; a tolerant Calvinist country; an asylum for Jews and Huguenots; the first modern economy; and a colonial empire; but it was also a refuge where the early Enlightenment could flourish after its false start in Italy.


Chapter 3 discusses successively the mechanicist philosophy of René Descartes, the moderate mechanicism of Christiaan Huygens, and the radical Enlightenment philosophy of Benedict Spinoza. The ground was prepared by Desiderius Erasmus, Simon Stevin, Isaac Beeckman, Willebrord Snel, Hugo Grotius, and several other scholars.


Between 1628 and 1650, living at several places in the northern Netherlands, René Descartes or Cartesius was the leading philosopher of the Enlightenment, well known for his methodical doubt. His cogito ergo sum: I doubt (or: I think), hence I am,[1] became the motto of modern rationalist philosophy, later called modernism. Descartes conquered his doubt by showing that he could not doubt his own existence. By doubting, by being uncertain, man is imperfect. From this imperfection he concluded that he could not doubt the existence of a perfect being, God.[2] Being perfect, God will not deceive someone having a clear and distinct idea.[3] God warrants the existence of anything which can be perceived claire et distincte (clearly and distinctly) as evidently true.[4] In this way, God’s existence became a stage in Descartes’ progress towards science through the methodical ordering of evident insights.[5] The truth of clear and distinct ideas such as Euclid’s axioms is warranted by God. In turn, the fact that we have such ideas is proof of the existence of God as a perfect rational being, transcending everything except logic and mathematics. Like Thomas Aquinas, Descartes stressed that this does not include knowledge of Jesus Christ, for which independent revelation is required. He arrived at what is now called ‘theism’, the rationalist attempt to ground Christian and other religions in reason as their common denominator. Enlightenment philosophers would soon replace it by pantheism, deism, or agnosticism, and ultimately by atheism.


Of course, this reasoning falters if it depends on Descartes’ subjective doubt. His Discours de la méthode (Discourse on the method of rightly conducting one’s reason and of seeking truth in the sciences, Leiden 1637) starts with the statement: ‘Good sense is mankind’s most equitably divided endowment, for everyone thinks that he is abundantly provided with it.’[6] Only if his methodical doubt had a universal character, and every right-minded person agreed with his train of thought, would he be able to construct the world rationally. In Méditations touchant la première philosophie (Meditations on first philosophy, 1641) he took perfection as an a priori argument for God’s existence. Its subtitle (possibly added by the printer) is ‘in which the existence of God and the immortality of the soul (or alternatively: the distinction of the human soul and the body) are demonstrated.’ In the Aristotelian tradition metaphysics is the first philosophy, followed by physics. Aristotle himself did not make this distinction.


The argument from perfection, since Immanuel Kant also called the ontological argument for God’s existence, has been widely discussed since Anselm of Canterbury in Proslogion (1078) defined God as ‘a greatest being having such attributes that nothing greater could exist.’ For Descartes this implies God’s transcendence: God cannot be immanent in nature. In the same meditations Descartes argues that the only effect of the alternative argument, that of causality, is his idea of God. The proof from causality (that each cause has a cause itself, until the first cause is reached) presupposes something different from God, whereas the argument from perfection does not need this premise.


However, man can only construct the world if he is free, if he stands opposite nature. Therefore Descartes made a sharp division between body and mind. ‘Reason is the only thing which makes us human and distinguishes us from the animals.’[7] Descartes compares the body with a machine, and states that it is the use of language which makes man different from machines. For Descartes, using language means having reason.[8]


Descartes divided created reality into res extensa (extended being) and res cogitans (thinking being). Res extensa is the objective physical world, determined by natural laws, essentially extension identified with matter. Res cogitans is the subjective mental world, whose essence is thought, the human mind.[9] Descartes was more certain about his thought, his mind, than about his body.[10] Initially he was uncertain about how these two could interact, but shortly before his death he suggested that the two worlds interact via the pineal gland (situated near the centre of the human brain, between its two hemispheres), the ‘principal seat of the soul’,[11] the source of ‘clear and distinct ideas’. In an individual living person, the mind is able to act on the body, to perceive, to remember, and to judge; it has a free will and is responsible for its deeds. Whereas matter is completely inert, the embodied soul is quite active, although completely dependent on God. After death the mind exists in a disembodied form as an immortal soul, preserving the person’s identity. For this Thomas Aquinas had assumed that the resurrection of the body is required, but like most of his contemporaries, Descartes was not much concerned with this doctrine.[12] In a private letter of condolence, Descartes writes: ‘... those who die pass to a sweeter and more tranquil life than ours ... We shall go to find them some day, and we shall still remember the past; for I believe we have an intellectual memory which is certainly independent of the body.’[13]


Unfortunately, whereas he is quite clear about corporeal memory, he never explains the concept of intellectual memory.


Descartes discusses three kinds of relations: between God and the human mind; between God and corporeal nature; and between body and mind.[14] Both nature and mind are completely dependent on God. This implies that they have no capacity of self-preservation: God continually recreates both body and mind at each instant.


Because Descartes’ God fully transcends the natural world, the study of nature will not lead us to knowledge of the divine. Nature and the supernatural realm are firmly separated. Therefore most Cartesians did not adhere to any kind of physico-theology (13.3).[15]


In the Netherlands, Cartesian theologians such as Johannes Coccejus argued that because the Bible is written in vulgar language based on common sense, it should be interpreted according to clear and distinct ideas. They met with strong resistance from their Aristotelian colleagues, among whom Gijsbert Voet (Gisbertus Voetius) at Utrecht, who rejected Copernicanism and Cartesian mechanism alike.[16]


After Descartes’ death in 1650 his philosophy met with much criticism from the Aristotelian counter-Enlightenment. Protestant theologians in the Netherlands and abroad prohibited the teaching of his works. The pope banned them in 1663, especially because Descartes’ vision of matter collided with the doctrine of transubstantiation (2.2).[17] About 1678 Louis XIV ordered that only Aristotelian science was admitted in French education. Members of the Académie des Sciences, among whom Christiaan Huygens, were not allowed to discuss philosophical problems. Nicolas Malebranche’s books, published in the Dutch Republic, were prohibited in France. Antoine Arnauld, who was both a Cartesian and a Jansenist, was forced to flee to Holland, just like the Huguenot Cartesian Pierre Bayle, the influential editor of Nouvelles de la république des lettres (1684-1687) and author of Dictionnaire historique et critique (1697), a precursor of d’Alembert’s and Diderot’s Encyclopédie.




3.2. Descartes’ natural philosophy


After Galileo Galilei, René Descartes became the main founder of mechanical philosophy,[18] attempting to reduce macroscopic natural phenomena to microscopic ones, to be explained by matter, quantity, shape, and motion. The transfer of motion could only occur by impact between material particles. Because he identified matter with spatial extension, he assumed that these particles are mutually impenetrable. Phenomena that could not be reduced in this mechanical way he excluded from physics.


Descartes published his contributions to physics in several works, culminating in Principia philosophiae (1644, French translation 1647). He made a deep impression with his views on matter and motion in his theories of magnetism; of impact; of planetary motion; and of light; all illustrating his natural philosophy.



Magnetism

Descartes was as successful with his theory of magnetism as with his explanation of the colours of the rainbow, found in his Les météores. This is one of three appendices to his Discours de la méthode (1637), written much earlier. The other two are La dioptrique and La géométrie. Descartes assumed that all matter consists of moving particles differing only in magnitude, density, and shape. He rejected the existence of a vacuum, but suggested that a magnet and other objects have pores, invisible to the naked eye. Through these pores a continuous current of particles moves towards other bodies. The magnet expels particles fitting the pores of other magnets and of iron, but not fitting those of nonmagnetic materials. The stream of particles causes the motion of iron toward the magnet. Descartes explained the difference between north and south poles by assuming the particles and the pores to have the shape of right-handed or left-handed screws.[19]


Descartes considered his theory a possible, not a certain, explanation of the phenomena. Yet it was widely accepted, because for the first time someone provided a clear and insightful mechanical explanation of magnetic action based on the shape and motion of material particles (Descartes had no opinion about their magnitude or number).


Descartes left behind the Aristotelian physics which for any explanation started from the contrary concepts of warm and cold, dry and moist, hard and soft, heavy and light, because these were considered evident, not requiring further explanation. These contrary properties served as ‘termini’ (ends) for the explanation of changes or ‘motions’ (we would now say: processes) in the direction of one of the two termini (for instance, the process of cooling is a motion from a hot to a cold body). These were considered manifest, obvious, and either rational or observable with the senses. Medieval Aristotelians accepted alchemical concepts like the philosophers’ mercury and sulphur as clear as far as these could be connected to these termini. Phenomena that could not be reduced in this way they called obscure or occult, reminiscent of magic. Their standard example of an occult property happened to be magnetism, for it has no obvious connection with the properties warm, cold, dry, moist, heavy, or hard.


Therefore, Descartes’ explanation of magnetism was hailed as a triumph of the new mechanical philosophy. It conferred much credit on Cartesian physics, although it did not explain anything; it did not predict new phenomena; it did not further measurability; it could not be confirmed by independent experiments; and it did not generate new and interesting problems. For these reasons, Isaac Newton rejected it without much ado.



Impact

For the analysis of impact, Descartes’ basic assumption was the identification of matter and space. Therefore he believed matter to be homogeneous, isotropic, continuous, and infinitely divisible. As a consequence, extended material bodies are mutually impenetrable, perfectly hard, and not elastic. Moreover, he assumed the quantity of natural motion to be indestructible.[20] At the creation, God supplied the cosmos with a quantity of motion, never to change afterwards. He considered this law of conservation of motion to be clear and distinct, therefore evidently true, not subject to empirical scrutiny.


Starting from these principles, Descartes developed seven laws of impact.[21] Because of his assumption that bodies are hard and not elastic, these laws (with perhaps one exception) are contradicted by the results of experiments with colliding objects. Admitting this, Descartes observed that his laws concern circumstances which cannot be realized in concrete reality: the laws are valid for hard or rigid bodies, without any elasticity. ‘The proofs of all this are so certain, that even if our experience would show us the contrary, we are obliged to give credence to our mind rather than to our senses.’[22]




In his theory of impact, Descartes treated rest and motion separately, as contrary concepts. Besides the concept of quantity of motion (momentum), he applied a quantity of rest (inertia), as an effect of spatial extension. If in a collision the body at rest is larger than the moving one, the quantity of rest dominates the quantity of motion, and the largest body remains at rest.[23]


Descartes needed the distinction between rest and motion to explain the existence of extended bodies moving as a whole. The parts of the body move together with the whole, but are at rest with respect to each other. Without the idea of rest, he believed, the idea of universal motion would have excluded the existence of extended bodies.




Planetary motion


Descartes admitted that the planetary orbits are not perfectly circular, but like Galileo, he ignored Johann Kepler’s laws (4.2).[24] He explained planetary motion by vortices surrounding the sun and other celestial bodies. In a manuscript called Le monde (The world), Descartes assumed the motion of the earth. ‘If this is false, all foundations of my philosophy would be false as well, for it is evidently demonstrated from them.’[25] But learning of Galileo’s conviction in 1633, he withdrew the manuscript from the printer. As a devout Roman Catholic raised at the Jesuit College at La Flèche, he did not want to challenge the church. Emphasizing that the earth is at rest with respect to the vortex that carries it around the sun, Descartes attempted to circumvent the church’s prohibition of Copernicus’ and Galileo’s views, acting as a Copernican in disguise, ‘… denying the motion of the earth with more care than Copernicus, and with more truth than Tycho Brahe.’[26] According to his theory the earth moves around the sun because it is dragged along by the whirling matter surrounding the sun, yet it is at rest with respect to its direct surroundings. Calling this double truth ‘a hypothesis or supposition which is perhaps false’, Descartes assured his readers that he respected the church’s doctrines.


Without admitting it in plain words, Descartes assumed some kind of absolute space, a space as seen by God, but he also contended that motion can only be relative.[27] This dilemma arose from his identification of space and matter. If matter is the same as space, local motion as change of position is difficult to imagine. The only possibility to create motion in a plenum arises when spatial parts exchange their positions. Hence, real motion only occurs in a vortex, circular motion returning into itself. Descartes was impressed by William Harvey’s discovery of the circulation of blood (1628), but like Harvey he rejected the assumption that the heart is acting as a pump, even if this would have been a nice mechanical explanation. Instead Descartes suggested that blood is circulated by heat. Real vortex motion in a plenum is relative motion. The non-existing idealized rectilinear motion in a void is absolute with respect to absolute space.



Light

For his theory of light, Descartes had to take recourse to a double truth, too. The full title of his initially unpublished manuscript reads: Le monde, ou traité de la lumière (The world, or treatise on light, published posthumously in 1664). Because light and seeing play a central part in Descartes’ philosophy, his theory of light should start from a clear and distinct idea: that light is propagated instantly, with an infinite speed, through a medium pervading all other kinds of matter. However evident this idea was in Descartes’ perception, it was not shared by Galileo Galilei, Isaac Beeckman, Pierre Fermat, and Christiaan Huygens, who were more empirically inclined. Both Descartes’ physics and his cosmology started from this certainly true and indubitable idea as a crucial element of his philosophy. In a letter to Isaac Beeckman he wrote: ‘To my mind, it is so certain that if, by some impossibility, it were found guilty of being erroneous, I should be ready to acknowledge to you immediately that I know nothing in philosophy … if this lapse of time could be observed, then my whole philosophy would be completely upset.’[28]


Because Descartes identified matter with space, he could not conceive of a void. The only kind of motion in a plenum is vortex motion. Therefore, light propagating rectilinearly cannot be motion. It is inconceivable that light would need time to move from one place to another. Visual perception has an immediate character. However, for the explanation of refraction he had to assume that the speed of light is different in differing media.


In La dioptrique Descartes assumed that light does not move, but has a tendency to move, with a speed different in various media, according to the laws of motion. With this assumption he could explain refraction at the boundary between two media and derive Willebrord Snel’s law by making an analogy with a really moving ball.[29] Descartes only wanted to suggest a possible explanation, based on a hypothesis that might be false.
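In modern notation (not Descartes’ own), the law of refraction he derived from the ball analogy reads:

```latex
\frac{\sin\theta_i}{\sin\theta_r} = n \qquad \text{(a constant for a given pair of media)}
```

In Descartes’ corpuscular derivation this constant equals a ratio of speeds in the two media, which forced him to assume that light travels faster in the denser medium; the later wave theories of Fermat and Huygens inverted that ratio.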


On metaphysical grounds, Descartes had some axioms in which he believed without any reserve, because they were clear and distinct within his mechanical philosophy. But he was aware that these principles were much too simple to explain the full complexity of reality. If he wanted to give an explanation of some phenomenon, such as refraction, he concocted a hypothesis with the sole purpose of demonstrating that the new mechanical philosophy was able to explain everything. He considered it irrelevant that this hypothesis contradicted his own clear and distinct axioms, and would therefore be false.[30]


In this way Descartes maintained the medieval practice of double truth. It was the task of metaphysics to give explanations which were true, and the task of physics to find theories derived from false hypotheses but describing the phenomena correctly. The only difference from the medieval scholars was the division of tasks. Because Descartes refused to distinguish between physics and mathematics, or physics and astronomy, he had to divide the tasks between physics and metaphysics.


Experimental philosophers soon took his advice to heart. They accepted the separation of physics and metaphysics, leaving metaphysics to philosophers (in the modern sense). But they did so in a way quite different from Descartes’ intentions. Starting with Isaac Newton, physicists considered the content of their physics to be true, and they left scholastic, Cartesian, Kantian, or post-modern metaphysics for what it was: rationalist speculation. In their investigation of nature they liberated themselves from any metaphysical constraint. The experimental philosophers arrived at the conclusion that Descartes’ clear and distinct ideas, however evident, were untenable, and that their own physics, based on instrumental observations, measurements, and experiments, supplied more certainty than any metaphysics, including mechanical philosophy. Physical questions should be settled on physical principles.


Even Huygens shared this opinion.




3.3. Christiaan Huygens’ moderate mechanicism




Christiaan Huygens is known as a Dutch mathematician, physicist and astronomer, who from 1666 to 1681 lived at Paris as a leading member of the Académie des Sciences. In his work seventeenth-century mechanical philosophy received a moderate form.[31] As a designer Huygens became famous because of the pendulum clock (1657), not because he invented it, but because he brought several inventions together, building a working instrument. Initially it was intended to determine the longitude at sea, but for this purpose a pendulum is not the most obvious choice. This stimulated Huygens to design the spring balance (1674-1675), which turned out to be better applicable in a marine chronometer. He published his results in Horologium oscillatorium (1673), after Galileo’s Discorsi and Newton’s Principia the most important seventeenth-century book on mechanics.


For Huygens, mechanicism was more a research program than a philosophical doctrine. Inclined to give physical arguments priority over metaphysical ones, he corrected Descartes’ laws of impact. Applying both the law of inertia and the principle of relativity, he argued that motion is not only a quantity, as Descartes believed, but is also directed; it is a vector, as we now say. Only when this is taken into account is the law of conservation of momentum (the product of the quantity of matter and the directed motion) valid. After Galileo he was probably the first to apply the view that motion is a relation between bodies, not a property inherent in matter, as Descartes and other mechanists assumed.
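The difference can be illustrated with a modern sketch (the formulas and numbers below are illustrative, not Huygens’ own): in a perfectly elastic head-on collision the directed momentum, the sum of m·v with signs, is conserved, whereas Descartes’ undirected quantity of motion, the sum of m·|v|, generally is not.

```python
# Modern illustration of Huygens' correction of Descartes' laws of impact.
# Elastic collision in one dimension: velocities carry a sign (direction).

def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities of two perfectly elastic bodies colliding head-on."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

m1, v1 = 1.0, 2.0    # light body moving to the right
m2, v2 = 3.0, -1.0   # heavier body moving to the left

v1f, v2f = elastic_collision_1d(m1, v1, m2, v2)

# Directed momentum (Huygens' vector quantity): conserved.
momentum_before = m1 * v1 + m2 * v2
momentum_after = m1 * v1f + m2 * v2f

# Descartes' undirected 'quantity of motion' m*|v|: generally not conserved.
scalar_before = m1 * abs(v1) + m2 * abs(v2)
scalar_after = m1 * abs(v1f) + m2 * abs(v2f)

print(momentum_before, momentum_after)  # -1.0 -1.0
print(scalar_before, scalar_after)      # 5.0 4.0
```

The sign convention does all the work: dropping the signs, as Descartes in effect did, yields a quantity that changes in most collisions.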


Huygens studied uniform circular motion, deriving a mathematical expression for the centrifugal force as its effect. Later Robert Hooke and Isaac Newton interpreted the mathematically equal but inversely directed centripetal force as the cause of circular motion deviating from linear inertial motion.
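In modern notation, Huygens’ result for uniform circular motion of a body of mass m at speed v in a circle of radius r can be written as:

```latex
F = \frac{m\,v^{2}}{r}
```

Huygens read this magnitude as the outward centrifugal force; Hooke and Newton read the same magnitude as the inward centripetal force needed to bend inertial straight-line motion into a circle.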


One objection against Descartes’ vortex theory was that the density of the imperceptible whirling matter would have to be larger than the density of all bodies falling to the earth. Moreover, it is difficult to understand why gravity is directed to the centre, rather than to the axis of the earth’s rotation. In an ingenious way, Huygens sought to meet these objections. He published his theory of gravity, developed in 1667, only in 1690, three years after the much more successful theory of Isaac Newton, which Huygens admiringly but critically discussed. Writing to John Locke, who did not understand much mathematics, Huygens recommended Newton’s Principia as a first-class mathematical exercise, but he rejected its physical principles, because they collided with his mechanicist research program of reducing all physical phenomena to motion. In this program, the only kind of interaction between particles could be impact, in which momentum was transferred. Like most contemporary philosophers and theologians, Huygens despised Newton’s introduction of an attractive force acting at a distance (in a collision the particles repel each other).


As observed above, the seventeenth-century mechanical philosophers were not atomists (Pierre Gassendi excepted), neither in the classical sense of believing atoms to be unchangeable particles moving in a void, nor in the later sense of accepting atoms to have specific electrical and chemical properties. Following Aristotle, atomists were generally considered to be atheists, and wise men avoided being associated with them. The mechanists assumed that particles in a fluid are randomly distributed and can move along each other (though not incessantly). In a solid the particles were assumed to have fixed positions, though they might oscillate around their equilibrium positions, as when propagating sound. Huygens was one of the first to realize that the particles in a crystal constitute a regular spatial pattern. This enabled him to explain some optical and mechanical properties of minerals like quartz.[32] In particular his explanation of the birefringence of Iceland spar was highly admired, though Newton criticized it.[33]






With the exception of Huygens, the mechanists were thoroughgoing rationalists, believing that physical laws, like mathematical ones, could be found by reasoning alone. Therefore mathematics played a foundational part in mechanism, and many of its adherents contributed significantly to both physics and mathematics. In Il Saggiatore Galileo proclaimed: ‘Philosophy is written in this grand book, the universe, which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and read the letters in which it is composed. It is written in the language of mathematics, and its characters are triangles, circles and other geometric figures without which it is humanly impossible to understand a single word of it; without these, one wanders about in a dark labyrinth.’[34]


In La géométrie Descartes developed analytical geometry, and Huygens was considered the most important mathematician of his time, in the Alexandrian tradition of Euclid and Archimedes. Gottfried Leibniz (who started his career as a mechanist, but soon took a different path) invented the calculus, later than but independently of Isaac Newton. Both Huygens and Leibniz respected Newton’s Principia mathematica (1687) as a brilliant mathematical exercise, but they were very critical of his philosophia naturalis.


Until the nineteenth century it was generally assumed that mathematics is purely rational. Its propositions should be logically derivable from a few clear and distinct axioms. As far as natural theology considers God to be subject to logical laws, He would also be subject to mathematical relations.


A paradigm of undeniable mathematics was Euclidean geometry, but in the first half of the nineteenth century non-Euclidean geometries were discovered, shedding doubt on the rationalist idea of a priori truths, even in mathematics. The study of prime, negative, irrational, and complex numbers; of infinity and transfinite numbers; of sets and their paradoxes; of groups and other mathematical structures was by no means guided by naive clear and distinct a priori insights into the truth of 2+2=4, but rather by creative investigation into the possibilities laid down in the laws of the creation.




3.4. The radical mechanicism of Benedict Spinoza


On the one hand the Enlightenment philosophers maintained that nature is relentlessly determined by natural laws; on the other hand they propagated the freedom and autonomy of men (more often than not excluding women and coloured people). Thomas Hobbes concluded that the two principles collide with each other.[35] If everything, including mankind, is determined by natural laws, there is no room for human freedom to act, and the idea of human autonomy is an illusion.


In England, Thomas Hobbes became the most outspoken representative of mechanical philosophy, although he was quite critical about Descartes’ theories on mechanics and optics.[36] His critique of Robert Boyle’s empiricist views on the void was shared by Gottfried Leibniz and by Benedict Spinoza (Baruch Spinoza, Benedito de Espinosa), who is considered the founder of radical Enlightenment.[37]


Long before he published anything, Spinoza was expelled in 1656, at the age of 23, from the Sephardic-Jewish community at Amsterdam, probably because of his emerging radical and shocking views, which, however, were not specified, and about which no details are known with certainty.


Initially influenced by René Descartes, in 1663 Spinoza published Renati des Cartes principiorum philosophiae, pars I & II, more geometrico demonstrata, an axiomatic exposition of two parts of Descartes’ Principia philosophiae (1644). ‘More geometrico’ (according to geometry) refers to the axiomatic method applied in Euclid’s geometry, not to geometry itself, for Spinoza was not a mathematician. He lived by grinding lenses and by constructing and selling telescopes and microscopes. Although he did some experiments and corresponded about these, he did not contribute significantly to experimental science.


Spinoza was the most radical rationalist, mechanist, naturalist, and reductionist in Enlightenment philosophy. Like Huygens, with whom he was acquainted, Spinoza knew that Descartes’ laws of impact were wrong. Whereas Descartes assumed that motion was created apart from matter, Spinoza believed that motion is inherent in matter and entirely connected to spatial extension. The differences between bodies or kinds of matter must be ascribed to their different motions or rest. There is nothing like a force causing motion, nor inertia. Spinoza rejected Francis Bacon’s and Robert Boyle’s empiricism. His natural philosophy, radically different from the views of Descartes, Huygens, Boyle, Newton, Locke, and Leibniz, was largely ignored.[38]


Spinoza’s main work, Ethica ordine geometrico demonstrata (Ethics, demonstrated in geometrical order, written between 1664 and 1665, posthumously published in 1677), is also an axiomatic presentation of his monistic philosophy. He distinguished natura naturans, the active and creative force of nature, from the actual natural creation, called natura naturata. Repeating the ontological argument, Spinoza argued that God is a perfect being, having perfect foreknowledge of anything that happens. This would only be possible if everything is relentlessly determined by natural laws. Rejecting Descartes’ dualism of mind and body, he stated that there can only be one single substance, in which matter and spirit, res extensa and res cogitans, are united. It implies that God has to be identified with nature: Deus sive natura, he emphasized.


Spinoza was believed to be an atheist (although he was really a pantheist), and during the seventeenth and early eighteenth century his views were highly suspect.


In 1670 Spinoza published anonymously Tractatus theologico-politicus, criticizing then current theological views and introducing a historical-critical exegesis of the Old Testament such as would become common in nineteenth-century liberal theology.[39] It contradicted the literalist reading of the Bible advocated by conservative theologians.


In Spinoza’s radical mechanicism the ideal of nature prevailed over that of human freedom. Spinoza did not conceive of freedom as freedom to act, but as freedom to think, to communicate one’s views, and to accept with resignation that the world is completely determined by natural laws. Even if human acts are completely determined, people are free because of their reason, inherent in their conatus, their striving for self-preservation. Although Spinoza was a radical determinist, denying free will, he advocated (in contrast to Hobbes) democracy, religious tolerance, freedom of thought, and freedom of expression of one’s opinions, views which in the eighteenth century came into the centre of Enlightened thought. For Spinoza, ‘democracy’ was not the modern liberal concept, but ‘rule by many’ in contrast to ‘rule by few’, as in monarchy or aristocracy. Spinoza favoured the Dutch ‘regents’ kind of rule.




[1] Descartes 1637, 31-33; 1641, 13-18; 1647, 28-29.

[2] Descartes 1637, 33-40; 1647, 33-35.

[3] Descartes 1647, 37-38.

[4] Descartes 1647, 31; 1664, 36-37.

[5] Taylor 1989, 157.

[6] Descartes 1637, 1-2, 38-40.

[7] Descartes 1637, 2.

[8] Descartes 1637, 46, 56-58.

[9] Descartes 1647, 48, 53-54.

[10] Descartes 1647, 29.

[11] Descartes 1649, 351-355, 359-362.

[12] Gaukroger 1995, 199.

[13] Descartes, letter to Constantijn Huygens (1642), cited by Gaukroger 1995, 392.

[14] Gaukroger 1995, 346-352.

[15] Ashworth 1986, 139-140.

[16] Gaukroger 1995, 357-361; Israel 2001, chapter 2, I; chapter 11, I-IV.

[17] Gaukroger 1995, 356-357.

[18] Westfall 1971; Stafleu 2016, 3.4.

[19] Descartes 1647, 278-305; Scott 1952, 188-193; Gaukroger 1995, 380-383.

[20] Descartes 1637, 21.

[21] Descartes 1647, 89-94.

[22] Descartes 1647, 93.

[23] Koyré 1965, 77; Harman 1982, 12.

[24] Descartes 1647, 117.

[25] Descartes, letter to Mersenne (1633), Œuvres, I, 271.

[26] Descartes 1647, 109-110, 113, 115-116.

[27] Descartes 1647, 76-79.

[28] Descartes, Œuvres, I, 307, letter to Beeckman, 1634; 1637, 43, 84; 1647, 136; 1664, 98; see Duhem 1906, 33-34.

[29] Descartes 1637, 93-105; Sabra 1967, 469.

[30] Descartes 1647, 123-126.

[31] Stafleu 2016, 3.4; 5.2.

[32] Huygens 1690.

[33] Newton 1704, Queries 25-28.

[34] Galileo 1623, 237-238.

[35] Gaukroger 2006, 276.

[36] Shapin, Schaffer 1985, chapters 3, 4; Gaukroger 2006, 282-289, 368-379.

[37] Israel 2001, chapters 8, 12-17; Nadler 1999, 2001, 2011; Gaukroger 2006, 471-492.

[38] Gaukroger 2006, 491-492.

[39] Israel 2001, chapter 24.



Chapter 4

Dynamical philosophy


4.1. Isaac Newton’s two faces: Principia and Opticks


Isaac Newton’s scientific views constituted the moderate Enlightenment and reinforced the polar dialectic of nature and freedom. ‘Newton’s science portrays the natural world as governed by laws. But we are part of nature and hence to a considerable extent must also be governed by such laws. The upshot is a tension between our conception of ourselves as moral, reason-giving beings, on the one hand, and modern science, on the other, that took root during the eighteenth century and has again been with us ever since.’[1]

Newton’s natural philosophy has two components: dynamic philosophy (chapter 4) and experimental philosophy (chapter 5). He published his dynamics in 1687 in Philosophiae naturalis principia mathematica (mathematical principles of natural science), his experimental philosophy in 1704 in Opticks. Both books present laws as an alternative to Cartesian mechanicism.

From the seventeenth to the nineteenth century, mechanists were critical of the concept of force, which they considered reducible to the fundamental mechanical concepts of quantity, space, matter, and motion. Christiaan Huygens and other moderate mechanists applied the concept of force only in cases of static equilibrium, never as a dynamic force, as a cause of changing motion. Neither Johann Kepler nor Isaac Newton was a mechanical philosopher, and both applied a new concept of force freely in problems of motion.

Like Aristotle, the mechanists were speculative foundationalists, system builders trying to explain everything from first principles or self-evident axioms. In contrast, Kepler and Newton set out to establish lawful relations between phenomena, regardless of their supposed foundations.[2] Unlike Aristotle and with more success than the mechanists, they applied mathematics as an indispensable tool for their investigation of nature, but they considered observation, measurement, and experiment more important. Evangelista Torricelli, Blaise Pascal, Robert Boyle, and others studied the properties of the void without discussing the question of whether a vacuum exists, which both the Aristotelians and the mechanists considered a fundamental problem that should be solved philosophically first.

Newton investigated mathematically how gravity determines the structure of the solar system without caring about the essence of gravity. He investigated experimentally the refraction of light in a prism without bothering about the nature of light. In 1600 William Gilbert was the first to follow this path, carefully distinguishing magnetism from electricity without discussing their essence. Instead he made that distinction based on experimentally determined properties, without attempting to explain these from the shape and size of the particles constituting the bodies concerned.[3] 


4.2. Johann Kepler


According to Aristotle, forces act only where motion is violent, not natural: no natural motion is caused by a force. In particular, the motions of the celestial bodies were not subject to forces. Johannes Kepler, however, understood that a changing speed requires a force, and that this applies to changing planetary motion as well.

Like Aristotle, Kepler supposed the force keeping a body in violent motion to be proportional to its speed. Because a planet’s velocity is largest when it is closest to the sun, Kepler concluded that this force is inversely proportional to the distance from the sun and tangential, directed along the planetary orbit. It was by no means attractive, and not directed towards the sun, as Newton would later propose. Kepler suggested that the rotation of the sun about its axis causes the revolution of the planets.[4] He estimated the period of the sun’s rotation to be about three days, and was disappointed to learn from Galileo’s investigation of the sunspots that the actual period is thirty days.[5]

In De magnete (1600), William Gilbert assumed that the earth is a magnet, and that magnetism is the force driving the diurnal rotation of the earth. Kepler applied Gilbert’s suggestion to the annual motion around the sun (about which Gilbert did not express an opinion): the force exerted by the sun on the planets is also magnetic,[6] and so is the influence of the moon on the tides. Galileo and Descartes rejected both ideas, because they wanted to explain motion by motion. Galileo executed this program with respect to the tides, explaining them from the joint daily and annual motions of the earth.[7] Descartes had a mechanical explanation of magnetism, and assumed that the rotation of the sun causes a whirlpool keeping the planets in their orbits around the sun. Newton argued that this could not explain Kepler’s laws, and he introduced gravity as a universal dynamic cause of planetary motion, acting at a distance. This contradicted the mechanists’ view of action by contact, as in Descartes’ vortices.

An important difference between Ptolemy’s and Copernicus’ theories of planetary motion is that Copernicus assumed that the so-called retrograde apparent motion of the planets as annually observed from the earth is a projection of the real motion of the earth around the sun. This parallax allowed him to estimate the distances of the planets to the sun relative to the earth’s distance to the sun. It inspired Kepler in 1596 to develop a spatial model of the planetary system based on Plato’s five regular polyhedra. This model explained why there are six planets (not five or seven), presenting Kepler with a new confirmation of the Copernican system.[8] The Copernicans recognized six planets; Ptolemy counted seven, including the sun and the moon, but not the earth; Tycho Brahe counted only five, including neither the sun, the moon, nor the earth.

In 1619 Kepler found his third law, relating these distances to the periods of planetary revolution about the sun (6.3). Newton used this law, which later was seen to apply to the satellites of Jupiter and Saturn as well, as a weighty argument in his theory of gravity and of the solar system (7.3).
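In modern notation (not Kepler’s own), the third law states that the square of a planet’s orbital period T is proportional to the cube of its mean distance a from the sun:

```latex
\frac{T^{2}}{a^{3}} = \text{constant, the same for all planets.}
```

Newton could show that this relation follows from an inverse-square force directed towards the sun, which is why it served him as such a weighty argument.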


4.3. Matter and force


Isaac Newton’s Principia distinguished several kinds of force (vis in Latin). The concept of inertia, which Newton in his first law of motion called vis insita or vis inertiae, was also accepted by the Cartesians, who otherwise applied the concept of force only in equilibrium situations. What we nowadays call force is Newton’s vis impressa (external force), expressing mutual interaction. In his mechanics it is the most important concept besides matter. According to Newton’s second law of motion, a force exerted on a body causes it to accelerate. This is a strong rupture with the mechanists, who wanted to explain motion by motion. They accepted only action by impact in collisions, based on the view that extension implies the mutual impenetrability of bodies. Newton emphasized that knowledge of this mechanical property is not based on reasoning but on sensory experience. The ability of material bodies to act mutually cannot be based on extension alone. With vis impressa Newton introduced a new principle of explanation, nowadays called interaction. Newton’s third law of motion recognizes impressed force as a physical relation between material bodies: if a body exerts a force on another, the second body exerts an equal force on the first, albeit in the opposite direction. Besides quantitative, spatial, and kinetic relations, interactions turn out to be indispensable for the explanation of natural phenomena.
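In modern notation (Newton himself stated his laws verbally), the second and third laws of motion read:

```latex
\vec{F} = m\,\vec{a}, \qquad \vec{F}_{12} = -\vec{F}_{21},
```

where \(\vec{F}_{12}\) is the force exerted by the first body on the second. Newton formulated the second law in terms of a change of the ‘quantity of motion’; the familiar algebraic form F = ma is a later rendering.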

Galileo Galilei, Isaac Beeckman, and Christiaan Huygens showed motion to be a principle of explanation independent of quantitative and spatial principles. This led them to the law of inertia, now called Newton’s first law of motion. René Descartes assumed that all natural phenomena should be explained by matter and motion. Newton relativized this kinetic principle by demonstrating the need for another irreducible principle of explanation, the physical principle of interaction.[9] Yet, as a Copernican inspired by the idea that the earth moves, his real interest was in the explanation of all kinds of motion, uniform or accelerated, rectilinear or curved, in a vacuum or in a plenum, on earth or in the heavens. That is the subject matter of Newton’s Principia, in which he distanced himself from Descartes’ Principia philosophiae. With the exception of gravity, the full exploration of the physical principle of explanation did not occur during the Copernican era, which ended with the appearance of Newton’s Principia in 1687, but in the succeeding centuries, starting with his Opticks (1704).

Newton’s metaphysics[10] can be found in an untitled and unfinished manuscript, not published before 1962,[11] and probably predating Principia. It is mostly a critique of Descartes’ theory of matter, space, and motion, culminating in Newton’s theology. Next, the matter-force dualism appears as an alternative to Descartes’ metaphysics, in the form of a number of definitions reminiscent of Newton’s much more articulated ‘Axioms, or laws of motion’ on the first pages of Principia.[12]

Newton’s alternative to Cartesian metaphysics, to be summarized as the dualism of matter and force, introduces, besides the concept of impressed force, a new view of matter, expressed by the equally new concept of mass, the product of density and volume.[13] As a concept and as a measurable property, density had been known since Galileo (La Bilancetta, 1586). Ernst Mach’s critique that Newton’s definition of mass would be circular because density can only be defined as mass divided by volume[14] is therefore historically incorrect.

Newton also considered mass to be the measure of vis insita, the force of inertia. The impressed force equals the product of mass and acceleration. Christiaan Huygens’ momentum became the product of mass and velocity. The acting force in the theory of gravity is proportional to the masses of the interacting particles, and therefore weight is proportional to mass. (Contrary to a nineteenth-century myth, Newton did not distinguish between inertial and gravitational mass, any more than Albert Einstein did.) Although a certain type of force may depend on the mass or the distance of the bodies concerned, as in the case of gravity, or on their relative motion, as in the case of friction, a force is conceptually different from quantitative, spatial, or kinetic relations.
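In modern notation, the quantities mentioned here are:

```latex
m = \rho V, \qquad \vec{p} = m\,\vec{v}, \qquad F = G\,\frac{m_{1} m_{2}}{r^{2}},
```

mass as the product of density and volume, momentum as the product of mass and velocity, and the gravitational force between two bodies at mutual distance r. The proportionality constant G does not occur in Newton’s own formulation; it was isolated and measured only much later.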

Newton’s concept of gravity was much more successful than any mechanicist theory that did not apply the concept of external force as a physical relation irreducible to quantitative, spatial, or kinetic relations.

Newton’s three axioms or laws of motion opening Principia lie at the foundation of the dualism of force and matter. Throughout his life, Newton maintained an ambivalent position with respect to this dualism, because he could not abandon the Neo-Platonist view that matter cannot be active.[15] Active matter would be independent of God. This view of the inertness of matter was shared by most philosophers of his time. Only alchemists, astrologers, and other naturalists deviated from it, pointing to action at a distance in magnetism as an example. By introducing impressed force as a new principle of explanation, Newton made matter more active than he liked and more than both the mechanists and the scholastics would allow. On the one hand Newton rejected any activity of matter, because all activity in the world had to come directly from God.[16] On the other hand, Newton restricted inertia to linear motion. By vis insita, the force of inertia, each body resists change of motion, but contrary to the mechanists, he no longer considered uniform circular motion inertial. It requires a vis centripetalis as a cause, an impressed force directed to a centre, instead of Huygens’ vis centrifugalis as an effect of a uniform and therefore inertial circular motion. Matter became interactive as a source of vis impressa, subject to the law of action and reaction: first of all the source of gravity, later also the source of electricity and magnetism. Matter turned out to have specific properties, electrical or magnetic, besides chemical affinities, contrary to the mechanist view that matter can only have magnitude, spatial extension, and shape.

In order to maintain God’s sovereignty over the material world, Newton emphasized that any kind of force is subject to laws (chapter 6). Starting with Roger Cotes,[17] who corresponded extensively with Newton before the strongly revised second edition of Principia (1713) was published, Newton’s disciples between 1700 and 1850 accepted the matter-force dualism, including action at a distance. It was the inspiration for the development of the theories of static electricity (electric charge and Coulomb force) and magnetism (magnetic force and pole strength), both including an inverse-square law analogous to Newton’s law of gravity. However, the dualism came under fire after the Romantic turn in physics, having already been rejected by Enlightened chemists.


4.4. Measurement of time and motion


During antiquity and the Middle Ages, measurements served mainly commercial interests: on the market, goods were measured and weighed. Astronomers made measurements for use in astrology, developing various kinds of instruments. In Western Europe, Tycho Brahe was a pioneer in the systematic establishment of the positions of celestial bodies. The invention of the telescope strongly stimulated this development, as did its application in navigation. Yet it took some time before scientists considered the performance of measurements a constant and indispensable part of their research. Together with instrumental observation and experiments, measurements form the basis of modern natural science and technology.

Inspired by his view on inertia, Newton devoted one quarter of Principia’s introductory summary of mechanics to a scholium (Greek: scholion, a marginal note or a comment) on space, time, and motion.[18] He did not intend to give definitions of these concepts, ‘as being known to all.’ His first aim was to distinguish between the words absolute and relative, which had a different meaning for him than they usually have nowadays. He was not concerned with what time is, but with how it can be measured, and first of all with a universal standard for the measurement of time.

‘Absolute, true and mathematical time, of itself, and from its own nature, flows equably without relation to anything external, and by another name is called duration: relative, apparent, and common time, is some sensible and external (whether accurate or unequable) measure of duration by the means of motion, which is commonly used instead of true time; such as an hour, a day, a month, a year.’[19]

By relative time Newton meant time as actually measured by some clock, whereas absolute time satisfies a standard independent of the applied method of measurement. Some clocks may be more accurate than others, but in principle no measuring instrument is absolutely accurate. During the Middle Ages, the establishment of temporal moments (like noon or midnight, or the date of Easter) was more important than the measurement of temporal intervals, which was only relevant for astronomers. Mechanical clocks came into use from the thirteenth century onward, with a gradually increasing accuracy, which, however, until the invention of pendulum clocks was never better than a quarter of an hour.[20]

By absolute time Newton meant a universal standard or metric of time, independent of measuring instruments. No one before Newton posed the problem of distinguishing the standard of time from the way it is measured. This problem could only be raised in the context of experimental philosophy. Only after Newton did the establishment of a reliable metric for any measurable quantity become a common practice in the physical sciences. Sometimes this metric or standard was called ‘absolute’, as in absolute temperature, referring to the thermodynamic scale devised by Kelvin (William Thomson, 8.7). It means a standard independent of the specific properties of the applied measurement method.

By postulating an absolute clock, together with an absolute space, both imaginary, Newton did not prove that time and space are absolute starting from some preconceived idea of absoluteness; rather, he defined what should be understood by these concepts. To this end he applied the law of inertia: ‘Every body continues in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed upon it.’[21]

This means that the absolute standard of time is operationally defined by the law of inertia itself. The accuracy of any actual clock should be judged by the way it confirms this law. The law of inertia couples the standard of time to that of space: uniform motion means that equal distances are covered in equal times.


The pail experiment

Newton knew very well that the speed of a uniformly moving body with respect to absolute space cannot be measured, but he argued that a non-uniform motion with respect to absolute space can very well be experimentally determined.[22] He hung a pail of water on a rope, and made it turn. Initially, the water remained at rest and its surface horizontal. Next, the water began rotating, and its surface became concave. If ultimately the rotation of the pail was arrested abruptly, the water continued its rotation, maintaining a concave surface. Newton concluded that the shape of the surface was determined by the absolute rotation of the fluid, independent of the state of motion of its immediate surroundings. Observation of the shape of the surface allowed him to determine whether the fluid was rotating or not.
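Although Newton did not give this calculation, modern hydrostatics confirms his conclusion. In steady rotation with angular speed ω, measured with respect to an inertial system, the surface of the water takes a paraboloidal shape:

```latex
z(r) = \frac{\omega^{2} r^{2}}{2g},
```

where r is the distance from the rotation axis and g the acceleration of free fall. The concavity depends only on ω, not on the state of motion of the pail or any other immediate surroundings.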

In a similar way, Léon Foucault’s pendulum experiment (1851) demonstrated the earth’s rotation without reference to some extraterrestrial reference system, such as the fixed stars. Both Newton and Foucault supplied physical arguments to sustain their views on space as independent of matter. Descartes’ mechanical philosophy identified matter with space. In his mechanics and theory of gravity, Newton had to distinguish matter from space and time. ‘Newton’s absolute, infinite, three-dimensional, homogeneous, indivisible, immutable, void space, which offered no resistance to the bodies that moved and rested in it, became the accepted space of Newtonian physics and cosmology for some two centuries.’[23]

This view on space and time became part of the Enlightenment’s world picture.


Theological considerations

Gottfried Leibniz and Samuel Clarke (the latter acting on behalf of Newton) discussed these views in 1715-1716, each writing five letters.[24] Leibniz held that space, as the order of simultaneity or co-existence, and time, as the order of succession, only serve to determine relations between material particles.[25] Actually, this view was shared by Newton in Principia: ‘All things are placed in time as to order of succession; and in space as to order of situation’.[26] Denouncing absolute space and time, Leibniz said that only relative space and time are relevant. But it is clear that for him both absolute and relative meant something different from Newton’s intention. As for Descartes, the identification of space with matter means that space is a substance. Like Aristotle and Descartes, Leibniz understood the place of a body to be the position relative to the surrounding matter. (In a vacuum a body could not have a place.) Earlier, Newton had argued that this view would not allow an understanding of linear motion and of deviations from linear motion, so that the principle of inertia would not make sense. Unfortunately, this point was not pressed by Clarke, so that Leibniz’ possible reaction is not known.

The debate (‘It was less a genuine dialogue than two monologues in tandem’[27]) ended with Leibniz’ death, after which Clarke published it with some comments.

Newton and virtually all his predecessors and contemporaries related considerations of space and time to God’s eternity and omnipresence.[28] This changed significantly after Newton’s death, when Enlightened scientists distanced themselves from theology. ‘Scientists gradually lost interest in the theological implications of a space that already possessed properties derived from the deity. The properties remained with the space. Only God departed.’[29] ‘It was better to conceive God as a being capable of operating wherever He wished by His will alone rather than by His literal and actual presence. Better that God be in some sense transcendent rather than omnipresent, and therefore better that He be removed from space altogether. With God’s departure, physical scientists finally had an infinite, three-dimensional, void frame within which they could study the motion of bodies without the need to do theology as well.’[30]


Ernst Mach

Leibniz’ rejection of absolute space and time was repeated in the nineteenth century by Ernst Mach, who in turn influenced Albert Einstein, although Einstein later distanced himself from Mach’s opinions. Mach denied the conclusion drawn from Newton’s pail experiment.[31] He replaced the immediate surroundings by the fixed stars as the reference system for any kind of motion. He stated that the same effect should be expected if it were possible to rotate the starry universe instead of the pail with water: the rotating mass of the stars would have the effect of making the surface of the fluid concave. According to Mach this means that the inertia of any body would be caused by the total mass of the universe.[32] It has not been possible to find a mathematical theory or any experiment giving the effect predicted by Mach.[33] Einstein’s general theory of relativity introduced a global reference system determined by gravity, allowing local systems of inertia, in which rotational motion plays the same part as in Newton’s theory. Mach’s principle, stating that rotational motion is just as relative as linear uniform motion, is therefore unsubstantiated. Whereas inertial motion is sui generis, independent of physical causes, accelerated motion with respect to an inertial system always requires a dynamic explanation by means of a force. In this respect there is no difference between Newton’s and Einstein’s relativity. The most important difference occurs when a reference system is transformed into another one moving with respect to the former. According to Newton the metric of time is in that case independent of the metric of space, whereas in Einstein’s special theory of relativity these metrics are connected. Moreover, his general theory shows that the combined metric depends on the distribution of matter in space and time, reminiscent of Descartes’ identification of space and matter.


Inertial systems

Both Newtonian and relativistic mechanics use the law of uniform time to introduce inertial systems. An inertial system is a spatial and temporal reference system in which the law of inertia is valid. Its metric can be used to measure accelerated motions as well. In 1831 Évariste Galois introduced the concept of a ‘group’ as a mathematical structure describing symmetries. In physics, group theory was first applied in the theory of relativity, and since 1925 also in atomic, molecular, and solid-state physics. All inertial systems can be generated from one system by using either the Galileo group or the Lorentz group, both reflecting the relativity of motion and expressing the symmetry of space and uniform time. Both start from the axiom that kinetic time is uniform. In the classical Galileo group, the unit of time is the same in all reference systems. In the relativistic Lorentz group this is not the case, but the unit of speed (equal to the speed of light) is a universal constant. Late nineteenth-century measurements decided in favour of the latter. In special relativity, the Lorentz group of all inertial systems serves as an absolute standard for temporal-spatial measurements. Being independent of the specific properties of any clock, this standard is absolute in Newton’s sense.
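For two inertial systems whose relative velocity v is directed along the common x-axis, the two groups transform the coordinates as follows (a standard modern rendering):

```latex
\text{Galileo:}\quad x' = x - vt,\;\; t' = t; \qquad
\text{Lorentz:}\quad x' = \gamma\,(x - vt),\;\; t' = \gamma\!\left(t - \frac{vx}{c^{2}}\right),\quad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
```

In the Galileo group the time coordinate is shared by all reference systems; in the Lorentz group the speed of light c is the invariant, and the Galilean formulas reappear in the limit of speeds much smaller than c.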


Uniform time

Aristotle defined time as the measure of change, but he did not develop his physics into a quantitative theory of change, and his conceptual definition of time never became operational. Galileo discovered the temporal isochrony of a pendulum. Its period only depends on the length of the pendulum and not on the amplitude (as long as this is small compared to the pendulum’s length). Experimentally this can be checked by comparing pendulums moving simultaneously. Pendulums can be used to synchronize clocks, an important step in the measurement of time.
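In modern notation (which Galileo did not have), the period of a pendulum of length L swinging with small amplitude is

```latex
T = 2\pi\,\sqrt{\frac{L}{g}},
```

independent of the amplitude and of the mass of the bob, with g the acceleration of free fall. This expresses the isochrony that Galileo discovered.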

In 1659 Christiaan Huygens derived the pendulum law by using the law of inertia, but apparently he did not recognize the inherent problem of time. Just like Aristotle and Galileo, he simply assumed that the diurnal motion of the fixed stars (or of the earth) is uniform and therefore a natural standard of time. However, Newton’s Principia made clear that this motion could very well be irregular. A day is a relative measure of time in Newton’s conception.

Time as measured by a clock is called uniform if the clock correctly shows that a subject on which no net force is acting moves uniformly.[34] This appears to be circular reasoning. On the one hand, the uniformity of motion means equal distances covered in equal times. On the other hand, the equality of temporal intervals is determined by a clock subject to the norm that it represents uniform motion correctly.[35] This circularity is unavoidable, meaning that the uniformity of kinetic time is an axiom that cannot be proved, an expression of a fundamental law, Newton’s first law of motion. Uniformity is a law for kinetic time, not an intrinsic property of time. Time is not a substantial stream independent of the rest of reality. Time only exists in relations between events, as Gottfried Leibniz maintained, although he understood neither the metrical character of time nor its symmetry properties.

The uniformity of kinetic time expressed by the law of inertia asserts the existence of motions being uniform with respect to each other. If applied by human beings constructing clocks, the law of inertia acts as a norm, as a standard. A clock does not function properly if it represents a uniform motion as non-uniform. But that is not all.


Periodic time

Whereas the law of inertia allows kinetic time to be projected on a linear scale, time can also be projected on a circular scale, as displayed on a traditional clock, for instance. The possibility of establishing the equality of temporal intervals in processes is actualized in uniform circular motion, in oscillations, waves, and other periodic processes, on an astronomical scale as in pulsars, or at a sub-atomic scale, as in nuclear magnetic resonance. Besides the kinetic aspect of uniformity, the time measured by clocks has a periodic character as well. Whereas inertial motion is purely kinetic, sui generis, the explanation of any periodic phenomenon requires some physical cause besides the principle of inertia. Mechanical clocks depend on the regularity of a pendulum or a balance, based on the force of gravity or of a spring. Huygens and Newton proved that a system moving under a force directed to a centre and at any moment proportional to the distance from that centre is periodic. This is approximately the case in a pendulum or a spring. Electronic clocks apply the periodicity of oscillations in a quartz crystal.
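The result of Huygens and Newton mentioned here is, in modern terms, the harmonic oscillator: a force directed to a centre and proportional to the distance from it produces a motion with a fixed period, independent of the amplitude:

```latex
m\,\frac{d^{2}x}{dt^{2}} = -k\,x
\;\Longrightarrow\;
x(t) = A\cos(\omega t + \varphi),
\quad \omega = \sqrt{\frac{k}{m}},
\quad T = \frac{2\pi}{\omega}.
```

The period T depends only on the mass m and the force constant k, not on the amplitude A, which is why such systems are suitable as clocks.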

Periodicity has always been used for the measurement of time. The days, months, and years refer to periodic motions of celestial bodies moving under the influence of gravity. The modern definition of the second depends on atomic oscillations.

The periodic character of clocks allows digitizing kinetic time: each cycle is a unit, and the successive cycles are countable. The uniformity of time as a universal law for kinetic relations and the periodicity of all kinds of periodic processes determined by physical interactions reinforce each other. Without the uniformity of inertial motion, periodicity cannot be understood, and vice versa.

At the end of the nineteenth century, Ernst Mach and Henri Poincaré suggested that the uniformity of time is not a law but merely a convention.[36] One has no intuition of the equality of successive time intervals. According to these Enlightened positivists, the choice of the metric of time rests on simplicity: the formulation of natural laws is simplest if one sticks to this convention.[37] Hans Reichenbach stated that it is merely an ‘empirical fact’ that different definitions give rise to the same ‘measure of the flow of time’: natural, mechanical, electronic or atomic clocks, the laws of mechanics, and the fact that the speed of light is the same for all observers.[38]

This philosophical idea would have the rather absurd consequence that the periodicity of oscillations, waves, and other natural and technical rhythms would also be based on a convention, even if this is recognized as an ‘empirical fact’.[39]

It is more relevant to observe that physicists are able to explain many kinds of periodic motions and processes based on laws presupposing the uniformity of kinetic time as a fundamental axiom.


[1] Cohen, Smith 2002, 3.

[2] Gaukroger 2006, 397-399.

[3] Gilbert 1600, 74-97.

[4] Kepler 1609, 34 (Introduction), 228 (chapter 34); Galileo 1632, 345.

[5] Galileo 1613, 106; 1615, 212-213.

[6] Kepler 1609, chapter 34, 57; Koyré 1961, 208.

[7] Galileo 1632, Day IV.

[8] Kepler 1597.

[9] Dijksterhuis 1950, 515.

[10] Cohen, Smith (eds.) 2002.

[11] Hall, Hall, 1962, 89-156.

[12] Newton 1687, 1-28.

[13] Newton 1687, 1; Jammer 1961, 64-74.

[14] Mach 1883, 237, 300.

[15] McMullin 1978, 2, 29-56.

[16] McMullin 1978, 55.

[17] McMullin 1978, 52-53.

[18] Newton 1687, 6-12.

[19] Newton 1687, 6.

[20] Landes 1983.

[21] Newton 1687, 13.

[22] Newton 1687, 10-11.

[23] Grant 1981, 254-255.

[24] Alexander (ed.) 1956; Grant 1981, 247-255.

[25] Disalle 2002, 39.

[26] Newton 1687, 8.

[27] Grant 1981, 250.

[28] Newton 1687, 545-546 (General scholium, 1713); Jammer 1954; Grant 1981, 240-247.

[29] Grant 1981, 255.

[30] Grant 1981, 264.

[31] Mach 1883, 279-286; see Grünbaum 1963, chapter 14; Disalle 2002.

[32] Mach 1883, 286-290.

[33] Pais 1982, 288.

[34] Margenau 1950, 139.

[35] Maxwell 1877, 29; Cassirer 1921, 364.

[36] Mach 1883, 217; Poincaré 1905, chapter 2; Reichenbach 1956, 116-119; Grünbaum 1968, 19, 70.

[37] Carnap 1966, chapter 8.

[38] Reichenbach 1956, 117.

[39] Reichenbach 1956, 117.




 Chapter 5


Experimental philosophy



5.1. Methodical isolation


Radical Enlightenment philosophy was predominantly rationalistic. Therefore, Newton’s rational mechanics, founded in Principia, was much more favoured by philosophers like Immanuel Kant than his experimental philosophy, which they largely ignored. In contrast, among scientists Newton’s Opticks became more popular than his Principia. It became the focus of the moderate Enlightenment, being more empiricist than rationalistic. In Newton’s synthesis, observations, measurements, and experiments were emphasized more than mathematical analysis.

The enlightened learned societies which, inspired by Francis Bacon, flowered during the seventeenth and eighteenth centuries were first of all intended to perform experiments together, in order to illustrate and disseminate the new insights. They did so piecemeal: each experiment stood apart from all others. Besides the dualism of matter and force, methodical isolation became the nucleus of experimental philosophy. In the history of classical physics, the period between circa 1600 and 1850 is characterized by the successive isolation and development of separate fields or domains of science. These were investigated in close cooperation of theories and experiments. Newton’s success in his investigation of gravity was partly due to the fact that he could develop it in isolation from other phenomena, because other forces are negligible at a planetary scale. Several kinds of isolation may be distinguished, each of them an artificial method of investigating nature.

Experimental isolation intends to shield a physical system from its environment, keeping various circumstances and parameters constant, in order to study the influence of one parameter on a single other one. An example is the calorimeter, a thermally isolated vessel with a thermometer, invented by Joseph Black and Antoine Lavoisier in the eighteenth century. Another one is electric isolation, the necessity of which, both in experiment and in practice, was only gradually established. Most Greek and medieval investigators of nature considered the experimental method to be unnatural and non-informative. Though already practiced in alchemy and in the crafts, only in the seventeenth century did experiment become an academically accepted instrument for scientific research. Experimental isolation aims at making a phenomenon controllable. The circumstances in which a phenomenon occurs are carefully described, such that the phenomenon becomes communicable and reproducible. Positivist philosophers tend to consider experiments only as means to check theories, but realist scientists apply experiments as heuristic tools, to discover lawful relations.

Theoretical isolation directs itself to a single problem, for which the boundary conditions are carefully described. Idealization often accompanies theoretical isolation. Influences that in reality cannot be neglected are eliminated in order to make the problem solvable. An example is the derivation of Galileo’s law of fall, deliberately neglecting air resistance and the upward force of buoyancy.[1] In experiments, too, such idealizations are applied, for instance by making pure samples, purer than can be found in nature. Theoretical isolation often leads to the construction of idealized models.

Technical isolation occurs anytime someone tries to solve a practical, experimental, or theoretical problem with the help of some specific instrument that perhaps does not yet exist but is developed for the purpose of solving the problem at hand. Inventing, designing, and using a machine or an instrument requires the isolation of the problem to be solved.

The isolation of a field of science combines experimental, theoretical, and technical isolation. A field of science is characterized by more or less well defined problems and by experimental methods. An isolated domain of science directs itself to a limited number of phenomena, some of which serve to identify the field, whereas other phenomena are generated by its research. Each theory in the field of science should concern all phenomena of the field and should not lead to results contradicting these. But it does not need to be concerned with phenomena that belong to a different field of science. For instance, if a theory on electricity is contradicted by electric phenomena, it should be rejected, but it would not be problematic if it could not explain magnetic phenomena, as long as these fields are separated.

Since the seventeenth century, scientists started to distinguish fields of science from each other, in order to develop them separately.

This methodical isolation started in the seventeenth century. It was never important during antiquity and the Middle Ages, because of the then prevailing organic world view. The cosmos, conceived as an ordered world, was considered a coherent organism, in which everything had its proper position, according to a hierarchical order. Such a holistic world view searches more for agreements and analogies than for differences. The phenomena are not investigated in isolation, but in their coherence. Like Aristotle, mechanical philosophers believed that one should start from an all-embracing and universally valid system before one could meaningfully study details. René Descartes reproached Galileo Galilei for studying the motion of fall without having a clear insight into the essence of gravity. Francis Bacon found the same fault with respect to William Gilbert’s On the magnet.[2] In contrast, methodical isolation is a hallmark of experimental philosophy.


5.2. Newton’s synthesis


The matter-force duality inspired by Newton replaced the duality of matter and motion in Cartesian mechanicism (4.3). Besides methodical isolation it became a hallmark of experimental philosophy. After the success of Isaac Newton’s theory of gravity, the matter-force duality was applied in various other fields of science as well. In electrostatics it concerned the pair of electric charge and electric force, in magnetostatics magnetic pole strength and magnetic force. Analogous to gravity, both included an inverse square law between material particles. In the nineteenth century, however, it turned out that this approach could only be fruitful for more or less static situations. André-Marie Ampère reduced magnetism to moving electricity, suggesting that electrodynamic currents satisfying laws of their own may be more important than static forces.[3] Like thermal or chemical currents, the electrodynamic currents find no analogy in gravity.
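The formal analogy between gravity and electrostatics mentioned above can be made explicit in modern notation (anachronistic for the period, using present-day SI constants): both are inverse-square laws between point particles, differing only in the constant and in the quantity playing the role of "charge".

```python
# Gravity and Coulomb's electrostatic force share the same mathematical
# form F = k * q1 * q2 / r**2; only the constant and the kind of "charge"
# (mass vs. electric charge) differ. Modern SI values, for illustration.

G = 6.674e-11   # gravitational constant, N·m²/kg²
K = 8.988e9     # Coulomb constant, N·m²/C²

def inverse_square(constant, q1, q2, r):
    """Generic inverse-square force law between two point particles."""
    return constant * q1 * q2 / r**2

# Two 1 kg masses and two 1 µC charges, each pair 1 m apart:
f_gravity = inverse_square(G, 1.0, 1.0, 1.0)
f_coulomb = inverse_square(K, 1e-6, 1e-6, 1.0)
print(f_gravity)   # 6.674e-11 N
print(f_coulomb)   # ≈ 9.0e-3 N
```

The same function serves both fields, which is precisely why the matter-force duality could be transferred from gravity to electrostatics and magnetostatics.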

In order to apply the matter-force dualism to electric, thermal, and chemical material currents, Newton’s concept of force as well as the concept of matter had to be extended. This happened first by introducing generalized forces different from Newton’s impressed force: electrical tension or potential difference, temperature difference, and chemical potentials. Besides, in various fields of scientific research specific material fluids were proposed as alternatives for Cartesian effluvia (8.3). These were called imponderable, weightless, because they appeared not to be subject to gravity, meaning that they could be studied apart from gravity. In electricity even two fluids were suggested, one positively charged, one negatively, whose effects could neutralize each other. Fluids were transferable from one body to another. A fluid theory was most fruitful if it included a conservation law, a law expressing that in each transfer the total amount of the fluid (or, in the case of a two-fluids theory, the net amount) had a constant value. The law of conservation of electric charge, proposed about 1750 by various authors, is still unchallenged, but Antoine Lavoisier’s law of conservation of heat or caloric had to give way to the law of conservation of energy, which however was not considered a material fluid. A temperature difference was introduced as the force driving a heat current, a pressure difference as the driving force of an air or water current, and a potential difference as the force pushing an electric current. For a chemical material current, for instance through the wall of a living cell, the concentration difference was recognized as the driving force. Unlike Newton’s impressed force, these generalized forces were not subject to Newton’s laws of motion. A constant Newtonian force is connected to accelerated motion, but a constant generalized force causes a stationary current with a constant speed.
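The contrast in the last sentence can be put in numbers (a minimal sketch in modern notation, with illustrative values): a constant impressed force produces a velocity that grows without bound, whereas a constant generalized force such as a potential difference sustains a current that does not change in time.

```python
# A constant Newtonian force F on a mass m gives uniformly accelerated
# motion, v = (F/m) * t: the velocity keeps growing. A constant generalized
# force, e.g. a potential difference across a resistor, drives a stationary
# current (Ohm's law, I = V/R). Illustrative values only.

mass = 2.0        # kg
force = 10.0      # N, constant impressed force
for t in (1.0, 2.0, 3.0):
    v = (force / mass) * t
    print(t, v)   # velocity grows: 5.0, 10.0, 15.0 m/s

resistance = 100.0  # ohm
voltage = 12.0      # V, constant generalized force
current = voltage / resistance
print(current)      # 0.12 A, constant however long the voltage is applied
```

Ohm's law itself dates from the nineteenth century; it is used here only to make the distinction between the two kinds of "force" concrete.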

About 1700 Newton’s views were accepted in England, Scotland and the Dutch Republic, and somewhat later in France and other countries.[4] By 1730 Cartesian mechanistic physics was mostly abandoned, but for the time being the French chemists remained loyal to Cartesianism. Newton’s natural philosophy became the flagship of moderate Enlightenment, after Newton wrought a synthesis of physical science. This synthesis concerned a wide spectrum: mechanics, gravity, optics, sound, the void, electricity, and magnetism. Only chemistry stood aloof. Newton’s alchemy, as far as it was known, was not popular.

Radical Enlightenment philosophers like Denis Diderot (who like Jean d’Alembert initially endorsed Newton’s views) became increasingly critical of Newtonianism, defended by François-Marie Arouet (Voltaire). In Germany Newtonianism had to compete with the views of Gottfried Leibniz and Christian Wolff. Even in the Encyclopédie itself, all four currents mentioned were represented.

The Newtonian synthesis marks the separation of a posteriori science from a priori philosophy. Since Newton, the credibility of a scientific theory is no longer determined by philosophical arguments, but by its agreement with other scientific results, in particular due to instrumental observations, skilful experiments, and accurate measurements.

Newton’s synthesis implies that others preceded him: Tycho Brahe and Johannes Kepler, who showed how careful observations could lead to the discovery of laws, as well as William Gilbert, Blaise Pascal, Robert Boyle, and Robert Hooke, who stressed the heuristic function of experiments to discover natural laws much more than Francis Bacon had done.[5] Initially Boyle adhered to Descartes’ mechanistic philosophy, but as Hobbes’ opponent in the battle of the void he became a prophet of experimental philosophy. Hooke was Boyle’s assistant until his appointment as ‘curator of experiments’ in the Royal Society.

Yet in experimental philosophy Newton’s views, his theories and his emphasis on observations, measurements, and experiments as sources of information were dominant.[6] Whereas Principia (1687) marks both the end of the Copernican era and the beginning of rational mechanics, the much more widely read Opticks (1704) is more characteristic of the synthesis wrought by Newton’s experimental philosophy.

Opticks is a description, if not a prescription, of experimental interactive research. From the start Newton emphasized: ‘My Design in this Book is not to explain the Properties of Light by Hypotheses, but to propose and prove them by Reason and Experiments.’[7] And at the end he repeats: ‘For Hypotheses are not to be regarded in experimental Philosophy’.[8]

This is a manifesto against René Descartes’ mechanical philosophy, which considered optics to be part of geometry and mechanics, to be derived from clear and distinct ideas. In contrast, Newton stated that his optical theory was based on experiments. Nevertheless, the third book of Opticks contains a treasure of hypotheses in the form of queries, inspiring many scientists to new experiments and measurements according to Newtonian standards, embodying the Newtonian synthesis. The largest part of Opticks was written long before 1704. The queries were added in 1704, and later supplemented.[9]

It became the program of experimental physics for more than a century. Only in the nineteenth century, after the acceptance of the wave theory of light, did Opticks become discredited, because Newton had recommended a corpuscle theory, even if Thomas Young testified that his path-breaking views on wave optics were indebted to Newton’s work.[10]

Because the mechanists identified matter with volume and shape, they denied the possibility of a void as much as the Aristotelians did. For experimental philosophers like Evangelista Torricelli, Blaise Pascal, and Robert Boyle only experiments could decide about the existence of a vacuum. Because planets could move without friction, they assumed that interplanetary space is empty. Because of its transparency to light, the mechanists (but also Newton) argued that this space is filled with an ethereal (light-bearing) matter. The same controversy concerned the space above the mercury in Torricelli’s tube.

In order to criticize Descartes’ theory of planetary motion, Newton developed his own theory of motion in a resistive medium, in the second book of Principia.[11] This theory did not play a constitutive part in his theory of planetary motion, but it was necessary to show the Cartesian cosmology to be wanting. Newton concludes this book as follows:

 ‘… so that the hypothesis of vortices is utterly irreconcilable with astronomical phenomena, and rather serves to perplex than explain the heavenly motions. How these motions are performed in free spaces without vortices may be understood by the first book; and I shall now more fully treat of it in the following book.’[12]


In the general scholium at the end of Principia, Newton repeats: ‘The hypothesis of vortices is pressed with many difficulties.’[13] The difference was so radical that it elicited from François-Marie Voltaire, who lived in England from 1726 to 1729, the sarcastic comment:

‘A Frenchman who arrives in London finds himself in a completely changed world. He left the world full; he finds it empty. In Paris the universe is composed of vortices of subtle matter; in London there is nothing of that kind. In Paris everything is explained by pressure which nobody understands; in London by attraction which nobody understands either.’[14]


5.3. Moderate Enlightenment


Besides experimental philosophy, physicists in the eighteenth and nineteenth centuries endorsed several world views. But they all accepted Newton’s physical results, in particular after the Dutch professors Willem Jacob ’s-Gravesande and Pieter van Musschenbroek convinced their French colleagues of the superiority of Newton’s physics over Descartes’.[15]

Voltaire’s Lettres philosophiques (1734) and Elémens de la philosophie de Newton (1738) introduced Newton’s physics and John Locke’s empiricism to the French philosophers later involved in the Encyclopédie (1751-1772). Because the editors Denis Diderot and Jean d’Alembert wanted to avoid any commitment to a systematic division of science, they ordered their encyclopaedia alphabetically. Yet from the start they admitted Locke’s influence: ‘What Newton would not attempt, and perhaps would not have executed, Locke undertook, and successfully performed. He may be said to have invented metaphysics, as Newton invented physics.’[16]


Between 1730 and 1760 John Locke’s empiricism constituted moderate Enlightenment, also represented by François-Marie Voltaire, David Hume, and Immanuel Kant.[17] Following Francis Bacon, Locke’s Essay concerning human understanding (1690)[18] replaced mechanical rationalism by a world view stressing the importance of all kinds of information received via the senses. Whereas René Descartes distinguished res extensa (extension) from res cogitans (thought), Locke distinguished between sensation and reflection, or external and internal experience, as it was later called. He believed substances to be unknowable, but he accepted that matter could be active, in so far as it is sensible, able to act on the senses. Locke rejected innate ideas, including Descartes’ clear and distinct ideas. He believed that all understanding owes its contents to the elementary psychical representations (‘simple ideas’) given in sensation and reflection, which the mind receives purely passively. These should be distinguished from the representations (‘complex ideas’) formed in the mind. Only the laws of mathematics and of ethics are beyond empirical experience, being knowable a priori.

Locke believed ‘that we are capable of knowing certainly that there is a God.’[19] Locke’s ensuing ontological argument from perfection, proving the existence of God, does not differ fundamentally from Descartes’ rationalism.

David Hume, a representative of the Scottish Enlightenment (including Adam Smith, Thomas Reid, Joseph Black, and James Hutton), radicalized empiricism in A treatise of human nature (1739-40) and An enquiry concerning human understanding (1748).[20]

Without rejecting it entirely, Hume criticized the concept of causality, which he considered a kind of psychological association based on habituation. As a deist, Hume stopped short of occasionalism, defended by the Cartesian philosopher Nicolas Malebranche in his influential De la recherche de la vérité (1674). Like Descartes emphasizing that matter is completely inert, Malebranche believed that only God’s will would be the occasional cause of anything, according to Cartesian laws of motion and impact. Hume became very sceptical about the possibility of mathematical science as pursued by mechanism and experimental philosophy, and in general about the pretension of reason to go beyond the empirical.

The eighteenth-century philosophers adopted Newton’s experimental philosophy as part of the Enlightenment. However, Newton and his physical adherents were not empiricists, rationalists, or romanticists. They were not even philosophers but scientists, in the modern sense of these words. In their observations and experiments they favoured the investigation of phenomena and their causes and effects over speculations about the underlying microstructure of matter. Contrary to Thomas Hobbes and John Locke, they held, and practiced, that observations, experiments, and measurements should be analysed mathematically, in order to find the natural relations. They sought methods for the discovery and justification of natural laws. As exemplified in Opticks, they treated phenomena in a much more active way than the empiricists could imagine, stressing and applying the quantitative, spatial, kinetic, and physical relations that Newton developed in Principia.

Newton’s empirical views also influenced his theology. He adhered neither to Calvinism nor to Anglicanism. Based on his intensive reading of the Bible he became a Unitarian (rejecting the Trinity), carefully hiding this view, which was generally considered an Arian heresy. Contrary to the rationalist mechanists he stressed that God can only be known from his relations with the people, of whom he is the Lord: ‘God is a relative word and has a respect to servants; and Deity is the dominion of God not over his own body, as those imagine who fancy God to be the soul of the world, but over servants … we have no idea of the manner by which the all-wise God perceives and understands all things.’[21]


Unlike François-Marie Voltaire and Jean-Jacques Rousseau, Newton was not a deist, someone believing that God after the creation left the world, as governed by natural laws, to itself. He argued that God’s interference was necessary to keep the solar system stable. A century later Pierre-Simon Laplace proved Newton’s astronomical arguments wrong.

Deism rejected revelation as the source of natural theology, arguing that reason and the study of nature were sufficient to establish God’s existence. It disappeared in Great Britain at the end of the eighteenth century, while it was still influential among the leaders of the American, French, and Dutch revolutions.

About 1750, most philosophers assembled in the Encyclopédie rejected moderate Enlightenment. They became increasingly radical, opening the door to atheism, materialism, determinism, and revolution (11.1). Yet, especially outside France, moderate Enlightenment remained predominant.


5.4. Blaise Pascal


Even Blaise Pascal, the intelligent critic of both Descartes’ Enlightenment thought and the conservative views of the Jesuits, could not escape the dialectic of nature and freedom: ‘Man is only a reed, the weakest in nature, but he is a thinking reed’ is a well-known quote from his Pensées.

As a mathematician Pascal laid the foundations of probability calculus. As a physicist he was an experimental philosopher, even before Newton. About 1640, Italian engineers attempted to use a suction-pump to raise water to a height of twelve metres or more, and discovered that ten metres was the limit. (A force-pump can reach higher.) Applying the hydrostatics developed by Giovanni Benedetti and Galileo Galilei, in 1643 Evangelista Torricelli assumed that in this case the weight of a column of water is balanced by the weight of a column of air with the same cross section but a larger height. In order to check this, Torricelli took a tube about one metre long, sealed at one end, and filled it with mercury, having a density nearly fourteen times that of water. When he placed the inverted tube vertically in a basin of mercury, the fluid in the tube dropped to a height of about 76 centimetres. It became the first instrument to measure the barometric pressure and to predict the weather.
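Torricelli's balance can be checked with a short calculation (in modern notation and units, which Torricelli of course did not have): the atmosphere supports a liquid column whose weight per unit area equals the atmospheric pressure, p = ρgh, so the supported height scales inversely with the liquid's density.

```python
# The atmosphere supports a liquid column of height h = p / (rho * g):
# the denser the liquid, the shorter the column. Modern standard values.

P_ATM = 101325.0        # Pa, standard atmospheric pressure
G = 9.81                # m/s², gravitational acceleration
RHO_WATER = 1000.0      # kg/m³
RHO_MERCURY = 13600.0   # kg/m³, about 13.6 times water

def column_height(pressure, density):
    """Height of a liquid column balanced by the given pressure."""
    return pressure / (density * G)

print(round(column_height(P_ATM, RHO_WATER), 2))    # ≈ 10.33 m: the pump limit
print(round(column_height(P_ATM, RHO_MERCURY), 3))  # ≈ 0.759 m: Torricelli's 76 cm
```

The two results reproduce both the engineers' ten-metre limit for water and the 76-centimetre mercury column from a single balance.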

Aristotelians held the idea of a horror vacui, nature abhors the void, also called fuga vacui, flight from the void. This is an anthropomorphic expression: in fact, not nature but the philosophers abhorred the void. The mechanists René Descartes and Thomas Hobbes also rejected the possibility of a vacuum.[22] The Cartesians maintained that Torricelli’s void was only empty of coarse types of matter, but still contained the finest material responsible for the transmission of light, implying this matter to be weightless. Christiaan Huygens and Isaac Newton, too, argued that Torricelli’s vacuum was filled with an ethereal substance, able to carry light.[23] Others believed that the empty space contained at least some spirit or vapour of mercury. (We now know that mercury vapour exerts a pressure of about one millionth of the atmospheric pressure, not measurable with seventeenth-century instruments.)

This controversy led to many new experiments, in which especially Blaise Pascal excelled.[24] Rejecting weightless matter, Torricelli and Pascal argued that all experiments suggest that the space above the mercury column is empty, or at least does not contain any known matter. Decisive was that none of the many suggestions about which substance would fill Torricelli’s tube could explain why the mercury column would remain at 76 centimetres.

The discovery of Torricelli’s void led to the insight that air has weight, exerting a pressure like any fluid. In Treatises on the equilibrium of liquids and the weight of air (published shortly after his death, in 1663) Pascal perfected the hydrostatics of Archimedes, Simon Stevin, Galileo Galilei, Evangelista Torricelli, and Marin Mersenne. His most important concept was pressure, nowadays operationally defined as force per unit area, with the pascal (1 newton per square metre) as its unit. He assumed that we are living at the bottom of an atmospheric sea, pressed down by the weight of air. He based his theory on the axiom, now called Pascal’s law, saying that in a static fluid, at the same level the pressure is everywhere the same, in all directions. From this axiom he derived both Archimedes’ law on bodies floating on or submerged in a fluid and the properties of all variants of Torricelli’s tube. Pascal’s prediction (also claimed by Mersenne and Descartes) that the barometric pressure depends on the height in the atmosphere was in 1648 experimentally confirmed by Pascal’s brother-in-law, Florin Périer, on the Puy de Dôme. The atmospheric pressure, caused by the weight of air, explains the height of the mercury column in Torricelli’s tube. Pascal confirmed this by placing the tube in a closed container, isolated from the atmosphere, and evacuating the container: the mercury column’s height decreased accordingly.
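The size of the effect Périer could expect follows from the picture of the atmospheric sea: climbing removes the weight of the air left below. A rough estimate (a sketch with modern sea-level values, treating the air density as constant over the climb, which Pascal could not have computed in this form) already shows that the drop is easily measurable.

```python
# Climbing a height dz removes an air column of weight rho_air * g * dz per
# unit area, so the mercury column falls by roughly (rho_air / rho_Hg) * dz.
# Modern sea-level values; the constant-density assumption is a simplification.

RHO_AIR = 1.2          # kg/m³ near sea level
RHO_MERCURY = 13600.0  # kg/m³

def mercury_drop(ascent_m):
    """Approximate fall of the mercury column (in metres) for a given ascent."""
    return (RHO_AIR / RHO_MERCURY) * ascent_m

# For an ascent of about 1000 m, of the order of Périer's climb:
print(round(100 * mercury_drop(1000.0), 1))  # ≈ 8.8 cm
```

A drop of several centimetres was well within the reach of a seventeenth-century barometer, which is why the Puy de Dôme experiment could be decisive.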

In the anonymous series Lettres provinciales (1657) Pascal became involved in the theological discussion between the Jesuits and the Jansenists. In a bulky book on Augustine (posthumously published in 1640) the Southern-Netherlands bishop Cornelius Jansen concluded that Augustine believed human nature to be evil, positioning human free will in the frame of predestination. It means that the faithful cannot enforce their own salvation, for instance by performing works of mercy. According to the Jansenists, whether someone will be saved is entirely in God’s hands. This Calvinist view, condemned by the Council of Trent (1545-1563), was attacked by the Jesuits. Pascal observed that by rejecting Jansen’s theology, the Catholic Church distanced itself from Augustine. Moreover he demonstrated that the Jansenist theses condemned by the pope could not be found in Jansen’s work and were fabricated.

Pascal’s unfinished apologetic work Pensées was published posthumously. In this collection of short notes and fragments Pascal opposed Cartesian rationalism, stressing the dependence of human beings on the God of Abraham, Isaac, and Jacob, not the God of the philosophers. He believed that God can only be known from his revelation in Jesus Christ, without which He would be Deus absconditus, the hidden God.


[1] Wootton 2010, 230.

[2] Dijksterhuis 1950, 450 (IV: 203-204); Gaukroger 2001, 90.

[3] Ampère 1826.

[4] Israel 2001, chapter 27.

[5] Eamon 1994, 289.

[6] Cohen, Smith (eds.) 2002, 17.

[7] Newton 1704, 1.

[8] Newton 1704, 404.

[9] Newton 1704, 338-405.

[10] Cohen 1952, xli-xliii.

[11] Newton 1687, book II.

[12] Newton 1687, 396.

[13] Newton 1687, 543.

[14] Koyré 1965, 14.

[15] ’s-Gravesande’s textbook Physices elementa mathematica, experimentis confirmata, sive introductio ad philosophiam newtonianam (two volumes, Leiden 1720) was quite influential.

[16] ‘Discours préliminaire des éditeurs’, Encyclopédie I, cited by Gaukroger 2010, 280.

[17] Israel 2001; 2006; 2011; Gaukroger 2010, chapter 4.

[18] Locke 1690.

[19] Locke 1690, book IV, chapter 10.

[20] Hume 1739, 1748.

[21] Newton 1687, 544, 545.

[22] Descartes 1647, 71-73; 1664, 16-23; Shapin, Schaffer 1985.

[23] Shapin, Schaffer 1985, 200.

[24] Dijksterhuis 1950, 488-503 (IV: 261-282); Middleton 1964, chapters 1, 2; Westfall 1971, 43-50; Shapin, Schaffer 1985, 41-42; Cohen 2010, 410-415.




Chapter 6



Laws of nature




6.1. The Renaissance search for order




Whereas in the preceding chapters the idea of natural law or law of nature (I shall not make a distinction between these) was mentioned only marginally, chapters 6 and 7 put this theme in the centre, as was the case in Enlightenment philosophy. Chapter 6 discusses the ontological status of natural laws, chapter 7 investigates some epistemological problems. In both, the distinction between nominalism and realism plays an important part. It dates from the medieval discussion about the meaning of ‘universals’.


Aristotle was a realist, assuming that the meaning of generic terms must be sought in reality (in re). In contrast, Plato was considered an idealist, seeking this meaning in the eternal ideas preceding reality (ante rem). The distinction between adherents of Plato and Aristotle grew into that between idealist rationalists and realist empiricists. As a third party, nominalist philosophers considered generic terms merely as names (nomen), whose meaning is determined by the human mind after considering reality (post rem). In the Middle Ages laws ordained by human authorities were called ‘positive laws’, in contrast with divine laws which were called ‘natural’; therefore nominalism is also called ‘positivism’.


The metaphorical idea that invariant laws govern nature is a fruit of the Renaissance and of early Enlightenment, but it became contested in the nineteenth century. Contrary to nominalist positivist and historicist philosophers stating that laws of nature are invented by people, realist experimental philosophers believed that these laws can be discovered in nature. Knowledge of natural laws enables people to understand nature, to solve many problems, and to apply it for practical use. Natural scientists consider their knowledge of natural laws based on experimental research to be much more reliable than any metaphysics could supply.


Medieval scholars distinguished positive law, given by human authorities, from (mostly moral) natural law, given by God, but in this sense the word law was never applied in science.[1] In a scientific context, the word law was introduced about 1600 by Renaissance scholars like Tycho Brahe: ‘the wondrous and perpetual laws of the celestial motions … prove the existence of God’[2]; Giordano Bruno: ‘Nature is nothing but the force inherent in the things, and the law according to which they pursue their orbits’[3]; and Galileo Galilei: ‘Nature … never transgresses the laws imposed upon her’.[4]  


Early Enlightenment philosophers, too, accepted that natural laws are ordained by God. René Descartes believed that God ‘did nothing but lend his usual support to nature, allowing it to behave according to the laws he had established.’[5] Gottfried Leibniz spoke of natural laws as rules subordinate to the supernatural law of general order.[6]


Isaac Newton’s experimental philosophy considered the aim of the physical sciences to discover the laws of nature, as summarized by Roger Cotes in the preface to the second edition (1713) of Newton’s Principia: ‘Without all doubt this world, so diversified with that variety of forms and motions we find in it, could arise from nothing but the perfectly free will of God directing and presiding over all. From this fountain it is that those laws, which we call the laws of Nature, have flowed, in which there appear many traces indeed of the most wise contrivance, but not the least shadow of necessity. These therefore we must not seek from uncertain conjectures, but learn them from observations and experiments.’[7]


The idea of laws of nature emerged during the Renaissance, probably without much deliberation. The increasing emphasis on laws cannot be understood apart from the general historical context. During the Middle Ages, the concept of law as we know it hardly existed. Countries and counties were partly ruled according to agreements and contracts between the rulers and the representatives of the estates. In principle, the emperors, kings, dukes, and counts derived their authority from God, or from the church. The medieval practice of government by incidental agreements or unilateral decisions collapsed under the burden of its complications and arbitrariness, its lack of unity and consistency. The idea of civic law based on universally valid human rights arose during the chaotic religious and civil wars because of the generally felt need of order.


During the Enlightenment people questioned the divine authority of their governments, and started to require laws to be based on fundamental principles like freedom and human rights, assumed to transcend both historical agreements and royal authority. A law transcends the authority of the government, even if the latter is the primary source of positive law, i.e., the law as it is formulated in customs, law books, or constitutions. It means that the government is no longer an arbitrarily acting absolute sovereign, but is subject to its own laws, a view that would have surprised many medieval scholars. Already in 1581 the Dutch States General abjured their lord (the Spanish king), accused of violating the country’s laws.


We find a similar need for order in the physical sciences. Nicholas Copernicus’ main motive to reform astronomy was his wish to bring order in the planetary system. He criticized his precursors, saying: ‘Also they have not been able to discover or deduce from them the chief thing, that is the form of the universe, and the clear symmetry of its parts. They are just like someone including in a picture hands, feet, head, and other limbs from different places, well painted indeed, but not modelled from the same body, and not in the least matching each other, so that a monster would be produced from them rather than a man.’[8]


The reformers Martin Luther and John Calvin rejected the Platonic and Aristotelian view that ideas or forms are logically transparent, self-evident, purely rational. But they feared that a one-sided emphasis on God’s omnipotence would lead to the idea that God acts arbitrarily. In particular Calvin supplemented the idea of God’s omnipotence with the idea of God’s faithfulness. God is faithful to His covenant with His people and to the laws which He accorded the creation, including the natural laws. This view allows people to discover the laws. These are not first of all open to rational thought, but to empirical investigation, in which rational thought operates together with observation and experiment.


The rationalist opinion was still shared by Galileo Galilei and René Descartes, but Johann Kepler and Isaac Newton arrived at the empiricist view that natural laws are neither logical nor intuitively evident. The planets move in elliptical orbits with a velocity changing according to a law, although this could have been different. Newton derived Kepler’s laws from his theory of gravity, but he could not logically prove this law to be necessarily true. The law of gravity is as it is, but it could have been different if God had wanted it so. It is contingently dependent on God’s will.


The Protestant view of law implies a new concept of truth. Truth is no longer conformity of theory and fact, but law conformity, obedience. The investigation of the lawfulness of the creation is conducted with respect for the laws, or rather for the lawgiver, the sovereign of heaven and earth. It was this respectful attitude which led Kepler to accept his laws, contradicting every hypothesis conceived up till then. It means the subordination of human thought to divine law.




6.2. Experimental philosophy discovers natural laws a posteriori




Johann Kepler was probably the first to formulate laws as generalizations in terms of mathematical relations.[9] His first law (1609), stating that planets move in elliptical paths, did not differ very much from the view, accepted since Plato, that the orbits of the celestial bodies are circular. After all, both circles and ellipses are geometrical figures, and the planetary orbits do not differ very much from circles. However, since Plato the circular motion of the celestial bodies had been a rational hypothesis a priori, imposed on the analysis of the observed facts. In contrast, Kepler’s law was a rational generalization a posteriori, after Tycho Brahe had established the factual motion of Mars from careful measurements over twenty years.
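How little the planetary orbits differ from circles can be quantified (using the modern value of Mars’ orbital eccentricity, e ≈ 0.093, which Kepler did not express in this form): for an ellipse the ratio of minor to major axis is sqrt(1 − e²), barely below one for so small an eccentricity.

```python
# For an ellipse with eccentricity e, the minor axis b relates to the major
# axis a by b/a = sqrt(1 - e**2). With Mars' modern eccentricity e ≈ 0.093,
# the orbit is flattened by less than half a percent.

import math

def axis_ratio(eccentricity):
    """Ratio of minor to major axis of an ellipse."""
    return math.sqrt(1.0 - eccentricity**2)

ratio = axis_ratio(0.093)
print(round(ratio, 4))              # ≈ 0.9957
print(round(100 * (1 - ratio), 2))  # flattening in percent, well under 1 %
```

This is why only Tycho Brahe’s exceptionally accurate measurements could reveal the deviation from circularity at all.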


Laws cannot be directly experienced; they cannot be observed. The only way to find natural laws empirically is to investigate the particulars supposed to satisfy them. Individual things or events are considered as samples or exemplars, supposed to be representative and reproducible. Sometimes the samples are highly idealized: frictionless motion, rigid bodies, chemically pure substances, a space kept at constant temperature. Idealized samples are studied in order to find law conformity, to be applied in more complicated concrete circumstances. Sometimes a sample consists of a series of repeated observations, like the ten revolutions of Mars in its two-year cycle observed during twenty years.


If laws of nature had existence apart from the objects satisfying them, our knowledge of laws could be independent of empirical research. René Descartes assumed that true knowledge of the fundamental laws of nature can be achieved on the basis of intuition and logical thought. Immanuel Kant believed that true scientific knowledge (‘eigentliche wissenschaftliche Naturlehre’) is apodictic, irrefutable.[10] Natural laws like Newton’s laws of motion should be known a priori, independent of experience, derivable from metaphysical principles. Other fundamentalists believed that true knowledge could only be found in holy scriptures. Classical physicists gradually distanced themselves from these a priori views, assuming that knowledge of natural laws can only be achieved a posteriori, by studying their empirical consequences in experiments, observations, and measurements.


In turn, the experimental method relies on the idea that reality satisfies natural laws. An experiment is always performed at a certain place and time with specific instruments and well-chosen specimens, by a single experimenter (or a group of them), whose experimental skills, knowledge, and imagination are decisive. Nevertheless, the experimental results are declared to hold for all places, times, and comparable materials, independent of the personal properties of the researcher. Therefore, an experiment ought to be reproducible by other scientists, using different instruments and materials at various places and times. This is the critical function of the scientific community.




6.3. Variable and invariable properties




Kepler’s second law, also found a posteriori, contains another novelty. No doubt, medieval philosophers were interested in change, but their theories of change were almost never quantitative. Planets were supposed to move at a constant speed. Since antiquity, astronomers knew very well that planets have variable speeds. They applied various tricks to fit the observed facts to the Platonic idea of uniform circular motion. Kepler accepted changing velocities as a fact, connecting these to the planet’s varying distance from the sun as expressed in its elliptical path. He established a constant relation, his second law: as seen from the sun, a planet sweeps out equal areas in equal times.
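In modern notation (not Kepler’s own, which was geometrical) the area law can be written as a conserved quantity: the areal velocity of a planet equals its orbital angular momentum L divided by twice its mass m. A sketch in present-day symbols:

```latex
\frac{dA}{dt} \;=\; \frac{1}{2}\, r^{2}\, \frac{d\theta}{dt} \;=\; \frac{L}{2m} \;=\; \text{constant}
```

Here r and θ are the planet’s polar coordinates with the sun at the origin; the constancy of dA/dt is exactly the statement that equal areas are swept out in equal times.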


The area law is the first instance of a method that would become very successful in physical science, namely to relate change to a constant, a magnitude that remains invariant. Instances are the laws of conservation of energy; of linear and angular momentum; and of electric charge. At the end of the classical period, this method also led to the discovery of various natural constants, like the speed of light; the electron’s mass and charge; Avogadro’s number; Boltzmann’s and Planck’s constants. Both conservation laws and natural constants impose constraints on possible relations and changes.


Kepler’s third law (1619), which inspired Newton to formulate his law of gravity, says that the third power of the size of a planetary orbit is proportional to the square of its period of revolution. For Kepler this was a purely empirical relation, having no rational foundation, but he was convinced of its importance: ‘The thing which dawned on me twenty-five years ago before I had yet discovered the five regular bodies between the heavenly orbits …; which sixteen years ago I proclaimed as the ultimate aim of all research; which caused me to devote the best years of my life to astronomical studies, to join Tycho Brahe and to choose Prague as my residence – that I have, with the aid of God, who set my enthusiasm on fire and stirred in me an irrepressible desire, who kept my life and intelligence alert, and also provided me with the remaining necessities through the generosity of two Emperors and the Estates of my land, Upper Austria – that I have now, after discharging my astronomical duties ad satietatum, at last brought to light … Having perceived the first glimmer of dawn eighteen months ago, but only a few days ago the plain sun of a most wonderful vision – nothing shall now hold me back. Yes, I give myself up to holy raving: I have robbed the golden vessels of the Egyptians to make out of them a tabernacle for my God, far from the frontiers of Egypt. If you forgive me, I shall rejoice. If you are angry, I shall bear it. Behold, I have cast the dice, and I am writing a book either for my contemporaries, or for posterity. It is all the same to me. It may wait a hundred years for a reader, since God has also waited six thousand years for a witness …’[11]
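In modern units the third law says that T² is proportional to a³; with the period T in years and the semi-major axis a in astronomical units, the constant of proportionality for the sun’s planets is 1. The small sketch below checks this with rounded modern orbital values (illustrative figures, not Kepler’s own data):

```python
# Kepler's third law: the square of a planet's period is proportional
# to the cube of its mean distance from the sun. With T in years and
# a in astronomical units the constant of proportionality is 1.
# Orbital values below are rounded modern figures, for illustration only.
planets = {
    "Mercury": (0.387, 0.241),   # (a in AU, T in years)
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}

ratios = {name: T**2 / a**3 for name, (a, T) in planets.items()}
for name, ratio in ratios.items():
    print(f"{name:8s} T^2 / a^3 = {ratio:.3f}")
```

Each ratio comes out within about one percent of unity, which is the regularity Kepler announced in Harmonice mundi.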


Already as a theological student, Kepler was an independent but firm Lutheran; he was also a Pythagorean mystic. Like his juvenile work Mysterium cosmographicum (1597), his most mature work Harmonice mundi (1619) was devoted to finding numerical relations within the cosmos: in matter, in music, in geometry, and in theology.




6.4. Laws and causality




Natural laws often express a connection of cause and effect. Experimental philosophers considered causality to be an asymmetrical relation between two events, one being the cause, the other its effect. Long after the seventeenth century causality was still accepted without any problem, but this changed with the publications of David Hume.[12] He stated that a causal connection between two events cannot be proved and is possibly an illusion. He believed that people assume causality for psychological motives: the need of humans (and animals) to predict the effects of their behaviour, making decisions possible. Immanuel Kant tried to save the rationality of causality. He stated that causality, like space and time, is a necessary category of thought, because otherwise people could not order their sensory experience in a rational way. Kant’s followers, like Helmholtz, confused causality with law conformity, observing ‘… that the principle of causality is indeed nothing but the assumption of law conformity of all natural phenomena. The law recognized as an objective power we call force.’[13]


The nominalist philosopher Ernst Mach denied the lawfulness of cause and effect in nature. ‘Nature is given only once’ was Mach’s favourite expression,[14] and ‘equal effects in equal circumstances’ never occur. It is only a matter of economy to speak of cause and effect, deliberately neglecting the differences always occurring in actual cases. Referring to Hume and Kant, Mach stated that the idea of causality only arises from the attempt to reconstruct facts in thought, and to relate various events. The experience that such relations can be found leads to the idea that they are necessary. Mach ascribed this idea to the existence of voluntary motions, and to the changes people are able to produce in their environment.[15] But he admitted that he had no answer to the question of whether the instinctive experience of causality arises in individuals or is transferred by education.


Apparently Hume, Kant, and Mach did not understand that causality is the basic assumption of experimental practice, in which a scientist causes a change in a controlled environment and studies its effects. Without this presumption experimental science would make little sense.




6.5. Ernst Mach’s instrumentalist view of natural laws




Ever since Ernst Mach wrote his influential Die Mechanik, historisch-kritisch dargestellt (The science of mechanics, 1883), one of the first books on the history and philosophy of mechanics, economic parsimony has been an important theme in nominalist (in contrast to realist) philosophy of science.[16]


The economics of science is expressed by the increasing division of labour within the scientific community, and by the norm of parsimony for theories. In a theory no more axioms, propositions, and data should be used than are necessary for its purpose. This is called Ockham’s razor: after a solution of a problem has been found, erase as many special conditions as possible, in order to increase the strength of the solution, the explanation, or the prediction.


This is only one version of Ockham’s razor. The principle of parsimony in theories is erroneously ascribed to William of Ockham, for it is much older. Nor is parsimony always a virtue. Didactics is often served by repetition, by telling the same story in different words. In communication technology a minimum of redundancy is recommended, for if one restricts oneself to the absolutely necessary, a single mistake suffices to make the message incomprehensible. And in statistics, more data will increase the reliability of one’s results.


According to Mach, parsimony is the hallmark of science, the economical function of science, and ‘with its full recognition all mysticism in science disappears.’[17] ‘Science itself, therefore, may be regarded as a minimal problem, consisting of the completest possible presentment of facts with the least possible expenditure of thought.’[18]


Mach’s view that theories are characterized by the need to economize our experience is at variance with the fact that a theory is more than a descriptive set of statements. Theories are not intended to give merely a description of the world, but to predict, to explain, to solve problems, and to systematize our knowledge. This means that theories transcend description.


According to Mach, laws of nature are nothing but economic summaries of sensory experience. As an example he pointed to Willebrord Snel’s law of refraction. An infinite table relating all possible values of the angles of incidence and refraction is exhaustively replaced by the simple formula sin α/sin β = n, the constant index of refraction. Mach comments: ‘The economical purpose is here unmistakable. In nature there is no law of refraction, only different cases of refraction. The law of refraction is a concise compendious rule, devised by us for the mental reconstruction of a fact, and only for its reconstruction in part, that is, on its geometrical side.’[19]
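The economy Mach describes is easy to illustrate: instead of a table of paired angles, the single constant n generates the refracted angle for any angle of incidence. A minimal sketch (the index n = 1.5, a typical value for glass, is chosen here only for illustration):

```python
import math

def refracted_angle(alpha_deg, n):
    """Snel's law, sin(alpha)/sin(beta) = n: return the angle of
    refraction beta (in degrees) for an angle of incidence alpha
    (in degrees), for light entering a medium of refractive index n."""
    sin_beta = math.sin(math.radians(alpha_deg)) / n
    return math.degrees(math.asin(sin_beta))

# One constant n replaces Mach's 'infinite table' of angle pairs.
for alpha in (10.0, 30.0, 60.0):
    beta = refracted_angle(alpha, n=1.5)
    print(f"alpha = {alpha:4.1f} deg  ->  beta = {beta:5.2f} deg")
```

The function compresses infinitely many possible observations into one rule, which is precisely the point of Mach’s example.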


Yet Mach did not overlook that Snel’s law transcends human experience in various ways. First, it is assumed to be valid for all kinds of pairs of homogeneous transparent materials, whether investigated or not. Next, it is supposed to be valid at all times and places. Third, it is supposed to be valid for any angle of incidence between zero and ninety degrees. The number of possible angles is infinite, but even our collective experience of these angles is finite. Finally, the law takes the angle or its sine to be a real variable in a mathematical sense, whereas in experiments the measured angles or their sines only have rational values. Mach accepted the transcendent character of law statements for the sake of economy, provided it was restricted to extrapolation and interpolation into domains inaccessible to direct experience. ‘The function of science, as we take it, is to replace experience. Thus, on the one hand, science must remain in the province of experience, but, on the other, must hasten beyond it, constantly expecting confirmation, constantly expecting the reverse. Where neither confirmation nor refutation is possible, science is not concerned. Science acts and only acts in the domain of uncompleted experience.’[20]


Mach considered the use of mathematics in the natural sciences to be an exclusively economic affair, too. Even the simplest operations of arithmetic have an economic sense. This is even more the case with the use of symbols like x and y in algebra.[21]


Mach’s nominalist views were very influential during the first half of the twentieth century, but later on realism returned. Critical realists believe it more reasonable to assume that the world has a mathematical structure independent of human thought, without denying that mathematical concepts, theorems, and theories are human-made, no less than those of the natural sciences.[22] In order to arrive at this insight, they distinguished between natural laws and their formulation in statements.[23]


Whereas medieval realists were concerned with the reality of universals, i.e., concepts or ideas with a general meaning, modern realists emphasize the objective existence of laws as a metaphysical principle.[24]


This is not the same as the religious view of the reformers and many seventeenth-century scientists, who not only accepted the real existence of laws, but also recognized their divine origin. For them this was neither a hypothesis to be tested, nor a matter of metaphysics, but a matter of religious belief.




[1] Torretti 1999, 405-407.

[2] Barrow 1988, 59.

[3] Clay 1915, 42.

[4] Galileo 1615, 182.

[5] Descartes 1637, 42; Westfall 1985, 233.

[6] Leibniz 1686, 156-160.

[7] Newton 1687, XXXII.

[8] Copernicus 1543, 25 (Preface); 51 (I, 10).

[9] Kepler 1609, 24, 34 (Introduction), 247 (chapter 40), 267 (chapter 44), 345 (chapter 58).

[10] Kant 1786, 5.

[11] Kepler 1619, 279-280 (Preface of book V); Koestler 1959, 399; Koyré 1961, 343, 457.

[12] Hume 1739, 1748.

[13] Helmholtz 1847, 53; Harman 1982, 118-122.

[14] Mach 1883, 459.

[15] Mach 1883, 460-461.

[16] Mach 1883, 577-595; Cohen, Seeger (eds.) 1970; Bradley 1971; Blackmore 1972; Cohen 1994, 39-45.

[17] Mach 1883, 457.

[18] Mach 1883, 464-465.

[19] Mach 1883, 461.

[20] Mach 1883, 465.

[21] Mach 1883, 462.

[22] Wigner 1960.

[23] Bunge 1967, I, 245.

[24] Braithwaite 1953, 2; Bunge 1967, I, 245, 345; Popper 1972, 191.




Chapter 7


Knowledge of natural laws



7.1. Formulating natural laws


After having discussed the ontology of natural laws in chapter 6, we now turn to their epistemology: human knowledge of these laws. The polar opposition of nature and freedom expresses itself here too. On the one hand enlightened philosophers claimed the human freedom to formulate natural laws; on the other hand they knew themselves to be bound by nature.

Both Aristotelians and mechanists, from Galileo Galilei and René Descartes to Immanuel Kant, supposed that axioms in physical theories should be evidently and necessarily true, and should express the essence of their subject. The knowledge of laws rests on immediate, intuitive insight. They did not need to make a distinction between natural laws and their formulation in law statements. Yet these rationalists differed about the contents of their first principles.

Experimental philosophers had to distinguish between ontological natural laws (laws of nature) governing nature and their epistemological formulation in law statements, propositions about natural laws, inspired by observations, measurements, and experiments. Law statements are not necessarily either true or false; they can also be approximately true. In geometrical or ray optics, for instance, one axiom states that in a homogeneous medium light propagates rectilinearly. It is remarkable that Newton did not mention this as one of his axioms, although he applied it in his definition of ‘Rays of light’[1], and used it as an argument against the wave theory. From Francesco Grimaldi’s experiments on the diffraction of light at the edge of a body Newton knew it to be only approximately true,[2] and he accepted that geometrical optics cannot solve all problems in optics. But a sufficiently large number of problems can be tackled by assuming that light propagates rectilinearly,[3] which therefore leads to a satisfactory and useful, albeit approximative, theory. Newton also made clear that Galileo’s law of fall and Kepler’s laws of planetary motion are only approximately true.

The idea of natural law did not arise in medieval science, because it was entirely focussed on the logical analysis of ancient texts and their commentaries. With a few exceptions, the investigation of nature with the aim of finding regularities was foreign to medieval scholars. In the thirteenth century, only Roger Bacon used the expression lex or regula to describe regularity in nature, instead of divine decrees.[4] The aim of medieval science was to establish the essence or nature of things, plants, and animals; how they come into existence, change naturally, and eventually perish; as well as their position in the cosmic order and their practical use for humanity.

Simultaneously with the increasing emphasis on natural laws, the use of ‘essence’ in scientific language disappeared. Galileo Galilei criticized essentialism as a play of words. When in his Dialogue (1632) the Aristotelian Simplicio says that the cause of fall is known to be gravity, Galileo’s mouthpiece Salviati replies: ‘You are wrong, Simplicio; what you ought to say is that everyone knows that it is called “gravity”’.[5] However, René Descartes still maintained that the essence of matter is its extension.

Isaac Newton researched gravity without defining its essence. In Aristotelian philosophy all substances (things, plants, animals, and human beings) had the potential to realize themselves. Hence, a substance had a measure of independence over against God.[6] This view collided with Newton’s view that all things are absolutely dependent on God’s sovereign creation and support. Like the mechanists he assumed matter to be completely passive, subject to God’s laws (4.3). Therefore, Newton rejected the insinuation that he ascribed an active principle of gravitation to material things. In 1693 he wrote to his theological friend, Richard Bentley:  ‘You sometimes speak of gravity as essential and inherent to matter. Pray do not ascribe that notion to me, for the cause of gravity is what I do not pretend to know and therefore would take more time to consider of it’.[7]

In Newtonian thought essence was gradually replaced by universality and lawfulness.[8] Newton did not want to know what gravity essentially is, but which laws it satisfies. Gravity is not essential, but universal. Universality as being valid independent of place and time is a hallmark of a law of nature.

Two historically important views of the epistemology of natural laws have failed. The first, represented by René Descartes and other rationalist deductivists, held that law statements must be reducible to clear and distinct evident ideas, in order to achieve a rational and necessary character. It failed when Newton recognized the law of gravity to have a contingent character. His rejection of rationalism has been reinforced by nineteenth- and twentieth-century developments in the natural sciences. This means that natural laws transcend rational thought.

The other failing view is that of the inductivists like Francis Bacon, assuming that law statements are nothing but generalizations of observations. Because natural laws are supposed to be valid everywhere and always, this view is untenable: natural laws transcend human experience.

Law statements are invented by people, and spring from their imagination in a process including both rationality and active experience, by instrumental observation, measurement, and experiment. But the laws they refer to even transcend this imagination. Usually the far-reaching consequences of newly discovered laws cannot be foreseen, and are much richer than anybody could have predicted.

This threefold transcendent character of laws is radically different from Platonist transcendental idealism, in which observable things are imperfect copies of the real and perfect ideas. In Aristotelian realism, an observable thing is a unity of form and matter, imperfect as long as it has not actualized all its potentialities. Descartes, too, related the laws to perfect, clear ideas. He contrasted God as a perfect being with man, who is imperfect because of his doubt. But in the modern view of law, the idea of perfection hardly plays a part. The observable things are not copies of laws, but are subject to laws.

Scientists still speak of ideal things, either in a conceptual sense (a rigid lever, an ideal or even perfect gas), or in an experimental sense (a pure sample, a thermostat). The point of these objects of research is not to obtain a perfect sample, but rather a sample which is simpler than anything found in nature. It is easier to do calculations on a pure ideal gas than on an impure mixture of oxygen and nitrogen. It is easier to do experiments in an enclosed room kept at a constant temperature than in an uncontrollable open space. The idealization used in present-day science is intended to eliminate disturbing circumstances, in order to generate solvable problems.


7.2. Scientific knowledge of natural laws


A realistic view of natural laws implies not only their existence, but also their knowability. It is important to distinguish the laws which govern nature, independent of mankind, from the laws as formulated by scientists. The former may be called natural laws or laws of nature, and the latter law statements, but both are carelessly called laws.[9] Newton’s law of gravity is a law statement, whereas the law of gravity is a natural law ruling the planetary motions. The first was formulated by Newton and dates from the seventeenth century; the latter was discovered by him, but dates from the beginning of the creation.

It makes sense to say that a law statement is true, or approximately true, or false, but it makes no sense to call a law of nature true. Instead, a law of nature is valid or holds for a specified range, which implies a relation to its subject matter. A law statement is true (or approximately true) if it is a reliable expression of the corresponding natural law. Nominalists would say that a law statement is true if it is confirmed by observable facts. Realists would call this a criterion for the truth of a law statement. Until the beginning of the twentieth century, Isaac Newton’s law statement of gravity was considered to be true, but since the acceptance of Albert Einstein’s general theory of relativity, it is considered approximately true. The Newtonian expression is sufficient to solve many problems, and is often preferred because of its relative simplicity. For a similar reason one may prefer Galileo’s law of fall, which Newton showed to be an approximation of his own statement of the law of gravity.
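The relation between the two law statements can be made quantitative in a small sketch. Galileo’s law of fall treats the acceleration g as a constant, while Newton’s law of gravity makes it decrease with distance from the earth’s centre; near the surface the difference is tiny. The rounded modern values for the earth’s gravitational parameter and radius below are assumptions for illustration, not figures from the text:

```python
# Newton's law of gravity: g(h) = GM / (R + h)^2 at height h above
# the surface. Galileo's law of fall takes g as a constant.
GM = 3.986e14   # earth's gravitational parameter, m^3/s^2 (rounded)
R = 6.371e6     # mean radius of the earth, m (rounded)

def g_newton(h):
    """Gravitational acceleration (m/s^2) at height h (m) above the surface."""
    return GM / (R + h) ** 2

g0 = g_newton(0.0)  # Galileo's constant g, about 9.82 m/s^2
for h in (100.0, 1000.0, 10000.0):
    rel = (g0 - g_newton(h)) / g0
    print(f"h = {h:7.0f} m   relative deviation from constant g: {rel:.2e}")
```

Even ten kilometres up, the deviation is a few parts in a thousand, which is why the simpler Galilean statement remains a perfectly serviceable approximation.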

According to some philosophers, both Galileo’s and Newton’s statements are falsified by Albert Einstein’s theories of relativity, and have therefore become useless, but scientists share a more liberal opinion. Indeed, in the context of a theory, being a collection of deductively connected statements, only statements which are asserted to be true can be admitted. These are axioms, propositions borrowed from other theories, and facts, besides the propositions whose truth is to be proved. This is a consequence of the logical law of excluded contradiction. If a theory contained a statement asserted to be false, the theory could prove any other statement to be true as well as false,[10] making the theory obviously useless as an instrument to distinguish between true and false propositions: it proves too much. But the users of a theory have a wide choice of axioms, facts, and so on. For instance, when studying a falling body, for the law of gravity they are free to choose between Einstein’s, Newton’s, or Galileo’s law statements. They may apply algebra or calculus. They may assume friction to occur or not. For the deductive process the statements within a theory are not allowed to be mutually contradictory, but they may very well contradict statements that are not used in the theory. That makes it possible to use law statements and idealizations which are known to be false, or only approximately true.[11] In the practice of science such ‘counterfactuals’ are as indispensable as in common parlance.

The question of when a statement has the status of a law statement is not easy to answer.[12] There is no comprehensive concept of natural law; it cannot be subsumed under more general concepts. However, physicists have an approximate idea of law, including the assumption that natural laws found by means of theories, observations, and experiments are universally valid. Natural laws are supposed to be valid everywhere and always; for everybody, irrespective of race, prosperity, political, or religious conviction; whether people accept or reject them; whether they are understood or not. Sometimes the range of a law is restricted, as in the case of the laws for the structure and evolution of stars, which are not valid for plants and animals, but that does not make these laws less universal.

This principle of uniformity led the Copernicans to the rejection of any fundamental distinction between terrestrial and celestial physics. After Johann Kepler found his laws for the motion of Mars, he applied these to the other planets as well. Isaac Newton argued that gravity is a universal phenomenon. Results found in laboratory experiments on optical spectra were applied to the sun and the stars, and vice versa.

The theory of relativity states that natural laws are independent of place, time, and motion, with respect to any inertial frame of reference. This general criterion for a law statement is a consequence of the mutual irreducibility of quantitative, spatial, kinetic, and physical relations, providing a restriction on the formulation of physical law statements.

In a theory, a law statement is not only intended to describe a certain state of affairs. It should also enable one to make predictions and explanations. Therefore, it must allow of counterfactuals; it must be able to function in a hypothetical situation that is actually not the case.[13] A disposition, such as ‘glass is breakable’, applies to any glass even if it is not broken. Newton’s first law of motion, the law of inertia, is counterfactual, because bodies on which no forces act do not exist. Its validity can be established only if this law is applied in combination with, for instance, the law that forces can balance each other. This makes the statement ‘If no net force is exerted on a body, it does not accelerate’ a testable consequence of Newton’s first and second laws of motion. The law of conservation of energy, stated as ‘the energy of a closed system is constant’, is counterfactual, because closed systems do not exist, but it has important consequences if applied to real systems. Hence, there is some truth in Nancy Cartwright’s proclamation that the laws of physics lie. ‘Really powerful explanatory laws of the sort found in theoretical physics do not state the truth’.[14]

Indeed, the fundamental laws are very distant from concrete reality. Presenting a suitable description, prediction, or explanation of a complex phenomenon often requires a complicated reasoning, as Cartwright illustrates with several examples. Yet the conclusion that the fundamental laws are not true is too hasty, for Cartwright does not prove that these laws are superfluous, or that they contradict the phenomena they aim to describe, predict, or explain.

Finally, a proposition stating a law of nature is only accepted if it is connected to other law statements. The law of Johann Titius and Johann Bode should not be called a law statement.[15] It concerns a regularity in the distances of the planets to the sun, but (apart from the fact that the stated regularity is not very convincing) nobody has ever been able to connect it to other laws.

John Carroll rightly observes: ‘… if there were no laws, there would be little else…’, no counterfactuals, no dispositions, no causality, no chance, no explanations, no properties.[16] ‘Nearly all our ordinary concepts … are conceptually intertwined with lawhood.’[17]

In ordinary language, a law is seldom distinguished from its subject matter, but in science this distinction is prominently present.[18] It is a characteristic of science to take reality apart, of which the distinction of a law and its subject is the first instance. But even in science, a law and its subjects cannot be separated: natural laws are in re, within reality, according to a medieval expression. Knowledge about laws of nature can only be achieved by studying their subjects and objects, in experiments or observations.

If the laws of nature had existence apart from their subject matter (ante rem), scientific knowledge of laws could be independent of empirical research. Such was the opinion of the neo-Platonists, who assumed that true knowledge of the laws of nature can be achieved on the basis of intuition and thought, or that knowledge of natural laws is inborn and can be recollected by anamnesis. Generally, present-day scientists do not share this view, although some theoretical physicists expect that in the near future natural laws can be founded exclusively on logical and mathematical ‘first principles’, such as symmetries.


7.3. Induction and deduction as complementary heuristics


Classical physicists aimed at formulating axioms and theorems expressing laws of nature, preferably in a mathematical form. But how did they find natural laws? What were their heuristics? Originally, heuristics is the art of solving problems, but its meaning may be expanded to the art of scientific discovery, the method of finding laws of nature. According to Aristotle, universal statements spring from experience, and derive their validity from theoretical thought. Their self-evident truth is grasped intuitively. Similarly, mechanical philosophers required the most general laws to be deducible from clear and distinct ideas, from mechanical first principles about matter and motion.[19]

Until well into the twentieth century heuristics was hardly considered a subject for the philosophy of science. The logical-empiricists distinguished sharply between the context of discovery and the context of justification. Only the latter belonged to philosophy.[20] Hypothetical-deductivists like Carl Hempel and Karl Popper stated that scientists put forward hypotheses (‘bold conjectures’), derive their logical consequences, and check these against observations. The context of discovery was the concern of historians and psychologists, not amenable to logical analysis. Only since about 1960 did authors like Norwood Hanson, Thomas Kuhn, Paul Feyerabend, Imre Lakatos, and Larry Laudan start to pay attention to the way scientists find their theories.[21]

Francis Bacon was much concerned with heuristics. He saw an analogy between the procedures of natural science and the practice of justice.[22] In order to arrive at a fair judgement, lawyers have to collect the relevant facts, to discover the truth, and to know and recognize the pertinent laws. Bacon devised procedures to eliminate irrelevant facts. Rejecting Aristotle’s enumerative induction, he is known as an eliminative inductivist.[23] Bacon sought the source of all knowledge in observation and experiment as applied in alchemy and the practice of artisans (1.2). He introduced the method of instantia crucis, later called experimentum crucis, a crucial experiment devised to decide between competing hypotheses.

In Opticks, Query 31, Isaac Newton stressed the method of finding empirical generalizations by induction: ‘As in Mathematicks, so in Natural Philosophy, the Investigation of difficult Things by the Method of Analysis, ought ever to precede the Method of Composition. This Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions, but such as are taken from Experiments, or other certain Truths. For Hypotheses are not to be regarded in Experimental Philosophy. And although the arguing from Experiments and Observations by Induction be no Demonstration of general Conclusions; yet it is the best way of arguing which the Nature of Things admits of, and may be looked upon as so much the stronger, by how much the Induction is more general. But if at any time afterwards any Exception shall occur from Experiments, it may then begin to be pronounced with such Exceptions as occur. By this way of Analysis we may proceed from Compounds to Ingredients, and from Motions to the Forces producing them; and in general, from Effects to their Causes, and from particular Causes to more general ones, till the Argument end in the most general. This is the Method of Analysis: And the Synthesis consists in assuming the Causes discover’d, and establish’d as Principles, and by them explaining the Phaenomena proceeding from them, and proving the Explanations.’[24]

Hence, whereas he stressed that induction is ‘no Demonstration of general Conclusions’, Newton stated that one should hold to its results as long as no exceptions are found. By using theories, experimental philosophers applied deductive methods, whereas experiments and instrumental observation produced information by induction. Implicitly they rejected the exclusiveness of both methods, using induction and deduction alternately, as opposite but complementary means of increasing their knowledge of laws. Moreover, they developed several other powerful methods of discovering laws: isolation (5.1); mathematization (7.4); successive approximation (7.5); analogy (8.7); and the application of instruments in observation, measurement and experiment.

Newton’s opponents such as Christiaan Huygens and Gottfried Leibniz did not fail to observe that he could not argue his views on inertia, absolute space, time, and motion, from induction, but their rationalist alternatives did not fare any better.

Induction, understood as the generalization of a limited number of factual statements, is not deductive, and is neither theoretically nor inductively justifiable. Karl Popper was very critical of induction.[25] He even maintained that inductive procedures in science do not exist, overlooking statistical analysis, and advocated instead the hypothetical-deductive method of ‘conjectures and refutations’. However, in every physical situation many hypotheses are proposed without any warrant of finding a convincing result.[26] Nevertheless, the inductive method serves as a heuristic tool in the search for laws of nature. Laws are hidden: they cannot be observed, but are instantiated in observable phenomena, in particular those obtained with the help of instrumental observations, experiments, and measurements. In an empirical way laws can be found by actively studying phenomena – things and events displaying some pattern that can be generalized. Physicists apply induction as a scientific heuristic, since the nineteenth century assisted by statistical methods as first developed in astronomy. Induction is based on the recognition of a pattern, founded on similarities, combined with previous experience of similar situations.[27] Of course, this method is as fallible as any other method of finding laws of nature.

Karl Popper’s philosophical assumption that laws such as those of Johannes Kepler were ‘bold conjectures’, solely the product of his imagination, is at variance with the historical evidence. In Astronomia nova (1609), Kepler describes how he wrestled with Tycho Brahe’s data in order to bring them into accord with the theories of Ptolemaeus, of Nicholas Copernicus, and of Tycho Brahe himself, all three dependent on the Platonic dogma of uniform circular motion. Only after several years of hard labour did Kepler abandon these attempts,[28] and only after extensive calculations did he recognize the pattern of the non-uniform elliptic motion, including the area law.[29] In contrast, Kepler’s earlier model of planetary motion, published in Mysterium cosmographicum (1597),[30] relating the dimensions of the solar system to those of the five regular polyhedra, can be considered a not very successful bold conjecture.

When the first draft of Principia reached London, Robert Hooke learned that Newton applied the inverse-square law to problems concerning planetary motion. Previously, Hooke had conjectured that the celestial bodies attract each other according to an inverse-square law, and he demanded Newton’s recognition of his priority. Newton rejected this indignantly:  ‘Now is this not very fine? Mathematicians that find out, settle & do all the business must content themselves with being nothing but dry calculators & drudges & another that does nothing but pretend & grasp at all things must carry away all the invention as well of those that were to follow him as of those that went before.’[31]

According to Newton there was no merit in conjecturing a law statement. His own merit was to derive the law of gravity from the phenomena, by mathematical analysis, and to demonstrate its consequences for the solar system. Hooke was unable to do anything of the kind, according to Newton.[32] Let us see how he achieved his aim.


7.4. Rules of reasoning in Newton’s heuristics


Philosophiae naturalis principia mathematica is a prominent example of the fruitfulness of mathematics for finding physical laws. After an introduction of 28 pages (in the English translation), with operational definitions of mass, momentum, and various kinds of force, and a discussion of the three ‘axioms or laws of motion’ as well as the metrics of time and space (chapter 4), Principia consists of three books. The first and second books are mostly mathematical treatises, the first concerned with motion in a vacuum, the second with motion in a material medium. Newton’s exposition of his method, Rules of reasoning in philosophy, precedes the third book, which treats the ‘system of the world’, the solar system. Principia concludes with a ‘general scholium’, an explanatory comment.

In the first book, Newton carefully distinguished the mathematical principle of vis impressa (external force) from its physical meaning. Mathematically he derived how large a force must be, and that it should be directed to a central point, if under its influence a body is to move in an elliptic path. But it is a physical matter to decide in any particular case whether this centripetal force is gravitational, elastic, electric, or magnetic, and what the nature of these forces is. The physical aspect of the gravitational force was only considered in Principia’s third book. In the second book, Newton criticized René Descartes’ theory of vortices by showing mathematically that it contradicted Johannes Kepler’s laws of planetary motion (a demonstration that Gottfried Leibniz soon disputed).[33]

The following ‘rational reconstruction’[34] of how Newton found the law of gravity is based on his Regulae philosophandi, rules of reasoning in philosophy, which found their definitive form in the third edition.[35] The first edition of Principia (1687) had only two rules, still called hypotheses. The second edition (1713) called these rules, and added a third one. The fourth rule appeared in the third edition (1726), translated into English by Andrew Motte in 1729, and therefore the best known.

The first rule is: ‘We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.’ For this reason, Newton assumed that only one force operative in the solar system is sufficient to explain the curved orbits and changing velocities of the planets, satellites and comets. Newton also assumed that the same law must be valid for the solar system and for the planets having satellites, according to the second rule: ‘Therefore to the same natural effect we must, as far as possible, assign the same causes.’

Kepler’s second law, the area law, which Newton had derived mathematically from his laws of motion, proved that the force must be centripetal, i.e., directed to a fixed point. Newton observed that this ‘fixed’ point may move uniformly and even with acceleration, if there is an external force.[36] He needed this in order to apply his theory to the system of earth and moon, moving with acceleration around the sun, and to the satellites circling Jupiter and Saturn. Several orbital shapes are consistent with the area law, among them circular orbits. In that case, Kepler’s second law says that the orbital speed is constant. For uniform circular motion Christiaan Huygens had determined the magnitude of the centripetal acceleration as a function of the radius and the speed.

Next, Newton considered a number of mass points moving in hypothetical circular homocentric orbits with different radii and periods. He applied Kepler’s third law and Huygens’ formula for centripetal acceleration in order to show that the acceleration is inversely proportional to the square of the radius.[37] This straightforward part of Newton’s derivation had also been found by Christopher Wren, Robert Hooke, Edmond Halley, and possibly several other scientists.[38]
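In modern notation (the symbols are not Newton’s own), this step can be sketched as follows:

```latex
% Huygens: centripetal acceleration for uniform circular motion,
% with orbital speed v, radius r, and period T:
a = \frac{v^2}{r}, \qquad v = \frac{2\pi r}{T}
\;\Longrightarrow\; a = \frac{4\pi^2 r}{T^2}.
% Kepler's third law: T^2 = k\,r^3 for some constant k, hence
a = \frac{4\pi^2 r}{k\,r^3} = \frac{4\pi^2}{k}\,\frac{1}{r^2}
\;\propto\; \frac{1}{r^2}.
```

Nothing more than Huygens’ formula and Kepler’s third law is needed for this part of the argument, which is why Wren, Hooke and Halley could find it independently.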

According to Newton’s second law of motion, the force by which each hypothetical mass point is drawn to the centre is therefore proportional to its mass and inversely proportional to the square of the distance. Next, Newton states his third rule of reasoning: ‘The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.’

This is really new, because ‘… we must, in consequence of this rule universally allow that all bodies whatsoever are endowed with a principle of mutual gravitation.’

Combined with the third law of motion, this means that if the hypothetical mass point (say, a planet) is attracted to the centre (say, the sun) by a force proportional to the planet’s mass, then the sun is attracted towards the planet with an equal force. For symmetry reasons, this is proportional to the sun’s mass. Hence, the force between the sun and the planet is proportional to both the mass of the sun and the mass of the planet, and is inversely proportional to the square of their mutual distance. This symmetry argument in the derivation of the law of gravity is completely due to Newton. Each piece of matter attracts any other one by the force of gravity, which according to Newton’s third law is a reciprocal relation. In the investigation of other kinds of interactions (electric, magnetic), this became a powerful heuristic.
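The symmetry argument can be summarized in modern notation (the gravitational constant G is a later convention, not Newton’s):

```latex
% Force on the planet (mass m_p) toward the sun: F \propto m_p / r^2.
% By the third law of motion the sun (mass m_s) is attracted by an
% equal and opposite force, which by symmetry is \propto m_s / r^2.
% Hence the mutual force satisfies
F \;\propto\; \frac{m_s\, m_p}{r^2},
\qquad\text{later written as}\qquad
F = G\,\frac{m_s\, m_p}{r^2}.
```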

Finally, Newton generalized this law, found for the ideal case of uniform circular motions, to all kinds of motion influenced by gravitational interaction: elliptical non-uniform orbits; projectile motion; free fall; and pendulum motion. This is in accord with the fourth rule of reasoning: ‘In experimental philosophy we are to look upon propositions inferred by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined till such time as other phenomena occur, by which they will either be made more accurate, or liable to exceptions.’

Newton assumed that the force responsible for the motion of the planets is the universal force of gravity. He rightly called this generalization induction, although it was performed at a higher level than that of an empirical generalization, because it is based on the belief that natural laws have universal validity, on earth and in the heavens.

Small wonder that Principia made a deep impression, even on people who rejected its principles. Both the Principia and the Opticks rejected René Descartes’ mechanicism as explained in his most mature work on physics, Principia philosophiae (1644), and its extended translation, Les principes de la philosophie (1647). The Enlightenment preferred Newton’s Regulae philosophandi to Descartes’ Discourse on method.[39]


7.5. Successive approximation


Proposing a synthesis between Karl Popper’s falsificationism and Thomas Kuhn’s historicism, Imre Lakatos developed his methodology of scientific research programmes, in which successive approximation is the central theme.[40] The third book of Principia applies this method to the solution of problems. It is a research program in which a model is a simplified representation of a material structure, and in which several models of increasing complexity succeed each other step by step. On the one hand each model should be simple enough to make the solution of problems possible; on the other hand it must be complicated enough to give rise to a new model. The program starts with a theory and consists of a series of ever more detailed models and intelligent experiments. Each model is a set of initial conditions.[41]

The models in the program have a number of suppositions in common, forming a ‘hard core’, Lakatos’ sophisticated variant of Kuhn’s paradigm. Whereas Kuhn suggests that in every field of mature science only one paradigm can be operative, Lakatos produced historical evidence to demonstrate that usually two or more competing research programs operate simultaneously in the same field. By specifying the models successively, the program aims to approximate reality. Often it is known beforehand how the models must be adapted: when Newton investigated the model of a planet as a point mass, he already intended to replace it by the model of a spherical planet. Each model is an idealisation: of course, the earth is not a perfect sphere. To begin with, the function of a model is not to present a picture of reality, but to formulate solvable problems. The program starts with a relatively simple problem, to reconnoitre the difficulties and possibilities of the theory and to master them. By investigating the models step by step, in ever more detail, one hopes to find a way to attack the real problem of the material structure.

The hard core of Newton’s research program consists of the laws of motion and the law of gravity, together with a number of theorems derived in Principia’s book I. Newton’s first model studied the motion of a pointlike planet moving around a stationary sun. This model satisfies Kepler’s first two laws, showing that the series of successive models is on the right path. Because this model contradicts Newton’s third law of motion (of action and reaction), in the next model the sun is no longer stationary, but moves together with the planet about their common centre of gravity. In a following model the planet is a flattened sphere rotating about its axis, for which Newton proved that a body at the equator has less weight than at the poles. Next, mutually attracting planets are introduced, as well as their satellites. This model approximately satisfies Kepler’s third law and describes the action of the tides.

The idealised data should not deviate too much from accepted facts. Newton could treat the planets as pointlike in their motion around the sun, but he could not maintain this when he calculated the motion of a falling body near the surface of a planet. He had to make sure that the inverse square law is also valid for the model of a large homogeneous sphere. For moderate heights, when the force of gravity is more or less constant, this yields Galileo’s law of free fall.
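In modern notation (again not Newton’s own), the step from the sphere theorem to Galileo’s law runs as follows:

```latex
% A large homogeneous sphere of mass M and radius R attracts an
% external body as if all its mass were concentrated at the centre,
% so at height h above the surface the acceleration of free fall is
g(h) = \frac{GM}{(R+h)^2} \;\approx\; \frac{GM}{R^2}
\qquad (h \ll R),
% i.e. approximately constant, which yields Galileo's law of free
% fall for the distance s fallen in time t:
s = \tfrac{1}{2}\, g\, t^2 .
```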

Successive approximation applied to the investigation of various material systems, such as the solar system, stars and galaxies, atoms and molecules, solids, atomic nuclei, and sub-nuclear particles, has turned out to be a very successful heuristic. It is an example of progress in science.


7.6. The myth of linear progress


Philosophical views about the progress of science were often frustrated by the implicit assumption that the development of science is a linear process, as suggested by the success of the method of successive approximation. Since Francis Bacon’s New organon (1620), the Enlightenment considered scientific progress to be an unavoidable, continuous increase of knowledge, an accumulation of theoretical insights and established facts. However, the development of science usually happens by trial and error, and it sometimes stagnates. According to Kuhn, linear progress only takes place as long as scientists are able to solve their problems within the framework of an accepted paradigm. If that is no longer possible, a crisis occurs, after which the paradigm is replaced; of such a replacement one can no longer speak of progress.

Rather, the development of science is (like the exploration of a new country) a process in various directions, each having its own heuristic. The methods of successive approximation and of abstraction are complementary, but proceed in opposite directions: the first towards increasing specification, the second towards increasing generality. The mathematical and instrumental heuristics may also be considered opposites, because in principle (though not always in practice) instrumentation depends on physics, and physics depends on mathematics. Finally, deduction and induction may be considered opposite means of developing our knowledge of laws. Each of these heuristics determines a specific kind of research program, often if not always determined by incompatible world views. A one-sided emphasis on certain principles of explanation may lead to stagnation, but also to an increased effort to deepen the principle concerned. This means that rival research programs may alternately be progressive and degenerative in Imre Lakatos’ sense, because some problems can better be solved starting from one principle than from another, or with the help of one heuristic rather than another.

The choice in favour of one research program, and the consequent rejection of its rival, is not always objectively possible. A scientist should feel free to use all available principles of explanation, and any method he thinks fit for solving his problems, aware that no single method is sufficient to solve all problems.

There is more to be said about scientific method than a discussion pro or contra induction. The views of the logical empiricists on induction and on the relation of logic, theory, and observation appear rather poor, just like Karl Popper’s method of trial and error, of conjectures and refutations.[42] Paul Feyerabend observed that scientists hardly ever work according to the views of the logical empiricists, of Popper, or of Lakatos. On the contrary, scientists use any means to achieve their goal. Feyerabend points to a pluralism of methods, even speaking of anarchy.[43]

Indeed, scientists have a great diversity of methods at their disposal. Analysis and synthesis are complementary, just like the mathematical and the technical or instrumental opening up of a field of science. In the reconstruction, discussed above, of Newton’s derivation of the law of gravity and its application to the solar system and to free fall, it is easy to recognize more than one method at work. Competent scientists master all methods of their discipline, and to tackle a problem they choose the methods which suit them best.[44] This is not anarchy, but freedom of choice.

[1] Newton 1704, 1.

[2] Newton 1704, book III, part I.

[3] Newton 1704, books I and II.

[4] Barrow 1988, 58.

[5] Galilei 1632, 234.

[6] See Barrow 1988, 58.

[7] I. Newton, ‘Letter to Mr. Bentley’, in: Thayer (ed.) 1953, 53-54; Jammer 1957, 139; McMullin 1978, 57-59.

[8] Newton 1704, 401; McMullin 1978, 8-9.

[9] Hempel 1965, 265. Swartz 1985, 4, 11.

[10] Popper 1959, 317-322.

[11] Swartz 1985, chapter 1.

[12] Nagel 1961, 48; Hempel 1965, 264-278, 291-293, 335-347; Van Fraassen 1989, 25-38.

[13] Nagel 1961, 51; Hempel 1965, 339; Swartz 1985, 68, chapter 8; Carroll 1994, 4.

[14] Cartwright 1983, 3. Swartz 1985, chapter 1.

[15] Nieto 1972; Barrow, Tipler 1986, 220-222.

[16] Carroll 1994, 3, 6-10.

[17] Carroll 1994, 9-10.

[18] Carroll 1994, 3.

[19] Descartes 1637, 21, 29; 1647, 16.

[20] Reichenbach 1938, 6-7; 1951, 231; Popper 1959, 31.

[21] Hanson 1958; Kuhn 1962; Feyerabend 1975; Lakatos 1978; Laudan 1977.

[22] Gaukroger 2001, 57-67.

[23] Bacon 1620, I, CV.

[24] Newton 1704, 404.

[25] Popper 1959, 27-30; 1972, chapter 1; 1983, 11-158.

[26] Van Fraassen 1989, 146; Smith 2002, 154.

[27] Bunge 1967, I, 314-323; II, 290-294; Finocchiaro 1980, 293-297.

[28] Kepler 1609, 5-12 (Dedication); see Koyré 1961, 277-278.

[29] Hanson 1958, 72ff; Simon 1977, 41-43.

[30] Kepler 1597.

[31] Westfall 1980, 448.

[32] Cohen 1974, 312-313; Westfall 1980, 446-452.

[33] Newton 1687, 395-396 calls Kepler’s first and second law ‘the Copernican hypothesis’, see Koyré 1965, 101-103.

[34] Newton 1687, 406-422 tells his own story in ‘Phenomena’ and ‘Propositions’ in part III of Principia, see Glymour 1980, 203-226; Harper 2002.

[35] Newton 1687, 398-400.

[36] Newton 1687, 40-45.

[37] Newton 1687, 45-46.

[38] Newton 1687, 46.

[39] Cassirer 1932, 7.

[40] Lakatos, Musgrave (eds.) 1970; Howson (ed.) 1976; Feyerabend 1976; Musgrave 1978; Lakatos 1978, I, II.

[41] Lakatos 1978, I, 51.

[42] Popper 1963, 187-188; 1972, 173.

[43] Feyerabend 1975, chapter 1; 1978.

[44] Laudan 1977, 95-100, 103-105.





Chapter 8


The search for structure



8.1. Successive views on particles and elements


Neither Cartesian nor Newtonian mechanics was fruitful for the study of the structure of matter. Both attempted to reduce it to quantitative, spatial, kinetic, and physical relations. Only in the eighteenth century did enlightened chemists become interested in the specific properties of matter, and structural analysis started only in the nineteenth century. This is the subject matter of the present chapter.

Despite the Romantic criticism, scientists remained faithful to the method of isolation as a hallmark of experimental science. They became more and more specialized. Since the nineteenth century they no longer called themselves philosophers, but mathematicians, physicists, chemists, biologists, geologists, and so on, distancing themselves from both philosophers and theologians, who in any case started to specialize as well. Especially the study of the structure of matter required specialized attention. It started with chemistry, initially a central subject of Enlightenment philosophy, but soon a branch of natural science without strong connections to philosophy or theology.

In mechanicism, which identified matter with extension, corpuscles differed from each other only in their spatial size, shape and position; otherwise matter was considered homogeneous. Particles moved in a plenum, and acted by contact, in collisions. Experimental philosophers endowed particles with mass, and had no preference as to their composition: the particles could move in empty space or in some medium, interacting either by contact or at a distance. Aristotle and his scholastic followers, too, accepted minima naturalia. In fact, all seventeenth-century philosophers agreed that matter consists, in one way or another, of inactive particles, though Newton’s third law of motion and his theory of gravity shed some doubt on this view.

The history of atomism can be divided into four phases, partly overlapping each other, to be called the speculative, the empirical, the theoretical, and the experimental phase. Accordingly, the definitions of ‘atom’ and ‘atomist’ are far from consistent, and the same applies to the related concept of ‘element’. We have already seen that Galileo Galilei and René Descartes are often called atomists, although they never called themselves such, and do not satisfy any definition of the term, unless every corpuscularist is considered an atomist.

In the first, speculative phase, the atom was a philosophical concept. From Leucippus and Democritus (circa 400 BC) to Pierre Gassendi, who in the seventeenth century attempted to revive ancient atomism, philosophers speculated about the question of whether matter is continuous, hence infinitely divisible, or built up out of atoms. Atoms are indivisible (a-tomos in Greek), indestructible, have a fixed shape and magnitude, are infinitely hard and elastic, and move in an otherwise void space. There was no unanimity about these properties: in the seventeenth century scientists came to the conclusion that an atom cannot be simultaneously hard and elastic, and the existence of a void or vacuum, too, was not generally accepted. Atomism had fallen into discredit since Aristotle accused it of materialism and atheism.

The ancient theories may be called speculative because they did not lead to experimentally testable conclusions. The alternative was Aristotle’s and Descartes’ view that matter is infinitely divisible, although Aristotle accepted natural minima, and Descartes distinguished three kinds of particles according to their size. During the seventeenth century no scientist adhered to ancient atomism, Gassendi excepted.

The second, empirical phase is characterized by the transformation of elements. Aristotle’s philosophy distinguished matter from form, combined in every substance (something existing independently). Unformed matter, materia prima, does not exist as such. Following Empedocles, who introduced some specific variety into matter, Aristotle recognized four terrestrial elements: earth, water, air, and fire. He added a fifth element (quintessence), the celestial ether. Plato related the elements to the five regular polyhedra,[1] but this could not serve Aristotle’s theory of change. Generation and corruption always involve a mixing of elements. Because they cannot be generated or corrupted, the celestial bodies are made of a single element and move uniformly in circles around the earth. Aristotle related the terrestrial elements to termini (end points) of change. These are pairs of contrary properties or qualities, like warm and cold, dry and moist, up and down. Earth is dry and cold, water moist and cold, air hot and moist, and fire hot and dry. Earth and water are heavy, and by their nature move downward; fire and air move upward. The upward and downward motions are opposite, hence point to imperfection, and to the existence of at least two elements, one heavy, the other light.[2] Aristotelian scholars never related the contrary qualities of heavy and light to density. Only neo-Platonic scholars like Giovanni Benedetti and Galileo Galilei studied density as a quantitative specific property of solids and liquids.[3]

The medieval alchemists added some ‘principles’ to Empedocles’ elements. The philosopher’s quicksilver (to be distinguished from real mercury) is the metallic principle, matter that can be flattened and forged. It is a combination of solid and fluid, of earth and water; it is material, passive, and female. The philosopher’s sulphur is the combustible and colourful principle, a combination of air and fire; it is spiritual or pneumatic, active, and male. Sometimes salt was added as the principle of rigidity, solidity, dryness, and earth. Even in the eighteenth century, Antoine Lavoisier would call two newly established elements ‘oxygen’ and ‘hydrogen’, the acid-forming and the water-forming principle, respectively.

An important aim of the medieval alchemists was the transformation of metals, considered mixtures of elements and principles. More than the Greek philosophers, the alchemists were empirically inclined: they invented the laboratory. Without caring very much about theories, they performed endless experiments, sometimes with lasting results, concerning the categorizing and purification of existing materials, and the production of new ones. Because alchemists were usually suspected of sorcery, they had to keep their activities secret. Once they started to publish their results, they became the forerunners of experimental philosophy and of modern chemistry. Another aim of alchemy was the search for the elixir of life or panacea, a universal medicine. Paracelsus transformed this into iatrochemistry, the cure of illness by chemicals instead of bloodletting and steam baths.

During the third, theoretical phase, Enlightened philosophers like Antoine Lavoisier in France and Joseph Priestley in England[4] started to accept that, besides gravity, material bodies interact with each other through electric, magnetic, and chemical forces, and that these bodies have some specific composition. Structural atomism does not consider matter to be an unformed and homogeneous substance, but recognizes a rich variety of specific structures, like atoms, molecules, crystals and living cells. The revolutionary chemists put an end to the traditional four elements. Air turned out to be a mixture of various gases; water became a compound of oxygen and hydrogen; earths (ores) were recognized as compounds of metals and oxygen. Fire became a substance (first phlogiston, next caloric), finally to become a kind of energy. On the other hand, metals became elements, just like oxygen, hydrogen, nitrogen, carbon, sulphur, and phosphorus; their mutual transmutation was considered impossible. In analytical chemistry the concept of an element received an entirely new meaning, that of a substance that cannot be analysed further.

Beginning with John Dalton, shortly after 1800, the atom became primarily a theoretical concept. In the preceding century, Enlightened chemists had definitively distanced themselves both from Empedocles’ elements and from alchemy. They distinguished between elements, compounds, and mixtures, and became more interested in combinations of elements into compounds, and of atoms into molecules, than in physical forces between atoms. The existence of atoms and molecules became a fruitful assumption, enabling chemists to explain and predict many kinds of phenomena.

The fourth phase, the experimental one, started about 1900 with the discovery of particles more elementary than atoms. Atoms turned out to be neither indivisible nor indestructible, not infinitely hard or elastic, and without a fixed shape or magnitude. They do not move in a vacuum but in an electromagnetic field, and they have an internal structure, determined by electromagnetism and other kinds of interaction. The atom is no longer an explanans, an explanatory model, but an explanandum, something that exists and whose internal structure should be explained. In the modern investigation of the structure of matter, in quantum physics and quantum chemistry, experiments play a decisive heuristic part.


8.2. Enlightened chemistry: compounds


During the eighteenth century, Enlightened chemists distanced themselves from the scholastic tradition, dominated by Aristotle. Following Francis Bacon’s advice to be inspired by the expertise of craftsmen, apothecaries and alchemists (1.2), they made a distinction between elements, compounds, and mixtures.[5] They did not define elements philosophically, but practically, as substances that could not be decomposed into more elementary parts. From their practice they knew ever more compounds, consisting of two or more elements in a fixed ratio, with properties which could be quite different from those of the composing elements. In 1794 Louis Joseph Proust formulated the law of constant composition, stating that in a chemical compound the elements always combine in a constant mass proportion. Aggregates that did not satisfy this law were not compounds but mixtures, sharing the properties of their components. For instance, a mixture of hydrogen gas and oxygen gas is also a gas, whereas water, as a compound of oxygen and hydrogen in a fixed mass ratio, occurs as a vapour, a liquid, or a solid, in which the properties of hydrogen and oxygen are not recognizably present.

Several combinations of elements form different compounds, each having its own typical mass proportion and its own specific properties. For instance, carbon and oxygen form the poisonous carbon monoxide as well as the non-poisonous carbon dioxide, which turned out to play an important part in the metabolism of plants and animals. Together with the law of conservation of mass in chemical reactions, Proust’s law became a major tool in analytical chemistry.
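A simple arithmetical illustration, using modern rounded atomic masses (an anachronism relative to the chemists discussed here):

```latex
% Carbon monoxide: 12 g of carbon combine with 16 g of oxygen.
% Carbon dioxide:  12 g of carbon combine with 32 g of oxygen.
% The masses of oxygen combining with a fixed mass of carbon
% therefore stand in the simple ratio
\frac{m_{\mathrm{O}}(\mathrm{CO})}{m_{\mathrm{O}}(\mathrm{CO_2})}
= \frac{16}{32} = \frac{1}{2}.
```

Each compound separately obeys Proust’s law of constant composition; the simple whole-number ratio between the two compounds is the kind of regularity that Dalton’s atomism would later explain.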

The attention of the chemists shifted from synthesis (producing gold or medical drugs) to analysis (finding out how substances are composed). The first was characteristic of alchemy, an age-old practice, the second of the chemistry of the Enlightenment, an emerging science. The alchemists applied fire to the distillation of fluids and the purification of metals, and considered metals to be mixtures of ores (earth) with fire. Georg Stahl accepted this view, but like Robert Boyle he abandoned Empedocles’ theory, assuming that the number of elements could be more than four. One of the new elements was called phlogiston, experienced as heat, the inflammable part of fuel. Antoine Lavoisier replaced phlogiston by caloric (calorique). The alchemists made iron from iron ore by heating it, and therefore thought that iron is a compound of iron ore and heat. Lavoisier argued that metals are elements, not composed of ores, which he considered compounds of metals with oxygen. Later on, Thomas Kuhn would call such a change a ‘paradigm shift’.

The criterion of an element became a substance that could not be decomposed into other substances, as far as known at present, as Lavoisier cautioned. Instead of a natural-philosophical concept, in analytical chemistry it became an empirical one. A compound was supposed to be known if it could be decomposed into its elements (analysis), and if it could be composed from the same elements (synthesis). Because plants and animals could be analyzed but not synthesized, they were considered neither compounds nor mixtures. As organized wholes they later received the generic name of organisms.

In this investigation, the applied mathematics was not geometry, algebra, or the calculus, but plain arithmetic: the mass ratios of various substances in chemical processes, specific densities, specific heats, and the heat involved in chemical reactions and in phase transitions like melting.

Initially the chemists did not distinguish carefully enough between chemical reactions and phase transitions between different states of the same substance: solid, liquid, and vapour. Lavoisier believed that a liquid is a compound of caloric with a solid, and a vapour a compound of a liquid with even more caloric. When a solid is heated, free caloric is used to increase the temperature, and heat is bound in a fixed proportion to melt it; during the process of melting, heat is added whereas the temperature does not increase. After the introduction of the concept of energy, the caloric theory was abandoned, and physical phase transitions became distinguished from chemical reactions.

Attempts to arrive at a classification of chemical substances culminated in the Tableau de la nomenclature chimique (1787),[6] published by a committee of four members of the Académie royale des sciences, among them Antoine-Laurent Lavoisier. Because of the critical views he had expressed in Réflexions sur le phlogistique (1785), the table did not contain phlogiston but caloric. The proposed rational nomenclature for compounds was soon accepted throughout Europe, thanks to Lavoisier’s very influential textbook Traité élémentaire de chimie (1789).


8.3. John Dalton’s structural atomism


In the eighteenth-century identification and classification of chemical substances, atomism and corpuscularism were totally irrelevant.[7] Still under the spell of Cartesianism, Lavoisier and other chemists rejected Newton’s chemistry as proposed in the Queries in part III of Opticks. However, they applied the physical experimental method of accurate measurement in their investigations. In his treatment of electricity, Lavoisier’s teacher Jean Antoine Nollet, since 1753 professor of experimental physics at the University of Paris, adhered to an effluvium theory, similar to René Descartes’ theory of magnetism. An electric effluvium was conceived as a vapour surrounding an electrically charged object, with Cartesian action by contact. Michael Faraday and James Clerk Maxwell later developed this into the concept of the electromagnetic field. In contrast, Newtonian physicists like Benjamin Franklin and Charles Coulomb introduced the concepts of electric force and charge in a fluidum theory (5.1). A fluidum is a liquid within the object, implying Newtonian action at a distance between charged bodies. As electric current it became the natural starting point for the electrodynamic theory of the nineteenth century.

The Newtonian dualism of matter and force became the foundation of atomism, which after 1800 made a new start in the work of John Dalton. A strong argument was that matter turned out to be not homogeneous (as mechanicism assumed) but heterogeneous. Only in 1897 did Joseph Thomson establish that even electric charge is corpuscular.

A self-taught meteorologist, John Dalton was interested in the composition of the terrestrial atmosphere, which Antoine Lavoisier had recognized to be a mixture of mostly nitrogen and oxygen, with small amounts of several other gases. Dalton built his theory on the distinction between elements, compounds, and mixtures. From about 1800 onward, he connected the concept of elements with atoms, and the concept of compounds with molecules (although until the end of the nineteenth century these words were often used interchangeably).

In A new system of chemical philosophy (1808-1827) Dalton supposed that all atoms of an element are unchangeable and equal to each other, having the same mass and the same chemical properties. He proposed that all molecules of a chemical compound are composed of atoms in the same characteristic way and thus are also the same. In chemical processes molecules change by exchanging atoms, while the atoms themselves remain the same. Because not all elements are able to form compounds, Dalton attributed to the atoms not only specific properties, but also propensities, or affinities: the atom of an element may have the affinity to bind with an atom of another element. This explains the fixed mass proportion in Proust’s law.

When two elements can combine in only one proportion, Dalton assumed that the corresponding molecule contains one atom of each. Soon this would turn out to be too restrictive. In particular, a water molecule had to be H2O, not HO as Dalton proposed.

In the hands of Jöns Jacob Berzelius, who accepted Dalton’s theory even before the first volume of Dalton’s book was published (1808), considering all known elements and their compounds turned out to be a kind of interlocking puzzle. With amazing accuracy, Berzelius determined the relative atomic weights (relative to oxygen, because there are many compounds containing this element) of 45 out of the 49 elements then known. For instance, he found for lead 207.4 (modern value: 207.2), for chlorine 35.47 (35.46), and for nitrogen 8.18 (8.01).[8] By 1820 he had established the chemical composition of no fewer than 2000 compounds. Several of these turned out to be mistaken, but his achievement laid the basis for later improvements.

Affinity played a leading part in the classification, first of elements and compounds, next of atoms and molecules. In 1869 Dmitri Mendeleev ordered the elements in a sequence according to atomic mass, and below each other according to the affinity or disposition of atoms to form molecules, in particular compounds with hydrogen and oxygen. His scheme became known as the periodic table of the elements.

Dalton adhered to experimental philosophy. His theory did not start from the mechanist dualism of matter and motion, because his atoms did not move, but from the Newtonian dualism of matter and force, for his atoms and molecules interacted with each other. Dalton introduced his ideas without bothering about Kantian or Romantic natural philosophy. Like Newton, he was content if his hypotheses led to new experiments and an increased knowledge of matter.

Dalton treated caloric as a real substance in his atomic theory by assuming that an atmosphere of this element surrounds each atom. This would explain why most materials expand on heating. Dalton believed that like atoms repel each other because of this atmosphere, whereas unlike atoms do not influence each other unless bonded into a molecule. This ad-hoc hypothesis explained several properties of mixtures of gases, such as Dalton’s law of partial pressures: the total pressure of a mixture of gases is equal to the sum of the partial pressures of the individual gases in the mixture.
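
In modern terms, Dalton’s law follows directly from the ideal-gas model: each component contributes a pressure of its own, independently of the others. A minimal sketch, assuming ideal gases and an illustrative, roughly air-like composition (the amounts, temperature, and volume below are arbitrary example figures):

```python
# Dalton's law of partial pressures for an ideal-gas mixture.
# Each gas contributes p_i = n_i * R * T / V independently of the others.
R = 8.314    # molar gas constant, J/(mol*K)
T = 293.15   # temperature, K (illustrative)
V = 1.0      # volume, m^3 (illustrative)

moles = {"N2": 32.0, "O2": 8.6, "Ar": 0.4}   # amounts in mol (assumed)

partial = {gas: n * R * T / V for gas, n in moles.items()}
total = sum(partial.values())

# The total pressure equals the pressure the combined amount of gas
# would exert on its own:
combined = sum(moles.values()) * R * T / V
print(abs(total - combined) < 1e-6)  # True
```

The additivity holds because ideal-gas particles do not interact, which is exactly where real gases (as Van der Waals later showed) begin to deviate from it.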

Initially the atomic theory lost adherents because of a conflict between the theories of John Dalton and Louis Gay-Lussac. Dalton based his atomic theory on the supposition that equal masses of the chemical elements interact with each other, whereas Gay-Lussac discovered in 1806 that equal volumes of gases form the basis of chemical reactions between gases. Both statements were much less reliable then than they are now, because of the restricted accuracy of the measurements.

Accepting a hypothesis due to Amedeo Avogadro (1811), independently formulated by André-Marie Ampère (1814), could have solved this contradiction. Avogadro suggested that equal volumes of gases at the same temperature and pressure contain the same number of molecules, irrespective of the nature of the gas.

However, this hypothesis overtaxed the imagination of his contemporaries.[9] It would lead inevitably to the existence of diatomic molecules like H2 (hydrogen), O2 (oxygen), and N2 (nitrogen). It was by no means clear why a gas like hydrogen should have two smallest parts, the atom H and the molecule H2. This reinforced the prevailing doubt about the reality of Dalton’s structural units.


8.4. The reality of atoms and molecules


The existence of atoms as structural parts of molecules became a fruitful theoretical assumption enabling chemists to explain and predict many kinds of phenomena. An atom was still believed to be indivisible and elastic, but it was no longer the smallest amount of a chemically pure substance, which became a molecule. Between 1830 and 1860 many chemists and physicists doubted the reality of these atoms, although they all applied the atomic theory to analyse the composition of chemical compounds. The atomic hypothesis was applied by all chemists, and defended by almost none. This paradox deserves an explanation. It is a ‘… myth, as prevalent today as it was in the nineteenth century, that there existed a nonatomic chemistry which formed a viable alternative to the Daltonian system.’[10] Nevertheless, chemists almost without exception refused to defend the atomic hypothesis. As late as 1869 Alexander Williamson stated: ‘I think I am not overstating the case when I say that, on the one hand, all chemists use the atomic theory, and that, on the other hand, a considerable number of them view it with mistrust, some with positive dislike.’[11]

At the time, the atomic hypothesis had two aspects. First, it was the ancient idea of indivisible, indestructible, infinitely hard yet completely elastic smallest parts of matter. Many scientists found this idea useless, superfluous, and even contradictory: an infinitely hard object cannot be elastic. Ernst Mach, Wilhelm Ostwald, and other positivists considered it a metaphysical assumption, which should find no place in experimental science.[12]

The second aspect was John Dalton’s idea of atoms as carriers of quantitative properties (their mass) and their disposition to form molecules in fixed proportions. Even the most convinced adversaries of the atomic hypothesis applied this second idea. After Dalton and Berzelius, no chemist could avoid it. As experimental scientists, chemists were mostly interested in measurable quantities, and in technical methods to perform measurements accurately. Nevertheless most chemists were initially sceptical about the real existence of atoms and molecules. Only in 1860, more than half a century after John Dalton’s epoch-making book, did Stanislao Cannizzaro succeed in convincing the majority of the chemical community of their reality.

When Dalton’s static model of a gas was replaced by the dynamic models of Rudolf Clausius (1857) and James Clerk Max­well (1860), taking into account the mutual interaction between molecules, also physicists started to consider atoms and molecules realistically. However, they based their kinetic gas theory on the ancient view of indivisible elastic atoms moving in a void, rather than on Dalton’s conception of atoms having typical properties and dispositions. They developed a mechanical atomic theory, leading to experiments on gases, and to attempts to reduce the laws of thermodynamics to statistical mechanics. Their determination of the specific heats of various gases confirmed the hypothesis that the molecules of elements like hydrogen, oxygen, and nitrogen were two-atomic, such that they made a connection with the chemical atomic theories.

In 1827 Robert Brown had discovered that microscopically small particles like pollen in a gas or liquid move spontaneously but irregularly. In 1905 Albert Einstein explained this from random collisions with invisible molecules.[13] Applying Einstein’s calculation, Jean Perrin experimentally determined Avogadro’s number (the number of molecules in a standard amount of any kind of gas).[14] This combined theoretical and experimental result convinced the majority of scientists, and even some positivist philosophers (Ernst Mach excepted), of the real existence of molecules. Between 1900 and 1910 Planck and Einstein invented various other methods of determining Avogadro’s number, and the results agreed satisfactorily with each other. It was now possible to calculate the individual mass of an atom as well. Applied to a liquid, it also allowed an estimate of the size of an individual molecule.

However, the discovery of the electron and of radioactivity, with their unexpected properties, marked the end of nineteenth-century atomism. The discovery of the electron (1897) led to the insight that atoms have an internal structure and do not constitute the smallest elementary building blocks of matter. The investigation of radioactivity (since 1896) brought to light that atoms are not unchangeable. The transmutation of elements, sought in vain by medieval alchemists, occurred in radioactive processes and made the idea of elements doubtful. Moreover, Ernest Rutherford’s research showed irrefutably that radioactive atoms decay stochastically and indeterministically, according to a random process.

These insights removed the basis of atomism exactly when the real existence of atoms was put beyond reasonable doubt. The atom changed from a hypothetical explanans, part of an explanation, into a real explanandum, something that exists and whose structure must be explained. Moreover, the new atomic theory should explain why physics and chemistry were so successful during the nineteenth century in applying the concept of an atom as a principle of explanation. Unmistakably, this problem situation contributed to the crisis of 1910 (12.4).


8.5. The hidden structure of matter


Quantum physics emerged from atomic physics, and at times the two are still identified. Meanwhile, the quantum formalism has been applied successfully to chemistry, molecular physics, solid-state physics, astrophysics, nuclear physics, and sub-nuclear or high-energy physics (6.6). Quantum physics did not really begin with the discovery of Planck’s constant in 1900, nor even with Einstein’s conjecture regarding the quantum nature of light in 1905. Rather, it began with the study of the internal structure of atoms by the spectroscopists, by Ernest Rutherford, Niels Bohr, Arnold Sommerfeld, and others. Bohr, for example, emphasized as the central problem that the stability of the atom cannot be explained in the framework of classical electromagnetism and therefore requires a completely new approach. Pursuing this path, in 1913 and afterwards Bohr made his most remarkable contributions to the development of quantum physics.[15]

Bohr’s approach diverted sharply from the views of most of his contemporaries, including Max Planck and Albert Einstein. Planck continuously sought a reconciliation of the new experimental and theoretical results with classical physics. Einstein was more interested in a radically unified theory embracing both electrodynamics and mechanics which would account for the new phenomena. In contrast, only Bohr’s step-by-step approach (the method of successive approximation, 7.5) turned out to be fruitful, and so he is the principal originator of the quantum theory of structures.

An interesting aspect of the modern investigation of matter is that most structures have been discovered recently, having been totally unknown during the greater part of history. Even the relevance of electricity, until the seventeenth century known only as an obscure property of amber, became clear only gradually during the nineteenth century, whereas the nuclear forces were discovered only after 1930, less than a century ago. This is a consequence of the fact that these forces concern the structure of stable things, screened from disturbing influences from outside. The deep structure of matter is hidden, and can only surface by intensive experimental and theoretical research. It appears that each discovered structure hides a more fundamental one.

Mainstream philosophy (let alone theology) does not pay much attention to structures.[16] A systematic philosophical analysis of structures is wanting. This is strange, for these form the most important subject matter of twentieth- and twenty-first-century research, in mathematics as well as in the physical and biological sciences. The structuralist approach is much more characteristic of modern science than the functionalist one, still favoured by philosophers of science.

[1] Plato, Timaeus.

[2] Aristotle, On the heavens, I, 2, 3.

[3] Galileo 1586; Stafleu 2016, 1.4.

[4] Aykroyd 1935.

[5] Klein, Lefèvre 2007.

[6] Klein, Lefèvre 2007, chapters 4, 5.

[7] Klein, Lefèvre 2007, 38; Gaukroger 2016, 72.

[8] Levenson 1994, 156-157.

[9] Frické 1976; Glymour 1980, 226-263.

[10] Rocke 1978, 262.

[11] Alexander Williamson in 1869, quoted by Rocke 1978, 225.

[12] Scott 1970; Elkana 1974, 3-6.

[13] Einstein 1905-1908.

[14] Nye 1972; Brush 1976, 655-701; Clark 1976, 93-98; Lindsay (ed.) 1979, 342-349; Pais 1982, chapter 5.

[15] Heilbron, Kuhn 1969; Pais 1991.

[16] Sklar 1993, 3.





Chapter 9

Randomness




9.1. Natural laws in modern physics


Until 1900 physicists and chemists discovered, formulated, revised, and adapted one law after another. Usually these were named after their discoverer, though not always correctly. Then it suddenly stopped, as if natural laws were a hallmark of classical physics and chemistry.[1] The word law remained in use for the results of classical science, hailed as products of the Enlightenment. For this striking difference between classical and modern physics several explanations may be suggested: one physical, one theological, and one philosophical.

First, around 1900 physicists became aware of the stochastic character of nature. Initially, statistics was applied because of the lack of sufficiently detailed knowledge of, for instance, the states of the individual molecules in a gas. However, Ludwig Boltzmann’s statistics implied that the laws of thermodynamics were merely approximately correct, and radioactive decay turned out to be an intrinsically stochastic process. Quantum physics confirmed this trend. Moreover, scientists became aware of specific laws, limited to certain classes of things and events, first in atomic physics and chemistry, later in sub-atomic nature. Physicists, chemists, and biologists became more interested in the specific structure and functioning of nature than in general laws.

Second, during the seventeenth and eighteenth centuries natural laws were considered instruments of God’s government. This could be interpreted either in the idealistic-rationalistic sense of René Descartes and Immanuel Kant, who considered natural laws both necessary and apodictic (irrefutable), based on a priori principles; or in the voluntarist or empiricist way of Isaac Newton, Robert Boyle, and John Locke, such that the world is as God willed it, but could have been otherwise: God could have made the world differently, and the laws are not apodictic but can only be known a posteriori, from empirical research. However, already during the classical period some physicists became averse to the metaphor of natural law if it implied the recognition of a lawgiver, which they would gladly relegate to theologians. Robert Boyle considered the metaphor of law inappropriate for quite another reason: only an intelligent being is able to act according to a law; soulless bodies cannot do that.[2]

Most classical physicists were faithful Christians, and many adhered to some variety of natural theology, assuming that God ordained the natural laws at the creation. Newton believed that his physics proved the existence of a benevolent God.[3] At the end of the nineteenth century, scientists started to distance themselves from this view, either because they became atheists or agnostics, or because they considered it theological or metaphysical, beyond the reach of natural science. Therefore they avoided the metaphor of law, gradually replacing it by other expressions of regularity, for they never ceased to study regular patterns in nature.

Third, although Immanuel Kant’s interpretation of natural laws as necessarily true and independent of experience was never accepted by experimental philosophers, it was quite influential among (especially German) mechanists. Kant considered natural laws to be ‘principles of necessity’. Norman Swartz argues against this necessitarian view of natural laws, favouring the regularist view (laws express only what does occur).[4] Swartz mentions and dismisses a third and older view, the prescriptivist one, that laws have been issued by God.[5] None of these views can be proved, and the choice between them depends on one’s scientific world view.[6]

The third view lost its appeal after the experimental discoveries of the last decade of the nineteenth century. In particular after the acceptance of the Copenhagen interpretation of quantum mechanics (about 1930), Ernst Mach’s influence on philosophy resulted in a revival of various positivist philosophies in the first half of the twentieth century (8.8). However, in the second half of the twentieth century realism returned.[7] At the end of that century, philosophers became aware that experimental physicists had never ceased to be realists, unwaveringly believing that the hidden structure of physical reality, however complex, is there to be discovered.[8] The aim of physical science, to discover regularities in nature and to apply them in many kinds of situations, has never been abandoned.

The modern view appears to be that the aim of science is not to find universal laws, but to investigate the hidden structure of matter (chapter 10). The emphasis on universal laws shifted to the insight that many regularities can be expressed as symmetries, implying that many kinds of imaginable processes are impossible.

In modern physics, the metaphor of law made room for terms like Einstein’s postulates, Bohr’s atomic model, Pauli’s exclusion principle, Heisenberg’s relations, crystal symmetry, Fermi-Dirac statistics, the Zeeman effect, Schrödinger’s equation, and Boltzmann’s constant. Evidently, there are many more law statements than Newton’s or Coulomb’s laws and the conservation laws of energy and of linear and angular momentum. It makes sense to distinguish between general functional laws concerning relations and specific structural laws concerning restricted classes of things and events, but both are subject to symmetry principles.

In 1918 Emmy Noether published her theorem, proved in 1915, stating that to each continuous symmetry a conservation law corresponds: if a system has a continuous symmetry, then there is a corresponding magnitude whose value remains the same in the course of time. For instance, the symmetry of uniformly moving systems with respect to translations in time and space leads to the laws of conservation of energy and linear momentum. For the duality of particles and waves this means the proportionality of energy with frequency and of momentum with wave number, as was established earlier by Max Planck and Albert Einstein.
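
The proportionalities for the particle-wave duality mentioned here can be written out as the Planck-Einstein relations (where ν is the frequency, ω = 2πν the angular frequency, λ the wavelength, k = 2π/λ the wave number, h Planck’s constant, and ħ = h/2π):

```latex
E = h\nu = \hbar\omega , \qquad p = \frac{h}{\lambda} = \hbar k .
```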

In particular, high-energy physics has revealed some important regularities deserving the metaphor of law, such as the laws of conservation of lepton number (L) and of baryon number (B).[9] An important form of symmetry is that each subatomic particle has an antiparticle with the same rest mass but with opposite values for electric charge, lepton number, and baryon number. (These numbers being zero for photons, a photon is its own antiparticle.) Leptons are relatively light particles like electrons and neutrinos. Among the much heavier baryons, only protons are stable, and neutrons only if bound in a nucleus. A free neutron (mean lifetime 900 sec, L=0) does not decay into a proton and an electron, but into a proton (L=0), an electron (L=1), and an antineutrino (L=-1). This means that some processes are impossible, and this is how the laws of conservation of lepton number and baryon number were discovered from experiments. Another law is that for any structure the lepton number, the baryon number, and the number of elementary electric charges are integral. Quarks are the components of sub-nuclear particles, but cannot exist as free particles. They have an electric charge of ±1/3 or ±2/3 times the electron charge, and their combinations satisfy the law that the electric charge of a free particle can only be an integral multiple of the elementary charge. Likewise, in confinement the sum of the baryon numbers (+1/3 for quarks, -1/3 for antiquarks) always yields an integral number: for a meson and a lepton this number is 0, for a baryon B=+1, for an antibaryon B=-1. This law of confinement of quarks restricts their possible combinations to mesons (quark-antiquark pairs) or baryons (quark triplets), and therefore acts like a conservation law. Finally, the law that the electric charge of a proton equals that of an electron with opposite sign implies that combinations of nuclei and electrons may constitute electrically neutral atoms and molecules.
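
The bookkeeping behind such conservation laws can be sketched in a few lines of code. The particle table below is a minimal assumed subset, with charge Q in units of the elementary charge, lepton number L, and baryon number B:

```python
# Conservation-law bookkeeping for particle reactions (illustrative sketch).
PARTICLES = {
    "n":      {"Q": 0,  "L": 0,  "B": 1},   # neutron
    "p":      {"Q": 1,  "L": 0,  "B": 1},   # proton
    "e-":     {"Q": -1, "L": 1,  "B": 0},   # electron
    "nu_bar": {"Q": 0,  "L": -1, "B": 0},   # antineutrino
}

def conserved(initial, final, quantity):
    """True if the summed quantum number is the same before and after."""
    total = lambda side: sum(PARTICLES[p][quantity] for p in side)
    return total(initial) == total(final)

# Free neutron decay: n -> p + e- + antineutrino
for q in ("Q", "L", "B"):
    print(q, conserved(["n"], ["p", "e-", "nu_bar"], q))  # all True

# The alternative n -> p + e- would violate lepton number conservation:
print(conserved(["n"], ["p", "e-"], "L"))  # False
```

The second check shows why the antineutrino had to be there: without it, the books for lepton number do not balance, which is exactly the kind of missing-process argument by which these laws were found.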

All these laws or symmetries are empirical generalizations, based on many experiments investigating collisions between a large variety of sub-nuclear particles moving at high energies, at speeds close to that of light in vacuum.[10] Together with the classical laws of conservation of electric charge, energy, linear momentum, and angular momentum, these laws severely restrict the kinds of possible processes and structures. Conversely, they were discovered because certain expected processes did not turn up. Together these laws constitute the ‘standard model’ of subnuclear physics, dating from the 1970s. It was tentatively confirmed in 2012 by the experimental discovery of the Higgs particle, predicted as early as 1964. Tentatively: the model does not include gravity, and some recently discovered properties of neutrinos do not quite fit into it.


9.2. Mechanical determinism


In the philosophical tension between nature and freedom, randomness appears to have a pivotal function. If freedom is exclusively human, nature cannot be ascribed any latitude; determinism then becomes an unavoidable dogma, yet such determinism would prohibit human freedom entirely.

Both before and after the nineteenth century determinism hardly found adherents among natural scientists, but from the beginning of that century it became a much-discussed subject within the philosophy of nature. Usually, Enlightenment philosophers interpreted the idea of natural law in a deterministic way. The same applies to Romanticism, even though it emphasized human individuality and freedom. Determinism is an offspring of the mechanical philosophy initiated by Galileo Galilei and René Descartes, with Benedict Spinoza as its foremost champion (chapters 2 and 3). The deterministic interpretation of natural laws clashed with theological views of miracles. With the exception of Spinoza, most people considered miracles to be supernatural acts of God bypassing His laws. Isaac Newton assumed that the natural laws were not sufficient: without God’s help the solar system would not be stable. A century later Pierre-Simon Laplace proved that all planetary movements known at the time satisfied Newton’s laws within the limits of accuracy of measurement and calculation. According to a well-known legend, he assured Napoleon that he no longer needed Newton’s hypothesis about God’s assistance. The idea that God would correct the natural laws was pushed to the background of theological discussions about miracles.

Influenced by Immanuel Kant, rational mechanics (9.4) assumed that a system of point-like particles, interacting only by impact, would be completely determined by Newton’s laws of mechanics, by initial conditions (in particular the positions and velocities of the particles), and by boundary conditions (like a field or the walls of a container). This model was applied with initial success by Rudolf Clausius and James Clerk Maxwell in the physical theory of gases. It was corrected by Johannes van der Waals, who took into account the dimensions of the particles as well as their mutual attraction. The model was also supposed to be valid for extended bodies, assumed to be composed of point-like particles. The latter hypothesis has never been confirmed experimentally, but that did not keep Pierre-Simon Laplace from his famous proclamation: ‘We ought to regard the present state of the universe as the effect of its anterior state and as the cause of the one which is to follow. Assume an intelligence which could know all the forces by which nature is animated, and the states at an instant of all the objects that compose it; for this intelligence, nothing could be uncertain; and the future, as the past, would be present to its eyes.’[11]

Physico-theology had no problem in identifying this intelligence with God.

Even now, many reductionist philosophers and scientists maintain their unshaken belief in nineteenth-century Enlightenment determinism, confirming the primacy of nature in the tension of nature and freedom. Sometimes it leads them to the empirically unsubstantiated and therefore speculative hypothesis of universes parallel to the observable one. If something seems to be the random realization of a possibility (as in a radioactive process), they suppose that the other possibilities occur in some other universe, such that determinism is saved. However, these universes could not interact with each other and as a consequence could not be observed, which contradicts one of the most important physical foundations of science. If, however, the parallel universes are interpreted not ontologically but epistemologically, as merely possible, thinkable realizations of the laws that appear to be valid for the observable universe, this objection is not relevant. In that case the construction of parallel universes neither confirms nor contradicts determinism.

This illustrates that mechanist determinism has always been an article of faith, more a myth than an empirically founded theory. Determinists believed (and believe) nature to be completely determined by unchangeable mechanical natural laws. However, physicists and chemists discovered that natural laws admit of a margin of randomness, indeterminacy, contingency, or chance, subject to stochastic laws for probability, as in statistical physics, in radioactivity, and in chemical processes.[12]


9.3. Random processes


Whereas the structure of a stable physical system like an atom or a molecule is largely determined by general and typical laws, transitions between unstable states, as occurring in radioactivity or the emission of light, are to a large extent random processes, subject to stochastic principles and probability laws. A radioactive specimen changes according to the stochastic principle that the moment an individual atom decays is completely arbitrary, yet statistically it follows an exponential law with a mean decay time characteristic of the substance concerned. This is also the case with the emergence of new structures, like the formation of molecules from other molecules in a chemical reaction, or of living beings by fertilization. In general, processes are more stochastic than stable structures. In a mixture of hydrogen and oxygen, only water molecules can be formed (besides some related molecules like hydrogen peroxide), but it is largely accidental which pair of hydrogen molecules will bind with which oxygen molecule to form two water molecules.
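
The correlation between individual arbitrariness and statistical lawfulness is easily made visible in a simulation: each atom’s decay moment is drawn completely at random, yet the ensemble follows the exponential law N(t) = N0·exp(-t/τ). A minimal sketch, in which the sample size, mean decay time, and random seed are arbitrary choices:

```python
import math
import random

# Monte Carlo sketch of radioactive decay: each atom decays at an
# arbitrary moment drawn from an exponential distribution with mean
# decay time tau, yet the ensemble obeys N(t) = N0 * exp(-t / tau).
random.seed(42)          # fixed seed so the run is reproducible
N0 = 100_000             # number of atoms (arbitrary sample size)
tau = 1.0                # mean decay time (arbitrary units)

decay_times = [random.expovariate(1 / tau) for _ in range(N0)]

t = 1.0                  # observe after one mean decay time
surviving = sum(1 for dt in decay_times if dt > t) / N0
print(round(surviving, 3), round(math.exp(-t / tau), 3))
```

With 100 000 atoms the simulated surviving fraction agrees with exp(-1) ≈ 0.368 to within a few tenths of a percent, although no individual decay moment is predictable.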

Whereas fertilization is mostly a random process, the ensuing growth of an organism is to a large extent genetically determined. Yet the probability that a fertilized seed germinates, reaches adulthood, and becomes a fruit-bearing plant is very small. Therefore a plant produces during its life an enormous number of gametes. In a state of equilibrium, on average only one fertile descendant survives. But if a similar randomness occurred during the growth of a plant, no plant would ever reach the adult stage. The growth of a plant or an animal is a programmed and reproducible process; sexual reproduction is not.

Natural selection means that within a population the organisms fitting their environment have a better chance to survive and to have offspring than the less adapted organisms. Randomness and abundance in reproduction, as well as incidental and accidental mutations are conditions for natural selection. However, their results are much more restricted by natural laws than radical evolutionists would admit.


9.4. Philosophical and theological objections


Theologians and Christian philosophers sometimes reject randomness, arguing that God’s providence would not leave anything to chance, as if God could not have created a world in which His laws leave room for randomness and contingency. This view is at variance not only with common experience and with the natural laws as far as these are known at present, but also with the Christian view that humans are created to be free and responsible for their acts. Christian philosophy implies that God rules the world according to His laws, but not that He interferes in every detail of everything that happens. Apart from acknowledging that God sustains the creation by His laws, humans should not pretend to know how His providence works, beyond the religious belief in Christ’s atonement and the continuous presence of the Holy Spirit.

Some determinists assume that the determinateness of nature is less a result of science than a condition for it.[13] After posing the dilemma of natural necessity (fully determined by law) versus chance (in the sense of absolute arbitrariness), they reject the latter.[14] Therefore they have to question the individuality of, for example, radioactive particles, each having a separate existence.[15] Determinism reduces individuality to the law, while pure chance eliminates the law.

An alternative is to reject the dilemma,[16] replacing it by the correlation of lawfulness and randomness, neither of which can be reduced to the other. The individuality of atoms and atomic processes is not founded on reflection about a rationalistic dilemma, but is a premise for understanding natural science.

The introduction of randomness meets with resistance from deterministic philosophers. They believe that the application of probability merely masks the investigator’s lack of sufficient knowledge of a system at the molecular level. Randomness would then be not an ontological but an epistemological matter. Ontologically, any system would be completely determined by natural laws, initial conditions, and boundary conditions. However, stochastic processes form an inalienable part of the explanation of phenomena in radioactivity, quantum physics, and evolution. Ontologically, probability does not refer to knowledge (or the lack of it), but to the variation allowed by a law.

Only in quantum physics are determinists reluctantly inclined to acknowledge intrinsically stochastic processes like fluctuations. However, the view that these fluctuations have no significance at a macroscopic level does not withstand scrutiny. Consider the simplest example, throwing a die. Determinists assume that the outcome could be predicted if one knew the process in sufficient detail. However, if one pursues this path down to the atomic level, one inevitably reaches a point where quantum fluctuations start to play a part. Therefore, if one accepts ontological indeterminacy at the quantum level, one has to accept it at the macroscopic level as well. One could not even say that for practical purposes the result of throwing a die is determined by physical laws, for the application of this principle to any practical case is virtually impossible. In fact, in any game of chance one had better start from a distribution of chances based on the symmetry of the game, and on the assumption that the actual process is stochastic.
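The recommendation to start from the symmetry of the game rather than from a deterministic model of the throw can be made concrete. The sketch below, with an arbitrary seed and sample size, treats each throw as a stochastic process and checks that the six symmetric faces indeed occur with equal frequency.

```python
import random

rng = random.Random(7)      # fixed seed so the sketch is reproducible

# No deterministic model of the throw is attempted: the cubic symmetry
# of the die assigns each face the same a priori chance of 1/6.
throws = [rng.randint(1, 6) for _ in range(600_000)]
frequencies = {face: throws.count(face) / len(throws) for face in range(1, 7)}
```

Every frequency comes out close to 1/6; the distribution of chances follows from the symmetry of the game, while each individual throw remains unpredictable.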

The core business of quantum physics and quantum chemistry is the theoretical and experimental investigation of the hidden structures of natural things and events (8.5). Unfortunately, this has drawn much less philosophical and journalistic attention than quantum physics’ stochastic character. According to quantum physics, the individual state of a system like an atom does not exactly determine the result of its interaction with another one. The initial and final states are related not in a determined but in a lawful way, by a probability determined by the typical structure of the interacting systems, often traceable to their symmetry.

The development of a system depends on laws, both general and specific, but also on the initial state and the boundary conditions. In all applications of probability theory in physics the initial state is relevant. Although it may be partly prepared by some previous interaction, the initial state always contains an amount of disorder, in statistical physics called molecular chaos.[17] This disorder is difficult to define and is possibly a primitive concept. For instance, when checking probabilities in dice playing, it is assumed that the way the dice are thrown does not influence the result on average. An honest card player is assumed to shuffle his cards at random, but don’t ask how that is possible. In an opinion poll one strives after a representative sample. Criteria to avoid biased samples have been proposed, but these are neither universal nor sufficient.

In quantum physics the initial state determining the statistical distribution also contains an element of randomness (quantitatively indicated by the ‘phase factor’), according to a theorem related to the Heisenberg relations: if any property (like the position of a particle) is completely determined by its initial state, then the ‘canonically conjugate’ property (in this case, the particle’s momentum) is completely undetermined and therefore entirely random.
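In modern notation the Heisenberg relation for the canonically conjugate pair of position and momentum reads:

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

If the position is completely determined ($\Delta x \to 0$), the momentum spread $\Delta p$ grows without bound, so the momentum is entirely random.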


9.5. Laws for random events


In science, probability does not describe our knowledge of physical systems, but their lawfully determined, yet individual behaviour. Random events are not lawless. Since the early nineteenth century, probability calculations have been applied in astronomy to investigate the accuracy of measurements. Much earlier they were used to analyse games of chance and to establish life-insurance premiums. Many problems were solved by the application of symmetry, allowing equally probable or equally weighted situations to be identified. In the nineteenth century Évariste Galois devised group theory, the mathematical theory of symmetry. In physics and chemistry it was applied in relativity theory, in crystallography, and in the investigation of atomic, molecular, and solid-state structures. In the investigation of the structure of matter, randomness was tamed by symmetry.

However, until the end of the nineteenth century, both chemists and physicists still believed in determinism. Statistical methods were only used for practical reasons because a fully deterministic calculation of the motion of the many particles constituting a gas was (and is) beyond human capabilities.[18] Although radioactivity was considered to be a mystery, physical scientists were still confident that it could be solved along deterministic lines.

Twentieth-century science has made clear that lawfulness and randomness or contingency coexist, as conditions for the existence of real things and the occurrence of real events. Many laws concern probabilities for a collection of individual things or events, which are individually unpredictable but collectively answer to statistical laws. Lawfulness does not imply determinism. Laws allow of individual variation. Quantum physics, chaos theory, natural selection, and genetics cannot be understood without the assumption of random processes.

Nevertheless, determinism remains popular contrary to all evidence, in particular among ontological naturalists, who believe that everything can and must be reduced to natural laws about material interactions. In contrast, radical evolutionists believe that biological evolution is a purely random process, not subject to any law. It seems difficult to accept that lawfulness and individuality do not exclude but complement each other, in particular because natural laws do not always predict what will happen, but open up and restrict possibilities.

Randomness is an expression of the individuality of the systems concerned, which cannot be fully delimited by specifying some of their properties. Statistical predictions can only be made with respect to systems of which at least something is known of their typical structure, like their symmetry. Complete randomness and probability without lawfulness do not exist.

The recognition that random processes occur in nature is not a warrant for the existence of human free will, but it eliminates a constraint on it. Natural laws are not sufficient to understand free will. It requires normative principles as well.


9.6. Lawfulness in biology


Lawfulness should not be confused with Platonic or Aristotelian essentialism (7.1). Essentialism survived longest in plant and animal taxonomy. Until the middle of the twentieth century, taxonomy considered the system of species, genera, families, classes, orders, and phyla or divisions to be logically necessary. In this classification, each category was characterized by one or more essential properties: ‘(1) species consist of similar individuals sharing in the same essence; (2) each species is separated from all others by a sharp discontinuity; (3) each species is constant through time; and (4) there are severe limitations to the possible variation of any one species.’[19]

Having its roots in neo-Platonic philosophy, biological essentialism was not a remnant of the Middle Ages, but a fruit of the early Enlightenment. From John Ray to Carl Linnaeus, many realist naturalists accepted the existence of unchangeable species, besides biologists holding a nominalist view of species.[20] Ray and Linnaeus were more (Aristotelian) realists than (Platonic) idealists. Ernst Mayr ascribes the influence of essentialism to Plato.[21] ‘Without questioning the importance of Plato for the history of philosophy, I must say that for biology he was a disaster.’[22] Mayr shows more respect for Aristotle, who indeed did epoch-making work for zoology,[23] but Aristotle was no less an essentialist than Plato.

The difficulty that some biologists have with the idea of natural law is their abhorrence of essentialism. Therefore, it is important to distinguish essence from lawfulness. ‘Essential’ (necessary and sufficient) properties do not determine the character of things or processes. Rather, the specific laws constituting their character determine the lawful objective properties of the things or processes concerned.[24] These properties may display such a large statistical variation that necessary and sufficient properties, if they existed, would be hard to find.[25] Laws and properties do not determine essences but relations.

A second reason why some biologists are wary of the idea of natural law is that they (like many philosophers) hold a physicalist view of laws.[26] Rightly, they observe that the (now outdated) physical and chemical model of a natural law is not applicable to biology.[27] Determinism and causality belonged to the nineteenth-century physicalist idea of law. However, determinism is a thing of the past, and causality is no longer identified with conformity to law, but is considered a physical relation. The theory of evolution is considered a more or less plausible narrative about the history of life, rather than a theory about processes governed by natural laws.[28]

Probably biologists will not deny that their work consists of finding order in living nature.[29] ‘... biology is not characterized by the absence of laws; it has generalizations of the strength, universality, and scope of Newton’s laws: the principles of the theory of natural selection, for instance.’[30]

The theory of evolution would not exist without the supposition that the laws for life, that are now empirically discovered, held millions of years ago as well. The question of whether other planets host living organisms can only arise if it is assumed that these laws hold there, too.[31]

A third reason may be the assumption that a law only deserves the status of natural law if it holds universally and is expressible in a mathematical formula stating a constant relation. A mathematical formulation may enhance the scope of a law statement, yet the idea of natural law does not imply that a law necessarily has a mathematical form. Nor should a law apply to all physical things, plants, and animals. Every regularity, every recurrent design or pattern, and every invariant property is lawful. In the theory of evolution, biologists apply whatever patterns (in particular genetic laws) they discover in the present to events in the past. Hence they implicitly acknowledge the persistence of natural laws, also in the field of biology.

The most important example of a biological law that cannot be expressed in a mathematical formula is the law that each living organism descends from another one, omne vivum ex vivo, or more generally, each living being is genetically related to all other ones. This law may be spatially restricted to all beings living on the earth, and it cannot be excluded that the archaea are independent of the other prokaryotes, the bacteria, but even then the genetic law is valid within these groups.

In any case, Charles Darwin was not wary of natural laws. At the end of his On the origin of species he wrote: ‘It is interesting to contemplate an entangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent on each other in so complex a manner, have all been produced by laws acting around us. These laws, taken in the largest sense, being Growth with Reproduction; Inheritance which is almost implied by reproduction; Variability from the indirect and direct action of the external conditions of life, and from use and disuse; a Ratio of Increase so high as to lead to a Struggle for Life, and as a consequence to Natural Selection, entailing Divergence of Character and the Extinction of less-improved forms.’[32]

Finally, the idea of natural laws is distrusted because of the relevance of randomness for biological processes, which is at variance with the assumption that laws would be intrinsically deterministic. The above-mentioned distinction between general, functional laws concerning relations between living beings, and specific, structural laws concerning restricted types of things and events, might be helpful for understanding the idea of law in biology.


[1] Van Fraassen 1989, 36-37.

[2] Cited by Gaukroger 2006, 462-463.

[3] Newton 1687, 544-546; 1704, 369-370, 405-506; Cotes’ Preface to Principia (second edition), Newton 1687, xxxii-xxxii; Thayer (ed.) 1953, chapter III; Alexander (ed.) 1956.

[4] Swartz 1985; Carroll 1994, 24-25.

[5] Swartz 1985, 37-38.

[6] Kant 1786, 5.

[7] Popper 1959, 438; 1972, chapter 5; 1983, 80, 118, 131-149; Bunge 1967, I, 345; Hacking 1983.

[8] Athearn 1994; Psillos 1999; Torretti 1999.

[9] Pais 1986; Kragh 1999; Stafleu 2015, chapter 11.

[10] Pickering 1984; Galison 1987; 1997; Kragh 1999.

[11] Laplace 1812, 4-5; Popper 1982, xx; Hahn 1986, 267-270.

[12] Hermann Weyl, Symmetry (1952), cited by Van Fraassen 1989, 287.

[13] Van Melsen 1946, 138ff; 1955, 148ff, 271ff.

[14] Van Melsen 1946, 157ff; 1955, 285ff.                   

[15] Van Melsen 1955, 300.

[16] Čapek 1961, 338ff.

[17] Hempel 1965, 386; Nagel 1939, 32ff; Popper 1959, 151ff, 359ff.

[18] Reichenbach 1956, 56.

[19] Mayr 1982, 260.

[20] Toulmin, Goodfield 1965, chapter 8; Panchen 1992, chapter 6.

[21] Mayr 1982, 38, 87, 304-305.

[22] Mayr 1982, 87.

[23] Mayr 1982, 87-91, 149-154.

[24] Rosenberg 1985, 188.

[25] Hull 1974, 47; Rosenberg 1985, 190-191.

[26] Sterelny 2009, 324-327.

[27] Hull 1974, 49; Mayr 1982, 37-43, 846.

[28] Mayr 2000, 68.

[29] Rosenberg 1985, 122-126, 211, 219; Ruse 1973, 24-31; Rensch 1968; Griffiths 1999; Ereshefsky 1992, 360; Hull 1974, chapter 3.

[30] Rosenberg 1985, 211.

[31] Dawkins 1983.

[32] Darwin 1859, 459.



Chapter 10


The Romantic turn


10.1. What is Romanticism?


Romanticism is often considered a reaction to the Enlightenment. It is an artistic, literary, musical, and intellectual movement that originated in Europe toward the end of the eighteenth century. In most areas it reached its peak during the first half of the nineteenth century. Romanticism is characterized by its emphasis on emotion, aesthetics, and naturalness, and more on society than on individuals. Its perception of naturalness is a reaction to the scientific rationalization of nature by the Enlightenment. Instead of distancing themselves from nature, the Romantics wanted to return to a natural state of innocence, with a strong preference for a primitive society. They replaced the Renaissance and Enlightenment ideal of classical beauty as mimesis (imitation of nature) by expressionism, considering an artist an autonomous and free creator of art. Although Romanticism was embodied most strongly in the visual arts, music, theatre, and literature, it also had a major impact on historiography, education, the natural sciences, and theology. Its political effect on the growth of nationalism was highly significant.

The influence of Romanticism on the philosophy of nature is expressed in the idea of the unity of all natural forces, in objections to the lawfulness of nature, in the emphasis on subjectivity at the cost of objectivity, and in a preference for practice over theory.

The shift from the rationalist Enlightenment to emotive Romanticism marks the transition from the primacy of the domination of nature to the primacy of personality in humanistic thought. Because this controversy was not overcome in principle, the transition from Enlightenment to Romanticism could be quite smooth. The former is characterized by rationality, the rule of reason, the latter by its emphasis on feeling and sensibility, on unity, harmony, and coherence. However, Isaac Newton’s experimental philosophy and John Locke’s empiricism also distanced themselves from René Descartes’ rationalism. In his three Critiques, Immanuel Kant restricted the scope of reason, subordinating it to feeling. The great romanticist Jean-Jacques Rousseau contributed to the Encyclopédie. Even earlier, the staunch rationalist Gottfried Leibniz developed some quite romantic views.

Therefore, apart from the arts, one may consider Romanticism a correction of Enlightenment philosophy rather than a reaction against it. Romanticism never succeeded in replacing Enlightenment philosophy entirely.


10.2. Gottfried Leibniz


Gottfried Leibniz was a diplomat in the service of several German princes. He rejected Benedict Spinoza’s radical Enlightenment, but was no more satisfied with any of its moderate competitors. In his critique of Spinoza and in his discussion with Samuel Clarke, the romantic principle of the identity of indiscernibles (principium identitatis indiscernibilium) played an important part.[1] He was foremost concerned with the harmonization of church and state, of the various strands of Christianity (Catholic, Calvinist, Lutheran, and Anglican), and of Cartesian mechanism with Aristotelian scholasticism. Therefore he is a precursor of Romanticism, with its emphasis on unity and harmony.

Leibniz was less a mechanical philosopher than René Descartes, Christiaan Huygens, and Benedict Spinoza, not sharing their views on space and motion. Like Huygens he was very critical of Isaac Newton’s concept of force. As an alternative to Descartes’ quantity of motion (the product of quantity of matter and speed) and Huygens’ momentum (the product of mass and directed velocity), he introduced vis viva, living force, the product of quantity of matter and the square of speed. Vis viva is not one of Newton’s variants of force, but it was intended to provide mechanism with a dynamic component: it is the ability to perform work. In 1847, Hermann Helmholtz called half of the vis viva kinetic energy, albeit with mass as Newton’s measure of the quantity of matter.

Like Descartes’ quantity of motion and Huygens’ linear momentum, Leibniz’ vis viva was supposed to be a variable property of a moving body, transferable to other bodies by some contact action. In contrast, Newton’s impressed force was introduced as a relation between bodies, as follows from his third law of motion, the law of action and reaction. In the eighteenth century, disciples of Descartes and Leibniz quarrelled about the priority of momentum and vis viva. Jean d’Alembert demonstrated these concepts to be equally useful, momentum being the time-integral of the Newtonian force acting on a body, and vis viva being its space-integral.[2] This means that the impressed force is the cause of the change of momentum, and vis viva is the ability to perform work. This proposal, presented as a compromise, was evidently unacceptable to both parties, because it would imply the recognition of the priority of Newton’s impressed force.
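In modern notation d’Alembert’s demonstration amounts to two integrals of Newton’s second law: the change of momentum equals the time-integral of the impressed force, and the change of half the vis viva (the kinetic energy) equals its space-integral, the work done:

```latex
\Delta(m\mathbf{v}) = \int \mathbf{F}\,dt,
\qquad
\Delta\!\left(\tfrac{1}{2}\,m v^{2}\right) = \int \mathbf{F}\cdot d\mathbf{x}
```

Both quantities are thus derived from the same impressed force, which is why the compromise in effect favoured Newton’s concept.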

Leibniz tried to synthesize matter and mind (Descartes’ res extensa and res cogitans) both in his concept of living force and in his monadology.[3] Monads are the elements of Leibniz’ ontology. Each material object, like each person, is an autonomous monad, with at most a vague perception of other monads. The monads constitute a hierarchy of continuously increasing rationality. At the base one finds purely material monads, at the top the deity as pure mathematical reason. Leibniz’ theism passed into a logical-mathematical pantheism, identifying the deity with world-harmony. Leibniz effectively replaced Spinoza’s ‘Deus sive natura’ (God, that is nature) by ‘Harmonia universalis, id est Deus’ (universal harmony, that is God).[4] Each monad reflects nature, in a harmonia praestabilita, a pre-established harmony. Monads are the only substances, individual centres of vis viva and of rationality, yet monads do not interact with each other. Space, matter, and motion are not fundamental but phenomenal, aspects of phenomena. In this way Leibniz believed he had solved Descartes’ problem of the interaction of mind and matter.

Of all Enlightenment philosophers, Leibniz was the most concerned with a harmonious relation between science and theology. He discussed the question of whether the world as we know it is the best possible.[5] He argued that God is subject to the same logic as humanity. He applied the principle of contradiction and the principle of sufficient reason (‘that nothing happens without a reason why it should be so, rather than otherwise’[6]) as an additional argument for the existence of God. He used it to develop a theodicy, a justification of the belief that God is both good and almighty, yet allows evil, both in the natural and in the human world.[7]


10.3. The unity of all natural forces


In the physical sciences, the period between circa 1600 and 1850 is characterized by the successive isolation and development of separate fields of science: gravity, magnetism, electricity, sound, optics, aerostatics, hydrostatics, and various branches of chemistry (5.1). Newton’s success in his investigation of gravity was partly due to the fact that he could develop it in isolation from other phenomena. Isolation became an important heuristic in experimental philosophy, much more fruitful than the reductionist philosophy of mechanicism.

According to the matter-force dualism (4.3), each field of science identified its own matter and its own kind of force. Both mechanists and experimental philosophers initially adhered to the neo-Platonic view that matter could not be active, but soon after Newton, physicists started to accept that matter is the active source of some specific kind of force, as is the case in gravity, electricity, and magnetism, as well as in chemistry. In several fields of science matter took the character of an imponderable fluid, especially important if a conservation law could be applied (5.1). This excess of fluids elicited Friedrich Schelling’s sarcastic comment: ‘If we imagine that the world is made up of such hypothetical elements, we get the following picture. In the pores of the coarser kinds of matter there is air; in the pores of the air there is phlogiston; in the pores of the latter, the electric fluid, which in turn contains ether. At the same time, these different fluids, contained one within another, do not disturb one another, and each manifests itself in its own way at the physicist’s pleasure, and never fails to go back to its own place, never getting lost. Clearly, this explanation, apart from having no scientific value, cannot even be visualized.’[8]

The fields of science did not remain isolated forever, but became connected by specific effects, as these bridge phenomena were later called. As early as 1756, Franz Aepinus observed the pyro-electric effect: some crystals become electrically polarized by heating. In 1821 Thomas Seebeck discovered thermoelectricity, meaning that a heat flow causes an electric current. In 1834 Jean Peltier found the reverse effect. In 1763 Ebenezer Kinnersley observed that the discharge of a Leiden jar through a thin wire caused so much heat that two iron wires could be welded together. In 1807 Thomas Young observed the same for a current delivered by a voltaic pile, a forerunner of the present-day electric battery, in which a chemical process produces an electric current. In 1820 Hans Christian Oersted observed that a magnet’s direction is influenced by an adjacent electric current.

Together with the influence of Immanuel Kant, these bridge effects led to the romantic idea of the unity of all natural forces, and of the unity of the sciences. This idea was also expressed in d’Alembert’s and Diderot’s Encyclopédie, later in the positivist view of the unity of method in all empirical sciences, and in the twentieth-century search for a theory of everything in natural philosophy.[9]

Most of these connections were discovered at a time when Kantian mechanicism and romantic Naturphilosophie exerted an important influence, especially in Germany, where academic natural science was often considered to be part of the study of philosophy. ‘Naturphilosophie’ was so typically German that the word is never translated.


10.4. German Naturphilosophie


By the end of the eighteenth century, Romanticists like Johann Wolfgang Goethe, Jean-Jacques Rousseau, and the German Naturphilosophen turned their backs on both mechanicism and experimental philosophy. Rejecting rationalism as well as empiricism, the Romanticists introduced sensitivity and imagination as primary sources of knowledge. In his prize-winning essay Discours sur les sciences et les arts (1750), which made him famous at one stroke, Rousseau attacked the supremacy of natural science: ‘If our sciences are vain in the object proposed to themselves, they are still more dangerous by the effects which they propose.’[10]

The romantic poet and novelist Johann Wolfgang Goethe criticised Newton’s theory of light and developed his own Farbenlehre (Theory of colours, 1810),[11] which he valued more than his poetry. He maintained that one should experience light in its totality, and that Newton’s experiments with prisms in a largely darkened room could give no insight into the essence of light and its coherence with other phenomena.

The founder of Naturphilosophie, Friedrich Schelling, stated: ‘The assertion is, that all phenomena are correlated in one absolute and necessary law, from which they can all be deduced; in short, that in natural science all that we know, we know absolutely a priori. Now, that experiment never leads to such a knowing, is plainly manifest from the fact that it can never get beyond the forces of nature, of which itself makes use as means ... The assertion that natural science must be able to deduce all its principles a priori, is in a measure understood to mean that natural science must dispense with all experience, be able to spin all its principles out of itself.’[12]

Georg Hegel, too, objected to Newton’s extreme mathematization of the natural sciences. In various ways, Hegel tried to make connections between the fields of science of his age.[13] He introduced the triad of thesis, antithesis, and synthesis as a pattern of historical development, later adopted by Karl Marx. According to Schelling, two oppositely directed forces do not lead to equilibrium, but to a new force, as in Hegel’s dialectical scheme. In his wake, Hans Christian Oersted assumed that the magnetic action of a wire connecting the poles of a voltaic pile (1820) was not caused by a continuous flow of some kind of matter, but by a succession of interruptions and re-establishments of equilibrium, a state of continual conflict between positive and negative electricity.[14]

Friedrich Schelling developed his Naturphilosophie before 1800, and about 1830 he had some reason to conclude with satisfaction that his views had been confirmed by the discoveries of Oersted, Seebeck, and Faraday. Yet even his friend Oersted distanced himself from Schelling’s speculations, because these had few or no relations with empirical reality. Georg Hegel’s influence on the development of natural science is negligible. Sooner or later, every physicist or chemist of good reputation rejected the speculative character of Goethe’s, Schelling’s, and Hegel’s romantic Naturphilosophie.

The urge to find the unity of all natural forces, the discovered convertibility of various interactions, and the analysis of work-producing machines all contributed to the discovery of the law of conservation of energy by Hermann von Helmholtz and others (1847).[15] It became the First Law of thermodynamics. The Second Law was first formulated about 1850 by Rudolf Clausius and William Thomson. In terms of the concept of entropy, introduced by Clausius in 1865, it expresses that any physical system isolated from the external world will change such that its entropy increases, until an equilibrium state is reached.
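In modern notation the two Laws can be summarized as follows, where $U$ is the internal energy of a system, $Q$ the heat it absorbs, $W$ the work it performs, and $S$ the entropy of an isolated system:

```latex
\Delta U = Q - W \quad \text{(First Law)},
\qquad
\Delta S \geq 0 \quad \text{(Second Law, isolated system)}
```

The First Law expresses conservation of energy; the Second Law singles out the direction in which isolated systems change until equilibrium is reached.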

The thermodynamic laws restrict the possibilities of power-producing machines. A perpetuum mobile based on frictionless motion had been considered impossible much earlier. In 1586 Simon Stevin applied the impossibility of a perpetuum mobile to the problem of equilibrium on an inclined plane.[16] The First Law states that producing work is impossible without applying other work or heat. Since 1775 the Paris Académie des Sciences refused to consider inventions based on the possibility of producing work from nothing.[17] The Second Law forbids a machine that converts heat completely into work. Later a Third Law was added, Walther Nernst’s theorem (1905): in a physical process in which the temperature tends to absolute zero, the change of entropy approaches zero. As a consequence, the absolute zero of temperature cannot be reached by a finite series of cyclical processes.

The use of the word ‘Law’ written with a capital (‘Hauptsatz’ in German, ‘Hoofdwet’ in Dutch) expresses the romantic idea of a new unifying theory.


10.5. Unification


In 1850 William Thomson and Rudolf Clausius designed thermodynamics, starting from the Carnot cycle, invented in 1824 by Sadi Carnot as an idealised model of a steam engine. This theoretical cycle consists of two isothermal processes (at constant temperature) alternating with two adiabatic processes (in which no heat is exchanged with the environment). Its analysis initiated the largest unification project of the century.[18]

The Carnot cycle became the basis of the thermodynamic temperature scale. William Thomson, since 1892 Lord Kelvin, defined the metric of temperature by equating the ratio of the cycle’s high and low temperatures to the ratio of the corresponding exchanged amounts of heat. It was soon proved that this scale can be made to coincide with that of the ideal-gas thermometer, based on the laws of Robert Boyle and Joseph Louis Gay-Lussac. Later the unit of this scale, equal in magnitude to that of the centigrade scale but with a different zero point, was named after Kelvin. This theoretical temperature scale is independent of the specific properties (such as the coefficient of expansion) of mercury or alcohol in a liquid thermometer; of the actual gas in a gas thermometer; or of a metal or semiconductor in an electrical thermometer. It is therefore called ‘absolute’, like absolute time and space serving as standards for practical instruments (4.5). The zero point of the new scale is also called ‘absolute’, because of the limiting character indicated by Nernst’s theorem.
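Kelvin’s definition and its consequence for the efficiency of a Carnot engine can be written compactly, with $Q_h$ and $Q_c$ the amounts of heat exchanged at the high and low temperatures $T_h$ and $T_c$:

```latex
\frac{T_h}{T_c} = \frac{Q_h}{Q_c},
\qquad
\eta = \frac{W}{Q_h} = 1 - \frac{T_c}{T_h}
```

The efficiency depends only on the two temperatures, not on any working substance, which is what makes the scale ‘absolute’.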

Thermodynamics applies a generalized concept of force. Unlike Newton’s impressed force, a thermodynamic force is not related to mechanical acceleration, but drives a current. A temperature difference drives a thermal current; an electrical potential difference causes an electric current; a pressure gradient in the atmosphere causes wind; and a concentration difference drives the flow of a chemical substance. This idea could be fruitfully applied in physical chemistry. In each current, entropy is created, making the current irreversible. In a system in which currents occur, entropy increases until equilibrium is reached. If a system as a whole is in equilibrium, there are no net currents and the entropy is constant. (An electric current in a superconductor does not produce entropy and is therefore a boundary case. In a superconducting circuit without a current source an electric current can persist indefinitely, whereas an ordinary current would soon decay.)
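The idea that a thermodynamic force drives a current, creating entropy, can be sketched for the simplest case, heat conduction between two reservoirs (a hypothetical illustration; the linear transport coefficient and the function names are my assumptions):

```python
def heat_current(conductance, t_hot, t_cold):
    # The temperature difference acts as the thermodynamic force
    # driving a heat current from the hot to the cold reservoir.
    return conductance * (t_hot - t_cold)

def entropy_production(q_dot, t_hot, t_cold):
    # Heat q_dot leaving the hot reservoir lowers its entropy by q_dot/t_hot;
    # entering the cold reservoir it raises entropy by q_dot/t_cold.
    # The net rate is positive whenever t_hot > t_cold: the current is irreversible.
    return q_dot / t_cold - q_dot / t_hot

q = heat_current(2.0, 400.0, 300.0)            # 200.0
print(entropy_production(q, 400.0, 300.0) > 0)  # True: entropy increases
```

At equilibrium (equal temperatures) the force, the current, and the entropy production all vanish, as stated in the text.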

If all closed systems approached such an equilibrium state, nothing would change any more. This insight led William Thomson and others to speculations about the future ‘heat death’ of the universe, without bothering about the question of whether the universe (which has no environment) can be considered a closed system.

Just as mechanical forces are able to balance each other, so are thermodynamic forces and currents. This explains mutual relations like thermoelectricity, the phenomenon that a heat current balances an electric current in the Seebeck effect and its reverse, the Peltier effect. Relations between various types of currents are subject to a symmetry relation (sometimes called thermodynamics’ Fourth Law) discovered by William Thomson and generalized by Lars Onsager (1931).
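Onsager’s symmetry relation can be stated compactly: when each current depends linearly on all thermodynamic forces (a standard near-equilibrium assumption, not spelled out in the text above), the matrix of transport coefficients is symmetric:

```latex
J_i = \sum_j L_{ij} X_j, \qquad L_{ij} = L_{ji}
```

Here each $J_i$ is a current (heat, electric charge, matter), each $X_j$ a thermodynamic force, and the symmetry $L_{ij} = L_{ji}$ connects, for instance, the Seebeck and Peltier coefficients.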

The First Law, the law of conservation of energy, did not confirm the romantic idea of the unity of all natural forces. Like the Newtonian force, energy is an abstract, generic concept, fruitful only because it can be specified as gravitational, electric, or magnetic. Yet energy, force, and also current are unifying concepts.

Gravitational, elastic, electric, and magnetic forces each have their own character, but they can be physically compared to each other, because forces of different kinds, acting on the same object, are able to balance each other. By accepting one force as a standard, the others can be measured. Forces are commensurable. One needs only one unit of force, aptly named after Newton.

A force without further specification does not actually exist, and the same applies to energy. Many kinds of energy are known, such as kinetic, electric, thermal, gravitational, chemical, and nuclear energy. These can be transformed into each other, meaning that, like forces, energies are commensurable. Accepting one form of energy as a standard, one can measure the others. For mechanical philosophers this standard could only be mechanical work. Among others, James Prescott Joule did much to determine the ratios of the units of heat, mechanical work, and electrical work. Therefore the unit of energy was later named after him.
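The commensurability of energy means that, once a standard is chosen, every form of energy can be expressed in a single unit. A sketch using modern values (the joule as standard; the conversion factors are standard, but the dictionary and function are my illustration):

```python
# Conversion factors to the joule, the SI unit of energy
JOULES_PER = {
    "joule": 1.0,
    "calorie": 4.184,                 # thermal energy; Joule measured its mechanical equivalent
    "kilowatt_hour": 3.6e6,           # electrical work
    "electronvolt": 1.602176634e-19,  # atomic-scale energy
}

def to_joules(amount, unit):
    """Express an amount of energy of any kind in the common standard unit."""
    return amount * JOULES_PER[unit]

print(to_joules(1000.0, "calorie"))  # approximately 4184 J
```

That one table suffices for all forms of energy is exactly what commensurability asserts; which unit serves as the standard is, as the text notes, a practical choice.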

Both force and energy can be used to integrate various fields of physical science on the basis of mechanics. For experimental philosophy the measurability of force and energy was more important, and the choice of a standard was determined by practical considerations like accuracy and reproducibility, such that gradually an electric or atomic standard replaced the mechanical one.

For the mechanicists, unification meant reduction to mechanics. Entropy, thermodynamic forces, and currents are less easy to reduce to mechanics than energy and force, in particular because mechanics is supposed to be reversible, symmetric with respect to kinetic time (as long as friction is ignored), which the Second Law is not. Therefore, during the second half of the nineteenth century, mechanicist and positivist scientists searched frantically for a mechanical explanation of irreversibility as expressed in the Second Law.

Ludwig Boltzmann achieved the best results. In some cases he could make clear why a process towards equilibrium would be irreversible, but to do so he had to apply probability arguments, including the irreversible realization of a chance. The mathematical concept of probability or chance anticipates physical interaction, because only by means of a physical interaction can a chance be realized. This implies an asymmetric temporal relation. Probability always concerns future events, indicating a boundary between a number of possibilities in the present and the actualization of one of these in the future. Therefore, the reduction of irreversible processes to reversible mechanical interactions presupposed what was to be proved.
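Boltzmann’s probabilistic approach to irreversibility can be illustrated with the Ehrenfest urn model (a later illustration, due to Paul and Tatiana Ehrenfest in 1907, not Boltzmann’s own; the function names are mine): although each elementary step is reversible, a system starting far from equilibrium almost certainly drifts towards it.

```python
import random

def ehrenfest_step(n_left, n_total):
    # Pick one of n_total molecules at random; if it is in the left half
    # (probability n_left / n_total), move it right, otherwise move it left.
    if random.random() < n_left / n_total:
        return n_left - 1
    return n_left + 1

def relax(n_total=100, steps=2000, seed=1):
    random.seed(seed)
    n_left = n_total  # start far from equilibrium: all molecules on the left
    for _ in range(steps):
        n_left = ehrenfest_step(n_left, n_total)
    return n_left     # overwhelmingly likely to end near n_total / 2
```

A return to the initial state is not impossible, only astronomically improbable, which is precisely why this probabilistic ‘derivation’ of irreversibility presupposes the asymmetry it is meant to explain.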

The unification of electricity and magnetism with optics was more successful. After Hans Christian Oersted’s discovery of the magnetic action of an electric current, André-Marie Ampère unified magnetism and electricity into electrodynamics.[19] Michael Faraday observed that a magnetic field could change the direction of polarization of a beam of light, implying an experimental relation between light and electromagnetic interaction: ‘Thus it is established, I think for the first time, a true, direct relation and dependence between light and the magnetic and electric forces; and thus a great addition made to the facts and considerations which tend to prove that all natural forces are tied together and have one common origin.’[20]

William Thomson and many others searched for a mechanical model of the ether that could explain the propagation of light and heat, as well as the electromagnetic phenomena. James Clerk Maxwell applied a mechanical model as an analogy to find the laws for electromagnetism, giving the physical concept of a field an integrating function besides energy, force, and current. En passant he established that light is not a mechanical but an electromagnetic wave, implying the unification of optics with electromagnetism instead of mechanics. Next he abandoned the mechanical model.

In Maxwell’s theory energy is the most important concept: ‘I have on a former occasion attempted to describe a particular kind of strain, so arranged as to account for the phenomena. In the present paper I avoid any hypothesis of this kind ... In speaking of the Energy of the field, however, I wish to be understood literally ...’[21]

William Thomson proved that in certain cases energy can be stored in the magnetic field in a coil, and Maxwell predicted the possibility of transporting energy via the electromagnetic field. In 1887, Heinrich Hertz’s experiments confirmed the physical meaning of the concept of a field apart from matter.[22] He also made clear that light is only a small part of the electromagnetic spectrum. In 1896 Guglielmo Marconi invented wireless telegraphy, the start of modern communication technology.

It took some time before Maxwell’s theory was accepted. It became the foundation of Albert Einstein’s special theory of relativity. The view that light is an electromagnetic wave, not a mechanical one like sound, fitted neither mechanicism nor the matter-force duality. It created the problem of the interaction of field with matter, eventually leading to a completely new view of nature, as expressed in relativity theory and quantum physics. It required the preceding unification of physics and chemistry.


10.6. Energetism and positivism


The introduction of the universal concept of energy, its law of conservation, its convertibility, and its commensurability, led to a new abstraction, giving rise to a new answer to the question of how physical systems can interact, although it did not give rise to a new insight into the specific character of electricity, magnetism, heat, or chemical affinity. Thermodynamics is the most consistent elaboration of the new conservation law, just because it is independent of any theory about the structure of matter.

Many scientists arrived at the opinion that Newton’s impressed force, subject to the laws of motion, should no longer be considered the most important expression of physical interaction. They considered the law of conservation of energy to be the constitutional law of physics and chemistry. The energeticists, as they were called, stressed that thermodynamics is independent of mechanics. They considered the principles of Sadi Carnot, William Thomson, and Rudolf Clausius (different expressions of the Second Law) to be empirical generalisations, testable by experiments, independent of metaphysical suppositions such as the atomic hypothesis was considered to be at that time. But they overlooked that thermodynamics could not explain the specific properties of matter, the properties distinguishing one substance from another. For this the atomic hypothesis turned out to be indispensable.

Wilhelm Ostwald was the most avowed energeticist. As a physical chemist he was more attracted to thermodynamics than to mechanics. He rejected Ludwig Boltzmann’s theories explaining the thermodynamical laws from interactions between molecules. Ernst Mach called Kant’s view of rational mechanics as the foundation of physics a prejudice. He argued that the law of conservation of energy is independent of any mechanicist world view.[23] In the theory of electricity Mach considered both the fluid and the ether theories superfluous.[24]

As a sensationalist Mach interpreted science to be an economical ordering of sensory impressions (6.4, 6.5). Therefore he distrusted all concepts that were not directly based on observations. For instance, he tried to prove that Newton’s impressed force could be defined in terms of observable kinetic magnitudes like velocity and acceleration.[25] Heinrich Hertz followed him by designing a theoretical mechanics exclusively based on the fundamental concepts of space, time, and mass. Hertz was mainly interested in a logical analysis of mechanics.[26] He defined the Newtonian force operationally as the product of mass and acceleration, but he failed to make clear how to distinguish electric, magnetic, gravitational, and other types of interaction applying only mechanical principles.

Ernst Mach laid the foundation of energeticism, but he never became one of its convinced adherents, because he valued his positivist views more highly.[27] He related the law of conservation of energy to the nominalist idea of the economy of thought, the attempt to define concepts such that a minimum number is sufficient.[28] Romantic positivism started with Auguste Comte, who in his Cours de philosophie positive (1830-1842) stated that after a theological or fictitious stage and a metaphysical or abstract stage, positive science would enter social philosophy as the final stage of its development.

After the First World War, positivism returned under the flags of logical positivism (the Vienna circle with Moritz Schlick) and logical empiricism (the Berlin circle with Hans Reichenbach), both pursuing Mach’s sensationalism, and of Anglo-Saxon analytical philosophy (Bertrand Russell, Ludwig Wittgenstein). They assumed that science should be founded on protocols of sensory observations supposed to be independent of any theory. Although they called themselves empiricists, they were mostly interested in the philosophical justification of theories, in particular relativity theory and quantum theory, which appeared to require Mach’s views for their philosophical interpretation. They considered observations and experiments only as confirming or refuting theories, neglecting their heuristic value for finding natural laws,[29] whose existence apart from human insight they denied anyhow. It was an ahistoric natural philosophy, focussed on mathematical physics, without much concern for chemistry or biology.

Being concerned with the problem of how theories can be justified, positivism was mainly an epistemology, a theory of knowledge. It contributed next to nothing to the ontology, the history, or the sociology of science. In 1973 it could still be called ‘the received view’ (at least in the United States), but then it was already severely criticized by Karl Popper for its epistemology; by Thomas Kuhn and other historicists for its view on history; and by the social constructivists for its view on the social dimension of science.[30]

Karl Popper argued that the positivist account of the justification of theories failed. He insisted that scientists should first of all be critical about their hypotheses, trying not to justify but to falsify them. Ultimate truth is not realizable (7.7). The historicists (13.1) emphasized that theories, as historical products, arise within a particular culture, and social constructivists believed this culture to be socially determined. Like the historicists they are often relativists, in the extreme asserting (but never proving) that any theory can be replaced by a different one.

It may be surprising to treat positivism in the context of Romanticism, but it is not. Romanticism stresses subjectivity more than objectivity. It is critical of the reality of natural laws. It values sensory experience more than experimental investigation. It shies away from the consequences of scientific discoveries by relativizing, ignoring, or even denying them. The ‘flight from reality’ is a romantic feature. Positivists, historicists, and social constructivists undervalue the self-correcting ability of experimental science.


[1] Alexander (ed.) 1956.

[2] Iltis 1970; Szabo 1977, 47-85; Jammer 1957, 165-166; Papineau 1977.

[3] Leibniz 1686; 1714.

[4] Dooyeweerd 1953-1957, I, 234, 240.

[5] Nadler 2008; Plantinga 1974, part I.

[6] Alexander (ed.) 1956, 16; Rutten, de Ridder 2015, chapter 1.

[7] Leibniz, Essais de Théodicée sur la Bonté de Dieu, la Liberté de l'Homme et l'Origine du Mal (1710).

[8] Cited by Gower 1973, 321-322.

[9] Barrow 1990.

[10] Oeuvres complètes de J.J.Rousseau, 1855, II, 126 (cited by Dooyeweerd 1953-1958, I, 314).

[11] Goethe 1810.

[12] Schelling in 1799, cited by Gaukroger 2016, 115.

[13] Hegel 1830, 202-221, 272-286, 302-318 (sections 312-315, 323-324, 330); Sambursky 1974.

[14] Berkson 1974, 35.

[15] Elkana 1974.

[16] Lindsay (ed.) 1975, 69-79; Ord-Hume 1977.

[17] Elkana 1974, 28-30.

[18] Elkana 1974, 55-57; Brush 1976, 571; Kestin (ed.) 1976, 111, 133; Steffens 1979, 126-127; Jungnickel, McCormmach 1986, I, 165-169.

[19] Ampère 1826.

[20] Faraday 1839-55, III, 19-20.

[21] Maxwell 1864-1865, 563-564; Maxwell 1873.

[22] Hertz 1892; Jungnickel, McCormmach 1986, II, 86-92.

[23] Mach 1872, 5, 17-19, 32-33.

[24] Mach 1883, 472-474.

[25] Mach 1883, 240-243.

[26] Hertz 1894; Jungnickel, McCormmach 1986, II, 142-143.

[27] Bradley 1971; Blackmore 1972, 116-120, 204-227.

[28] Mach 1872.

[29] Franklin 1986.

[30] Suppe (ed.) 1973.



Chapter 11


Immanuel Kant’s Enlightenment



11.1. From Pietism to Evangelicalism


Both mechanicism and experimental philosophy were opposed by scholastic theology, which was no less rationalistic than the Enlightenment. During the Romantic turn Puritanism and Pietism arose as strong opponents of theological rationalism. Mostly known from Johann Sebastian Bach’s Passions, Pietism started in the Lutheran churches with Philipp Jakob Spener’s Pia desideria (Pious desires, 1675) and Nikolaus von Zinzendorf’s revival of the Moravian church. In the Netherlands it was related to the nadere reformatie (including Gijsbert Voet), later to bevindelijkheid (pious empathy), and in England to Puritanism and John Wesley’s Methodism. In America it became known as Evangelicalism. On the European continent it reached its acme during the nineteenth century, but it lost much of its appeal during the twentieth century. In the Anglo-Saxon countries and South America it is still very strong.[1] It is related to Jewish Hasidism, emerging in Poland in the first half of the eighteenth century. In Eastern Europe it was exterminated by the Nazis, but it is still present in cities like Jerusalem, New York, and Antwerp.

Accepting a literal interpretation of what it considered to be the authoritative text (in the Anglo-Saxon countries the Authorized King James Version, 1611; in the Netherlands the Statenvertaling, 1637), Evangelicalism distanced itself from historical Bible criticism, often rejecting modern translations of the Bible. As a consequence, Evangelicalism became associated with an anti-science movement, creationism. As an extreme expression, ‘young earth creationism’ rejected the findings of geology and evolution theory.[2]


11.2. Immanuel Kant


The most important continental Enlightenment philosopher, Immanuel Kant, was educated in a Pietistic environment. He lived his whole life in Prussian Königsberg (now Russian Kaliningrad). Whereas in England, the Netherlands, and initially in France, rationalism was replaced by John Locke’s empiricism, in Germany the rationalist Enlightenment as understood by Gottfried Leibniz reached its peak in the work of Christian Wolff. As a reformer of university education Wolff was the most influential German philosopher between Leibniz and Kant, who in turn was impressed by Hume’s sceptical empiricism and Rousseau’s romanticism, though he shared neither. Kant divided his life into a pre-critical period (his ‘dogmatic slumber’) and a critical one, starting with the publication of Kritik der reinen Vernunft (Critique of pure reason, 1781, second revised edition 1787).

According to Kant’s Beantwortung der Frage: Was ist Aufklärung? (Reply to the question: what is Enlightenment?, 1784),[3] Enlightenment is the human being’s emergence from his self-incurred immaturity. Kant argued that only by the resolution and courage to engage in rigorous critical thought in public debate can one escape this immaturity. His ‘Sapere aude!’ (Dare to know!) became the motto of Enlightenment philosophy. Courage to use one’s own understanding implies a cognitive and epistemological process of idealistic personal character formation (Bildung), free of any suppression. It became the leading motive of German education in the nineteenth century.

In his main work, Kritik der reinen Vernunft, Kant explored the limits of metaphysics understood as the rational foundation of science. He considered dogmatism and Humean scepticism as two necessary stages on the way to critical philosophy. Dogmatism, according to Kant, is the view that on the basis of pure reason one can attain knowledge of the existence of God, of the existence of freedom in a world governed by necessity, and of the existence and immortality of the soul. Scepticism reveals the limits of dogmatism. The third stage, the critique of pure reason, does not concern the contents, the facta of reasoning, but reason itself.[4] Therefore he called this critique transcendental.

Kant argued that for each metaphysical proposition (for instance, that the world has a beginning in time and is finite in extension), its contradiction can be defended equally well. By identifying four of these ‘antinomies of pure reason’ Kant effectively undermined the claims of metaphysics to arrive at fundamental truths by its own force. He proposed to complement theoretical thought with practical knowledge, which he investigated in Kritik der praktischen Vernunft (Critique of practical reason, 1788) and Kritik der Urteilskraft (Critique of judgement, 1790).

Kant replaced Cartesian mechanicism by a kind of mechanicism based on a rationalist a priori foundation of Newton’s laws of motion and of gravity.[5] Applying Newton’s methods of mathematization and successive approximation (7.4-7.5), classical physicists and mathematicians solved many problems put forward in Principia. As far as these concerned the solar system, they culminated in Pierre-Simon Laplace’s Mécanique céleste (Celestial mechanics, 1798-1805). Its lasting result was rational mechanics, ‘the science of motion resulting from any forces whatsoever’,[6] which as ‘classical mechanics’ became a mainly mathematical part of classical physics.[7] It replaced Newton’s geometric way of proof by the developing integral and differential calculus, such that Joseph-Louis Lagrange in the preface to his Méchanique analytique (1788) could boast: ‘no figures will be found in this work. The methods I present require neither constructions nor geometrical or mechanical arguments, but solely algebraic operations subject to a regular and uniform procedure. Those who appreciate mathematical analysis will see with pleasure mechanics becoming a new branch of it.’[8]

However, celestial physics is not only based on mechanics, but also on observations, which in themselves cannot lead to absolute certainty. The technique of observation progressed enormously in the hands of astronomers like William Herschel, his sister Caroline and his son John in England; Friedrich Bessel in Germany; and Urbain Le Verrier in France. Astronomers discovered that they could enhance the accuracy of their observations with probability calculus. Astronomy did not have the same status of certainty as mathematics and mathematical physics or rational mechanics, the only kind of natural science considered by Kant. He believed true natural knowledge to be apodictic, clear and distinct, just as Descartes did. In contrast, Newton and his adherents believed the laws they discovered to be contingent, dependent on the will of the Creator. The application of probability to observations did not diminish the mathematical character of astronomy as proposed by Newton, but increased it by adding new mathematical methods. The use of statistics was not restricted to natural science; it became increasingly applied in the social sciences as well.

Being concerned with forces acting between mass points, extended bodies (whether elastic or not), and incompressible fluids, rational mechanics was no longer considered the foundation of matter theory (as it was in mechanical philosophy), but was studied as an independent field of science apart from chemistry, electricity, magnetism, and optics. Avoiding Cartesian mechanicism as an a priori natural philosophy, mechanics was restricted to a pure theory of motion and of forces of any kind, without attempting to unify these.

Only because of its mathematical rigour and internal consistency was mechanics put forward as a model of scientific research, for instance by William Thomson: ‘I never satisfy myself until I can make a mechanical model of a thing. If I can make a mechanical model, I can understand it.’[9] James Clerk Maxwell found his electromagnetic laws from an analogy to a mechanical model. In his development of the electromagnetic field he replaced Newtonian action at a distance by Cartesian action by contact. In his general theory of relativity, Albert Einstein achieved the same for gravity.

However, Leonhard Euler and, in his wake, Immanuel Kant introduced an amended form of mechanicism, accepting Newton’s laws of motion and gravity. With Roger Boscovich in Theoria philosophiae naturalis (1758) they emphasized a natural philosophy in which Newtonian forces (both attractive and repulsive) played a more important part than motion. Therefore Kant considered himself a Newtonian. Besides, he was an adherent of the moderate form of Enlightenment associated with Newton and Locke. However, contrary to Newton’s experimental philosophy, as an idealist rationalist Kant believed that Newton’s mechanics, including the inverse square law for gravity, could be derived a priori from a few irrefutable axioms.[10] The concept of external force as related to accelerated motion, the cornerstone of Isaac Newton’s dynamic theory (chapter 4), ‘rose almost to the status of an almighty potentate of totalitarian rule over the phenomena’[11] in its interpretation along the lines of Roger Boscovich and Immanuel Kant. Boscovich was the first to realize that the spatial extension of a physical subject is determined by repelling forces.[12] He solved the matter-force dualism by reducing matter to force.

Kant had little understanding of the physical sciences of his day. He ignored Newton’s Opticks (1704) and did not pay much attention to the subsequent development of chemistry (which he did not consider a science but merely a craft),[13] electricity, magnetism, and other fields of scientific research. Nevertheless, his brand of mechanicism remained influential during the nineteenth century, until it became clear that a mechanical explanation of optical phenomena is impossible. In particular Rudolf Clausius, James Clerk Maxwell, and others were initially successful in applying mechanics to the theory of gases, founding a mechanical interpretation of temperature and heat, which (even though proved to be wanting) is still influential in education.[14] Heat is not always mechanical energy, but can also be radiation, for instance, and temperature is not only an expression of the mean kinetic energy of molecules, but an equilibrium parameter with a much wider significance.
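The mechanical interpretation of temperature referred to here identifies, for an ideal monatomic gas, the mean kinetic energy of a molecule with temperature (k is Boltzmann’s constant; the formula is standard kinetic theory, not quoted in the text):

```latex
\bar{E}_{\mathrm{kin}} = \tfrac{3}{2}\, k\, T
```

Precisely because this identity holds only for such idealised systems, it cannot serve as a general definition of temperature, which is the point made above.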

Because Kant in his practical philosophy placed feeling above reasoning, and proposed the view that all natural forces should be united, he became one of the fathers of the Romantic movement in science, although he was by no means a romantic himself.

For Immanuel Kant, the ideal of personality prevailed above the scientific domination of nature. Space, time, and causality were not objective aspects of nature, but subjective, though necessary, conditions of human thought. He did not prove God’s existence on natural arguments, but on moral ones, subordinating rationality to feeling.


11.3. Immanuel Kant’s shift

from naturalism to moralism


Immanuel Kant observed that pure science cannot solve all problems and even leads to unbridgeable antinomies. Therefore he complemented his Kritik der reinen Vernunft with Kritik der praktischen Vernunft, pure reason with practical thought, providing the foundation of morality and religion. For the problem of free will, too, he sought a solution in morality. Theoretical pure reason leaves no room for a free will, but practical reason does. From this point of view it is understandable that a neurophilosopher as a theoretician rejects free will, but simultaneously advocates a practice in which anybody may decide on their own end of life.

Immanuel Kant emphasized the dialectic of nature and freedom. He concluded his Critique of practical reason (1788) with: ‘Two things overwhelm the mind with constantly new and increasing admiration and awe the more frequently and intently they are reflected upon: the starry heavens above me and the moral law within me.’

As a moderate Enlightenment philosopher restricting the scope of pure reason, Kant endeavoured to reconcile Christianity with the Enlightenment, firmly rejecting atheism, materialist determinism, and evolutionism, defending belief in God, freedom of the will, and the immortality of the soul.[15] Thereby Kant distanced himself from the teleological argument of design, and he restricted the reach of physico-theology. His argument for the existence of God did not rely on natural arguments, but on moral ones: ‘Morality inevitably leads to religion, and, through religion, extends itself to the idea of a mighty moral lawgiver outside the human being whose ultimate goal (in creating the world) determines what can and ought to be the ultimate human end.’[16]

However, his deontological arguments inspired by morality were no less rationalistic than the naturalist arguments of physico-theology. Kant considered man to be autonomous, a law unto himself, such that morality cannot be derived from religion. He restricted individual self-sufficiency by the categorical imperative (the law of unconditional duty), based on practical reason, not on divine revelation like the biblical commandments. Kant called this moral autonomy human freedom. Morality, which makes humans different from animals, means being free of any external authority. This is Kant’s ultimate attempt to bridge the tension between nature and freedom. According to Kant practical reason does not apply any criterion from outside. It does not rely on experience. Kant’s independent arguments against the use of happiness or the appeal to God’s revealed will only reinforce a position already reached by his vision of the function and possibilities of reason. It belongs to the essence of reason that it lays down principles which are universal, categorical, and internally consistent. A rational theory of morality presents principles which are and should be acceptable to anybody, independent of circumstances and conditions, and which are consistently obeyed by each rational person at each opportunity. Kant distinguished the maxim, the subjective principle of action, from the practical law, the objective principle. The test for a proposed maxim is easy to formulate: can we or can we not consistently will that everyone always acts according to the maxim?[17]

Kant summarized the universal moral law in the golden rule: always act as you would like everybody to act.[18] In Jesus’ words: ‘Always treat others as you would like them to treat you: that is the Law and the prophets.’[19] But whereas Jesus refers to God’s word, Kant states that the autonomous individual determines ethics on rational grounds, according to ‘the idea of the will of every reasonable being as a general law-giving will’.[20] This generalized autonomous individual is an abstraction in which concrete individual people seem to get lost.[21] In its elaboration Kantians have stressed what one ought not to do: ‘never do to another what you do not want done to yourself’. Kant himself considered this negative expression of the categorical imperative to be trivial.[22]

Kant’s ethics of duties is reduced to precluding acts that restrict the freedom of other people, without paying attention to the consequences. For instance, according to Kant it is not allowed to lie, even if one could save a friend’s life. Kant’s absolutization of the prohibition of lying (probably inspired by his Pietist upbringing) is a consequence of his rationalism: lying is a transgression of the logical principle of excluded contradiction, the principium contradictionis, according to the rationalists the highest law, to which even God is subject.

Alasdair MacIntyre argues that Kant’s maxims are not as consistent as he believed and that his morality is that of a rather conventional bourgeois. MacIntyre concludes that the project of finding a rational justification of morality is a failure.[23]

Whereas for Aristotle justice is the summary of virtue,[24] since Kant a division between ethics and justice has appeared. (Even Herman Dooyeweerd considered the judicial and the ethical to be mutually irreducible aspects of being and human experience, though based on arguments different from Kant’s).[25] Ethics became internalized: it concerns the individual attitude of people towards their rights and duties, whereas justice became external as legalism, determined by a system of laws given by the state, which one has to obey even if it contradicts one’s individual ethics.

In Kant’s three critiques (1781-1790), constituting the pinnacle of moderate Enlightenment, the emphasis shifted from the domination of nature to human freedom. Kant separated natural laws from moral laws, pure reason from practical reason, rational science from the no less rational religion. His philosophy convinced many people, Protestants, Catholics, and Jews, although his influence was mainly restricted to the European continent. Yet he experienced much resistance too, first of all from the radical Enlightenment philosophers, who found his social and political views much too conservative.


11.4. Faith and religion


In Die Religion innerhalb der Grenzen der blossen Vernunft (Religion within the boundaries of mere reason, 1793), Immanuel Kant distinguished religion from ecclesiastical belief, assuming that religion is universally based on reason and hardly differs from ethics, whereas faith concerns the specific dogmas of the churches.[26]

Kant’s view on religion as based on morality was soon challenged by romanticists. For Georg Hegel religion as representation was the second form of development of the absolute mind, after art as contemplation. These two reach a synthesis in philosophy, the third and highest form of development.[27]

Friedrich Schleiermacher, too, based religion on aesthetic experience.[28] He compared an artist to ‘a true priest of the Highest in that he brings Him closer to those who are used to grasping only the finite and the trifling; he presents them with the heavenly and eternal as an object of pleasure and unity.’[29]

In his Reden über die Religion (Addresses on religion, 1799), Schleiermacher emphasized that religion is neither a metaphysic nor a morality, but first of all an intuition, a feeling, the experience of infinity and eternity in the universe. Later he wrote about religion as the sense of absolute dependence. He developed a theory of language and became the father of modern hermeneutics, the theory of interpretation, as a general field of enquiry, including the textual criticism of the Bible. He initiated liberal Protestant theology as an alternative to both Evangelicalism and traditional Reformed theology. ‘Liberals saw the Bible as one of many religious writings, Jesus as one of many religious teachers; they viewed progress as inevitable, human nature as essentially good, and morality as the heart of religion.’[30]

Liberal theology considered many biblical stories (such as those found in Genesis) to be myths. The word myth (from muthos, spoken word) originally means a faith story, often concerned with the past, the emergence of mankind, of a tribe or a village, like the founding of Rome by Romulus and Remus.[31] Sometimes a myth is a utopian scheme, an expectation regarding the future. A myth marks a transition from prehistory to history. Someone accepting a myth does so because they believe the story, not because it can be proved, which is why during the rationalistic Enlightenment myths acquired the negative image of unreliable stories.

A myth does not present verifiable historical facts. It represents a world view having a connective and inspiring function in a community. Such a myth can be found in Genesis 1-3, the story of creation, fall into sin, and the promise of a redeemer. For Emil Brunner, the core of the doctrine of the creation is that persons depend for their existence on God, in whose image they are made. The meaning of the fall into sin is that persons seek, or suppose they have, an autonomy ignoring the distinction between Creator and creature. These claims do not conflict, or compete, with the claims of natural science.[32] In contrast, Rudolph Bultmann proposed to demythologize the Bible.

A faith story like a myth is not a scientific text. Since the Enlightenment, scientific research of the scriptures has sown doubt about the reliability of the Bible. This research wrongly supposed that for Christian faith the Bible acts as a historical book or a scientific discourse. The Bible does not have the intention to write history in Leopold von Ranke’s objectivist sense (13.1). Just like Homer’s Iliad and Odyssey, the biblical books may be used as documents for historical research, for each faith document has an historical origin. It is delivered by former generations, or put into words by a prophet like Ezra or Mohammed, an apostle like Paul, a preacher like Buddha, a reformer like Martin Luther, a philosopher like Karl Marx, or a scientist like Charles Darwin. Enlightenment philosophers adhered to the myths of the social contract, the Communist manifesto, determinism, free market liberalism, evolutionism, materialism, and other forms of reductionism, as well as a variety of nationalistic myths.

In modern theology, the Bible is not first of all considered a historical document, but a normative directive for faith. Nobody needs to accept on historical grounds that Jesus is the son of God. The Bible itself indicates that this is a confession of faith, not a scientifically verifiable fact. No more does anybody need to believe on the basis of historical research that Jesus has risen from the dead, even if the Bible mentions a large number of witnesses having met Him alive after his death.[33] Christians accept the resurrection not primarily as a historical fact, but as the cornerstone of their faith.[34] It is a dogma, a hopeful expression of their faith. Nevertheless, no Christian can doubt the historicity of the man Jesus. Because God became man, He is part of human history.

In Der Römerbrief, a commentary on Paul’s Epistle to the Romans (1919, second revised edition 1922), Karl Barth rejected liberal theology, emphasizing the saving grace of God and humanity’s inability to know God without God’s revelation in Christ. The Bible itself is not a revelation, but it points to acts of God in history, about which it fallibly reports. In the dialectic between God and humanity, in which revelation is only given if it is received, God is ‘entirely different’. He can only be known through interpersonal revelation, not by any kind of natural philosophy or theology.

Barth placed religion as conceived by his liberal contemporaries as Unglaube (unbelief) over and against true belief. Whereas God works faith by grace, religion is an attempt by people who believe themselves to be autonomous to achieve knowledge of God: ‘Religion is unbelief; religion is a matter, perhaps one should say the matter of godless people ... The impotent, but also haughty, presumptuous as well as helpless attempt, by which a man should want to but is unable to achieve, because he only can do that when and if God himself gives it to him: recognition of the truth, recognition of God.’[35]

In contrast, Herman Dooyeweerd considered faith to be a mode of human experience, of which religion is its central motive, ‘the innate impulse of human selfhood to direct itself toward the true or toward a pretended absolute Origin of all temporal diversity of meaning, which it finds focused concentrically in itself.’ [36]

Dooyeweerd rejected the possibility of theoretical thought about God. He stressed that in our pre-theoretical knowledge of God through Jesus Christ not only belief but all human modes of experience are involved. Apart from their terminological differences, the Calvinists Barth and Dooyeweerd agreed on their rejection of natural theology as an autonomous theoretical approach to God.

However, Karl Barth also rejected Christian philosophy as proposed by Abraham Kuyper and continued by Dirk Vollenhoven and Herman Dooyeweerd. In contrast to Barth, these philosophers called the relation of people to their God (whether recognized or not) their ‘religion’. In their religion, human persons concentrate themselves on their true or assumed origin, just as God’s law is concentrated in Jesus’ summary of the law, the central command of love. This concerns natural laws as well as values and norms. In His summary, Jesus mentions the love for God and the love for one’s neighbour in the same breath. This means that the relation between God and an individual human being does not stand apart from the relations that this person maintains with his or her fellows, and with other creatures.

The ethical meaning people apply to their acts determines their attitude towards norms and values, their individual character and communal ethos. As soon as they wonder what the meaning of life is, all people act religiously, even if they do not believe in a personal God. In their religion they respond to the calling to conduct a meaningful life, the calling to do good and counter evil. The empirically established fact that people are conscious of this calling does not coincide with true knowledge of God. To know intuitively to be called does not imply explicit knowledge or recognition of who does the calling. True knowledge of God does not originate from people, but reaches people through revelation and prophecy. The religious choice persons make gives direction to their acts and influences their individual character as well as their shared ethos.

Karl Barth and Herman Dooyeweerd both emphasized that the transcendent God is totally different from created reality. Speaking of temporal reality as the set of all relations within the creation implies that God is not temporal, but eternal: He is totally different from the temporal creation, which He transcends. People have no direct knowledge of the eternal God. Knowledge of God depends on His revelation in Jesus Christ, who became a human being like them. In this way the transcendent God became immanent in the temporal world, allowing human relations with Him.

This contradicts the naive view that eternity is nothing but prolonged time, such that eternal life would be a never-ending afterlife. Before the eighteenth century, theologians considered infinity to be an exclusive attribute of God, but since then infinity became part and parcel of mathematical practice, for instance in the infinitesimal calculus. Infinite sequences are as temporal as finite ones. Infinity is not eternity. If eternity were nothing but perpetual existence, God would be as temporal as His creation. Only God is eternal, totally different from temporal being, which He transcends. People can only have knowledge of the eternal God through Jesus Christ, who became temporal when he came into the flesh.

The faithful confession ‘God is righteous’ does not mean that righteousness is an attribute of God. Because God is totally different from the created world, it would be wrong to consider the normative principles as attributes of God. Being commandments, these apply not to God, but to people.

[1] Sewell 2016.

[2] Numbers 1986; Ruse 2005; Ryrie 2017.

[3] Kant, in Cahoone (ed.) 2003, 45-49.

[4] Kant 1781-1787, A760-761, B788-789.

[5] Kant 1786.

[6] Newton 1687, Preface xvii.

[7] Gaukroger 2010, chapters 3, 8; Goldstein 1959.

[8] Cited by Gaukroger 2010, 148.

[9] Thomson 1884, cited by Brush 1976, 580.  

[10] Kant 1786.

[11] Jammer 1957, 241.

[12] Jammer 1957, 171ff; Berkson 1974, 25-28; Hesse 1961, 163-166.

[13] Kant 1786, 4.

[14] Clausius 1857; Maxwell 1860; Brush (ed.) 1965-1972; Brush 1976.

[15] Israel 2011, chapter 26.

[16] Kant 1793, Preface, IX-X; Israel 2011, 727.

[17] MacIntyre 1981, 45.

[18] Kant 1785, 48, 73-74, 95-96, 108.

[19] Matthew 7: 12; Luke 6: 31.

[20] Kant 1785, 87.

[21] Noddings 1995, 161.

[22] Kant 1785, 86.

[23] MacIntyre 1981, 50 and chapter 5.

[24] Aristoteles, Ethica, V:1.

[25] Stafleu 2007.

[26] Kant 1793.

[27] Hegel 1830, III.

[28] Safranski 2007, chapter 7.

[29] Schleiermacher, cited by Taylor 1989, 378.

[30] Yandell 1986, 448.

[31] Langer 1960, 188 (chapter 7); Troost  2004, 232-233; Von der Dunk 2007, 157-234; Ankersmit 2005, 400-405 (section 8.10).

[32] Yandell 1986, 453.

[33] I Corinthians 15, 6.

[34] I Corinthians 15, 14.

[35] Barth 1957, 51-53 (original edition: I 2, 327-330), my translation.

[36] Dooyeweerd 1953-1958, I, 57.



Chapter 12





12.1. Radical Enlightenment


Whereas in nineteenth-century academic philosophy in continental Europe after Immanuel Kant the personality ideal prevailed, elsewhere naturalism stressed the domination of humanity by nature – not the domination of nature, as had been the case during the early Enlightenment of Francis Bacon, Galileo Galilei, René Descartes, and Isaac Newton. Utilitarianism (David Hume, Jeremy Bentham, and William Paley) reduced morality to natural utility, in particular to the experience of pleasure and pain (hedonism). After naturalism reached a peak in the radical Enlightenment of the eighteenth century, it received a new stimulus from the theory of evolution.

In 1745 the physician Julien Offray de la Mettrie (a disciple of Herman Boerhaave) published L’Histoire naturelle de l’âme (The natural history of the soul) at Paris, and in 1747 L’Homme machine (Man a machine) at Leiden, deploying mechanistic, deterministic, atheist, and materialistic views on human nature, reminiscent of Benedict Spinoza (3.4).[1] As a monist rejecting any dualism of mind and body, La Mettrie argued that humans are not different from animals, which in turn he treated like machines. Together with his hedonism, this met with so much resistance in both France and the Netherlands that he had to flee to Berlin. Protected by the enlightened despot Frederick the Great, he composed his magnum opus Discours sur le bonheur (Discourse on happiness, 1748). La Mettrie opposed moderate Enlightenment with its deism, its teleological argument for God’s existence from design, and its physico-theology.

From about 1750, Denis Diderot and Paul-Henri d’Holbach propagated materialism as a permanent part of radical Enlightenment. Besides La Mettrie’s works, this was expressed in Diderot’s Lettre sur les aveugles (1749),[2] George-Louis Buffon’s Histoire naturelle (1749-1783, 24 volumes), d’Holbach’s Système de la nature (1770), and especially the Encyclopédie ou dictionnaire raisonné des sciences, des arts et des métiers (1751-1772, 28 volumes), edited by Jean d’Alembert and Denis Diderot.[3] This Encyclopédie was initiated in 1745 by a consortium of publishers. Besides d’Alembert (especially concerned with science and mathematics), Diderot soon became the main editor. D’Holbach became an editor as well, contributing several hundred articles on many subjects, including chemistry and mineralogy.[4] After having initially adhered to it, Diderot abandoned physico-theology, opposing Locke, Newton, and deism, and adopting a deterministic evolutionary naturalism.[5] In France, even though d’Alembert in his introduction to the Encyclopédie paid lip service to Francis Bacon, Isaac Newton, and John Locke (1.2), these radical views started to dwarf moderate Enlightenment. In political and social life radical philosophy became the mouthpiece of the French revolution of 1789.[6]


12.2. Moderate and counter-Enlightenment


Moderate Enlightenment, represented in France by François-Marie Voltaire,[7] remained dominant everywhere else.

A common feature of all Enlightenment philosophers is their rejection of scholastic Aristotelianism. This induced the opposition of many conservative theologians, whether Catholic (including both Jesuits and Jansenists), Calvinist, Lutheran, or Anglican. They stuck to the medieval accommodation of Aristotelian philosophy with Christian theology, as wrought by Thomas Aquinas after Avicenna and Maimonides had done the same for Muslim and Jewish theology, respectively.

Their distinction of the natural and the supernatural realms was not disputed by the mechanists and the moderate Enlightenment philosophers. Like almost all theologians, they abhorred Balthasar Bekker’s book De betoverde weereld (The world bewitched, 1691-1693), which criticized many superstitious views and magical practices, arguing that only God is supernatural. About 1500 almost everybody believed in God for three reasons: the natural world testifies to a divine plan; social communities like cities, kingdoms, and the church point to a higher authority; and people lived in a charmed world full of benignant and malignant spirits.[8] Therefore, when the reformed minister Bekker asserted that the world is not under any supernatural spell, he undermined common faith, according to his colleagues. Though much despised, his book helped to put an end to witch hunting, which had been introduced after the Middle Ages by the Renaissance.[9] During the seventeenth century, only Benedict Spinoza denied the possibility of miracles,[10] although both Isaac Beeckman and Simon Stevin (with his slogan ‘wonder en is gheen wonder’: a miracle is not a miracle) were sceptical in this respect. Beeckman observed that in philosophy one must proceed from wonder to no wonder, whereas in theology the reverse should occur. Thereby he rejected any realm of ghosts, witches, and monsters between the natural and supernatural realms.[11]

As a weapon against atheist or pantheist radical Enlightenment, both moderate Enlightenment and counter-Enlightenment made use of physico-theology as a rational foundation of natural religion. In the first half of the eighteenth century, especially Newtonians like Samuel Clarke and Richard Bentley in England, Colin MacLaurin in Scotland, Jan Swammerdam and Bernard Nieuwentijt in the Netherlands, and François-Marie Voltaire in France, as well as Gottfried Leibniz and Christian Wolff in Germany propagated physico-theology based on natural insights. Colin MacLaurin asserted that ‘natural philosophy is subservient to purposes of a higher kind, and is chiefly to be valued as it lays a sure foundation for natural religion and moral philosophy, by leading us, in a satisfactory manner, to the knowledge of the author and governor of the universe’.[12]

During the French Terreur, the rule of terror lasting from 1792 to 1794, the deist Maximilien de Robespierre acted as a high priest in the worship of reason and the supreme being as the natural state religion, with immortality as its main dogma. Glorifying the views of Jean-Jacques Rousseau, this cult was a romantic reaction against the atheism of the radical Enlightenment.

During the nineteenth century the counter-Enlightenment became more and more romantic. In particular the revival of the Catholic Church was strongly influenced by Romanticism. The Prussian Lutheran Friedrich Julius Stahl and the Dutch reformed Guillaume Groen van Prinsterer and Abraham Kuyper were romantic representatives of the counter-Enlightenment, in the Netherlands called ‘anti-revolutionary’.

In nineteenth-century England, William Paley’s books Principles of moral and political philosophy (1785) and Natural theology, or evidences of the existence and attributes of the Deity (1802) were widely read and very influential. Paley became famous because of the teleological watchmaker argument, though he did not invent it.[13] Anybody confronted with something as complicated as a watch will admit that an intelligent being must have designed and made it. In a similar way plants, animals, and human beings point to an intelligent creator – an argument revived in the twentieth century in the guise of ‘intelligent design’. This argument from design for the existence of God complements the ontological argument from perfection (God is perfect, and someone who does not exist cannot be perfect), as well as the argument from causality (if everything has a cause, there must necessarily be a first cause).[14]

In some polytheistic religions, the gods, like men, are supposed to be subject to an impersonal moral power, such as the Greek anankè or the Indian karma.[15] Because an impersonal superpower was no option in the West, rationalistic theologians took a different path. Following Augustine’s neo-Platonism, they chose as a starting point for their rational analysis the definition of God as a perfect being with perfect attributes (3.1, 3.4).[16]

God’s perfection would imply the simplicitas Dei, the neo-Platonic doctrine (due to Augustine, and defended by Anselm of Canterbury and Thomas Aquinas) that God is one and does not have parts. In this respect Christian philosophers had to argue against Jews and Muslims about the trinity.

Assuming that whatever is changeable cannot be perfect, perfectionists state that God must be unchangeable. The theologian Emil Brunner criticized the general trend of Protestant Scholastics to ground their entire systematic theology in this idea of simplicitas Dei. Brunner argued that the notion only arises if one makes the abstract idea of the Absolute the starting-point for our thought.

The Bible nowhere indicates that God would be unchangeable in all respects.[17] God accompanies the history of His people, sharing in their suffering. Compared to Homer’s epics, the Bible presents itself as a volume of historical narratives with various authors.[18] It is not strange to write a biography in which God acts as the principal person of a literary work.[19] In the Old and New Testament God reveals Himself always as a concrete person in historical situations, and never as a rational abstraction like the absolute, the perfect being, or providence.


12.3. Physico-theology


As a branch of natural theology, physico-theology welcomed each scientific result as a new proof of the existence of a benevolent creator.[20] The belief in God was increasingly built on the progress of Newtonian science.[21] In particular the argument from design, due more to Plato[22] than to Aristotle, was popular. The effectiveness and usefulness of nature required as an explanation the existence of a suitable building plan and a conscious designer. David Hume rejected the argument from design,[23] but his purely philosophical views made little impact on the scientific community, which generally adhered to physico-theology until the middle of the nineteenth century.

However, in 1755, Portugal experienced an earthquake with a death toll in Lisbon alone of between 10,000 and 100,000 people. It made a deep impression on the Enlightenment philosophers who started to question the idea of a benevolent God. It also led to the birth of modern seismology and earthquake engineering, replacing supernatural intervention by natural explanations.

In physico-theology, the almighty God was required to explain all phenomena that could not be explained by natural laws, but the increasing knowledge of nature diminished the range of this ‘God of the gaps’. Generally, besides reason, two sources of knowledge of God were acknowledged: the Holy Scripture as word revelation, and nature as creation revelation. In case of conflict, both Francis Bacon and Galileo Galilei gave priority to natural science (1.1). Word revelation lost much of its appeal, not because of science, but because of the criticism exerted by Enlightened theologians, treating the Bible as any other human text (12.5). Since the end of the nineteenth century, the two revelations appeared to lead to contrary views, and many people started to consider science a competitor of religion, with its own view of creation, fall into sin, and redemption. In the twentieth century the not very successful idea of a physical ‘theory of everything’ expressed the temptation to find God through science.[24]

The weakness of physico-theology is that it may be able to prove the existence of God as the Creator of the world, but it in no way leads to the message of the Gospel, to miracles, and to the authority of the church. Already in the seventeenth century Blaise Pascal criticized the Enlightenment project to find the ‘philosophers’ God’ (5.4). In fact physico-theology provided more support for a natural religion, an enlightened providential deism, than for Christianity, whether Catholic or Protestant. Therefore miracles as reported by eye-witnesses in the New Testament (in particular concerning Christ’s resurrection) were often presented as evidence additional to natural theology.

Starting with Spinoza, the radical Enlightenment rejected the existence of a supernatural being entirely, and therefore physico-theology as well. Benedict Spinoza and Albert Einstein identified God with nature or with natural laws,[25] while others replaced God by nature. This pantheism led inevitably to naturalism, a kind of reductionism, but apart from that there is little consensus about its contents.[26] One may distinguish ontological, epistemological, and methodological naturalism.

Ontological or metaphysical naturalism is the deistic or atheistic world view denying supernatural interventions in reality and assuming that humanity is completely subject to natural laws. Human values and norms should be explained as results of evolution. An important characteristic of ontological naturalism is its monism, the rejection of the duality of body and mind, as proposed by Aristotelianism, moderate Enlightenment, and almost all theologians. Agnostic epistemological naturalism says that supernatural intervention is unknowable, colliding with the biblical and later stories of miracles. Darwin’s most important defender, the great rhetorician Thomas Huxley (‘Darwin’s bulldog’), introduced in his enlightened confession the concept of agnosticism as an alternative to both theism and atheism: ‘Agnosticism, in fact, is not a creed, but a method, the essence of which lies in the rigorous application of a single principle. That principle is of great antiquity; it is as old as Socrates; as old as the writer who said: “Try all things, hold fast by that which is good”; it is the foundation of the Reformation, which simply illustrated the axiom that every man should be able to give reason for the faith that is in him; it is the great principle of Descartes; it is the fundamental axiom of modern science. Positively the principle may be expressed: In matters of the intellect, follow your reason as far as it will take you, without regard to any other consideration. And negatively: In matters of the intellect, do not pretend that conclusions are certain which are not demonstrated and demonstrable. That I take to be the agnostic faith, which if a man keep whole and undefiled, he shall not be ashamed to look the universe in the face, whatever the future may have in store for him.’[27]

This rationalistic view, reminiscent of Immanuel Kant, is a far cry from experimental philosophy (chapter 5), as applied in 1865 by Gregor Mendel in his discovery of the laws named after him, which in the twentieth century formed the basis of the development of Darwin’s theory of evolution.

Methodological naturalism excludes supernatural intervention as a principle of explanation in science, even if it would exist or could be known. Since physico-theology lost its attraction, theist scientists started to adhere to this moderate form of naturalism, or at least to practise it. It leads to a separation of Sunday faith from weekday science. What remains is the inclination of naturalists to explain everything in the experienced reality with the help of natural laws alone. This reductionism finds an expression in, for instance, evolutionism, since the twentieth century the prevailing western world view.

12.4. The uniformity of natural laws

When Isaac Newton in 1703 became president of the Royal Society, he proclaimed: ‘Natural philosophy consists in discovering the frame and operations of nature, and reducing them, as far as may be, to general rules or laws, - establishing these rules by observations and experiments, and thence deducing the causes and effects of things.’[28]

Since the seventeenth century, the aim of physical science was to discover the laws of nature (chapters 6 and 7). These laws were assumed to be valid everywhere, expressing a cosmic order. Initially it was not stressed that they would also be valid for all times. This was not a pressing problem as long as the created cosmos was believed to be relatively young, at most ten thousand years old, and not to last much longer, the second coming of Christ being expected soon. The ordered cosmos seemed to be quite stable. Only with the rise of geology, around 1800, did the question arise of whether the laws are uniformly valid everywhere and always. Moreover, the findings of the geologists appeared to be at variance with the biblical stories about the creation and the flood.

Already in the seventeenth century, Nicolaus Steno investigated the geological history of Tuscany, proposing an organic origin of fossils. In Prodromus (1669) he stated as a principle for research that the surface of the earth contains the evidence of its own development. Steno asserted the then generally accepted view that no discrepancy between the biblical and scientific insights could exist. During the eighteenth century, this view changed dramatically. Scientists started to claim that their findings should lead to a revision of the exegesis of the Bible (12.5). Geologists arrived at the insight that the earth is much older than the Bible suggests.[29] Investigations of mountains, river valleys, quarries, and mines made clear that the surface of the earth consists of layers or strata, recognizable by the occurrence of specific fossils. The lowest and oldest layer, called primary, does not contain fossils, which on the other hand are abundantly present in the secondary, tertiary, and quaternary layers. Fossils of sea life were found at considerable heights.[30]

The explanation of this stratification divided the geologists into two camps. The neptunists, followers of Abraham Werner, assumed a universal flood. The plutonists, such as the Enlightenment philosopher James Hutton, stressed the internal terrestrial heat, giving rise to volcanic eruptions.[31] Neptunists explained stratification by assuming that all rock formations had been precipitated, either chemically or mechanically, from an aqueous solution or suspension.[32] (In a solution the particles are molecules; in a suspension they are larger but still microscopically small.)

Initially most geologists supported neptunism because it confirmed their natural theology, but the vulcanists had better geological arguments.[33] They did not deny that some recent strata could have an aqueous origin, but they believed that the oldest ones are igneous, referring to experiments made by Hutton’s friend, Joseph Black.

The controversy between neptunists and vulcanists receded to the background after William Smith, a drainage engineer and surveyor who was not much interested in philosophy or natural theology, introduced the method of identifying geological layers by their fossil contents. Thereby he founded palaeontology.[34] In 1815 he produced the first geological maps of England and Wales.

In 1788 James Hutton made the famous remark that ‘the result, therefore, of our present enquiry is, that we find no vestige of a beginning, - no prospect of an end’, distancing himself from natural theology. In Theory of the earth (1785), he proposed the uniform validity of natural laws as a leading principle of geological research. Processes in the past or in the future are not really different from those in the present that can actually be observed. He opposed actualism (in the past also called uniformitarianism) to the then prevailing catastrophism, though of course he did not deny the occurrence and results of catastrophes like the earthquake of Lisbon (1755). He made clear that mountains, valleys, and even islands are not really stable, but continually rise, sink, and slide horizontally.

It was now generally accepted that the earth is much older than the Bible suggests. Nevertheless, natural theology still succeeded in convincing most geologists of the occurrence of the flood as part of human history. The catastrophists, including Georges Cuvier in France and William Buckland in England, believed that occasionally occurring catastrophes play a much larger part than Hutton admitted.[35] However, Charles Lyell argued successfully in favour of the uniformity of natural laws in his Principles of geology (1830-1833).[36]

By administering the coup de grâce to the deluge he deprived catastrophism of its most popular example. In his contribution to the eight Bridgewater treatises, The power, wisdom and goodness of God as manifested in the creation (1833-1840), financed from the estate of the Earl of Bridgewater and intended to support natural theology, Buckland did not even mention the biblical flood.[37] Like Cuvier, Lyell criticized Jean-Baptiste Lamarck’s theory of transformational evolution. His book stimulated Charles Darwin in writing On the origin of species (1859). Earlier, Robert Chambers’ anonymously published and popular Vestiges of the natural history of creation (1844),[38] although very controversial, had prepared the acceptance of Darwin’s theory of evolution, which eventually replaced Lamarck’s theory.


12.5. Biblical exegesis


Initially, natural theology was concerned with finding proofs of the existence of God and with deriving His attributes from natural knowledge. It took as an infallible dogma that natural and biblical truths cannot contradict each other. Since the nineteenth century this was no longer evident. Both geology and evolution theory made a new exegesis of the first few chapters of Genesis desirable if not necessary.[39] Natural theology shifted its attention to the harmonization of biblical exegesis with scientific insights, stimulating theologians to reconsider the principles of biblical exegesis.

Before, during, and after the Enlightenment, opposition to science was often derived from a literal interpretation of the Bible, but the literal reading was never considered the only one. In the third century, Origen of Alexandria divided scriptural interpretation into literal, moral, and allegorical. Medieval biblical exegesis distinguished literal from allegorical, and figurative from analogical exegesis. The modern distinction is between lingual, historical, and theological exegesis of the Bible. The Bible is not only partly at variance with science and historiography, but also contains internal contradictions. [40]

It is not merely a matter of the exegesis of a given text, but also of the establishment of the text itself. Since the Renaissance, hermeneutics has provided semantic rules for the interpretation of texts, as well as methods of lingual analysis, such as comparing different texts with each other. In the fourteenth and fifteenth centuries, humanist philosophers like Francesco Petrarca and Lorenzo Valla criticized various documents on hermeneutic principles. They found that ancient works were often translated poorly, and that biblical manuscripts sometimes contradicted each other. Comparing a number of different manuscripts, Desiderius Erasmus produced a new Greek version of the New Testament (1516), showing many discrepancies with the Latin Vulgate. The Vulgate had been composed in the fourth century by Jerome, who had already observed discrepancies between the then available Hebrew text of the Old Testament and the older Greek Septuagint. The council of Trent (1545-1563) declared the Vulgate authoritative for the Catholic Church, prohibiting any other translation, in particular into the vernacular. Various manuscripts of both the Old and the New Testament appeared to differ, sometimes considerably. For their translations of the Old Testament, Calvinists preferred the Masoretic text (circa 1100), also used in the synagogues. Theologians defending the literal inspiration of the Bible were forced to assume that the text as we know it is not the original one, which was supposed to be lost. However, this gave rise to the problem of how to base a reliable theology on the available ‘corrupt’ text.

John Calvin rejected a literal interpretation. For instance, he wrote positively about new findings of astronomy, even if these were at variance with a literal reading of the Bible. He stated that the Bible is written for common people, accommodating assumptions accepted at the time of writing. This view was shared by Galileo but rejected by the papal Inquisition (2.2). Calvin stated that the Bible is not a source of knowledge of nature; it is not an encyclopaedia of natural or historical facts. Instead, he argued that the Bible is meant to direct human life to the service of God.

Because Calvinism assumes that the Bible accommodates common sense and daily knowledge as accepted by its authors, it does not need to harmonize the Bible with modern science or history, and not even with itself. Each Bible book or part of it should be read in the context of the community of believers for which it was primarily intended, at least as far as this is known. This principle, also showing how to deal with various contradictions within biblical texts, differs from canonical exegesis, from concordism and from fundamentalism.

Canonical exegesis states that each part of the Bible must be explained in the context of the canon, of the Bible as a whole, as conceived in the tradition of the church. Canonical exegesis aims to harmonize diverging biblical texts (in particular the four gospels) with each other. Christian exegetes are often inclined to explain texts from the Old Testament so as to confirm Christian theology. Canonical exegesis is the official view of the Catholic Church, confirmed in 2008 by pope Benedict XVI, but it also finds adherents among Protestants, perhaps with less stress on the ecclesiastical tradition.

Concordism says that the Bible does not contain scientific information, but is partly in need of harmonization with the results of science and extra-biblical historical sources. A recent example is Gijsbert van den Brink’s En de aarde bracht voort (And the earth brought forth, 2017). From his reformed background he poses the question: suppose that the current theory of evolution is correct, what does this mean for Christian faith?[41] Although he criticizes concordism,[42] he arrives at an uncertain harmony, uncertain because he refuses to take a position on the question of whether the current theory of evolution is correct, with the weak excuse that he is not a natural scientist.

Fundamentalist theologians and other Christians consider the Bible an inerrant source of knowledge, an encyclopaedia to be used to criticize and, if need be, to correct scientific results. They are inclined to ignore differences between biblical texts. Modern naturalists commenting on Christian faith tend to direct their criticism at this encyclopaedic fundamentalism, ignoring Calvin’s views as well as canonical exegesis and concordism.[43]


12.6. Enlightened biology


As long as natural philosophy was focussed on the physical sciences, ontological naturalism implied materialism, a world view many people rejected intuitively. The radical Enlightenment’s materialism assumed that plants and animals consist of the same substances as classified in inorganic chemistry, in particular hydrogen, oxygen, carbon, and phosphorus, although Antoine François Fourcroy admitted that chemical processes could not reproduce living matter.

As an antidote some philosophers and scientists propagated vitalism.[44] Besides the physical forces and chemical affinity, Friedrich Kielmeyer postulated a vital force, acting only in living beings (1793). Jöns Jacob Berzelius introduced a vital power, restricted to animals and situated in the nervous system. Berzelius pointed out that such a biological principle is necessary to explain the existence of the multitude of different species of plants and animals. Yet, because force is after all a physical concept, vital force was not a promising idea. Moreover, nobody was able to identify anything like a vital force until the rise of evolution theory, when Charles Darwin and Alfred Wallace introduced natural selection as the engine of the evolution of living beings, without calling it a force. Meanwhile materialism prevailed, until Louis Pasteur proved in 1860 that generatio spontanea is illusory: living beings only arise from other living beings.

Despite the Enlightenment, biology initially remained faithful to Aristotle. Carl Linnaeus’ classification of plants and animals, too, was inspired by Plato and Aristotle. The binomial nomenclature he applied in Systema naturae (1737, tenth edition 1758) and in Species plantarum is still in vogue. For scientific reasons, in 1758 he classified mankind as a species among the mammals, related to the apes. This move was severely criticized, not only by theologians.

Mechanist philosophers tried to explain the functioning of plants and animals in mechanical terms. Descartes assumed that an animal is just a machine, but he did not apply this to human beings. A century later, this consequence was drawn by Julien de La Mettrie in L’homme machine (1747).

Just like Immanuel Kant, Linnaeus believed the species to be unchangeable. However, shortly afterwards geologists investigating fossils established that the earth is much older than previously assumed. Many species of animals and plants living in prehistoric times are now extinct. Evolution became part of radical Enlightenment philosophy, in particular in Georges-Louis Buffon’s influential Histoire naturelle (1749-1783, 24 volumes), and in Johann Herder’s no less influential Ideen zur Philosophie der Geschichte der Menschheit (1784-1791, 4 volumes). In biology, Jean-Baptiste de Lamarck’s book Philosophie zoologique (1809) received little support for its views that properties acquired during an organism’s life can become inheritable, and that evolution is a process that continually repeats itself.

In contrast, the publication of Charles Darwin’s On the origin of species by means of natural selection (1859) drew much attention and approval, besides the criticism to be expected. Darwin questioned the invariance of species and thereby Linnaeus’ classification. He effectively undermined the argument from design for the existence of God (but not the idea of God as the first cause), because he explained biological evolution by natural selection acting on random events. He reversed the argument from design, intended to explain improbable situations by intelligent design, by using improbable events as necessary elements of natural selection without direction. Darwin rejected any kind of goal-directedness, contrary to Lamarck, who saw in evolution an inherent striving for perfection: evolution is rectilinear, goal-directed, and climbs the ladder of nature, he asserted. Until the end of the nineteenth century, evolution was not generally accepted, not even by scientists. Biologists who accepted evolution often preferred Lamarck’s theory, until it became clear that there was not a shred of evidence for the inheritance of acquired properties.
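Darwin’s reversal of the design argument can be illustrated with a toy simulation: undirected random mutations, filtered by selection, suffice to raise fitness without any goal-directedness. The sketch below is a deliberately simplified illustration; the population size, mutation rate, and bit-string ‘genome’ are arbitrary assumptions, not taken from the text.

```python
import random

def evolve(pop_size=100, genome_len=20, generations=50, mutation_rate=0.01, seed=1):
    """Toy model of natural selection acting on random mutations.

    Fitness is simply the number of 1-bits in a genome; each generation the
    fitter half of the population survives and reproduces, with random
    copying errors (mutations). No goal is built in anywhere.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    fitness = sum  # fitness of a genome = number of 1-bits
    start = sum(map(fitness, pop)) / pop_size
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # selection: fitter half survives
        pop = []
        for parent in parents:
            for _ in range(2):              # each survivor leaves two offspring
                # each bit is flipped with probability mutation_rate
                child = [bit ^ (rng.random() < mutation_rate) for bit in parent]
                pop.append(child)
    end = sum(map(fitness, pop)) / pop_size
    return start, end
```

Running `evolve()` shows the mean fitness rising from roughly half the maximum towards the maximum, purely through the push of random variation and the pull of selection.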

It is ironic that Gregor Mendel’s almost contemporary discovery (1865) of the laws named after him, which started genetics as the necessary foundation of evolution theory, was ignored for 35 years. Only the synthesis of Darwin’s idea of natural selection with genetics, microbiology, and molecular biology (about 1930) led the majority of biologists to accept evolution.
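Mendel’s law of segregation can be illustrated by simulating a monohybrid cross of two heterozygous (Aa) parents: the dominant phenotype is expected in about three quarters of the offspring. The sketch below is illustrative only; the trial count and random seed are arbitrary assumptions.

```python
import random

def monohybrid_cross(trials=100_000, seed=0):
    """Simulate Mendel's monohybrid cross Aa x Aa.

    Each offspring inherits one allele chosen at random from each parent;
    the dominant phenotype appears when at least one 'A' allele is present.
    Mendel's law of segregation predicts a 3:1 phenotype ratio.
    """
    rng = random.Random(seed)
    dominant = sum(
        1 for _ in range(trials)
        if "A" in (rng.choice("Aa"), rng.choice("Aa"))
    )
    return dominant / trials

# The observed fraction should be close to 0.75, i.e. the 3:1 ratio.
ratio = monohybrid_cross()
```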

Physicists only became convinced after they accepted that the macrocosm, too, is subject to evolution. Until the investigation of radioactivity they accepted a calculation by William Thomson showing that the earth would not be old enough to satisfy Darwin’s theory. The discovery of Edwin Hubble’s law in 1929, based on observations (and theoretically predicted in 1927 by Georges Lemaître from general relativity theory), implied that all distant galaxies are moving apart at a speed proportional to their mutual distance, meaning that the universe expands continuously at a decreasing temperature. From this law the age of the universe can be estimated to be about 13.7 billion years.
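The order of magnitude of this estimate follows from Hubble’s law itself: for a constant expansion rate, the age of the universe is roughly the reciprocal of the Hubble constant. The sketch below assumes a present-day value of about 70 km/s per megaparsec (a conventional modern estimate, not a figure from the text); the more precise 13.7 billion years comes from detailed cosmological models, not from 1/H0 alone.

```python
# Rough age of the universe from Hubble's law: for constant expansion,
# age ~ 1/H0.
H0 = 70.0                   # Hubble constant in km/s per Mpc (assumed value)
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

age_seconds = KM_PER_MPC / H0               # 1/H0 expressed in seconds
age_gyr = age_seconds / SECONDS_PER_YEAR / 1e9

# age_gyr comes out near 14 billion years: the right order of magnitude.
```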

Nuclear, atomic, and molecular science, in concert with astrophysics, has been able to explain the evolution of chemical elements and compounds, where evolution is understood as their gradual realization.[45] Physical and chemical laws determine which structures are possible in certain circumstances, such as temperature and the availability of necessary components. Therefore natural laws, both generic and specific, may be called the pull of chemical evolution, whereas random events, in particular circumstances, constitute its push. A similar metaphor can be applied to biological evolution, the pull being specific laws allowing viable species, and the push being accidental mutations and natural selection in suitable circumstances.

Whereas for physical and chemical structures the specific laws are sufficiently known, this is not (yet) the case for biological species. On the highest taxonomic level, about 35 living animal phyla are known, each with its own body plan. [46] This is a morphological expression of the law for a phylum, a covering law for all species belonging to the phylum. It is remarkable that all these phyla manifested themselves almost simultaneously (i.e., within a geological period of several million years at most) during the Cambrian, about 550 million years ago. Afterwards, not a single new phylum has arisen, and the body plans have not changed. [47] The evolution of the animal world within the phyla (in particular the vertebrates) is much better documented in the fossil record than that of the other kingdoms. Nowadays DNA research also contributes much information. Evolution is an open process, whose natural history can be investigated, but whose future cannot be predicted.

How suitable are the physical circumstances for the emergence of living beings in the universe? It is a remarkable and unexplained fact that the values of a number of physical constants (including, for instance, the gravitational constant) seem to be ‘fine-tuned’ so as to allow the existence of living beings. This means that if one or more of these constants had a slightly different value, living systems as we know them would be impossible. [48] Some adherents of natural theology consider this a new argument for the existence of God, [49] but it does not differ much from William Paley’s argument from design (12.2).


12.7. Evolutionism

Evolution theory may be summarized in four steps. [50] Historical evolution or progressive creation concerns the insights that according to the geological time scale the earth is about 4.6 billion years old, and that living beings appeared on earth successively, as can be derived from the fossil archive. The second step, common descent or common ancestry, explains this historical succession by the hypothesis that any form of life is descended from an earlier one. The strongest version assumes that all living beings on earth have the same common ancestor. The third step is the strong Darwinian evolution theory, stating that the only engine of evolution is natural selection based on random mutations. The fourth step dates from the synthesis of natural selection with genetics, microbiology, and molecular biology, circa 1930. [51] This moderate neo-Darwinism recognizes that besides natural selection, structural principles constitute constraints on evolution. This structuralist evolution theory [52] is rejected by radical evolutionists, but is practised by all biological paleontologists, who investigate structures that, according to fossil evidence and DNA analysis, have lasted for hundreds of millions of years.

The view that natural structures are realized successively by evolution belongs to the now prevailing scientific world view and is also accepted by many Christian philosophers and scientists. In 1956 Jan Lever published Creatie en evolutie (Creation and evolution, 1958), convincing many Christians of the viability of evolution.[53] According to Herman Dooyeweerd, evolution is a subjective process of becoming in which structural principles of created reality are successively realized. ‘It concerns the realization of the most individualized and differentiated structural types in plants and animals. It does not concern the structural types as laws or ordering types for the long process of the genesis of the flora and the fauna within the order of time.’[54]

Neither evolution as a natural phenomenon nor its theory should be identified with evolutionism. This is a reductionist, ontological-naturalist, materialist, and exclusive world view, in which ‘... evolution functions as a myth, ... a shared way of understanding ourselves at the deep level of religion, a deep interpretation of ourselves to ourselves, a way of telling us why we are here, where we come from, and where we are going.’[55]

Evolutionism applies the concept of evolution at all times and everywhere, including the humanities, theology not excepted.[56] In contrast to evolutionism, evolution theory is a scientific construction, restricted to physical, chemical, and biological processes, as practised by natural scientists.[57]

One of the basic assumptions of the standard Darwinian theory of evolution is that each living organism is genetically related to all others. As far as known, there is no living individual that does not descend from another one. This proposition, omne vivum ex vivo, expresses a universal biological law. It is not an a priori statement (until the middle of the nineteenth century scientists considered generatio spontanea very well possible [58]), but is based on empirical research. This general law prohibits a biological explanation of the emergence of the first living beings. There are more unexplained transitions, like the emergence of the first eukaryotic cells (having a cell nucleus, unlike prokaryotes); of multicellular living beings; of sexual reproduction; and of the first plants, animals, and fungi. Finally, there is the emergence of humankind, for which the theory of evolution may be able to give a necessary, but not a sufficient explanation.

Naturalists reduce the normative aspects of reality to the natural ones. They believe that everything is inexorably subject to natural laws. Sometimes they believe that people are not free to act, and cannot be held responsible for their acts and the ensuing consequences. [59] That is highly remarkable, because both physics and biology heavily depend on the occurrence of stochastic or random events, and do not provide a deterministic basis for naturalism (chapter 9).

The laws of Darwinian evolution, about adaptation, natural selection, and common descent, are generic, not specific. This is a property they share with the physical laws of mechanics and of thermodynamics. In Darwin’s time, positivist energeticists (10.6) like Wilhelm Ostwald, Ernst Mach, and initially Max Planck believed that all of physics should be explained from these general laws, interpreted to be deterministic. They scorned Ludwig Boltzmann for applying statistics to physical problems. They rejected the reality of atoms and molecules. The development of physics during the twentieth century made clear that the generic laws act as constraints, showing not what is possible but rather what is impossible. Processes violating the law of energy conservation are prohibited, for instance. In the twentieth century it became clear that these generic laws are not sufficient. Physicists discovered typical conservation laws like the law of conservation of electric charge, besides symmetry laws, again prohibiting certain conceivable processes (chapter 8). These laws leave room for processes that might happen, without determining which processes these would be, which partly depends on accidental circumstances.

Similarly, Darwin’s theory may be able to explain which circumstances allow (or in particular do not allow) species to come into being, or force them to become extinct. But the theory does not explain why some species correspond with stable organisms in these circumstances and others do not. Since the twentieth-century synthesis of Darwin’s theory with genetics and molecular biology, biologists have become aware that the generic laws of evolution should be complemented with specific laws in order to explain the enormous variety of living beings (6.7). [60]

Naturalism interprets human history as the continuation of natural evolution, determined by physical-chemical, biological, and psychological laws and relations. The study of animals living in groups is called ‘sociobiology’.[61] For quite some time, Edward Wilson’s sociobiology has been controversial in so far as its results were extrapolated to human behaviour.[62] Sociobiology was accused of ‘genetic determinism’, i.e. the view that human behaviour is mostly or entirely genetically determined.

In biological evolution, the transfer of genetic information is central. Radical evolutionists like Richard Dawkins even assume that the bearers of this information are not the individual plants and animals or their populations, but the ‘selfish’ genes themselves.[63] However, Ernst Mayr asserts: ‘The geneticists, almost from 1900 on, in a rather reductionist spirit preferred to consider the gene the target of evolution. In the past 25 years, however, they have largely returned to the Darwinian view that the individual is the principal target.’[64]

Naturalists are inclined to describe human history by analogy with evolution, in particular by applying Charles Darwin’s ideas about adaptation and natural selection. Instead of genes they consider memes: elements of culture that are distributed non-genetically among bearers of information. Memes would form the units of the cultural transfer of experience.[65]

The historical and cultural transfer of experience in asymmetrical relations (like that of teachers and their pupils) is as diverse as human experience itself.[66] It includes the transfer of knowledge, starting with practical know-how. Education and language are instrumental in the transfer of experience, which is completely absent in the animal world. The transfer of experience as an engine of history replaces heredity as an engine of biotic evolution, but the genetic theory of evolution is not applicable to history.

Natural selection is a slow process. The evolution from humanlike hominids to the present Homo sapiens took at least six million years, which is not even long on a geological scale. But human history is at most two hundred thousand years old. Because of human activity, it proceeds much faster than biological evolution, and is even accelerating. Moreover, human experience cannot be inherited. In contrast to Jean-Baptiste Lamarck, Darwin excluded the genetic transfer of experience.

Besides radical evolutionists like Richard Dawkins and Daniel Dennett, in the twentieth and twenty-first centuries several neuroscientists became strong defenders of ontological naturalism. Whereas mainstream philosophy was mainly concerned either with positivist epistemology or with existentialism, both focussed on the ideal of personality, neurophilosophers stressed the natural functioning of the human brain and thus the ideal of science.

Evolutionism as radical ontological naturalism has its counterpart in the no less radical relativism, to which we shall return in chapter 13.

But first we shall criticize evolutionism by looking at some differences between animals and human beings.


12.8. Animal behaviour and human activity


Enlightenment philosophy was especially interested in the natural sciences, in natural laws and evolution. Romanticism was more involved with the humanities, with history, with social and political values, like human rights and the famous triad of freedom, equality, and fraternity. In this development several views on ethics played a part.

Whereas ethics is as old as philosophy, biological ethology entered the scene only in the twentieth century. It studies the behaviour of animals, which is not subject to values or norms, but to specific natural laws, restricted to the species to which the animal belongs. Psychic and organic needs determine the strongly programmed animal behaviour, as well as related kinds of human behaviour. In contrast, human acts are characterized by free will and normative relations. ‘If we describe what people or animals do, without inquiring into their subjective reasons for doing it, we are talking about their behaviour. If we study the subjective aspects of what they do, the reasons and ideas underlying and guiding it, we are concerned with the world of meaning. If we concern ourselves both with what people are, overtly and objectively, seen to do (or not to do) and their reasons for so doing (or not doing) which relate to the world of meaning and understanding, we then describe action.’ [67]

Human sensitivity has a primary or a secondary character. Feelings that people have in common with animals, like fear, pain, cold, or hunger, are primarily psychic or organic. In addition, people have a secondary sense of skilful labour, beauty, clarity, truth, service, management, justice, and loving care. The awareness of these values points to a human disposition which is not yet articulated, a heritable intuition, shared by all people, laid down in the human genetic and psychic constitution. When this intuition is developed in education, one speaks of a virtue or a vice.

Animals have a sense of regularity, such that they are able to learn, but only people are able to achieve explicit knowledge about natural laws as well as about values and norms. This knowledge rests on intuition, and is opened up by image formation, interpretation, argumentation, conviction, and education. During this lifelong process, people develop experienced values into norms within the context of their history, culture, and civilization. Hence values, being normative principles, should be distinguished from actual norms. ‘Values are central standards by which people judge their own behaviour and that of others. In contrast to a norm, a value does not specify a concrete line of action, but rather an abstract starting point for behaviour. Therefore, values or ‘principles’ are ideas, to a large extent forming the frame of reference of all kinds of perception. Often, a value forms the core of a large number of norms.’[68] Instead of ‘value’ the term ‘commandment’ could be used in order to indicate both the agreement and the difference with a natural law. Animals satisfy coercive natural laws. People do that too, but moreover they obey (or disobey) commandments.

The distinction between animals and human beings is a problem for the Enlightenment. On the one hand, naturalism requires that human beings do not fundamentally differ from animals, that human acts are no different from animal behaviour. On the other hand, the ideal of personality demands a different view of people, expressed by their morality, determined not by natural laws but by values and norms.

The central theme of Enlightenment philosophy is the self-image of people, in particular their autonomy, being a law unto themselves. Probably animals (at least mammals and birds) have a sense of identity too, but contrary to humans, animals cannot distance themselves from their environment, their relatives, or themselves. Animal behaviour is largely stereotyped, laid down in the genetic structure of the species. In contrast, human acts, as far as these transcend animal behaviour, are free and responsible. An animal is bound to its Umwelt, the environment as it experiences it immediately, its physical, organic, and psychic relations, in which it has specialized itself such that it has optimal chances to survive and reproduce. In contrast, humans are not completely fixated; they are Weltoffen,[69] open to the world.

The standard naturalistic practice is to reduce all normative principles to natural ones. In order to deny normativity, ontological naturalists often assume that people are not free to act, and cannot be held responsible for their acts and the ensuing consequences. Everything, including human activity, is completely determined by natural laws. People are not really different from animals; the differences are at most gradual. This rather dogmatic and theoretical view is opposed by the generally accepted practical assumption that human beings are to a certain extent free to act, and therefore responsible for their deeds. Although this confirms common understanding, in philosophy it is an unprovable hypothesis. Naturalist philosophers denying free will cannot prove their view either, but they should carry the burden of proof for a conviction deviating from common sense.[70] Of course, many human acts are based on a reflex or some other fixed action pattern, wired in the brain or the nervous system. Experiments pointing this out cannot prove, however, that this is always the case.

Naturalistic evolutionism, which considers a human being, like an animal or plant, merely an accidental natural product, wants to explain the evolution of humankind as part of the animal world as a completely natural process. This does not explain, however, the universal notion of norms and values by which humanity transcends the animal world, the metaphorical notion that humanity has been called out of the animal world. The evolution of humankind, like the evolution of plants and animals, occurs partly according to natural laws, which in the future may provide a minimally necessary, though not a sufficient, explanation for the coming into being of humanity.[71] For a sufficient explanation one has to take into account commandments, irreducible to natural laws.

Concerning a minimally necessary explanation, there is no reasonable doubt that human beings, as far as their body structure is concerned, evolved from the animal world. This is a hypothesis, for which no logical proof exists, and probably never will exist. Scientific laboratories cannot copy evolution. However, scientific evidence differs from logical proof. Science does not require conclusive proof for the hypothesis of human descent from the animal world. It requires empirical evidence that does not contradict the hypothesis, but corroborates it. Evidence for evolution, including human evolution, is available in abundance. Moreover, for the aforementioned hypothesis no scientifically defensible or viable alternative appears to be at hand.

Both human beings and animals belong to the world of living beings because of their organic character, but they transcend it as well. In contrast to plants, the character of animals is not primarily organic, but psychic, characterized by their behaviour. Likewise, the assumption that humans have a place in the animal kingdom does not imply that they are characterized by their natural behaviour. Nor does it exclude that the human body differs from an animal body in several respects.[72] The size of the brain, the erect gait, the versatility of the human hand, the absence of a tail, and the naked skin all point to the unique position of humankind in the animal world.

According to Martin Buber,[73] becoming human starts with taking distance from the Umwelt, such that a person stands opposite nature, something an animal cannot do: its Umwelt is its immediately experienced world. The Urdistanzierung or Urdistanz at the start of humanity repeats itself in the development of each child. This movement is followed by another one, Beziehung, becoming related to the world, in particular to fellow people. In the ich-du (I-you) relation each human being searches for self-confirmation, Bestätigung.

Hence the human self starts with the possibility of taking distance. This is connected to the consciousness of time, of past, present, and future, enabling people to cultivate the earth. By taking distance, people become free to disclose themselves and the earth.

Human beings are called out of the animal world in order to command nature in a responsible way, to love their neighbours, and to serve their God. People are called to further good and combat evil, in freedom and responsibility. Neither science nor philosophy can explain this vocation from the laws of nature. Yet it may be considered an empirical fact that all people experience a calling to do good and to avoid evil. This fact is open to archaeological and historical research, as well as to philosophical and theological discussion.

The question of when this calling manifested itself for the first time can only be answered within a wide margin. It is comparable to the question of when (between conception and birth) a human embryo becomes an individual person, with a vocation to be human. The creation of humanity before all times, including the vocation to function as God’s image, should be distinguished from its realization in the course of time. Contrary to the first, the latter can in principle be dated.

The fact that animals can learn from their experience shows that they have a sense of natural regularity, but only people consider commandments. Though not coercive, values appear to be as universal as natural laws. From the beginning of history, human beings have been aware that they are to a certain extent free to obey or disobey these commandments, in a way that neither animals nor human beings can obey or disobey natural laws. Moreover, sooner or later they discovered that the normative principles are not sufficient. In particular, the organization of human societies required the introduction of human-made norms as implementation or positivization of normative principles. Therefore, human freedom and responsibility have two sides. On the law side it means the development of norms from normative principles, norms that differ across historical times and places, in various cultures and civilizations. On the subject side, individual persons and their associations are required to act according to these norms, in order to warrant the exercise of their freedom and responsibility. There is no need to argue that both have been misused on a large scale.

Normative principles like justice are universal and recognizable in the whole of history (as far as documented), in all cultures and civilizations. Human skills, aesthetic experience, and language may differ widely, but they are always present and recognizable wherever people are found. The sense of universal values and norms is inborn.





Chapter 13




13.1. Romanticism and history


History plays an important part in Romanticism. Both the Renaissance and the rationalist Enlightenment were critical if not disdainful of medieval history, but the Romantics were fond of it. Moreover, they introduced various new philosophies of history. Historism and historicism (which cannot always be distinguished from each other), as well as constructivism and postmodern relativism, are part and parcel of the romantic movement within the philosophy of the Enlightenment. They have strongly influenced twentieth-century views of the development of science. There is a naturalist philosophy of history as well, assuming that human history is no more than the continuation of natural evolution. These views deserve a prominent place in the history of natural philosophy and natural theology as discussed in the present book.

Historicism supposes that history is subject to invariant laws, after the model of natural laws. It figures in Georg Hegel’s idealism, in Auguste Comte’s positivism, and in Karl Marx’s historical materialism.[1] Its main law was that of historical progress. The intuition of progress as a value is due neither to the Enlightenment nor to the Renaissance. The later belief in progress as an inevitable law identified progress as a normative cultural principle with the factual history of seventeenth- to nineteenth-century science and technology.[2] The Eurocentric belief in progress considered technical and scientific progress characteristic of the whole history of mankind.[3] In 1931 Herbert Butterfield criticized the ‘Whig interpretation of history’, which described history as continuous progress after the model of the British Empire. This optimistic view of actual history had already turned into deeply felt disappointment at the outbreak of the great European war in 1914, when science and technology became instruments of mass destruction on a large scale. Progress turned out to lack the compelling lawfulness of a natural law.

In contrast, Romantics like Jean-Jacques Rousseau and Johann Herder relativized historical law conformity by individualizing history. This is sometimes called historism, to be distinguished from historicism. It recognized only mutual relations within an endless stream of accidental, contingent, unique and individual occurrences,[4] emphasizing diachronic succession, ‘for historism resolves everything in a continuous stream of historical development. Everything must be seen as the result of its previous history.’[5] ‘It was believed that the understanding of x consisted in knowing the history of x.’[6] Historism absolutizes individual history by relativizing everything else,[7] in particular denying the law side of normativity, thereby destroying the meaning of history.

A third kind of historism absolutizes the objectivity of historical events, ‘bloss zeigen wie es eigentlich gewesen’ (merely show how it actually happened), according to Leopold von Ranke.[8] This recipe was already applied by Edward Gibbon in The history of the decline and fall of the Roman empire (1776-1789).[9]  

It goes without saying that these three variants of historism relativize each other; as absolutizations of law conformity, subjectivity, or objectivity respectively, none of them can do justice to historical reality. Any responsible historical treatise should consider these three points of view equally critically.

Historicism became part of the Soviet philosophy of nature.[10] In theology Ernst Troeltsch was a historist,[11] who like Friedrich Schleiermacher (13.5) argued that religion is directed to the Absolute. Both were also authorities in biblical text criticism, the vehicle of theological historism. Historism entered the philosophy of science in the work of Pierre Duhem, followed by Thomas Kuhn and Paul Feyerabend, and finally by social constructivism and postmodern relativism. A key concept became the objectivity of facts.


13.2. Public facts


Since Plato it has been considered a fact that there are exactly five regular polyhedra, but Imre Lakatos devoted his doctoral thesis to arguing that this fact is no more than a convention.[12]
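Lakatos’ argument (published as Proofs and refutations, 1976) revolved around Euler’s polyhedron formula V − E + F = 2 and the way its ‘proofs’ were repeatedly patched against counterexamples. As a minimal sketch (the data table and names below are my own illustration, not Lakatos’), the formula can be checked against the five regular polyhedra:

```python
# Vertices, edges, and faces of the five regular (Platonic) polyhedra:
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

# Euler's polyhedron formula, the 'fact' whose proofs and refutations
# Lakatos traced: V - E + F = 2 for every convex polyhedron.
for name, (V, E, F) in platonic.items():
    assert V - E + F == 2, name
print("V - E + F = 2 holds for all five regular polyhedra")
```

Lakatos’ point was not that such a check fails, but that what counts as a ‘polyhedron’, and hence as a counterexample, is itself negotiated.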

Before the seventeenth century, a fact was called a phenomenon.[13] In England, Thomas Hobbes and Robert Boyle were among the first scholars to write of facts, but both Robert Hooke and Isaac Newton stuck to ‘phenomena’ or ‘observations’. The word factum, meaning ‘that which has been done’, originates from the law courts, whose first task in any legal procedure is to establish the relevant facts, distinguishing matters of fact from matters of law or matters of faith. In France, Antoine Arnauld and Blaise Pascal used the word fact in their dispute (5.4), arguing that the Jansenist propositions condemned by the pope could not factually be found in Cornelis Jansen’s book on Augustine, quite apart from the question of whether or not these propositions were heretical.

Knowledge of a part of reality is called a fact if everyone concerned is convinced of its truth. A fact is therefore dependent on human activity: it is an artefact. ‘Everyone’ does not mean literally all people, for then no facts or data would exist; there is always someone to be found who doubts everything. Rather, it concerns a consensus within a public subjective network of experts. In physics something is considered a fact if most physicists accept it as such, which they do only after it has been established in a proper, scientific way, and if it is replicable and reproducible. The same applies to all sciences and practices. The establishment of a fact is an elaborate process. Facts are by no means self-evident or self-explanatory.[14] Each fact is part of an objective public network of scientific theories and practical protocols.

Facts are clearly social constructs, culturally and historically determined. Extreme historists and constructivists state that facts are no more than such constructs, but scientists believe that genuine, reliable facts are distinguished by being firmly grounded in critical scientific research. This implies that any fact is open to critique and may be challenged: no fact is undeniable.

Until the seventeenth century two sources of knowledge were generally recognized: authority and reason. Thomas Hobbes revolutionized this scheme. Rejecting authority, he acknowledged, besides reason, only experience, memory, and testimony by third parties as sources of facts.[15] His contemporary Robert Boyle emphasized that besides observations, experiments too provide facts, provided they are witnessed by reliable experts and made public.[16]

Often the justification of a fact cannot be understood by someone who is not an expert, who can only accept it as given and, if necessary, apply it on the authority of experts. Notwithstanding Hobbes, no society can operate without knowledge based on authority. In particular, facts have a public function in discussions. A fact is never entirely objective, for it is always part of a subject-object relation within the public network from which it is taken. It can be quite legitimate to doubt a fact, if one does so in an argued way. The truth of a fact depends on the context of the dialogue: what one accepts as a datum in one case (‘the earth has the shape of a sphere’) is in another case an object of discussion (‘is not the earth flattened at the poles?’). Sometimes one has to establish a fact by reasoning. Historical facts and data are only objective in relation to a subject responsible for them. Yet they ought to be available to the public.

During the first half of the twentieth century, logical empiricism emphasized objectivity in science. It considered something a fact if it was the argued result of empirical observation. It was only interested in the justification of theories, not at all in the history of science or in heuristics, the method of finding theories. This a-historical view of the performance of science came under attack when historians and sociologists of science stressed that science cannot withdraw from historical and social influences. They called attention to the social relevance of networks of laboratories and other research institutes.[17] Especially in the social sciences, Thomas Kuhn made a deep impression with The structure of scientific revolutions (1962).[18] Although his book deals with natural science, it received far less attention in the philosophy of science (outside the United States) than in the social sciences. Kuhn asserted that a mature science depends in each historical period on a paradigm. This is both an authoritative example for the performance of science conceived as problem solving (like Newton’s Principia or Opticks, and Darwin’s Origin of species) and a matrix, a social network of scientists conducting research according to this example. The introduction of a new paradigm means a scientific revolution, which cannot be rationally explained. Imre Lakatos combined the views of Thomas Kuhn with those of Karl Popper into the methodology of scientific research programmes, in which he only considered successive approximation as a method of scientific discovery (7.5).[19] Both Lakatos and Paul Feyerabend defended the subjectivist construction of historical facts, although Lakatos told in footnotes how history really happened, according to Leopold von Ranke’s prescript.


13.3. Crisis and revolution


Both the words revolution and crisis belong to the romantic vocabulary. During the seventeenth century, ‘revolution’ was only used in the astronomical sense of planetary motion around a centre, as in Copernicus’ De revolutionibus orbium coelestium, whereas ‘reformation’ indicated a social upheaval. Romantic philosophers were fond of revolutions, from the Glorious Revolution in England and the American, French, and Dutch political revolutions to the industrial one. Marxists made revolution a leading motive for political action. Immanuel Kant coined the expression ‘Copernican revolution’, and Antoine Lavoisier’s work was heralded as the ‘chemical revolution’. The cliché of the seventeenth-century ‘scientific revolution’ came into use only in the twentieth century.[20]

According to Thomas Kuhn, a period of normal science is characterized by a social group of scientists investigating their field according to an accepted paradigm. The period ends in a crisis, induced by a persistent anomaly or an increasing number of anomalies: problems that cannot be solved according to the accepted paradigm.[21] Eventually a new paradigm replaces the old one, and this constitutes a scientific revolution.[22] Insofar as this counts as a historical law, Kuhn may be considered a historicist rather than a historist.

For example, Kuhn points to the crisis preceding the publication of Nicholas Copernicus’ De revolutionibus (1543),[23] but this example does not tally with the historical facts.[24] (It is of course an open question whether Kuhn would be bothered by that.) Before Copernicus all experts considered Claudius Ptolemy’s theory quite satisfactory.[25] Copernicus himself was the first to signal a situation of crisis, but he was hardly unbiased: he had an obvious interest in putting the old theory in an unfavourable light. In the introduction to his Tabulae prutenicae (Prussian tables, 1551), based on Copernicus’ calculations but not on his heliocentric theory, Erasmus Reinhold stated: ‘The science of celestial motions was nearly in ruins; the studies and works of (Copernicus) have restored it,’ but he referred to the quality of the available astronomical tables.[26] His new tables were better than the outdated Alfonsine tables (named after Alfonso X of Castile, thirteenth century), but this was hardly due to the introduction of a heliostatic model. Tycho Brahe found both unsatisfactory. Only in 1627 did Johann Kepler publish the much improved Rudolfine tables, named after Emperor Rudolf II, based on Tycho Brahe’s observations and applying Kepler’s laws.

Nor was the publication of Isaac Newton’s Principia preceded by a crisis. Except for the conservatives, who held fast to Aristotelian physics, most educated people considered Cartesian physics satisfactory, promising, and acceptable. The criticism of Cartesian physics was primarily levelled by Newton himself, who in turn had an interest in putting his competitor in an unfavourable light.

Contrary to Kuhn, one could propose that a crisis is more often than not an effect of the introduction of a new fundamental theory rather than its cause, namely when the new theory contradicts generally accepted presuppositions.[27] In that case the new theory requires adapting the presuppositions. This evokes resistance and may lead to a conflict.

This is clearly the case with Copernicus’ theory, as well as with Tycho Brahe’s discoveries concerning the new star of 1572 and the comet of 1577 (2.3), which contradicted the most important astronomical presuppositions of their time. Hence the initial response to Copernicus’ theory was negative. The first scholars to accept Copernicanism already doubted the Aristotelian presuppositions beforehand. The crisis became a fact only when Galileo’s astronomical discoveries (1609-1610) made the new theory a serious threat to the Aristotelian philosophy accepted by the theologians (2.2). The great debate concerning the merits of the Ptolemaic and Copernican systems did not take place before 1543, as Kuhn would have us believe, but between 1610 and 1640.

Newton’s theory of gravity, too, was not the effect of a crisis but its cause. This crisis did not occur in the theory of gravity itself, but in its presuppositions, mechanics and mathematics. In mechanics, the principle of action by contact in a plenum had to be replaced by action at a distance in a void. Newton’s theory led to the introduction of the integral and differential calculus, causing a crisis in mathematics. In order to avoid this crisis, Newton presented the proofs in his Principia in an old-fashioned geometric way. Mathematicians struggled with the foundations of the calculus until the nineteenth century. Even the crisis leading to the abandonment of the Pythagorean brotherhood was an effect of the theory leading to Pythagoras’ theorem.


13.4. The crisis of 1910


The deepest crisis in physics, possibly the only one in the physical sciences deserving that name, occurred about 1910, and was initially experienced by only a handful of physicists and chemists. Yet it caused an earthquake in the inorganic science of its time, and it marks the transition from ‘classical’ to ‘modern’ physical science.

At the end of the nineteenth century many people believed that physics had achieved so many successes that not much was left for future generations. As a student, Max Planck was advised to avoid physics as being almost finished. William Thomson said that all physical problems were solved, with a few exceptions. Albert Michelson was of the opinion that natural science could restrict itself to determining the constants of nature with more precision, in more decimals. In contrast, by 1910 several physicists and chemists concluded that their science faced a serious crisis.[28] What happened?

  1. The introduction of the theory of relativity (1905) demonstrated that Newtonian mechanics, the paradigm of science, was not infallible. Which scientific theory may then be trusted to lead to certainty?
  2. The neat separation between material particles (atoms, molecules, ions, and electrons) on the one hand and continuous waves (sound, light, and radio) on the other was disturbed by Max Planck’s and Albert Einstein’s quantum hypothesis (1900-1905). It resulted from the experimental investigation of the spectrum of infrared radiation emitted by a black body at various temperatures, for which Planck found a mathematical expression defeating any classical alternative.
  3. The quantum hypothesis made possible an explanation of black-body radiation, of the photoelectric effect, of the temperature dependence of the specific heat of solids, and of a number of properties of X-rays and gamma radiation, but because it contradicted common sense few people were inclined to accept it. In 1908 Hendrik Antoon Lorentz proved that the then accepted classical physics could not lead to Planck’s formula, although that formula was firmly embedded in undeniable experimental results.
  4. The discovery of the electron (1897) as a part of atomic structures and of radioactivity (1896) made clear that atoms are neither indivisible nor unchangeable. It turned out to be impossible to devise an atomic theory in agreement with James Clerk Maxwell’s laws of electromagnetism (1873), which were finally accepted after Heinrich Hertz’s experiments (1887). In particular the stability of atoms became a huge problem. Before 1900 people believed they knew what atoms looked like, but were not sure whether they existed. After 1900 it was known that atoms existed, but it remained a riddle how that was possible.
  5. The decline of determinism as a consequence of some experimental facts (Brownian motion, radioactivity) led people to question whether science is able to give an explanation of natural phenomena.

Another problem was the discovery of superconductivity in 1911 by Heike Kamerlingh Onnes, who attended the Solvay conference of that year.

Something that was experienced not as a crisis but as progress was the introduction of entirely new experimental techniques following the development of the cathode ray tube (applied in the discovery of the electron). This led to electronics and the use of amplifiers, providing the investigation of the structure of matter with entirely new possibilities. Eventually mechanics as a standard of science was replaced by electronics.

Of course, it took some time before scientists became aware of the crisis, in particular because most of them did not immediately accept a number of its aspects. Many physicists and chemists met the theory of relativity and the quantum hypothesis with disbelief. Determinism was too much entrenched in the accepted world view to be abandoned directly. The impossibility of finding a suitable atomic model was only convincing to those who had attempted it. Moreover, increasing nationalism and mutual distrust, leading up to the First World War, did not exactly improve international scientific communication, and hence a correct assessment of the state of affairs.

But after 1910 the crisis was unmistakable, as Max Planck wrote: ‘[The theoreticians] now work with an audacity unheard of in earlier times, at present no physical law is considered assured beyond doubt, each and every physical truth is open to dispute. It often looks as if the time of chaos again is drawing near in theoretical physics.’[29]

It was a crisis in the theory of physics, for the experimental results that provoked it were accepted without much doubt.

In order to discuss the matter, Walther Nernst, financially supported by the Belgian industrialist Ernest Solvay, convened an international conference in 1911. With Lorentz as its chairman, a group of scientists discussed a variety of problems. The first Solvay conference may be considered the start of the period of crisis mentioned above, which lasted till 1927, when the new quantum physics was accepted during the fifth Solvay conference, the final one with Lorentz as its chairman.

In this period Niels Bohr (not yet invited in 1911) played the most important part. In 1913 he simply postulated that the hydrogen atom is stable, and that its electron moves around the nucleus like a planet around the sun without radiating energy, as long as it remains in a stationary orbit. He introduced a numbered sequence of discrete orbits, explaining the spectrum of atomic hydrogen as emerging from quantum jumps between these orbits. His theory was amazingly correct in a quantitative sense, as was confirmed by the analysis of the spectrum of ionized helium atoms (which, like neutral hydrogen atoms, have one electron, but a heavier nucleus and a double electric charge). However, nobody (including Bohr) understood how it worked.
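Bohr’s quantitative success can be illustrated with the wavelength formula his model yields, 1/λ = R·Z²(1/n₁² − 1/n₂²). The sketch below is my own illustration (the function name is mine; R is the textbook Rydberg constant for an infinitely heavy nucleus, so the values differ very slightly from measured lines): it computes the visible Balmer lines of hydrogen and the corresponding line of ionized helium.

```python
R_INF = 1.0973731568e7  # Rydberg constant for infinite nuclear mass (m^-1)

def wavelength_nm(n_low, n_high, Z=1):
    """Wavelength (nm) of the photon emitted in a quantum jump n_high -> n_low
    in a one-electron atom with nuclear charge Z (Z=1: hydrogen, Z=2: He+)."""
    inverse_lam = R_INF * Z**2 * (1 / n_low**2 - 1 / n_high**2)
    return 1e9 / inverse_lam

# The visible (Balmer) lines of hydrogen: jumps ending on orbit n = 2.
for n in (3, 4, 5):
    print(f"H,   {n} -> 2: {wavelength_nm(2, n):6.1f} nm")  # ~656, 486, 434 nm

# Ionized helium: the same jump, but a factor Z^2 = 4 shorter in wavelength.
print(f"He+, 3 -> 2: {wavelength_nm(2, 3, Z=2):6.1f} nm")
```

The Z² scaling in the last line is what made the helium-ion spectrum such a striking confirmation of the model.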

The crisis meant the end of the Kantian mechanicist world view. In principle Newton’s mechanics, now called ‘classical’, was replaced by the special (1905) and general (1916) theory of relativity and by quantum theory (1927). However, the mechanist ideal remained alive, witness the fact that quantum physics is still often called quantum mechanics, even if quantum electrodynamics would have been more to the point. (Later, quantum electrodynamics (QED) became the name of the theory of the electromagnetic interaction between subatomic particles.)

The solution of the crisis was acceptable because the classical theory remained useful in practical situations, as a limiting case: when the speed of the bodies concerned is low compared to the speed of light, and when their mass is much larger than that of atoms and molecules.


13.5. Postmodern relativism


The later social constructivists stated that each theory arises from negotiations between groups of scientists.[30] ‘Within such a program all knowledge and all knowledge claims are to be treated as being socially constructed; that is, explanations for the genesis, acceptance, and rejection of knowledge claims are sought in the domain of the social world rather than in the natural world.’[31]

Social constructivism is a form of postmodernism or poststructuralism, but it can also be considered a revival of positivist conventionalism, which had quite a lot of adherents during the first half of the twentieth century. It could also have been called post-Enlightenment, for it takes leave of the modernist project of the autonomous person, of the domination of nature, and of fundamentalism.[32] As a reaction to the horrors of the long European war (1914-1991) and the decline of Marxism and existentialism, postmodern philosophers and historians rejected the ‘grand narratives’, the all-encompassing idealistic views of humanity and its history, including Christianity, which no longer convinced them.[33] More than the original Enlightenment, postmodernism stresses the social connections within human communities. The central question from ancient to modern philosophy was whether autonomous human persons could achieve absolutely certain knowledge, on the basis of propositions which anybody can understand to be true. In twentieth-century philosophy human knowledge lost the central position it had occupied since René Descartes, the father of modernism.

Social constructivism or radical relativism maintains that every truth is bound to culture, dependent on the insights of individual scientists and the scientific community. Ludwig Wittgenstein with his language games is the twentieth-century grandfather of constructivism; its fathers are the sociologists Barry Barnes and David Bloor with their ‘strong programme’.[34] Nobody is a tabula rasa, an unwritten piece of paper absorbing knowledge from outside through the senses. According to social constructivism, human beings construct their knowledge from a tangle of experiences, and no one’s construction is better than anyone else’s.

The social constructivists’ stress on subjectivity evokes much resistance among scientists and other scholars, because it undermines the public character of science and underestimates the force of mutual criticism in the scientific community.[35] A correct balance between the subjective and objective aspects of the performance of science can only be achieved by not losing sight of the law side of reality, both natural laws and normative principles. These are determined neither by an objective theory nor by subjective insight, but can be found in reality: they are open to research. The achievement of knowledge is not only objective or subjective, but also normative. Whoever wants to acquire trustworthy knowledge ought to search for the truth in a critical way; otherwise one will only find confirmations of one’s own prejudices.

About such prejudices it may be observed that the positivist position cannot be justified; that Popper’s philosophy cannot be falsified; that historism is itself a historical phenomenon; and that the social constructivists hardly ever apply their relativism to their own views.

However much facts are clearly human products, historically, culturally and socially determined, they are more than that, having an objective content when people use them in a responsible way. In particular, one should be aware that facts as established and confirmed in scientific research are quite reliable. Relativists correctly observe that in advanced research many widely diverging hypotheses are considered, but they tend to overlook that experimental and instrumental observations, together with careful measurements, usually succeed in separating the wheat from the chaff very fast.

Radical relativism with respect to facts would be detrimental to historiography, for the first task of a reliable historian is fact finding, according to the ‘reality rule’. This states that the historian writes about the past ‘wie es eigentlich gewesen’, in the words of Leopold von Ranke, for ‘historians are concerned and committed to tell about the past the best and most likely story that can be sustained by the relevant extrinsic evidence.’[36]

In most of his works, Thomas Kuhn confirmed his own paradigm by treating the facts of the history of natural science in a constructivist way, not in the objective way of Leopold von Ranke’s recipe, as Eduard Dijksterhuis and Alexandre Koyré did. In contrast, Kuhn’s Black body theory and the quantum discontinuity (1978) is a ‘classical’ historical work, not written according to his original paradigm. Likewise, several social constructivists, when writing alternative historical accounts of science, base themselves on solid historical facts.[37]

The self-destroying relativism expressed by some positivist, historist, and social-constructivist science writers lost much of its credibility because it could explain neither the success of the natural sciences in investigating the hidden structure of organic and inorganic matter, nor the undeniable applicability of modern technology and medicine based on natural science. ‘Science, as a method and practice, is a social construct. But science as a system of knowledge is more than a social construct because it is successful, because it fits with reality.’[38] ‘Realism, the belief that science gets at the truth, is the only philosophy that doesn’t make the success a miracle.’[39]

These are strong arguments in favour of a critical-realist interpretation of scientific laws and facts, which returned to the philosophy of science at the end of the twentieth century. In experimental science it never disappeared.

Radical relativism is the counterpart of evolutionism as radical ontological naturalism (12.7) in the dialectic of nature and freedom. Especially popular science writers act as the legal heirs of radical Enlightenment. However, they find both structural neo-Darwinism and critical realism on their path. It appears that these moderate views are fairly consistent with each other.


[1] Löwith 1949; Popper 1945, 1957; White 1973; Ankersmit 1983; Fukuyama 1992; Lemon 2003, part I, IV.

[2] Toulmin, Goodfield 1965, chapter 5; Fukuyama 1992, 30-33 (chapter 1); Hobsbawm 1994, 19 (introduction, II).

[3] Pinker 2018, part II.

[4] Ankersmit 1983, 171-182.

[5] Ankersmit 2005, 143.

[6] Danto 1985, 324.

[7] Huizinga 1937, 136-138.

[8] Danto 1985, 130-133, 139.

[9] Gibbon 1776-1789.

[10] Lenin 1908.

[11] Klapwijk 1970.

[12] Lakatos 1976.

[13] Wootton 2010, 99-100; 2015, chapter 7.

[14] Shapin, Schaffer 1985, 225.

[15] Wootton 2015, 298.

[16] Shapin, Schaffer 1985.

[17] Heelan, Schulkin 1998, 139; Shapin, Schaffer 1985, chapter 6; Latour 1987; Galison 1987; 1997.

[18] Kuhn 1962.

[19] Popper 1959, 1963; Lakatos, Musgrave (eds.) 1970; Lakatos 1976; 1978; Feyerabend 1975.

[20] Wootton 2015, 16-17, 34-35.

[21] Kuhn 1962, chapter 6-8.

[22] Stafleu 2016, 5.4.

[23] Kuhn 1962, 68-69.

[24] See, e.g., Rosen 1984, 131-132 and the discussion in Beer and Strand (eds.) 1975, session 3, in particular Gingerich.

[25] Dijksterhuis 1950, 325 (IV:9, 10).

[26] Koyré 1961, 94; Duhem 1908, 70-74.

[27] Laudan 1977, 14ff, 45ff, 88.

[28] Jammer 1966; Kuhn 1978; Pais 1982, 1986, 1991; Jungnickel, McCormmach 1986; Kragh 1999; Stafleu 2016, 6.5.

[29] Max Planck in 1910, cited by Pais 1991, 88.

[30] Cole 1992, 5; Niiniluoto 1999, chapter 9; Winner 1993.

[31] Pinch, Bijker 1987, 222.

[32] Smart 2000; Cahoone (ed.) 2003, 1-13; Wiersing 2007, 660-687.

[33] Lyotard 1979.

[34] Wootton 2015, 41-49.

[35] Cole 1992; Winner 1993.

[36] Vann 1995, 53.

[37] For instance, Pickering 1984; Shapin, Schaffer 1985; Galison 1987, 1997.

[38] Wootton 2015, 540. See also ‘Notes on relativism and relativists’, Wootton 580-592.

[39] Wootton 2015, 568, quoting Hilary Putnam.




Chapter 14


Critical realism

14.1. Karl Popper’s critical realism


The spirit of the Enlightenment remains influential. During the twentieth century Karl Raimund Popper was one of the most important philosophers of science. In 1982-1983 he published a postscript to The logic of scientific discovery (1959), the English translation of Logik der Forschung (1934). The postscript’s three volumes,[1] written mainly during the years 1951-1956, are specifically directed to physics, treating respectively realism, determinism and quantum theory. Much of their content had appeared earlier in Popper’s publications after 1959, several of which were collected in Conjectures and refutations (1963) and Objective knowledge (1972).

As a critical realist (13.3), Popper believes that scientific theories ought to say something meaningful about reality. In these theories, laws concern things and events that are subject to them. Laws and subjects are correlates: one cannot understand the one without the other, and they are mutually irreducible. The attempt to reduce subjects completely to laws leads to determinism, whereas separating subjects from laws leads to an unlimited indeterminism. Popper’s realism is a reaction to the romantic logical positivism of the Vienna Circle, inspired by Ernst Mach (8.8).

For Karl Popper’s views on nature and freedom the distinction of three ‘Worlds’ is important. He argues that World 1 (physical and biological objects) is in part indeterministic. Moreover it is influenced by World 2 (the human and animal psyche) and World 3 (the products of the human mind).[2] Popper insists that ‘… something exists, or is real, if and only if it can interact with members of World 1, with hard, physical, bodies.’[3] Therefore the interaction between the three Worlds is a crucial element in Popper’s views: ‘Our universe is partly causal, partly probabilistic, and partly open: it is emergent. … Man is certainly part of nature, but, in creating World 3, he has transcended himself and nature, as it existed before him. And human freedom is indeed part of nature, but it transcends nature – at least as it existed before the emergence of human language and of critical thought, and of human knowledge. Indeterminism is not enough: to understand human freedom we need more; we need the openness of World 1 towards World 2, and of World 2 towards World 3, and the autonomous and intrinsic openness of World 3, the world of the products of the human mind and, especially, of human knowledge.’[4]

The assumption of three independent but interacting Worlds contradicts monistic materialism, physicalism or philosophical behaviourism, as well as the ‘identity theory’, asserting that mental experiences are in reality identical with brain processes.[5] According to Popper human identity is expressed in the partial autonomy of World 3, the set of all ideas and theories freely invented by people. Responsibility has a critical function, not only with respect to others, but first of all regarding oneself. Whoever withdraws from criticism acts irrationally and irresponsibly, succumbing sooner or later to authoritarianism, repression and dictatorship. It is clear that Popper believes in this critical function. Popper’s idea of human freedom ultimately depends on human autonomy.

Karl Popper’s philosophy is characterized by rational criticism, objectivity, realism, and the hypothetical-deductive method. From this position he criticises subjectivism, idealism, and inductivism, which he usually identifies with each other and radically rejects. His critique is immanent as well as transcendent.

Immanent or intratheoretic critique attacks a theory from within, by accepting its basic assumptions but nothing else.

Transcendent or intertheoretic critique comes from outside, starting from a theory with alternative presuppositions.[6] Popper rejects the view that immanent critique is relatively unimportant, because it can only point out inconsistencies. Each theory intends to solve a problem, and immanent critique may consist of the proof that the problem is not solved, or that the solution is not better than that of a competing theory, or that the solution yields merely a problem shift. But transcendent critique is also acceptable. It leads to a comparison of two theories, such that we may prefer one to the other.[7] The most effective is a combination of immanent and transcendent critique: make clear what is wrong with a theory, and propose a viable alternative.

Referring to Immanuel Kant (8.4), Popper also discusses transcendental criticism. This method accepts scientific knowledge as a fact, investigating the principles explaining this fact.[8] Popper believes that transcendental critique can only be applied negatively, i.e., by criticising current views. He rejects an inductivist theory leading to the denial of the hypothetical-deductive method, considering theories to be superfluous.[9] By this double denial Popper achieves a confirmation of the hypothetical-deductive method, of course. This cannot be based on the experience that this method is fruitful, because that would be an inductive argument. Nor can it be proved via the hypothetical-deductive method – that would be a petitio principii – just as one cannot prove induction inductively. What remains is that Popper believes in his method, just like the inductivists believe in theirs.

This is how transcendental criticism ought to operate: not convincing but revealing. It makes clear the driving motive of the philosophy concerned. It is directed at the pretensions of philosophical thought. One may wonder whether it is applicable to Popper’s views, insofar as transcendental critique is directed at the certainty of scientific results, because Popper maintains that there is no such certainty. However, Popper upholds other certainties. He is convinced of the hypothetical-deductive method, of logic, of realism, of objective truth, of the existence of natural laws, and of the relevance of rational criticism. Popper admits this, but he calls it metaphysics – it concerns statements of a superscientific character. The impossibility of achieving certain knowledge concerns theoretical knowledge of the world outside us, found with the help of theories.

It is advisable to distinguish science from theoretical thought. In contrast to natural thought, theoretical thought is instrumental: it is thinking with the help of theories, propositions, and concepts. Theories have a characteristic logical structure, and are not characterized by any purpose. They can be used for many different goals: to make predictions, to solve problems, to provide explanations, both in science and elsewhere. In contrast, science as a human activity is characterized by its purpose: the opening up of the law side of reality. To achieve this, theories are invented, developed and applied, together with many other methods, like observing, measuring, experimenting, researching literature or the internet, archaeological digging, etc. Besides theories as logical instruments, scientists apply their experience, their intuition, their affection, their feeling for harmony and economy of thought, etc. Of course there is a strong connection between science and theoretical thought. Science does not only want to discover law conformities, but also to connect these rationally, to generalize, to draw conclusions and to test these, for which other theories are required. Conversely, theories almost always contain law statements, such that science can help to formulate theories. Therefore it will not be necessary to distinguish scientific activity from theoretical thought in every context.

Popper’s critique of inductivism can be summarized as: induction has no theoretical foundation, there is no defensible theory of induction. For Popper this means a rejection of induction. But by distinguishing theories, with their hypothetical-deductive character, from science, one at least creates room for the possibility that induction plays a part in science – a heuristic rather than a theoretical one.

Transcendental criticism proceeds in several phases.[10] The first concerns the structure of theoretical thought, placing theories, statements and concepts as instruments between the thinking subject and the object of thought. This means that reality is taken apart, implying an antithesis between logical subject and logical object that does not exist in natural thought. This leads to the second phase of transcendental critique, the question of how to arrive at a synthesis of what is taken apart in the first phase. Popper finds the answer to the problems of the first and second phase in the theory of three Worlds. The three Worlds influence each other, and the synthesis between Worlds 1 and 2 is wrought by World 3. Ideas and theories are free inventions of humanity, but they are tested against World 1 objects. This synthesis can only be achieved by critical self-reflection. Popper observes that by their theoretical activity, persons can transcend themselves.[11] By producing theories in freedom, persons transcend themselves, achieving an intellectual self-liberation.[12] Therefore, the third phase of transcendental critique asks: how is this critical self-reflection possible? Popper’s answer is: by the method of rational criticism, by trial and error, by learning from one’s mistakes. By this method persons are able to transcend themselves. This also concerns the community of thought, for Popper expressly invites everybody to criticise him.

However, such a community can only exist by the grace of a common faith, a religious motive, constituting the driving force for its thought and activity. Popper recognizes the existence of regulative or ‘metaphysical’ ideas, the ideas of origin, unity and coherence of reality, directing theoretical thought. From these ideas one cannot derive theoretical knowledge, but they determine the methods of theoretical research. Popper is clearly driven by the Enlightenment motive of nature and freedom. He looks for a synthesis of nature and freedom in World 3, where a person in freedom poses hypotheses, in order to test these in World 1. By this synthesis a person liberates himself. By rational critique humankind transcends itself, becoming free of prejudices. However, Popper does not transcend logic, which he considers tautological, and therefore not subject to criticism. Without any critique he accepts that logic and mathematics are not empirical sciences.

The coherence of the three Worlds is weak, as weak as the interaction between René Descartes’ res extensa and res cogitans (3.1). Karl Popper does not get further than the assumption that there must be some kind of interaction between Worlds 1 and 2, and also between Worlds 2 and 3. Within World 1 there is a continuous coherence of physical things, plants, animals, etc., and within World 3 there is a corresponding coherence of ideas. Popper demarcates empirical, i.e., testable ideas from ideas that are not testable, in logic, mathematics and metaphysics. The demarcation occurs in World 3 and is not bridged. Therefore the tension between nature and freedom is also manifest in World 3, such that World 3 cannot constitute a synthesis between Worlds 1 and 2. By their rational critique people cannot transcend the Enlightenment dialectic of nature and freedom.

Karl Popper’s idea of self-transcendence appears to be an illusion.


14.2. The critical-realistic ethos

of the scientific community


In the twentieth century critical realism arose, proposed among others by Karl Popper (7.7), but especially as a characteristic feature of natural science and technology, which since the industrial revolution have commanded Western society. In this context, critical means methodical self-criticism, whereas realism concerns the lawfulness of nature. Like any other human group the scientific community has its own ethos, based on a shared world view. This ethos depends on values and norms. A value-free science, according to the positivist ideal, does not exist.

In chapter 3, The normative structure of science, of The sociology of science (1973), Robert Merton argued that the ethos of seventeenth and eighteenth-century science was strongly influenced by English Puritanism and German Pietism, with which it shared some vital values and norms.[13] Whereas Catholics stressed obedience to the church as the leading normative principle of conduct, both Puritans and Pietists emphasized intellectual autonomy, the freedom to believe and to propagate one’s faith.

According to Merton the scientific ethos or code of conduct consists of communism (science is public knowledge, freely available to all); universalism (there are no privileged sources of scientific knowledge); disinterestedness (science is done for its own sake); and scepticism (scientists take nothing on trust).[14] John Ziman replaced Merton’s communism by communalism and added originality (science is the discovery of the unknown).[15] In a present-day context, this would describe the ethos of critical realism, albeit with some additional comments.

The relatively large certainty provided by the natural sciences, for instance in technology and medical practice, is not derived from their ethos but from the object of their research, the lawfulness of reality. Science cannot provide complete certainty out of itself. In particular it cannot account for the origin and validity of the natural laws and normative principles conditioning human activity, including science itself. Science can only provide certainty by trusting that the natural laws and normative principles which it studies are universally valid, now, in the past, and in the future. This includes the faith or conviction that antinomies do not occur, meaning that natural laws (Greek nomos means law) and commandments are consistent with each other. This is not a logical but a cosmological principle, surpassing the logical principle of excluded contradiction.[16]

The results of science pretend to be universally valid, yet they are not always true. The self-critical character of science means that it continuously revises its results. Current Western science is not fundamentalist, if fundamentalism is understood as a world view accepting the absolute truth of some propositions or axioms. The strength of modern science is not that it has a firm foundation, but its critical striving for consistency. Its network structure is open, liable to critical reflection and extension.

Therefore there is no unity of science,[17] no uniform scientific method except self-criticism. It is a historical irony that the penultimate volume of The international encyclopaedia of unified science (1938-1969) was Thomas Kuhn’s The structure of scientific revolutions, which put an end to the positivist ideas of the Wiener Kreis that constituted the editorial ethos of this encyclopaedia. Yet there is a coherence and mutual dependence among related fields of science, informing and inspiring each other. Freedom in the practice of science means the freedom to hold different opinions, to debate with each other continually, to correct and to be corrected.

Not the sciences but the laws they try to find are supposed to be universally valid. Being valid for anybody, these are not the property of scientists. Whoever believes that the laws are given in reality should not consider a scientific theory a logical construction of reality, but at most a reconstruction. Science can discover the natural and normative principles, but cannot found them. Scientists investigate the law side of reality, which concerns everybody. Therefore the performance of science belongs to the public domain. Scientists and scholars constitute a public intersubjective network, in which they freely use each other’s data, stored in the objective public network of their theoretical and experimental results, in order to expand their shared knowledge by extrapolation and interpolation.


[1] Popper 1982, 1983.

[2] Popper 1972, chapters 3 and 4; Popper, Eccles 1977, chapter P2.

[3] Popper 1982, 116.

[4] Popper 1982, 130.

[5] Popper 1982, 117.

[6] Popper 1983, 29.

[7] Popper 1983, 30.

[8] Popper 1983, 87, 316, 339.

[9] Popper 1983, 334, 339.

[10] Dooyeweerd 1953-1958, I, 38-52.

[11] Popper 1983, 27, 154.

[12] Popper 1983, 157, 259-261.

[13] Merton 1973, 223-278; Hooykaas 1972; Lindberg, Numbers (eds.) 1986.

[14] Merton 1973, 267-278.

[15] Ziman 1984, 84-90; 2000, 33-46.

[16] Dooyeweerd 1953-1958, II, 36-49.

[17] Gaukroger 2006, 16.





Chapter 15


Christian critical-realistic

philosophy of science 




This chapter intends to provide a critical update of the Christian philosophy of the cosmonomic idea, as proposed by Herman Dooyeweerd and Dirk Vollenhoven nearly a hundred years ago. It was inspired and directed by the biblical message of God’s sovereignty over the whole of life but it lacked a systematic analysis of the dynamic development of the creation.

Before the seventeenth century, philosophers adhered to a closed world view, in which everything worth knowing was contained either in the secretive revelations of alchemists; or in the books of Plato, Aristotle, and others from antiquity, with medieval comments written by Jewish, Muslim, and Christian scholars; and especially in the Bible. This view was challenged by the voyages of discovery from the fifteenth century onward, undermining many previously accepted insights. From the sixteenth century onwards scholars found that they could open up their world by the use of daring theories, systematic observations, and unheard-of experiments. Mathematics, physics, biology, and ethology developed at an amazing pace, at first assuming a closed, determinist, mechanist worldview, but later allowing for stochastic processes with an open end. Since the nineteenth century, evolution has been recognized as the natural kind of dynamic development preceding history as its cultural form. History has an open future, for which people take their responsibility to act in freedom, both individually and together in free associations and in the public domain. Dynamic development is both made possible and restrained by laws. It also requires some latitude of randomness in nature, of human freedom in history, and of variability in both.

Sections 15.1-15.3 deal with three fundamental topics: the ideas of law, of relation, and of character. In this context, an idea is a limiting concept, not to be defined simply, but to be grasped in one’s intuition. Section 15.4 is concerned with natural evolution as a dynamic process, section 15.5 with the opening up of natural behaviour into normative acts. Sections 15.6-15.10 discuss human acts, artefacts and associations with their relevance for the dynamic development of history, culture and politics.


15.1. Discovery of the law


The idea of law is the critical realist confession that God created the world developing according to natural laws and normative commandments. These are invariable because He sustains them. Christians know God through Jesus Christ, who submitted himself to the Torah, the Law of God. In contrast to the eternal God, the creation is in every respect temporal, in a perennial state of dynamic development under the law as the upper boundary of the cosmos. The idea of natural law as critically used in the physical sciences since the seventeenth century confirms this idea of law. Natural laws are not a priori given, but partial knowledge thereof can be achieved by studying and disclosing the law conformity of the creation.

The focus of sections 15.1-15.4 is on natural laws. Normative principles will be discussed later on. The idea that invariant laws govern nature is relatively new. The rise of science in the seventeenth century implied the end of Aristotelian philosophy, which had dominated the European universities since the thirteenth century. According to Aristotle, four causes (material, formal, efficient, and final) determine the essence of a thing and the way it changes naturally. Each thing, plant, or animal has the potential to realise its destiny, if not prevented by circumstances. The aim of medieval science was to establish the essence or nature of things, plants, animals and humans; their position in the cosmic order; and their practical use.

Although essentialism is still influential, since the seventeenth century it has been replaced by the search for laws. The medieval distinction of positive law, given by terrestrial authorities, from (mostly moral) natural law, ordained by God, was hardly ever applied in science.

About 1610 Johann Kepler broke with this tradition by formulating a law as a generalization of observed mathematical relations. At first sight, Kepler’s first law (each planet moves in an elliptical path with the sun at one focus) does not differ very much from the view, generally accepted since Plato, that the orbits of the celestial bodies are circular, albeit with the earth at their centre. After all, both circles and ellipses are geometrical figures. But Plato put uniform circular motion forward as being the essential form of perfect celestial motion, not as a generalization from observations and calculations. From Hipparchus and Ptolemy up to Nicholas Copernicus, astronomers had tried to reconcile the observed planetary motions with a combination of circular orbits. In his elaborate analysis of Tycho Brahe’s systematic observations, gathered over twenty years, Kepler found the orbit of Mars to be an ellipse, with the sun at one focus rather than at the centre. A similar assumption could solve several problems for the other planets, too. Plato’s perfect circular motion was a rational hypothesis, a priori imposed on the analysis of the observed facts. Kepler’s elliptical motion was a rational generalization a posteriori of fairly accurate observations made by Tycho Brahe. It was a mathematical formulation of a newly discovered natural law.

Since antiquity, astronomers knew very well that the planets, as seen from the earth, have variable speeds. They applied various tricks to adapt this observed fact to the Platonic idea of uniform circular motion. Kepler, however, accepted the changing velocities as a fact. He connected these to the planet’s varying distance to the sun as expressed in its elliptical path. He established a constant relation, his second law: as seen from the sun, a planet sweeps out equal areas in equal times.

This area law is the first instance of a method that would become very fruitful in natural science: relating change to a constant, a magnitude that does not change during the motion. It led to the formulation of natural constants and of several conservation laws, of energy or electric charge for instance, imposing constraints on whatever changes occur.
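In modern notation (not Kepler’s own), the area law amounts to just such a conservation statement: the rate at which the radius vector from the sun sweeps out area is constant, equal to the planet’s conserved angular momentum L divided by twice its mass m:

```latex
% Kepler's second law as a conservation law (modern notation):
% the areal velocity dA/dt is constant along the orbit, because the
% angular momentum L = m r^2 (d\theta/dt) is conserved under a
% central force such as gravity.
\[
  \frac{dA}{dt} \;=\; \frac{1}{2}\, r^{2} \frac{d\theta}{dt}
  \;=\; \frac{L}{2m} \;=\; \text{constant}
\]
```

Near the sun the distance r is small and the angular velocity correspondingly large, which is precisely the variable speed that Kepler accepted as a fact.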

By assuming that the apparent retrograde motion of the planets is a projection of the real motion of the earth around the sun, Copernicus was able to estimate the relative distances of the planets to the sun. Kepler observed that the third power of these distances is proportional to the square of the periods of revolution. This law was used by Isaac Newton to derive the inverse square law for universal gravity, explaining Kepler’s first and second laws as being approximately true.
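Newton’s inference can be sketched in modern notation for the simplified case of a circular orbit of radius r and period T, writing Kepler’s third law as T² = kr³ with k the same constant for all planets:

```latex
% Sketch: from Kepler's third law to the inverse square law.
% Centripetal acceleration of uniform circular motion with speed
% v = 2\pi r / T, combined with Kepler's third law T^2 = k r^3:
\[
  a \;=\; \frac{v^{2}}{r} \;=\; \frac{4\pi^{2} r}{T^{2}}
  \;=\; \frac{4\pi^{2} r}{k\,r^{3}}
  \;=\; \frac{4\pi^{2}}{k}\,\frac{1}{r^{2}},
  \qquad\text{hence}\qquad
  F \;=\; ma \;\propto\; \frac{1}{r^{2}}.
\]
```

The full argument for elliptical orbits is more involved, but the circular case already shows why the third law points to an inverse square force.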

Since Kepler, and especially due to Newton, natural science became the search for natural laws, their status and their justification, and later their restriction by randomness.


15.2. Dynamic coherence


The view that everything is related to everything else is far less controversial than the idea of law, but as a philosophical idea it is equally important. The diversity of temporal reality cannot be reduced to a single principle of explanation. As a prism refracts the light of the sun into a spectrum of colours, so the coherence of created reality is refracted into a wide variety of temporal relations: among things and events; among people; between people and their environment and all kinds of objects; between individuals and associations; and between associations among each other. The relations of people with their God also display the same diversity.

These relations can be grouped into relation frames, also called law spheres or modal aspects of being and experience. In each relation frame, all relations among subjects and objects are governed by one or more natural laws or normative commandments, characterizing the relation frame concerned. The relation frames are mutually irreducible, yet not independent. They show a recognizable serial order in which earlier relation frames anticipate later ones, and conversely, later relations can be projected onto earlier ones. For instance, genetic relations presuppose physical interaction. Kinetic relations can be projected onto spatial relations, and both can be expressed in quantitative relations. Each relation frame presupposes the preceding ones (the spatial frame cannot exist without numbers) and deepens them (spatial continuity expands the denumerable set of rational numbers into the continuous set of real numbers).

Because nothing can exist isolated from everything else, the relation frames constitute conditions for the existence of anything. Experience, too, is always expressed in relations. As a consequence, these frames are aspects of being and experience as well as sets of relations.

Each relation frame can be considered as an aspect of time with its own temporal order. Simultaneity may be considered the spatial order of time, preceded by the quantitative order of earlier and later in a sequence, and succeeded by the kinetic order expressed by the uniform motion from one instant to another. In each relation frame the temporal order functions as a natural law or normative value for relations between subjects and objects, especially among subjects. Relations receive their meaning from the temporal order. Serial order is a condition for quantity, and simultaneity for spatial relations. Periodic motions would be impossible without temporal uniformity. Irreversibility is a condition for causal relations; rejuvenation for life; and without purpose, the behaviour of animals would be meaningless.

The relation frames each contain a number of unchangeable natural laws or normative principles, determining the properties and propensities of relation networks of subjects and objects.

The temporal order is the law side of a relation frame. The corresponding relations constitute the subject-and-object side of relational time. Philosophically speaking, something is a subject if it is directly and actively subjected to a given law. An object is passively and indirectly (via a subject) subjected to a law. Therefore, whether something is a subject or an object depends on the context. A spatial subject like a triangle has a spatial position with respect to other spatial subjects, subjected to spatial laws. A biotic subject like a plant has a genetic relation to other biotic subjects, according to biotic laws. Something is a physical subject if it interacts with other physical things satisfying laws of physics and chemistry. With respect to a given law, something is an object if it has a function for a subject of that law. Properties of subjects are not subjects themselves (physical properties like mass do not interact), but objects.

It is a matter of empirical research to determine which relation frames there are and how they are ordered. Natural relations can be grouped together into six natural relation frames, of quantity, space, motion, interaction, life and feeling. Unavoidably this result is hypothetical, tentative, and open to correction.


1. Putting things or events in a sequence produces a serial order. This order can be expressed by numbering the members of the sequence. The sequential order of numbers gives rise to quantitative differences and ratios, being quantitative subject-subject relations. The subjects of the laws belonging to the first relation frame are first of all the numbers themselves: natural numbers and integers, to be opened up into fractions or rational numbers and into real numbers. All can be ordered on the same scale of increasing magnitude. Numbers are subject to laws of addition and multiplication. Everything in reality has a numerical aspect. By expressing some relation in quantitative terms (numbers or magnitudes) one arrives at an exact and objective representation. The numerical relation frame is a condition for the existence of all other frames.

2. The second relation frame concerns the spatial synchronous ordering of simultaneity. The relative position of two figures is the universal spatial relation between any two subjects, the spatial subject-subject relation. It is objectively given as the distance between two representational points, for instance the centre points of two circles. Whereas the serial order is one-dimensional, the spatial order consists of several mutually independent dimensions. In each dimension the positions of spatial points are serially ordered and numbered, referring to the numerical frame. Relative to each of these dimensions, there are many equivalent positions. Independence and equivalence are spatial key concepts, just like the relation of a whole and its parts. The spatial relation frame returns in wave motion as a medium; in physical interactions as a field; in ecology as the environment; in animal psychology as observation space, such as an animal’s field of vision; and in human relations as the public domain. Magnitudes like length, distance, area or volume are spatial objects, having a quantitative function for spatial subjects.

3. The third relation frame records how things are moving and when events occur. Relative motion is a subject-subject relation. Motion presupposes the serial order (the diachronic order of earlier and later) and the order of equivalence (the synchronic order of simultaneity or co-existence), and it adds a new order, the uniform succession of temporal instants. Although a point on a continuous line has no unique successor, it is nevertheless assumed that a moving subject runs over the points of its path successively. Hence, relative motion is an intersubjective relation, irreducible to the preceding two. The law of uniformity concerns all kinds of relatively moving systems, including clocks. Therefore, it is possible to project kinetic time objectively on a linear scale, as well as on a circular scale, representing the periodicity of kinetic time.

4. In contrast to kinetic time, the physical or chemical ordering of events is marked by irreversibility. Different events are physically related by an irreversible causal relation. All physical and chemical things influence each other by some kind of interaction, by exchanging energy or matter, or by exerting a force on each other. Each physical or chemical process consists of a number of interactions. Therefore, the interaction between two things should be considered the universal physical subject-subject relation. Because mechanist philosophers wanted to reduce all physical relations to motions, until the end of the nineteenth century they tried to eliminate the physical order of irreversibility. 

5. The biotic order may be characterized by rejuvenating and ageing, both in organisms and in populations of plants or animals. An organism germinates, ripens and rejuvenates itself by reproduction before it ages. By natural selection, populations rejuvenate themselves before they die out. For the biotic relation frame, the genetic law is universally valid. Each living being descends from another one; all living organisms are genetically related. This applies to the cells, tissues, and organs of a multicellular plant, fungus, or animal as well. Descent and kinship as biotic subject-subject relations determine the position of a cell, a tissue or an organ in a plant or an animal, and of an organism in one of the biotic kingdoms. Hence, the genetic law constitutes a universal relation frame for all living beings.

6. Teleology, goal-directedness, characterizes the psychic order. Behaviour, the universal mode of existence of all animals, is directed to future events. Recollection, recognition and expectation connect past experiences and present insight to behaviour directed to the future. Internal and external communication and processing of information are inter- and intra-subjective processes, enabling psychic functioning. Animals are sensitive to each other. By means of their senses, they experience each other as partners; as parents or offspring; as siblings or rivals; as predator or prey. By their mutual sensitivity, animals are able to make connections, between cells and organs of their body, with their environment, and with each other.


Although they are supposed to be mutually irreducible, the relation frames are not independent of each other. Except for the final one, all relation frames anticipate the succeeding frames. For instance, the set of real numbers anticipates both spatial continuity and uniform motion. Conversely, each relation frame (except the first one) refers back to preceding frames. The subject-subject relations of one relation frame can be projected onto those of an earlier one. Numbers represent spatial positions, and motions are measured by comparing distances covered in equal intervals. Energy, force and current are generalized projections of physical interaction on quantitative, spatial and kinetic relations respectively.

These projections are often expressed as subject-object relations. A spatial magnitude like length is an objective property of physical bodies. The possibility to project physical relations on quantitative, spatial and kinetic ones forms the foundation of all physical measurements. Each measurable property requires the availability of a metrical law (including a scale and a unit) for the relations to be measured and their projections.

The dynamic of the creation is inter alia expressed in the scientific and practical opening up of anti- and retrocipations in the relation frames. This is even more the case in the development of natural and normative characters, to which we now turn.


15.3. Abundance of kinds


The diversity of the creation is not only apparent in the relations between all that is, but no less in the rich variety of kinds. Each species is determined by a specific cluster of laws, to be called its character.

The realist idea of law assumes the validity of invariant natural laws and normative principles. These are not stated a priori as in a rationalist philosophy, but discovered a posteriori, as in the empirical sciences. As a consequence, law statements are fallible and liable to revision. Laws and principles give rise to recognizable clusters of two kinds. Generic laws for relations determine six natural relation frames and ten normative ones. Clusters of specific laws form characters and character types for kinds of individual things and events, artefacts and associations, each with their specific nature. In this way, relations and characters complement each other. We shall see that character types can be distinguished with the help of relation frames.

In the history of science a shift is observable from the search for universal laws, via structural laws, toward characters, determining both processes and structures. Even the investigation of structures is not as old as might be expected: structuralism dates from the nineteenth century. In mathematics, it resulted in the theory of symmetry groups, later to play an important part in physics and chemistry. Before the twentieth century, scientists were more interested in observable and measurable properties of materials than in their structure. Initially, the concept of a structure was used as an explanans, as an explanation of properties. Later on, structure as explanandum, as an object for research, came to the fore. During the nineteenth century, the atomic theory functioned to explain the generic properties of chemical compounds and gases. In the twentieth century, atomic research was directed to the specific structure and functioning of the atoms themselves. Of course, people have always investigated the design of plants and animals. Yet, as an independent discipline, biology did not establish itself until the first half of the nineteenth century. Ethology, the science of animal behaviour, only emerged in the twentieth century.

Mainstream philosophy does not pay much attention to structures. Philosophy of science is mostly concerned with epistemological problems (for instance, the meaning of models), and with the general foundations of science. A systematic philosophical analysis of characters is wanting. This is remarkable, for characters form the most important subject matter of twentieth-century research, in mathematics as well as in the physical and biological sciences.

The theory of characters as summarized in section 3 is the most theoretical, if compared with the idea of law (section 1) and the hypothesis of mutually irreducible relation frames (section 2), but it is very important for understanding the dynamic of the creation, which is expressed in an overwhelming richness of species.

It is quite common to speak of the structure of thing-like individuals having a certain stability and lasting identity, like atoms, molecules, plants, and animals. However, the concept of a structure is hardly applicable to individual events or processes, which are transient rather than stable and lack a specific form. A dictionary description of the word structure would be the manner in which a building or organism or other complete whole is constructed, how it is composed from spatially connected parts. In this sense, an electron has no structure, yet it is no less a characteristic whole than an atom. Depending on temperature and pressure, a solid like ice displays several different crystal structures. The typical structure of an animal, its size, appearance, and behaviour depend characteristically on its sex and age, changing considerably during its development. The structure of an individual subject is changeable, whereas its kind remains the same.

A character defined as a cluster of natural laws, values, and norms is not the structure of, but the law for individuality, indicating how an individual may differ from other individuals of the same kind or of a different kind. The character of something concerns its law side, including its structure if it has one. It points out which properties an individual has and which propensities; how it relates to its environment; under which circumstances it exists; how it comes into being, changes and perishes. In this sense, an electron has no structure, but it has a character. Often, a character implies several structures. The structure of water is crystalline below 0 °C, gaseous above 100 °C, and liquid in between.

A character often shares its laws (sometimes expressed as objective properties or propensities) with other characters. It is never a single law, but always a specific cluster of laws that characterizes things or events of the same kind.

A character is not a definition in a logical sense. Although each science needs definitions, theories stating and deriving laws are far more important. One can never be sure of knowing the character of a thing or event completely. Human knowledge of most natural kinds is very tentative and fragmentary, even if it were possible to define them fairly accurately by citing some of their objective properties and dispositions.

Besides characters, character types should be mentioned. An iron atom satisfies a typical character, different from that of an oxygen atom. They also have properties in common, both belonging to the character type of an atom. Because a natural kind is characterized by a cluster of laws partly shared with other kinds, it is possible to find natural classifications, like the periodic system of the chemical elements or the taxonomy of plants and animals. One may discuss the generic character of an atom or the specific character of a hydrogen atom. From a chemical point of view all oxygen atoms have the same character, but nuclear physicists distinguish various isotopes of oxygen, each having its own character. The biological taxonomy from species to phyla corresponds to a hierarchy of character types.

In three ways typical kinds are connected to the relation frames introduced in section 2.


1. Primarily, each kind is specifically qualified by the laws for one of the sixteen relation frames. The universal relation of physical interaction, specified as for instance electric, primarily characterizes physical and chemical things, processes and events. General and specific genetic laws primarily constitute the law clusters valid for living beings and life processes. The psychic relation frame, expressed in goal-directed behaviour, is the primary characteristic of an animal's character.

Each relation frame qualifies numerous characters. A traditional point of view acknowledges only three kingdoms of natural kinds: the physical-chemical or mineral kingdom, the plant kingdom and the animal kingdom. However, the quantitative, spatial and kinetic relation frames characterize clusters of specific laws as well. A triangle, for instance, has a spatial structure, oscillations and waves have primarily a kinetic character, and mathematical groups are quantitatively qualified. Each is characterized by a set of generic and specific laws.

2. Except for quantitative characters, a relation frame preceding the qualifying one constitutes the secondary characteristic, called its foundation. A character is founded in a preceding frame, not directly but in a projection of the primary (qualifying) relation frame. For instance, electrons, being primarily physically characterized, are secondarily characterized by physical magnitudes like their mass and charge, and are therefore quantitatively founded. These magnitudes determine to what extent an electron is able to interact with other physical subjects. Atoms, molecules and crystals have a characteristic spatial structure as a secondary characteristic, being as distinctive as the primary (physical) one.

For each primary type one expects as many secondary types as relation frames preceding the qualifying one. For biotically qualified organisms this means four secondary types, corresponding to projections of biotic relations on the quantitative, spatial, kinetic and physical relation frames. Prokaryotes (bacteria) and some organelles in eukaryotic cells appear to be subject to law clusters founded in a quantitative projection of the biotic relation frame. Being the smallest reproductive units of life, they are genetically related by asexual multiplication, subject to the serial temporal order. In multicellular organisms, eukaryotic cells operate as units of life as well, but eukaryotic cell division starts with the division of the nucleus, having a prokaryotic structure. The character types for eukaryotic cells, multicellular undifferentiated plants, and tissues in differentiated plants are founded in symbiosis, being the spatial expression of shared life.

3. So far the description of character types is parallel to Dooyeweerd's analysis of structures of individuality. This does not apply to the tertiary characteristic of a character, the natural tendency or affinity of a character to become interlaced with another one. This is not a property, but a disposition. Dooyeweerd calls this phenomenon 'enkapsis', but he does not treat it as a disposition. This structural interlacement occurs either because the individuals concerned cannot exist without each other (a eukaryotic cell cannot exist without its nucleus and organelles, and vice versa) or because an individual has a natural tendency to become a constitutive part of another one, in which it performs an objective function. Whereas the secondary characteristic refers to properties, the tertiary characteristic is usually a propensity. A particular molecule may or may not have an actual objective function in a plant, yet the propensity to exert such a function belongs to its specific cluster of laws. Structural interlacement makes characters extremely dynamic.

In physics and chemistry, the characters of atoms and molecules are studied without taking into account their disposition to become interlaced with characters primarily characterized by a later relation frame. But biochemistry is concerned with molecules such as DNA and RNA, having a characteristic function in living cells. Like other molecules these are physically qualified and spatially founded, witness the double-helix structure as a characteristic property of DNA. But much more interesting is the part these molecules play in the production of enzymes and the reproduction of cells, which is their biotic disposition.

Hence the specific laws for a physical subject like a molecule, taking its propensities into account, determine not only its structure and physical-chemical interactions, but also its dynamic meaning. The theory of interlacement avoids both reductionism (stressing the secondary, foundational properties of things) and holism (emphasizing the tertiary functions of things in an encompassing whole).

Although there is a large abundance of natural kinds, not everything has a specific character. For instance, a gas like oxygen has a typical character, but a mixture of oxygen and hydrogen does not. In this respect the theory of characters is less general than the theory of relations.


15.4. Evolution of natural characters


Billions of years ago the then hot universe only contained two elements: hydrogen and helium. Now in terrestrial circumstances more than eighty stable elements are known, with countless compounds. The chemical evolution took place in stars and planets in natural processes over billions of years. The phenomenon of the emergence of new characters plays an important part in the natural evolution of the astrophysical universe; of stars and planets; of the chemical elements and their compounds; and of the living world. It should be understood as the realization of characters as sets of laws that were potentially but not actually valid before. The subjects of invariant characters come into actual existence if the circumstances permit it. Assuming that natural laws do not change, evolution occurs at the subject side of natural characters, not at their law side. Yet natural evolution is not a completely random process, but lawful dynamic development towards an open and ever more varied future.

Being clusters of natural laws, characters do not evolve, but their subjects do. This does not appear to pose a problem to the astrophysical or the chemical theory of evolution. The characters of physical and chemical things and events like molecules and molecular processes are supposed to hold for all times and places, taking into account the circumstances. Evolution in the organic world is a random process with natural selection as a dynamic force. Genetic relations and sexual reproduction constitute equally important engines of evolution. These engines push evolution at the subject-and-object side. At the law side, the characters to be realized pull the evolution. It is restrained by the laws determining characters, which are gradually realized in populations of living beings. Evolutionists are inclined to neglect this dynamic pulling force, because they consider evolution to be a purely random process.

With respect to physical and chemical characters like those of atoms and molecules, everybody seems to accept that at the law side these do not change, but are realized at the subject-and-object side when circumstances like temperature and other initial and boundary conditions are favourable. Biologists assume that the evolution of populations occurs within each species, and occasionally between species, such that new species arise. This micro-evolution fits very well into the assumption that a species corresponds to a character. However, macro-evolutions like the emergence of eukaryotes from prokaryotes; of multi-cellular eukaryotes; or of plants, animals and fungi, remain unsolved problems.

Whereas for physical and chemical characters specific laws are sufficiently known, this is not the case for biotic species. On a higher taxonomic level, about 35 contemporary animal phyla are known, each with its own body plan. A body plan may be considered a morphological expression of the law for the phylum. It is a covering law for the characters of all species belonging to the phylum. These phyla manifested themselves almost simultaneously (i.e., within several millions of years) during the Cambrian radiation, about 550 million years ago. Afterwards, not a single new phylum has arisen, none has disappeared, and the body plans have not changed. The evolution of the animal world within the phyla (in particular the vertebrates) is much better documented in fossil records than that of other kingdoms.

Lawfulness and chance are both conditions for natural evolution to be an open process, whose past can be investigated but whose future cannot be predicted. Critics state that chance is at variance with lawfulness. In fact, stochastic processes can only occur on the basis of existing characters. Biotic evolution starts from physical characters, and always dynamically builds on previously realized biotic characters. Psychic characters of behaviour have a physical and an organic basis. Chance plays an important part in the reproduction of plants and animals (and therefore in natural selection), but far less in their development after germination, which is a much more regular process.

As in the astrophysical, chemical, and biotic evolution, the dynamic evolution within the animal world requires a random push and a lawful pull. The pull is the character of the emerging animals. The push is sexual reproduction, in which the animals concerned take an active part in choosing their mates, but whose result is still largely random, although much less so than in plants and fungi.

The insight that natural characters realize themselves successively by evolution belongs to the prevailing scientific worldview. It should not be identified with evolutionism, a reductionist, naturalist and materialist belief that applies the concept of evolution indiscriminately to everything.

Whereas Baruch Spinoza and Albert Einstein identified God with nature or with natural laws, naturalists replace God by nature, attempting to explain everything by natural causes, reducing all regularity to physical laws and natural evolution.

Some extreme naturalists cross the boundary between a worldview and a religion, by positing that science proves that there is no supernatural origin of reality and of its lawfulness. Evolutionism assumes that the theory of evolution provides not merely a necessary, but also a sufficient explanation for the emergence of the living world.

Evangelical creationism is proposed as a Christian alternative to a supposed atheist or agnostic evolutionism. Foundational creationism uses biblical texts as reliable data for scientific theories, as an authoritative source for empirical knowledge. It rejects the view that evolution offers a necessary explanation for the rise of humanity, considering the biblical text as both necessary and sufficient. Whoever rejects this biblical exegesis is therefore not necessarily committed to atheism or evolutionism. Rejecting creationism, many Christians and other believers accept the theory of evolution as a minimally necessary but not sufficient explanation of the emergence of humanity from the living world. Such an explanation may be suggested by comparing natural behaviour with normative acts.


15.5. Natural behaviour and normative acts


Christian philosophical anthropology ought to dissociate itself from naturalistic evolutionism, which considers a human being merely as a natural product, no different from any animal, stating that the evolution of humanity from the animal kingdom should be explained entirely in a natural scientific way. On the other hand, Christian philosophy does not need to object to the hypothesis that humanity emerged from the animal kingdom. The evolution of humankind, like the evolution of plants and animals, occurs partly according to natural laws, providing a necessary, though by no means sufficient explanation for the rise of humanity. There is no reasonable doubt that human beings, as far as their body structure is concerned, evolved from the animal world. However, for a sufficient explanation one has to take into account normative principles irreducible to natural laws.

The natural behaviour of humans and animals differs from the normative acts performed by people in freedom and responsibility. The assumption that people have a position in the animal world does not mean that they are psychically qualified. The human body differs in many respects from the animal one. The size of the brain, the erect gait, the manoeuvrability of the hand, the absence of a tail, and the naked skin point to the unique position of humanity in the surrounding world.

Animals are qualified by their goal-directed behaviour. People too behave naturally, but this is dynamically opened up by normative acts. Each individual act starts internally, within the boundaries of one's bodily and spiritual existence, as an intention. This is based on experience from the past, on imagination of the present, on consideration of possible future consequences, and on the will to achieve something. After having reached a decision, someone actualizes this intention in a deed outside body and mind, in a subject-object relation or in a subject-subject relation. For their deeds people are responsible. Each act is qualified by one of the normative principles, being intuitively known to all. Actual acts are determined by norms derived from the normative principles, which leave space for the freedom and responsibility of the acting person.

The philosophy of the cosmonomic idea assumes that animals do not function as subjects in the post-psychic aspects. Because it considers the logical aspect to be the first normative aspect this means that animals can only be objects of thought, not subjects. However, some higher developed animals like the mammals share a limited measure of natural thought with people. It is meaningful to distinguish natural thought from conceptual or theoretical thought, being instrumental and exclusively human (section 8). In theoretical thought people take distance from whatever they think about. In contrast, natural thought is a direct subject-object relation. Conceptual thought implies the formation of concepts, propositions, and theories, and is therefore natural thought opened up by the formative, semiotic and logical aspects.

In other ways, too, natural behaviour is opened up by the normative relation frames. This is fully the case with people, but in a restricted sense also for animals. Birds build nests and beavers construct dams. Mating behaviour often makes an aesthetic impression. Some animals display a primitive use of language. The significance of the dance of bees is well-known. Birds warn each other of danger. In groups of apes recognizable communication has been established. Many animals show social behaviour: bees, ants, birds during the annual migration, mammals in herds, ape families, etc. Sometimes a restricted kind of division of labour, economic behaviour and leadership is recognizable. In the breeding season a primitive kind of care is observable in various kinds of animals.

This subjective animal behaviour in the post-psychical relation frames is always primitive and instinctive. It is genetically determined, characteristic for the species, retrocipating, never anticipating. All post-psychic behaviour of animals serves their biotic and psychic needs, in particular the gathering of food, reproduction and survival of the species. In contrast, human acts are opened up, transcending heredity, anticipating, and ultimately religious.

The starting point of any Christian philosophical anthropology should be that people are called from the animal world in order to command nature in a responsible way, to love their neighbours, and to honour their God. People are called to promote the good and to fight evil, in freedom and responsibility. Science and philosophy cannot explain this calling from natural laws, but it is an empirical fact that all people experience the calling to do good and to avoid evil. This fact is open for scientific archeological and historical research.

The question of when this calling took place can only be answered within a wide margin. It is comparable to the question of when between fertilization and birth a human embryo becomes an individual person, with a calling to be human. The creation of humankind before all times, including its functioning as God’s image, is different from its realization in the course of time. In contrast to the first, the latter can be dated, albeit within wide boundaries.

By leaving the animal world humanity took an active part in the dynamic development of nature. The opening up of the windows on humanity concerns all natural relation frames and the characters qualified by these. People extend their quantitative, spatial, kinetic, physical, biotic and psychic relations with other creatures, as well as with each other.

Whereas ethology studies animal behaviour, ethics is concerned with human acts qualified by the normative relation frames following the psychic one. People have the will to labour or to destroy; to enjoy or to disturb a party; to understand or to cheat; to speak the truth or to lie; to be faithful or unreliable; to keep each other’s company in a respectful or in an offending way; to conduct a business honestly or to swindle; to exert good management or to be a dictator; to do justice or injustice; to care for or to take advantage of each other’s vulnerability. The various virtues and vices express the will to do good or evil in widely differing circumstances. The will to act rightly or wrongly opens the human psyche towards the normative relation frames. The desire to act freely and responsibly according to values and norms raises men and women above animals, a human society above a herd.

By distinguishing natural laws from values and norms, Christian philosophical ethics accounts for human freedom and responsibility. No less than animals, people are bound to natural laws, being coercive and imperative, though leaving a margin of randomness. Like natural laws, values or normative principles are given by the Creator as dynamic conditions for human existence, but human beings are able to transgress these commandments. For instance, people ought to act righteously, but they do not always behave accordingly.

Normative principles cannot be derived from human existence as such, as if there were first human beings with their activities, and only afterwards morality. Each fundamental value is a condition for human existence in its rich variety.

Whereas different kinds of animals can be distinguished from each other by their genetically determined behaviour subjected to generic and specific natural laws, human activity opening up natural behaviour is relatively free and responsible, however much someone's personal situation in their environment and their relations with other people may restrict their freedom to act. It is a generally held assumption that human beings are to a certain extent free to act, and therefore responsible for their deeds. Although this confirms common understanding, it is an unproved and perhaps unprovable hypothesis. Naturalist philosophers denying free will cannot prove their view either, but because they contradict common sense, they should carry the burden of proof.


15.6. Values and norms

for human acts and relations


Although they are not coercive, the normative principles appear to be as universal as the natural laws. Since the beginning of history people have been aware of being able to obey or disobey these principles, something neither people nor animals are able to do with respect to natural laws. Moreover, they discovered that the normative principles are not sufficiently articulated. In particular the organization of human societies required the establishment of human-made norms as implementation or positivation of normative principles. Therefore the idea of human freedom and responsibility has two sides. At the law side it means the development of norms from normative principles, which norms differ between historical periods and places, and between cultures and civilizations. At the subject-and-object side individual persons and their associations are required to act according to these norms, such that freedom and responsibility can be warranted.

Historical development takes place in all normative relation frames, not merely at the subject-and-object side (as is the case with natural evolution), but at the law side as well. In natural evolution heredity functions as a driving force; learned properties cannot be inherited. In each normative relation frame, by contrast, an asymmetric subject-subject relation functions as a dynamic engine of history, driving the transfer of experience, for instance from one generation to the next.

The normative relation frames qualify the characters of both typical objects or artefacts and of typical subjects called associations. These will be mentioned occasionally in section 6, and will be discussed in more detail in sections 7-10.

Being free and responsible images of God, men and women do not satisfy a specific character as described in section 3, any more than their acts do. Their individual character is their attitude towards the law side of the creation, both natural and normative.

This section surveys the normative relation frames. Both their number and their order are tentative and disputed.


1. The progressive development of culture and civilization started and continues with skilled labour. After the natural relation frames this is the first normative one, called the historical or cultural aspect by Dooyeweerd, who positioned it after the logical one. Initially the philosophy of the cosmonomic idea paid little attention to technology, but an interesting group of reformational philosophers of technology has since appeared. Anyone ought to perform their work according to their skills. Progress may be considered the temporal order for technical development. An event, process, artefact or association as well as a person may be called historical if contributing to or hampering progress. During the nineteenth century, progress was not viewed as a normative principle, but as an inevitable factual feature of Western history. This optimistic view was shattered during the First World War. The dynamic engine of technical progress is the transfer of practical know-how and skills, from parents to their children in households; from skilled to untrained labourers in workshops; and from teachers to pupils in schools.

Technical artefacts like tools are instruments in the history of tilling the earth, the opening up of the natural characters and their subsequent technical development. The character of a technical instrument is its design, the set of natural laws and norms the apparatus should satisfy. Technical artefacts are primarily characterized by the technical relation frame and secondarily founded in one of the natural frames. They function as typical objects in the transfer of technical skills, or in a technical subject-object relation, in which a skilled subject (an individual or an association like an enterprise) may be its designer, its producer or its user. Technical progress as expressed in the dynamic development of many kinds of technical artefacts is an important part of historical research. Besides, all natural subjects (things, plants, animals) may be objects for technical development. By their skilled labour with the help of technical instruments, people develop natural characters. The religious calling of mankind is to till and preserve the earth in a responsible way.


2. Like Calvin Seerveld but unlike Herman Dooyeweerd, I believe that the aesthetic relation frame succeeds the technical one. The Greek word technè and the Latin ars mean not only technical ability, but art as well. Labour is a prerequisite of art and play.

History is usually divided into periods according to a dominant style, the normative law for aesthetic phenomena like fashion, decoration, plays and the fine arts. Aesthetic artefacts like a piece of art, a musical performance or a football match are subjected to the style of the time, and are instrumental in the transfer of aesthetic experience from an artist, an orchestra or a football team to their audience or spectators. By making images persons show themselves to each other and to their God. Religion finds its aesthetic expression in the cults, in the epiphany of God.

For the transfer of the aesthetic experience of beauty people use artefacts like novels and other pieces of art, as an important contribution to the dynamic development of the creation. The production of aesthetic artefacts requires specific technical skills. In each piece of art or performance, the perspective of the spectator, auditor, or reader plays an important part, constituting a weighty criterion for judging its quality. The artists determine the perspective and the spectators follow them.

The products of the performing arts are subject to a specific set of human-made norms, like the text of a play, the choreography of a ballet, the score of a piece of music, or the script of a movie, having the character of a prescription. Although the performers are bound to the text, they are free to find their own interpretation, as long as it testifies to their aesthetic skills.


3. An important engine of dynamic development is the human ability to remember and to make sense of things and events, and to communicate these with each other. Information is a semiotic form of the transfer of human knowledge. The common name for a semiotic object is a sign, but the semiotic frame does not necessarily qualify a sign. For instance, a fossil is a sign of a formerly living body, and is therefore qualified by the biotic relation frame. In contrast, a human-made semiotic artefact is usually called a symbol. A rainbow is a sign that it is raining while the sun shines, whereas the Bible interprets it as a symbol of God’s covenant with the world. For the transfer of semiotic experience a language forms an important instrument. Without language, the individual memory of people would be as limited as animal memory. The use of language, both oral tradition and written texts, forms the basis of shared memory and remembered history.


4. ‘Logic’ is derived from the Greek logos, meaning word or conversation rather than reason, which derives from the Latin ratio. Nevertheless, logic is the name of the science of reasoning, of analysis and synthesis, of drawing conclusions. The logical relation frame concerns the relevance of argumentation as a universal value for humanity. Everything we want to know, anything that presents itself to our experience, is an object for our reasoning. The ratio of history consists of finding logical connections between events and their consequences, the explanation of recorded historical events based on earlier events, circumstances and human intervention.

Reasoning always concerns the solution of a problem. In part, history consists of imagining and solving new problems, increasing rational insight. By generating and solving problems and communication of their solutions people create a rational order in their environment.

Apparently, rationality is concerned with ‘thinking about ...’, but this emphasizes the subject-object relation too much. Whoever wants to put the subject-subject relation to the fore may observe that logic concerns the discussion between two logical subjects, attempting to achieve agreement about something on which their opinions differed before. This can be done either in a direct manner, or indirectly, in an abstract, objectifying and theoretical way. A discussion is subject to the law of excluded contradiction.

Continuously people confer with each other, exchanging information and drawing conclusions for the future. The logical engine of history is the dynamic transfer of rational knowledge and insight, with logic as an instrument to analyse past events and predict future ones. Logical extrapolation, as in prediction, explanation and rational choice, is subjected to the logical temporal order of prior and posterior, in which a conclusion follows from premises.


5. So far the order of the relation frames is the same as Seerveld’s, but I think that faith succeeds reasoning. With Dooyeweerd this is the final modal aspect, opening a window on eternity, which I believe to be a prerogative of religion.

Whereas the meaning of language is to speak the truth, and the meaning of logic is to prove statements to be true, by their own force these cannot arrive at reliable truth. To arrive at certitude people must be convinced of the validity of their arguments. Acts of faith are characterized by the mutual trust of people and their trust in all kinds of objects and in their God. The temporal aspect of this universal value is expressed in the wish to reform the world while preserving what is good.

Artefacts like myths, confessions, party programs and mission statements play an instrumental part in the reform of views and the transfer of beliefs. Often these lie at the foundation of associations, in particular but not exclusively of faith communities. Being narratives, myths appear to be founded in the semiotic relation frame. Confessions and dogmas (often established after a theological investigation) appear to be founded in the logical frame and icons in the aesthetic one.


6. People seek company in unorganized communities with a network structure and in organized associations with some kind of authority. The home base of education and nurture, the nuclear family (or its replacement) educates children to keep each other’s company and that of others. Education serves as the dynamic engine of integration, the temporal order for the relation frame of companionship.

In this relation frame habits or customs play an instrumental part in education, the transfer of how to act as a civilized person in any company. Integration is not restricted to children, however. Emancipation is a candidate for expressing the historical meaning in this relation frame. Reverence is the leading social motive in the religious intercourse with God.


7. Whereas each animal kind is specialized in its Umwelt, human beings are able to perform many different tasks. In the economic relation frame the normative order is best described as differentiation, without which economic acts like the exchange of goods or services would make no sense. Mutual service is the dynamic engine of economic differentiation. The service of God expresses religion in the economic aspect of human existence.

As far as it can be owned and sold, anything may be an economic object without being economically qualified. The most obvious economic artefact besides capital and contracts is money as an instrument for trade, the transfer of services and commodities made possible by the economic division of labour.


8. Keeping peace, good government, accountability, and democracy or participation are universal political values, not reducible to one of the other relation frames, not even the frame of justice as Dooyeweerd assumes. At the subject-and-object side it means giving and accepting leadership as an asymmetric engine of dynamic development.

A state law is a human-made artefact qualified by the political relation frame, serving as an instrument in leadership and discipline, the transfer of policy. Peace is the historical meaning of this relation frame. In a religious sense, anybody should be obedient to God. This means that neither leadership in an association nor that association’s sovereignty in its own sphere can ever be absolute, because it always concerns a mandate derived from the supreme Sovereign.


9. In order to open the future, justice meets history as the unfinished past. The past cannot be undone, but sometimes one can do something about its consequences. The history of civilization means not only integration, differentiation, and policy, but also correcting events, restoring order, compensating wrongdoing, rectifying an incorrect news item, as well as repairing a defunct apparatus, restoring a piece of art, or reconstructing a document: all being acts of justice leading to conceptions of what is right or wrong, a legal order.

A human right or duty is an artefact qualified by the juridical relation frame. Customs determined by the relation frame of keeping company, economic contracts and state laws have juridical consequences, playing an important part in the transfer of justice.


10. Each human being and everything created or human-made is vulnerable and is therefore in need of care. People have always tried to diminish their vulnerability, to become invulnerable, independent, autonomous. Besides being related to others, each person also depends on other persons, on their environment and on God. The care for one’s fellows, compassion, misericordia or pity means showing respect for people who suffer or are hurt, knowing oneself to be vulnerable. Contrary to loving care, people may take advantage of each other’s vulnerability, by insulting, robbing, dominating, doing injustice, maltreating or murdering. The denial of mutual dependence leads to the fall into sin.

The care for vulnerable people like widows, orphans and the poor belongs to the nucleus of the Gospel. The miracles wrought by Jesus and his disciples according to the New Testament do not testify to God’s omnipotence (Jesus rejected this emphatically when tempted by the devil), but to his care for vulnerable people. The gospels do not present Jesus as an almighty magician, but as a healer. The early Christians expected the end of time to be imminent. They were not concerned with the politics of the government, but they developed a new life style and new ways of living together, characterized by love for one’s neighbour, mercy, charity and care for vulnerable people.


15.7. Growing technical ability


The history of mankind is stimulated by the invention and spread of human-made devices, both typical objects and typical subjects. In contrast to acts both have a generic character and a variety of specific characters.

Typical objects will be called artefacts, typical subjects associations. Whereas the character of a natural thing or process is defined as a cluster of natural laws, the character of a human product consists of values (normative principles) and norms besides natural laws.

Because people are free to develop their own norms from invariable normative principles, the variability of the characters of artefacts and associations is quite large. Abstracting from norms one finds a much more restricted set of character types. These types do not depend on culturally and historically variable norms, but only on natural laws and normative principles, both supposed to be invariant existential conditions for created reality. Character types are no more variable than the natural laws and normative principles of which they consist.

Associations and artefacts corresponding to character types function in any normative relation frame as typical subjects or objects respectively. Each type is primarily qualified by one of the normative frames. It is secondarily founded in a projection of the qualifying frame on a preceding normative or natural frame. Tertiarily, each artefact has the disposition to be interlaced with another artefact. The same applies to associations.


As products of skilful labour, artefacts are either primarily or secondarily characterized by the technical relation frame. Artefacts primarily characterized by technical labour have a singular character, secondarily characterized by one of the natural relation frames. In contrast, artefacts primarily characterized as objects by one of the succeeding relation frames have a dual character, an interlacement of a generic and a specific character. The generic character, distinguishing for instance an aesthetic artefact from what is not an aesthetic artefact, is secondarily characterized by the technical relation frame, because all artefacts are human-made, requiring technical ability to make and handle them. The specific character type distinguishes various types of, for example, aesthetic artefacts from each other. It is primarily characterized by the same relation frame as the generic character, but secondarily by a preceding one (not necessarily the technical one), and tertiarily by any relation frame. In this way one distinguishes between, for instance, music and painting, both primarily characterized by the aesthetic aspect, both requiring technical craft of an appropriate kind, but otherwise quite different.

We can now define an artefact as the collective name for any human-made object of human acts, a product having a typical structure primarily characterized by one of the normative relation frames. This is a much wider definition than that applied in technology, where artefacts are technical products, or in archaeology, where artefacts are human-made material remains. Artefacts or constructions are often not primarily technical, and by no means always material. In each relation frame artefacts are distinguished from other objects which are not typically characterized by that relation frame.

A painting, for instance, is a material aesthetic artefact. It is an object characterized by the aesthetic relation frame, an instrument in one’s aesthetic experience. As such it is not an economically qualified artefact, although it can clearly be an economic object. In contrast, its proceeds at an auction are an economic, immaterial artefact, a price established by a buyer and a seller. The price of a painting is primarily characterized not by aesthetic but by economic relations, and only secondarily by its aesthetic quality, rarity, and so on. The price of a painting has a quite different history than the painting itself has as an aesthetic artefact.

The characters of artefacts are in part determined by natural laws, limiting many possibilities. For another part they are determined by norms, expressing ethical conditions for the production, quality, and use of artefacts. Artefacts are not always things. Human-made events and processes (including the invention, design, production, and use of artefacts) are artefacts too.

Artefacts are not merely relevant for the relation frame by which they are characterized. They play an objective and instrumental part in all normative relation frames. For instance, without language all social relations, commerce, government, and justice would be impossible. In this way, artefacts have a very open and dynamic character.

Artefacts function as instruments in the transfer of experience in asymmetric subject-subject relations. They are subjected to the normative order of time in the relation frames by which they are characterized. Because the technical relation frame characterizes all artefacts either primarily or secondarily, artefacts should at least satisfy objectively the historical norm of progress. Therefore artefacts have a history of their own, constituting an important instrument for historiography as the interpretation of signs from the past. Indeed, each artefact is an objective sign of the dynamic history of human acts by subjective producers and users. Artefacts are objective witnesses of the past.

Artefacts sustain human experience, just as sensory experience is sustained by various kinds of instruments and human labour by tools. Several examples of artefacts have already been mentioned. Because of their relevance for epistemology, in the next section more will be said about some typically lingual and logical artefacts, neither of them being material.


15.8. Lingual and logical artefacts


A language is a dynamic semantic artefact defined as a set of words (a vocabulary) subjected to grammar and semantics, pronunciation and spelling, together acting as the specific character for the language. The rules for a language are not coercive but normative. According to the grammar, words are transformed and connected into sentences, which in turn are combined into narratives or texts. Semantics determines the meaning of words in the context of a sentence and a text. The generic character of any lingual form is primarily qualified by the sign aspect and is secondarily founded in the technical one, in lingual skills. The specific character of a word is secondarily founded in the quantitative aspect. Words are the elementary units of a language, alphanumerically ordered in a dictionary, in which words are not logically defined but described by other words. A sentence is founded in the spatial relation frame, for in a sentence the words find their position determined by syntax. A narrative or a text is kinetically founded, for it consists of a flow of sentences according to a plot.

Whereas language is ambiguous, inviting interpretation, logic wants to hear arguments. In order to find out whether the truth of a statement can be proved, one has first to establish its semantic meaning. If we interpret the sun as the celestial body occupying the centre of the planetary system, the statement ‘she is the sun of my life’ cannot be true. Everybody will understand that the sun here has a metaphorical meaning, interpreted differently than in astronomy. Metaphoric expressions are not logically true, but are significant. They provide insight, but cannot function in a proof. Logical reasoning presupposes the use of language, but cannot be reduced to it.

In logical reasoning, people make use of two different methods. The first is part of natural experience, which is much more than logical. Natural thinking is a direct relationship, not taking distance to the object of reasoning. It is no less rational than the second method, conceptual or theoretical thought, in which a thinking subject opposes its object. Applying logical artefacts, this detachment includes methodological isolation and idealization.

Such an opposing and therefore critical attitude does not occur in theoretical thought only. It occurs whenever human beings leave natural experience, by putting an artificial instrument between themselves and their object. A telling example is how people extend their sensory abilities by using a telescope or a microscope. In this case, too, one assumes an opposing attitude, creating distance, and narrowing one’s vision. One sees further, but one’s field of view is diminished. The other senses (hearing, smelling, tasting, touching) are set apart. The observed object is more or less abstracted from the coherence in which it functions. This distance-taking attitude is absent in the natural experience of people as well as in the functioning of animals. It allows people to take part in nature and to keep distance from it simultaneously.

In contrast to natural thought, conceptual or theoretical reasoning argues with the help of logically qualified artefacts, like concepts, statements and theories. Often these are experienced as abstract, posing higher demands than lingual artefacts like words, sentences and texts. Nevertheless, besides science and philosophy, ordinary life and literature apply them often. A theory is an artefact: people make, invent, improve, apply or reject theories. Theories are used as instruments of thought to form concepts and to prove statements. Often the results of theoretical thought have a strained relation with natural thought, contradicting common sense. For this reason, a theory requires proof. But in practice, theoretical thought is never separated from natural thought. Theoretical activity requires common sense and intuition as well as logical skills.

The Greek word theoria means something like contemplation. The word theatre is derived from it. Often an unproven hypothesis is called a theory. However, the earliest Greek philosophers already connected theoria to delivering proof, to deductive argumentation.

Fundamentalist philosophers assume that a theory should start from well-known and generally accepted evident truths, in order to derive initially unknown statements. Fundamentalism or foundation thinking is any ideology supposing people to have at their disposal sources of absolute truth not open to critical empirical research. Examples are the rationalist view that the axioms of a theory should be self-evident, making theoretical thought autonomous; the positivist view that unbiased observations provide an undeniable source of truth; the firm belief of many philosophers that the laws of logic are inescapable, for people and for God as well; the authoritarian view ascribing authority to the utterances of great scientists or world leaders; and religious fundamentalism deriving scientific data from a holy text. A non-fundamentalist scientific world view rejects the pretension of science to be capable of leading to absolute truth. Critical realists believe that a scientific theory should start from new and daring hypotheses, arriving at empirically testable conclusions by logical reasoning.

Characteristic of a theory is rendering proof, the logical deduction of theses from premises. If the proof is correct and the premises are assumed to be true, then one ought to accept the derived statements as true as well. A theory is not strictly objective, but is accepted and used by one or more persons, individually or in social groups like the community of physicists. They accept some statements to be true in order to prove others.

Each theory consists of statements or propositions containing concepts. Theoretical concepts serve to identify things, events, processes and relations, and to establish similarities and differences. They form the base of theoretical analysis, of logical identification and of classification. A concept refers to a class of similar things and to differences between classes. Therefore the character of a concept is primarily characterized by the logical relation frame and secondarily by the quantitative frame. According to the logical law of identity, each thing and every event is identical with itself and distinguishable from other things or events. In the course of a logical argumentation one cannot with impunity change the identity of objects to be discussed.

A concept is introduced into a theory by presenting a definition (which is a statement). The view that a definition automatically leads to the existence of the defined object, implied by the identification of thinking with being, is an essentialist fallacy. The weaker view, that one has to lay down the significance of a concept once and for all, contradicts scientific practice. A dynamic theory deepens and clarifies the significance of a concept during the theory’s development. This means that the initial definition may be adapted, of course without causing contradictions within the theory. In various theories a concept may have different meanings, for the meaning of a concept depends on its context. A fundamentalist axiom of logical empiricism was that empirical concepts should be definable independent of any theory. Historicists accepted the other extreme, assuming that a concept is entirely dependent on its context. Critical realism takes an intermediate position: concepts, statements and theories have a relative autonomy with respect to each other, meaning that different theories are comparable.

The logical function of a theory is to establish the truth of theses by connecting these deductively to other statements whose truth is accepted. However, each statement itself already makes logical connections, both between concepts and between the objects signified by the concepts. Therefore, the character of a statement is primarily characterized by the logical relation frame and secondarily by logical connection, a logical projection on the spatial relation frame.

Whereas concepts appear to be founded in quantitative relations, and statements in spatial connections, theories are founded in deduction, the logical movement from one statement to another. The possibility of interlacing these logical artefacts with each other allows for the opening up of human insights about reality, contributing to the dynamic development of humanity.

This analysis leads to an investigation of dynamic processes like observation, experiment, data gathering, prediction, explanation, problem solving, finding and formulating laws, the systematization of knowledge, and its application in practical situations. In the course of history, several of these have been singled out as the foremost aim of science, but it appears that science derives its relevance from its diversity.

Theories have little use if they are not based on experience. Their axioms should reflect laws, and their propositions should be suggested and confirmed by empirical research. In the physical sciences experiments form a forceful instrument for investigating reality, both to find laws and to test theoretical results. Also in the other sciences, in which the experimental method is less applicable, empirical methods like observation and statistics play an increasingly important part in relating theories to human experience. Indeed, science is much more than theoretical thought alone. Its multiple applicability testifies to its reliability, but not to its infallibility.


15.9. Increasing socialization


After having discussed artefacts as typical objects, we now turn to associations acting as typical subjects besides individual persons. The distinction between organized and unorganized social connections is very relevant for social philosophy. An unorganized group of people without leadership will be called a community. Instances are a lingual community, a nation or people, a social class or caste, a culture or a civilization, a party during a reception or the public during a concert. A community has a social coherence, forming an intersubjective network, often sustained by an objective network. For instance, the international lingual community is a subjective network requiring intertranslatable languages. It is divided into specific lingual communities of people speaking and writing the same language. The subjective semiotic network of people communicating with each other is based on an objective network of lingual acts; on signs, symbols and lingual artefacts like words and sentences; as well as on technical networks like telephone and internet. However, it is not an organized whole.

An organized social group with leadership and members will be called an association. It is also called a corporation, a company, or an institute. As an organized whole an association has authority at the law side and discipline at the subject-and-object side. Its board (whether monocratic or collective) is empowered and entitled to determine the course of affairs within the association and to represent it externally. It acts on behalf of the association as a subject in all relation frames. Any association has members, sometimes called citizens (of a state) or employees (of an enterprise or a school). Some associations, like the European Union, have associations as members.

Like individual persons, but contrary to unorganized communities, associations act as subjects in all relation frames. An association has its own continuous identity, independent of the identity of its members. It maintains its identity when members leave the association and when members of the board resign. It has its own character, it is actively subjected to normative principles and it is involved with their realization into norms. Usually, the authority is restricted to members of the association (and to the objects possessed by the association) and within the association by the freedom and responsibility of its members.

Like many individuals, an association has a name and address. A flag, logo, or ideogram, and a mission statement symbolize the association’s identity. Members identify themselves with the association, which socializes them. In a household any member should feel at home. As a metaphor this is also said of other associations. Immigrants are supposed to do their utmost to strike root in their new country.

The board has a restricted and temporal competence to act with authority within and on behalf of the association. Its authorization rests on the recognition by the members, on discipline. The association cannot long exist if its board loses the respect of its members, for instance by neglecting to consult them. Moreover the members of an association ought to have respect for each other, expressed by mutual solidarity and a sense of communality, by social connectedness. Otherwise the association falls apart sooner or later. These are normative principles, which not every association satisfies. Sometimes an association only exists by the grace of the exertion or threat of violence. This may occur in a state, a criminal gang, or a terror group, and also in a household.

Both individual persons and associations act as subjects in all relation frames, but contrary to human beings, each association has a specific character. It is primarily characterized by one of the normative relation frames: a household by labour; an orchestra by aesthetics; a publisher by semiotics; a university by logic; a church by faith; a pub by social intercourse; a bank by economy; a state by its policy; a court by justice; and a hospital by care. Each of these is secondarily founded in a preceding relation frame, like a family in biotic descent and a church in the aesthetic celebration of faith, the cult. According to its tertiary character, an association can be interlaced with other associations, like a factory in a commercial enterprise, a canteen in a school, or a choir in a church. The dynamic of associations requires an increasing professionalization of their members.

Besides its specific character, as an organized whole each association satisfies a generic character, the same for all associations. This accounts for the many organizational similarities of specifically widely different corporations. The generic character of an association is primarily characterized by the political relation frame (because it has leadership, taking decisions binding for the association) and secondarily by the frame of companionship (because it has members).

For most associations the specific character is qualified by a different relation frame than the political one, for instance the character of the church by the frame of faith and the character of an enterprise by the economic frame. Only the character of the republic as the guardian of the public domain appears to be qualified both specifically and generically by the political relation frame. This confirms that the state is the most political of all associations and explains why politics is often exclusively connected to state affairs. Yet each association has its own internal authority and policy.

Since the sixteenth century, Protestants have argued and practised that associations belong to character types of their own; that these types are irreducible to individual or collective interests; that associations are not subordinate but coordinate; that each person belongs to several associations; that there are no all-encompassing associations; and that several mutually irreducible character types of associations can be distinguished. There is no better warrant for freedom than this Protestant view of a civil society.

This principle of sovereignty in its own sphere is a social principle, characterized by the way people ought to cope with each other and with associations. It is also a political principle, for it indicates that an association does not derive its authority from other associations but from the creation order, from God’s sovereignty, such that it can never be absolute. It is not an organizational principle, like the principle of subsidiarity, stating that a higher organ should not do whatever can be done better by a lower organ. Each association has an internal organization with its own dynamic, especially in large associations having an economic character (because of the internal division of tasks). The principle of subsidiarity is applicable when the organization is layered.

Sovereignty means a form of authority. Therefore sphere sovereignty is only applicable to associations, not to unorganized communities. It does not mean that each association is autonomous, independent of other associations. In fact, associations form many kinds of networks in which they cooperate in order to achieve their aims. The meaning of sphere sovereignty is that each form of authority is restricted, in particular that of the state. It furthers the freedom and responsibility of individual persons. Because individuals may belong to different associations, they can alternately be leaders in one and subservient members in another association.

Both individual persons and associations are actors on the public domain.


15.10. The state

and the public domain


Chapter 10 describes the character type of the state. Each actual state has its own character, shaped in history, but some characteristics can be indicated to which each state ought to conform during its dynamic development.

The character type of the state is dual. Generically the state has the character of an association as discussed in the preceding section. Considered as an association, any state’s generic character is qualified by the political relation frame and is founded in the frame of social intercourse. The state’s members are its citizens, but who ought to be called citizens has always been disputed. Sometimes a state has no individual citizens: a corporative state has associations as members. The state may also determine a people or nation as a community. In the Greek polis, women and slaves were not citizens, although they were inhabitants of the state. Since the nineteenth century, romantic nationalism has tried and failed to found the character of a nation on ethnicity, race or language, at the expense of many wars and much oppression. Nowadays nationality merely means belonging to the state, requiring national solidarity: patriotism instead of nationalism.

Like any other association, the state ought to be subject to justice. If that is sufficiently the case, we speak of a ‘rechtstaat’. In a constitutional state (the usual translation of ‘rechtstaat’) this is formalized in a constitution, limiting the power of the state and warranting the rights of others (individuals and associations). The constitution also indicates how citizens may influence the state’s policy. In a modern democracy (often uncritically confused with both the rechtstaat and the constitutional state) all citizens have the right to vote.

Whereas the state governs its citizens, inhabitants and properties according to its generic character as an association, on its territory it rules the public domain according to its specific character. Besides individual persons, many associations act in public. In public, people do not necessarily act as citizens or even as inhabitants of the state. For instance, the state regulates traffic on public roads and in public transport. All travellers should obey these rules, whether they are citizens, inhabitants or visiting tourists, but the state does not determine their destination.

The public domain consists of a set of open communities. Each community depends on an internal network of intersubjective relations between individuals and associations. Often a community is based on an objective network. For instance, a lingual community is an intersubjective network of all people speaking the same language. Because lingual acts can be translated, forming an objective lingual network, all people and all associations take part in a world-wide intersubjective lingual public network.

In contrast to associations, public networks lack an internal authority and do not act like individuals. This does not exclude a peculiar kind of activity, influencing the accompanying objective networks. Fashions, markets, languages, public opinion, etc., continuously change because of irregular subjective interactions between the actors on the public domain, much as a herd of beasts or a swarm of birds behaves communally without leadership. The individual freedom of the actors on the public domain implies that their acts are to a large extent unpredictable, but it turns out that their collective behaviour is subject to statistical laws, allowing, for instance, life insurance.

The technical infrastructure forms the objective basis of all other networks constituting the public domain. Each relation frame succeeding the technical one determines its own characteristic network of public subject-subject relations, in which both individuals and associations partake. Architecture is the public art par excellence and public buildings serve the arts, sports and cults. Public opinion forms a semiotic network. Public science is constituted by various intersubjective networks of scientists, sustained by an objective network of theories. Churches, political parties and commercial enterprises make propaganda in public. Public relations define society as a network of public social intercourse. Markets and financial networks have an economic public character. The states themselves, their provinces and cities form a public political network. The courts form a network of public justice sustained by the state, and the public health and welfare networks are increasingly important.

Hence, the state has an exceptional dual character. Its generic character as a political association differs from that of other associations because its membership is not voluntary, but regulated by law. Its internal organization is mapped on the public domain. Its specific character as a republic differs from that of other associations because it is tied to the public domain of which it is the guardian, and to which not only its citizens, but all people and all associations have access; or rather should have access, for the public order is a normative one.

The state should not be identified with the public domain. With respect to its generic character as an association, it acts on the public domain as a subject. A constitutional state is bound to its own laws and international treaties and is therefore subject to national and international courts of justice. With respect to its specific character, it should be emphasized that the state oversees the public domain, but does not necessarily own it. People ought to be free to use the public domain, and the public rules of the state should have no more ambition than to warrant this freedom and to facilitate public networks.

In a free society, the state as a republic warrants the liberty of people and of associations to make use of the public domain and it stimulates the development of public networks. It maintains the objective structure and the functioning of the public networks. Besides, the state upholds the public order and defends the public domain by means of its intervention powers: army and police.

Historically, the defence of the public domain was probably an important incentive to develop tribal coalitions into warrior states. The assumption that the state is characterized by its sword power has led many Christian theologians and philosophers to believe that the state exists because of the fall into sin. However, the armed power is merely a historical consequence of the specific character of the state as guardian of the public domain. It does not characterize the state itself, but the intervention powers as organs of the state, which indeed are necessary because of sin. Like all types of characters that of the state is given in the creation, and is not caused or changed by the fall into sin. Because the public domain is expressed in all relation frames, in a developed historical situation the state as its guardian has a protective function in any frame. The specific political character of the state means that it orders the public domain, by formulating and maintaining its positive laws. The state maintains peace on the public domain. Even imperialism is always defended by the intention to bring peace, from pax Romana to pax Americana. Nowadays, the maintenance of peace is considered the shared responsibility of all states.

Besides the police and the army, the intervention powers include many other organs of the state, for instance inspectors of public education, health or safety. Intervention powers do not rule the public domain, but intervene if people or associations threaten to disturb the public order. The intervention powers are not intended to restrict the freedom and responsibility of individual people or associations. Rather, they ought to ensure that everyone is free to use the public domain according to their own responsibility.

The national public networks become more and more interconnected. There are still separate, so-called sovereign states, but the corresponding public domains are no longer separated and cross national boundaries. This means that the various states have to open up their borders in order to coordinate their tasks. Of old, states have concluded treaties, sometimes forming coalitions. More recently, states have formed associations like the United Nations or the European Union, to which they transfer part of their sovereignty.




In this treatise on the dynamic of the creation and its open future, the character types receive as much emphasis as the relation frames, which traditionally form the face of the philosophy of the cosmonomic idea. Whereas that philosophy introduces the law spheres as modal aspects of being, the present treatise interprets the relation frames primarily as dynamical laws for both intersubjective and subject-object relations, both very important for understanding natural evolution as well as human history. Because nothing can exist isolated from anything else, the relation frames subsequently constitute the conditions for everything that exists. The relation frames are also aspects of human experience, because experience is invariably expressed in relations. They are aspects of cosmic time as well.

The theory of characters as a complement to the theory of relations is a consistent elaboration of Herman Dooyeweerd’s conception of structures of individuality, with their primary qualification, secondary foundation and the tertiary disposition of each character to interlace itself with other characters. Characters form the law side of individuality and diversity. There is an enormous variety of characters, but a much more restricted set of character types. Normative character types of artefacts and associations are not determined by historically and culturally variable norms, but only by invariant normative principles, besides natural laws. Therefore philosophy will be more interested in character types than in the characters themselves, with which history is more concerned.

Being clusters of generic laws, the relation frames are far more general than the specific characters. Whereas everything is subject or object to generic laws, many things, events, etc. do not have a character. Atoms and molecules have a specific character, but mixtures do not. Artefacts and associations have a specific character, acts or communities do not.

Theoretical or conceptual thinking differs from natural thought because of the application of artefacts like concepts, propositions and theories. This leads to an alternative conception of epistemology. The analysis of unorganized social communities and organized associations gives rise to a new view of the character type of the state and the public domain.

The dynamic of the creation as sketched in this chapter manifests itself in the opening up of relation frames and of characters.