Perhaps Whitehead's real target is those "Newtonians" who think always of the following words of the General Scholium: "I frame no hypotheses. . . . It is enough that gravity does really exist and act according to the laws which we have explained." Such people avert their gaze from the passages surrounding those famous sentences, and from Newton's letters to Bentley, and from the Queries he added to the Opticks; for these abound in theistic hypotheses. (The hypothesis, for example, of a divine plan in "that Order and Beauty which we see in the world" and above all in such organs as the eye. The hypothesis that "the mere Laws of Nature" could never induce the world to "arise out of a Chaos." The hypothesis that the reason why "matter should divide itself into two sorts" and why "that part of it which is fit to compose a shining body should fall down into one mass and make a sun" could lie only in "the counsel and contrivance of a voluntary Agent.") Newton's reputation supposedly demands that this orgy of hypothesizing be quietly forgotten. People recall the scorn Leibniz poured on the Newtonian idea that God would from time to time "rewind," "clean," "repair" the cosmos, imparting impulses to the planets to correct their perturbations and to overcome the friction, small but eventually important, of an invisible ether. Did not Laplace show the solar system to be stable despite its perturbations? Has not friction from an ether proved scientifically disreputable? Has not Darwin removed all need for God's hand? Newton can appear to seek God in the gaps of our scientific understanding, gaps which have had an embarrassing habit of snapping shut.
My argument, however, will be that Newton's blending of science with theism is something glorious. I shall not, indeed, defend him against Leibniz and Darwin since the notion that God constantly intervenes in the world's workings seems unfortunate. (As Leibniz said, God seems here portrayed as "an unskillful workman, oft obliged to mend his work." And the Problem of Evil-of reconciling worldly disasters with divine beneficence-looks overwhelming unless God has strong moral reasons for not perpetually correcting how the world operates.) But the forms taken by the laws of physics, and perhaps also the distribution of material early in the Big Bang, do suggest God's creative activity. I shall illustrate this with facts unknown to Newton, but this is as he would have wished. He lacked, he said, "that sufficiency of experiment which is required" for the proper development of his system.
I shall appeal chiefly to recent evidence, often discussed in connection
with the Anthropic Principle.
(a) Many suggest that basic characteristics of the observable cosmos-the strengths of its main forces, the masses of its particles, its early expansion speed, the photon-to-baryon ratio, and so forth-are remarkably "fine tuned" for producing life.
(b) Instead of introducing God to explain the fine tuning, they typically propose that there exist countless "universes" (that is, largely or entirely separate systems, perhaps of immense size; Soviet writers often call them "metagalaxies") and that force strengths, particle masses, expansion speeds and so on vary from universe to universe. Sooner or later, somewhere, conditions permit life to evolve. The Anthropic Principle reminds us that, obviously, only such a somewhere could be observed by living beings.
(c) But an alternative interpretation could be offered. This is that there exists just a single universe. Its force strengths and particle masses are the same everywhere (as is suggested by the Principia's second Rule of Reasoning, which Newton illustrated with the remark that "the light of our culinary fire and of the sun" should be seen as governed by the same laws). And force strengths, particle masses, expansion speed, and other factors were selected with a view to making life possible. They were selected by a Mind or by a more abstract Creative Principle which can reasonably be called "God."
Here, special importance attaches to the fact that instantaneous action at a distance is impossible. (Newton dismissed as "inconceivable" the notion that "inanimate brute matter should without the mediation of something else affect other matter without mutual contact.") Lack of instantaneous communication would mean that regions coming out of a Big Bang could not know of one another until light had had time to pass between them. Hence their movements could be expected to be thoroughly uncoordinated. When they made contact, friction might bring about some large-scale uniformity, but in so doing it would produce life-excluding temperatures or black holes. (Black holes are very disorderly, "very high entropy" systems.) This is the Smoothness Problem. P.C.W. Davies wrote that frictional smoothing away of even a tiny amount of early roughness "would increase the primeval heat billions of times," disastrously. And "if the primeval material was churned about at random it would have been overwhelmingly more probable for it to have produced black holes than stars": "the odds against a starry cosmos" become "one followed by a thousand billion billion zeros, at least." R. Penrose similarly calculated that in the absence of new physical principles which ensured a smooth beginning "the accuracy of the Creator's aim" when he placed a pin to select our orderly world from the space of physically possible ones would need to have been "at least of the order of one part in 10E+10".
The Smoothness Problem remains of great magnitude even if mechanisms active at early instants adjusted the ratio of matter particles to photons in ways reducing its magnitude. For such mechanisms would operate only at very early times, whereas regions which had never before interacted could keep coming over one another's horizons for billions of years.
Any solution to the Problem must allow for life-encouraging local departures from smoothness: the galaxies. Volumes of gas must condense into stars. Yet if the entire universe behaved similarly, collapsing upon itself, then this could yield swift disaster. What hinders the stars from falling upon one another? Newton answered, as we saw, that God placed them "at immense distances," but a more complete answer would be that our cosmos was from very early instants expanding at a speed placing it very close to the line dividing continued explosion from gravitational implosion. Tiny early deviations from this line would grow immensely, as was stressed by R.H. Dicke in 1970. He calculated that a 0.1% early speed increase would have yielded a present-day expansion thousands of times faster than what we find. An equivalent decrease would have led to recollapse when the cosmos was a millionth its present size.
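Dicke's knife edge can be sketched with a toy Newtonian model of an expanding shell of gas: the shell escapes its own gravity only if its kinetic energy at least matches its gravitational binding. The cloud mass and radius below are purely illustrative assumptions, not figures from the text.

```python
# Toy Newtonian model of the expansion-speed knife edge (illustrative only).
# A shell expanding at speed v escapes its own gravity only if its specific
# energy E = v**2/2 - G*M/a is non-negative; the critical "escape" speed
# marks the line dividing continued explosion from gravitational implosion.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def specific_energy(v, M, a):
    """Kinetic plus potential energy per unit mass of the shell."""
    return 0.5 * v**2 - G * M / a

# Hypothetical numbers: a solar-mass cloud roughly one light-hour across.
M = 2.0e30   # kg
a = 1.1e12   # m
v_crit = (2 * G * M / a) ** 0.5  # escape speed: E = 0 exactly on the line

# A deviation of one part in a thousand in the initial speed decides the fate:
fate_slow = specific_energy(v_crit * 0.999, M, a)  # negative -> recollapse
fate_fast = specific_energy(v_crit * 1.001, M, a)  # positive -> expands forever
```

The point of the sketch is only the sign flip: an arbitrarily small deficit or excess around the critical speed sends the system to opposite fates.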
Such calculations have since been refined. In 1978 Dicke said that a decrease of one part in a million when the Big Bang was a second old would have produced recollapse before temperatures fell below 10,000 degrees; with an equally small increase "the kinetic energy of expansion would have so dominated gravity that minor density irregularities could not have collected into bound systems in which stars might form." And S.W. Hawking estimated that even a decrease by one part in a million million when the temperature was 10E+10 degrees "would have resulted in the Universe starting to recollapse when the temperature was still 10,000 degrees." The fine tuning must be more accurate, the more one pushes back the time at which it is carried out.
Another way of expressing the need for fine tuning is to consider early cosmic densities, which are closely related to expansion speeds. If we can trace things back to the Planck time, 10E-43 seconds after the Bang started, then the density must seemingly have been within about one part in 10E+60 of the "critical density" at which space is exactly flat, placing it precisely on the line between collapse and continued expansion. Temperatures (measured in terms of energies) would then have been around 10E+19 GeV; at a later, 10E+17 GeV stage about which we can be more confident, the fine tuning would have to be accurate to about one part in 10E+55. The Expansion Speed Problem can thus be restated as a Flatness Problem. Why is space not more curved?
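The one-part-in-10E+60 figure can be recovered from a back-of-envelope scaling. This is a rough sketch assuming that the deviation of the density from critical grows roughly in proportion to cosmic time, and it ignores matter-era corrections and inflation; the present age is rounded.

```python
# Rough order-of-magnitude sketch of the Flatness Problem.
# Assumption: |Omega - 1| (fractional deviation from the critical density)
# grows roughly in proportion to cosmic time t in the early universe.
t_planck = 1e-43           # Planck time, seconds after the Bang
t_now = 4e17               # present age of the universe, roughly 13 billion years
omega_deviation_now = 1.0  # generous bound: density today within a factor ~2 of critical

# Scale today's allowed deviation back to the Planck time:
omega_deviation_planck = omega_deviation_now * (t_planck / t_now)
# The result is of order 10**-60: the early density had to match the
# critical density to roughly one part in 10E+60, as the text states.
```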
Many now claim that Smoothness and Flatness Problems can both be solved by an Inflationary Scenario. A.H. Guth and others developed such a scenario to explain the absence of magnetic monopoles. At very high temperatures the four main forces of Nature-gravity, electromagnetism, and the strong and weak nuclear forces-are thought to have been only aspects of a single force; there may also have been just one kind of particle. As temperature dropped the forces split apart in symmetry-breaking phase transitions: compare how water, on freezing, loses its complete rotational symmetry, its property of looking the same in all directions, and takes on the more limited symmetry of ice crystals. Now, the phase transitions could proceed in different ways. In areas that were causally separated, light rays not having had time to link them, the transitions would very probably proceed differently: a million monkeys are unlikely to type always the same sequence of letters! The outcome would be vastly many domains with different symmetries, and topological knots where these came into contact. Such knots would be magnetic monopoles. These would be so heavy and so numerous that the universe would recollapse very rapidly. But this disaster could have been averted if any monopole-creating phase transition were associated with an exponentially fast inflation of space. And such inflation (like the growth of a warren in which each rabbit gives birth to ten others) might occur at early instants. It could push monopoles and domain walls far beyond the reach of any telescope.
Massive inflation could give us extremely flat space: a greatly inflated balloon has a very flat surface. And the Smoothness Problem might find much the same answer as the Expansion Speed or Flatness Problem. In the absence of inflation the visible universe would have grown from perhaps 10E+83 initially separated regions, tremendous turbulence resulting when these made contact. Inflation, though, could mean that our horizon was deep within a single such region, one whose parts formed a coordinated whole because they had interacted at pre-inflationary moments. (Inflation would have hurled them asunder far faster than the speed of light. General relativity permits such velocities when expansion of space produces them.)
However, the two Problems seem only to have been solved by introducing others. Model-builders have difficulties in getting inflation started, in persuading it to end without excess turbulence ("the Graceful Exit Problem"), and in having it produce irregularities neither too small nor too large to allow galaxies to grow. Even when you cunningly select your Grand Unified Theory to achieve the desired results-which can look suspiciously like the "fine tuning" which the inflationary hypothesis is so often praised for rendering unnecessary-you may still be forced to postulate a gigantic space containing rare regions, perhaps ones already very unusually smooth, in which inflation of the right type occurs. Further, in the most popular models the inflation is powered by repulsion of the sort Einstein introduced when he gave a nonzero value to the cosmological constant. Though appearing naturally in General Relativity's equations, this constant has long been treated as being zero and thus disregarded. Einstein remarked that his use of it had been his greatest blunder: instead of employing it to keep everything static he should, he said, have predicted the cosmic expansion. Yet Einstein's puzzle of how the cosmos could be kept static has been replaced by that of how it could avoid immediate collapse, for today's physics fills space with fields of so great an energy density (in particular in the form of quantum vacuum fluctuations in which particles attain a fleeting existence) that gravity could be expected to roll everything up into a sphere measuring 10E-33 cm.
To deal with this new puzzle two components of the cosmological constant, "bare lambda" and "quantum lambda," are viewed as cancelling each other with an accuracy of better than one part in 10E+50. How this beautiful result is achieved is totally unclear. While we could invent mechanisms to perform the trick, it can seem best to treat such precise cancellation as a question of Chance-that is, of what would be quite likely to happen somewhere inside any sufficiently gigantic Reality-or else of Divine Selection. For it could seem that the cancellation cannot be dictated by any fundamental law, since the quantum activity of the vacuum involves many fields each contributing in a temperature-dependent way, the masses of a host of scalar particles seeming crucial to the outcome. Nor could one explain it as a product of an inflation which occurred appropriately, for this would put the cart before the horse. Inflation could occur appropriately only if the cancellation were already enormously accurate, though it would be still more accurate afterwards. (Today, the cosmological constant is zero to one part in 10E+120.)
A change in the presently measured strengths either of gravitation or of the weak nuclear force by as little as one part in 10E+100 could end this cancellation, on which our lives depend. And it seems that inflation would result in galaxy-producing density fluctuations only if a Grand Unified Force had a coupling constant (a measure of how strongly this Force affected particles) of only 10E-7, which could be thought "unnaturally small."
Assuming, though, that inflation managed to occur appropriately, then the cosmos could be "dilute" enough to escape collapse for the billions of years which intelligent life may need for its evolution, and also smooth enough to permit life-encouragingly low temperatures. And these would be no mean achievements. As J.A. Wheeler has stressed, "no universe can provide several billion years of time, according to general relativity, unless it is several billion light-years in extent," which can be so only if it has an average density no higher than about ten hydrogen atoms per cubic meter. (Besides, one needs a very dilute cosmos to solve Olbers' Paradox: Why is the Sky Dark At Night instead of being Hot Enough to Fry Us, when each line of sight could be expected to end in a star or in some dust particle heated by that star? Many books wrongly appeal to the fact (which makes little difference to the problem) that the universe is expanding. The right answer is that matter is so diluted that even were it all converted to radiation, the sky would not be hot. A greater threat comes from cosmic rays, so destructive that we can only be very thankful that their sources are so far spread out.)
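The claim that even total conversion of matter to radiation would leave a cold sky can be checked with rough numbers. This is a sketch assuming Wheeler's figure of about ten hydrogen atoms per cubic meter and a blackbody sky; constants are rounded.

```python
# Sketch of the "right answer" to Olbers' Paradox given in the text:
# convert ALL cosmic matter to radiation and see how hot the sky would get.
m_H = 1.67e-27     # mass of a hydrogen atom, kg
c = 3.0e8          # speed of light, m/s
a_rad = 7.566e-16  # radiation constant, J m^-3 K^-4

# Ten hydrogen atoms per cubic meter, entirely converted to radiation:
energy_density = 10 * m_H * c**2          # J/m^3
T_sky = (energy_density / a_rad) ** 0.25  # blackbody temperature of such a sky
# T_sky comes out below ~40 kelvin: dark and cold, nowhere near frying us.
```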
Inside each galaxy, far greater densities occur without disaster. Even so, stars must not be much more closely packed than in our galaxy if they are to avoid frequent near-collisions spelling ruin to planetary systems (one of the points made by G.M. Idlis in the earliest major statement of what we now know as the Anthropic Principle) and to the galaxy as a whole, whose collapse is speeded whenever a near-collision occurs. Again, were galaxies clustered more densely then their collisions could make things very difficult for life.
Difficulties, furthermore, in making the step from mere chemistry to something like DNA biochemistry may be so great that the 10E+22 stars of the visible universe are needed to give much chance of life's evolving even once. Now, an inflationary era could be characterized by matter-producing mechanisms giving rise to those 10E+22 stars. The mechanisms would exploit the fact that gravitational energy, like all physical binding energies, is negative energy. This could balance the positive energy of vastly much newly created matter.
Have we now discovered them? Well, we now know energy is never lost entirely. When (to borrow Newton's example) two masses of clay collide, they become hotter. And though heat is energy in a "disorderly," "high-entropy" form, heat differences can produce the orderliness of living things. The world's rush towards disorder, proceeding at different speeds in different places, sets up eddies. Thus local order is often increased.
Still, what originated the differences which are in this way exploited, given that the Big Bang had no cold region into which to expand, since it filled all space? Gravitational entropy may have come to the rescue. On large scales at least, all may have started off with extreme gravitational orderliness-a fact in which a modern Newton might well see God's hand. On a microscopic scale there could have been extreme disorder, this cancelling itself out on a slightly less microscopic scale (compare how a colored gas in a high-entropy state can seem smooth to the eye); but it would still be hard to understand how on still larger scales the Bang was a gravitationally smooth affair rather than a ragged chaos giving rise to a cosmos of black holes or temperatures that remained searing for billions of years. If, however, by divinely accurate pin-placement or otherwise, large-scale gravitational smoothness could be had, then this could give rise to stars which generated heat in a steady way. For whereas thermodynamic entropy increases through dissipation, as when a gas expands, gravitational entropy increases through concentration, as when a large mass of gas falls together to form a star.
Newton was wrong in supposing that matter would need to "divide itself into two sorts," the one forming the planets and the other a sun or suns. (Our sun is mainly hydrogen, but so is Jupiter.) Yet he was right in seeing the sun's immense size as the key to its long-lasting activity and in his weird suggestion about "the changing of Bodies into Light," which we now know to be the source of the sun's power. (Binding energy being negative energy, nuclear fusion can yield a mass-energy decrease, the difference being carried off as radiation.) Moreover suns and planets can seem to depend on impressively much "fine tuning." Thus, the Big Bang needed to deliver atoms usable in stellar fusion reactions, not ones which had already undergone fusion. Two things were crucial: the high expansion speed when atoms first formed-they were rushed apart before they could fuse-and the extreme weakness of the nuclear weak force. The weak force controls proton-proton fusion, a reaction 10E+18 times slower than one based on the other nuclear force, the strong force. But for this, "essentially all the matter in the universe would have been burned to helium before the first galaxies started to condense," so there would be neither water nor long-lived stable stars, which are hydrogen-burning. (Helium-burners remain stable for times too short for the evolution of life as we know it.) Again, the weak force's weakness makes our sun "burn its hydrogen gently for billions of years instead of blowing up like a bomb."
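Newton's "changing of Bodies into Light" can be made concrete with modern bookkeeping: fusing four protons into a helium-4 nucleus destroys a fraction of the rest mass, the difference escaping as radiation. The masses below are standard rounded values, and the positrons and neutrinos of the full proton-proton chain are ignored for this rough fraction.

```python
# Mass defect of hydrogen burning: 4 protons -> helium-4 (masses in MeV/c^2).
m_proton = 938.272
m_helium4 = 3727.379  # helium-4 nuclear rest mass

mass_in = 4 * m_proton
energy_released = mass_in - m_helium4           # roughly 26 MeV per helium nucleus
fraction_converted = energy_released / mass_in  # about 0.007, i.e. ~0.7% of the mass
# That 0.7% "changing of Bodies into Light" is what powers the sun for billions of years.
```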
Had the weak force been appreciably stronger then the Big Bang's nuclear burning would have proceeded past helium and all the way to iron. Fusion-powered stars would then be impossible.
Notice, though, that the weak force could not have been much weaker without again giving us an all-helium universe. (There are thus two threats to hydrogen, one setting an upper and the other a lower limit to the values of the weak force compatible with life as we know it.) For at early moments neutrons were about as common as protons, things being so hot that the greater masses of the neutrons, which made them harder to generate, had little importance. The weak force, however, could make neutrons decay into protons. And it was just sufficiently strong to ensure that when the first atoms formed there were enough excess protons to yield roughly 70% hydrogen. Without a proton excess there would have been helium only.
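The arithmetic linking the surviving proton excess to the hydrogen abundance is simple. This is a textbook-style sketch treating neutron and proton masses as equal; with a 1:7 neutron-to-proton ratio it gives roughly 75% hydrogen by mass, close to the text's "roughly 70%."

```python
# Primordial abundances from the neutron:proton ratio at nucleosynthesis.
# Assumption: essentially every neutron ends up locked into helium-4
# (2 neutrons + 2 protons), and neutron and proton masses are treated as equal.
n, p = 1, 7  # roughly one neutron to every seven protons

helium_mass = 2 * n    # each neutron drags one proton with it into helium
hydrogen_mass = p - n  # the excess protons remain as hydrogen

Y_helium = helium_mass / (helium_mass + hydrogen_mass)  # helium mass fraction
X_hydrogen = 1 - Y_helium                               # hydrogen mass fraction
# Without a proton excess (n = p) this bookkeeping gives helium only.
```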
Again, weakening the weak force would ruin the proton-proton and carbon-nitrogen-oxygen cycles which make stars into sources of the heat, the light and the heavy elements which life appears to need.
How do these heavy elements get to be outside stars, to form planets and living things? The weak force helps explain this. When stars explode as supernovae they lose their heavy-element-rich outer layers. (Elements heavier than iron, which play an important role in Earth's organisms, can be synthesized only during the explosions.) Now, these layers are blasted off by neutrinos which interact with them via the weak force alone. Its extreme weakness, which allows neutrinos to pass through our planet more easily than bullets through air, permits also their escape from a supernova's collapsing core. Still, the force is just strong enough to hurl into space the outer-layer atoms needed for constructing astronomers! (Strong enough, also, to fuse electrons with protons during the core's collapse, thus enabling the collapse to continue. The result is an implosion whose violence-the core shrinks thousands of times in under a second-gives rise to a gigantic explosion.)
It is often held that the formation of our solar system, and presumably also of many or all other such systems of star and planets, was triggered by a nearby supernova explosion. Meteorites contain oxygen of only a single isotope, seemingly explicable only by such an explosion.
While the calculations are hard, it seems a safe bet that weakening the weak force by a factor of ten would have led to a universe consisting mainly of helium and in which supernovae could not occur.
The nuclear strong force, too, must be neither over-strong nor over-weak for stars to operate life-encouragingly. "As small an increase as 2%" in its strength "would block the formation of protons out of quarks," preventing the existence even of hydrogen atoms, let alone others. If this argument fails then the same small increase could still spell disaster by binding protons into diprotons: all hydrogen would now become helium early in the Bang and stars would burn by the strong interaction which, as noted above, proceeds 10E+18 times faster than the weak interaction which controls our sun. A yet tinier increase, perhaps of 1%, would so change nuclear resonance levels that almost all carbon would be burned to oxygen. A somewhat greater increase, of about 10%, would again ruin stellar carbon synthesis, this time changing resonance levels so that there would be little burning beyond carbon's predecessor, helium. An increase a trifle greater than this would lead to "nuclei of almost unlimited size," even small bodies becoming "mini neutron stars." All of which is true despite the very short range of the strong force. Were it long-range then the universe would be "wound down into a single blob."
Slight decreases could be equally ruinous. The deuteron, a combination of a neutron and a proton which is essential to stellar nucleosynthesis, is only just bound: weakening the strong force by "about 5%" would unbind it, leading to a universe of hydrogen only. And even a weakening of 1% could destroy "a particular resonance in the carbon nucleus which allows carbon to form from He4 plus Be8 despite the instability of Be8" (which is however stable enough to have a lifetime "anomalously long" in a way itself suggesting fine tuning). "A 50% decrease would adversely affect the stability of all the elements essential to living organisms": any carbon, for example, which somehow managed to form would fast disintegrate.
I.L. Rozental estimates that the strong force had to be between 0.8 and 1.2 times its actual strength for there to be deuterons and all elements of atomic weight greater than four.
Electromagnetism also needs to fall inside narrow limits if the stars are to encourage anything like life as we know it. For one thing, it is the strong force's strength by comparison with electromagnetism (it is some hundreds of times stronger) which is the real topic of the above remarks about carbon synthesis and about the deuteron's being luckily just bound while the diproton is equally luckily just unbound. Again, electromagnetic repulsion between protons prevents most of their collisions from resulting in proton-proton fusion, this explaining how stars can burn so slowly: each second our sun generates thousands of times less energy per gram than the human body. The strength of electromagnetism by comparison with gravity is crucial here.
Let us look at some further details.
First, a star's surface temperature must be suitably related to the binding energies of chemical reactions used by organisms: it must be hot enough to encourage construction of new chemicals, as in photosynthesis, but also cool enough to limit destruction such as is produced by ultraviolet light. (One probably cannot compensate for changes in stellar temperature by placing the life-bearing planet nearer or further. The constructive or destructive power of individual, "quantized" energy packets is crucial; compare how in a photographer's darkroom no amount of red light affects the film as each individual photon packs too little punch. Now, this power remains the same at any distance.) W.H. Press and A.P. Lightman show that an interestingly delicate balance between electromagnetism and gravity is involved.
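The darkroom point can be made quantitative: a photon's individual "punch" is fixed by its wavelength, not by distance or intensity, while typical chemical bonds need a few electron-volts to break. The wavelengths chosen below are illustrative assumptions.

```python
# Energy per photon, E = h*c / wavelength, converted to electron-volts.
# A photon's punch is the same at any distance; only the count of photons drops.
h = 6.626e-34   # Planck's constant, J s
c = 3.0e8       # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt

def photon_energy_eV(wavelength_m):
    return h * c / wavelength_m / eV

E_red = photon_energy_eV(700e-9)  # darkroom safelight: under 2 eV, too weak to expose film
E_uv = photon_energy_eV(250e-9)   # ultraviolet: around 5 eV, enough to break chemical bonds
```

No amount of red light supplies what each red photon individually lacks, which is why stellar surface temperature, rather than planetary distance, sets the chemistry.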
As in the cases of other such balances, further factors are involved as well: the masses of the proton and the electron are relevant. So even if electromagnetism and gravity stood in a different relationship the delicacy of balance might perhaps be maintained, could those masses be varied at will. But here our imaginations could easily run away with us. To guard against one disaster by tinkering with these or those factors would be likely only to introduce some new disaster because each factor enters into so many vital relationships. And even if disaster could in theory be avoided, actually avoiding it-compensating for variations by making appropriate changes elsewhere-could itself be a very impressive instance of fine tuning!
Next, B. Carter draws attention to how our sun's luminosity would fall sharply were electromagnetism very slightly stronger. Solar surface temperatures lie close to those at which ionization occurs, at which point opacity increases markedly. Had electromagnetism been slightly stronger (for in Carter's formula its strength is raised to its twelfth power) then the main sequence, on which stars spend most of their lives, would consist entirely of red stars, losing heat chiefly by convection and life-discouragingly cold. A planet near enough for warmth would suffer tidal forces which reduced its rotation until it turned always the same face to its star, its liquids and even its gases then collecting in frozen masses on the far side. And had electromagnetism been very slightly weaker, then all main sequence stars would be blue: very hot, radiative and short-lived. Even as matters stand, stars of above 1.2 solar masses probably burn too briefly to support the evolution of intelligence on their planets, if they have any, and hot blue giants remain stable for only a few million years.
Davies holds that Carter has shown that changes either in electromagnetism or in gravity "by only one part in 10E+40 would spell catastrophe for stars like the sun." In the background is Dicke's remark of 1957 that a star's radiation rate varies as the inverse seventh power of the dielectric constant so that if electromagnetism were appreciably stronger "all stars would be cold. This would preclude the existence of man."
Again, Rozental observes that all quarks (and hence all protons, essential to atoms) could be transformed into leptons by superheavy bosons, whose mass is related to the electromagnetic force, were this force strengthened by as small a factor as 1.6; and further, that if this argument failed then a threefold increase in their electric charge would make protons repel one another sufficiently to prevent the existence in stars or anywhere of nuclei with atomic weights greater than three. With a tenfold increase there could be no stable atoms: the protons would pull the electrons into the nucleus.
Finally, remarks about how weakening the strong force affects, for example, protons-they can no longer be persuaded to come together in atomic nuclei so hydrogen becomes the only element-can be re-expressed as arguments for the disastrousness of electromagnetism's being slightly stronger.
Similar points could next be made about the gravitational force.
Some of them could be viewed as rephrasings of the statements of Carter and others about electromagnetism's needing to be appropriately strong by comparison with gravity, or of the remark that the weak force must be weak if any hydrogen is to come out of the Bang. Others would be reworkings of the point that the cosmic expansion speed must be "just right" if galaxies are to form: thus, gravity may need an appropriate strength if inflation is to occur, or maybe inflation is a false hypothesis and the speed had to be fine tuned from the very start by immensely accurate choice of the gravitational constant. Again, gravity must be extremely weak for the universe to avoid fast collapse.
Other points are at least in part new.
(a) One reason stars live so long is that they are so huge (for besides providing more to burn, size slows down the burning because radiation's random walk to the stellar surface takes millions of years) and yet are compressed so little by gravity. Though the figure varies with whether we consider electron-electron or proton-proton interactions, we can say roughly that gravity is an astonishing 10E+39 times weaker than electromagnetism. Were it appreciably stronger than it is, stars would form from smaller amounts of gas; and/or would blaze more fiercely (Teller calculated in 1948 that the radiation increases as the seventh power of the gravitational constant); and/or would collapse more easily to form white dwarfs, neutron stars or black holes. Were it a million times stronger (which would leave it 10E+33 times weaker than electromagnetism, while we lack any well developed theory saying that it had to be at all weaker) then stars would be a billion times less massive and burn a million times faster. With even tenfold strengthening, a star with as much matter as our sun would burn for only a million years.
(b) Were gravity ten times less strong, it would be doubtful whether stars and planets could form. And any appreciable weakening could mean that "all stars would be chemically homogeneous due to convective mixing and one would not get the onion-skin shell structure which characterizes pre-supernova models": hence, perhaps, no supernovae scattering heavy elements.
(c) As things are, clouds the right size to form stable stars are just able to cool fast enough to avoid fragmentation. Tinkering with gravity could destroy this happy phenomenon.
(d) If the protogalaxies formed by fragmentation of larger clouds then, J. Silk has argued, this required gravity's strength to be interestingly close to its actual value.
(e) Violent events at the galactic core presumably exclude life from many galaxies. In Cygnus A "the level of hard, ionizing radiation is hundreds of times more intense than on the surface of the earth." Strengthening gravity could make every galaxy this nasty.
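Point (a)'s remark that the gravity-to-electromagnetism figure "varies with whether we consider electron-electron or proton-proton interactions" can be made concrete. The constants below are rounded SI values; since both forces fall off as the inverse square of distance, the separation cancels out of the ratio.

```python
# Ratio of Coulomb attraction to Newtonian gravity for pairs of unit charges.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k = 8.988e9     # Coulomb constant, N m^2 C^-2
e = 1.602e-19   # elementary charge, C
m_p = 1.673e-27 # proton mass, kg
m_e = 9.109e-31 # electron mass, kg

def em_over_gravity(m1, m2):
    """Coulomb force over gravitational force; the 1/r^2 factors cancel."""
    return k * e**2 / (G * m1 * m2)

ratio_pp = em_over_gravity(m_p, m_p)  # two protons: about 10E+36
ratio_pe = em_over_gravity(m_p, m_e)  # proton and electron: about 10E+39, the text's figure
ratio_ee = em_over_gravity(m_e, m_e)  # two electrons: about 10E+42
```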
One last factor crucial to the stars is the neutron-proton mass difference. As S. W. Hawking says, if this "were not about twice the mass of the electron, one would not obtain the couple of hundred or so stable nucleides that make up the elements and are the basis of chemistry and biology." Here are the reasons. The neutron is the heavier of the two particles, by about one part in a thousand. Less energy thus being tied up in a neutron, decays of neutrons into protons would have yielded a universe of protons only, with hydrogen the only possible element, had neutrons not become bound to protons in atoms. Here the presence of electrons and the Pauli Principle discourage their decay; but even that would not prevent it were the mass difference slightly greater. And were it smaller (one third of what it is), neutrons outside atoms would not decay; all protons would thus change irreversibly into neutrons during the Bang, whose violence produced frequent proton-to-neutron conversions. There could then be no atoms: the universe would be neutron stars and black holes. The mass of the electron enters the picture like this. If the neutron mass failed to exceed the proton mass by a little more than the electron mass then atoms would collapse, their electrons combining with their protons to yield neutrons. (Proton mass: 938.28 MeV. Electron: 0.51. Total: 938.79. And the neutron weighs in at 939.57. Neutrons, being electrically neutral, can add to the strong-force interaction which holds complex nuclei together, without also adding enough electromagnetic repulsion to blow them to bits.)
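The mass bookkeeping above can be checked explicitly, using the text's own figures:

```python
# Why atoms do not collapse: proton + electron must weigh less than a neutron,
# so an atomic electron cannot profitably combine with its proton (MeV/c^2).
m_proton = 938.28
m_electron = 0.51
m_neutron = 939.57

combined = m_proton + m_electron  # 938.79 MeV, as the text says
margin = m_neutron - combined     # ~0.78 MeV: the slim surplus that keeps atoms stable
```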
As things are, the neutron is just enough heavier to ensure that the Bang yielded only about one neutron to every seven protons. The excess protons were available for making the hydrogen of long-lived stable stars, water and carbohydrates. Notice that hydrogen stars burn by producing neutrons: despite the neutron's being heavier than the proton, it is so little heavier that a process whereby two protons fuse to form a deuteron-a combination of a proton and a neutron-is energetically advantageous when the comparatively small binding energy is taken into account. (It could be added that an increase in Planck's constant by over 15% would prevent the existence of the deuteron.)
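The one-to-seven ratio directly fixes how much helium emerged from the Bang. On the standard assumption (mine, not spelled out in the text) that essentially every surviving neutron ended up bound in helium-4, the primordial helium mass fraction follows at once:

```latex
% Each He-4 nucleus binds 2 neutrons and 2 protons, so with n/p = 1/7:
Y = \frac{2\,(n/p)}{1 + (n/p)} = \frac{2/7}{8/7} = \frac{1}{4}
```

That is, roughly 25% helium by mass and 75% hydrogen, close to what astronomers in fact observe.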
Another way of seeing things is that an increased electron mass would spell disaster. Rozental comments that the electron is amazingly light: twenty times lighter than the next lightest particle, the pion, and some thousand times lighter than the average for the known particles. The electron's being a lepton is not enough to explain this, for the tau lepton is heavier than the proton. Further, the neutron-proton mass difference is tiny compared with those found in almost all other cases of isotopic multiplets.
Neutrons and protons differ in their quark content so their very fortunate mass difference can be explained as a reflection of the "up" quark's being slightly lighter than the "down." But such explanation may succeed only in pushing puzzlement back a step. (A believer in God need not think that each fortunate phenomenon is directly due to divine choice, lacking all further explanation. Newton took artistic and religious delight in how Nature was "very conformable to herself, and very simple, performing all the great Motions of the heavenly bodies by the Attraction of Gravity, and almost all the small ones of their Particles by some other attractive and repelling Powers." The seeming failure of the simplest Grand Unified Theory, "minimal SU(5)," as evidenced by failure to see sufficiently many proton decays, should however discourage the idea that some Principle of Simplicity is the only factor selecting Nature's laws. Hugely many alternative GUTs now compete for the physicist's attention. God had an immensely rich field from which to choose.)
He was in part wrong. Atoms can be broken (ionized) by striking a match. And you cannot identify Newton's hard, unchanging particles with subatomic entities since subatomic entities of one type often change into some other type. Even the proton is now believed liable to decay-which can be viewed as beneficial since the factors involved are probably responsible for how matter came out of the Bang in any quantity instead of annihilating with antimatter to produce a universe of light. The story runs like this. Superheavy bosons can transform quarks into leptons so that, for instance, protons-made of quarks-are not eternal. At present temperatures the superheavies are created rarely, proton decays being correspondingly rare; but early in the Bang superheavies were common, their own decays very fortunately producing unequal numbers of quarks (for making protons) and antiquarks (which make antiprotons).
Note, however, (i) that the story's details are uncertain: thus even the "sign" of the inequality-whether it would be more quarks or more antiquarks which were produced-can as yet be determined only by the verbal principle that we are sure to call the outcome "matter" rather than "antimatter";
(ii) that laws of charge-conjugation and charge-parity conservation must fail, which means there must be two "generations" of quarks and leptons in addition to those of our everyday world;
(iii) that one needed not only an expanding universe but also, probably, the very rapid expansion which inflation provides. And,
(iv) something had to ensure that the excess of protons over antiprotons was paralleled by a precisely equal excess of electrons over positrons, to avoid charge imbalance. Charge imbalance would make condensations of matter hard to achieve if the universe were "open"; and in a "closed," finite universe the case would be if anything even worse since lines of force would wind round and round, building up an infinite electric field.
(v) And as well as all that, neither too much nor too little matter was to be produced. Roughly, the actual excess was of one proton for every hundred million proton-antiproton pairs. Too many more protons, and the universe would quickly collapse, assuming that its expansion rate reflected the number of photons per proton; or it would become a collection of neutron stars and black holes; or at the very least, there would be helium everywhere instead of hydrogen. Too many fewer, and there would be over-rapid expansion coupled with radiation pressures guaranteeing that protogalaxies and stars could not condense: any massive bound systems managing to form despite the expansion would trap radiation which stopped the fragmenting into the smaller bodies on whose existence life depends.
Proton decay, moreover, must be suitably slow. Proton lives of 10^16 years, about 10^6 times the present age of the universe, would mean (says M. Goldhaber) that even the decays occurring in you would kill you with their radiation.
All this implies that the masses of the superheavy bosons must fall inside interestingly narrow limits: for instance, they must be at least one hundred million times heavier than the proton if protons are to be stable enough. Again, making the electromagnetic constant larger than 1/85 would result in too many proton decays for there to be long-lived, stable stars (while 1/180 is a lower limit suggested by GUTs). And if high levels of radiation must be lethal then 1/85 would itself be too large, as the stability of living organisms would then be more sensitive than that of stars.
Newton's being partly wrong must therefore not blind us to how nearly he was right when writing that "the Changes of corporeal things" are merely "new Associations and Motions of permanent Particles." The average proton will live longer than 10^31 years. Further, particles do at least come in unchanging types: a DNA molecule transmits information equivalent to ten thousand pages because atomic particles (and hence the atoms they compose) come in unvarying brands. Now, even in the 1970s Wheeler could write that "the miraculous identity of particles of the same type must be regarded as a central mystery of physics." Riemannian geometry was, he said, useful in physics only because of its suggestion-which "exposes itself to destruction on a hundred fronts"-of a gauge symmetry without which "electrons brought by different routes to the same iron atom at the center of the Earth would be expected to have different properties." Failure of the symmetry would mean that "the iron atom-and the center of the Earth-would collapse," since now the Pauli Principle would fail. As V.F. Weisskopf explained, this Principle "in many ways replaces the classical concept of impenetrability and hardness"; by keeping apart all matter particles of the same type, it prevents atomic collapse. But, he added, one would like to know why electrons and other matter particles (fermions) come in specific types: "Very little can be said in regard to why the electron has the properties which we observe," things being made specially difficult by how Nature "has provided us with a second kind of electron, the muon," which seemingly "differs from the ordinary one by its mass only."
The Pauli Principle's "spreading out" of the atom by keeping electrons in a hierarchy of orbits is decidedly fortunate. Could electrons take just any orbit, then (i) thermal buffetings would at once knock them into new orbits, so destroying the fixed properties which underlie the genetic code and the happy fact that atoms of different kinds behave very differently, and (ii) atoms would quickly collapse, their electrons spiralling inwards while radiating violently. Now, the "wave-particle" natures of atomic particles could give us some insight into the Principle. For consider sound waves: air in an organ pipe likes to vibrate at a particular frequency or at simple multiples thereof. Observe, however, that bosons also have wave-particle natures yet are not restricted by the Pauli Principle. If electrons behaved like bosons then all could occupy the lowest possible orbit and there could be no chemistry.
How does an electron in the lowest orbit escape being sucked into the oppositely charged atomic nucleus? Quantum theory answers that Heisenberg Uncertainty relating position and momentum makes the electron speed up as it nears the nucleus. This-together with (a) the non-collapse of white dwarf and neutron stars, supported by similar "Heisenberg agitation," (b) quantum tunnelling through force barriers, which makes stars burn faster and underlies radioactivity, (c) quantum creations of particles which exist on "borrowed" energy until Heisenberg's formula relating energy and time demands a cleaning up of the accounts, (d) the fact that whether a gigantic box contains "gross features such as a huge black hole" can be a matter of how the quantum dice have happened to roll-makes it implausible to attribute Heisenberg Uncertainty merely to how conscious beings cannot find out the details of events. The Uncertainty must surely be "out there" in the world. And it is out there in a way whose fortunateness matches its strangeness. Electrons are not, thank heaven, forever being sucked into nuclei! A further point about hardness concerns solids. As G. Wald said, "If the proton had not so much greater mass than the electron, all matter would be fluid" because "all motions involving these particles would be mutual and nothing would stay put." It is because their heavy nuclei are confined inside clouds of light electrons, clouds interacting in complicated ways, that individual atoms can have fixed positions.
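The claim that Heisenberg Uncertainty holds the atom open can be made quantitative with a textbook estimate (my sketch, not the author's): confining the electron within radius r forces its momentum to be of order \hbar/r, and one then minimizes its total energy.

```latex
% Kinetic energy grows as 1/r^2 while Coulomb attraction grows only as 1/r:
E(r) \approx \frac{\hbar^2}{2 m_e r^2} - \frac{e^2}{4\pi\varepsilon_0 r}
% Minimizing, dE/dr = 0 gives the Bohr radius:
r_{\min} = \frac{4\pi\varepsilon_0 \hbar^2}{m_e e^2} \approx 0.53 \times 10^{-10}\ \mathrm{m}
```

Because the kinetic term blows up faster than the attraction as r shrinks, there is a lowest-energy radius well outside the nucleus: the electron cannot be sucked in.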
F.D. Kahn similarly pointed out that water molecules, benzene rings, DNA, and so forth, have structures that "persist owing to the great difference between the mass of an electron and the mass of an atomic nucleus." At stake is "the existence of chemistry (and also chemists)" since chemistry needs atoms "full of open space with well-defined central nuclei." Electromagnetism's comparative weakness is involved too, as is the fact that electrons cannot feel the hundreds-of-times-more-powerful strong nuclear force. (Kahn added that such reflections threw severe doubt on the possibility of non-chemical life based on the strong force rather than on electrons and electromagnetism. Protons and neutrons, the main particles governed by the strong force, have virtually equal masses so "no precision could be given to their locations".)
T. Regge argued that "long chain molecules of the right kinds to make biological phenomena possible" could be threatened by "the slightest variation" in the electron-proton mass difference.
Important, too, is that electron and proton have charges opposite but numerically equal. Were things otherwise then the consequent charge imbalance would be fully as disastrous as the one I discussed earlier. Wald commented that "if a universe were started with charged hydrogen it could expand, but probably nothing more." (R.A. Lyttleton and H. Bondi had dreamed in 1959 that proton and electron charges differed by about two parts in a billion billion, this tiny difference accounting for the cosmic expansion!) The actual charge equality seemed to him particularly mysterious because the proton had "about 1840 times the rest mass of the electron." There were other pairs of oppositely charged particles, proton and antiproton for example, whose charges were exactly equal, but those "can be generated as pairs of anti-particles out of photons" (which are chargeless) so that there the equality was "just an aspect of conservation of charge." No such explanation was available here. True, one could explain that protons were made of quarks bearing charges one third or two thirds that of the electron; and this might be understood in terms of the possibility that quarks can change into leptons (the class to which electrons belong). But, said Wald, that would only push the need for an explanation to another level, as the charges on the various kinds of quark would now have to be "equal or simple sub-multiples of each other" to enormous accuracy.
Wald was writing prior to the very bold theories of the 1980s, which might throw some light on this area. But as I said earlier, theists should not be too opposed to the idea of fundamental principles which dictate this or that fortunate phenomenon; for though such principles may be comparatively simple, they will still be impressively intricate and very far from being logically inevitable. Even the simplest modern GUT due to H. Georgi and S. Glashow, now thought to be too crude, involves twenty-four force fields. Hugely many more complex theories now compete for the physicist's attention. And claims to have derived this or that quantity "from basic principles" typically gloss over the fact that some other quantity, often the mass of a force-conveying "messenger particle" like the pion, had first to be put in by hand.
Rozental estimates that an electron-proton charge difference of more than one part in ten billion would mean that no solid bodies could weigh above one gram. (Again, he says, reduce the electron charge by two thirds and even the low temperatures of interstellar space then destroy all neutral atoms.)
Barrow and Tipler remark that in any case the difference between material things and waves is important only thanks to the smallness of the electromagnetic constant. It has to be a small fraction (it is about 1/137) to ensure "the distinguishability of matter and radiation," for reasons centered on how electrons spend that same fraction of their lives as waves. Had the fraction been much larger, atoms would be very impermanent.
Still, might not biology of some kind be based on waves instead of matter? More precisely, might it not be based on bosons (such as make up light waves) rather than fermions (electrons, protons, neutrons, etc.)? Alas, the patterns which bosons weave lack properties of the kinds which seem essential. They tend to pass through each other freely, and they could not provide the unchanging bricks, coming in unvarying brands and capable of precise positioning, with which genetic messages, for example, could be built up. (Light waves typically treat one another as do those of the ocean. True, they are in a complex sense composed of particles, and these can interact; but when they do, this is only in the way made familiar by laser light. They hurry to suppress their individualities, building up patterns of mass action.)
Finally, long-lasting particles exist only because of space's topological and metrical properties. For instance, it seems to be three-dimensional, which was not a logical inevitability. Currently popular Kaluza-Klein theories suggest that it in fact has at least ten dimensions, seven now being invisible because each became very tightly rolled up, compacted. The real difficulty is that of understanding how the others could remain uncompacted in view of the enormous energy density of a "vacuum" crammed with quantum fluctuations: see the discussion of the Flatness Problem.
Were more than three spatial dimensions present in uncompacted form, atoms or elementary particles could be impossible. (a) Physicists' discussions of solitons suggest that particles may be knots which persist in time because three-dimensional space is the one kind in which true knots can be tied. (b) Many have developed P. Ehrenfest's argument that the stability of atoms and of planetary orbits, the complexity of living organisms, and the ability of waves to propagate without distortion (perhaps crucial in nervous systems and elsewhere), are all available only in three dimensions. (c) Wheeler has suggested that only three-dimensional space is complicated enough to be "interesting" while still simple enough to escape total break-up through quantum effects, effects making nonsense of a point's having "a nearest neighbor."
Actually, it is sometimes held that space could have "fractional dimensionality." Fractals, infinitely complex curves, partially fill the higher-dimensional space in which they wriggle, so tending to take on its dimensionality. If our space might have been (or is) of some dimensionality like 2.99999998 or 3.00000001, then there is much scope for fine tuning here.
Were the topology of space variable (and some have suggested that it does vary at each Big Squeeze of a perpetually oscillating universe), then whether there were parity conservation laws could also vary: in their absence, heaven knows whether life of any kind would be possible. Again, Davies and S.D. Unwin argue that space's having a "nontrivial" topology could help account for how the cosmological constant is so very close to zero. Twisted scalar fields would make the constant take different values in different regions. All that our telescopes can probe may be inside a single such region. In regions where the constant took measurably non-zero values, observers could not exist.
A.D. Linde reasons that life depends on space's having the right metric signature, A.D. Sakharov having shown that its coming out of the Bang with some other signature was possible. Reality might be split into domains with different signatures. The observed signature is +++- (meaning that instead of the d^2 = x^2 + y^2 of Pythagoras' theorem we have d^2 = x^2 + y^2 + z^2 - (ct)^2, where t is time and c the velocity of light). Signature ++++, for instance, would imply that "life would be impossible due to the absence of particle-like states."
One should also mention the current idea that the space we inhabit is only metastable, like a pencil balancing upright: it is filled with a field which might (with the unpredictability of a quantum phenomenon) "tunnel" to a lower value. The resulting bubble of stable space would expand at virtually the speed of light, destroying observers as it hit them. If the top quark has a mass as great as 125 GeV then we may be lucky that our world has lasted this long. Equivalently, had its mass much exceeded 125 GeV then our world almost certainly would not have been so long-lasting.
This was fine guesswork. Nature is governed by at least two main forces (the strong and weak nuclear) in addition to gravity and electromagnetism. All these are essential to life based on heat, light, atoms, stars and chemistry. They do differ greatly in range and in power, the very short-range strong nuclear force being the strongest. And what seems like one and the same force can attract at one distance, repel at another.
Here are a few particulars. (i) Electrons are "screened" by clouds of "virtual" positrons, short-lasting entities conjured out of emptiness as quantum fluctuations. This stops an electron's influence from growing without limit as it is approached-which would make the electron immensely destructive, yet was prima facie to be expected of so apparently pointlike a particle. The quarks in the atomic nucleus, on the other hand, maintain their separate identities thanks to "antiscreening" by gluons which "spread out" the interquark colour force so that it vanishes at short range (much as gravity does at the Earth's centre, since it tugs equally from all directions). (ii) The strong nuclear force, probably just the colour force in complex guise, is repulsive at extremely short distances, attractive at somewhat longer ones. By repelling it helps prevent the protons and neutrons in a complex atom from collapsing together, but by attracting it binds them tightly, giving the atom a very precisely located center (with the benefits mentioned earlier). And at ranges even greater (but very short) the force fortunately falls to zero: the messenger particles which convey it can travel no further as they must repay the energy they "borrowed" in order to exist. Were the force long-range, it would rapidly collapse the universe. (iii) Electromagnetism, in contrast, is conveyed by particles of zero rest mass, photons. Not having to repay borrowed energy, photons can travel onwards indefinitely, but this is undisastrous because matter in bulk tends to exert no electric force: the positive charges are cancelled by the negative so no cosmos-collapsing or cosmos-exploding field is built up. (iv) Thus the universe on a large scale is ruled by the much feebler force of gravity. Planetary and galactic systems maintain themselves against its pull by the rotation to which Newton drew attention or (in the cases of some galaxies) just by random motions.
Result: a greatly complex dance of material particles. It is ruled by mysterious principles such as Baryon Conservation (not associated with any obvious force field as electromagnetism's Charge Conservation is, yet without it "the entire material contents of the universe would disappear in a fireball of gamma radiation"). Also, intricate checks and balances keep things moving smoothly for billions of years: for instance, the balance in atomic nuclei between strong-force attraction and the electromagnetic repulsion which nearly blows apart any atom with two protons or more. Together with the Pauli Principle, the small mass of the electron, the fact that electrons do not feel the strong force, etc., this balance allows for a hundred or so markedly different kinds of atom-building bricks whose electrons make them more useful than the solid spheres imagined by an early physics (for as A. Szent-Gyorgyi commented, "You will find it rather hard to build any mechanism out of marbles").
Both in atomic nuclei and in whole atoms, such checks and balances lead to force-field "hills" penetrable only with difficulty by particles trying to get in or others striving to get out. This makes for great stability. But the hills can be penetrated, for example in powerful collisions inside stars; so stars can burn. Atoms are moreover very complexly "sticky." The positively charged nucleus of one can attract the electrons of another; the two then approach until their electron clouds repel each other forcibly. This constitutes the feeble van der Waals bond, keeping liquids liquid. But the atoms may then exchange an electron, share electron pairs, or engage in intricate electron-electron and electron-proton interactions (maybe involving other atoms also, as in hydrogen bonds). Very many other physical and chemical ties can thus be built up. The weaker ones underlie readily reversible reactions. Life exploits these during photosynthesis, during muscle movements (hydrogen bonds repeatedly formed and broken), when making or burning the cellular fuel ATP (phosphate bonds made and unmade), or when transporting new matter into cells. The latter are reminiscent of candle flames; their forms persist though their atoms are forever being replaced. Thus do men outlast their shoes.
On a larger scale we find stability of the kind to which F. Dyson drew attention,[24] "hangups" in the flow of energy. (a) Dyson's "thermonuclear hangup" may be the most immediately impressive: it allowed our sun to support life's evolution for what Lord Kelvin thought impossibly much longer than any sun could burn. (As a star grows hot, heat movements of its particles fight further compression by gravity. The star remains spread out, its fusion processes slow.) (b) When, however, we come to understand the Flatness Problem then the "size hangup," the fact that the galaxies and the cosmos are large enough to avoid immediate gravitational collapse and other life-excluding developments, can impress us still more.
Much of the impressiveness of such affairs lies in how their basic laws are fairly straightforward. (In "Do we live in the simplest possible interesting world?" E.J. Squires argues that they could be the very most straightforward of those allowing anything as intricate as chemistry.) Still, while we could see in this "such principles as might work with considering men for the belief in a Deity", might not an opposite reaction be better? "In a mixed solid," says Wheeler, "there are hundreds of distinct bonds, but all have their origin in something so fantastically simple as a system of positively and negatively charged masses moving in accordance with the laws of quantum mechanics." Rather than being evidence of divine ingenuity in selecting those laws, might this not show only that complex structures are sure to result wherever there are laws? See how readily flames, crystals, bubbles shaken by winds, reproduce themselves. Read M. Eigen and R. Winkler: besides repeating Darwin's point that intricate organisms can evolve from simpler ones by Natural Selection, they show how the simplest might originate. Their examples include "dissipative patterns" set up by energy flows, and the "gliders" (oscillating, travelling, yet stable) evolved in J.H. Conway's game played with beads which reproduce or die in obedience to three short rules.
Such a style of reaction could seem unappreciative of the near-incredible intricacy of living things: the "simple" cell has a microscopic structure about as complex as that of a whole man as viewed by the naked eye. We could also challenge the assumption that there is bound to be an environment in which Natural Selection can proceed smoothly. Treating a world of life as unsurprising could strike us as obsession with what can be quantified, a blind eye being turned to the qualitative. But reasonable men can disagree over these points. This paper has therefore stressed that quantitative considerations contribute strongly to a modern Design Argument. Tiny changes in fundamental constants would have made life's evolution very unlikely. Look again at that figure of one part in 10^100, representing how accurately gravity may have to be adjusted to the weak force for the cosmos not to suffer swift collapse or explosion. Recall the claim that changing by one part in 10^40 the balance between gravity and electromagnetism could have made stars burn too fast or too slowly for life's purposes. Think of the many other claims I reported.
True, few such claims involve figures as huge as 10^100; but they often compensate for this by being very firmly established. And my survey has been far from comprehensive. For instance I did not mention how fast the cosmos would have collapsed if electron neutrinos, often thought to have a small mass, had weighed even a hundredth of what the electron does. (The Bang produced some billion of them for every proton.) I was silent, too, about P.W. Atkins's calculation that a 1% increase in electromagnetism's strength could have doubled the years needed for intelligent life to evolve, while doubling it could have meant that 10^62 years would be needed. Atkins comments that were atoms more tightly knit then only "prods like nuclear explosions" could have much probability of inducing changes in living structures made from them.
Observe that argument on these lines need not appeal to any need for an ozone layer to defend us against ultraviolet rays; or for ice to float so as to form a protective cover over ponds; or for there to be calcium, chlorine, magnesium, potassium, phosphorus, sodium and sulfur (mineral elements essential to the actual organisms on our planet). It need not even be assumed (though Wald and others give powerful grounds for it) that without carbon as a basis for complex chains and water's special properties there would be no life in our universe. The big point is instead the one insisted on by Rozental: that small changes in fundamental constants-force strengths, masses, Planck's constant, and so forth-would have meant the total absence of "nuclei, atoms, stars and galaxies": "not merely slight quantitative changes in the physical picture but rather the destruction of its foundations." Presumably this would mean the absence not just of observers made of carbon and water, but of absolutely all observers. There would be no fire, crystals, wind-shaken bubbles; and even if there were still things "reproducing" much as fire does, that would be a long way from anything worth calling life.
How about life based not on chemistry (that is, on electromagnetism) but on the strong nuclear force or on gravity? Could it not flourish without divine fine tuning? No water or carbon can exist on a neutron star; its heat, gravity and magnetism might destroy ordered structures in a quadrillionth of a second; yet could not the strong nuclear force work so fast that this would not matter? An entire neutron star civilization might last "only a billionth of a second" while the evolution of intelligent life took "one thirtieth of a second." Or might not "gravitational life" ("individual stars play the role of individual atoms or molecules in Earth life") evolve "after billions of billions of years, not the mere billions of years needed for life based on electromagnetic forces"? I answer (i) that this is speculation such as makes belief in God appear tame indeed; (ii) that neither "nuclear life" nor "gravitational life" could have elements as precisely positioned as the electrons whose precise positioning is crucial to our genetic code (see above: Kahn, and so forth); and (iii) that one would not have the star-studded heavens of gravitational life or the neutron stars of strong-nuclear life had basic constants been much altered. A trifling change, and the cosmos collapses in a thousandth of a second or flies to pieces so quickly that there is soon nothing but gas too dilute to become gravitationally bound. Another, and there is almost no excess of matter over antimatter: the universe is for practical purposes made of light alone. Another, and the first trillion years are too hot for stars to form, after which all is far too dilute. Another, and the Bang produces black holes only.
We need not claim that of all logically possible universes only a small fraction would contain life. We need look only at universes in "the local area" of possibilities, ones much like ours in their basic laws but differing in their force strengths, particle masses, expansion speeds, and so on. A parable may help. A wasp on a wall is surrounded by a fairly wide area free of all insects. Just one bullet is fired. It hits the wasp. Was it fired by an expert, probably? In tackling this we need not care whether distant areas of the wall bear many insects. There is only one insect locally!
In cosmology our wasp becomes a small "window" inside which various constants had to fall, for life to evolve. The local area becomes an area (or volume) of possibilities, measurable with the help of axes giving possible values for those constants. And "hitting a window" can be impressive even when the area might have one or two other small windows. (A pioneering paper by Rozental, I. Novikov and A. Polnarev illustrates this. With axes showing various strengths of gravity and electromagnetism, they find one tiny window of possible life-encouragingness in addition to the one inside which the actual strengths lie. But more research could well reveal that the second window is illusory. For when we list ten reasons for thinking that a strength or mass or other constant must fall inside narrow limits if life is to evolve, we are not just guarding against error by giving ten arguments for one conclusion. Rather, we are offering ten grounds for saying that tinkering with this constant, or with a balance between it and others, will result in disaster somewhere.)
Note that whereas changes of one part in a hundred (or in a trillion) could ruin Life's prospects, Nature's forces have strengths so varied that the strongest is some trillion, trillion, quadrillion times stronger than the weakest; and remember, no one has calculated their strengths "from theory" without smuggling in, say, the observed masses of their messenger particles. Particle masses, furthermore, vary inside limits about as wide: 1 eV or much less is plausible for neutrinos while magnetic monopoles may weigh 10^25 times more. Are they predictable? Some have suggested reasons forcing photons to have zero rest mass, thus removing all danger that these (they are as common as neutrinos, about 10^9 to each proton) would quickly collapse the universe. There is also an understandable tendency for "higher generation" particles to have masses greater and closer together. But as with the force strengths, nobody can say that just this array of masses was inevitable. And as was touched on in connection with monopoles, a widely accepted story says the forces were originally all equal, mere aspects of one "unified force," and that there was just a single kind of particle: as the universe cooled this "symmetry" was broken, force strengths and masses then taking values which were largely unpredictable. Compare how when magnetic material cools below its Curie point an electromagnetic symmetry breaks; a magnetic field then appears, a vector field whose direction (detectable by compass needles) cannot be known beforehand.
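The "trillion, trillion, quadrillion" figure (roughly 10^39) can be illustrated with a standard comparison, which I add here for definiteness: the electric versus the gravitational attraction between a proton and an electron.

```latex
\frac{F_{\mathrm{electric}}}{F_{\mathrm{gravity}}}
 = \frac{e^2 / 4\pi\varepsilon_0}{G\, m_p\, m_e}
 \approx \frac{2.3 \times 10^{-28}}
        {6.7 \times 10^{-11} \cdot 1.7 \times 10^{-27} \cdot 9.1 \times 10^{-31}}
 \approx 2 \times 10^{39}
```

(The ratio is independent of the distance between the two particles, since both forces fall off as the inverse square.)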
True, the strengths and masses may be dictated by the strengths of scalar fields (characterized only by intensity, not direction, and therefore hard to detect if they have the same intensities right across the visible universe). But any such field's strength was quite probably itself a chance affair.
However, all this can suggest that any "fine tuning" might be accounted for without bringing in God's creative choice. Perhaps life-encouraging force strengths and particle masses are just what would be bound to occur somewhere in any sufficiently gigantic Reality. Such a Reality could be split into immensely many huge domains (S. Weinberg compares them to the ice-crystal domains which form when water freezes), perhaps almost all of them ones in which symmetries had broken in ways not compatible with life.
Newton's own words can suggest this possibility.
It seems, then, that we could follow him without rejecting outright the today quite popular idea of a "World Ensemble": a large-U Universe with very many regions ("small-u universes," worlds) that are largely or wholly separate, and vary widely. (1) Wheeler has proposed perpetual oscillations: Big Bang, Big Squeeze, Big Bang, and so on. At each Squeeze information about properties is lost. Successive Bangs thus have differing amounts of matter, force strengths, particle masses, and so on, so each might count as a new World. (2) Many Worlds Quantum Theory, originated by H. Everett III, has Reality forever branching into Worlds almost fully separate. Every set of possibilities that quantum mechanics recognizes becomes real in some branch. (3) Following E.P. Tryon, many describe Worlds appearing as quantum fluctuations, ex nihilo or in an already existing Superspace. (4) Linde has toyed with an eternally expanding de Sitter space, forever boiling with bubble Worlds in which the energy density of the vacuum is lower. (5) G.F.R. Ellis and G.B. Brundrit remind us that if the universe is "open" then it is standardly thought to contain infinitely much material. Outside the region we can see (a "bubble" of an epistemic sort), who can say what might not be happening? (6) Guth's inflationary cosmos, now getting to be the Standard Cosmos, is gigantic, with domains individuated by the various ways in which symmetries broke. Guth and P.J. Steinhardt suggest that our domain stretches 10^35 light years; the cosmos may be 10^25 times larger. (7) Et cetera (for example, the many-celled cosmos of F. Hoyle and J.V. Narlikar).
All or most of these approaches (the Hoyle-Narlikar one is the possible exception) allow for early symmetry-breaking in which force strengths and masses are settled largely by chance, different strengths and masses thus appearing in different Worlds.
The finest illustration of this is Linde's version of the inflationary cosmos. It expands in its first 10^-30 second to perhaps 10^800 cm. (Compare the mere 10^28 cm which light has travelled since the Bang.) Variations in force strengths and particle masses are produced by one or many scalar fields, there being several possible stable values, minima, to which any such field could fall. Different minima are arrived at in different regions, randomly, so Reality is "a lunch at which all possible dishes are available."
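Linde's numbers deserve a moment's arithmetic, using just the two figures in the text:

```latex
\[
  \frac{10^{800}\ \text{cm (size reached by inflation)}}
       {10^{28}\ \text{cm (distance light has travelled since the Bang)}}
  \;=\; 10^{772}.
\]
```

Virtually all of such a cosmos therefore lies far beyond our horizon, leaving unimaginably much room for regions whose scalar fields settled into different minima.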
In the background are two things. (i) The Nobel-Prize-winning Weinberg-Salam account of a symmetry-break which separated the weak force from electromagnetism: a scalar field appeared as the universe cooled, giving large masses to the messenger particles of the weak force while leaving electromagnetism's messenger massless. Such an account can plausibly be extended to cover the masses of all quarks, leptons, intermediary bosons and superheavy bosons, and then perhaps absolutely all differences in force strengths and masses. Interacting with a field, a particle can gain effective mass by suffering drag or "eating" the particles of which the field itself is made. And force strength differences are attributable largely or entirely to differing masses: of the messenger particles, of the particles involved in "screening" and "antiscreening," and of those which the forces push around or transform into one another (as in the case of the weak force, responsible for neutron-proton transformations). (ii) Any World Ensemble theory which allows for inflation can avoid an apparently fatal flaw: namely, that if strengths and masses can vary from region to region thanks to symmetry-breaking, then we should expect them to differ in regions that were separate (because light rays had not had time to link them) when the symmetries broke. This could seem to imply that the volume now visible to us has grown from hugely many domains divided by monopoles, walls and other defects; why then is there so little sign of this? Distant galaxies separated by wide angles would even now be making causal contact for the first time; why do studies of them suggest that force strengths and masses are identical everywhere? As said earlier, inflation could answer such questions. All that we can see could be inside one hugely inflated domain.
Thus some fairly well developed Ensemble stories offer to explain why there are living beings who observe a situation "fine tuned" to life's needs. Even if most "lunches" in an Ensemble are poisonous, sooner or later there will exist a life-encouraging World. Only such a World could be seen by living beings (Anthropic Principle). God need not enter into this.
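The logic of the "lunch" can be put formally. Suppose, as a toy assumption not in the text, that each World independently has some fixed tiny probability p of breaking its symmetries life-encouragingly; then among N Worlds

```latex
\[
  P(\text{at least one life-encouraging World})
  \;=\; 1-(1-p)^{N}
  \;\longrightarrow\; 1
  \qquad\text{as } N\to\infty,\ \text{for any fixed } p>0.
\]
```

An infinite Ensemble would thus contain life-encouraging Worlds with probability one; and the Anthropic Principle then explains why observers find themselves in one of them.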
A) World Ensembles are very speculative. The main evidence for them is the apparent fine tuning, but God could account for that.
B) God might act through laws which produced an Ensemble, relying on Chance to generate life-encouraging Worlds. (Were the Ensemble infinite, it would generate infinitely many!) True, we should now be tempted to attribute life-encouraging properties to Chance alone, dismissing the God hypothesis as an unnecessary extra; yet the evidence for that hypothesis would not have been eroded entirely. For one thing, any World Ensemble explanation of fine tuning is in difficulties unless we assume inflation (for otherwise why don't we see domain walls, and so forth?). Now, as discussed earlier, inflation of any appropriate kind may itself need much fine tuning (particularly if quantum fluctuations are to be inflated to produce density variations from which galaxies can grow). And even if inflation of that kind were dictated by the Unified Theory which applies to our cosmos, there would be the question, more pressing now that "minimal SU(5)" has failed,[69] of why just that Theory applies.
C) To answer this last question we might postulate an Ensemble in which every possible Unified Theory is exemplified somewhere. (Atkins seems to do so.) But would it not be simpler to introduce God to select the Theory appropriately, and to answer also why there is any world at all? A Neoplatonist theology is not stumped by the child's query, "Then who created God?"
D) Let us list factors which appear very fortunate and also unable to vary from World to World as readily as a force strength or a particle mass.
(i) Our world is complex: even at high temperatures, any formula describing it must have many terms, this being what makes possible a complex hierarchy of forces and particles when the Big Bang cools and symmetries break. Yet it is simple enough to be understood. This is necessary if consciousness is to evolve; for where would be the evolutionary advantage of being conscious of the world without at all understanding it?
Is the mixture of simplicity and complexity anything to be surprised at? Consult E.P. Wigner's "The Unreasonable Effectiveness of Mathematics in the Natural Sciences." Again, think of how little we could understand, were it not for inertia. (Guaranteeing that particles do not shoot off at great speed in response to tiny forces, inertia is mysterious: Ernst Mach even blamed it on each object's being somehow attuned to every other object in the cosmos.) Or consider Bertrand Russell's point that things are understood only because the causal influences of distant objects are usually weak. (This was far from inevitable. Interquark forces grow with distance, and so may any force associated with the cosmological constant.)
(ii) Special Relativity. Life can develop in different inertial systems no matter how fast they move towards or away from one another. (Not inevitable. Depends on space's having signature +++- and on how light's speed enters into this. Life might well have been impossible in various systems because forces propagating in certain directions found it hard or impossible to catch up with the particles ahead of them.)
(iii) Quantization and Least Action. Energy is not dissipated uselessly but concentrated in bursts or propagated in straight lines; also (see above) electrons do not spiral into atomic nuclei; and so forth.
(iv) Renormalizability. Life exists only because quantum fluctuations, added to fluctuations-of-fluctuations, and so on, do not yield infinite results; nor do infinities arrive by, for example, "virtual" point-particles materializing indefinitely close to one another. Only recently has anyone had much idea of how such infinities might be avoided. (Many of them may cancel one another. And instead of being infinitely divisible, space may become "foam" at about 10^-33 cm; or point-particles may be replaced by "superstrings.")
(v) The still perplexing fact that there is an "Arrow of Time", a direction of entropy increase. (The suggestion looked at above, that the cosmos starts off with low gravitational entropy, could give only a partial explanation; for why is such entropy ever low, and why is there a dimension along which it can grow higher? As Penrose says, we are here "groping at matters that are barely understood at all from the point of view of physics" and may have to accept time-asymmetric basic laws.)
(vi) Rozental holds that if particles had no spin there would be neither electromagnetism nor gravity; and that had all hadrons lacked isotopic spin, complex stable nuclei would not exist. Yet spins for particles are odd enough to have been laughed at when first proposed.
(vii) Baryon Conservation (see above) and other bizarre conservation principles, linked to strange and beautiful symmetry principles.
First: In a field so complex, no very firm conclusions will be justified even a million years from now. The early instants of the Big Bang, the presence of vastly many mini-universes, are far beyond direct experience. The same applies, I would say, to the reality of God.
Second: There are nonetheless good grounds for thinking of the visible universe as remarkably fine tuned for producing life. Now, despite the extent to which it goes beyond direct experience, the World Ensemble interpretation of this is straightforward enough to be powerful. And the same could be true of the theistic interpretation suggested by Newton's magnificent writings.
First letter to Bentley.
Letter to the Princess of Wales, November 1715.
A Neoplatonist Creative Principle is defended in many of my writings: particularly in Value and Existence (Oxford: 1979) and in articles in American Philosophical Quarterly 7 (1970); Mind 87 (1978); International Journal for Philosophy of Religion 11 (1980); and Religious Studies (to appear).
Second letter to Bentley.
Fourth letter to Bentley.
Third letter to Bentley.
Other Worlds (London: 1980), pp. 160-1 and 168-9.
Quantum Gravity 2 (Oxford: 1981), eds. C.J. Isham, R. Penrose, D.W. Sciama, pp. 248-9.
Gravitation and the Universe (Philadelphia: 1970), p. 62.
Page 514 of R.H. Dicke and P.J.E. Peebles in General Relativity (Cambridge: 1979), eds. S.W. Hawking and W. Israel.
Page 285 of Confrontation of Cosmological Theories with Observational Data (Dordrecht: 1974), ed. M.S. Longair.
B.J. Carr, Irish Astronomical Journal 15 (1982), p. 244; cf. p. 20 of P.C.W. Davies' superb "The Anthropic Principle," in Particle and Nuclear Physics 10 (1983), pp. 1-38, or p. 411 of J.D. Barrow's and F.J. Tipler's impressively wide-ranging The Anthropic Cosmological Principle (Oxford: 1986).
Page 348 of A.H. Guth, Physical Review D 23 (1981).
Page 433 of Barrow and Tipler, cf. Guth, p.352.
See, e.g., The Very Early Universe (Cambridge: 1982), eds. G.W. Gibbons, S.W. Hawking, T.C. Siklos, pp. 271, 393 ff.; or A.D. Mazenko, G.M. Unruh, R.M. Wald, Physical Review D 31 (1985), pp. 273-282.
Pages 28-30 of Davies, "The Anthropic Principle."
See p. 413 of Barrow and Tipler; or pp. 6, 26, 475-6, of The Very Early Universe.
S.W. Hawking, Phil. Trans. Roy. Soc. London A 310 (1983), p. 304.
Davies, "The Anthropic Principle," p. 28.
Barrow and Tipler, p. 434.
American Scientist 62 (1974), p. 689.
F. Dyson, Scientific American 225 (1971), pp. 52-4; Idlis, Izvest. Astrofiz. Instit. Kazakh. SSR 7 (1958), pp. 39-54 and esp. p. 47.
P.C.W. Davies, Superforce (New York: 1984), pp. 183-205.
Opticks, Query 31.
R. Penrose in Quantum Gravity 2, pp. 244-272; Davies, God and the New Physics (London: 1983), pp. 50-54 and 177-181.
Dyson, p. 56.
Davies, Other Worlds, pp. 176-7.
J. Demaret et C. Barbier, Revue des Questions Scientifiques 152, (1981), p. 500.
M.J. Rees, Phil. Trans. Roy. Soc. London A 310 (1983), p. 317.
J. D. Barrow and J. Silk, Scientific American 242 No. 4 (1980), pp. 127-8.
Davies, "The Anthropic Principle," p. 8, and I.L. Rozental, Elementary Particles and the Structure of the Universe (Moscow: 1984, in Russian), p. 85.
Dyson, p. 56.
F. Hoyle, Astrophys. J. Suppl. 1 (1954), p.121; E.E. Salpeter, Physical Review 107 (1957), p. 516.
I.L. Rozental, Structure of the Universe and Fundamental Constants (Moscow: 1981), p. 8.
B.J. Carr and M.J. Rees, Nature 278 (1979), p. 611.
B. Carter in Atomic Masses and Fundamental Constants: 5 (New York: 1976), eds. J.H. Sanders and A.H. Wapstra, p. 652.
P.W. Atkins, The Creation (Oxford: 1981), p. 13.
Davies, "The Anthropic Principle," p. 7.
M.J. Rees, Quart. J. of the Royal Astron. Soc. 22 (1981), p.122, with the figure of about 1% coming from a conversation of that year; cf. the above-cited works of Hoyle and Salpeter.
Barrow and Tipler, pp. 252-3.
Ibid., p. 327.
On Numerical Values of Fundamental Constants (Moscow: 1980), p. 9; on the question of atomic weights above four he cites E. E. Salpeter, Astrophys J. 140 (1964), p. 796.
Phil. Trans. Roy. Soc. London A 310 (1983), pp. 323-336.
V. Trimble, American Scientist 65 (1977), p. 85; I.L. Rozental, Soviet Physics: Uspekhi 23 (1980), p. 303.
Confrontation etc., pp. 296-8.
G. Gale, Scientific American 245 No. 6 (1981), pp. 154-171 and esp. p. 155.
R.T. Rood and J.S. Trefil, Are We Alone? (New York: 1982), p. 21.
Superforce, p. 242.
Reviews of Modern Physics 29 (1957), pp. 375-6.
Soviet Physics: Uspekhi 23 (1980), pp. 303 and 298.
Physical Review 73, p. 801.
M.J. Rees, Phil. Trans. Roy. Soc. London A 310 (1983), p. 312.
R.Breuer, Das Anthropische Prinzip (Munich: 1983), p. 228.
Carr and Rees, p. 611.
Barrow and Tipler, p. 339.
Nature 265 (1977), p. 710.
I.S. Shklovskii and C. Sagan, Intelligent Life in the Universe (New York: 1966), p. 124.
Physics Bulletin, Cambridge, 32, p. 15.
Barrow and Tipler, pp. 371, 399-400; Davies, "The Anthropic Principle," pp.9-10, and The Forces of Nature (Cambridge: 1979), pp. 100-102, 172; Rozental, Elementary Particles etc., pp. 78-84.
Rozental, p. 298 of the Uspekhi paper.
Rozental, Elementary Particles etc., pp. 78-84.
Davies, Superforce, pp. 137-8.
Barrow and Tipler, pp. 403-7; or G.G. Ross, pp. 304-22 of Quantum Gravity, 2.
Demaret et Barbier, p. 489.
H. Pagels, Perfect Symmetry (New York: 1985), pp. 275-9.
Demaret et Barbier, Rev. des Quest. Sci. 152, p. 199; S. Weinberg, The First Three Minutes, second edition, (London: 1983), p. 87.
Carr and Rees, p. 610; Demaret et Barbier, pp. 478-80, 500; D.V. Nanopoulos, Physics Letters, 91B pp. 67-71; Davies, "The Anthropic Principle," pp. 24-5; Barrow and Tipler, p. 418.
Weinberg, p. 157.
Barrow and Tipler, pp. 358-9.
Gravitation (San Francisco: 1973), authors C.W. Misner, K.S. Thorne and J.A. Wheeler, p. 1215, and Problems in the Foundations of Physics (Amsterdam: 1979), ed. G. Toraldo di Francia, p. 441.
CERN bulletin 65-26, 2 July 1965, pp. 2-3,12.
R.Penrose in Quantum Gravity 2, p. 267.
Cosmochemical Evolution and the Origins of Life (Dordrecht: 1974), eds. J. Oro, S.L. Miller, C. Ponnamperuma, R.S. Young, pp. 7,24.
The Emerging Universe (Charlottesville: 1972), eds. W.C. Saslaw and K.C. Jacobs, p. 79.
Barrow and Tipler, p. 297.
Atti del Convegno Mendeleeviano, Acad. del. Sci. de Torino (1971), p. 398.
Cosmochemical Evolution, pp. 23-4.
Davies, Superforce, p. 131.
On Numerical Values, p. 14.
Uspekhi paper, p. 298.
Atkins, pp. 86-7; C. Rebbi, Scientific American 240 No. 2 (1979), pp. 76-91; Z. Parsa, American J. of Physics 47 (1979), pp. 56-62.
Proc. of the Amsterdam Academy 20 (1917), p. 200; for discussion of many other authors see Barrow and Tipler, pp. 258-276.
Gravitation, p. 1205.
Barrow and Tipler, pp. 248-9 and p. 283, n. 95.
Proc. Roy. Soc. London 1377 (1981), pp. 147-9.
Reports on Progress in Physics 47 (1984), p. 974.
M. Turner and F. Wilczek, Nature 298 (1982), p. 633; Wilczek on p. 27 of The Very Early Universe, reporting work by R. Flores and M. Sher.
Davies, The Forces of Nature, pp.229-30; Rozental, Uspekhi paper, p. 301.
Drawn in part from such classics as V.F. Weisskopf, Knowledge and Wonder (New York: 1962) and H.F. Blum, Time's Arrow and Evolution (Princeton: 1968).
Davies, The Forces of Nature, p. 160.
European J. of Physics 2, pp. 55-7.
First letter to Bentley.
Gravitation, p. 1206.
Laws of the Game (New York: 1981), transl. by R. and R. Kimber.
Davies, "The Anthropic Principle," p. 15.
The Creation, pp. 10-12.
Uspekhi paper, p. 296.
D. Goldsmith and T. Owen, The Search for Life in the Universe (Menlo Park: 1980), pp. 220-1.
Ibid., pp. 221-2.
Izvest. Akad. Nauk Estonskoi SSR Fiz. Matemat. 31 (1982) pp. 284-9.
Several authors discuss all this in Los Alamos Science 11 (1984), and Phil. Trans. Roy. Soc. London A 310 (1983); see esp. C.H. Llewellyn Smith on pp. 253-9. See also G. 't Hooft, Scientific American 242 No. 6 (1980), pp. 104-38; S. Weinberg, Physica Scripta 21 (1980), pp. 773-781; W. Willis, New Scientist 100 (1983), pp. 9-12; M.J.G. Veltman, Scientific American 255 (1986), pp. 76-84.
The First Three Minutes, p. 140.
Wheeler in Gravitation, ch. 44, and Quantum Gravity (Oxford: 1975), eds. C.J. Isham, R. Penrose, D.W. Sciama, pp. 538-605 and esp. 556-7; Everett and others in The Many-Worlds Interpretation of Quantum Mechanics (Princeton: 1973), eds. B.S. DeWitt and R.N. Graham; Tryon, Nature 246 (1973), pp. 396-7, and New Scientist 101 (1984), pp. 14-16; Linde in The Very Early Universe, p. 239; Ellis and Brundrit, Quart. J. of the Royal Astron. Soc. 20 (1979), pp.37-41; Guth and Steinhardt, Scientific American 250 (1984), pp. 116-128; Hoyle, Ten Faces of the Universe (San Francisco: 1977), ch. 6.
See ref. 98; also The Very Early Universe, pp. 205-249 and esp. 247 on the "lunch"; also New Scientist 105 No. 1446 (1985), pp. 14-18, where inflation by a factor of ten to the power of a million is suggested.
Demaret et Barbier, p. 205.
See, e.g., Linde on p. 216 of The Very Early Universe.
Commun. in Pure and Applied Math. 13 No. 1 (1960) p. 227.
Pages 581-638, and esp. 594, of General Relativity.
Uspekhi paper, p. 302.
My other cosmological papers try to remedy this. See Philosophy 53, pp. 71-9; American Philosophical Quart. 19 (1982), pp. 141-151, with misprints corrected in No. 4; pages 53-82 of Scientific Explanation and Understanding (Lanham and London: 1983), ed. N. Rescher; Mind 92 (1983), pp. 573-9; pages 91-120 of Evolution and Creation (Notre Dame: 1985), ed. E. McMullin; pages 111-119 of Current Issues in Teleology (Lanham and London: 1986), ed. N. Rescher; pages 87-95 of PSA 1986: Volume One (Ann Arbor: 1986), Proceedings of the Phil. of Science Assoc., eds. A. Fine and P. Machamer; "Probabilistic Phase Transitions and the Anthropic Principle," to appear in Origin and Early History of the Universe (Liege: 1987), Proceedings of the 26th Liege International Astrophysical Colloquium; "The Leibnizian Richness of Our Universe," to appear in Science and Metaphysics in the Philosophy of Leibniz (1987), ed. N. Rescher.