Stephen C. Meyer, Philosopher of Science

The Return of the God Hypothesis

Journal of Interdisciplinary Studies, Vol. XI, No. 1/2 (Jan. 1, 1999)

Historian of science Frederic Burnham has stated that the God hypothesis is now "a more respectable hypothesis than at any time in the last one hundred years." This essay explores recent evidence from cosmology, physics, and biology, which provides epistemological support, though not proof, for belief in God as conceived by a theistic worldview. It develops a notion of epistemological support based upon explanatory power, rather than just deductive entailment. It also evaluates the explanatory power of theism and its main metaphysical competitors with respect to several classes of scientific evidence. The conclusion follows that theism explains a wide ensemble of metaphysically-significant evidences more adequately and comprehensively than other major worldviews or metaphysical systems. Thus, unlike much recent scholarship that characterizes science as either conflicting with theistic belief or entirely neutral with respect to it, this essay concludes that scientific evidence actually supports such belief.


The Rise and Fall of Theistic Arguments

In 1799, the physicist Pierre Laplace presented copies of his Treatise on Celestial Mechanics to the new French ruler, Napoleon Bonaparte. In it, Laplace sought to explain the origin of the solar system not as the product of divine design, as Isaac Newton had done, but as the result of purely natural gravitational forces. When Napoleon eventually summoned Laplace to discuss the Treatise in 1802, he asked Laplace directly about the role of God in his theory.

"Newton spoke of God in his book," said Napoleon. "I have perused yours, but failed to find his name mentioned even once. Why?" Laplace reportedly issued the now famous reply: "Sire, I had no need of that hypothesis" (cited in Kaiser 1991: 267). While many historians are uncertain about the factual status of this conversation, few dispute that it accurately depicts Laplace's attitude about the God hypothesis, or that it accurately expresses a change in philosophical attitude that occurred among many scientists during the nineteenth century. Indeed, the publication of Laplace's Treatise and its fully naturalistic account of celestial origins came just as Western philosophy of science began to turn from its long-established theistic orientation. Even before the nineteenth century, leading philosophers like David Hume and Immanuel Kant had denied the soundness of classical arguments from nature for God's existence. Hume and Kant raised powerful philosophical objections to the design and cosmological arguments, the two most formidable arguments of this kind. Further, despite the now well-documented influence of Judeo-Christian thinking on the rise of modern science from the time of Ockham to Newton, natural science throughout the nineteenth century would take a decidedly materialistic turn (Hooykaas 1972).

Scientific origins theories in particular seemed to support the materialistic vision of an autonomous and self-creating natural world. Not only Laplace's work in astronomy, but developments in other fields supported this trend. In geology, Charles Lyell explained the origin of the earth's most dramatic topographical features (mountain ranges and canyons) as the result of slow, gradual, and completely naturalistic processes of change. In cosmology, a belief in the infinity of space and time obviated any need to consider the question of the ultimate origin of matter. In biology, Darwin's evolutionary theory sought to show that the blind process of natural selection acting on random variations could and did account for the origin of new forms of life without any divine intervention or guidance. Darwin's theory suggested that living organisms only appeared to be designed and that the mechanism of natural selection sufficed to explain that appearance (1968: 130-72). As Francisco Ayala explains, "The functional design of organisms and their features would . . . seem to argue for the existence of a designer. It was Darwin's greatest accomplishment to show that the directive organization of living beings can be explained as the result of a natural process, natural selection, without any need to resort to a Creator or other external agent" (1994: 4).

These theories taken jointly suggested that the whole history of the universe could be told as a seamless, or nearly seamless, unfolding of the potentiality of matter and energy. Thus, science seemed to support, if it could be said to support anything, a materialistic or naturalistic worldview, not a theistic one. Science no longer needed to invoke a pre-existent mind to shape matter in order to explain the evidence of nature. Matter had always existed and could in effect arrange itself without a pre-existent designer or Creator. Thus, by the close of the nineteenth century, both the evidential and philosophical basis of theistic arguments from nature had seemingly evaporated. Neither science nor philosophy had need of the God hypothesis. 

The demise of theistic arguments from nature and the corresponding rise of a scientifically-based materialistic worldview would alter the way many intellectuals conceptualized the relationship between science and theistic religious belief throughout the twentieth century. With the rise of scientific materialism or naturalism, many twentieth-century scientists, philosophers, and theologians perceived science and theistic belief as standing in overt conflict. Others, however, have denied that science contradicts religious belief. Nevertheless, they typically have done so by portraying science and religion as such totally distinct enterprises that their teachings do not intersect in significant ways. Two such models, compartmentalization and complementarity, assume the religious and metaphysical neutrality of scientific knowledge (Van Till 1986; Peterson 1989; Meyer 2000). Thus, some see the witness of science as hostile to a theistic worldview, while others attempt to cast it as entirely neutral. Few, however, have thought (in contrast to the founders of early modern science like Kepler, Boyle, and Newton) that the testimony of nature (or science) actually supports important tenets of theism or the Judeo-Christian religion.

The Demise of the Design Argument

Two types of arguments for God's existence from nature have proven especially effective in the history of Western thought: design and cosmological arguments. The classical design argument begins by noting certain highly ordered or complex features within nature, such as the configuration of planets or the architecture of the vertebrate eye. It then proceeds to argue that such features could not have arisen without the activity of a pre-existent intelligence (typically equated with God). The cosmological argument starts from the existence and causal regularity of the universe and seeks to deduce a necessary being, that is, God as the First Cause or sufficient reason for the universe's existence (Craig 1994: 79-83). Perhaps the most empirically contingent version of the argument, the kalam cosmological argument, asserts that the universe had a temporal beginning, a proposition that medieval philosophers typically sought to justify by showing the logical or mathematical absurdity of an infinite regress of cause and effect. The argument then concluded that the beginning of the physical universe must have resulted from an uncaused First Cause (God) that exists independently of the universe (Craig 1994: 79-80; Swinburne 1979: 116-32). Throughout Western history, many philosophers and scientists have formulated various empirically-based theistic arguments. Many, then, have viewed science and theistic belief as mutually reinforcing. Yet the most important versions of these arguments came into disrepute by the end of the nineteenth century, chiefly due to developments within science.

With the advent of the Enlightenment, both Judeo-Christian belief and the design argument came under attack. Thus, the skeptical empiricist philosopher David Hume (1711-76) rejected the existence of God and the validity of the design argument (1989: 61-66). Hume maintained in Dialogues Concerning Natural Religion (1779) that the design argument depended upon a flawed analogy with human artifacts. He admitted that artifacts derive from intelligent artificers, and that biological organisms have certain similarities to complex human artifacts. Eyes and pocket watches both depend upon the functional integration of many separate and specifically configured parts. Nevertheless, he argued, biological organisms also differ from human artifacts (they reproduce themselves, for example), and the advocates of the design argument fail to take these dissimilarities into account. Since experience teaches that organisms always come from other organisms, Hume argued that analogical argument really ought to suggest that organisms ultimately come from some primeval organism (perhaps a giant spider or vegetable), not a transcendent mind or spirit.

Despite such objections, Hume's categorical rejection of the design argument did not prove decisive with either theistic or secular philosophers. Thinkers as diverse as the Scottish Presbyterian Thomas Reid (1981: 59), the Enlightenment deist Thomas Paine (1925: 6), and Kant (1963: 523) continued to affirm various versions of the design argument after the publication of Hume's Dialogues. Moreover, science-based design arguments continued into the nineteenth century, in such works as William Paley's Natural Theology (1852). Paley catalogued a host of biological systems that suggested the work of a superintending intelligence. Paley argued that the astonishing complexity and superb adaptation of means to ends in such systems could not originate strictly through the blind forces of nature. Paley also responded directly to Hume's claim that the design inference rested upon a faulty analogy. A watch that could reproduce itself, he argued, would constitute an even more marvelous effect than one that could not. Thus, for Paley, the differences between artifacts and organisms only seemed to strengthen the conclusion of the design argument (1852: 8-9). Indeed, despite the widespread currency of Hume's objections, many scientists continued to find Paley's watch-to-watchmaker reasoning compelling well into the nineteenth century.

Thus, it was not ultimately the arguments of the philosophers that destroyed the popularity of the design argument, but the emergence of increasingly powerful materialistic explanations of apparent design, particularly Charles Darwin's theory of evolution by natural selection. Darwin argued in 1859 that living organisms, which had always been seen as the most obvious example of God's creative power, only appeared to be designed. Darwin proposed a specific mechanism, natural selection acting on random variations, that could explain the adaptation of organisms to their environment (and other evidences of apparent design) without actually invoking an intelligent or directing agency. If the origin of biological organisms could be explained naturalistically, as Darwin argued, then explanations invoking an intelligent designer were unnecessary and even vacuous (1968: 453).

This trend was reinforced by the emergence of other fully naturalistic origins scenarios in astronomy, cosmology, and geology. It was also reinforced and enabled by an emerging positivistic tradition in science that increasingly sought to exclude appeals to supernatural or intelligent causes from science by definition (Gillespie 1979: 41-66). Natural theologians like Robert Chambers, Richard Owen, and Asa Gray, writing just prior to Darwin, tended to oblige this convention by locating design in the workings of natural law rather than in the complex structure or function of particular objects. While this move certainly made the natural theology tradition more acceptable to shifting methodological canons in science, it also gradually emptied it of any distinctive empirical content, leaving it vulnerable to charges of subjectivity and vacuousness. By locating design more in natural law and less in complex contrivances that could be understood by direct analogy to human creativity, later British natural theologians ultimately made their research program indistinguishable from the positivistic and fully naturalistic science of the Darwinians. As a result, the notion of design, to the extent it maintained any intellectual currency, soon became relegated to a matter of subjective belief. One could still believe that a mind superintended the workings of nature, but one might just as well assert that nature and its laws existed on their own. Thus, by the end of the nineteenth century, natural theologians could no longer point to any specific artifact of nature that required intelligence as a necessary explanation. As a result, intelligent design became undetectable except through the eyes of faith.

The Demise of the Cosmological Argument

The demise of the cosmological argument also began with Enlightenment philosophers. Kant in particular challenged the arguments of medieval Christian, Islamic, and Jewish thinkers about the need for a First Cause of the universe. To many medievals, the principle of causality and the existence of the material universe implied the existence of a necessary First Cause, a Cause that they equated with God. Kant denied that the universe needed a necessary First Cause. He argued that there could be an unbroken line of effects and causes going back infinitely in time, thus eliminating the need for a temporally transcendent or divine First Cause. Kant accepted the possibility that the universe itself might be eternal and self-existent (1963: 511-12).

Kant's skepticism about the cosmological argument, and the kalam version in particular, was reinforced by the science of his day. Though Newton supported the design argument, one aspect of his physics, the postulation of infinite time and space, helped to undermine the classical kalam cosmological argument. According to Newton's theory of universal gravitation, all bodies attract one another with a force proportional to the product of their masses and inversely proportional to the square of the distance between them. His theory implied that all bodies of matter in the universe attract one another. Yet this created a puzzle. According to Newton's theory, every star should gravitate towards the center of the universe, until the whole universe collapses in on itself. Thus, the universe must either be collapsing or expanding to offset its tendency to collapse. Either way, it could not be static.
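
For reference, the inverse-square relation described in the paragraph above can be written in its standard form (the notation below is added for clarity and is not part of the original essay):

```latex
% Newton's law of universal gravitation:
%   F        = attractive force between two bodies
%   G        = the gravitational constant
%   m_1, m_2 = the masses of the two bodies
%   r        = the distance between their centers
F = G\,\frac{m_1 m_2}{r^{2}}
```

Because this force never vanishes at any finite distance, every body in a bounded, static distribution of matter would indeed tend to draw together, which is the puzzle described above.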

To avoid having to abandon either his theory of gravity or the notion of a static universe, Newton proposed that the matter was "evenly diffused through an infinite space," so that it would "never convene into one mass" (1959, 3: 234). Newton thought that if there were an infinite number of stars scattered evenly throughout the universe, then every star would attract every other star with equal force in all directions simultaneously. Thus, the stars would remain forever suspended in a tension of balanced gravitational attraction (Hawking 1988: 9). Newton himself found the infinite universe appealing for theological reasons. He thought of space and time as a Divine Sensorium, a medium in which God perceived His Creation. Since God was infinite, space and time had to be as well. Naturalistically-minded physicists following Newton found his infinite and static universe paradigm philosophically agreeable. Some philosophical naturalists rallied to support the infinite-static model proposed by Newton specifically because it eliminated the need to explain the beginning of time and space. By the end of the nineteenth century, this view had become deeply entrenched in the scientific community and provided a powerful reason for rejecting the kalam cosmological argument, which depended upon the premise of a finite universe.

Clearly, the demise of theistic arguments did not eliminate theistic belief, even among scientists. The demise of such arguments and the emergence of a fully materialistic account of the origin of the natural world from the infinite past to the dawn of human life did, however, have a profound effect on the perception of the relationship between science and theistic belief. Indeed, since the late nineteenth century, scientists generally either asserted that science contradicts theistic belief or denied that science has any religious or metaphysical implications whatsoever. Either way, scientists and philosophers have for the most part denied that the testimony of nature lends support to a theistic worldview. 

The Big Bang and General Relativity

During the twentieth century, a quiet but remarkable shift has occurred in science. Evidence from cosmology, physics, and biology now tells a very different story than did the science of the late nineteenth century. Evidence from cosmology now supports a finite, not an infinite universe, while evidence from physics and biology has reopened the question of design. 

In 1915-16, Albert Einstein shocked the scientific world with his theory of general relativity (Chaisson & McMillan 1993: 604-5). Though Einstein's theory challenged Newton's theory of gravity in many important respects, it also implied (as did Newton's) that the universe could not be static, but instead was simultaneously expanding and decelerating. According to relativity theory, massive bodies alter the curvature of space so as to draw nearby objects to them. Einstein's conception of gravity implied that all material bodies would congeal unless the effects of gravitation were continually counteracted by the expansion of space itself (Eddington 1930). Einstein's theory thus implied an expanding, not a static, universe.

Einstein disliked this idea, in part for philosophical reasons. An actively expanding universe implied a beginning to the expansion, and thus, to the universe. As the Russian physicist Alexander Friedmann (1922: 377-86) showed, general relativity implied that, in the words of Stephen Hawking, "at some time in the past (between ten and twenty thousand million years ago) the distance between neighboring galaxies must have been zero" (1988: 46). Relativity theory suggested a universe of finite duration racing outward from an initial beginning in the distant past. For Einstein, however, a definite beginning to the universe seemed so counterintuitive that he introduced an arbitrary factor in his theory to eliminate the implication. In 1917, he postulated a repulsive force, expressed by his cosmological constant, of precisely the magnitude necessary to counteract the expansion that his theory implied.1 Like Newton, Einstein inadvertently concealed an important cosmological reality implicit in his theory.

Yet the heavens would soon talk back. In the 1920s-30s, Edwin Hubble, a young lawyer-turned-astronomer, made a series of observations that shocked even Einstein. While working at the Mt. Wilson Observatory in Southern California, Hubble discovered for the first time that our Milky Way galaxy is but one of many galaxies spread throughout the universe. More important, he discovered that the galaxies beyond the Milky Way are rapidly receding from ours. Hubble noticed that the light from these distant galaxies was shifted toward the red end of the electromagnetic spectrum. This red-shift suggested recessional movement, for the same reason (the so-called Doppler effect) that a train whistle drops in pitch as a train moves away from a stationary observer. Hubble also discovered that the rate at which these other galaxies retreat from ours is directly related to their distance from us, just as if the universe were undergoing a spherical expansion in all directions from a singular explosive beginning: the big bang (1929: 168-73).
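
The proportionality Hubble observed is usually summarized as follows (the formula is added here for clarity and is not part of the original essay):

```latex
% Hubble's law: recessional velocity is proportional to distance.
%   v   = recessional velocity of a galaxy
%   H_0 = the Hubble constant
%   d   = the galaxy's distance from the observer
v = H_0\, d
% For speeds well below that of light, the measured redshift relates to the
% recessional velocity approximately by  z = \Delta\lambda / \lambda \approx v / c.
```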

During the remainder of the twentieth century, physicists and cosmologists formulated several alternatives to the Big Bang theory that preserved an infinite universe. Some of these cosmological models were formulated for explicitly philosophical reasons. For example, in the late 1940s, Fred Hoyle, Thomas Gold, and Hermann Bondi proposed the steady state model to explain galactic recession without invoking the objectionable notion of a beginning. According to their theory, as the universe expands new matter is generated spontaneously in the space between expanding galaxies. On this view, our galaxy is composed of matter that spontaneously popped into existence between other galaxies, which in turn came out of the empty space between other galaxies, and so on (Bondi & Gold 1948; Hoyle 1948). Thus, the steady state theory denied the need to postulate a singular beginning, and reaffirmed an infinite universe without beginning or end. 

By the mid-1960s, however, Hoyle's theory had run aground as the result of a discovery made by two employees of Bell Telephone Laboratories in New Jersey. According to the steady state model, the density of the universe must always remain constant, hence the creation of new matter as the universe expands. Yet in 1965, the Bell Lab researchers, Arno Penzias and Robert Wilson, found what physicists believed to be the radiation left over from the universe's initial hot, high-density state (1965: 419-21). The discovery of this cosmic background radiation, at roughly 2.7 degrees Kelvin, proved decisive. Physicist George Gamow had predicted its existence as a consequence of the Big Bang (1946: 572-73). Yet advocates of the steady state acknowledged that, given their model, such radiation should not exist. The steady state theory also implied that galaxies should have radically different ages, but advances in observational astronomy have revealed that galactic ages cluster narrowly in the middle-age range. By the 1970s, even Bondi, Gold, and Hoyle had abandoned their theory (Kragh 1993: 403).

Following the demise of the steady state model, the oscillating universe model arose as an alternative to a finite universe. Advocates of this model envisioned a universe that would expand, gradually decelerate, shrink back under the force of its own gravitation, and then, by some unknown mechanism, re-initiate its expansion, on and on, ad infinitum. But, as physicist Alan Guth showed, our knowledge of entropy suggests that the energy available to do the work would decrease with each successive cycle (Guth & Sher 1983: 505-7). Thus, presumably the universe would have reached a nullifying equilibrium long ago if it had indeed existed for an infinite amount of time. Further, recent measurements suggest that the universe has only a fraction (about one-fifth) of the mass required to create a gravitational contraction in the first place (Peebles 1993: 475-83; Coles & Ellis 1994: 609-13; Sawyer 1992: A5; Ross 1993: 58).

Prior to the formulation of the oscillating universe theory, three astrophysicists, Hawking, George Ellis, and Roger Penrose, published a series of papers that explicated the implications of Einstein's theory of general relativity for space and time as well as matter and energy (Hawking & Penrose 1970). Previously, physicists like Friedmann had shown that the density of the universe would approach an infinite value as one extrapolated the state of the universe back in time. In a series of papers written between 1966 and 1970, Hawking and his colleagues showed that as one extrapolated back in time the curvature of space also approached infinity. But an infinitely curved space corresponds to a radius (within a sphere, for example) of zero and thus to no spatial volume. Further, since in general relativity space and time are inextricably linked, the absence of space implies the absence of time. Moreover, neither matter nor energy can exist in the absence of space. Thus, Hawking's result suggested that general relativity implies that the universe sprang into existence a finite time ago from literally nothing, at least nothing physical. In brief, general relativity implies an absolute beginning of time, before which neither time and space, nor matter and energy, would have existed.
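
A simple geometric analogy, added here to make the reasoning step explicit (it is not from the original essay), is the surface of a sphere: curvature grows as the radius shrinks, so infinite curvature corresponds to zero radius and hence to no volume at all.

```latex
% Curvature of a sphere of radius r (illustrative analogy only):
K = \frac{1}{r^{2}} \;\longrightarrow\; \infty \quad \text{as} \quad r \to 0
```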

The space-time theorem of general relativity was, of course, conditional. It stated that, if general relativity obtains for the universe, then space and time themselves must have originated in the same initial explosion that created matter and energy. In a series of experiments, beginning just two years after Einstein published his results and continuing on to the present, the probable error of general relativity (estimated quantitatively) has shrunk from 10 percent, to 1 percent, to 0.05 percent, to a confirmation out to the fifth decimal place. Increasingly accurate tests conducted by NASA, such as the hydrogen maser detector carried by a NASA rocket in 1980 and 1994, have continued to shrink the probable error associated with the theory (Ross 1993: 66-67; Vessot 1980: 2081-84). Thus, general relativity now stands as one of the best confirmed theories of modern science. Yet its philosophical implications, and those of the Big Bang theory, are staggering. Taken jointly, general relativity and the Big Bang theory provide a scientific description of what Christian theologians have long described in doctrinal terms as creatio ex nihilo, Creation out of nothing (again, nothing physical). These theories place a heavy demand on any proposed causal explanation of the universe, since the cause of the beginning of the universe must transcend time, space, matter, and energy.

Anthropic Fine-Tuning 

While evidences from cosmology now point to a transcendent cause for the origin of the universe, new evidences from physics suggest an intelligent cause for the origin of its fundamental architecture. Since the 1960s, physicists have discovered that the existence of life in the universe depends upon a highly improbable balance of physical factors (Giberson 1997). The constants of physics, the initial conditions of the universe, and many other of its contingent features appear delicately balanced to allow for the possibility of life. Even very slight alterations in the values of many independent factors, such as the expansion rate of the universe, the speed of light, and the precise strength of gravitational or electromagnetic attraction, would render life impossible. Physicists now refer to these factors as anthropic coincidences, and to the fortunate convergence of all these coincidences as the fine-tuning of the universe. Many note that this fine-tuning strongly suggests design by a pre-existent intelligence. As physicist Paul Davies put it, "the impression of design is overwhelming" (1988: 203).

To see why, consider the following illustration: Imagine that you are a cosmic explorer who has just stumbled into the control room of the whole universe. There you discover an elaborate universe-creating machine, with rows and rows of dials, each with many possible settings. As you investigate, you learn that each dial represents some particular parameter that has to be calibrated with a precise value in order to create a universe in which life can survive. One dial represents the possible settings for the strong nuclear force, one for the gravitational constant, one for Planck's constant, one for the speed of light, one for the ratio of the neutron mass to the proton mass, one for the strength of electromagnetic attraction, and so on. As you, the cosmic explorer, examine the dials, you find that they can be easily spun to different settings, that is, that they could have been set otherwise. Moreover, you determine by careful calculation that even slight alterations in any of the dial settings would cause changes to the architecture of the universe such that life would cease to exist. Yet for some reason each dial sits with just the exact value necessary to keep the universe running, like a giant safe with multiple combination locks, each of which has been opened. What do you infer about the origin of these finely-tuned dial settings?

Not surprisingly, physicists have been asking the same question. As astronomer George Greenstein muses, "the thought insistently arises that some supernatural agency, or rather Agency, must be involved. Is it possible that suddenly, without intending to, we have stumbled upon scientific proof of the existence of a Supreme Being? Was it God who stepped in and so providentially crafted the cosmos for our benefit?" (1988: 26-27). For many scientists, the design hypothesis seems the most obvious and intuitively plausible answer.2 As Hoyle commented, "a common-sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as chemistry and biology, and that there are no blind forces worth speaking about in nature" (1982: 16). Many physicists now concur. They would argue that in effect the dials in the cosmic control room appear finely-tuned because someone carefully set them that way.

Yet several other types of interpretations have been proposed: (1) the so-called weak anthropic principle, which denies that the fine-tuning needs explanation; (2) explanations based upon natural law; and (3) explanations based upon chance. Each of these approaches suggests that the fine-tuning of the universe represents only apparent design. Of these, perhaps the most popular approach, at least initially, was the weak anthropic principle (WAP). Nevertheless, WAP has recently encountered severe criticism from philosophers of physics and cosmology. WAP advocates claimed that if the universe were not fine-tuned to allow for life, then humans would not be here to observe it. Thus, they claimed, the fine-tuning requires no explanation. Yet as John Leslie and William Craig (1996: 23) argue, the origin of the fine-tuning does require explanation. Though we humans should not be surprised to find ourselves living in a universe suited for life (by definition), we ought to be surprised to learn that the conditions necessary for life are so vastly improbable. Leslie likens our situation to that of a blindfolded man who has discovered that, against all odds, he has survived a firing squad of 100 expert marksmen (1982: 150). Though his continued existence is certainly consistent with all the marksmen having missed, it does not explain why the marksmen actually did miss. In essence, the weak anthropic principle asserts that the statement of a necessary condition of an event eliminates the need for a causal explanation of that event. Yet oxygen is a necessary condition of fire, but saying so does not provide a causal explanation of the San Francisco fire. Similarly, the fine-tuning of the physical constants is a necessary condition for the existence of life, but that does not explain, or eliminate the need to explain, the origin of the fine-tuning.

While some deny the need to explain the fine-tuning coincidences, others have sought to formulate various naturalistic explanations for them. Of these, appeals to natural law have proven the least popular for a simple reason. The precise dial settings of the different constants of physics represent specific features of the laws of nature themselves, just how strong gravitational attraction or electromagnetic attraction will be, for example. These values represent contingent features of the fundamental laws themselves. Therefore, the laws cannot explain these features; they are (or possess) the features that require explanation. As Davies observed, the laws of physics "seem themselves to be the product of exceedingly ingenious design" (1984: 243). Further, natural laws by definition describe phenomena that conform to regular or repetitive patterns. Yet the idiosyncratic values of the physical constants and initial conditions constitute a highly irregular and non-repetitive ensemble. It seems unlikely, therefore, that any law could explain why all the fundamental constants have exactly the values they do: why, for example, the gravitational constant should have exactly the value of 6.67 x 10^-11 Newton-meters^2 per kilogram^2, the permittivity constant in Coulomb's law the value of 8.85 x 10^-12 Coulombs^2 per Newton-meter^2, the electron charge-to-mass ratio 1.76 x 10^11 Coulombs per kilogram, the speed of light 3 x 10^8 meters per second, and Planck's constant 6.63 x 10^-34 Joule-seconds, and so on (Halliday & Resnick 1978: A23). These values specify a highly complex array. As a group, they do not seem to exhibit a regular pattern that could in principle be subsumed or explained by natural law.

The chance explanation has proven more popular, but has severe liabilities as well. First, the immense improbability of the fine-tuning makes straightforward appeals to chance untenable. Physicists have discovered some seventy separate physical or cosmological parameters that require precise calibration in order to produce a life-sustaining universe (Barrow & Tipler 1986; Gribbin & Rees 1991; Ross in Dembski 1998). In Nature's Destiny (1998), Michael Denton documents many other necessary conditions for specifically human life from chemistry, geology, and biology. Moreover, many individual parameters exhibit an extraordinarily high degree of fine-tuning. The expansion rate of the universe must be calibrated to one part in 10^60 (Guth 1981: 348). A slightly more rapid rate of expansion, by one part in 10^60, would have resulted in a universe too diffuse in matter to allow stellar formation. An even slightly less rapid rate of expansion, by the same factor, would have produced an immediate gravitational recollapse. The force of gravity itself requires fine-tuning to one part in 10^40 (Davies 1983: 188).

Thus, our cosmic explorer not only finds himself confronted with a large ensemble of separate dial settings, but with very large dials containing a vast array of possible settings, only very few of which allow for a life-sustaining universe. In many cases, the odds against finding a single correct setting by chance, let alone all the correct settings, turn out to be virtually infinitesimal. Oxford physicist Roger Penrose notes that a single parameter, the original phase-space volume, required such precise fine-tuning that the Creator's aim must have been to an accuracy of one part in 10^(10^123). Penrose remarks that one could not possibly even write the number down in full, since it would be "1 followed by 10^123 successive 0's," more zeros than the number of elementary particles in the entire universe. Such, he concludes, is the precision needed to set the universe on its course (Penrose 1989: 344).

To circumvent such vast improbabilities, some postulate the existence of a quasi-infinite number of parallel universes in order to increase the probabilistic resources (roughly, the amount of time and number of trials) available to produce the fine-tuning. In these "many worlds" or "possible worlds" scenarios, originally developed as part of the Everett interpretation of quantum physics and Andrei Linde's inflationary Big Bang cosmology, any event that has positive probability, however small, must happen somewhere in some other parallel universe. So long as life has a positive probability of arising, it had to arise in some possible world. Therefore, sooner or later some universe had to acquire life-sustaining characteristics. Clifford Longley explains that according to the many worlds hypothesis:

there could have been millions and millions of different universes created each with different dial settings of the fundamental ratios and constants, so many in fact that the right set was bound to turn up by sheer chance. We just happened to be the lucky ones (1989: 10). 

On the many worlds hypothesis, our existence in the universe only appears vastly improbable, since calculations of the probability of the anthropic coincidences arising by chance only consider the probabilistic resources available within our universe and neglect the probabilistic resources available from parallel universes. Thus, according to the many worlds hypothesis (MWH), chance can explain the existence of life in the universe after all. MWH now stands as the most popular naturalistic explanation for the anthropic fine-tuning. 

Though clearly ingenious, MWH suffers from an overriding difficulty: we have no evidence for any universes other than our own. Moreover, since possible worlds are by definition causally inaccessible to our own world, there can be no evidence for their existence except that they allegedly render probable otherwise vastly improbable events. Of course, no one can observe God directly either, though He is not causally disconnected from our world. Even so, philosophers of science like Richard Swinburne, Leslie, Craig (1988), and Robin Collins have established several reasons for preferring the theistic design-hypothesis to the naturalistic many-worlds hypothesis. First, all current cosmological models involving multiple universes require some kind of mechanism for generating universes. Yet such a universe generator would itself require precisely configured physical states, thus begging the question of its initial design. As Collins describes the dilemma: 

in all currently worked out proposals for what this universe generator could be, such as the oscillating big bang and the vacuum fluctuation models . . . the generator itself is governed by a complex set of laws that allow it to produce universes. It stands to reason, therefore, that if these laws were slightly different the generator probably would not be able to produce any universes that could sustain life (1999: 61).

Indeed, from experience we know that some machines (or factories) can produce other machines. But our experience also suggests that such machine-producing machines themselves require intelligent design. 

Second, as Collins argues, all things being equal, we should prefer hypotheses "that are natural extrapolations from what we already know about the causal powers of various kinds of entities" (1999: 60-61). Yet when it comes to explaining the anthropic coincidences, the multiple worlds hypothesis fails this test, whereas the theistic-design hypothesis does not. To illustrate, Collins asks his reader to imagine a paleontologist who posits the existence of an electromagnetic "dinosaur-bone-producing field," as opposed to actual dinosaurs, as the explanation for the origin of large fossilized bones. While certainly such a field qualifies as a possible explanation for the origin of the fossil bones, we have no experience of such fields, nor of their producing fossilized bones. Yet we have observed animal remains in various phases of decay and preservation in sediments and sedimentary rock. Thus, most scientists rightly prefer the actual dinosaur hypothesis over the apparent dinosaur hypothesis (the "dinosaur-bone-producing field") as an explanation for the origin of fossils. In the same way, Collins argues, we have no experience of anything like a universe generator (that is not itself designed) producing either finely-tuned systems or infinite and exhaustively random ensembles of possibilities. Yet we do have extensive experience of intelligent agents producing finely-tuned machines such as Swiss watches. Thus, Collins concludes, the postulation of a supermind (God) to explain the fine-tuning of the universe constitutes a natural extrapolation from our experience-based knowledge of the causal powers of intelligent agency, whereas the postulation of multiple universes lacks a similar basis.

Third, as Craig shows, for the many-worlds hypothesis to suffice as an explanation for anthropic fine-tuning, there must exist an exhaustively random distribution of physical parameters and thus an infinite number of parallel universes to ensure that a life-producing combination of factors will eventually arise. Yet neither of the physical models that allow for a multiple-universe interpretation (Everett's quantum mechanical model or Linde's inflationary cosmology) provides a compelling justification for believing in such an exhaustively random and infinite number of parallel universes, but instead only a finite and non-random set (Craig 1996: 24).

Fourth, Swinburne argues that the theistic design hypothesis constitutes a simpler and less ad hoc hypothesis than MWH (1990: 154-73). He notes that virtually the only evidence for many worlds is the very anthropic fine-tuning the hypothesis was formulated to explain. On the other hand, the theistic design hypothesis, though also supported by indirect evidences, can explain many separate and independent features of the universe that a many-worlds scenario cannot, including the origin of the universe itself, the mathematical beauty and elegance of physical laws, and personal religious experience. Swinburne argues that the God hypothesis constitutes a simpler as well as a more comprehensive explanation in that it requires the postulation of only one explanatory entity, rather than multiple entities including the finely-tuned universe generator and the infinite number of causally separate universes required by MWH. 

Swinburne's and Collins's arguments suggest that few reasonable people would accept such an unparsimonious and far-fetched explanation in any other domain of life. That some scientists dignify MWH with serious discussion may speak more to an unimpeachable commitment to naturalistic philosophy than to any compelling merit for the idea itself. As Longley noted in the London Times in 1989, the use of MWH to avoid the theistic design argument often seems to betray a kind of special pleading and metaphysical desperation. In his view, the anthropic design argument

and what it points to is of such an order of certainty that in any other sphere of science, it would be regarded as settled. To insist otherwise is like insisting that Shakespeare was not written by Shakespeare because it might have been written by a billion monkeys sitting at a billion keyboards typing for a billion years. So it might. But the sight of scientific atheists clutching at such desperate straws has put new spring in the step of theists (Longley 1989: 10). 

Indeed, it has. As the twentieth century comes to a close, the design argument has re-emerged from its premature retirement at the hands of biologists in the nineteenth century. Physics, astronomy, cosmology, and chemistry have each revealed that life depends on a very precise set of design parameters, which, as it happens, have been built into our universe. The fine-tuning evidence has led to a persuasive reformulation of the design argument, though not a formal deductive proof of God's existence. As a result, physicist John Polkinghorne relates that "we are living in an age where there is a great revival of natural theology taking place. That revival of natural theology is taking place not on the whole among theologians, who have lost their nerve in that area, but among the scientists" (1996: 16). Polkinghorne also notes that this revived natural theology generally has more modest ambitions than the natural theology of the Middle Ages. Nevertheless, his statement suggests that a profound intellectual shift has begun to take place as physics and related disciplines reveal new evidence that appears to support theistic belief.

Evidence of Intelligent Design in Biology

Despite renewed interest in the design hypothesis among physicists and cosmologists, many biologists have long remained reluctant to consider such notions. Indeed, since the late nineteenth century, biologists have mostly rejected the idea that biological organisms manifest evidence of intelligent design. While many acknowledge the appearance of design in biological systems, they insist that purely naturalistic mechanisms such as natural selection acting on random variations can give a full account of how this appearance arose. 

Molecular Machines

Nevertheless, the rumblings about design have begun to spread to biology. In 1998, for example, the leading journal, Cell, featured a special issue on Macromolecular Machines. Molecular machines are incredibly complex devices that all cells use to process information, build proteins, and move materials back and forth across their membranes. Bruce Alberts, President of the National Academy of Sciences, introduced this issue with an article entitled "The Cell as a Collection of Protein Machines." In his article, Alberts admits that:

We have always underestimated cells . . . . The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines . . . Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts (1998: 291).

Alberts notes that molecular machines strongly resemble machines designed by human engineers, although as an orthodox neo-Darwinist he denies any role for actual, as opposed to apparent, design in the origin of these systems. 

In recent years, however, a formidable challenge to this view has arisen within biology. In Darwin's Black Box (1996), Lehigh University biochemist Michael Behe shows that neo-Darwinists have failed to explain the origin of complex molecular machines in living systems. For example, Behe looks at the acid-powered rotary engines that turn the whip-like flagella of certain bacteria (1996: 51-73). He shows that the intricate machinery in this molecular motor, including a rotor, a stator, O-rings, bushings, and a drive shaft, requires the coordinated interaction of some forty complex protein parts. Yet the absence of any one of these proteins would result in the complete loss of motor function. To assert that such an irreducibly complex engine emerged gradually in a Darwinian fashion strains credulity. Natural selection selects functionally advantageous systems. Yet motor function only ensues after all necessary parts have independently self-assembled, an astronomically improbable event. Hence, Behe insists that Darwinian mechanisms cannot account for the origin of molecular motors and other irreducibly complex systems that require the coordinated interaction of multiple independent protein parts.

To emphasize his point, Behe conducted a literature search of relevant technical journals (1996: 165-86). He found a complete absence of gradualist Darwinian explanations for the origin of the systems and motors that he discusses. Behe concludes that neo-Darwinists have not explained, nor, in most cases, even attempted to explain, how the appearance of design in irreducibly complex systems arose naturalistically. In fact, we know of only one cause sufficient to produce functionally integrated, irreducibly complex systems, namely, intelligent design. Whenever we encounter irreducibly complex systems and we know how they arose, invariably a designer played a causal role. Thus, Behe concludes on strong uniformitarian grounds that the molecular machines and complex systems we observe in cells must have also had an intelligent source. In brief, molecular motors appear designed because they were designed.

Complex Specificity of Cellular Components

Other developments in biology reinforce Behe's argument. The molecular machines that Behe examines inside the cell are built from smaller components known as proteins. In addition to building motors and other biological structures, proteins perform the vital biochemical functions (information processing, metabolic regulation, signal transduction) necessary to maintain cellular life. Biologists, from Darwin's time to the late 1930s, assumed that proteins had simple, regular structures explicable by reference to mathematical laws. Beginning in the 1950s, however, biologists made a series of discoveries that caused this simplistic view of proteins to change. Molecular biologist Fred Sanger, for example, determined the sequence of constituents in the protein molecule insulin. Sanger's work showed that proteins are made of long, non-repetitive sequences of amino acids, rather like an irregular arrangement of colored beads on a string (Sanger & Tuppy 1951; Sanger & Thompson 1953). Later in the 1950s, work by John Kendrew on the structure of the protein myoglobin showed that proteins also exhibit a surprising three-dimensional complexity. Far from the simple structures that biologists had imagined, Kendrew's work revealed an extraordinarily complex and irregular three-dimensional shape: a twisting, turning, tangled chain of amino acids. As Kendrew explained in 1958, "the big surprise was that it was so irregular . . . the arrangement seems to be almost totally lacking in the kind of regularity one instinctively anticipates, and it is more complicated than has been predicted by any theory of protein structure" (1958: 664).

During the 1950s, scientists realized that proteins possess another remarkable property. In addition to their complexity, proteins also exhibit specificity, both as one-dimensional arrays and as three-dimensional structures. Whereas proteins are built from rather simple chemical building blocks known as amino acids, their function, whether as enzymes, signal transducers, or structural components in the cell, depends crucially upon the complex but specific sequencing of these building blocks (Alberts 1983: 91-141). Molecular biologists like Francis Crick quickly likened this feature of proteins to a linguistic text. Just as the meaning (or function) of an English text depends upon the sequential arrangement of letters in a text, so too does the function of a polypeptide (a sequence of amino acids) depend upon its specific sequencing. Moreover, in both cases, slight alterations in sequencing can quickly result in loss of function.

In the biological case, the specific sequencing of amino acids gives rise to specific three-dimensional structures. This structure or shape in turn determines what function, if any, the amino acid chain can perform within the cell. For a functioning protein, its three-dimensional shape gives it a hand-in-glove fit with other molecules in the cell, enabling it to catalyze specific chemical reactions or build specific structures within the cell. Due to this specificity, one protein can usually no more substitute for another than one tool can substitute for another. A topoisomerase can no more perform the job of a polymerase than a hatchet can perform the function of a soldering iron. Proteins can perform functions only by virtue of their three-dimensional specificity of fit with other equally specified and complex molecules within the cell. This three-dimensional specificity derives in turn from a one-dimensional specificity of sequencing in the arrangement of the amino acids that form proteins.

Sequence Specificity of DNA

The complexity and specificity of proteins, both as one-dimensional arrays and three-dimensional structures, raised an important question. How did such complex, but specific, structures arise in the cell? This question recurred with particular urgency after Sanger revealed his results in the early 1950s. Clearly, proteins were too complex and functionally specific to arise by chance. Moreover, given their irregularity, it seemed unlikely that a general chemical law or regularity governed their assembly. Instead, as Jacques Monod recalled, molecular biologists began to look for some source of information within the cell that could direct the construction of these highly specific structures. To explain the presence of all that information in the protein, "you absolutely needed a code," as Monod would later explain (cited in Judson 1979: 611).

In 1953, James Watson and Francis Crick elucidated the structure of the DNA molecule (1953: 737-38). Soon thereafter, molecular biologists discovered how DNA stores the information necessary to direct protein synthesis. In 1955, Crick first proposed the sequence hypothesis, suggesting that the specificity of amino acids in proteins derives from the specific arrangement of chemical constituents in the DNA molecule (Judson 1979: 335-36). According to the sequence hypothesis, information on the DNA molecule is stored in the form of specifically arranged chemicals called nucleotide bases along the spine of DNA's helical strands. Chemists represent these four nucleotides with the letters A, T, G, and C (for adenine, thymine, guanine, and cytosine). By 1961, the sequence hypothesis had become part of the so-called central dogma of molecular biology as a series of brilliant experiments confirmed DNA's information-bearing properties.

As it turns out, specific regions of the DNA molecule called coding regions have the same property of sequence specificity or specified complexity that characterizes written codes, linguistic texts, and protein molecules. Just as the letters in the alphabet of a written language may perform a communication function depending upon their sequencing, so too may the nucleotide bases in DNA produce a functional protein depending upon their precise sequential arrangement. The nucleotide bases in DNA function in precisely the same way as symbols in a machine code or alphabetic characters in a book. In each case, the arrangement of the characters determines the function of the sequence as a whole. As Dawkins notes, "The machine code of the genes is uncannily computer-like" (1995: 10). Or, as Bill Gates avers, "DNA is like a computer program, but far, far more advanced than any software we've ever created" (1996: 228). In the case of a computer code, the specific arrangement of just two symbols (0 and 1) suffices to carry information. In the case of an English text, the 26 letters of the alphabet do the job. In the case of DNA, the complex but precise sequencing of the four nucleotide bases (A, T, G, and C) stores and transmits genetic information, information that finds expression in the construction of specific proteins.
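
To make the point about sequencing concrete, the toy sketch below (added for illustration; it is not from the original essay and includes only a handful of the 64 codons in the actual genetic code) translates a short DNA coding sequence three bases at a time. Rearranging the same bases yields a different amino acid chain, which is the sense in which the arrangement, rather than the chemistry, carries the information.

```python
# Toy illustration: cells read coding regions three bases (one codon) at a
# time, and each codon specifies one amino acid. This lookup table is a tiny
# subset of the real 64-codon genetic code, included only for illustration.
GENETIC_CODE_SUBSET = {
    "ATG": "Met",  # methionine; also the usual "start" signal
    "TGG": "Trp",  # tryptophan
    "AAA": "Lys",  # lysine
    "GAA": "Glu",  # glutamate
    "TTT": "Phe",  # phenylalanine
    "GGC": "Gly",  # glycine
    "TAA": "STOP",
    "TGA": "STOP",
}

def translate(coding_sequence: str) -> list[str]:
    """Translate a DNA coding sequence codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(coding_sequence) - 2, 3):
        codon = coding_sequence[i:i + 3]
        amino_acid = GENETIC_CODE_SUBSET.get(codon, "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

# The same bases in a different order specify a different product:
print(translate("ATGAAAGAATGGTAA"))  # ['Met', 'Lys', 'Glu', 'Trp']
print(translate("ATGGAATGGAAATAA"))  # ['Met', 'Glu', 'Trp', 'Lys']
```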

Developments in molecular biology have raised the question of the ultimate origin of the specific sequencing (the information content3) in both DNA and proteins. They have also created severe difficulties for all strictly naturalistic theories of the origin of the first cellular life. Since the late 1920s, naturalistically-minded scientists have sought to explain the origin of the very first life as the result of a completely undirected process of chemical evolution. In The Origin of Life (1938), Alexander I. Oparin, like other chemical evolutionary theorists, envisioned life arising by a slow process of transformation starting from simple chemicals on the early earth. Unlike Darwinism, which sought to explain the origin and diversification of new and more complex living forms from simpler, pre-existing forms, chemical evolutionary theory seeks to explain the origin of the very first cellular life. Yet since the late 1950s, naturalistic chemical evolutionary theories have been unable to account for the origin of the specified complexity or information content (among many other problems) necessary to build a living cell (Dose 1988; Yockey 1992; Thaxton 1992).

Chance-based models of chemical evolution have failed, since the amount of specified information present in even a single protein or gene (a section of DNA for building a single protein) typically exceeds the probabilistic resources of the entire universe (Dembski 1998a: 203-17; Meyer in Dembski 1998b: 124-26; Yockey 1992: 246-58). Models based upon pre-biotic natural selection have failed, since they presuppose the existence of a self-replicating system (Meyer in Dembski 1998b: 126-28). Yet this in turn presupposes the presence of information-rich DNA and protein molecules, the very entities that require explanation in the first place. Finally, self-organizational models have failed, since the information content of DNA defies explanation by reference to the physical and chemical properties of its constituent parts (Meyer in Dembski 1998b: 128-34). Just as the chemistry of ink does not explain the origin of the specific sequencing of letters in a newspaper headline, so too the properties of the chemical constituents of the DNA text (the four nucleotide bases) do not explain the specific sequencing of the genetic text. As Michael Polanyi put it: "As the arrangement of a printed page is extraneous to the chemistry of the printed page, so is the base sequence in a DNA molecule extraneous to the chemical forces at work in the DNA molecule" (1968: 1309).
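
As a rough, back-of-the-envelope illustration of the scale involved (the specific figures below are assumptions chosen for illustration, not numbers from the essay): for a protein 150 amino acids long, drawn from the 20-letter amino acid alphabet and ignoring any functional redundancy among sequences, the number of possible sequences is

```latex
20^{150} \;=\; 10^{\,150\,\log_{10} 20} \;\approx\; 10^{195}
```

so any one particular sequence would have a chance probability on the order of one in 10^195 under a uniform-chance model, which is the kind of calculation behind the claim that such information exceeds the available probabilistic resources.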

DNA By Design 

Instead, the presence of specified information in DNA suggests a source extrinsic to physics and chemistry. When one seeks the source of the information in this morning's newspaper or in an ancient inscription, one comes ultimately to a writer or a scribe. When a computer user traces the information on a screen back to its source, he invariably comes to a mind: a writer, software engineer, or programmer. If, as Gates states, DNA is similar to a software program (in its information content) but more complex, it makes sense to infer that it too had an intelligent source. Though DNA is similar to a computer program, the case for its design does not depend upon mere resemblance. Classical design arguments in biology typically sought to draw analogies between whole organisms and machines based upon certain similar features that each held in common. These arguments sought to reason from similar effects back to similar causes. The status of such design arguments thus turned on the degree of similarity that actually obtained between the effects in question. Yet since even advocates of these classical arguments admitted dissimilarities, as well as similarities, the status of these arguments always appeared uncertain. Advocates would argue that the similarities between organisms and machines outweighed dissimilarities. Critics would claim the opposite. The design argument from the information in DNA does not depend upon such analogical reasoning, since it does not depend upon claims of similarity (cf. Sober 1993: 26-47). Rather, the coding regions of DNA have the very same property of sequence specificity, or information content, that computer codes and linguistic texts do. Though DNA does not possess all the properties of natural language or semantic information, that is, information that is subjectively meaningful to human agents, it does have precisely those properties that jointly implicate a prior intelligence.

As William Dembski shows in The Design Inference (1998a), systems or sequences that have the joint properties of high complexity and specification invariably result from intelligent causes, not chance or physical-chemical necessity. Complex sequences are those that exhibit an irregular and improbable arrangement that defies expression by a simple rule or algorithm. A specification, on the other hand, is a match or correspondence between a physical system or sequence and a set of independent functional requirements or constraints. As it turns out, the base sequences in the coding regions of DNA are both highly complex and specified. The sequences of bases in DNA are highly irregular, non-repetitive, and improbable and, therefore, also complex. Moreover, the coding regions of DNA exhibit sequential arrangements of bases that are necessary (within certain tolerances) to produce functional proteins; that is, they are highly specified with respect to the independent requirements of protein function and protein synthesis (Thaxton & Bradley in Moreland 1994; Thaxton 1992; Yockey 1992). Thus, as nearly all molecular biologists now recognize, the coding regions of DNA possess a high information content, where information content in a biological context means precisely complexity and specificity. 
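
The following toy sketch (my own construction, not Dembski's formal apparatus or any published algorithm) is offered only to make the two joint criteria explicit in a checkable form: "complexity" is proxied here by improbability under a uniform chance hypothesis together with the absence of a short repeating rule, and "specification" by a match against an independently stated set of functional requirements. The thresholds are arbitrary illustrative values.

# Toy illustration of the joint criteria of complexity and specification.
from math import log2

def complexity_bits(seq: str, alphabet_size: int = 4) -> float:
    """Improbability (in bits) of the sequence under a uniform chance hypothesis."""
    return len(seq) * log2(alphabet_size)

def is_repetitive(seq: str, max_period: int = 10) -> bool:
    """True if the sequence is just a short motif repeated (i.e., generated by a simple rule)."""
    return any(seq == (seq[:p] * (len(seq) // p + 1))[:len(seq)]
               for p in range(1, max_period + 1))

def looks_complex(seq: str, threshold_bits: float = 100.0) -> bool:
    """Complex = highly improbable under chance and not reducible to a short repeating rule."""
    return complexity_bits(seq) >= threshold_bits and not is_repetitive(seq)

def is_specified(seq: str, functional_sequences: set) -> bool:
    """Specified = the sequence meets an independently stated functional requirement."""
    return seq in functional_sequences

def exhibits_specified_complexity(seq: str, functional_sequences: set) -> bool:
    return looks_complex(seq) and is_specified(seq, functional_sequences)

# A long but repetitive sequence fails the complexity test under this crude proxy:
print(looks_complex("ACGT" * 50))   # False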

Therefore, the design argument from information content in DNA does not depend upon analogical reasoning, since it does not depend upon assessments of degree of similarity. The argument does not depend upon the similarity of DNA to a computer program or human language, but upon the presence of an identical feature (information content, defined as complexity and specification) in both DNA and all other designed systems, languages, or artifacts. While a computer program may be similar to DNA in many respects, and dissimilar in others, it exhibits a precise identity to DNA in its ability to store information content. As such, this argument does not represent an argument from analogy of the sort that Hume criticized, but an inference to the best explanation. Such arguments turn not on assessments of the degree of similarity between effects, but instead on an assessment of the adequacy of competing possible causes for the same effect. Since we know intelligent agents can (and do) produce functionally specified sequences of symbols or arrangements of matter (information content), intelligent agency qualifies as a sufficient causal explanation for the origin of this effect. And since, in addition, naturalistic scenarios have proven universally inadequate for explaining the origin of information content, mind or creative intelligence now stands as the only known cause, and thus the best explanation, for this feature of living systems. 

Indeed, experience teaches that whenever we encounter specified complexity or high information content in an artifact or entity whose causal story is known, creative intelligence (intelligent design) has invariably played a causal role in the origin of that entity. In brief, since experience suggests that intelligent design is an empirically necessary cause of an information-rich system (the only cause known to be capable of producing the effect), one can detect (or, logically, retrodict) the past action of an intelligent cause from the presence of such an effect, even if the cause itself cannot be directly observed (Meyer in Moreland 1994). The specified pattern of red and yellow flowers spelling "Welcome to Victoria" in the gardens of Victoria Harbor in Canada leads visitors to infer the activity of intelligent agents (gardeners), even if they did not see the flowers planted and arranged. The arrangement of symbols on the Rosetta Stone led archeologists to infer the work of scribes, though archeologists could make no direct observations of them working. Similarly, the specifically arranged nucleotide sequences (the information content) in DNA suggest the past action of an intelligent mind, even if such mental agency cannot be directly observed. Intelligent agents have unique causal powers that nature does not. When we observe effects that we know only agents can produce, we rightly infer the antecedent presence of a prior intelligence, even if we did not observe the action of the particular agent responsible. Since DNA displays precisely an effect (information content) that, in our experience, only agents can produce, intelligent design, not merely apparent design, stands as the best explanation for the information content (or specified complexity) in DNA. 

Reconceptualizing Epistemic Support

Despite the rather dramatic developments in cosmology and biology during the twentieth century, many scientists and theologians remain reluctant to revise their understanding of the relationship between science and Christian belief. True, perhaps fewer scientists today than in the late nineteenth century would assert that science and Christianity stand in overt conflict. Yet many scientists and theologians still deny that science can provide evidential or epistemic support for Christian or theistic belief. Instead, they express skepticism about what they see as a return to the failed natural theology of the nineteenth century or rationalistic attempts to prove the existence of God. They point out, perhaps rightly, that neither the evidence for a cosmological singularity nor that for intelligent design in physics and biology can prove God's existence. Thus, many Christian theologians and scientists continue to affirm the strict neutrality of science and deny that science does (or can) support theistic or Christian belief. 

Consider the view of Ernan McMullin, a prominent philosopher of science and theologian at the University of Notre Dame. McMullin explicitly denies that the Big Bang theory provides any evidential support for Christian theism, though he admits that if one assumed the Christian doctrine of Creation, one might expect to find evidence for a beginning to time: "What one could say . . . is that if the universe began in time through the act of a Creator, from our vantage point it would look something like the Big Bang that cosmologists are talking about. What one cannot say is . . . that the Big Bang model 'supports' the Christian doctrine of Creation" (1981: 39). 

Deduction and the Logic of Entailment

Many philosophers, scientists, and theologians assume that scientific evidence (A) can provide epistemological support for, or grounds for believing, a theological proposition (B) only if the latter (B) follows from evidence (A) with deductive certainty. They assume that to succeed in providing epistemic support for God's existence, or other propositional commitments of theism, arguments must necessarily take a deductive logical form such as: 

If A, then B
A_________
Therefore B 

Of course, many arguments for God's existence were framed in precisely such a deductive manner. Recall, for example, the classic statement of the kalam cosmological argument for God's existence (Craig 1994: 92): 

Whatever begins to exist has a cause
The universe began to exist
Therefore, the universe has a cause of its existence.

Such deductive arguments utilize the standard modus ponens logical form. Thus, they are logically valid. If the premises of such arguments are true, and can be known to be true with certainty, then the conclusion follows with certainty as well. In such arguments, logicians say the premises entail the conclusions. Of course, finding premises that can be known to be true with certainty can be very difficult, especially for an empirically-based inquiry such as natural science. Many deductive arguments for God's existence failed for exactly this reason. Nevertheless, deductive entailment from true premises does constitute a perfectly legitimate, if infrequently attained, form of epistemic support. If (A) logically compels (B), then it is irrational to deny (B) if one affirms (A). In such cases, (A) clearly provides support for (B) (Dembski & Meyer 1998: 418-22). Even so, deductive entailment involves a far stronger notion of support than empirical science requires. Scientists rarely prove their theories deductively from empirical evidence. Indeed, no field of inquiry short of mathematics could progress if it limited itself to the logic of entailment. Rather, most fields of inquiry employ alternate forms of inference known variously as the method of hypothesis, abduction, the hypothetico-deductive method, or inference to the best explanation. 

Abduction and the Logic of Confirmation of Hypothesis

During the nineteenth century, C. S. Peirce, a logician, described the modes of inference used to derive conclusions from data (1931, 2: 375). Peirce noted that in addition to deductive arguments, we often employ a mode of logic he called abduction or the method of hypothesis. To see the difference between these two types of inference, consider the following argument schemata: 

DEDUCTIVE: 

DATA: A is given and plainly true.
LOGIC: But if A is true, then B is a matter of course.
CONCLUSION: Hence, B must be true as well. 

ABDUCTIVE:

DATA: The surprising fact A is observed.
LOGIC: But if B were true, then A would be a matter of course.
CONCLUSION: Hence, there is reason to suspect that B is true. 

In the logic of the deductive schema, if the premises are true, the conclusion follows with certainty. The logic of the abductive schema, however, does not produce certainty, but instead plausibility or possibility. Unlike deduction, in which the minor premise affirms the antecedent variable (A), abductive logic affirms the consequent variable (B). In deductive logic, affirming the consequent variable (with certainty) constitutes a fallacy, a fallacy that derives from the failure to acknowledge that more than one antecedent might explain the same evidence. To see why, consider the following argument: 

If it rains, the streets will get wet,
the streets are wet,
therefore it rained.

or symbolically:

If R, then W
W________
therefore R. 

Obviously, this argument has a problem as it stands. It does not follow that because the streets are wet, it necessarily rained. The streets may have gotten wet in some other way. A fire hydrant may have burst, a snow bank may have melted, or a street sweeper may have doused the street before beginning his cleaning operation. Nevertheless, that the streets are wet might indicate that it has rained. Thus, amending the argument as follows does not commit the fallacy:

If it rains, then we would expect the streets to get wet,
the streets are wet,
therefore perhaps it rained.

or symbolically:

If R, then W
W_______
perhaps R. 

As the above shows, even if one may not affirm the consequent with certainty, one may affirm it as a possibility. And this is precisely what abductive logic does. It provides a reason for considering that a hypothesis might be true. Indeed, it gives a reason for believing a hypothesis, even if one cannot affirm the hypothesis (or conclusion) with certainty. 

The natural and historical sciences employ such logic routinely. In the natural sciences, if we have reason to expect that some state of affairs will ensue given some hypothesis, and we find that such a state of affairs has ensued, then we say that our hypothesis has been confirmed. This method of confirmation of hypothesis functions to provide evidential support for many scientific hypotheses. Given Copernicus' heliocentric theory of the solar system, astronomers in the seventeenth century had reason to expect that the planet Venus should exhibit phases. Galileo's discovery that it does exhibit phases, therefore, supported (though it did not prove) the heliocentric view. The discovery did not prove the heliocentric theory, since other theories might, and in fact could, explain the same fact (Gingerich 1982: 133-43). 

Peirce acknowledged that abductive inferences on their own may constitute a rather weak form of epistemic support: "As a general rule [it] is a weak kind of argument. It often inclines our judgment so slightly toward its conclusion that we cannot say that we believe the latter to be true; we only surmise that it may be so" (1931, 2: 375). Yet, as a practical matter, Peirce acknowledged that abduction often yields conclusions that are difficult to doubt, even if they lack the airtight certainty that accompanies the logic of deduction. For instance, Peirce argued that skepticism about Napoleon's existence was unjustified although his existence could be known only by abduction: "Numberless documents refer to a conqueror called Napoleon Bonaparte. Though we have not seen the man, yet we cannot explain what we have seen, namely, all these documents and monuments, without supposing that he really existed" (1931, 2: 375). Thus, Peirce suggested that by considering the explanatory power of a hypothesis, the logic of abduction might underwrite more robust relations of epistemic support. 

Inference to the Best Explanation 

Since Peirce's time, philosophers of science have refined his abductive logic to show how abductive inferences (or confirmation of hypothesis) can provide a stronger form of epistemic support. The abductive framework of logic employed by natural scientists and others often provides a weak form of epistemic support, since it leaves open many possible explanations for the same evidence. Philosophers of science have recognized that this situation often forces scientists to evaluate the explanatory power of competing possible hypotheses. This method, alternatively called the method of multiple working hypotheses (Chamberlin 1965) or inference to the best explanation (Lipton 1991; Sober 1993), often reduces, at least for practical purposes, the uncertainty or underdetermination associated with abductive inference. In this method of reasoning, the explanatory or predictive virtues of a potential hypothesis determine which among a competing set of possible explanations constitutes the best (Lipton 1991; Scriven 1959; Brush 1989). Scientists infer whichever hypothesis among a competing group would, if true, provide the best explanation of some set of relevant data. True, both an earthquake and a bomb could explain the destruction of a building, but only the bomb can explain the presence of charring and shrapnel amid the rubble. Earthquakes do not produce shrapnel or cause charring, at least not on their own. 

This example suggests that considerations of causal adequacy often determine which among a set of possible explanations will constitute the best. Indeed, the method of inference to the best explanation suggests that determining which among a set of competing possible explanations constitutes the best depends upon assessments of the causal powers of competing explanatory entities (Lipton 1991; Meyer in Moreland 1994). Entities or events that have the capability to produce the evidence in question constitute better explanations of that evidence than those that do not. It follows that the process of determining the best explanation often involves generating a list of possible hypotheses, comparing their known (or theoretically plausible) causal powers with respect to the relevant data, and progressively eliminating potential but inadequate explanations. Of course, in some situations more than one hypothesis may serve as an adequate explanation for a given fact. Typically in such situations scientists expand their evaluation to include an ensemble of relevant data in order to discriminate between the explanatory power of various abductive hypotheses (Meyer 1990: 99-108). 
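
The eliminative procedure just described can be made explicit with a small sketch. The example data below (an earthquake versus a bomb, with charring and shrapnel among the observations) follow the illustration given above; the encoding of causal powers as sets of producible effects is my own simplification, offered only to show the shape of the reasoning, not a published method.

# Toy sketch of inference to the best explanation by progressive elimination:
# a hypothesis survives only if its known causal powers cover all the observations.
CAUSAL_POWERS = {
    "earthquake": {"collapsed building"},
    "bomb": {"collapsed building", "charring", "shrapnel"},
}

def causally_adequate(observations: set, causal_powers: dict) -> list:
    """Return the hypotheses whose known effects account for every observation."""
    return [hypothesis for hypothesis, effects in causal_powers.items()
            if observations <= effects]

observed = {"collapsed building", "charring", "shrapnel"}
print(causally_adequate(observed, CAUSAL_POWERS))   # ['bomb']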

Inference to the best explanation (IBE) as a method of reasoning has a number of advantages over either deduction or simple abduction. First, IBE can provide a strong form of epistemic support without having to achieve the often unrealistic standard of deductive certainty. If the logic of confirmation provides a weak form of epistemic support by suggesting a reason for believing that a hypothesis might be true, then the logic of comparative explanatory power (the method of IBE) can provide a stronger form of support by giving a reason for preferring a possibly true hypothesis over all other competitors. As Peirce noted in his discussion of the evidence for Napoleon, circumstantial evidences may establish an inference beyond reasonable doubt, even if the abductive form of argument cannot categorically exclude other logical possibilities. 

Second, in discussions of reason (or science) and faith, IBE provides a way of avoiding fideism (belief without justification, or faith in faith alone) on the one hand, and a return to strict rationalism on the other. If, as both rationalists and fideists assume, deductive proofs provide the only way to support a Christian worldview, then if such proofs fail, fideism or skepticism stands as the only alternative. If, however, scientific or other evidences suggest theism as a better explanation than competing metaphysical systems or worldviews, then one can affirm an evidential basis for theistic belief without embracing the failed rationalism of the past. 

Theism as Inference to the Best Explanation

With confirmation of hypothesis and explanatory power, rather than deductive entailment, constituting epistemic support, we can now see how developments in modern science provide support for Christian theism. Curiously, in the very passage in which he denies that the Big Bang model supports the Christian doctrine of Creation, McMullin suggests this very possibility: "If the universe began in time through the act of a Creator . . . it would look something like the Big Bang that cosmologists are talking about" (1981: 39). But does this not simply mean that if we assume the Christian doctrine of Creation (or theism) as a kind of metaphysical hypothesis, then the Big Bang is the kind of cosmological theory we have reason to expect? As Arno Penzias states, "the best data we have (concerning the Big Bang) are exactly what I would have predicted had I nothing to go on but the first five books of Moses, the Psalms and the Bible as a whole" (cited in Browne 1978: 54). But again, does not this statement, and McMullin's, imply that the Big Bang theory provides a kind of confirmation of the Judeo-Christian understanding of Creation, and with it a theistic worldview? The previous discussion of confirmation would certainly seem to suggest as much. Explicating the above statements as an abductive syllogism helps to explain why: 

If theism and the Judeo-Christian view of Creation are true, then we have reason to expect evidence of a finite universe,
We have evidence of a finite universe,
therefore, theism and the Judeo-Christian view of Creation may be true. 

This syllogism suggests that the Big Bang theory functions to confirm the metaphysical hypothesis of theism in much the same way that empirical observations confirm scientific theories. It follows that the Big Bang does provide epistemic support for theism at least in this limited way. Yet the Big Bang theory may provide an even stronger form of epistemic support. Metaphysics offers a multitude of competing explanations for the nature and origin of the material universe, everything from naturalism to pantheism, deism, and theism. Let us initially compare the explanatory power of theism and naturalism, perhaps the two most influential worldviews in the West. 

First, theism, with its notion of a transcendent Creator, provides a more causally adequate explanation of the Big Bang singularity than a fully naturalistic explanation can offer. Since naturalism assumes that, in Carl Sagan's formulation, "the Cosmos is all that is, or ever was or ever will be" (1980: 4), naturalism denies the existence of any entity with the causal powers capable of explaining the origin of the universe as a whole. Since the Big Bang, in conjunction with general relativity, implies a singular beginning for matter, space, time, and energy (Hawking & Penrose 1970), it follows that any entity capable of explaining this singularity must transcend these four dimensions or domains. In so far as God, as conceived by Judeo-Christian theists, possesses precisely such transcendent causal powers, theism provides a better explanation than naturalism for the singularity affirmed by Big Bang cosmology. 

Theism also provides a better explanation for the origin of the universe than does pantheism, for much the same reason. Though a pantheistic worldview affirms the existence of an impersonal god, the god of pantheistic religions and philosophy exists within, and is co-extensive with, the physical universe. God as conceived by pantheists cannot act to bring the physical universe into being from nothing (physical), since such a god does not exist independently of the physical universe. If initially the physical universe did not exist, the pantheistic god would not exist either. If it did not exist, it could not be invoked to explain the origin of the universe from (physical) nothing. 

Many naturalists in effect admit the dissonance created by the Big Bang theory for their worldview. Einstein acknowledged it when he introduced his cosmological constant to maintain a static universe. Hoyle acknowledged it when he proposed his steady state theory to retain an eternal universe, despite that theory's flagrant violation of the conservation of energy. Sir Arthur Eddington acknowledged it when he refused to consider the Big Bang theory due to its philosophical repugnance (1931: 450). Of course, most contemporary naturalists now reject these earlier responses. Many claim to have resolved the dissonance by coupling Big Bang cosmology to more speculative quantum cosmologies or many-worlds hypotheses. Yet, ironically, to the extent that even these cosmological ideas may have validity, they themselves may also have latent theistic implications (Craig 1996: 26-27). In any case, if the universe is finite, as the Big Bang and general relativity affirm, at least on the most straightforward rendering of each, then these theories provide confirmation and epistemic support to the metaphysical hypothesis of theism. Further, theism provides a better, more causally adequate explanation for the evidence of a finite universe than its main metaphysical competitors. Hence, if we explicate epistemic support in terms of confirmation of hypothesis or explanatory power (rather than deductive entailment), the Big Bang theory provides support for theism, and indeed for a Judeo-Christian understanding of Creation. 

Of course, the evidence for the Big Bang alone may not provide support for the other attributes of God. While the Big Bang seems best explained by a transcendent cause, it may not, by itself, imply an intelligent or rational cause. Yet this alone does not diminish the epistemic support that the Big Bang theory provides for aspects of theistic belief, namely, theism's affirmation of a finite universe and a specifically transcendent Creator. Other types of scientific evidence may provide support for other attributes of a theistic God, or even other aspects of Biblical teaching. 

Physics and cosmology suggest intelligent design as a highly plausible, and arguably the best, explanation for the exquisite fine-tuning of the physical laws and constants of the universe and the precise configuration of its initial conditions. Since the fine-tuning and initial conditions date from the very origin of the universe itself, this evidence suggests the need for an intelligent as well as a transcendent Cause for the origin of the universe. Since God as conceived by Christians and other theists possesses precisely these attributes, His creative action can adequately explain the origin of the cosmological singularity and the anthropic fine-tuning. Since naturalism denies a transcendent and pre-existent intelligent cause, it follows that theism provides a better explanation than naturalism for these two evidences taken jointly. Since pantheism, with its belief in an immanent and impersonal god, also denies the existence of a transcendent and pre-existent intelligence, it too lacks causal adequacy as an explanation for these evidences. Indeed, a completely impersonal intelligence is almost a contradiction in terms. Thus, theism stands as the best explanation, among the three major worldviews (theism, pantheism, and naturalism), for the origin of the Big Bang singularity and the anthropic fine-tuning taken jointly. 

Admittedly, theism, naturalism, and pantheism are not the only worldviews that can be offered as metaphysical explanations for the three classes of evidence. Deism, for example, can, like theism, explain the cosmological singularity and the anthropic fine-tuning. Like theism, deism conceives of God as both a transcendent and intelligent Creator. Nevertheless, deism denies that God has continued to participate in His Creation, either as a sustaining presence or as an actor within Creation after the origin of the universe. Thus, deism would have difficulty accounting for any evidence of discrete acts of design or creation during the history of the cosmos (that is, after the Big Bang). Yet precisely such evidence now exists in the biological realm. 

Current fossil evidence puts the origin of life on earth at 3.5-3.8 billion years ago, clearly well after the origin of the universe. If the presence of a high information content in the cell provides compelling evidence for the intelligent design of the first life, then that suggests the need for an act of creative intelligence, or a period of creative activity, well after the Big Bang. One could argue against this by asserting that the information necessary to build life was present in the initial configuration of matter at the Big Bang. Yet the implausibility of such a view can be clearly demonstrated empirically (Meyer 1999: 92-97). On the other hand, theism can explain the origin of biological information as the result of God's creative activity (within a natural order that He otherwise sustains) at some point after His initial Creation. In contrast, deism cannot account for evidence of creation or design after the Big Bang, since it stipulates that God (the "absentee landlord") chose not to involve Himself in the events or workings of the universe He created. 

Interestingly, some philosophical naturalists postulate an immanent intelligence as an explanation for the origin of the first life on earth. Thus, Crick (1981) and Hoyle and Wickramasinghe (1981) propose so-called directed panspermia models. These suggest that life was intelligently designed (or "seeded") by an intelligence within the cosmos (a space alien or extraterrestrial agent) rather than by a transcendent, intelligent God. Their proposal thus suggests that even if the origin of life cannot be accounted for by a naturalistic process of chemical evolution, it can be explained by reference to a purely natural intelligence within the cosmos. This explanation does not revive naturalism as an adequate metaphysical explanation for biological design, however, since no naturalistic explanation can account for the ultimate origin of high information content. Instead, it presupposes that if naturalism could give an account of the origin of the specified information required to make life somewhere, it might also be able to explain the origin of life at a specific time on earth. Yet naturalistic theories have failed precisely to explain the origin of the specified information content necessary for life's origin. Thus, explaining the origin of life by reference to other life, albeit intelligent and extraterrestrial, only begs the question of the ultimate origin of life somewhere within the cosmos. In any case, naturalism has difficulty explaining other relevant evidences, such as the cosmological singularity and anthropic fine-tuning, as adequately or coherently as theism does. 

In 1992, historian of science Frederic Burnham stated that the God hypothesis "is now a more respectable hypothesis than at any time in the last one hundred years" (cited in Briggs 1992: B6). Burnham's comment came in response to the COBE satellite measurements of the cosmic background radiation, which provided yet another dramatic confirmation of Big Bang cosmology. Yet it is not only cosmology that has rendered the God hypothesis respectable again. As one surveys several classes of evidence from the natural sciences (cosmology, physics, biochemistry, and molecular biology), theism emerges as a worldview with extraordinary explanatory scope and power. Theism explains a wide ensemble of metaphysically-significant scientific evidences and theoretical results more simply, adequately, and comprehensively than other major competing worldviews or metaphysical systems. This does not, of course, prove God's existence, since superior explanatory power does not constitute deductive certainty. It does suggest, however, that the natural sciences now provide strong epistemological support for the existence of God as affirmed by both a theistic and Judeo-Christian worldview. 

Notes

1 Recent measurements showing that the universe may be accelerating in its expansion have resuscitated discussions of the cosmological constant. These measurements seem to require some kind of repulsive force in opposition to gravitation in order to explain the acceleration. These data do not provide any new support for a static or infinite universe, however. They suggest instead a repulsive force now strong enough to accelerate the expansion and prevent any subsequent contraction from occurring, thus contradicting another infinite-universe cosmology, the oscillating universe model (Peebles 1999: 25-26; Wilford 1998: F1). 

2 Greenstein himself does not favor the design hypothesis, but rather the participatory universe principle ("PAP"), which attributes the apparent design of the fine-tuning of the physical constants to the universe's alleged need to be observed in order to exist (1988: 223). 

3 The term "information content" is used variously to denote both specified and unspecified complexity. Yet a sequence of symbols that is merely complex but not specified (such as "wnsgdtej3dmzcknvcnpd") would not necessarily indicate the activity of a designing intelligence. Thus, one might argue that design arguments based upon the presence of information commit a fallacy of equivocation by inferring design from a type of information (unspecified) that could result from random natural processes. One can eliminate this ambiguity, however, by defining information content as equivalent to the joint properties of complexity and specification, as it has been defined in biology since the late 1950s (Sarkar 1996). 

References

  • Alberts, Bruce. 1998. “The Cell as a Collection of Protein Machines: Preparing the Next Generation of Molecular Biologists.” Cell 92 (8 February): 291-94.
    _____, et al. 1983. Molecular Biology of the Cell. New York: Garland.
  • Ayala, Francisco J. 1994. Darwin's Revolution. In Creative Evolution, eds. John H. Campbell & J. William Schopf. New York: Jones & Bartlett: 4-5.
  • Barrow, John & Frank Tipler. 1986. The Anthropic Cosmological Principle.
    Oxford University Press.
  • Behe, Michael J. 1996. Darwin s Black Box. New York: Free Press.
  • Bondi, Hermann & Thomas Gold. 1948. The Steady-State Theory of the Expanding Universe. Monthly Notices of the Royal Astronomical Society (London) 108 (3): 252-70.
  • Briggs, David. 1992. Science, Religion Are Discovering Commonality in Big Bang Theory. Los Angeles Times (2 May): B6-7.
  • Browne, Malcolm W. 1978. Clues to Universe Origin Expected. New York Times (12 March): 54.
  • Brush, Stephen. 1989. Prediction and Theory Evaluation: The Case of Light Bending. Science 246 (8 December): 1124-27.
  • Chaisson, Eric & Steve McMillan. 1993. Astronomy Today. Englewood Cliffs, NJ: Prentice-Hall.
  • Chamberlin, Thomas C. 1965. The Method of Multiple Working Hypotheses. Science 148 (May): 754-59 [reprinted from Science 15 (1890): 92-96].
  • Coles, Peter & George Ellis. 1994. The Case for an Open Universe. Nature 370 (25 August): 609-13.
  • Collins, Robin. 1999. The Fine-Tuning Design Argument: A Scientific Argument for the Existence of God. In Reason for the Hope Within, ed. Michael Murray. Grand Rapids, MI: Eerdmans: 47-75.
  • Craig, William Lane. 1988. Barrow and Tipler on the Anthropic Principle v.
    Design. British Journal for the Philosophy of Science 38: 389-95.
    _____. 1994. Reasonable Faith. Wheaton, IL: Crossway Books.
    _____. 1996. Cosmos and Creator. Origins & Design 20 (2): 18-28.
  • Crick, Francis. 1981. Life Itself. New York: Simon & Schuster.
  • Darwin, Charles. 1968. The Origin of Species. London: Penguin.
  • Davies, Paul. 1983. God and the New Physics. New York: Simon & Schuster.
    _____. 1984. The Superforce. New York: Simon & Schuster.
    _____. 1988. The Cosmic Blueprint. New York: Simon & Schuster.
  • Dawkins, Richard. 1995. River Out of Eden. New York: Basic Books.
  • Dembski, William A. 1998a. The Design Inference. Cambridge, UK: Cambridge University Press.
    _____, ed. 1998b. Mere Creation. Downers Grove, IL: InterVarsity Press.
  • _____ & Stephen C. Meyer. 1998. Fruitful Interchange or Polite Chit-Chat? The Dialogue Between Science and Theology. Zygon 33 (3): 415-30. 
  • Denton, Michael. 1998. Nature s Destiny. New York: Free Press.
  • Dose, Klaus. 1988. The Origin of Life: More Questions Than Answers. Interdisciplinary Science Review 13 (4): 348-56.
  • Eddington, Arthur S. 1930. On the Instability of Einstein s Spherical World. Monthly Notices of the Royal Astronomical Society 90 (May): 668-78.
    _____. 1931. The End of the World: From the Standpoint of Mathematical Physics. Nature 127 (21 March): 450.
  • Einstein, Albert. 1915. Die Feldgleichungen der Gravitation. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften (25 November): 844-47.
    _____. 1916. Die Grundlage der allgemeinen Relativitätstheorie. Annalen der Physik (Leipzig) 49: 769-822.
    _____. 1917. Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften (8 February): 142-52.
  • Friedmann, Alexander. 1922. Über die Krümmung des Raumes. Zeitschrift für Physik (Berlin) 10: 377-86.
  • Gamow, George. 1946. Expanding Universe and the Origin of the Elements. Physical Review 70 (7/8): 572-73.
  • Gates, Bill. 1996. The Road Ahead. Boulder, CO: Blue Penguin.
  • Giberson, Karl W. 1997. The Anthropic Principle: A Postmodern Creation Myth? Journal of Interdisciplinary Studies IX (1/2): 63-90.
  • Gillespie, Neil. 1979. Charles Darwin and the Problem of Creation. Chicago, IL: University of Chicago Press.
  • Gingerich, Owen. 1982. The Galileo Affair. Scientific American (August): 133-43.
  • Greenstein, George. 1988. The Symbiotic Universe. New York: Morrow.
  • Gribbin, John & Martin Rees. 1991. Cosmic Coincidences. London: Black Swan.
  • Guth, Alan & Marc Sher. 1981. Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems. Physical Review 23 (2): 348.
  • _____. 1983. The Impossibility of a Bouncing Universe. Nature 302 (7 April): 505-7.
  • Halliday, David & Robert Resnick. 1978. Physics: Part Two. New York: Wiley.
  • Hawking, Stephen. 1988. A Brief History of Time. New York: Bantam.
  • _____ & Roger Penrose. 1970. The Singularities of Gravitational Collapse and Cosmology. Proceedings of the Royal Society of London A314: 529-48.
  • Hooykaas, Rejer. 1972. Religion and the Rise of Modern Science. Grand Rapids, MI: Eerdmans. 
  • Hoyle, Fred. 1948. A New Model for the Expanding Universe. Monthly Notices of the Royal Astronomical Society 108 (5): 372-82.
  • Hoyle, Fred. 1982. The Universe: Past and Present Reflections. Annual Review of Astronomy and Astrophysics 20: 16.
  • _____ & Chandra Wickramasinghe. 1981. Evolution From Space. London: Dent.
  • Hubble, Edwin. 1929. A Relation Between Distance and Radial Velocity Among
    Galactic Nebulae. Proceedings of the NAS 15: 168-73.
  • Hume, David. 1989. Dialogues Concerning Natural Religion. Buffalo: Prometheus.
  • Judson, Horace. 1979. The Eighth Day of Creation. New York: Simon & Schuster.
  • Kaiser, Christopher. 1991. Creation and the History of Science. Grand Rapids, MI: Eerdmans.
  • Kant, Immanuel. 1963. Critique of Pure Reason. London: Macmillan. 
  • Kendrew, John C., et al. 1958. A Three-Dimensional Model of the Myoglobin Molecule Obtained by X-Ray Analysis. Nature 181 (8 March): 662-66.
  • Kragh, Helge. 1993. The Steady State Theory. In Cosmology, ed. Norriss S. Hetherington. New York: Garland: 391-406. 
  • Leslie, John. 1982. Anthropic Principle, World Ensemble, Design. American Philosophical Quarterly 19 (2): 141-51.
  • Linde, Andrei. 1994. The Self-Reproducing Inflationary Universe. Scientific American 271 (November): 48-55.
  • Lipton, Peter. 1991. Inference to the Best Explanation. London: Routledge.
  • Longley, Clifford. 1989. Focusing on Theism. London Times (21 January): 10. 
  • McMullin, Ernan. 1981. How Should Cosmology Relate to Theology? In The Sciences and Theology in the Twentieth Century, ed. Arthur R. Peacocke. Notre Dame, IN: University of Notre Dame Press: 17-57.
  • Meyer, Stephen C. 1990. Of Clues and Causes: A Methodological Interpretation of Origin of Life Studies. University of Cambridge, UK: Ph.D. Thesis. 
  • _____. 1999. Teleological Evolution: The Difference It Doesn t Make. In Darwinism Defeated? ed. Robert Clements. Vancouver, BC: Regent: 89-100.
  • _____. 2000. The Demarcation of Science and Religion. In The History of Science and Religion in the Western Tradition: An Encyclopedia, eds. Gary Ferngren, et al. New York: Garland. 
  • Moreland, J. P., ed. 1994. The Creation Hypothesis. Downers Grove, IL: InterVarsity Press.
  • Newton, Isaac. 1959-77. The Correspondence of Isaac Newton. Vol. 3. Eds. Herbert W. Turnbull, et al. Cambridge: Cambridge University Press.
  • Oparin, Alexander I. 1938. The Origin of Life. New York: Macmillan.
  • Paine, Thomas. 1925. The Life and Works of Thomas Paine. Vol. 8: The Age of Reason. New Rochelle, NY: T. Paine National Historical Association.
  • Paley, William. 1852. Natural Theology. Boston, MA: Gould & Lincoln.
  • Peebles, Phillip J. E. 1993. Principles of Physical Cosmology. Princeton, NJ: Princeton University Press.
  • _____. 1999. Evolution of the Cosmological Constant. Nature 398 (4 March): 25-26.
  • Peirce, C. S. 1931. Collected Papers. Eds. Charles Hartshorne & Paul Weiss. Vol. 2. Cambridge, MA: Harvard University Press.
  • Penrose, Roger. 1989. The Emperor s New Mind. New York: Oxford University Press.
  • Penzias, Arno & Robert Wilson. 1965. A Measurement of Excess Antenna Temperature at 4080 Mc/s. Astrophysical Journal 142 (1): 419-21.
  • Peterson, Michael. 1989. Reason and Religious Belief. Oxford: Oxford University Press.
  • Polanyi, Michael. 1968. Life s Irreducible Structure. Science 160 (June): 1308-12.
  • Polkinghorne, John. 1996. So Finely Tuned a Universe of Atoms, Stars, Quanta & God. Commonweal (16 August): 11-18.
  • Reid, Thomas. 1981. Lectures on Natural Theology. Eds. Elmer Duncan & William R. Eakin. Washington, DC: University Press of America [1780].
  • Ross, Hugh. 1993. Cosmos and Creator. Colorado Springs, CO: NavPress.
  • Sagan, Carl. 1980. Cosmos. New York: Random House.
  • Sanger, Fred & E. O. P. Thompson. 1953. The Amino Acid Sequence in the Glycyl Chain of Insulin (1-2). Biochemical Journal 53 (3): 353-74.
  • Sanger, Fred & Hans Tuppy. 1951. The Amino Acid Sequence in the Phenylalanyl Chain of Insulin (1-2). Biochemical Journal 49 (4): 463-80.
  • Sarkar, Sahotra, ed. 1996. The Philosophy and History of Molecular Biology. Dordrecht, Holland: Kluwer.
  • Sawyer, Kathy. 1992. Hubble Finding: Expansion of Universe May Never End. Seattle Times (14 January): A5.
  • Scriven, Michael. 1959. Explanation and Prediction in Evolutionary Theory. Science 130 (August): 477-82.
  • Sober, Elliot. 1993. Philosophy of Biology. San Francisco, CA: Westview Press.
  • Swinburne, Richard. 1979. The Existence of God. Oxford: Clarendon Press.
  • _____. 1990. Argument From the Fine-Tuning of the Universe. In Physical Cosmology and Philosophy, ed. John Leslie. New York: Macmillan: 154-73.
  • Thaxton, Charles, Walter Bradley & Roger Olsen. 1992. The Mystery of Life s Origin. Dallas, TX: Lewis & Stanley.
  • Van Till, Howard. 1986. The Fourth Day. Grand Rapids, MI: Eerdmans.
  • Vessot, R. F. C., et al. 1980. Test of Relativistic Gravitation With a Space-Borne Hydrogen Maser. Physical Review Letters 45 (26): 2081-84.
  • Watson, James & Francis Crick. 1953. A Structure for Deoxyribose Nucleic Acid. Nature 171 (25 April): 737-38.
  • Wilford, John N. 1998. Cosmologists Ponder Missing Energy of Universe. New York Times (5 May): F1.
  • Yockey, Hubert P. 1992. Information Theory and Molecular Biology. Cambridge, UK: Cambridge University Press.
    _____________________________________ 

Stephen C. Meyer teaches philosophy of science at Whitworth College, Spokane, WA 99251, and directs the Discovery Institute's Center for the Renewal of Science and Culture in Seattle.

