Design theory--also called design or the design argument--is the view that nature shows tangible signs of having been designed by a preexisting intelligence. It has been around, in one form or another, since the time of ancient Greece.
The most famous version of the design argument can be found in the work of theologian William Paley, who in 1802 proposed his "watchmaker" thesis. His reasoning went like this:
In crossing a heath, suppose I pitched my foot against a stone, and were asked how the stone came to be there; I might possibly answer, that, for anything I knew to the contrary, it had lain there for ever. ... But suppose I had found a watch upon the ground, and it should be inquired how the watch happened to be in that place; I should hardly think the answer which I had before given [would be sufficient].
To the contrary, the fine coordination of all its parts would force us to conclude that
... the watch must have had a maker: that there must have existed, at some time, and at some place or other, an artificer or artificers, who formed it for the purpose which we find it actually to answer; who comprehended its construction, and designed its use.
Paley argued that we can draw the same conclusion about many natural objects, such as the eye. Just as a watch's parts are all perfectly adapted for the purpose of telling time, the parts of an eye are all perfectly adapted for the purpose of seeing. In each case, Paley argued, we discern the marks of an intelligent designer.
Although Paley's basic notion was sound, and influenced thinkers for decades, he never provided a rigorous standard for detecting design in nature. Detection depended on such vague criteria as being able to discern an object's "purpose." Moreover, Paley and other "natural theologians" tried to reason from the facts of nature to the existence of a wise and benevolent God.
All of these things made design an easy target for Charles Darwin when he proposed his theory of evolution. Whereas Paley saw a finely balanced world attesting to a kind and just God, Darwin pointed to nature's imperfections and brutishness. Although Darwin had once been an admirer of Paley, Darwin's own observations and experiences--especially the cruel, lingering death of his 10-year-old daughter Annie in 1851--destroyed whatever belief he had in a just and moral universe.
Following the triumph of Darwin's theory, design theory was all but banished from biology. Since the 1980s, however, advances in biology have convinced a new generation of scholars that Darwin's theory was inadequate to account for the sheer complexity of living things. These scholars--chemists, biologists, mathematicians and philosophers of science--began to reconsider design theory. They formulated a new view of design that avoids the pitfalls of previous versions.
Called intelligent design (ID), to distinguish it from earlier versions of design theory (as well as from the naturalistic use of the term design), this new approach is more modest than its predecessors. Rather than trying to infer God's existence or character from the natural world, it simply claims "that intelligent causes are necessary to explain the complex, information-rich structures of biology and that these causes are empirically detectable."
In addition to being more modest than earlier versions of design theory, ID is also more powerful. Instead of looking for such vague properties as "purpose" or "perfection"--which may be construed in a subjective sense--it looks for the presence of what it calls specified complexity, an unambiguously objective standard.
ARN Recommends: For more information about the basic concept of intelligent design, see the following resources:
Intelligent Design: The Bridge Between Science and Theology William A. Dembski
Mere Creation: Science, Faith, & Intelligent Design edited by William A. Dembski
Rhetoric & Public Affairs Special Issue on Intelligent Design John Angus Campbell, ed.
The term specified complexity may sound like a mouthful, but it names something we can all recognize without effort. Let's take an example.
Imagine that a friend hands you a sheet of paper with part of Lincoln's Gettysburg address written on it:

FOURSCOREANDSEVENYEARSAGOOURFATHERSBROUGHTFORTHONTHISCONTINENTANEWNATIONCONCEIVEDINLIBERTYANDDEDICATEDTOTHEPROPOSITIONTHATALLMENARECREATEDEQUAL
Your friend tells you that he wrote the sentence by pulling Scrabble pieces out of a bag at random.
Would you believe him? Probably not. But why?
One reason is that the odds against it are just too high. There are so many other ways the results could have turned out--so many possible sequences of letters--that the probability of getting that particular sentence is almost nil.
But there's more to it than that. If our friend had instead shown us a random jumble of 143 letters, we would probably have believed his story.
Why? Because of the kind of sequence we see. The first string fits a recognizable pattern: It's a sentence written in English, minus spaces and punctuation. The second string fits no such pattern.
Now we can understand specified complexity. When a design theorist says that a string of letters is specified, he's saying that it fits a recognizable pattern. And when he says it's complex, he's saying there are so many different ways the object could have turned out that the chance of getting any particular outcome by accident is hopelessly small.
Thus, we see design in our Gettysburg sentence because it is both specified and complex. We see no such design in the second string: although it is complex, it fits no recognizable pattern. And if our friend had shown us a string of letters like "BLUE," we would have said that it was specified but not complex. It fits a pattern, but because the string is so short, the likelihood of getting it by chance is relatively high. Four slots allow far fewer letter combinations than 143 slots, the length of our Gettysburg sentence.
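The arithmetic behind this contrast is easy to check. Here is a rough sketch in Python, under the simplifying assumption that all 26 letters are equally likely at every position (real English letter frequencies are not uniform, but the orders of magnitude are the point):

```python
import math

# Probability of drawing one particular n-letter string at random,
# assuming 26 equally likely letters in each position.
def chance_of_specific_string(n: int) -> float:
    return 26.0 ** -n

# "BLUE": specified (it fits a pattern) but not complex.
print(f"4 letters:   1 chance in {26 ** 4:,}")
# The 143-letter Gettysburg sentence: specified AND complex.
print(f"143 letters: 1 chance in 26^143, roughly 10^{143 * math.log10(26):.0f}")
```

A four-letter string has fewer than half a million possible spellings, while a 143-letter string has more possible spellings than there are particles in the observable universe, which is why no one believes the Scrabble story.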
So that's the basic notion of specified complexity. But let's elaborate the idea by looking at an example that doesn't involve letters.
Imagine that you're standing in a football stadium that's covered by a dome. The stadium is well lit, and as you look around, you discover three red bull's eyes. One is painted on the dome overhead and two are painted on seats. Upon closer inspection, you find that the bull's eye on one of the seats has an arrow sticking in it, dead center.
As you're looking at the arrow, your Scrabble-playing friend enters the stadium. He shouts a greeting and hurries over to where you're standing.
"I see you found my handiwork," he says. "I did that just a few minutes ago. I turned off the lights, entered the stadium, spun around a couple of times and shot an arrow in the dark. When I turned lights back on, I discovered that the arrow had struck a bull's eye. In fact, I've shot several arrows that way, and every time I fired a shot, it hit a bull's eye."
What would you think about your friend's story? As with the Gettysburg sentence, you'd be very skeptical. The odds of hitting a bull's eye without aiming are so low that you'd doubt he could have done it even once, let alone several times in a row.
But as with the Gettysburg example, there's more to it than low probability. If your friend had told you that he'd never hit a target, and that his arrows had landed in a different spot every time, you'd probably believe him. Why? Because his shots would fit no discernible pattern, as defined by the targets.
Now we're in a position to give a broader description of specified complexity: an object or event exhibits specified complexity when it has an extremely low probability of occurring by chance and matches a discernible pattern. According to contemporary design theory, the presence of specified complexity is an indicator of an intelligent cause.
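The two-part test just described can be caricatured in a few lines of Python. This is only a toy illustration of the definition, not Dembski's actual probabilistic apparatus, and the cutoff value is an arbitrary stand-in for his "small probability" bound:

```python
# Toy version of the two-part test: flag an outcome only when it is
# both improbable under the chance model AND matches a known pattern.
# The threshold is an arbitrary illustrative cutoff, not Dembski's bound.
def exhibits_specified_complexity(prob_by_chance: float,
                                  matches_pattern: bool,
                                  threshold: float = 1e-50) -> bool:
    return matches_pattern and prob_by_chance < threshold

# 143-letter English sentence: improbable and patterned.
print(exhibits_specified_complexity(26.0 ** -143, True))   # True
# Random 143-letter jumble: improbable but unpatterned.
print(exhibits_specified_complexity(26.0 ** -143, False))  # False
# "BLUE": patterned but not improbable.
print(exhibits_specified_complexity(26.0 ** -4, True))     # False
```

The three calls mirror the three cases from the Scrabble example: the Gettysburg sentence, the random jumble, and "BLUE."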
ARN Recommends: For more information on complex specified information see:
The Design Inference: Eliminating Chance through Small Probabilities William A. Dembski
Intelligent Design William A. Dembski
As we saw earlier, one of the central claims of intelligent design (ID) is that "intelligent causes are necessary to explain the complex, information-rich structures of biology."
The more we learn about living organisms, the more they look like products of design rather than products of chance and natural law. Ironically, many opponents of intelligent design concede this fact. Oxford biologist Richard Dawkins, for example, says "Biology is the study of complicated things that give the appearance of having been designed for a purpose."
Similarly, in a recent issue of the biology journal Cell, Bruce Alberts, a leading cell biologist and president of the National Academy of Sciences, wrote:
We have always underestimated cells. ... The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. ... Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts.
Of course, biologists such as Dawkins and Alberts believe that the apparent design of living things is an illusion--produced not by an intelligent source, but by chance and natural law. Dawkins specifically states that Paley's "watchmaker" is natural selection, which produces complex systems by accumulating favorable genetic changes over time.
The notion of complex specified information (CSI) provides a way to test this claim.
To see how this might work, let's consider just one of the processes involved in human vision. When light strikes a rod cell, a visual cell that's located in the retina, the rod cell produces an electrical charge that runs down a nerve cell and into the brain.
How does the light set off the electrical charge?
In the absence of light, a rod cell maintains an electrically neutral state by allowing sodium ions to flow freely in and out of the cell. (An ion is an atom or group of atoms that carries an electric charge.) It does this by means of two proteins embedded in the cell membrane. One protein, called an ion channel, acts like a gate, regulating the inflow of sodium ions. Another protein acts as a pump, pushing the sodium ions back out of the cell.
The ion channel opens and closes in response to another biomolecule, called cGMP. For convenience, we'll call it the opener. When the opener attaches to the ion channel, the channel opens up and allows positively charged sodium ions to flow into the cell. When the opener falls off, the channel shuts and the flow of ions stops.
Under normal circumstances, there is a high concentration of opener molecules in the cell, and they are continually attaching to the channel and then falling off. As a result, the channel is continually opening and closing.
All that changes, though, when light enters the cell. When that happens, the light strikes a biomolecule that we'll call the trigger (its real name is 11-cis-retinal). This causes the trigger to change its shape, setting off a cascade of chemical reactions in the cell.
The result of all these reactions is that the opener gets snipped in two, and is no longer able to attach to the ion channel. Sodium ions are no longer able to enter the cell, and as the pump pushes them out, an electrical charge develops. When the charge gets strong enough, the cell gives off an electrical impulse.
After the impulse is sent out, another cascade of reactions restores the trigger and opener molecules to their original state, allowing the ion channel to function again.
Is this system designed?
Darwinists would say no: All biological systems were "created" by a stepwise accumulation of random genetic mutations that are preserved by natural selection--or survival of the fittest. Existing systems are simply modifications of earlier systems, which were modifications of even earlier systems, and so on.
Design theorists, on the other hand, would say yes--if the system exhibits specified complexity.
Who's right? Both sides would agree that this system is complex. It has lots of parts, and all these parts have to work together.
The real question, then, is how specified the system is: How broad are the requirements for a working system?
One way to answer this question is to tinker with the system and see what happens. How well does the system function when you start knocking out proteins or other biomolecules? How well do the molecules function when you change them? If it can take a lot of hits and still work, then it isn't very specified and an undirected, stepwise process is plausible. But if it can handle only minute changes, then the system is highly specified--and the likelihood of producing it by a blind process is infinitesimal.
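The knockout procedure just described can be sketched as a toy simulation. Everything here is hypothetical: the `works` predicates stand in for a real functional assay, and the two model systems simply contrast a fragile design (every part required) with a redundant one (many parts optional):

```python
# Toy sketch of the knockout test: remove one part at a time and ask
# whether the system still functions. The predicates below are
# hypothetical stand-ins for a real biological assay.
def knockout_robustness(parts, works):
    """Fraction of single-part knockouts that leave the system working."""
    survivors = sum(works(parts - {p}) for p in parts)
    return survivors / len(parts)

parts = set(range(10))                   # ten interchangeable "parts"
fragile = lambda ps: len(ps) == 10       # every part is required
redundant = lambda ps: len(ps) >= 6      # any six parts suffice

print(knockout_robustness(parts, fragile))    # 0.0: highly specified
print(knockout_robustness(parts, redundant))  # 1.0: tolerates change
```

A robustness score near 1.0 corresponds to the "takes a lot of hits and still works" case in the text; a score near 0.0 corresponds to a highly specified system.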
Some systems are so highly specified that they seem to tolerate no change at all. The bacterial flagellum, a motor that bacteria use to propel themselves, is made up of about 20 different proteins. Another 20 proteins are needed to build it. If you knock out any of these 40 proteins, the flagellum doesn't work. The flagellum thus seems to display not only specified complexity, but irreducible complexity.
More fascinating is a study reported in the journal Science. A team of researchers wanted to discover how many genes were necessary for the simplest organism to survive and reproduce. If you think of an organism's genes as its parts list, the scientists wanted to know how small they could make the parts list and still have a living, reproducing organism.
They did this, in part, by tinkering with a bacterium called Mycoplasma genitalium, the simplest known organism. The organism's genetic code is about 580,000 letters long and spells out 480 protein-producing genes plus 37 "species" of RNA. After "knocking out" various protein-coding genes, the scientists estimated that 265 to 350 of this bacterium's genes are "essential" for the organism to live and reproduce under laboratory conditions--an extremely favorable environment.
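The study's own figures imply that over half of even this minimal genome is indispensable. A quick check of the arithmetic, using the numbers quoted above:

```python
# Figures quoted in the text: 480 protein-coding genes, of which an
# estimated 265 to 350 are essential under laboratory conditions.
total_protein_genes = 480
essential_low, essential_high = 265, 350

print(f"{essential_low / total_protein_genes:.0%} to "
      f"{essential_high / total_protein_genes:.0%} of the genes are essential")
# prints "55% to 73% of the genes are essential"
```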
Is this a designed system? It's beginning to look that way. But the main point is that specified complexity gives us a standard to guide our research.
ARN Recommends: For further study of intelligent design and biology see the following resources:
Darwin's Black Box: The Biochemical Challenge to Evolution Michael J. Behe
Irreducible Complexity: The Biochemical Challenge to Darwinian Theory Michael J. Behe
You will often hear that contemporary evolution theory is supported by overwhelming evidence. But much of this evidence is unimpressive unless you're already convinced that naturalistic evolution must be true.
To understand the kind of evidence cited by naturalistic evolutionists, it may be helpful to go back to the stadium example in question 2.
Imagine that you challenged your friend's account of how the arrows landed in the targets.
"No problem," replies your friend. "I can prove it to you."
He holds up a bow and asks, "What is this?"
"A bow," you answer.
He then leads you to the target with the arrow in it and asks, "Now, what is this?"
"A target," you reply.
"Yes! With an arrow in it," he exclaims.
Finally, your friend leads you to a panel with some switches on it. He flips the switches back and forth, which turns the stadium lights off and on.
Your friend then summarizes his case: "I've shown you the bow. I've shown you the arrow in the target, and I've shown you that I can turn the stadium lights off and on. What more evidence do you need?"
The evidence your friend presents is certainly consistent with his story. The problem, however, is that it's also consistent with other explanations, including the more likely explanation that he entered the stadium, turned on the lights, walked over to the target and jammed the arrow into the bull's eye.
Much of the evidence for naturalistic evolution is no more decisive than our friend's demonstration.
For example, following the news in June 2000 that the human genome had been sequenced, Nobel laureate David Baltimore announced in a New York Times opinion piece that the discovery "confirms something obvious and expected, yet controversial: our genes look much like those of fruit flies, worms and even plants. ... [t]he genome shows that we all descended from the same humble beginnings and that the connections are written in our genes. That should be, but won't be, the end of creationism."
Such "evidence" is not remotely decisive unless you've already decided that only naturalistic causes could have created such organisms as fruit flies, worms and humans. But that's precisely what is at issue.
In fact, there is systematic evidence against contemporary evolution theory. Researchers in such fields as paleontology, embryology, microbiology, biochemistry and genetics have uncovered systematic evidence that is deeply at odds with naturalistic evolution.
A review of that evidence is beyond the scope of this FAQ, but if you're interested in further study, check out the references at the end of this section. You should also check out some of the books listed in the Access Research Network recommended reading list.
Additionally, if you have the technical background, it would pay to examine some of the original sources cited in these books. When you study the scientific literature, you'll find that there is a huge disconnect between that literature and the popularized "science" that you'll read in the press and basic biology texts. ARN also publishes a scholarly journal, Origins & Design, that investigates the evidence for intelligent design in great technical detail. If you are looking for a daily Internet dialogue on the topic of origins and intelligent design, be sure to visit the ARN Design Forum.
ARN Recommends: For well-researched summaries of the evidence against naturalistic evolution, see the following resources:
Icons of Evolution: Science or Myth? Jonathan Wells
Evolution: A Theory in Crisis Michael Denton
Darwin on Trial Phillip Johnson
From an ID perspective, the natural vs. supernatural distinction is irrelevant. The real contrast is not between natural laws and miracles, but between undirected natural causes and intelligent ones.
Mathematician and philosopher of science William Dembski puts it this way: "Whether an intelligent cause is located within or outside nature (i.e., is respectively natural or supernatural) is a separate question from whether an intelligent cause has operated."
Human actions are a case in point: "Just as humans do not perform miracles every time they act as intelligent agents, so there is no reason to assume that for a designer to act as an intelligent agent requires a violation of natural laws."
On the other hand, even if an object were miraculously created, it could still be studied. Take the flagellum, for example. No matter what its origins, a flagellum is a flagellum. We can take it apart, we can examine its components, we can modify it, we can figure out how it works. And we can do that whether it evolved over eons or popped into existence two seconds ago.
In the world of human technology, this is called reverse engineering. But the same process is also used in biology.
"That's basically what everybody at the bench is doing," said Scott Minnich, a microbiologist at the University of Idaho. "We don't have the blueprints in the true sense. We have the DNA code for a lot of organisms, but in terms of the assembly of these molecular machines, it's a matter of breaking them apart and trying to put them back together to figure out how they function."
This is also the kind of work that will be done with the human genome. Speaking to the New York Times in late June, when the human genome breakthrough was announced, Harold Varmus, former director of the National Institutes of Health, commented, "The important thing is having pieces of DNA in your hand, and being able to figure out how they work by modifying and mutating them. That's where the game is now."
Fittingly, the metaphor he used to describe this process was examining a clock: "You can take the clock apart, lay the pieces out in front of you, and then try to understand what makes it tick by putting it back together again."
ARN Recommends: For further study on the important distinction between natural laws and naturalism see:
The Wedge of Truth Phillip E. Johnson
Darwinism: Science or Naturalistic Philosophy Phillip E. Johnson
Darwinism: Science or Naturalistic Philosophy Debate at Stanford University between William B. Provine and Phillip E. Johnson.
The American Civil Liberties Union (ACLU), the National Center for Science Education (NCSE) and other organizations have tried to portray intelligent design as another variant of scientific creationism.
For example, when high school biology teacher Roger DeHart, of Burlington, Wash., tried to teach his students about intelligent design, the ACLU of Washington state accused him of "presenting the discredited and illegal theory of creationism." Similarly, they branded intelligent design as "a smoke screen for creationists who have lost in the courts."
Although intelligent design is compatible with many "creationist" perspectives, including scientific creationism, it is a distinct theoretical position. This can be seen by comparing the basic tenets of each view.
Legally, scientific creationism is defined by the following six tenets (as set out in Arkansas's Act 590, the statute reviewed in McLean v. Arkansas):
1. Sudden creation of the universe, energy and life from nothing;
2. The insufficiency of mutation and natural selection in bringing about the development of all living kinds from a single organism;
3. Changes only within fixed limits of originally created kinds of plants and animals;
4. Separate ancestry for man and apes;
5. Explanation of the earth's geology by catastrophism, including the occurrence of a worldwide flood; and
6. A relatively recent inception of the earth and living kinds.
Intelligent design, on the other hand, involves two basic assumptions:
1. Intelligent causes are necessary to explain the complex, information-rich structures of biology; and
2. These intelligent causes are empirically detectable.
"This is a very modest, minimalist position," Dembski says. "It doesn't speculate about a Creator or his intentions."
In fact, there are only two general views that aren't compatible with intelligent design: (1) a radical naturalism that denies the existence of any non-human intelligence, theistic or otherwise; and (2) conventional theistic evolution.
It may seem surprising that the second view, conventional theistic evolution, is incompatible with intelligent design, since it clearly embraces the existence of God. But the view we generally associate with "theistic evolution" denies that God's creative activity can be empirically detected. As Dembski points out:
Theistic evolution takes the Darwinian picture of the biological world and baptizes it, identifying this picture with the way God created life. When boiled down to its scientific content, however, theistic evolution is no different from atheistic evolution, treating only undirected natural processes in the origin and development of life.
Theistic evolution places theism and evolution in an odd tension. If God purposely created life through Darwinian means, then God's purpose was ostensibly to conceal his purpose in creation. Within theistic evolution, God is a master of stealth who constantly eludes our best efforts to detect him empirically. Yes, the theistic evolutionist believes that the universe is designed. Yet insofar as there is design in the universe, it is design we recognize strictly through the eyes of faith. Accordingly, the physical world in itself provides no evidence that life is designed.
Regarding the question of whether intelligent design is the same thing as scientific creationism, opponents of intelligent design have made much of a federal court case, Freiler v. Tangipahoa Parish Board of Education, in which the two positions were equated.
But according to David DeWolf, a law professor at the Gonzaga University School of Law, this finding came in a tangential statement in the judge's decision.
The central issue in the case, DeWolf said, was not intelligent design, but the question of whether a disclaimer about evolution mandated by the Tangipahoa school district constituted an establishment of religion.
"The judge was simply laying out the general landscape of creation theories. In one sentence, he said intelligent design is another name by which you may know creationism."
The judge struck down the disclaimer, and his decision was upheld by a panel of the 5th Circuit Court of Appeals. But in the appellate opinion, intelligent design was never even mentioned.
"There's no finding in which you can say, 'Aha! See, the courts have found that intelligent design is just the same,'" DeWolf said. "If you cited that as your authority in a lawsuit, a judge would be pretty mad at you for having misled him into thinking that this proposition had been established."
ARN Recommends: For more information on the legal issues about teaching Intelligent Design in the public school classroom see:
Intelligent Design in the Public School Science Curricula: A Legal Guidebook David K. DeWolf, Stephen C. Meyer, Mark E. DeForrest