All posts by Gary Zacharias

The Cell’s Design–Part 4

Let’s pick up where I left off last time as I summarize the findings of Dr. Fazale Rana in his book The Cell’s Design. This is an important book that explores the biochemical reasons to believe there is a designer behind the creation of life.

Dr. Rana continues where he left off, talking about the genetic code inside the cell. He says researchers have recently, and unwittingly, stumbled across the most profound evidence yet for intelligent activity — a type of fine-tuning in the code’s rules. These rules create a surprising capacity to minimize errors and communicate critical information with high fidelity. There is a redundancy to the code, and it is not haphazard. Deliberate rules were set up to protect the cell from the harmful effects of substitution mutations. The conclusion is that any genetic code assembled through random biochemical events could not possess such ideal error-minimization properties. Nobel laureate Francis Crick argued in 1968 that the genetic code cannot undergo significant evolution because any change would result in a large number of defective proteins. What is really amazing is that the genetic code originated at the time when life first appeared on earth. According to Dr. Rana’s book, the complexity of the code makes it virtually impossible that natural selection could have stumbled upon it by accident in such a short period.

A later chapter talks about biochemical quality control systems which are in place to identify and rectify any production errors. Biochemists have discovered that, like any manufacturing operation designed by human engineers, key cellular processes incorporate a number of quality control checks. Checkpoints occur at several critical junctions during protein manufacture, including mRNA production, export from the nucleus, and translation at ribosomes. One of the most remarkable features is the ability to discriminate between misfolded proteins and partially folded proteins that appear misfolded but are well on the way to adopting their intended three-dimensional architectures.

Next, Dr. Rana asks a key question. If life results solely from evolutionary processes, then shouldn’t scientists expect to see very few cases in which evolution has repeated itself? Random processes shouldn’t repeat over the history of the earth. That makes sense to me. He goes on to say, however, if life is the product of an intelligent creator, then the same designs should repeatedly appear in biochemical systems. He gives one hundred recently discovered examples of repeated biochemical designs (see pages 207-214). The explosion in the number of these examples is odd if life results from historical sequences of chance evolutionary events. However, if there is a creator, it’s reasonable to expect he would use the same designs repeatedly.

There is one more chapter devoted to recent discoveries that seem to require a supernatural agent. Dr. Rana takes a look at cell membranes. Forty years ago they were seen as little more than haphazard, disorganized systems. However, since then advances have dramatically changed how scientists think about these membranes. Biochemists have discovered the cells’ boundaries are highly structured, highly organized systems. They require fine-tuning of their composition to be as stable as they are. These membranes not only form a key boundary layer, but they also play a critical role in regulating the activity of proteins associated with the membrane. Some biochemists go further, suggesting that cell membranes harbor information.

The last part of his book responds to one of the most common challenges leveled against arguments for intelligent design — imperfections found in nature. I’ll save that for one last blog on The Cell’s Design.

Share

The Cell’s Design–Part 3

The next section of Dr. Fazale Rana’s book, The Cell’s Design, covers biochemical fine-tuning, which conveys a sense of the remarkable exactness of biochemical systems. Aquaporins form channels in cell membranes through which they transport water. Amino acids have to be brought into exact alignment to form a useful three-dimensional architecture. In addition, collagens, the most abundant proteins in the animal kingdom, also contain an exact, fine-tuned amino acid composition.

Exact fine-tuning is not limited to the structure of biomolecules. The rate of chemical processes is also carefully refined. For example, the cell’s machinery copies mRNA from DNA only when the cell needs the protein encoded by a particular gene housed in the DNA. When that protein is not needed, the cell shuts down production. Biochemists have also discovered that the breakdown of mRNA molecules is not random but precisely orchestrated. Proteins are constantly made and destroyed by the cell. Those that take part in highly specialized activities within the cell are manufactured with great timing — only when they are needed. Once the proteins are no longer useful, the cell breaks them down into the amino acids of which they are made. This, too, is an exacting, delicately balanced process.

Rana then spends some time on the precise arrangement of elements in the cell. Amino acids link together in head-to-tail fashion to form protein chains. These sequences appear to be highly optimized. Their exact positioning makes proteins better able to withstand mutations to DNA that result in a change to the amino acid sequence. Their structures also appear well suited to withstand damage caused by oxygen in the cell. The molecules that make up the backbone of DNA and RNA appear to be in a highly specific arrangement. Their chemical properties produce a stable helical structure capable of storing the information needed for the cell’s operation.

The Cell’s Design continues by highlighting the information found in the cell. Proteins and DNA are information-rich molecules. Just as letters form words in our language, amino acids are strung together to produce useful information. The chief function of DNA is to store information; it houses the directions necessary to make chain-like molecules (polypeptides). DNA compares to the reference section of a library, where books can be read but not removed. The material stored in those books has to be copied before it can be taken from the library — exactly the same thing happens in the cell. The language of DNA and RNA is translated at the ribosome into the amino acid language of proteins. DNA can store an enormous amount of information — theoretically, one gram of DNA can house as much information as nearly one trillion CDs. To summarize, it’s not the mere presence of information that argues for a designer; it’s the structure of the information housed in proteins and DNA. There is a direct analogy between the architecture of human language and the makeup of biochemical systems. This information is handled much as a computer handles data. For example, computer scientist Leonard Adleman recognized that the proteins responsible for DNA replication, repair, and transcription operate as Turing machines, the theoretical model that underlies all computer operations. I don’t have the space here to develop this, but check pages 163-168 in Rana’s book.
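Out of curiosity, the one-gram-of-DNA claim can be sanity-checked with back-of-envelope arithmetic. The constants below (nucleotide mass, 2 bits per base, a 700 MB CD) are my own rough assumptions, not figures taken from Rana’s book:

```python
# Back-of-envelope check of the "one gram of DNA holds nearly a trillion CDs" claim.
# All constants here are rough assumptions, not numbers from the book.

AVOGADRO = 6.022e23      # molecules per mole
NT_MASS = 330.0          # approximate grams per mole for one nucleotide
BITS_PER_NT = 2.0        # four possible bases -> log2(4) = 2 bits each
CD_BITS = 700e6 * 8      # a 700 MB CD expressed in bits

nucleotides_per_gram = AVOGADRO / NT_MASS          # ~1.8e21 nucleotides
bits_per_gram = nucleotides_per_gram * BITS_PER_NT # ~3.7e21 bits
cds_per_gram = bits_per_gram / CD_BITS             # ~6.5e11 CD-equivalents

print(f"One gram of DNA holds roughly {cds_per_gram:.1e} CDs' worth of data")
```

With these assumptions the result lands in the range of several hundred billion CDs per gram, so the book’s “nearly one trillion” figure is the right order of magnitude.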

I need a couple more blogs to finish reporting on Rana’s book. The detail in it blew me away with its careful analysis of the cell’s abilities. Hope you feel the same.

Share

The Cell’s Design–Part 2

In the last blog, I introduced you to Dr. Fazale Rana’s book, The Cell’s Design. His claim is that today’s biochemists have uncovered amazing molecular features inside the cell that lead to one reasonable conclusion — a supernatural agent must be responsible for life. Let’s move on to further chapters in the book, which offer more reasons why he believes this is the case.

In Chapter 4, he introduces molecular motors. He starts with a famous example, which has been brought up in previous books — the bacterial flagellum. Made up of over 40 different kinds of proteins, this is essentially a molecular-sized electrical motor which rotates a propeller, allowing the bacterial cell to navigate through its environment. But the rest of the chapter has many more examples of these molecular motors. Some are rotary in nature, including parts such as turbines, rotors, cams, and stators. Some motors spin, and some swivel. One amazing molecular motor, dynein, carries cargo throughout the cell along microtubule tracks. This motor literally shifts gears in response to the load that it is carrying. You can see a terrific video put out by Harvard showing dynein in operation — Google “The world inside the cell” then “cell–inner life.” His conclusion? Experience teaches us that machines and motors don’t just happen.

He sees these motors as an update of the watchmaker argument (just as watches, which display design, are the product of a watchmaker, so organisms, which also display design, are the product of a creator). The discovery of these biomolecular motors and machines inside the cell revealed a diversity of form and function that mirrors the diversity of designs produced by human engineers. In addition, researchers working with nanotechnology reinforce the idea that molecular motors in the cell are literal motors in every sense. The contrast between synthetic molecular motors designed by some of the finest organic chemists in the world and the elegance and complexity of molecular motors found in cells is striking. Actually, the cell’s machinery is vastly superior to anything that the best human designers can conceive or accomplish. For example, bacterial flagella operate near 100% efficiency, while man-made electric motors function at only 65% efficiency and the best combustion engines attain only 30% efficiency.

Dr. Rana’s next chapter deals with a chicken-and-egg problem. DNA houses the information the cell needs to make proteins, which play such a vital role in almost every cell function. Biochemists call DNA a self-replicating molecule. However, DNA cannot replicate on its own. Instead, it requires a variety of proteins. So here’s the problem — proteins cannot be produced without DNA, and DNA cannot be produced without proteins. Many proteins, in addition, need the assistance of other proteins to fold into the proper three-dimensional shape after they’ve been produced at the ribosome. Once again, you need proteins to help fold proteins. You can’t have one without the other. Biochemical chicken-and-egg systems represent a special type of irreducible complexity where you need all the parts to function properly. He raises questions about the ability of evolutionary processes to produce these systems.

I need more space to explore further chapters in Dr. Rana’s book, but the information is pretty dense. I’ll keep these blogs short enough for you to digest the ideas–more to come next time.

Share

The Cell’s Design–Part 1

Some time ago I got to meet Dr. Fazale Rana, a biochemist and author of several books. He gave a presentation on the complexities inside the cell, which he used as an indication of the existence of God. This represents a continuation of the design argument, which says God is the ultimate engineer who designed everything in the universe. I recently finished reading Dr. Rana’s new book, The Cell’s Design. It makes an important contribution to the argument from design by explaining recent scientific discoveries that seem to indicate complexity far beyond anything random processes can create. For the next couple of blogs, I’d like to explore his key points in this book.

He starts by mentioning another famous book, Darwin’s Black Box by Michael Behe. This earlier book presented a case for intelligent design from the biochemical perspective. Behe had argued that biochemical systems, by their very nature, are irreducibly complex. He argued for intelligent design by emphasizing the inability of natural selection to generate such complex systems through a gradual evolutionary process. Critics, however, said his argument rested on a lack of understanding — an argument from ignorance — so they rejected it. As a result, Dr. Rana wanted to write a book that went beyond irreducible complexity to communicate a vast range of amazing properties that characterize life’s chemistry. These indicators of design seen inside the cell make a case for a creator based on what scientists know, not on what we don’t understand.

Before Dr. Rana starts on his proof, he describes and justifies the approach used to argue for intelligent design in biochemical systems. He says when people distinguish between the work of an intelligent agent and the outworking of natural processes, they don’t use intuition. Instead, they use pattern recognition. If biochemical systems are indeed the product of a creator who made man in his image, then the defining characteristics of those systems should be very close to the hallmark characteristics of humanly crafted systems. The rest of his book makes use of pattern recognition to build a positive case for biochemical intelligent design.

His first chapter presenting his argument talks about the minimum number of genes and essential biochemical systems necessary for life. It appears as if a lower bound of several hundred genes exists, below which life cannot be pushed and still be recognized as life. If left up to an evolutionary process, not enough resources or time exist throughout the universe’s history to produce life in even its simplest form. Scientists used to see bacteria as simple bags of assorted molecules haphazardly arranged inside the cell. But actually these bacteria, as simple as they are, display an incredible degree of internal organization and exquisite coordination of biochemical activity. Origin-of-life researcher David Deamer remarked, “. . . one is struck by the complexity of even the simplest form of life.” So even these tiny bacteria speak of intelligent design.

Share

How We Got the New Testament

Ben Witherington, author of The Living Word of God, has a section in which he explains how the New Testament books were formed into the canon (books accepted as authoritative). It’s important to know this story because many people today have mistaken ideas of how this all happened. Thanks to books like The Da Vinci Code, many readers assume a powerful church set up a council and picked which books they considered legitimate. However, it was not a matter of politics or powerful men sitting down in the fourth century A.D. to decide these issues. No one ruled out other books that had been previously considered legitimate. The truth is quite different.

The New Testament canon came about due to a process that actually started in the New Testament era. Take a look at 2 Peter 3:16, where Peter describes Paul’s writings as part of true scripture. The formation of the accepted New Testament writings was already happening in the primitive Christian community.

Some, including the author of The Da Vinci Code, will argue that Gnostic texts competed with traditional writings for inclusion in the canon. But Witherington says nobody argued for the inclusion of any of the Gnostic texts. They were seen as heresy in their own day as well as long afterward. As he notes, “Not even the earliest of the Gnostic texts, the Gospel of Thomas, was ever on a canon list or seriously considered for inclusion as a sacred text for Christians.”

Most of the New Testament became accepted as sacred with no debate surrounding the various works. The ones accepted immediately were Matthew, Mark, Luke, John, and Paul’s letters (except Hebrews, an anonymous letter).

Which works were debated? Hebrews, because it was anonymous. James and Jude, because they seemed so Jewish. Revelation, because of its prophecy.

What were the standards used to decide if a book belonged in the canon? Again, let Witherington describe this: “It needed to be an early witness, a first-century witness, one that went back directly or indirectly to the original eyewitnesses, apostles, and their co-workers or an early prophet like John of Patmos.” So accepted books needed a combination of historical and theological factors to become part of the canon. Early church fathers said accepted books needed to have some sort of connection to an apostle and should involve orthodox teaching.

There was no single church gathering that created the current list of twenty-seven books in the New Testament. The process of sifting and choosing which books should belong in the Christian scriptures was going on throughout the second through the fourth centuries. A man named Marcion came to Rome around 144 A.D. He told the church there he had a list of acceptable books for a canon, but his list was extremely short. It included only Luke’s gospel and a few of Paul’s letters. The church rejected Marcion, claiming that Matthew, Mark, Luke, and John should all be considered scripture.

By the end of the second century there was a list which looks a lot like what we have today. Only James, Hebrews, 1 and 2 Peter, and 3 John are missing. What’s interesting is that eventually all geographical areas where Christianity was popular (the Eastern empire, Africa, the Western empire) independently concluded that these twenty-seven books should be recognized as the Christian scriptures. That’s remarkable when you think about it: “The various parts of the church, without political or ecclesiastical coercion, and under the guidance of the Holy Spirit, all came to the same conclusion about the twenty-seven books of the New Testament.”

What can we say to wrap this up? Here’s the key fact — it was not a matter of the church conducting a big meeting, drawing up a list of books to form the canon, and imposing this list on its members. Instead, the church simply recognized the list of books that had been forming since the time of Peter and Paul. Over the centuries Christians had found these books valuable for worship and instruction. As one person says in Witherington’s book, “The canon thus represents the collective experience of the Christian community during its formative centuries.” So there was no conspiracy, no imposition of books, no hiding or destroying competing gospels, no huddled gathering of old men.

Share

Rules For Considering Errors in the New Testament

In his book The Living Word of God, Ben Witherington wraps up the issue of errors in the New Testament. He has six points that are important to keep in mind when we hear complaints from critics who claim they have found errors in these documents.

First, it’s not considered an error when an author intends to give a general report or the gist of something rather than a precise report. His generalizing is not falsifying the story.

Second, it’s not considered an error if an author of ancient literature arranged, edited, or paraphrased what someone said. For example, Matthew uses the term “kingdom of heaven” rather than the phrase “kingdom of God” used by Mark in the same passage. We should not impose a modern standard of precision that these ancient authors were not required to follow according to the writing customs of their day.

Third, it’s not considered an error to present events out of chronological order. For example, John 2 places the cleansing of the temple at that point in the narrative for theological, not chronological, reasons.

Fourth, it’s not considered an error of the original author if a translator makes a mistake when rendering the original into another language.

Fifth, it’s not considered an error when a New Testament author discusses the Old Testament text and appears to misrepresent it. In fact, they are often just paraphrasing the text rather than being concerned about a precise translation.

Sixth, we need to understand what an error would look like. It would violate the principle of noncontradiction, which says that A and not-A cannot both be true at the same time in the same way. For example, it would be an error if one of the gospels said Jesus was born in Nazareth, and another said he was born in Bethlehem. They might both be wrong, but they can’t both be right.

When we hear critics talk about errors in the Bible, we should remember something that Ben Witherington said: “I have yet to find a single example of a clear violation of the principle of noncontradiction anywhere in the New Testament.”

Share

More From Witherington

In the last blog covering The Living Word of God by Ben Witherington, I discussed the differences between modern biographies and ancient ones so that today’s readers might be more prepared for what they read in the New Testament gospel accounts of the life of Jesus. There are big differences in length, what is covered, and the amount of editorializing done by the author. Here is some more information about Matthew, Mark, Luke, and John that attempts to help us understand the mindset of those who wrote about Jesus.

Ancient biographies, of which the New Testament gospels are a part, had as their main goal “an adequate and accurate unveiling of the character of the person in question,” according to Witherington. That’s why there are many stories about Jesus which may have had little historical consequence but revealed his character. For example, think of the story of the wedding feast at Cana. Even though the story involved nothing of a historical nature, it did show his abilities and his relationship to his mother. Ancient biographies, in attempting to show us the character of the person, were highly selective and were not always written according to exact chronological order. For example, early parts of Matthew show Jesus doing nothing but talking or teaching, but the author is simply grouping the teaching material in one spot. When we look at Matthew, Mark, and Luke regarding the temptation of Jesus, we see a different order to the three temptations, not because the authors couldn’t get it straight but because they had a different purpose in relating this event.

Perhaps the best way to see the gospels is to think of them as interpretive portraits rather than snapshots. When we look at a painting of an individual, we see that the artist has been selective in what is shown to us so that we may gain some sort of insight into the person being portrayed. It’s not fair to hold the gospel writers to modern standards of newspaper reporting or modern biographical and historical conventions. Our question must be whether the four gospels portray a good and true likeness of the historical Jesus. I think the answer to that is a definite “yes.”

Share

The Gospels As Ancient Biography

I just finished reading The Living Word of God by Ben Witherington, a professor of New Testament Interpretation at Asbury Theological Seminary. In this book the author has interesting things to say about portions of the New Testament. He believes it’s important to understand the various genres that make up the twenty-seven books found there. Since I teach the Bible as literature at Palomar College, I wanted to share some of his points here; he believes we can understand the Bible much better if we understand the type (genre) of literature we are reading.

For this blog I’m going to focus solely on the four gospels (Matthew, Mark, Luke, John) as ancient biographies and histories. Many of my students assume that these gospels must be like modern biographies, covering the person’s entire life, producing a chronological account, and containing precise quotations. They have questions when they discover this is not the case. They assume there must be errors in the text.

But Witherington claims these four gospels “all conform quite nicely to the conventions of ancient biographies, which were quite different in scope and character than most modern biographies.” To start with, modern authors have unlimited space to tell their stories, but ancient biographies were restricted to material that could fit on scrolls. These authors had to be selective about what they covered. That’s why, for example, we don’t learn the entire story of Jesus’ life.

In addition, ancient biographies did not spend much time on early childhood development. People in the ancient world did not believe personality developed over time. Instead, they felt you were stuck with whatever personality you were born with. Again, we can see this when we look at the life of Jesus — we know very little about him before his ministry started around the age of 30.

Another characteristic of ancient biographies was a focus on the death of the individual since this event was thought to reveal the character of the person. A shameful death was considered to be a revelation that the person did not have a good character. It’s no wonder, then, that the gospel writers spent so much time on the death of Jesus — they felt they needed to argue that this death was necessary to fulfill God’s plan.

A fourth difference between modern and ancient biographies deals with the amount of editorializing the author did. Editorializing abounds in modern biographies; the author is often eager to share his or her comments. However, the ancients tended to portray a person indirectly, allowing the words and deeds of the person in question to speak for themselves. That is certainly true of the gospels in the New Testament. We often hear the words of Jesus and are forced to decide for ourselves what he meant.

There is much more that Witherington has to say, but I’ll save that for future blogs. I’m hoping that this information will allow us to appreciate the gospels for what they are rather than what they were never intended to be.

Share

Signature in the Cell–Part 4

Here is the last part of my summary of Dr. Stephen Meyer’s new book, Signature in the Cell. It’s a bit daunting, but he has so much good info on recent discoveries that indicate a designer behind all life. The other three parts are available here in case you want to catch up.

Another complaint about intelligent design is that it does not qualify as a scientific theory by definition. Scientific theories, according to this complaint, must explain events or phenomena by reference to natural laws alone. Science must not assume there are any seen or unseen powers that interfere with the normal working of material objects. Meyer rejects this by saying the activity of a designing intelligence does not necessarily break or violate the laws of nature. He says it is the same style of explanation as other historical scientific theories in which events are explained primarily by reference to prior events. Those who say ID does not qualify as a scientific theory generally argue that it invokes an unobservable entity, it is not testable, it does not explain by reference to natural law, it makes no predictions, it is not falsifiable, it cites no mechanisms, and it is not tentative. But Meyer indicates that many scientific theories infer unobservable entities, causes, and events. For example, there are theories of chemical evolution and the existence of many transitional intermediate forms of life. Both of these are unobservable. Historical sciences commonly use indirect methods of testing as they weigh competing unobservable events to determine which one has the greatest explanatory power. The theory of intelligent design is subject to empirical testing and refutation. Many times scientists say that a theory must explain all phenomena by reference to purely material causes, but Meyer wonders why science should be defined that way. Scientists in the past have not always restricted themselves to naturalistic hypotheses. Today many scientific fields currently suggest intelligent causes as scientific explanations – consider archeology, anthropology, forensics, astrobiology.

Meyer spends time refuting the idea that intelligent design is religion. Religions usually involve various formal structures, practices and ritualistic observances, but these are all missing in ID. In addition, it does not offer a comprehensive system of belief about the intelligence behind the design of the universe. The theory of intelligent design does not affirm any sectarian doctrines. Of course this theory has religious and metaphysical implications, but these are not grounds for dismissing it. Intelligent design is not the only idea that has metaphysical or religious implications. Consider Darwinism – it has significant metaphysical and religious implications as well. Scientific theories should be evaluated on the evidence rather than the implications they may have. Antony Flew, a well-known atheistic philosopher who has now become a proponent of intelligent design, insists that we should “follow the evidence wherever it leads.” Meyer argues that the motivations of the people behind the theories should not invalidate them either because it is not the motivation that determines the merits of the idea; it’s the quality of the arguments and the relevance of the evidence marshaled in support of that theory.

Meyer ends his book by explaining why this issue matters. The scientific case for intelligent design poses a serious challenge to the materialistic worldview so dominant today in the West. Materialism may seem liberating, but it has proven “profoundly and literally dispiriting.” It suggests we have no purpose in life, we are all accidents, nothing lasts beyond the grave, everything will be gone as the universe spins down to heat death. On the other hand, intelligent design says that the ultimate cause of life is personal, suggesting there is something beyond this life.

I spent a long time going through Signature in the Cell because I like wrestling with interesting concepts. I was only able to scratch the surface of the book’s content in this summary, but my goal was to pass along the main points I got and to arouse your curiosity to know more about this fascinating field of study.

Share

Signature in the Cell–Part 3

Here’s the third part of my summary of Dr. Stephen Meyer’s book, Signature in the Cell. Check the previous two blogs for the earlier part of the book.

Meyer then presents a positive case for intelligent design as the best explanation for the origin of the information necessary to produce the first life. He begins by saying there is no other adequate explanation as to the cause. Secondly, he claims there is experimental evidence to back up intelligent design as a cause. Here he mentions experiments that try to simulate prebiotic conditions; they “invariably generate biologically irrelevant substances.” In addition, he says intelligent design is the only known cause of specified information. He concludes that ID provides the “best, most causally adequate explanation of the origin of the information necessary to produce the first life on earth.” He considers other forms of specified information, such as radio signals, books, and hieroglyphics, and indicates that they always arise from an intelligent source, a mind rather than a strictly material process. In addition, Meyer refers to a groundbreaking book on design detection by William Dembski – The Design Inference. This book claims that we can detect the prior activity of other minds by the effects they leave behind, namely complexity and specification. His example is Mount Rushmore – the shapes etched in the rock face demonstrate intelligence behind them because they are complex and specific to four particular American presidents. Dembski’s theory applies to the cell’s information-processing system as well as to DNA itself. Even “junk DNA” has now been found to perform many important functions.

The last part of Meyer’s book defends the theory of intelligent design against various popular objections to it. Some complain that the case for intelligent design constitutes an argument from ignorance. But Meyer says that is not true. We already know from experience that intelligent agents do produce systems rich in information. This is an inference to the best explanation based upon our best available knowledge rather than an argument from ignorance. Another complaint about the design inference says, “If an intelligence designed the information in DNA, then who designed the designer?” Meyer finds it odd that anyone would argue it is illegitimate to infer that an intelligence played a role in the origin of an event unless we could also give a complete explanation of the nature and origin of that intelligence. It does not negate a causal explanation of one event to point out that the cause of that event may also invite a causal explanation. For example, nobody needs to “explain who designed the builders of Stonehenge or how they otherwise came into being to infer that this complex and specified structure was clearly the work of intelligent agents.”

A third complaint about ID is that it is simply religion masquerading as science. Critics say the theory is not testable and, therefore, neither rigorous nor scientific. But Meyer says different scientists and philosophers of science cannot agree about what the scientific method is, so how do they decide what does and does not qualify as science? He rebuts the critics in several ways. First, he says the case for intelligent design is based on empirical evidence, not religious dogma – information in the cell, irreducible complexity of molecular machines, the fine-tuning of the laws and constants of physics. In addition, advocates of intelligent design use established scientific methods, especially the method of multiple competing hypotheses. For another thing, ID is testable by comparing its explanatory power to that of competing theories. As an example, Meyer refers to junk DNA. Neo-Darwinism says this is an accumulation of nonfunctional DNA through mutational trial and error while ID proponents claim that there must be some biological function in this so-called “junk.” It turns out that recent discoveries indicate this type of DNA performs a diversity of important biological functions. To further bolster the idea that ID is scientific, Meyer goes on to say the case for ID exemplifies historical scientific reasoning, it addresses a specific question in evolutionary biology (how did the appearance of design in living systems arise?), and it is supported by peer-reviewed scientific literature.

Share