The Genetic Code: A Teleological Perspective


Despite its breathtaking diversity at the morphological level, life on Earth displays a remarkable unity at the biochemical level. With few exceptions, all lifeforms employ DNA as their hereditary material, proteins constructed from the same 20 types of amino acids as their building blocks, and RNA to bridge the two worlds through the genetic code.

As Morse code maps dots and dashes to the letters of the Latin alphabet, so the genetic code maps the codons of DNA to the amino acids of proteins. Like French, the language of the cell is easy: the codon “GUU” specifies valine, and so it goes all the way through. If in doubt, consult the figure below.
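To make the mapping concrete, here is a toy sketch (a deliberately incomplete table, for illustration only; the real code has 64 entries):

```python
# Toy illustration: the genetic code is simply a lookup table from
# three-letter RNA codons to amino acids. (Incomplete; the full code
# assigns all 64 codons.)
GENETIC_CODE = {
    "GUU": "Val", "GUC": "Val", "GUA": "Val", "GUG": "Val",  # valine
    "UUU": "Phe", "UUC": "Phe",                              # phenylalanine
    "AUG": "Met",                                            # methionine (start)
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",             # stop signals
}

def translate(mrna):
    """Read an mRNA string three letters at a time and look up each codon."""
    return [GENETIC_CODE[mrna[i:i + 3]] for i in range(0, len(mrna), 3)]

print(translate("AUGGUUUUC"))  # ['Met', 'Val', 'Phe']
```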

The genetic code is almost universal; a number of variants have been found, all of which are derived from the standard genetic code (Osawa et al., 1992; Knight, Freeland, & Landweber, 2001). In other words, no known precursors to the standard code exist.

The genetic code as evidence for common descent?
The near-universality of the genetic code has been cited as evidence for universal common ancestry (Crick, 1968; Hinegardner & Engelberg, 1963). According to geneticist Theodosius Dobzhansky, these biochemical universals are “the most impressive” evidence for the interrelationship of all life (1973, p. 128).

And indeed, from the non-teleological perspective this makes sense: If undirected abiogenesis had occurred several times, it would be an amazing coincidence if in every case the resulting organisms had struck upon the same genetic code. Therefore, universal common ancestry is the best explanation.

This changes the moment we throw teleology into the mix. Rather than having to choose between common descent and convergence, the investigator must now also consider the possibility of common design.

The genetic code as example of common design
Suppose that the first life on Earth consisted of a diverse population of engineered cells. Why would engineers employ the same genetic code instead of giving each cell its own code?

Before answering this question, let us ask a counter-question: Why not? What would be the point, from an engineering perspective, of reinventing the wheel? Making multiple codes is extra work and increases the risk of mistakes when genes have to be designed in different languages.

Not only is there no reason for engineers to adopt multiple codes, there is good reason to use the same code. If different cell types used different codes, they would be unable to tap into the power of horizontal gene transfer (HGT).

HGT plays an essential role in bacterial evolution, where genetic models indicate that substantial HGT is required for the survival of bacterial populations (Takeuchi, Kaneko, & Koonin, 2014). Though less common in eukaryotes, HGT is not restricted to bacteria. For example, a study found that ferns adapted to shade by horizontal transfer of a gene from the moss-like hornworts from which they diverged 400 million years ago (Li et al., 2014). HGT may even have played a role in the evolution of humans, with seaweed-digesting genes from ocean bacteria having found their way into the gut microbes of Japanese individuals (Hehemann et al., 2010).

In other words, categorizing the standard genetic code as an example of common design is not an ad hoc rationalization; rather, there is a good engineering reason for reusing the code.

A code facilitated by molecular machines
Microbiologist Franklin M. Harold describes the genetic code as “one symbolic language translated into another, with the aid of a universal apparatus of quite phenomenal sophistication” (2014, p. 222).

And indeed, the molecular machinery for translating DNA into proteins is impressive. In bacteria, the process requires RNA polymerase to unwind the DNA double helix and transcribe its sequence into messenger RNA; sigma factor to regulate the activity of RNA polymerase; at least 20 types of transfer RNA, one or more for each amino acid; and the ribosome, the protein-synthesis factory of the cell, where the messenger RNA and the matching transfer RNAs are lined up and the amino acids linked together, assembly-line style.

In eukaryotes, the process is even more complicated.

However, a purely physical description of the complexity involved in protein synthesis ignores the conceptual exceptionalism of the genetic code, as pointed out by Harold: One symbolic language translated into another.

This makes the genetic code a prime candidate for design. As Mike Gene points out, “experience has shown us that codes typically are the products of mind, and non-teleological forces do not generate codes. In fact, if the genetic code is taken off the table, there is no evidence that a conventional code employing a linear array of symbols has ever been spawned by a non-teleological force.” (2007, p. 281)

An exceptionally good code
Is there a reason why, say, “GUU” should specify valine? Or is the standard genetic code little more than a “frozen accident”?

Even a casual look at the genetic code indicates that there is a method in the madness. A single amino acid is often specified by several similar codons, as in the case of leucine, which is specified by CUU, CUC, CUA, and CUG. A substitution mutation in the last letter of such a codon will therefore have no effect on which amino acid is specified.

This logic extends to a deeper level. For example, a mutation in the first letter of a leucine codon may result in phenylalanine (UUU, UUC), an amino acid with chemical properties similar to leucine.

In other words, the standard genetic code seems to be constructed in such a way as to make the organism robust to the effects of mutations. But is there a way to quantify the level of this optimization and compare the standard code to other possible codes?

In 2000, a team of scientists led by Stephen J. Freeland of Princeton University published such an analysis. They concluded that with respect to substitution mutations, the standard genetic code “appears at or very close to a global optimum for error minimization: the best of all possible codes” (Freeland et al., 2000, p. 515).
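To give a feel for how such an analysis works, here is a minimal sketch in the spirit of (but much cruder than) Freeland et al.'s method. It is my own illustration, not their actual procedure or data: it scores a code by the mean squared change in Kyte-Doolittle hydropathy (a stand-in for the polar-requirement measure the actual study used) across all single-base substitutions, then compares the standard code against random codes that shuffle the 20 amino acids among the standard code's synonymous codon blocks:

```python
import random
from itertools import product

BASES = "UCAG"
CODONS = ["".join(c) for c in product(BASES, repeat=3)]
# Standard code with codons enumerated in UCAG order ("*" = stop).
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
STANDARD = dict(zip(CODONS, AA))

# Kyte-Doolittle hydropathy values, one per amino acid.
HYDRO = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9,
         "A": 1.8, "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3,
         "P": -1.6, "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5,
         "K": -3.9, "R": -4.5}

def cost(code):
    """Mean squared hydropathy change over all single-base substitutions."""
    total, n = 0.0, 0
    for codon in CODONS:
        if code[codon] == "*":
            continue
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                neighbor = codon[:pos] + base + codon[pos + 1:]
                if code[neighbor] != "*":
                    total += (HYDRO[code[codon]] - HYDRO[code[neighbor]]) ** 2
                    n += 1
    return total / n

def random_code(rng):
    """Permute the 20 amino acids among the standard code's synonymous
    codon blocks, leaving the three stop codons in place."""
    aas = sorted(set(AA) - {"*"})
    perm = dict(zip(aas, rng.sample(aas, len(aas))))
    return {c: a if a == "*" else perm[a] for c, a in STANDARD.items()}

rng = random.Random(0)
trials = 2000
std_cost = cost(STANDARD)
better = sum(cost(random_code(rng)) < std_cost for _ in range(trials))
print(f"standard code cost: {std_cost:.2f}")
print(f"random codes with lower cost: {better} of {trials}")
```

The fraction of random codes that beat the standard code under a measure like this is the "one in a million" style statistic the literature reports; the exact number depends heavily on the chosen property and on how the set of alternative codes is restricted.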

Substitution mutations are not the only type of mutation, though. In a frameshift mutation, a letter is added or deleted, disrupting the reading frame downstream of the mutation site. The result is a random string of amino acids that can gunk up the cell. Especially harmful are frameshift mutations that eliminate the “stop” codon, resulting in a string of random gunk that can be quite long.

The standard genetic code has as many as three “stop” codons, which seems excessive, considering that there is only one “start” codon. But having three “stop” codons instead of one increases the chances that a new “stop” codon will be encountered downstream in the case of a frameshift mutation.
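The arithmetic behind this point is simple: if 3 of the 64 codons are stops, a ribosome reading effectively random codons after a frameshift will hit a stop after about 64/3 ≈ 21 codons on average, versus about 64 codons if there were only one stop. A quick simulation (my own toy model, assuming a uniform base composition rather than the realistic codon-usage statistics used in the published analyses) confirms the intuition:

```python
import random

BASES = "UCAG"
STOPS = {"UAA", "UAG", "UGA"}  # the three stop codons of the standard code

def codons_until_stop(rng, stops):
    """Read random codons (uniform base composition) until one of the
    given stop codons appears, mimicking a frameshifted reading frame."""
    n = 0
    while True:
        codon = "".join(rng.choice(BASES) for _ in range(3))
        n += 1
        if codon in stops:
            return n

rng = random.Random(42)
trials = 20000
mean3 = sum(codons_until_stop(rng, STOPS) for _ in range(trials)) / trials
mean1 = sum(codons_until_stop(rng, {"UAA"}) for _ in range(trials)) / trials
print(f"3 stop codons: gibberish runs ~{mean3:.1f} codons before terminating")
print(f"1 stop codon:  gibberish runs ~{mean1:.1f} codons before terminating")
```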

The standard genetic code is made even more robust to frameshift mutations by having the sequence of the “stop” codons overlap with those of the codons specifying the most abundant amino acids. This feature, as Itzkovitz and Alon conclude, makes the standard genetic code “nearly optimal” at minimizing the harmful effects of frameshift mutations:

“We tested all alternative codes for the mean probability of encountering a stop in a frame-shifted protein-coding message. We find that the real genetic code encounters a stop more rapidly on average than 99.3% of the alternative codes.” (Itzkovitz & Alon, 2007, p. 409)

An interesting perspective is provided by a recent study by Geyer and Mamlouk (2018). Comparing the standard genetic code with one million random codes, they found that when measuring for robustness against the effects of either point mutations or frameshift mutations, the standard genetic code is “competitively robust” but “better candidates can be found”. However, “it becomes significantly more difficult to find candidates that optimize all of these features - just like the SGC [standard genetic code] does.” The authors conclude that when considering the robustness against the effects of both point mutations and frameshift mutations, the standard genetic code is “much more than just ‘one in a million’.”

The genetic code likely reflects a compromise between robustness against several different types of mutations. If the standard genetic code is the product of a teleological process, I expect that future analyses which incorporate and compare different types of robustness - like Geyer and Mamlouk (2018) - will further support the optimality of the standard code.

In the meantime, we can conclude that the standard genetic code is exceptionally good - one in a million and possibly better.

Did the genetic code evolve?
As we have just seen, the genetic code displays a remarkable level of optimization when it comes to protecting the organism from the effects of mutations. This fact is hard to reconcile with a view of the genetic code as nothing but a frozen accident. If the genetic code was not engineered, it must have been optimized by natural selection, going through countless codes before happening on the one employed by life today.

But we find no evidence of this long trek through the fitness landscape. All life today employs the same code, and the few variants that exist are all derived from the standard code, not precursors to it.

It is possible that the lineage in which the standard genetic code arose drove all those other lineages with precursor codes extinct. But that is hard to square with the fact that variants of the code exist today, with no evidence of being driven to extinction by their superior-coded competitors. Changing an organism’s genetic code may be hard (as evidenced by the limited number of variants), but once changed, the variant does not seem to significantly decrease the organism’s fitness. At least not to the extent where one genetic code drives all competitors around the globe to extinction.

Other explanations can no doubt be formulated. But explanations for the absence of evidence only establish the possibility that the code is the product of an evolutionary process (a possibility I already accept). They do not establish that such a process actually took place.

Conclusion and perspectives
The genetic code is a prime candidate for design. It is a symbolic language, translated into another by molecular machines. The universality of the code can easily be explained as a case of common design, as there is a good engineering reason for reusing the code. Furthermore, the code appears to be exceptionally good at protecting organisms from the effects of mutations - one in a million or better.

The evidence is thus consistent with a scenario in which the first life on Earth consisted of a diverse population of engineered cells, all of which used the standard genetic code. The variants of the code which we observe today are secondarily derived from this original code.

This scenario generates predictions, potentials for falsification, and avenues for further research.

For example, Freeland et al. (2000) conclude that the standard genetic code can only be considered “the best of all possible codes” if the set under consideration is limited to codes in which amino acids from the same biosynthetic pathway are assigned to codons sharing the first base. The researchers give a historical explanation for this restriction: the current code expanded from a primordial code. If this pattern persists (i.e. it is not an artifact of the researchers only looking at robustness against substitution mutations), the teleological scenario would expect there to be good engineering reasons for grouping amino acids from the same biosynthetic pathway together like this. On the other hand, if more sophisticated models underscore the need for historical explanations and/or show the standard genetic code to be mediocre, the teleological scenario will be in trouble.

The teleological scenario also predicts that all organisms have the standard genetic code or derivatives thereof. Scientists estimate that Earth has about one trillion microbial species, with 98 percent yet to be discovered. If, as we start finding and studying those, we find multiple variants of the standard genetic code that can reasonably be considered as precursors, the teleological scenario will once again be in trouble.

Thus, we see that teleological explanations, rather than being vacuous “the designer did it” proclamations, can generate testable insights about nature.


Crick F.H.C., 1968, “The Origin of the Genetic Code”, Journal of Molecular Biology 38(3):367-379

Dobzhansky T., 1973, “Nothing in Biology Makes Sense except in the Light of Evolution”, The American Biology Teacher 35(3):125-129

Freeland S.J., Knight R.D, Landweber L.F., & Hurst L.D., 2000, “Early Fixation of an Optimal Genetic Code”, Molecular Biology and Evolution 17(4):511-518

Geyer R. & Mamlouk A.M., 2018, “On the Efficiency of the Genetic Code after Frameshift Mutations”, PeerJ 6:e4825

Gene M., 2007, The Design Matrix: A Consilience of Clues, Arbor Vitae Press

Harold F.M., 2014, In Search of Cell History: The Evolution of Life’s Building Blocks, University of Chicago Press

Hehemann J.-H., Correc G., Barbeyron T., Helbert W., Czjzek M., & Michel G., 2010, “Transfer of Carbohydrate-Active Enzymes from Marine Bacteria to Japanese Gut Microbiota”, Nature 464(8937):908-912

Hinegardner R.T. & Engelberg J., 1963, “Rationale for a Universal Genetic Code”, Science 142(3595):1083-1085

Itzkovitz S. & Alon U., 2007, “The Genetic Code is Nearly Optimal for Allowing Additional Information within Protein-Coding Sequences”, Genome Research 17(4):405-412

Knight R.D., Freeland S.J., & Landweber L.F., 2001, “Rewiring the Keyboard: Evolvability of the Genetic Code”, Nature Reviews Genetics 2(1):49-58

Li F., Villarreal J.C., Kelly S., Rothfels C.J., Melkonian M., Frangedakis E., Ruhsam M., Sigel E.M., Der J.P., Pittermann J., Burge D.O., Pokorny L., Larsson A., Chen T., Weststrand S., Thomas P., Carpenter E., Zhang Y., Tian Z., Chen L., Yan Z., Ying Z., Sun X., Wang J., Stevenson D.W., Crandall-Stotler B.J., Shaw A.J., Deyholos M.K., Soltis D.E., Graham S.W., Windham M.D., Langdale J.A., Wong G.K.-S., Mathews S., & Pryer K.M., 2014, “Horizontal Transfer of an Adaptive Chimeric Photoreceptor from Bryophytes to Ferns”, Proceedings of the National Academy of Sciences 111(18): 6672-6677

Osawa S., Jukes T.H., Watanabe K., & Muto A., 1992, “Recent Evidence for Evolution of the Genetic Code”, Microbiological Reviews 56(1):229-264

Takeuchi N., Kaneko K., Koonin E.V., 2014, “Horizontal Gene Transfer Can Rescue Prokaryotes from Muller’s Ratchet: Benefit of DNA from Dead Cells and Population Subdivision”, G3 (Bethesda) 4(2):325-339

(Stephen Matheson) #2

I will not take a position on whether the genetic code is “designed,” but this sentence is the best place to point to the weakness in your whole piece (in my opinion): the mutations that the code is “protecting the organism from” are the kind that are caused by glitches in the translation machinery (mistranslations) and perhaps in the replication and proofreading machinery (point mutations and indels leading to frameshifts). So this optimization, while perhaps real (and I think it could be), is not a global optimization at all–at least we don’t have evidence that it’s a global optimization, a truly special design. It’s an optimization of the code due to co-evolution of the code and the systems that it is inextricably a part of.

To me, this just means that the code evolved, and I don’t think that is a subject of much controversy. There are lots of recent papers on the topic, more robust and recent than the citations you provide, and I pasted a few below. I do think that views on evolution of the code are evolving, and that the “frozen accident” postulate is on the wane. Thanks for raising the topic: it’s really interesting!

Genetic Codes with No Dedicated Stop Codon: Context-Dependent Translation Termination


I guess that depends on perspective. Where you see glitches and co-evolution, I see an almost compulsive obsession with error-correction: Cells employ proof-reading at every step in their information flow, whether it is DNA-DNA replication, DNA-RNA transcription, or RNA-protein translation, distinguishing between subtle differences in chemical bonds while doing their jobs very fast, with very few errors. And for those errors that do get through, you’ve got a genetic code that is exceptionally good at making sure that those errors have no effect on the resulting protein.

I want to correct an incorrect understanding of my post (which I am not accusing you of spreading, but which a reader can inadvertently get): I am not setting up and knocking down the strawman that evolutionary biologists all think that the genetic code is a frozen accident (some probably do, but as you point out, among those who study the origin of the genetic code, that view is very much on the wane).

Instead, I point to the very exceptional robustness of the standard genetic code in protecting organisms from the effect of mutations and argue that if this robustness is not the effect of engineering, it must have been optimized by natural selection, going through countless codes before happening on the one employed by life today. But we find no evidence of this long trek through the fitness landscape. All organisms alive today either have the standard genetic code or a variant derived from the standard code.

Thank you for the references, but from the abstracts it does not seem that any of them address the issues I raised. If you disagree, maybe you could quote from the papers where they address the points raised in my post, namely:

  • The standard genetic code is very exceptional in protecting organisms from the effects of mutations - one in a million or better
  • No precursors to the standard genetic code exist; all variants are derived from the standard code
  • Variants exist today with no evidence of being driven to extinction by their superior-coded competitors

(Stephen Matheson) #4

“Glitches and co-evolution” are explanations and, at least in principle, testable concepts. Judgments about “almost compulsive obsession” are, for me, statements about the speaker and not about the world, but are clearly not explanations or even attempts at explanation. In fact, such judgments are separate from explanation, which is good I think, since that means that people can enthuse about “design” and even “obsession” without necessarily making up silly magic stories.

That’s very interesting and I totally agree that we should put it in the category of “things that require an extraordinary explanation.” We may discover that the paucity of information is itself explainable historically, such that the evolutionary process that led to the SGC was “standard” but swept to universality. We may discover that the optimality of the SGC is situational, such that a different SGC would have arisen under somewhat different circumstances (this should not be swept under the rug: the code necessarily co-evolved with the machinery that uses it). We may discover a version of the “frozen accident,” in which the discovery of the SGC (from among competing variants) was unexpectedly fast, so that the “accident” was in fact a lucky break that cut short the exploration of the fitness space. And we may discover that we are wrong about the optimality of the SGC altogether, that in fact it is one good solution among many.

Ah well, first, I wasn’t addressing those questions with my references but was showing the audience that a lot of work on the evolution of the code is ongoing. I thought your reference list was sparse and dated. But re your main points, the first one is interesting and in need of explanation, but in fact I have already explained how I can think of at least one reason why your conclusion is exaggerated. Your second point is also very interesting but is essentially assertion of a negative and is far from a convincing argument against evolution of the code. Your third point is uninteresting and based on an erroneous premise, that being that a variant X cannot exist at time T if it is inferior to competitor X’. That’s a dumb claim.


How is “glitches and co-evolution” testable? What does the hypothesis predict? That we should find primitive cells with sloppy replication, transcription, and translation machinery and mediocre genetic codes? Is the hypothesis in trouble if those are not found? If not, what other observations would be problematic for the hypothesis?

The hypothesis that error-correction was built into the design of life does lead to testable predictions. As I wrote in my post, the hypothesis leads me to expect that further analysis of the standard genetic code will underscore its robustness for protecting organisms from different types of mutations. It also leads me to expect that more sloppy precursors to the genetic code will not be found.

Indeed, those are competing hypotheses to consider. The issue, of course, is how to test them.

This criticism would hit closer to home if my goal with the post had been to write a review of current evolutionary thought on the origin of the genetic code. But it wasn’t: My goal was to explore a teleological perspective on the genetic code, showing how (contrary to claims of being a vacuous “the designer did it” proclamation) a teleological perspective can provide insights and generate testable predictions.

I also included a short section on the possible evolution of the code, because it is, after all, a competing explanation. But I wasn’t going to spend a lot of time discussing a lot of scenarios for how the code could have evolved when it hasn’t been established (from my perspective, at least) that it did evolve.

You’ve pointed to the fact that future research could hypothetically change our view of the genetic code. That doesn’t change the fact that going by the results of the research having been done so far, the standard code is really, really good, with researchers arguing over whether it should be considered “the best of all possible codes” or ‘merely’ “much more than just ‘one in a million’.”

Another way of stating my second point is: “All known variants of the genetic code are derived from the standard code.”

As far as whether it’s a “convincing argument”, I’m not holding it out as some definite proof against the evolution of the code. I’m simply noting it as an observation that needs to be explained by those wishing to claim that the genetic code is the product of evolution.

Think about it: We’ve discovered multiple variants of the genetic code, but so far not a single one has been found that preceded the standard code. Maybe they’ve all gone extinct. Sure, but the derived versions all seem to be doing fine - including ciliates, where 12 variants of the standard code have been found.

From what we know about evolution, it is very good at adapting organisms to all kinds of weird and wonderful niches. Yet, in all this splendor of life, not a single species with a precursor to the standard code managed to nestle itself into some crevice of a tiny ecosystem and pass on its version of the code. Every single variant was driven to extinction in every ecosystem by competitors who had a slightly better code.

It would be, but that is not the claim I am making. Obviously, ciliates with variant codes have no problem coexisting with organisms with the standard code, even though their codes, according to Freeland et al. (2000), are inferior to it.

My third point (that variants exist today with no evidence of being driven to extinction) is relevant in evaluating the claim that the reason we don’t see precursors to the standard code is because they were all driven to extinction by superior-coded competitors. It goes against what we observe today (and indeed, what we know about how evolution works).

(Stephen Matheson) #6

If you doubt that these things are testable, in principle, then you and I don’t have the basis for conversation that I thought we did. In fact, if you doubt the testability of those explanations, then you have undermined the basis of the analyses you posted yourself. Please let me know if you actually believe the co-evolutionary explanations are untestable in principle. It will be my cue to leave the conversation.

As to whether “design” leads to testable predictions, I am a skeptic on that one since “designers” tend to be omnipotent and inscrutable. Where I thought we might have common ground is on optimization and perhaps someday on principles of design that can be separated from judgments like “almost compulsive obsession,” which are obstacles to clear thought. Again, I may have overestimated the extent of our common ground.

“Maybe they’ve all gone extinct” is kinda the story of all of evolutionary history. ANYTHING that exists in biology and especially in molecular biology is a “winner” compared to variants that would have or could have existed, either in principle or in actual competition. I think you are failing to “think about it.” Specifically, I think you are taking an observation – that the SGC is effectively universal – and extrapolating beyond what the observation and our knowledge of evolutionary processes can support. You are ignoring a major principle of ecology/evolution, and indeed the basis of the “frozen accident” hypothesis, which is that of contingency. Sometimes, being the first means winning. This is pretty basic, and highly relevant to our topic. My point is NOT that I accept or prefer either of the extreme answers (frozen accident versus globally optimized solution); in fact I think both are wrong. My point is that the mere existence of a global solution is far from evidence for a globally optimized solution, and contingency is just one big reason why.

Yeah, this is just wrong. See above. The existence of variants at time T is simply not a strong piece of evidence about what was going on at an earlier time, most especially when the system co-evolved and has continued to co-evolve since the establishment of the presumed status quo. This is a weak argument, and you should reconsider whether you are basing your claims on “what we know about how evolution works.” Your claim entails a position that would, to choose just one example, argue against the stability of any kind of diversity. It’s a really weak argument.


The most obvious test would be a sequence comparison for these genetic systems. Common descent and evolution predict that we should find a phylogenetic signal in these sequence comparisons. I don’t see why we would find a phylogenetic signal in this data if common design is true, since a common designer wouldn’t be concerned with changing the sequences of these genes so that they produce a nested hierarchy.

All modern organisms are at the ends of evolved lineages that go back billions of years, so it makes sense that they would all have highly evolved genetic systems that reduce errors, since such systems are advantageous.

(Chris Falter) #8

Evidence in favor of your argument: the vast majority of PCs still ship with a Microsoft Windows OS.



I don’t doubt that “glitches and co-evolution” is in principle testable. If that is as far as you’re willing to extend your argument, I am more than happy to concede that point. But mind you, I consider it a fairly weak claim - pretty much every claim about the natural world is in principle testable.

My questions were concerned with whether “glitches and co-evolution” is in actuality testable. What would a scientist do if he wanted to test the “glitches and co-evolution” hypothesis? Now, if you don’t wish to extend your argument that far, I am perfectly fine with leaving things at the common ground we do share.

Could you elaborate on this? If you find designers inscrutable, why would you think we could reach common ground on the issue of optimization? Why should we expect optimal designs if the designer is inscrutable?

For the record, I don’t know that the designers are supernatural, and I honestly have no idea how we’d find out. And I don’t see how we can conclude that designers are inscrutable. For example, imagine that NASA’s Pioneer 10 spacecraft was picked up by an extraterrestrial civilization. They might not understand our purpose in sending a drawing of a naked man and woman into space, but they would be able to understand the engineering principles behind Pioneer 10 itself. If one of the extraterrestrial scientists said something to the effect of “Gee, I don’t know why this electricity-producing generator is connected with this electricity-consuming instrument, since these unknown designers are so darn inscrutable,” his colleagues would probably look at him funny with their black, almond-shaped eyes.

Yes, but rarely does “winning” mean that you drive every single other species on the planet to extinction. The first eukaryotes did not drive all bacteria to extinction. The first multicellular organisms did not drive all unicellular organisms to extinction. The first flowering plants did not drive all non-flowering plants to extinction. DNA-based viruses and RNA-based viruses exist side by side. Etc., etc. And yet, when it comes to the standard genetic code, every piece of evidence that any precursors ever existed has conveniently disappeared.

Different conditions in the distant primordial times can always be imagined, and reasons for the lack of evidence can always be constructed. I am simply pointing out that if any precursors to the standard genetic code ever existed, we don’t have any evidence of it.

Sequence comparisons and the nested hierarchy can only show us that all organisms are descended from the Last Universal Common Ancestor (LUCA). But as far as we can tell, LUCA already had the standard code. So those lines of evidence can’t tell us anything about what went on before LUCA arrived on the stage.

Scientists used to consider the genetic code a frozen accident, not a highly evolved adaptation. Did those scientists not understand evolution?

Also, in the thread on the recurrent laryngeal nerve of the giraffe we learned that the current route of the nerve is a contingency of history that can’t be changed because it’s so entrenched.

But now, we learn that the genetic code, which is at the core of protein synthesis, is so fluid that it has been evolving for millions of years.

Is there a way of knowing in which cases to predict contingencies of history and in which cases to predict highly evolved adaptations? Or do we need to see what exists first?


I guess I am a bit confused then. I thought you were saying that divergent species (e.g. humans and spiders) had the same genetic molecules because they had the same designer and did not have the same genetic systems because of common descent. Am I reading this incorrectly? For example, you wrote:

"And indeed, from the non-teleological perspective this makes sense: If undirected abiogenesis had occurred several times, it would be an amazing coincidence if in every case the resulting organisms had struck upon the same genetic code. Therefore, universal common ancestry is the best explanation.

This changes the moment we throw teleology into the mix. Rather than having to choose between common descent and convergence, the investigator must now also consider the possibility of common design."

That seems to be arguing against common descent from LUCA. Am I wrong?

They may not have understood the evolution of the basic genetic systems. Nonetheless, they still had all living species inheriting the same genetic systems from a universal common ancestor.


We have plenty of evidence that humans and spiders share a common ancestor, apart from the genetic code. The nested hierarchy, for example.

(Stephen Matheson) #12

Sure. To me, it is a basic fact that design is something that we can detect and discuss and try to understand without reference to a designer (or Designer). This is one of the only things on which ID and I agree.

Optimization is merely the process of finding optima. Gods can do it, Alanis can do it, algorithms can do it, and the Blind Watchmaker can do it. Attaching “optimization” to a person is silly. In my view, shared by some but not all skeptics, attaching “design” to a person is also silly. Daniel Dennett makes this case–that we can seek design without a designer–most clearly and forcefully, and I agree strongly with him.

Furthermore, IMO, once you have a designer attached to your “theory” of design, you have serious problems. For one, if the designer is omnipotent, you abandon falsifiability, in principle, for any claim about design. I have written about this here. Second, you have to at least acknowledge the relevance of questions about the designer’s intent and her/his/its motivations. Phrases like “almost compulsive obsession” come to mind, but much more importantly there are questions about whether optimization is even the goal of the designer. A “design” that is wasteful or cruel might very well be exactly what some designers want. The conversation becomes insipid and even debasing, and it hinders the rational consideration of the designs themselves. To me, “design” is something interesting and important to understand. “Designers” are an invitation to madness whenever the designers are unseeable, unknowable, and superpowered.

You misunderstood me to be making a blanket claim about designers, but I used the phrase “tend to be omnipotent and inscrutable,” which is an indisputable fact about the force lurking behind 97-ish% of the conversations about intelligent design. Not all designers are inscrutable. All the designers behind ID are, of necessity, inscrutable.

You don’t know that and, worse, you are talking about “species” when the events in question are presumably very early in the evolution of life. Your statements suggest to me that you are picturing something utterly different from the earliest stages of evolution. This explains your errors about competition and fixation. You simply aren’t discussing the same event as I am.

Worse, you seem to actually believe that a biosphere-wide selective sweep is somehow impossible, despite the fact that extinctions, even global ones, are among the most obviously ubiquitous facts of natural history. I think your reasoning is very far off here.

(Larry Bunce) #13

The fact that all life uses the same genetic code suggests that it evolved before the earliest fossil record we can clearly read, some 500 million years ago, and that any changes since then proved fatal, with the few exceptions that have been found. Current hypotheses on the origin of life talk about self-replicating chemical reactions that became more and more complex over time. DNA and RNA evidently worked so well to build complex living organisms, as well as to allow them to adapt to changing conditions, that they completely took over.
Naming it the genetic “code” implies that it was created by intelligence, something that scientists did not intend. The word “code” describes how it works, but the genetic code is the result of a chemical reaction, more akin to the fact that atoms of hydrogen and oxygen combine to make water than to a human-designed blueprint or secret code.


Dear @sfmatheson. As you talk about “ID” and “97-ish% of the conversations about intelligent design” I should probably specify a few things:

First, I don’t consider myself as part of “the ID Movement”. Yes, I have a hunch that the first life on Earth was engineered (i.e. “intelligently designed”), but I don’t share the Discovery Institute’s interest in socio-political reform.

Second, I’m an agnostic, and I have nothing riding on the engineers being supernatural or inscrutable. My views are entirely consistent with natural engineers acting in accordance with physical laws.

Now, as to some of your specific objections:

Yes, the Blind Watchmaker can act as a designer-mimic. But as the watchmaker is blind, “it does not see ahead, does not plan consequences, has no purpose in view,” to quote Dawkins. That is why things like the laryngeal nerve of the giraffe, the blind spot in the vertebrate eye, and the pseudo-thumb of the panda are exactly the kinds of thing we should expect as products of a blind watchmaker, constrained by historical decisions which were made without their long-term consequences in mind. And also why the “frozen accident” hypothesis for the genetic code was so readily accepted by the scientific community.

I don’t know that the designers are omnipotent, and I have no idea how to find out. To the Incas, I’m sure the first Spanish conquistadors must have seemed omnipotent. Same for us, should we ever be contacted by an extraterrestrial civilization capable of crossing the chasm between habitable worlds. As Arthur C. Clarke’s dictum goes, “Any sufficiently advanced technology is indistinguishable from magic.”

Now, I tentatively infer design at the origin of life, as this leads to testable predictions. Is it possible that the designer is an omnipotent, capricious being, who has also designed the specks of dust on my windowsill? I suppose I can’t rule it out. But it doesn’t lead to any testable predictions, so I’m going to live my life as though the specks landed that way by chance.

I don’t assume the engineers are wasteful, as that doesn’t seem consistent with how life is organized. If you want to formulate a model in which the engineers are wasteful, you are more than welcome to give it a shot.

I don’t really know how to assess “cruelty” when it comes to a population of single cells. Are human scientists being cruel when they test for resistance by exposing bacterial populations to antibiotics?

Okay, let’s put aside the existence of variant codes. Pretend like I never mentioned them.

What is it that convinces you that precursors to the standard genetic code existed? How would we know if they didn’t?

(Stephen Matheson) #15

Re the first question, I consider it an intellectual discipline to assume that things in the biosphere (or the whole world, for that matter) have precursors. It is, to me, intellectually irresponsible (lazy, careless, self-deluding) to reach first for superbeings to answer questions. So, right off the bat, I consider precursors to be a null hypothesis. This is what we all do, all the time, in science and in life. We have no evidence that our friends were conceived naturally, or in many cases that they were conceived at all, but only the truly insane even wonder about whether their colleagues or neighbors or housemates began as synthetic teenagers on a cloaked alien space station in orbit over Roswell. The absence of evidence against this proposal is not reasonably taken to be a rational counterargument. Your question flirts with this kind of irrationality.

But then, more concretely, there is the moderately-sized but robust literature on the biophysics and phylogenetics of translation systems and the genetic code. (Try looking up evolution and comparative aspects of aminoacyl-tRNA synthetases (AARSs)). This literature contradicts a simplistic view of the code as some kind of unexplainable event, and instead addresses questions about its origins and evolution using knowledge of its components and expanding knowledge bases from genetics and biochemistry. It’s not that these scientists “know” that there are precursors. It’s that they know that there could be precursors, and then they assume that these precursors can be understood by mortals. That’s all you ever need to do science. Reaching for superbeings is a big mistake.


Coming back to the “frozen accident” for the moment . . .

Do we know that the current arrangement of codons, tRNAs, and amino acids is the only optimum? Is the cloverleaf pattern of tRNA the only molecular shape that could be used for protein binding and RNA binding? Are there possibly other genetic molecules that would work as well, such as PNAs? I don’t think we can really answer those questions. There may have been a “frozen accident” moment, but it may have been before codons were set in stone.

Those predictions seem rather post hoc, and they also don’t differentiate ID from known natural mechanisms. That seems to be two problems that I see.


In other words, it’s a methodological assumption. I can live with that.

I readily accept that there could be precursors. After all, many evolutionary scenarios are mutually contradictory yet each of them could be true.

The set of 20 amino acids employed by life also shows evidence of optimization. In their paper, “Extraordinarily Adaptive Properties of the Genetically Encoded Amino Acids”, Ilardo et al. (2015) investigated how well life’s alphabet of amino acids covers “chemistry space”, defined as their relevant physico-chemical properties, compared to random sets of amino acids. From the abstract: “Sets that cover chemistry space better than the genetically encoded alphabet are extremely rare and energetically costly.”
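The comparison Ilardo et al. make can be caricatured in a few lines of Python: sample many random alphabets of the same size from a larger pool of candidates, and ask how often a random alphabet covers property space at least as well as a fixed one. Everything below is an illustrative stand-in: the pool, the three property axes, and the coverage measure are invented placeholders, not the paper’s measured data or its actual metric; only the sampling logic is the point.

```python
import random

# Hypothetical normalized property values (e.g. size, charge, hydrophobicity)
# for a pool of 50 candidate amino acids -- invented numbers for illustration,
# not the measured values used by Ilardo et al. (2015).
random.seed(1)
POOL = {f"aa{i}": (random.random(), random.random(), random.random())
        for i in range(50)}

def coverage(names):
    """Crude 'chemistry space' coverage: product of the per-axis ranges
    spanned by the set's property values (bigger = broader coverage)."""
    vals = [POOL[n] for n in names]
    vol = 1.0
    for axis in range(3):
        axis_vals = [v[axis] for v in vals]
        vol *= max(axis_vals) - min(axis_vals)
    return vol

encoded = list(POOL)[:20]          # stand-in for life's 20-member alphabet
target = coverage(encoded)

# How many random 20-member alphabets cover chemistry space at least as well?
trials = 10_000
better = sum(coverage(random.sample(list(POOL), 20)) >= target
             for _ in range(trials))
print(f"{better}/{trials} random sets matched or beat the fixed set")
```

With real property data, a result like the paper’s (“sets that cover chemistry space better … are extremely rare”) would show up here as a very small `better/trials` fraction.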

As for the other parameters you mention, I don’t know. I expect that as we start designing organisms ourselves, we’ll have a chance at second-guessing the Blind Watchmaker. And if life was engineered, my prediction is that life’s solutions will turn out to be pretty darn good.

Does conventional evolutionary biology make any predictions in this regard? Should we expect life’s solutions to be exceptionally good or frozen accidents?

The teleological scenario predicts that of all the microbes we have yet to discover, we will not find any whose code can reasonably be described as a precursor to the standard genetic code. In fact, finding such precursors will mean the scenario is in trouble. Does conventional evolutionary biology make the same prediction? Will conventional evolutionary biology also be in trouble if precursors to the standard genetic code are found?

The teleological scenario predicts that further analyses of the standard genetic code will underscore its robustness. And again, the opposite finding will mean the scenario is in trouble. Does conventional evolutionary biology make the same prediction? Will conventional evolutionary biology also be in trouble if the standard genetic code turns out to be mediocre?
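“Robustness” here can be made concrete with the classic error-minimization test (in the style of Freeland & Hurst’s work on the genetic code): compute how much a single-nucleotide substitution changes some amino-acid property under the standard code, then compare that against randomly shuffled codes. The sketch below substitutes the well-tabulated Kyte-Doolittle hydropathy scale for the “polar requirement” measure the original studies used, and shuffles amino acids across the standard code’s synonymous blocks; it is a minimal caricature of the method, not a reproduction of any published analysis.

```python
import random
from statistics import mean

# Kyte-Doolittle hydropathy per amino acid (one-letter codes).
HYDRO = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9,
         "A": 1.8, "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3,
         "P": -1.6, "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5,
         "K": -3.9, "R": -4.5}

# The standard genetic code, codons ordered UUU, UUC, ... GGG ("*" = stop).
BASES = "UCAG"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]
STANDARD = dict(zip(CODONS,
    "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"))

def error_value(code):
    """Mean squared hydropathy change over all single-nucleotide
    substitutions between sense codons (lower = more error-robust)."""
    costs = []
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                neighbour = code[codon[:pos] + b + codon[pos + 1:]]
                if neighbour != "*":
                    costs.append((HYDRO[aa] - HYDRO[neighbour]) ** 2)
    return mean(costs)

def shuffled_code():
    """Random code: permute which amino acid each synonymous block encodes,
    keeping the block structure (and stop codons) of the standard code."""
    aas = sorted(set(STANDARD.values()) - {"*"})
    perm = dict(zip(aas, random.sample(aas, len(aas))))
    return {c: (aa if aa == "*" else perm[aa]) for c, aa in STANDARD.items()}

real = error_value(STANDARD)
rand = [error_value(shuffled_code()) for _ in range(1000)]
frac = sum(r < real for r in rand) / len(rand)
print(f"standard code error = {real:.2f}; "
      f"{frac:.1%} of random codes do better")
```

On this kind of test, the published result is that only a small fraction of random codes beat the standard one; a “mediocre” standard code would instead land somewhere in the middle of the random distribution.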

Ilardo, M., Meringer, M., Freeland, S., Rasulev, B., & Cleaves, H. J., II (2015). “Extraordinarily Adaptive Properties of the Genetically Encoded Amino Acids”. Scientific Reports, 5:9414, 1–6.

(Haywood Clark) #18

[quote=“Krauze, post:17, topic:39080”]
The set of 20 amino acids employed by life also shows evidence of optimization.[/quote]
Hello Krauze,

I think you’re missing something huge here.

First, if we’re talking about proteins, they are amino-acid residues.

Please bear with me, as I am not being pedantic. Do you know how many different residues have been found in life’s proteins? If you do, how do you explain the difference between that number and 20? If you don’t know, key words that will help you to learn more are kynurenine, N-myristoyl-glycine, and hydroxyproline.

“Good” in this sense is not a scientific prediction, but if you’re willing to engage on the number of different amino acid residues found in proteins, I can show you one of life’s solutions in the translation machinery that’s pretty darn bad, because it was constrained by evolution!

The general prediction comes from the constraints on evolution, and specific predictions have been remarkably accurate. I’m just amazed that you would choose translation, as it’s one of the most illogically (from a design perspective) constrained biological systems around!

Your inclusion of “reasonably be described as” takes this completely out of the realm of scientific predictions. To be useful, they must predict what you will observe directly and not allow you–or anyone else–any wiggle room. The same goes for your inclusion of “robustness” in your second prediction.

(Matthew Pevarnik) #19

Ah, thanks for sharing! I had no idea that the majority of amino acids are non-proteinogenic :open_mouth:

I agree that “good” or “bad” do not make any sense in a scientific sense. Just like I think that “designed” vs. “non-designed” also do not make any sense.

Also, welcome to the forums Haywood, it appears this is your first post!

(Haywood Clark) #20

Thanks, Matthew!