Actually, I read about it on the Wikipedia page; you’d have to take it up with those authors if you think it was written to mislead. And yes, the immediate intention of the Weasel program itself was limited to showing the effect of cumulative improvements and how this differs from utterly random chance, as he acknowledges and is quoted on the Wikipedia page I read.
Nonetheless, this remains one plank of his larger argument that an unguided process can achieve what would be utterly impossible if it were left to “mere” chance… and whether you like it or not, I still find great irony in the fact that he used a method demonstrating just how well selection works when front-loaded by a teleological intelligent agent in order to make this particular observation.
But since we’re on the topic, you wrote in the linked article…
True or false: the computer program in question is called WEASEL (or similar) and it demonstrates the stepwise generation of a famous phrase from Hamlet.
You must be using words differently than I am used to, because whatever debate we might have about what Dawkins intended to demonstrate with his stepwise generation of a famous phrase from Hamlet… that seems to be, as a matter of fact, exactly what it was: a demonstration of the stepwise generation of said phrase from Hamlet. What, precisely, is false about this description? The phrase was from Hamlet, and it was generated in a stepwise manner, no?
By repeating the procedure, a randomly generated sequence of 28 letters and spaces is gradually changed each generation. The sequence progresses as follows:
Generation 01: WDLTMNLT DTJBKWIRZREZLMQCO P
Generation 02: WDLTMNLT DTJBSWIRZREZLMQCO P
Generation 10: MDLDMNLS ITJISWHRZREZ MECS P
Generation 20: MELDINLS IT ISWPRKE Z WECSEL
Generation 30: METHINGS IT ISWLIKE B WECSEL
Generation 40: METHINKS IT IS LIKE I WEASEL
Generation 43: METHINKS IT IS LIKE A WEASEL
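For what it’s worth, the procedure behind that table is easy to sketch. Dawkins never published his source code, so apart from the target phrase, the 27-character alphabet (26 letters plus space), and the keep-the-closest-copy selection step that he describes, everything below (the population size, the 5% per-character mutation rate, keeping the parent in the selection pool) is an assumption of mine:

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 letters plus the space
TARGET = "METHINKS IT IS LIKE A WEASEL"

def score(s):
    """Count the characters that already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate):
    """Copy the string, changing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def evolve(copies=100, rate=0.05, seed=0):
    """Run cumulative selection until the target phrase is reached."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generation = 0
    while parent != TARGET:
        generation += 1
        # Breed mutant copies; keep whichever string (parent included)
        # most resembles the target.
        parent = max([parent] + [mutate(parent, rate) for _ in range(copies)],
                     key=score)
    return generation
```

With these made-up parameters `evolve()` typically takes a few hundred generations rather than Dawkins’ 43; the exact count depends entirely on the assumed population size and mutation rate.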
This process “generates” a “phrase” from “Hamlet”, “stepwise” seems an adequate adjective, and the program seems to be called “weasel”. Could you explain to this unlearned dotard what exactly makes this description “false”?
I’m sorry, but what you wrote provides no information at all about whether the particular solution in dolphins is a global optimum or not. It’s like coming upon a tall hill and saying, “That’s taller than any hill I’ve seen before. It must be the tallest mountain in the world.” As it happens, we have enough examples of convergent evolution of similar traits, each producing remarkable and quite different solutions, to conclude that any solution we see is unlikely to be optimal.
Right. Now, if you could just demonstrate that any such achievements exist, we would have something to talk about.
Marshall, thanks… could you explain further, as Stephen seems more interested in insulting me than in explaining anything, and I would in fact like to better understand if I am missing something…
I reread the chapter, and the description seems to hold completely… Dawkins describes writing a computer program (“I was obliged to program the computer”) that generates a phrase from Hamlet (“just the short sentence ‘Methinks it is like a weasel’”) using stepwise (“the target was finally reached in generation 43”), incremental changes (“mutations in the copying”) that cumulatively generate said phrase.
The description “the computer program in question … demonstrates the stepwise generation of a famous phrase from Hamlet” seems a perfectly accurate description of what Dawkins described.
Is the only “falsehood” Stephen is claiming the fact that Dawkins himself didn’t explicitly label this program “WEASEL” in the particular chapter in question?
My post goes step by step, and each step depends on the one before. What you are missing is in the first main point, here:
True or false: Richard Dawkins’ 1986 classic The Blind Watchmaker used a computer model (a simulation) as a key teaching device while explaining the effectiveness of cumulative selection in evolution. The program is the main focus of chapter 3 (“Accumulating small change”) of the book.
I have bolded a phrase that is particularly important, but any reading of The Blind Watchmaker would make my point obvious.
Daniel, if you’re still confused, feel free to message me. I know you’re smart and fluent in English so I don’t want to come across as holding your hand as we walk through the meaning of the three questions in Stephen’s blog post. You have read that post, right?
Incidentally, do you happen to know why that modest Weasel program has suddenly resurfaced as a live topic? I foggily remember discussions about it back during the Dover trial, but it seemed to disappear without a trace… until last week or so when it popped up at a few places.
OK, I read the post again and caught it… I didn’t catch the subtlety Stephen was using: trying to make it sound like he was talking about the weasel program when he was really talking about the biomorph program. I assumed, since he was specifically responding to ID proponents’ (mis)use of the weasel program, that he was discussing the weasel program. Silly me.
Couldn’t tell you any other reason it has come up… I only mentioned it because someone recommended Dawkins, and it is one reason I find him singularly unconvincing. Even if all he was trying to do with that program was illustrate nothing more than the ability of cumulative small changes to do what large wholesale changes could not, one would think he would have been more cautious in proffering an illustration that essentially demonstrates how well selection works when guided by an intelligent agent toward a teleological end.
I found it an interesting post. If it weren’t ten years old I would have replied to the commenter who thought it was an argument for fine-tuning because the program failed when the mutation rate was raised over 20% or so. From @sfmatheson’s article:
(“Small change” is the topic, remember.)
This is a very basic and very important aspect of the Darwinian mechanism, and yet it is maddeningly common to see it ignored or completely misunderstood.
Then later the fine-tuning commenter thinks he has a very good point, but he is forgetting that “small change” is the whole topic under discussion. So he adjusts the algorithm to instead be about large changes (mutation rates over a fifth or a quarter of the sequence), and he thinks it falsifies the thesis when it doesn’t work under the “large changes” condition, when the entire point is that small changes work better than large changes. I suppose if he increased the mutation rate to 100% and calculated the same ridiculously large improbability that Dawkins originally did in the chapter, he would try to throw that against the Weasel program, too.
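The point is easy to check with a toy version of the program. The sketch below is my own, not Dawkins’ code, and the population size, rates, and generation budget are all assumptions. Keeping the best string each generation (so the score never regresses), a small per-character mutation rate climbs to the full target, while a 50% rate stalls: once many characters are correct, a copy almost never fixes a wrong character without also scrambling several correct ones.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def score(s):
    """Count the characters that already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def best_after(generations, rate, copies=100, seed=1):
    """Best string found after a fixed number of generations."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    for _ in range(generations):
        if parent == TARGET:
            break
        # Keep the parent in the pool so the best string is never lost.
        parent = max([parent] + [
            "".join(random.choice(ALPHABET) if random.random() < rate else c
                    for c in parent)
            for _ in range(copies)], key=score)
    return parent

small = best_after(3000, rate=0.05)  # "small change": ~5% per character
large = best_after(3000, rate=0.50)  # "large change": half the string per copy
# The low rate reaches all 28 characters; the high rate stalls well short.
print(score(small), score(large))
```

That asymmetry is the chapter’s thesis in miniature: the high-rate run isn’t a counterexample to cumulative selection, it’s a demonstration of why the changes have to be small.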