Intelligent Design

This is a good time to talk about stochastic processes in machine learning. I am not a Ph.D. researcher in the field; I am merely pursuing an MS in Data Science from Indiana U. However, I understand the broad contours of what is happening, and how data science teams use machine learning in real-world projects.

Perhaps you have heard of the astonishing success of AlphaGo, the Google DeepMind neural network that recently defeated one of the all-time great Go grandmasters in a five-game match. Like other deep learning systems, AlphaGo used stochastic gradient descent together with backpropagation to “learn” to play Go better than any human. AlphaGo applied a stochastic mathematical process to an extremely large data set, with a feedback mechanism (backpropagation) that allowed the network to iterate until it settled into an optimized equilibrium.
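For readers who want to see the mechanism in miniature, here is a minimal sketch of stochastic gradient descent on a toy problem (plain Python; the linear model, data, and learning rate are invented for illustration and have nothing to do with AlphaGo’s actual architecture):

```python
import random

# Toy data: targets follow y = 2.5 * x; the "true" weight 2.5 is invented here.
data = [(x, 2.5 * x) for x in range(1, 11)]

w = random.uniform(-1.0, 1.0)   # start from a random guess
learning_rate = 0.001

for step in range(10_000):
    x, y = random.choice(data)      # stochastic: one randomly chosen example per step
    error = w * x - y               # evaluation: how far off is the current guess?
    gradient = 2 * error * x        # feedback: the direction that reduces the squared error
    w -= learning_rate * gradient   # take a small corrective step

print(f"learned w ~ {w:.3f} (target was 2.5)")
```

The loop is nothing but random sampling plus a corrective feedback step, yet it reliably converges on the right answer; deep networks like AlphaGo do the same thing over millions of parameters instead of one.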

To me, this bears a striking resemblance to the biological evolution of life. Stochastic gradient descent plays the same role that the biological mechanisms of genetic variation (including drift) play in evolution. Backpropagation plus evaluation functions play the same role as natural selection. The resulting optimized equilibrium resembles the ecosystem of life.

The AlphaGo result demonstrates how complex “intelligence” can emerge from rudimentary inputs. No one on the AlphaGo team can tell you what heuristics, strategies, or tactics AlphaGo is employing as it captures the stones of the world’s top Go masters. This is an important point: the “intelligence” that emerges from a stochastic mathematical process with bounded feedback is far greater than the intelligence of the inputs. In this way, neural network processing resembles biological evolution, in which marvelously ingenious systems have emerged from seemingly simple inputs.

Moreover, we can obtain a useful perspective on epistemology from AlphaGo and other neural networks. If you were an internal observer situated on a node of the neural network, you could observe some of the data, some of the processing, and some of the results. The stochastic gradient descent would seem ineluctably random. You would also observe that, combined with some constraints on processing, it yielded a remarkable equilibrium.

The logic behind it all, however, would be quite beyond your grasp. You would certainly not be able to infer a network designer. You would observe a kind of design, but you would not be able to infer that the design had an external origin. You would observe only that the design seemed to emerge as if by miracle from simple, often random inputs.

The situation of the internal observer is analogous to our situation as humans. We can observe the random processes and the constraints and the resulting equilibrium. We can measure it, perhaps even use scientific methodology to reverse engineer some of the equations embodied in the steps of the gradient descent.

An external observer, on the other hand (someone who knows the code, the data structures, and the network topology), would be able to see the overall logic, and how it all works together toward a particular purpose. In my analogy, the external observer is God.

To complete the analogy, I would say that we, like internal observers of the network, can only understand the existence of the designer by the designer’s self-revelation to us. Our scientific methodologies can help us understand much, but they do not provide the means to discern the designer or the ultimate design. Our ability to affirm an overall logic, an overall purpose, and even an overall designer comes by trusting the revelation that the designer provides to us.

Not at all. I am simply insisting on distinguishing between what can be learned from science and what can only be apprehended by faith.

I hope you find my modest contribution useful. Blessings on you and yours, Eddie!

EDIT:

You raise a question about the results of randomization in software programs:

Your questions assume a static system, i.e., a system whose every computation is fully specified in advance by its code. It is not surprising that random modifications to the code of a static system typically increase its entropy and decrease its functionality.

A neural network is not that kind of system. It is a dynamic system whose computations evolve under stochastic gradient descent, with backpropagation and an evaluation function supplying the corrective feedback.

The dynamic system model seems much closer to what we observe in biology than the static system model.
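To make the distinction concrete, here is a minimal sketch (plain Python, with an invented one-dimensional “fitness” function; it uses simple hill-climbing rather than backpropagation, but it illustrates the same contrast): random changes with no feedback just wander, while the very same random changes filtered through an evaluation step settle toward an optimum.

```python
import random

def fitness(x):
    # Invented evaluation function: higher is better, with a peak at x = 10.
    return -(x - 10.0) ** 2

def random_changes_no_feedback(steps=1000):
    # "Static" picture: every random change is kept, good or bad.
    x = 0.0
    for _ in range(steps):
        x += random.gauss(0, 0.5)
    return x, fitness(x)

def random_changes_with_feedback(steps=1000):
    # "Dynamic" picture: the same random changes, but an evaluation
    # step keeps a change only when it improves the score.
    x = 0.0
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x, fitness(x)

print("no feedback:  ", random_changes_no_feedback())
print("with feedback:", random_changes_with_feedback())
```

The first run typically ends far from the peak; the second reliably ends very close to it, even though both are driven by exactly the same kind of randomness.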

4 Likes

Unfortunately, whenever I’ve tried to explain these types of algorithms to evolution-deniers, they insist that “Human programmers wrote the intelligence into the software. An intelligent mind was necessary or else none of it would ever have happened.” Refusal to learn is almost like a vaccination: it prevents inconvenient outbreaks of intelligence and a maturing sense of intuition.

Instead of educating themselves on why these types of neural networks are so powerful, they solely focus on denying what makes them so amazing. Just as people used to fear science because it seemed too much like wizardry, some people are afraid that if software like this actually existed, it would be like “the dark arts”, and would pave the path to evolution and atheism.

Remember: “A mind is a terrible thing to change.”

2 Likes

Hello my fellow revelationist =). I agree, and I’m sure that @Argon does too. We know God through Jesus.

3 Likes

In a very kind and humble response to a request to back off from critiquing BioLogos as a whole for opinions expressed a long time ago, @eddie wrote…

The original post this is quoted from is very appropriate and humble. However, I was a bit stunned by this quote. I think it explains a lot about your position.

If you have never read Phil Johnson, that explains a great deal of your confusion about why we (or I) have difficulty with Johnson and ID (remember, Johnson was the father of the modern ID movement). You often complain about how ID proponents are reviled and persecuted in science. Recall that this started in the late ’90s but really came to a climax in the aftermath of Dover in 2005 (yes, 11 years ago). That was when all the major scientific bodies made official decisions to explicitly kick ID entirely out of science.

If you can’t be bothered to read or understand that whole history (from before 11 years ago), you will never understand why ID proponents were kicked out of science. When you ask why ID was kicked out 11 years ago, we have to go back to before that point to explain it. Of course you do not have to answer for them (frankly, their behavior was often indefensible), but that behavior is what got them kicked out.

Moreover, I am qualified to assess the science in the ID movement. I do not find their science convincing; often I find it dead wrong. At best, even when I agree with them, I think they are coming to the right conclusions by way of science-engaged philosophy, not by way of anything like modern science. This should not be a controversial point. In fact, even James Tour has written the same: http://www.jmtour.com/personal-topics/the-scientist-and-his-“theory”-and-the-christian-creationist-and-his-“science”/. He writes…

I have been labeled as an Intelligent Design (ID) proponent. I am not. I do not know how to use science to prove intelligent design although some others might. I am sympathetic to the arguments on the matter and I find some of them intriguing, but the scientific proof is not there, in my opinion. So I prefer to be free of that ID label.

@Eddie, I appreciate that you recognize that you (1) do not have the expertise to assess ID science and (2) are not really informed about the ID movement before 11 years ago. This, in my opinion, explains why you have difficulty understanding why most scientists dislike ID. You do not know the history of ID’s bad behavior in science, and you do not know why scientists consider ID to be bad science. You are therefore missing the two most important ingredients needed to understand why ID was kicked out of science. At this point, I’m not sure what more can be done to explain this.

4 Likes

Considering Walter ReMine…

Call me crazy, but I really like ReMine (even though he is wrong). He is one of the very few ID theorists who has actually proposed a design principle to explain actual patterns we see in nature. His book The Biotic Message is worth reading for that reason alone. He proposes that God is sending us a message in biology by ensuring that phenotypes fall into nested clades, because (he argues) this will disprove evolution and make it clear that there is one Creator. That, according to ReMine, is the design principle behind the patterns we see.

He goes on to correctly note that evolution does not always predict that phenotypes will fall into nested clades (i.e., fit a phylogenetic tree), because change along any of the tree’s edges can break the pattern. He therefore correctly recognizes that evolution does not predict strict adherence to nested clades (especially at the phenotypic level), whereas his Biotic Message does. (Take note, all creationists who point to violations of the nested-clade pattern as proof against evolution.)

Unfortunately for him, ReMine is falsified by the data. It turns out that most genetic features do fall into nested clades. However, because of horizontal gene transfer and incomplete lineage sorting, not all genetic patterns do. Phenotypes also do not always fall into nested clades, because of things like convergent evolution (at the phenotypic level) and, by ReMine’s own admission, ongoing change in organisms that keeps the nested-clade pattern from ever being perfect under evolution. So the data actually match what we expect from evolution and do not follow his Biotic Message design principle. Our genomes look like the product of evolution (consistent with neutral theory), not the perfect nested clades that ReMine predicted.

Of course, he did not even mention all the other patterns that neutral theory explains. No word on how the Biotic Message could explain these patterns (it can’t).

Nonetheless, I like ReMine because he is at least attempting what no one else in the ID movement seems willing to do: provide an explanation (beyond “goddit”) for the precise details of what we see. Even though he is wrong, if there is an ID theory that could be viable, it would probably look like this. I hear that Reasons to Believe is now working on this. I’m doubtful they will be successful, but curious nonetheless. Their path, at least, is a reasonable way forward, even though I would call it science-engaged theology.

2 Likes

Before diving into this, I should emphasize that in Biology we do NOT think that randomization + selection is enough to explain the full diversity of life. That simple version of Darwinism was falsified a long time ago. It turns out other mechanisms are quantitatively more important.

Still, this is a very strange request. Isn’t the answer to these questions well known?

So this is exactly what I have my PhD in, @eddie. One example we talked a lot about during my undergrad was the use of genetic algorithms to solve difficult design problems (e.g., aircraft wing design, http://flab.eng.isas.jaxa.jp/member/oyama/papers/SMC99.pdf, which has been more recently updated: http://enu.kz/repository/2011/AIAA-2011-5881.pdf). You can read about this on Wikipedia under “Wing-shape optimization.”

Closer to home, one of the most widely used drug-design software packages, DOCK from UCSF, uses a genetic algorithm to fit molecules into protein pockets: http://dock.compbio.ucsf.edu/. This program wasted about 2 years of my life in graduate school (for reasons unrelated to the genetic algorithm).

Now, of course, I looked on the internet to get those URLs, but I knew of these papers before searching. Remember, I am a professor and this is squarely within my domain of expertise: machine learning and artificial intelligence applied to biology.

Right now, genetic algorithms (i.e., Darwinian algorithms) are not used frequently because we have better algorithms, like stochastic gradient descent and simplex optimization. GAs are most useful when those other methods do not work well (because analytic gradients are unavailable or the gradients are poor hints) and/or when massively parallel resources are available (because genetic algorithms are trivially parallelizable). That said, in machine learning it is generally accepted that genetic algorithms are extremely effective on most problems (especially when there are multiple solutions), but they have two limitations:

  1. They are too slow to be preferred over gradient-descent-based methods when only one processor is used. A true “evolutionary” process, remember, automatically scales its processing power: a population of a million and a population of one hundred take the same generation time. In simulation, however, generation time scales linearly with population size, which makes GAs very slow in practice.

  2. They are good at finding good solutions, but sometimes struggle in the last steps to reach a very high-precision solution. This is one way in which simplex search does better than GAs, because it automatically scales down its step size as it closes in on the right answer.

In the context of evolution, neither of these limitations is relevant. In biology, generation time is independent of population size, approximate solutions are fine, large populations improve the search well beyond what is possible with a computer, there are multiple solutions, and the fitness landscape may be rough, with little gradient information to exploit. The biological design problem is different from the human design problem, and GAs suit biology far better than they will ever suit human design problems.
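For concreteness, here is a minimal genetic algorithm sketch in Python. The bit-string “design spec,” population size, and mutation rate are all invented for illustration; real applications like wing design or molecular docking use far richer encodings and evaluation functions. Note how the loop over the population makes each generation’s cost scale with population size on a single processor, which is exactly limitation 1 above.

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # invented "design spec"

def fitness(genome):
    # Evaluation: how many bits match the target design.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Randomization: flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Recombination: splice two parents at a random point.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]

for generation in range(200):
    # Selection: keep the fitter half and breed replacements from it.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(25)]
    population = survivors + children
    if fitness(population[0]) == len(TARGET):
        break

print(f"best genome after {generation + 1} generations:", population[0])
```

Randomization (mutation and crossover) plus selection is the whole algorithm; everything else is bookkeeping.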

Of course, nothing I’ve written here is new knowledge. It is obvious to anyone in the field. I thought it was well known, even in the ID community. Even Dembski’s work references this at times.

Of course they will be skeptical. This doesn’t make them right. And I have no idea what “genuinely Darwinian” means. The algorithm is randomization + selection → design. If that isn’t Darwinian, I do not know what is. It is not an elegant solution, and it can be slow on a computer (because generation time scales linearly with population size), but it is very, very effective. And biology does not even rely exclusively on this strategy.

5 Likes

I too have wondered why skepticism is often considered a decisive qualification before an appeal to authority.

And I have also been repeatedly surprised at how often the Argument from Personal Incredulity arises in my readings and interactions with ID advocates. It seems an odd and not at all compelling appeal to personal feelings. And I can’t help but notice that it usually comes from people not particularly conversant in the relevant fields of science, such as philosophers, attorneys, engineers, and physicians.

Dr. Swamidass, that was a sensational series of posts; thank you. Incredibly enlightening. In the future, when those same questions are raised (as they will be) and it is claimed they have never been answered, I’ll just link back to these posts of yours. You have certainly demonstrated, with immense clarity, the importance of being properly informed on a topic before making dogmatic statements about it.

3 Likes

I have read that summary about evolutionary algorithms a couple of times now. The explanation of their advantages and disadvantages was excellent. I learned a great deal from that.

2 Likes

{The following is a lot of reminiscing that will bore most and enrage others. It is an early-morning-hours indulgence for a worn-out academic…and it just started coming to mind: my first cellular automaton, my first design software, my first evolutionary algorithm, and Don Knuth and Doug Hofstadter unintentionally convincing me that I belonged in the humanities, not the math department. They could solve more problems over a casual lunch break than I could even begin to grasp the objectives of.}

Dr. Swamidass, you are bringing back some fun memories for me. I remember back in the days long before DOCK, the Internet [we still had ARPANET accounts], and even all but the very earliest microcomputers, this one microbiology professor on the campus (a large state university) was writing his own protein folding programs, and it was a really big deal, and quite novel.

It seemed like he and I were always hanging out together during what they called “open shop” on the university mainframe at 4am, because massive number-crunching programs and some special resources (like the VERSATEK plotter) had to be reserved in advance and run in the middle of the night when there were far fewer timesharing users on the network. And so just this professor and I found ourselves standing at the plotter watching our 3D (and sometimes 4D) graphics slowly emerge from what seemed like a magical device. That was probably 1981 or so?? I think he was even doing all of this stuff in FORTRAN, and his protein folding “art works” were such a thing of beauty.

Even a number of us who knew very little about his research all showed up the first night the color VERSATEK plotter purchased under an NSF grant was hooked up and available, even though most of us hadn’t yet written code to take advantage of the color features. We wanted to see his dazzling protein folding graphics in FULL COLOR, because he had already been including the color parameters in his graphics libraries so that he could take advantage of the colors once he had such a printer.

You young folks probably take such things for granted, but back in those days every little step forward in terms of additional computer memory, speed, output devices, etc. was such a big deal. Believe it or not, this professor had at one time printed out his protein folding “diagrams” as “ASCII art” via crude EBCDIC characters because that’s all he had before those black-and-white VERSATEK plotters were purchased. To go color was like the Dorothy-wakes-up-in-Oz scenes of Wizard of Oz. Very big deal: protein-folding in color.

Anyway, my first exposure to an evolutionary algorithm for problem solving was thanks to Don Knuth, who was visiting a colleague’s son who happened to share an office with my T.A., and we got to shooting the breeze late one night. Of course, Knuth’s brilliant mind left me in the dust in about two minutes, but he was already filling the blackboard (yes, that dates me) with Omicrons as he started working out an order-of-magnitude analysis of algorithms, imagining ways to “hybridize” evolutionary algorithms with other methods like Monte Carlo, N-dimensional min-maxing, and lots of stuff I didn’t understand. We had started out talking about specific design and problem-solving scenarios, and Knuth immediately started generalizing them into the kinds of pseudo-code algorithms that would probably eventually appear in Volume 29 of The Art of Computer Programming, his magnum opus.

Anyway, I know just enough on these topics to greatly respect guys like you who can say “This program wasted about 2 years of my life in graduate school (for genetic algorithm unrelated reasons).”

To situate this dinosaur (i.e., me) in a computing-era perspective, I remember when Conway’s Game of Life cellular automaton was published in Martin Gardner’s Mathematical Games column in Scientific American in 1970. It seemed to “go viral,” and within a few months there was a group of us, regulars at the late-night computer center sign-ups, who would often allocate little gaps in available computer time/memory to kick a “resume execution” of the LIFE program into the timesharing computer, which we might keep running for many weeks (thrashing in and out of core memory when there was room). Even though it was a trivially simple “biological life simulation,” there was so much to explore!

In order to create an “infinite-sized checkerboard,” I wrote what was probably my first “dynamic virtual checkerboard,” so that only live cells, never unused cells, took up memory—and that seemed quite nifty and ingenious at the time. (Otherwise, a simple two-dimensional array would exceed RAM for a board much bigger than about 10,000 x 10,000 cells.) And we would have contests to see who could come up with the longest-lived “repeating sprite,” the pattern that took the most LIFE generations to either stabilize or start repeating. From what I can see on Wikipedia, there are still a lot of students writing GAME OF LIFE programming assignments and getting carried away in their free time.
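In modern terms, that sparse trick looks roughly like this: a minimal Game of Life step that stores only the live cells in a set, so memory scales with the number of live cells rather than the size of the board (Python; the glider starting pattern is just for illustration):

```python
from collections import Counter

def step(live_cells):
    """One Game of Life generation; live_cells is a set of (x, y) pairs."""
    # Count live neighbors for every cell adjacent to a live cell.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Standard rules: birth on exactly 3 neighbors, survival on 2 or 3.
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A glider: it keeps crawling across the "infinite" board forever.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))
```

The board has no edges at all; a cell takes up memory only while it (or a neighbor) is alive, which is essentially the same memory-saving idea, just expressed with a hash set instead of hand-rolled dynamic storage.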

Even though such cellular automata are not evolutionary algorithms, they did seem to emulate living organisms, and they got me seriously thinking about the kinds of simulations that led to the AVIDA project and many others.

For those of us with even just a little bit of engineering background, those early VERSATEK graphics plotters (powerful color graphics monitors for animation, even for simple stick-figure cartoons, were still years in the future) and the beginnings of CAD/CAM made us feel cutting edge, because we could start designing all sorts of amazing things inside our software without consuming any metal, plastic, or other tangible material resources other than those wide green-bar continuous perforated paper printouts! It seems trivial and silly to think back on how “cutting edge” we felt, especially when today’s high school science fair students can experiment with genome design software, hook it up to a gene gun, and create their own GMO.

But that was the first time I began to grasp that complexity could be designed and generated by even trivially simple algorithms, which made the now-popular ID mantras denying evolution sound silly even 30 years before they appeared! Computer automation and ever-cheaper computer time on university mainframes started giving us opportunities to explore the power of very simple algorithms in ways von Neumann could only dream about. It didn’t take long before those experiences with design algorithms and general-purpose problem-solving algorithms started eroding my resistance to biological evolution theory. ;)

I kind of wish I could start over and learn things as your students do today…but we are all prisoners of our own eras.

I do wonder what would happen if IDers like Stephen Meyer and William Dembski were to take a graduate-school Analysis of Algorithms sequence and get practical experience writing evolutionary algorithms and the kinds of design software you are talking about. Surely that would eliminate some of the more lamentable scoffing arising from their “irreducible complexity” nonsense.

Thanks for the memories and flashbacks.

5 Likes

Unfortunately, when you posted that outstanding example of evolutionary algorithms used to design antennas for NASA space probes, a science-denialist simply pretended that the program (and countless design projects based on EA) did not exist. (Besides, if one knows people who are skeptical of whatever one feels like denying, entire fields of science can be wiped out with the wave of a hand. It’s almost magical! I don’t need to pay for cable TV to see far-fetched drama. I wouldn’t have believed it if I hadn’t watched it happen.)

When someone demands an example of evolutionary algorithms designing toasters and hypodermic needles, the reader begins to recognize that there’s not the slightest understanding of what’s being discussed. I’m astonished. (I also feel a flood of compassion. How extremely sad.)

3 Likes

So interesting to read of what those computers were really used for. I was in med school at Dallas in the mid to late seventies, and we used the computers in our lab primarily to play a rudimentary Star Trek game over the lunch hour and between lectures. ;{)

I’d virtually forgotten about that! We used to get annoyed that local high school kids would come to the campus to take up the “dumb terminals” (so called because they were just a keyboard and monitor communicating with the timesharing mainframe computer) and prevent us from checking on the status of our long-running programs.

And that brought to mind the PLATO system at University of Illinois that had terminals at major libraries all over the country. That system had the first programmable characters I had ever seen and the first CBES (computer-based education systems) I’d explored. I still recall a simulation of an ER patient being brought in after a serious auto-accident. You sat at the keyboard watching the patient’s vital signs displayed on the periphery of the screen and you would select various procedures and medical devices by pressing a number-key (no convenient mouse yet!). All the while a stopwatch was showing time drain away along with the life of the patient. No matter what I tried to do, it seemed like my patient always either bled out or died of shock.

Everything was billed back then according to the amount of RAM used, CPU time, and disk storage used (plus some priority cache disk space of about 64,000 bytes, which only senior faculty with NSF grants could afford; cache disk storage was the only storage that didn’t get “retired” to a magnetic tape backup if you didn’t access a file for 72 hours, so it was “use it or lose it,” and we all did the logical thing and wrote little programs we could execute merely to “touch” all of our stored files and reset their timers). And by the way, we didn’t even refer to bytes and kilobytes all that much, because those only applied to the IBM 360. Other computer companies had their own word lengths and basic storage units of measurement, such as the nuclear-weapons-simulation number-cruncher I worked on, which was engineered for double-precision arithmetic and whose addressable RAM unit was a double word: two “computer words” of N bits each. (I changed the actual number of bits to N because if I included the real number here, an experienced computer scientist from that era would know the manufacturer of the computer and be able to identify the state university where I was teaching. Computer architectures in those days were NOT all based on 8-bit bytes and disk storage expressed in kilobytes.)

You just triggered another memory that was a big deal at the time: some workers got orders to remodel and expand the computer center installation, which included knocking out some walls (made of concrete blocks) and removing some old-style hot-water/steam circulation pipes that ran through a room down the hall from the disk-platter storage units. The moment a workman started his jackhammer on the cement surrounding some of those buried steam pipes, the vibration transmitted straight through the steel-rod-reinforced concrete floor and promptly caused the read/write heads that “floated” over the sensitive surfaces of the hard-drive platters to “crash.” Something like five of the fifteen hard-drive “platter-stack” units in the computer room were immediately rendered useless and effectively destroyed. Anybody who didn’t have their research backed up on magnetic tapes in the tape library room lost all of their data.

There was a lot of screaming, yelling, and cursing—not only in the immediate vicinity, aimed at the workers, but also because the main timesharing computer went down, taking with it hundreds of time-sharing terminals all over the state, including the governor’s office and the state legislature. Yet the guy with the jackhammer was simply doing what the work order said. I never heard who got blamed for not thinking through the dangers. The cost of the equipment damage was staggering.

Of course, because nobody used evolutionary algorithms to repair the damage, we know thereby that EAs are worthless and can’t design anything!

Yes, evolutionary algorithms flunk The Toaster Test™.

Or so they say. (That Toaster Test™ is going to remain one of my all-time favorites! If you can’t design a new toaster with it, such algorithms just ain’t no good!)

I’ve definitely acknowledged that ReMine has tried to do something few ID theorists have: create a positive hypothesis for design. His book… not so great. He spends more time bashing evolution than actually elucidating his theory, and his dismissal of the evidence for lateral transfer and for the origin of the mitochondria and chloroplasts was weak. But at least he put something out there.

I agree that many find his theory was defeated before he even proposed it. Frankly, it makes no sense that a designer who wanted to create species in a way that wouldn’t look like evolution would use nested patterns of similarities. The fact that common descent caught on so readily as we learned more about fossils and living organisms pretty much kills ReMine’s theory. In actuality, there are other patterns that would have been much harder to confuse with evolution. Unfortunately, ReMine seems committed to never conceding the case, but perhaps that’s more a matter of not being willing to give up on a favorite idea. Most others have moved on.

And as I mentioned earlier, Todd Wood is putting something out there as well and trying to work out the details. Todd has the added benefit of maintaining some objectivity about the problems he faces and is realistic about how others will evaluate his work.

1 Like

Precisely because he is so objective and honest, Dr. Todd Wood has had so much difficulty surviving as an academic—and he gets almost nothing but scorn from his own Young Earth Creationist camp because he dares speak the truth to them. I respect his Christian testimony and how he conducts himself, even if I don’t find much of his research all that worthwhile. (“Baraminology” is doomed—which is why even Ken Ham exploits it only as a bragging point and not with a healthy budget allocation. His “AIG scientists” on staff are there for speaking engagements and window dressing. He won’t sponsor their research with any serious budget or facilities, so people don’t always stay on staff for long. And finances at AIG will get even tougher as the Ark Encounter starts draining the general-fund revenues just as rapidly as the Creation Museum has been doing for years now. A propaganda machine dependent on tourist dollars is unsustainable, especially when many of the people who are most supportive of AIG’s “educational efforts” are cash-starved homeschooling families. So baraminology won’t go anywhere even if there were somewhere for it to go.)

1 Like

An aside – It’s a small world…
ReMine has pursued baraminology with Kurt Wise. Ultimately a Baraminology Study Group was formed that included Todd Wood and Wise, among others. Sternberg also participated in the Baraminology Study Group and served on the journal’s editorial board as an outside (non-YEC) reference, apparently having been brought in because of his interest in structuralism or typology.

So he thinks that “ensuring phenotypes will fall in nested clades” will disprove evolution??? Hmmm. I always thought that was one of the types of evidence supporting evolution! (I have several obvious punchlines popping into my head, but because Our Creator is associated with this, I don’t consider such humor appropriate, so I won’t share them. But I’m certainly scratching my head about this.)

I admire your desire to put the very best interpretation and motives on Walter ReMine—I really mean that—but I really have to work hard to imagine God choosing that particular approach to “send a message”. Hmm. Very interesting.

Not the strongest and clearest message that I could imagine.

I have to say this coded messages idea is one of the dumbest things I’ve ever heard. It reminds me of the stupidity surrounding “The Bible Code” a few years ago.

3 Likes

Yes, it gets very difficult to put a positive spin on such things.

I find it interesting that some sort of Bible code fad comes back again about every 20 years.

There’s also variations on the Bible code ideas where things like Chinese characters are claimed to contain entire sermons and prophecies. People in many churches drink it right up—until those who are fluent in Chinese shake their heads and say “What!!!”

1 Like

Then we should be due for another one in about two years. Wm. Dembski, formerly of the Discovery Institute, reviewed The Bible Code favorably in First Things in 1998.