Which is to say, then, that understanding has nothing to do with the information we find in DNA.
DNA certainly does not understand the information encoded in it. Neither does a TV understand anything of the broadcast signal. In the same way, a piece of paper understands nothing of the grocery list written on it. Neither does the pen that was used to write the list. Neither does a hard drive or an ethernet port understand this conversation we are having.
Yet all these things convey information. Understanding has nothing to do with information.
This meaning is also called “semantics”.
This random data has a meaning assigned by you. It is a piece of data you want to see reliably transmitted, to check the accuracy of a communication channel.
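The idea can be sketched in a few lines. This is a made-up loopback scenario, not any particular protocol: the random bytes are meaningful to the sender only because the sender chose them and can compare them on return.

```python
import random

# Assumed scenario: the sender picks a random test pattern. Its only
# "meaning" is the one the sender assigns: "this exact data should come back".
random.seed(1)
test_pattern = bytes(random.randrange(256) for _ in range(64))

def channel(data):
    # An identity function stands in for a real transmission channel here.
    return data

received = channel(test_pattern)
# The sender alone can verify the channel, because only the sender
# knows what the "meaningless" random bytes were supposed to be.
assert received == test_pattern
```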
There is some irony here in your prior statement. Remember this exchange?
So, apparently one can assign semantic meaning to random data at will. And it can be transmitted even if you are the only party that understands its significance and how it should be interpreted.
This is the issue, though, @NonlinOrg. Who can actually quantify the amount of meaning or information in a message without understanding it? This is impossible. We never know if what looks like random noise to us is actually meaningful and important information to someone else.
DNA is certainly not designed to communicate with human scientists. We cannot even process short stretches of it in our brains, and have to rely on computer software at every step to even begin to think about it.
So, here is the real question, the real task we are faced with. I can give you two sequences:
- a sequence of DNA that is totally random (and I will not tell you how I generated it)
- a sequence of DNA of the same length that encodes a biologically important function.
We can extend this further. I can give you as many pairs of examples as you like (though not infinitely many =).
Please tell me: how will you determine the amount of information in each of these DNA sequences? Do you expect to be able to easily tell the difference between the two? How would you quantify the amount of information in them? Can you quantify the amount of order? Would you be able to determine which one was random vs. functional? What mathematical formula or algorithm would you use?
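For concreteness, the formula most people reach for first is per-symbol Shannon entropy. A minimal sketch (the sequence below is generated for illustration; it is not from any real genome):

```python
import math
import random
from collections import Counter

def shannon_entropy(seq):
    """Per-symbol Shannon entropy, in bits, estimated from symbol frequencies."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical stand-in for the "random" sequence in the challenge:
random.seed(0)
random_seq = "".join(random.choice("ACGT") for _ in range(300))

# This comes out near the 2-bits-per-symbol maximum. A real coding
# sequence of the same length typically does too, so the number by
# itself cannot separate the functional sequence from the random one.
h = shannon_entropy(random_seq)
```

Note the design limitation: the formula sees only symbol frequencies, so any two sequences with similar base composition get essentially the same score regardless of what, if anything, they mean.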
My point is that it is trivial for me to give you sequences where:
- Neither you nor experts in the field would be able to tell the difference between the random and functional sequences.
- Neither could anyone write a piece of software that could tell the difference.
- Neither could anyone design a biological experiment to discriminate the two sequences.
- Neither could anyone even compute the true entropy of these sequences. Because I generated the random sequence, only I would be able to tell you its true entropy.
- Neither could anyone even compute the true information content of these sequences. Because I know which one encodes the biological function and which one is random, you would have to give me different numbers for each sequence.
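The point about true entropy can be made concrete with a seeded generator (a sketch; the seed and length are arbitrary):

```python
import random

def make_sequence(seed, length=300):
    """Regenerate a 'random-looking' DNA string from a short seed.

    The entire string is determined by (seed, length): in the
    algorithmic-information sense its true description is tiny,
    yet nothing in the string itself reveals that.
    """
    rng = random.Random(seed)
    return "".join(rng.choice("ACGT") for _ in range(length))

# The generator, not the string, carries the knowledge needed to state
# the true entropy -- which is why only the person who produced the
# sequence can report it.
s1 = make_sequence(42)
s2 = make_sequence(42)
assert s1 == s2  # fully reproducible from the seed alone
```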
If you think I am wrong, you can always take me up on the challenge. We can see how far you get.
This is the core of the problem. If you cannot understand the data entirely, you have absolutely no way to confidently answer the important questions about it. Applying a formula to it, or qualitatively reasoning about it, gets you nowhere of consequence. You might as well just be staring at static on a TV screen. The fact that it looks like static to you tells you absolutely nothing about what it really is.
If this is true, and it is, what exactly is the information theory argument for Intelligent Design?