Years ago a powerful comptroller of the company I worked for told me of a lesson he learned as a young accountant. At the time his job was reviewing expense accounts. He had been trained to go through hotel receipts, airline tickets, meal receipts, and so on, to be sure everything matched. It was, in effect, an algorithm he had been trained to execute. One day his boss called him in and asked him about a particular expense report. Dave (as I'll call him) went through all the things that needed checking and told the boss, "Nope, I don't see what the problem is." To which the boss replied, "This guy never travels for our company!"
I tell this to illustrate a point raised by Matthew Connolly when he asked how the human mind apprehends math, given that math is not material: we can't taste it, feel it, or smell it. That question is what got me thinking along the lines I am laying out here.
The story above illustrates that algorithms are rote: they follow the paths the programmer put into the program, and they are not very creative. I think it is in creativity that humans show that their minds are not algorithmic, as they would have to be if consciousness were an epiphenomenon of the brain's computations. The consensus view is that the human mind is the result of computation, which of course requires algorithms, but there is much to indicate that algorithms won't capture what the human mind can do.
Consider what Denning says:
"Today a different interpretation of thinking is challenging the old idea. Many of us believe that thinking is not logical deduction, but the creation of new ideas. Logical deduction seems too mechanical. When we recall our moments of insight, we often say that our emotional state affected us and that we had a bodily sense of our creation before we could put it into words. We regard thinking as a phenomenon that occurs before articulation in language, and it seems that machines, which are programmed inside language, cannot generate actions outside language." Peter J Denning, "Is thinking computable?" American Scientist March April 1990, p. 3
This limitation of the language used to program the algorithm is what makes it hard for computers to do creative thinking. A computer would, like my accountant friend, be unable to see outside the algorithm and create an entirely new line of thinking, such as a new interpretation of an old phenomenon. Denning continues:
"Like a system of logic, an interpretation cannot include all phenomena. Our powers of conscious observation give us a capacity to step outside a particular interpretation and devise extensions or alternatives. Thus consciousness itself cannot be captured by any fixed description or interpretation . How then can consciousness be captured by an algorithm, which is, by its very nature, a fixed interpretation? This question applies also to algorithms that are apparently designed to shift their interpretations, because, the rules for shifting constitute an interpretation themselves ." Peter J Denning, "Is thinking computable?" American Scientist March April 1990, p4
I'm going to illustrate what Denning is saying about interpretations. Consider programming a computer to decide the truth or falsehood of various statements, and suppose it runs into the sentence
The present king of France is bald (a sentence first used, as far as I know, by Bertrand Russell)
Interpreting this sentence as a statement about a non-existent king, we know immediately that the sentence is false. We don't know it because the present king of France has hair; we know it because there is no present king of France, and anything said about him is irrelevant. Note that I didn't say meaningless: we all perfectly understand the assertion that the poor king has no hair.
So, if we program the computer to recognize that this sentence is false, and the computer uses the normal rules of logic and negation (the negation of a false statement is true), then we find the computer proclaiming the sentence
The present king of France is not bald
to be true! Of course this is false as well, because there is no present king of France.
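The failure can be sketched in a few lines of code. This is my own hypothetical illustration, not anything from Denning; the sentences and function name are invented for the example:

```python
# A minimal sketch of a truth evaluator that applies the classical negation
# rule blindly. The program's whole "world" is the single fact that France
# currently has no king.

def evaluate(sentence):
    """Naively decide truth, treating negation as flipping a truth value."""
    if sentence == "The present king of France is bald":
        # No such king exists, so the program marks the sentence false.
        return False
    if sentence == "The present king of France is not bald":
        # Classical rule: the negation of a false sentence is true.
        return not evaluate("The present king of France is bald")
    raise ValueError("sentence not in this program's repertoire")

print(evaluate("The present king of France is bald"))      # False
print(evaluate("The present king of France is not bald"))  # True -- yet there is no king!
```

The program does exactly what it was told, and that is the problem: it cannot notice that both sentences share a failed presupposition.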
So we go back to the drawing board and reprogram the computer to drop the law of the excluded middle for this specific paradoxical sentence alone (leaving the excluded middle alive for the other sentences to which it applies). The excluded middle says:
"for every signifcant sentence, either it or its negation must be true
So now we program the computer with this exception to that rule, and we are happy. Except for…
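The patched program might look something like this. Again a hypothetical sketch of my own; the exception list and names are invented for illustration:

```python
# A sketch of the reprogrammed evaluator: sentences whose presupposition
# fails (there is no present king of France) go on an exception list, and
# the excluded middle is suspended for that pair alone.

EXCLUDED_MIDDLE_EXCEPTIONS = {
    "The present king of France is bald",
    "The present king of France is not bald",
}

def evaluate(sentence):
    if sentence in EXCLUDED_MIDDLE_EXCEPTIONS:
        # Both the sentence and its negation are declared false;
        # the excluded middle is suspended for this pair only.
        return False
    raise ValueError("sentence not covered by this program")

print(evaluate("The present king of France is bald"))      # False
print(evaluate("The present king of France is not bald"))  # False
```

Notice that the exception list is itself a fixed interpretation hard-coded by the programmer, which is exactly the trap Denning describes.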
What if we interpret this last sentence, the present king of France is not bald, as meaning "Since there is no present king of France, he certainly isn't bald"? In that case the last sentence is true, and our computer algorithm with the excluded-middle exception is now wrong.
But wait, there is more. Go back to The present king of France is bald. We agreed that it is false, UNLESS we interpret it as meaning that the nearest relative of the last king of France is a shoemaker living in Tulsa, Oklahoma, who has the right to the throne by descent, and he is bald! Under that interpretation, the original statement would be true.
I would contend that no algorithm could encapsulate all these options, because algorithms are limited to their programming and must follow it robotically. Think of a dog tied to a street sign: it walks around the pole over and over until the leash is wrapped so many times that its neck is pulled against the pole, and it never crosses the dog's mind to reverse course. That problem is beyond its ability to solve. Similarly, algorithms, no matter how good, are not good at thinking outside the limitations of their software. Humans seem remarkably capable of thinking new thoughts. To me this is just another indication that the human mind is something other than an epiphenomenon.
Note: Russell's thinking of this sentence in light of the excluded middle was an act of creation unlikely to be produced by an algorithm in his brain, and I added a few creative interpretations to the analysis which I have never read in any philosophy book. That too was a creative act, small as it was.