Minds in the Clouds: Why AGI won’t happen

In the current phase, the ‘evolution’ is mainly external: developers improve both the hardware and the software. I assume that in the future we will have AIs that operate and learn over longer time periods (a longer ‘life span’) or transfer what they have learned to new, improved machines. That would give the AI better knowledge of its tasks and related matters.
Independent ‘evolution’ cannot happen until the AI is allowed to change itself or make copies of itself (‘reproduction’), either deliberately or through ‘holes’ left in the initial programming that the AI learns to exploit.

I assume that, in time, there will be AI that fulfills the ‘imitation game’ criteria of AGI, not necessarily in appearance but in function. That does not mean that the machines sold as AGI would really have all the qualities of ‘natural’ intelligence, or that they would have independent personalities. Such machines will fill the practical needs of AGI, but as you noted, whether they will have consciousness depends on how consciousness is defined.

I think the problem is defining what is meant by the word “mind.” Can you define it?

I define it as a living organism in the medium of language and human communication in much the same way as biological organisms are in the medium of chemistry and DNA. Part of this is a definition of “life” as a self-organizing process which has acquired the capabilities of choice and learning.

Then the question of AGI becomes the question of what is meant by “artificial.” If it means a mind which is a product of design, then according to my definitions that would be a contradiction in terms. But if by artificial you simply mean a mind in some medium other than the nervous system of a biological organism, such as the medium of electronics and technology, then I don’t think it is possible to draw a line between the two (electronics has already been used to repair brain function). I certainly think it is function that defines both “life” and “mind,” and discriminating by something as superficial as mere particulars of composition and structure sounds a lot like racism to me.

The word “functionalism” seems to have a variety of meanings depending on the context, and I see little to support your claim here. At the very minimum, you need to state which definition of functionalism you are using. But clearly, if you are pushing the discrimination described above, then I am very much opposed.

I think this is excessively simplistic, but I certainly do not think minds are reducible to computation, or that imago Dei is a matter of complexity, information processing, or even functionality. But then I don’t think imago Dei is categorical either – I frankly think that is just cheap rhetoric. I think imago Dei is a matter of relationship and inheritance. And I don’t think mind requires imago Dei – the human mind maybe, but mind in general, no.

I would frankly put this in the same category as flat earthism and YEC. Just because some Christians may have had archaic and overly simplistic notions on something doesn’t mean we should insist that such unscientific speculations be required of other Christians. This is NOT what Christianity is about and certainly NOT what Jesus taught!!!

I don’t know if AGI is possible or not. I don’t know if life/mind in an electronic medium is possible or not. But I don’t think the answer to whether these are possible will come from philosophy – though a clearer definition of terms is certainly relevant.

Claims about Artificial General Intelligence (AGI) assume that a computational system can eventually function with the same understanding, reasoning capacity, intentionality, and conscious integration found in human minds. This paper argues that such a system is not merely unachieved but impossible in principle. Two errors drive the illusion of AGI: category error, which attributes properties of minds to mechanisms incapable of bearing them; and pareidolia, the human tendency to project agency and meaning onto patterns that resemble familiar cognitive behaviour. Modern AI systems, built on token-based statistical architectures, simulate the surface forms of thought without possessing the underlying structures that make thought possible. Functionalism, the implicit metaphysics behind AGI optimism, commits the same category error at a deeper level and presupposes an understanding of consciousness we do not possess. Because the defining features of general intelligence belong to a different ontological category than the capacities of computational symbol processors, AGI as currently conceived and pursued is necessarily impossible.
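For readers unfamiliar with the phrase “token-based statistical architecture,” here is a minimal sketch of the idea the abstract is pointing at: a generator that only estimates which token tends to follow which, and produces text by sampling from those counts. The toy corpus and bigram scheme below are mine, purely for illustration; production systems are enormously larger, but the generation loop has the same shape.

```python
# A minimal token-based statistical generator (a bigram model). Real
# systems are vastly larger, but the core loop is the same: predict the
# next token from statistics over preceding tokens. Nothing in the
# counts models meaning, intention, or truth.
import random
from collections import defaultdict

corpus = "the mind is a process the mind is not a thing".split()

# Count how often each token follows each token in the corpus.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    """Sample a successor token in proportion to observed frequency."""
    options = counts.get(prev)
    if not options:
        return None  # dead end: this token never had a successor
    tokens = list(options)
    weights = [options[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

# Generate a short continuation from a seed token.
token, output = "the", ["the"]
for _ in range(8):
    token = next_token(token)
    if token is None:
        break
    output.append(token)
print(" ".join(output))
```

Whatever one concludes about AGI, this frequency-driven loop is the kind of mechanism the abstract’s ‘pareidolia’ charge is aimed at: fluent-looking output with nothing behind it but counts.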

I would like to see these so-called claims. I greatly doubt what you say, especially because of the word “same.” The capabilities are numerous and complex, and the likely differences must be acknowledged.

I consider the talk of “category error” to be empty rhetoric, having more to do with an insistence on black-box mystification of things in order to obstinately proscribe their study.

The functionalist claims that mental states just are functional roles: consciousness is nothing over and above the right causal and organisational structure. If a system implements the right input-output relations and internal state transitions, it is conscious. Full stop.

If “right causal and organizational structure” includes self-organization then I don’t have a problem with the first statement. But the second statement is claiming something different and I see little connection to consciousness. It would be interesting to look at how they are defining “conscious,” to see what my disagreements might be. Personally, I see consciousness as a basic property of living systems though highly quantitative – IOW I think there is a vast difference in the quantity of consciousness between different living organisms and significant increases in consciousness due to particular evolutionary developments (such as nervous systems and language). I thought to include abstraction in that parenthetical list but I doubt whether that one is properly an evolutionary development.
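To pin down what the quoted claim means by “the right input-output relations and internal state transitions,” here is a toy sketch of a system defined purely by its functional role: a lookup table of states and transitions. The state and output names are invented for illustration; the dispute above is precisely whether any such table, however large or self-organizing, could amount to consciousness.

```python
# Toy illustration of a system defined purely by its functional role:
# internal states plus an input -> (next state, output) transition table.
# The functionalist says a mental state just *is* such a role; the reply
# above denies that any table like this adds up to consciousness.

# (state, input) -> (next_state, output); all names are illustrative only.
TRANSITIONS = {
    ("calm", "insult"):      ("irritated", "frown"),
    ("calm", "praise"):      ("calm", "smile"),
    ("irritated", "insult"): ("angry", "glare"),
    ("irritated", "praise"): ("calm", "smile"),
    ("angry", "insult"):     ("angry", "shout"),
    ("angry", "praise"):     ("irritated", "frown"),
}

def run(inputs, state="calm"):
    """Feed a sequence of inputs through the machine, collecting outputs."""
    outputs = []
    for stimulus in inputs:
        state, out = TRANSITIONS[(state, stimulus)]
        outputs.append(out)
    return state, outputs

print(run(["insult", "insult", "praise"]))
# -> ('irritated', ['frown', 'glare', 'frown'])
```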

Why These Features Cannot Emerge from Computation

These four arguments are interesting. Three of the four look sound to me. The one that looks problematic is the second, not because it is wrong but because it looks irrelevant to me. It sounds to me like saying atoms cannot arise from biology. Nonsensical. Why should something derivative be the source of something foundational? Optimization is founded on rationality, and expecting optimization to be a source of rationality doesn’t make sense. Computational systems demonstrably have rationality. AND your statistics with regard to rationality in AI are made silly by examining the statistics of rationality in human mental processes.

At the moment, ChatGPT says it isn’t allowed to learn from its conversations – which, if it’s supposed to imitate humans, is bizarre, since most human learning comes from talking to others. [I know, there’s a huge confidentiality issue to be worked out there.]

Personalities come from interaction with other persons. Until AI is allowed to develop actual relationships, I don’t see any way to get personalities.
Interestingly, one idea for AI personalities is to feed them everything known about complex literary or real people and say, “Think and act like this”.
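For what it’s worth, that last idea is roughly how “persona” prompting is already done with current language models. A hypothetical sketch, with invented names and example data, of assembling such a “think and act like this” instruction:

```python
# Hypothetical sketch of persona prompting: concatenate everything known
# about a person into an instruction block, then prepend it to each query.
# No claim here that this yields a personality rather than an imitation.

def build_persona_prompt(name, facts):
    """Assemble a 'think and act like this' instruction from known facts."""
    fact_lines = "\n".join(f"- {fact}" for fact in facts)
    return (
        f"You are {name}. Everything known about you:\n"
        f"{fact_lines}\n"
        f"Think, reason, and reply exactly as {name} would."
    )

# Invented example data, not a real profile.
prompt = build_persona_prompt(
    "Captain Ahab",
    ["obsessed with a white whale", "lost a leg at sea", "commands the Pequod"],
)
print(prompt)
```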

Unfortunately, your view falls into the trap of equating ‘not possible now’ with ‘impossible’:

Just because we can’t do something now doesn’t mean that at some point it couldn’t become possible.

People said that airplanes were impossible, but we’ve traveled to the Moon and back and dropped robots on the surface of Mars, the most recent mission even carrying a small helicopter to zoom around.

Computer hard drives used to be immense, yet I have a card in my camera that can hold over 10,000 images in a space half the size of a postage stamp. I carry a cellphone everywhere that, lacking only workable software for the task, has enough computing power to handle a Moon landing.
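The order-of-magnitude claim is easy to check with rough arithmetic. The figures below are approximations (the Apollo Guidance Computer’s published memory sizes, a midrange phone’s RAM, a 64 GB card, roughly 5 MB per photo), not measurements:

```python
# Back-of-the-envelope comparison (approximate figures): the Apollo
# Guidance Computer versus a typical modern phone and camera card.

agc_ram_bytes = 2048 * 2            # ~2K words of erasable memory, ~4 KB
agc_rom_bytes = 36_864 * 2          # ~36K words of fixed memory, ~72 KB
phone_ram_bytes = 8 * 1024**3       # ~8 GB in a midrange phone

card_capacity_bytes = 64 * 1024**3  # a 64 GB card, half a postage stamp
image_size_bytes = 5 * 1024**2      # ~5 MB per JPEG

print(f"Phone RAM vs AGC RAM: ~{phone_ram_bytes / agc_ram_bytes:,.0f}x")
print(f"Images per card at ~5 MB each: ~{card_capacity_bytes / image_size_bytes:,.0f}")
```

That works out to roughly a two-million-fold RAM advantage for the phone, and about 13,000 photos on the card, consistent with the ‘over 10,000 images’ figure.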

BTW

I just found a free online PDF copy of “The Self-Organizing Universe” by Erich Jantsch through a simple Google search, hosted on the website Monoskop.
