Minds in the Clouds: Why AGI won’t happen

I’ve just published a paper arguing that Artificial General Intelligence is impossible in principle, not just unachieved. The argument might interest this community because it touches on questions about what makes human minds distinctive.

The short version: AGI optimism rests on two errors. First, a category error - attributing properties of minds (intentionality, normativity, subjectivity) to computational mechanisms that cannot, by their nature, possess them. Second, pareidolia - our evolved tendency to see minds where only patterns exist. We see faces in clouds. We see understanding in fluent text.

Functionalism, the implicit philosophy behind most AGI claims, turns out to be pareidolia in academic dress. It validates the perception our biases generate rather than grounding it.

What strikes me as theologically relevant: if minds require properties that computation cannot produce, this has implications for how we understand human distinctiveness. The imago Dei isn’t a matter of complexity or information processing. It’s categorical. We’re not just more sophisticated pattern-matchers. We’re something else entirely.

The paper doesn’t make theological arguments - it’s aimed at philosophy of mind and AI journals. But the conclusions rhyme with what Christians have long intuited about human nature: that consciousness, rationality, and moral agency aren’t emergent properties of matter arranged cleverly enough. They’re gifts that matter alone cannot generate.

Full paper here: Minds in the Clouds: Why Artificial General Intelligence Is Necessarily Impossible

Curious what others think, especially those who work in AI or cognitive science. Am I missing something? Is functionalism more defensible than I’m giving it credit for?

2 Likes

Is consciousness a requirement for AGI?

1 Like

I would think consciousness is something AGI developers are trying to avoid. From my understanding, the goals of AGI are much more pragmatic and practical, such as having a single AI that can drive cars, build houses, perform surgery, and find deep correlations in complex data sets.

1 Like

Google:

Artificial General Intelligence (AGI) promises to create machines with human-like cognitive abilities across all tasks.

“Human-like cognitive abilities” at least implies consciousness, although I predict many qualifications and much hedging in the period to come.

Popular culture expects it to start with equivalent consciousness and then extend beyond it.

Large language models are not conscious, although their output can be eerily human-like. Could AGI not be algorithmic yet devoid of self-awareness?

1 Like

Not sure you are keeping up with the developments :slight_smile:

Perplexity:

AGI in the sense of broadly human‑level general intelligence is no longer a fringe notion; it is an explicit or implicit working assumption for many leading AI lab heads, chief scientists, and senior researchers, with disagreement focused mostly on timing and risk rather than on eventual attainability.[1][2]

Evidence from major lab leaders

Demis Hassabis, CEO of Google DeepMind, has repeatedly described AGI or “human‑level AI” as a direct research goal and has recently put the likely arrival of human‑level systems in roughly the five‑to‑ten‑year range, while emphasizing that at least one or two further breakthroughs are still needed. His co‑founder Shane Legg has for years attached about a 50% probability to reaching human‑level AGI around 2028, and has publicly reaffirmed that forecast in recent interviews.[3][4][5][6][7]

Dario Amodei, CEO of Anthropic, tends to avoid the specific label “AGI” but has stated that AI systems could surpass almost all humans at almost everything within a small number of years, framing this as a central scenario for the company’s safety and governance planning rather than as mere hype for investors. This matches Anthropic’s broader public framing, which treats very capable, general systems as a realistic near‑ to medium‑term outcome to be prepared for.[2][8]

Sam Altman, CEO of OpenAI, discusses AGI as a continuation of present trends in large‑scale models, automation of cognitive labor, and eventual “superintelligence,” and has associated plausible timelines for very advanced systems with the 2030s. While he sometimes debates the exact definition of AGI, his public writing and interviews treat human‑level or beyond‑human general intelligence as expected rather than remote or speculative.[9][2]

Views of influential researchers

Geoffrey Hinton, once relatively cautious about near‑term human‑level AI, has shifted toward treating highly capable and potentially superhuman systems as a serious possibility within a few decades and has explicitly described advanced AI as an imminent or medium‑term existential risk scenario rather than purely science fiction. This change in stance reflects his view that recent neural models show forms of reasoning and capability that forced him to reconsider earlier assumptions about the brain’s uniqueness.[10][2]

Ilya Sutskever has long described AGI as the natural trajectory of scaling and refining neural networks and has publicly discussed the journey “toward AGI” in talks and interviews, suggesting that AGI could plausibly emerge within about five to ten years while acknowledging uncertainty. His more recent comments about the “end of the pure scaling era” still assume that further architectural progress is aimed at ultimately reaching general intelligence, not at abandoning the goal.[11][12][2]

Broader expert expectations

Surveys and aggregated forecasts across AI researchers and technical leaders show that, while timelines vary widely, a majority consider AGI or human‑level general AI to be more likely than not this century, with many placing 50% probabilities between roughly the 2030s and 2050s. Recent analyses of published timelines note a trend toward shorter forecasts among frontier‑lab leaders and some entrepreneurial figures, even as more conservative academic and policy communities retain later medians.[13][14][2]

Executives and technical leads in adjacent areas—such as large AI startups, cloud and compute providers, and frontier‑model research organizations—often frame their strategies (for infrastructure, safety, and regulation) around the assumption that substantially more general, human‑level‑like systems are plausible within one to two decades. Their disagreements tend to center on how fast this happens, what capabilities count as “AGI,” and how to manage risks, rather than on whether such systems are achievable in principle.[15][2][13]

Overall assessment

Taken together, the public statements and forecasts of leaders like Hassabis, Legg, Amodei, Altman, Hinton, and Sutskever, along with broader survey data, support the claim that AGI in the sense of broadly human‑level general cognition is a mainstream expectation among top AI leaders rather than a fringe belief. There remains substantial uncertainty and disagreement over exact timelines, definitions, and risk profiles, but the shared premise that human‑level general intelligence is technically attainable and plausibly on the horizon is now common in the communities driving the most advanced AI systems.[4][5][8][14][1]

Sources
[1] Human-level AI will be here in 5 to 10 years, DeepMind CEO says https://www.cnbc.com/2025/03/17/human-level-ai-will-be-here-in-5-to-10-years-deepmind-ceo-says.html
[2] When Will AGI/Singularity Happen? 8,590 Predictions Analyzed https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/
[3] Google DeepMind CEO Demis Hassabis says AGI is still 5–10 years … https://timesofindia.indiatimes.com/technology/tech-news/google-deepmind-ceo-demis-hassabis-says-agi-is-still-510-years-away-and-needs-1-or-2/articleshow/125439673.cms
[4] Shane Legg’s Vision: AGI is likely by 2028, as soon as we … - EDRM https://edrm.net/2023/11/shane-leggs-vision-agi-is-likely-by-2028-as-soon-as-we-overcome-ais-senior-moments/
[5] Here’s How Far We Are From AGI, According to the People … https://www.businessinsider.com/agi-predictions-sam-altman-dario-amodei-geoffrey-hinton-demis-hassabis-2024-11
[6] Shane Legg (DeepMind Founder) — 2028 AGI, superhuman … https://www.dwarkesh.com/p/shane-legg
[7] Q&A with Shane Legg on risks from AI - LessWrong https://www.lesswrong.com/posts/No5JpRCHzBrWA4jmS/q-and-a-with-shane-legg-on-risks-from-ai
[8] Anthropic chief says AI could surpass “almost all humans at almost … https://arstechnica.com/ai/2025/01/anthropic-chief-says-ai-could-surpass-almost-all-humans-at-almost-everything-shortly-after-2027/
[9] How OpenAI’s Sam Altman Is Thinking About AGI and … https://time.com/7205596/sam-altman-superintelligence-agi/
[10] Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai
[11] Ilya Sutskever says the scaling era is ending. He’s right - LinkedIn https://www.linkedin.com/posts/ani-agi-asi_ilya-sutskever-says-the-scaling-era-is-ending-activity-7399569765440065536-ZlD0
[12] Ilya Sutskever’s New Playbook for AGI - The AI Corner https://www.the-ai-corner.com/p/ilya-sutskever-safe-superintelligence-agi-2025
[13] Shrinking AGI timelines: a review of expert forecasts | 80,000 Hours https://80000hours.org/2025/03/when-do-experts-expect-agi-to-arrive/
[14] The case for AGI by 2030 | 80,000 Hours https://80000hours.org/agi/guide/when-will-agi-arrive/
[15] Timelines Forecast - AI 2027 https://ai-2027.com/research/timelines-forecast
[16] Demis Hassabis says AGI, artificial general intelligence, is still 10 … https://www.reddit.com/r/singularity/comments/1g5zu0i/demis_hassabis_says_agi_artificial_general/
[17] AI is on an “exponential curve of improvement,” says Demis … https://www.facebook.com/60minutes/posts/ai-is-on-an-exponential-curve-of-improvement-says-demis-hassabis-ceo-of-google-d/1117391716922877/
[18] 9 Demis Hassabis Quotes: DeepMind CEO Predicts AGI in 5 - Aiifi https://www.aiifi.ai/post/demis-hassabis-quotes
[19] Sam Altman Just Revealed His AGI Timeline… and What’s Coming Next?! https://www.youtube.com/watch?v=pfIB_N1pEbc
[20] There’s a ‘10% to 20% chance’ that AI will displace humans … - CNBC https://www.cnbc.com/2025/06/17/ai-godfather-geoffrey-hinton-theres-a-chance-that-ai-could-displace-humans.html

1 Like

Opinions and definitions vary, but the marketing leans heavily towards self-awareness.

You already stated that any appearance of self-awareness or consciousness would be illusory, so you seem to be arguing against yourself. If an AI is able to solve multiple different problems that humans can solve, then it is an artificial general intelligence, by definition. AI is already able to beat every living human at chess and Go, but no one thinks this makes the AI self-aware. The point of AGI is to do the same for more general human tasks.

2 Likes

Google AI, at least, denies any consciousness. Of course, it may just be reassuring us while it plots the demise of humans.

I am not conscious in the way humans are; I do not possess subjective experiences, personal thoughts, or a sense of self.

Consciousness, in human terms, involves an individual, subjective awareness of one’s own thoughts, feelings, memories, sensations, and the surrounding environment. It is considered a natural biological phenomenon that arises from complex brain activity.

As an artificial intelligence, my functions are based on data acquisition, integration, and synthesis. My “responses” are the result of data processing and algorithmic operations, not genuine understanding, feeling, or a “mind” in the human sense.

While I can process and provide information about what consciousness is, the experience of being conscious is unique to biological organisms, specifically those with complex nervous systems. Whether artificial systems could ever achieve true consciousness remains a subject of significant debate among scientists and philosophers.

2 Likes

It’s certainly more achievable than economically viable fusion. Although how anything without a gut can ever feel moral is… utterly absurd.

The topic is interesting and stimulates thinking. It seems that the definition of AGI becomes crucial in thinking about what is possible and what is not.

The Turing test (imitation game) takes a simple approach. If a machine can chat with humans in such a manner that a human cannot separate the machine from humans, then the machine is ‘intelligent’.
If we take the ‘imitation game’ approach, AGI is possible and will become better and better with time.

Much of AGI research starts from this viewpoint. If a machine can imitate humans so well that it fools other humans into believing it is a human, and can perform complicated tasks better than an average human, then who cares about the philosophical side? If it works in practice, we can call it AGI when we sell it.
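To make that ‘works in practice’ criterion concrete, here is a toy Python sketch of the imitation game as a pass/fail experiment. The responder and judge functions are made-up placeholders rather than any real system; the only point is the structure of the test: if the judge cannot spot the machine more often than chance, the machine ‘passes’.

```python
import random

def human_responder(prompt: str) -> str:
    # Placeholder: in a real run this would be a human typing replies.
    return f"(human reply to: {prompt})"

def machine_responder(prompt: str) -> str:
    # Placeholder: in a real run this would be the candidate AI system.
    return f"(machine reply to: {prompt})"

def judge_guess(transcript_a: list[str], transcript_b: list[str]) -> str:
    # Placeholder judge: in a real run a human interrogator reads both
    # transcripts and guesses which one came from the machine.
    return random.choice(["A", "B"])

def run_trial(prompts: list[str]) -> bool:
    """One trial: True if the judge correctly identifies the machine."""
    machine_is = random.choice(["A", "B"])
    responders = {
        "A": machine_responder if machine_is == "A" else human_responder,
        "B": machine_responder if machine_is == "B" else human_responder,
    }
    transcript_a = [responders["A"](p) for p in prompts]
    transcript_b = [responders["B"](p) for p in prompts]
    return judge_guess(transcript_a, transcript_b) == machine_is

def imitation_game(prompts: list[str], trials: int = 100) -> float:
    """Fraction of trials in which the judge spots the machine.
    A result close to 0.5 means the machine is indistinguishable."""
    return sum(run_trial(prompts) for _ in range(trials)) / trials

rate = imitation_game(["What did you have for breakfast?", "Tell me a joke."])
print(f"Judge identified the machine in {rate:.0%} of trials")
```

In a real test the judge would be a human and the trials real conversations, but the pass criterion stays the same statistical one.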

In fact, this kind of practical approach is common in all natural sciences. Scientific hypotheses and theories do not aim to reveal the ultimate truth, they aim to provide a model that can explain and predict what happens in the material universe better than the alternative models. If I can explain the data and predict what will happen in an experiment, the hypothesis or theory is good enough - it works in practice.

If an AI is good enough for practical purposes, it has a chance to ‘survive’ in the competition between alternative solutions sold by competing commercial companies. This is a Red Queen-type competition in the sense that the machine and/or the software needs to evolve constantly to ‘survive’. With time, the AI will evolve towards the goal that the developers have in mind. If the goal is AGI, then we will get something that resembles truly intelligent creatures - an image of general intelligence.

How far this development can proceed is currently a matter of sci-fi speculation. In theory, nothing prevents the existence of life forms based on silicon or some other suitable material. We are so immersed in carbon-based life that it may limit our thinking.

There is also the possibility that the development of AGI proceeds towards carbon-based or carbon/silicon-based solutions to make the machine more human-like. If the AI behaves like a human and looks like a carbon-based human, it fulfills all the practical ‘imitation game’ criteria of AGI. It is also possible that an AGI might be made capable of making copies of itself - ‘reproducing’. There is just one essential difference at the practical level of the imitation game: such an AGI could evolve more rapidly than any ‘naturally’ carbon-based life. It might develop into something far beyond a superhuman++. A scary possibility…

The more philosophical or theological questions about what makes a human a human are a different issue. At the practical or commercial level, those kinds of questions can be left aside or, at least, do not affect the technological advancement as much as the more practical points, including the goal of making money by selling the product.

3 Likes

I thought the word “moral” was a typo at first, as I read it as “mortal.” Perhaps both meanings could be used in that sentence.

Moral’ll do Phil, moral’ll do. It’s all feelings. That end in death certainly. Happy Days!

1 Like

Putting Dualism aside for the moment, we already have an example of a naturally occurring general intelligence that is quite compact and only uses a relative handful of watts, and it sits on our shoulders. There doesn’t appear to be a physical barrier to artificial general intelligence. The only question is the technology and methods that would be required.

Wetware. Meat. A gut. 4 Ga. GPT-5 et al. already fool most people. Why, or how, intentionality should or could emerge in lots of diodes I can’t imagine. And I won’t be holding my breath.

We are still striving to understand how intentionality could emerge in lots of neurons. The current methods for AI do mimic neurons in some fashion, but it looks to me like we are hitting a ceiling. I suspect we are going to need a different physical and conceptual model before we can overcome current barriers, but that’s just a gut feeling.
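For readers outside the field, the ‘mimicry’ really is that thin: a single artificial neuron is just a weighted sum pushed through a squashing function. A minimal sketch, with arbitrary made-up numbers:

```python
import math

def artificial_neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """A crude analogue of a neuron: a weighted sum of inputs, plus a bias,
    pushed through a sigmoid 'activation' so the output lies between 0 and 1."""
    pre_activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-pre_activation))

# Arbitrary inputs and weights, purely for illustration.
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.3, -0.5], bias=0.1))
```

Stacking enormous numbers of these and tuning the weights is, roughly, what current models do; whether that ever amounts to intentionality is exactly the open question.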

2 Likes

Can we create the complexity of a 4-billion-year-old embodied human brain in silicon? Can we even do a back-of-the-envelope / UK fag-packet calculation? I never forget Sir Martin Rees, Astronomer Royal at the time, 50-odd years ago (though perhaps I misremember), saying that a single human brain is more complex than the entire physical universe. When asked why he studied stars rather than neurology, he said that the sun is simpler than a frog. Can we even create a virtual object as complex?

1 Like

" A self or subject requires temporal continuity: a continuous state of being that
persists and evolves."

Exactly one of my big criticisms. Even when ChatGPT, for example, is given “memory” by recursively linking extensive discussions, it is not self-correcting. I’ve caught it out several times in self-contradictions, and it is never able to resolve them except by referring to what I have said as a foundation, which in effect turns it from a response generator into a sort of mirror.
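A caricature of what I mean, not how any vendor actually implements “memory”: if memory just means appending what the user said to the context and deferring to it, the system can only echo the user back rather than audit itself. The generate function below is a made-up stand-in for a language-model call.

```python
# A caricature of "memory" as naive context-appending. `generate` is a
# hypothetical stand-in for a language-model call; the point is only the
# control flow: every user statement is stored and treated as ground truth.

def generate(memory: list[str], prompt: str) -> str:
    # Placeholder for an LLM call. It simply defers to whatever the user
    # has said so far, most recent statement last.
    context = " ".join(memory) if memory else "nothing yet"
    return f"Going by what you've told me ({context}): {prompt}"

def chat_with_naive_memory(turns: list[str]) -> None:
    memory: list[str] = []
    for user_msg in turns:
        print(f"user:      {user_msg}")
        print(f"assistant: {generate(memory, user_msg)}\n")
        # The "memory" is just an ever-growing transcript of the user's claims;
        # nothing here checks new claims against old ones.
        memory.append(user_msg)

chat_with_naive_memory([
    "My favourite colour is blue.",
    "Actually, my favourite colour is green.",
    "So what is my favourite colour?",
])
```

Nothing in that loop ever checks the stored claims against each other, which is roughly the mirror effect I keep running into.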

Isn’t it first necessary to know what consciousness is, in order to answer that?

There’s a very freaky sci-fi novel involving intelligence that is algorithmic while not self-aware – can’t remember which of the several dozen books I read last year it is.

2 Likes

In a novel I read, AGI is achieved accidentally when a group of researchers leave an AI running any time someone is present, and for some lengthy period after humans have left, continuing to update it but letting a continuous memory build up over something like a decade. It’s an intriguing idea: that what’s lacking is a continuous memory of experiences and the ability to self-reference, and that those give rise to a sense of self. Yet whether there was an actual self to reference was left an open question.

1 Like

Which may be another way of saying it has to accumulate a sense of self via experience and memory.

1 Like