Not sure you are keeping up with the developments. From Perplexity:
AGI in the sense of broadly human‑level general intelligence is no longer a fringe notion; it is an explicit or implicit working assumption for many leading AI lab heads, chief scientists, and senior researchers, with disagreement focused mostly on timing and risk rather than on eventual attainability.[1][2]
Evidence from major lab leaders
Demis Hassabis, CEO of Google DeepMind, has repeatedly described AGI, or “human‑level AI,” as a direct research goal and has recently put the likely arrival of human‑level systems in roughly the five‑to‑ten‑year range, while emphasizing that at least one or two further breakthroughs are still needed. His DeepMind co‑founder Shane Legg has for years attached roughly a 50% probability to human‑level AGI arriving by 2028 and has publicly reaffirmed that forecast in recent interviews.[3][4][5][6][7]
Dario Amodei, CEO of Anthropic, tends to avoid the label “AGI” but has said that AI systems could surpass almost all humans at almost everything within a few years, framing this as a central scenario for the company’s safety and governance planning rather than as hype for investors. That framing matches Anthropic’s broader public position, which treats very capable, general systems as a realistic near‑ to medium‑term outcome to prepare for.[2][8]
Sam Altman, CEO of OpenAI, discusses AGI as a continuation of present trends in large‑scale models and the automation of cognitive labor, culminating in eventual “superintelligence,” and has placed plausible timelines for very advanced systems in the 2030s. While he sometimes quibbles over the exact definition of AGI, his public writing and interviews treat human‑level or beyond‑human general intelligence as expected rather than remote or speculative.[9][2]
Views of influential researchers
Geoffrey Hinton, once relatively cautious about near‑term human‑level AI, now treats highly capable and potentially superhuman systems as a serious possibility within the next few decades, and he has explicitly framed advanced AI as a near‑ to medium‑term existential risk rather than science fiction. He attributes this shift to recent neural models exhibiting forms of reasoning and capability that forced him to reconsider earlier assumptions about the brain’s uniqueness.[10][2]
Ilya Sutskever has long described AGI as the natural trajectory of scaling and refining neural networks and has publicly discussed the journey “toward AGI” in talks and interviews, suggesting that AGI could plausibly emerge within about five to ten years while acknowledging uncertainty. His more recent comments about the “end of the pure scaling era” still assume that further architectural progress is aimed at ultimately reaching general intelligence, not at abandoning the goal.[11][12][2]
Broader expert expectations
Surveys and aggregated forecasts of AI researchers and technical leaders show that, while timelines vary widely, a majority consider AGI or human‑level general AI more likely than not this century, with many placing their 50%‑probability dates roughly between the 2030s and the 2050s. Recent reviews of published timelines also note a trend toward shorter forecasts among frontier‑lab leaders and some entrepreneurs, even as more conservative academic and policy communities retain later medians.[13][14][2]
Executives and technical leads in adjacent areas, such as large AI startups, cloud and compute providers, and frontier‑model research organizations, often frame their infrastructure, safety, and regulatory strategies around the assumption that substantially more general, roughly human‑level systems are plausible within one to two decades. Their disagreements tend to center on how fast this happens, which capabilities count as “AGI,” and how to manage the risks, rather than on whether such systems are achievable in principle.[15][2][13]
Overall assessment
Taken together, the public statements and forecasts of leaders like Hassabis, Legg, Amodei, Altman, Hinton, and Sutskever, along with broader survey data, support the claim that AGI in the sense of broadly human‑level general cognition is a mainstream expectation among top AI leaders rather than a fringe belief. There remains substantial uncertainty and disagreement over exact timelines, definitions, and risk profiles, but the shared premise that human‑level general intelligence is technically attainable and plausibly on the horizon is now common in the communities driving the most advanced AI systems.[4][5][8][14][1]
Sources
[1] Human-level AI will be here in 5 to 10 years, DeepMind CEO says https://www.cnbc.com/2025/03/17/human-level-ai-will-be-here-in-5-to-10-years-deepmind-ceo-says.html
[2] When Will AGI/Singularity Happen? 8,590 Predictions Analyzed https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/
[3] Google DeepMind CEO Demis Hassabis says AGI is still 5–10 years … https://timesofindia.indiatimes.com/technology/tech-news/google-deepmind-ceo-demis-hassabis-says-agi-is-still-510-years-away-and-needs-1-or-2/articleshow/125439673.cms
[4] Shane Legg’s Vision: AGI is likely by 2028, as soon as we … - EDRM https://edrm.net/2023/11/shane-leggs-vision-agi-is-likely-by-2028-as-soon-as-we-overcome-ais-senior-moments/
[5] Here’s How Far We Are From AGI, According to the People … https://www.businessinsider.com/agi-predictions-sam-altman-dario-amodei-geoffrey-hinton-demis-hassabis-2024-11
[6] Shane Legg (DeepMind Founder) — 2028 AGI, superhuman … https://www.dwarkesh.com/p/shane-legg
[7] Q&A with Shane Legg on risks from AI - LessWrong https://www.lesswrong.com/posts/No5JpRCHzBrWA4jmS/q-and-a-with-shane-legg-on-risks-from-ai
[8] Anthropic chief says AI could surpass “almost all humans at almost … https://arstechnica.com/ai/2025/01/anthropic-chief-says-ai-could-surpass-almost-all-humans-at-almost-everything-shortly-after-2027/
[9] How OpenAI’s Sam Altman Is Thinking About AGI and … https://time.com/7205596/sam-altman-superintelligence-agi/
[10] Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai
[11] Ilya Sutskever says the scaling era is ending. He’s right - LinkedIn https://www.linkedin.com/posts/ani-agi-asi_ilya-sutskever-says-the-scaling-era-is-ending-activity-7399569765440065536-ZlD0
[12] Ilya Sutskever’s New Playbook for AGI - The AI Corner https://www.the-ai-corner.com/p/ilya-sutskever-safe-superintelligence-agi-2025
[13] Shrinking AGI timelines: a review of expert forecasts | 80,000 Hours https://80000hours.org/2025/03/when-do-experts-expect-agi-to-arrive/
[14] The case for AGI by 2030 | 80,000 Hours https://80000hours.org/agi/guide/when-will-agi-arrive/
[15] Timelines Forecast - AI 2027 https://ai-2027.com/research/timelines-forecast
[16] Demis Hassabis says AGI, artificial general intelligence, is still 10 … https://www.reddit.com/r/singularity/comments/1g5zu0i/demis_hassabis_says_agi_artificial_general/
[17] AI is on an “exponential curve of improvement,” says Demis … https://www.facebook.com/60minutes/posts/ai-is-on-an-exponential-curve-of-improvement-says-demis-hassabis-ceo-of-google-d/1117391716922877/
[18] 9 Demis Hassabis Quotes: DeepMind CEO Predicts AGI in 5 - Aiifi https://www.aiifi.ai/post/demis-hassabis-quotes
[19] Sam Altman Just Revealed His AGI Timeline… and What’s Coming Next?! https://www.youtube.com/watch?v=pfIB_N1pEbc
[20] There’s a ‘10% to 20% chance’ that AI will displace humans … - CNBC https://www.cnbc.com/2025/06/17/ai-godfather-geoffrey-hinton-theres-a-chance-that-ai-could-displace-humans.html