What Does ‘AGI’ Actually Mean? (And Why Your Vendor Won’t Tell You)

Post 2 in the Preparing for AGI series.

Imagine a ninety-minute board meeting on AI strategy that ends without anyone establishing what was actually being argued about. The chief technology officer (CTO) is using ‘AGI’ to mean a system that could replace most knowledge workers. The chief risk officer (CRO) is using it to describe a regulatory category. The vendor in the room is using it to mean their product (six months from now!). Three people, one acronym, three completely different conversations.

This may be the most expensive misunderstanding in enterprise AI today. ‘AGI’ (Artificial General Intelligence) is a single label doing very different work in different contexts. Vendor pitches use one definition. Regulatory filings use another, or more often, avoid the term entirely. Research papers use a third, in which today’s frontier models already sit somewhere on the scale. Until a board clarifies which definition is being discussed, any debate about timing (AGI by 2027, AGI is decades away, etc.) is essentially unresolvable. People are not disagreeing about facts. They are disagreeing about semantics.

Definition 1: The strict one

AGI, as classically understood, is a system that can do almost any cognitive task a human can, and can move fluidly between different types of tasks. By that standard, nothing currently deployed is AGI. No current model can learn a new skill the way a person does: picking it up from a handful of examples, retaining it across months, generalising it to wholly new situations. Frontier systems still hallucinate, still fail at sustained multi-hour tasks, still need careful prompting to function reliably outside familiar territory. Under this strict definition, “when will we have AGI?” is largely indistinguishable from “when will we have a digital person?” Many serious researchers believe the answer is decades, or never with the current technical approach.

This is the definition academics tend to favour. It is also the definition vendors quietly avoid in sales meetings.

Definition 2: The economic one

A looser definition, favoured by the chief executives of frontier AI laboratories such as OpenAI, Anthropic and Google DeepMind, and increasingly by financial markets, is closer to ‘AGI is AI that can do most economically valuable knowledge work to a human standard.’ OpenAI’s charter puts it precisely: ‘highly autonomous systems that outperform humans at most economically valuable work.’ [1]. Anthropic’s chief executive Dario Amodei, in his 2024 essay Machines of Loving Grace, deliberately avoids the term AGI altogether and uses ‘powerful AI’ instead. He defines it as a system smarter than a Nobel laureate across most relevant fields, able to use any tools a remote worker can use, with the capacity to be run as millions of parallel instances. He thinks it could arrive as early as 2026 [2].

But notice what this definition does. It removes the requirement to match the full breadth of human cognition. It removes physical embodiment. It removes the philosophical questions about consciousness and understanding. What it does keep, though, is the part that matters commercially: can the system replace the work that people are paid to do?

Under this definition, the bullish 2027 forecasts become at least debatable. They are still contested (many researchers think it will take far longer) but the conversation is grounded in something measurable. The trouble is that the same term, AGI, is now pointing at a much narrower target than the strict definition allows. A vendor saying “we’re approaching AGI” under definition two, and a board member hearing “approaching AGI” through definition one, are not having the same conversation.

Definition 3: The levels framework

A third approach, proposed in a 2023 paper by Google DeepMind researchers, rejects the idea that AGI is a single threshold to cross. Instead, Morris and colleagues set out five levels of AGI performance (Emerging, Competent, Expert, Virtuoso and Superhuman) crossed with two dimensions of generality (narrow versus general) [3]. The framework deliberately mirrors the levels-of-autonomy approach used in self-driving cars, where ‘Level 3’ means something specific and contractual rather than aspirational [4].

The distinction matters. Under DeepMind’s own framework, today’s leading large language models are classified as ‘Emerging AGI’ overall (broadly comparable to an unskilled human across general tasks) [4]. But on narrow tasks, the same models already perform at ‘Competent’ or even ‘Expert’ level. For example, AlphaFold outperforms human scientists at protein structure prediction, and the best coding models outperform most professional developers on certain programming benchmarks. In other words, on this framework, parts of ‘AGI’ have already arrived. Other parts may take decades, and there is no single moment at which AGI is ‘achieved’.

This is the most decision-useful framing of the three, because it lets boards ask questions that have answers. Where, in our specific operations, is the technology already at ‘Expert’ level? Where is it stuck at ‘Emerging’? “Have we reached AGI?” has no answer. “Has the technology reached Expert level for our contract review workflow?” can be tested.
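The framework’s performance tiers are concrete enough to sketch in code. The following is a minimal illustration, assuming the percentile thresholds described in the Morris et al. paper (Competent at roughly the 50th percentile of skilled adults, Expert at the 90th, Virtuoso at the 99th); the per-task scores below are hypothetical examples, not real benchmark results.

```python
# Sketch of the DeepMind levels-of-AGI performance tiers (Morris et al., 2023).
# Level names come from the paper; the cut-offs are an approximation of its
# skilled-adult percentile definitions, and the task scores are hypothetical.

def classify(score_percentile: float) -> str:
    """Map a skilled-human percentile score to a performance level."""
    if score_percentile >= 99.9:   # approximation of 'outperforms all humans'
        return "Superhuman"
    if score_percentile >= 99:
        return "Virtuoso"
    if score_percentile >= 90:
        return "Expert"
    if score_percentile >= 50:
        return "Competent"
    return "Emerging"

# Hypothetical per-task assessment for one organisation's workflows:
tasks = {
    "contract review": 92,               # narrow task, strong performance
    "multi-week project planning": 30,   # still below median skilled adult
}
for task, pct in tasks.items():
    print(f"{task}: {classify(pct)}")
```

The point of writing it down this way is that the question becomes auditable per workflow, rather than a single yes/no about ‘AGI’.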

Why your vendor won’t explain clearly

Lack of clarity in definitions is commercially useful – until it isn’t. The clearest illustration sits inside the contract between OpenAI and Microsoft, and the way that contract has unravelled tells you everything you need to know about why definitions matter.

For most of the past five years, Microsoft’s licence to OpenAI’s technology contained an ‘AGI clause’: once OpenAI’s board declared that the company had reached AGI, Microsoft would lose access to anything built beyond that point. As reported by The Information and others in late 2024, the two companies had separately agreed an internal financial definition tied to OpenAI generating $100 billion in profits [5]. One term, two private definitions inside the same partnership. A technical-sounding term operating as a financial trigger.

The clause then proceeded to fall apart in two stages. In October 2025, when OpenAI completed its recapitalisation as a Public Benefit Corporation and Microsoft’s stake settled at roughly 27% (about $135 billion), the unilateral nature of the clause was removed. Any AGI declaration would now have to be verified by an independent expert panel rather than called by OpenAI’s board alone [6]. Six months later, on 27 April 2026, both companies took the further step of removing the AGI trigger entirely. Microsoft’s licence is now non-exclusive and runs to a fixed calendar date (2032) regardless of what OpenAI builds before then [7].

Read that again. Two of the companies most invested in the concept of AGI – one of which has built its corporate identity around achieving it – looked at the term, looked at the billions of dollars hanging on its definition, and replaced it with a date. ‘2032’ is the most direct possible admission that the term is too unstable to bear contractual weight. When a definition must do legal work, parties retreat to something an auditor can measure.

Regulators reached the same conclusion earlier and more quietly. The EU AI Act, the most significant AI law in force today, does not use the term ‘AGI’ at all. Its operative category is ‘general-purpose AI model’ (GPAI), defined in Article 3 by capability and, crucially, by the amount of compute used to train it [8]. Models trained with more than 10²⁵ floating-point operations are presumed to carry ‘systemic risk’ and trigger heavier obligations under Article 51 [9]. That is a regulatory definition that can actually be measured. It is also the definition that creates legal exposure, regardless of what anyone calls the underlying system in marketing materials.
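Because the Act’s trigger is a number, the check itself is trivial to state. The sketch below estimates training compute with the common 6 × parameters × training-tokens rule of thumb from the scaling-laws literature (an approximation, not part of the Act’s text); the model figures are hypothetical, not taken from any filing.

```python
# Sketch: checking a model against the EU AI Act's 10^25 FLOP presumption
# of systemic risk (Article 51). Training compute is estimated with the
# common 6 * parameters * tokens approximation; the example model below
# is hypothetical.

SYSTEMIC_RISK_THRESHOLD = 1e25  # floating-point operations, per the Act

def training_flops(params: float, tokens: float) -> float:
    """Rough total training FLOPs (forward plus backward pass)."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD

# A hypothetical 70B-parameter model trained on 15 trillion tokens:
flops = training_flops(70e9, 15e12)
print(f"{flops:.3g}")                        # about 6.3e24
print(presumed_systemic_risk(70e9, 15e12))   # below the threshold
```

Whatever the marketing language says, this is the kind of definition that determines legal exposure: a quantity an auditor can compute.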

The lesson for boards is not that AGI is a fiction. It is that the term, on its own, is unfit for contractual or strategic use. Wherever a real decision needs to be made, definitions converge on something measurable: a profit threshold, a compute threshold, a date. Vendor pitches that lean heavily on the unqualified term ‘AGI’ are doing the opposite, borrowing the urgency of the concept while avoiding the precision that would let you hold them to it.

The same headline, three readings

Take a typical news item: “AI firm announces AGI by 2027.” Under definition one, the claim is unconvincing. Under definition two, it is contested but coherent. Under definition three, it is meaningless without specifying which performance level on which axis. A board that does not know which definition is being used cannot decide whether to invest, hedge or ignore.

The same applies in reverse. When a respected researcher says, “AGI is decades away”, as Yann LeCun and others routinely do, they almost always mean definition one. They are not denying that economically transformative AI is plausible on shorter timelines. The bullish and the sceptical camps frequently agree on the underlying capability picture and disagree mainly on what to call the result.

The takeaway

Before debating timing, ask: “under which definition?” It costs nothing. It takes thirty seconds. It is the single biggest improvement most boardrooms can make to their AI conversations. The next time a vendor says ‘AGI’, a regulator says ‘GPAI’, or a research paper says ‘Level 3 Expert’, the right response is not to nod. It is to ask what each of them means, and to insist that the same term, in the same room, points to the same thing.

The AGI debate is not going to be settled by a board meeting. The conversation about it can be.

References

[1] OpenAI, “OpenAI Charter”, 2018, https://openai.com/charter/

[2] Dario Amodei, “Machines of Loving Grace: How AI Could Transform the World for the Better”, 2024, https://www.darioamodei.com/essay/machines-of-loving-grace

[3] Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet and Shane Legg, “Levels of AGI for Operationalizing Progress on the Path to AGI”, 2023, https://arxiv.org/abs/2311.02462

[4] Will Douglas Heaven, “Google DeepMind wants to define what counts as artificial general intelligence”, MIT Technology Review, 2023, https://www.technologyreview.com/2023/11/16/1083498/google-deepmind-what-is-artificial-general-intelligence-agi/

[5] Maxwell Zeff, “Microsoft and OpenAI have a financial definition of AGI: Report”, TechCrunch, 2024, https://techcrunch.com/2024/12/26/microsoft-and-openai-have-a-financial-definition-of-agi-report/

[6] Owen Hughes, “Microsoft and OpenAI Reset Partnership with New AGI Terms”, TechRepublic, 2025, https://www.techrepublic.com/article/news-microsoft-openai-reset-partnership/

[7] Alex McFarland, “Microsoft Loses OpenAI Exclusivity and AGI Clause in Amended Deal”, Unite.AI, 2026, https://www.unite.ai/microsoft-loses-openai-exclusivity-and-agi-clause-in-amended-deal/

[8] European Union, Regulation (EU) 2024/1689 (Artificial Intelligence Act), Article 3 (Definitions), https://artificialintelligenceact.eu/article/3/

[9] European Commission, Guidelines for providers of general-purpose AI models under the AI Act, 2025, https://digital-strategy.ec.europa.eu/en/policies/guidelines-gpai-providers
