The AGI Question Your Board Can’t Answer (And Why That’s Fine)

Post 1 of a series on preparing for AGI

Dario Amodei, the CEO of Anthropic, told the World Economic Forum (WEF) in January that artificial general intelligence (AGI) is likely within a few years, possibly by 2027 [1]. In the same month, the most recent aggregate of nearly 5,000 AI researchers’ forecasts put the 50% probability date somewhere between 2040 and 2061, with several large surveys clustering closer to 2059 [2].

Three decades apart. Both estimates come from credible, technically literate people who have studied the question seriously.

If you are a board member or business leader, you cannot resolve this disagreement. Nobody can. The good news is that you don’t have to.

The disagreement isn’t going away

Some board-level questions become more specific as evidence grows. This isn’t one of them. The gap between the lab leaders and the wider research community has been roughly stable for two years. It reflects honest disagreement about whether today’s approach to AI (large models trained on enormous amounts of data) will keep improving at the current pace, or whether it will run into barriers that are hard to see from the inside.

Both sides have good points. Lab leaders point to rising capability, growing investment, and faster release cycles. Researchers outside the labs see clear limits: current systems don’t learn from experience, struggle with long-running tasks, and are costly to run at scale. Your next board report won’t settle this.

It’s tempting to wait, to say ‘come back when things are clearer’, and to treat AGI as something distant rather than a current priority.

I would argue against this. Not because the technology’s arrival is certain, but because ‘when?’ is the wrong frame for the strategic question.

‘When?’ is the wrong question

The right question for a board is not ‘when will AGI arrive?’ It is ‘what is the cost of being wrong in either direction?’

Imagine your board prepares seriously for transformative AI: it invests in data quality, governance, AI literacy, modern infrastructure, and change management. Then the technology disappoints. The forecasts that excited people in 2026 turn out to have been optimistic. What have you wasted?

Almost nothing. Better data is useful in any future scenario. So is governance designed before incidents force it. As is a workforce that understands the tools it’s being asked to use. And infrastructure that is not three architectures behind the times. The investments that prepare an organisation for AGI are mostly the same investments that pay off under ordinary digital improvement, regulatory readiness, and competitive parity. Any downside of ‘wasted’ preparation is small and recoverable.

Now imagine the reverse. The board waits, on the sensible-sounding view that the technology is uncertain. Transformative AI arrives faster than expected. Not necessarily AGI in the strictest sense, but capable enough to reshape who hires whom, who buys what, and who is exposed to which regulations. What does waiting cost?

Irrelevance relative to AI-fluent competitors. A workforce that can’t adapt quickly enough. Governance gaps that surface only as incidents: data leaks, regulatory findings, or customer complaints. Reputational damage that is harder to repair than to prevent. And these costs scale with the size of the organisation, not just with its AI spending.

This is how a one-sided bet works. The downside of preparation is small. The downside of inaction, if the bullish case is even partly right, is potentially existential.
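The shape of this one-sided bet can be sketched as a toy expected-cost comparison. Every figure below is a purely illustrative assumption (arbitrary units), chosen only to show the asymmetry in the payoff, not to estimate real costs:

```python
# Toy payoff matrix for the prepare-vs-wait decision.
# All cost figures are ILLUSTRATIVE ASSUMPTIONS (arbitrary units),
# chosen only to show the asymmetry, not to estimate real costs.

# costs[strategy][outcome]
costs = {
    "prepare": {
        "agi_early": 10,   # preparation spend, largely repurposed
        "agi_late": 15,    # "wasted" prep still funds data/governance wins
    },
    "wait": {
        "agi_early": 400,  # lost relevance, governance failures, churn
        "agi_late": 0,     # waiting costs nothing if nothing changes
    },
}

def expected_cost(strategy: str, p_early: float) -> float:
    """Expected cost of a strategy given probability AGI arrives early."""
    return (p_early * costs[strategy]["agi_early"]
            + (1 - p_early) * costs[strategy]["agi_late"])

# Even at a modest 10% chance of early arrival, waiting costs more:
for p in (0.1, 0.3, 0.5):
    print(f"p_early={p:.1f}: "
          f"prepare={expected_cost('prepare', p):.1f}, "
          f"wait={expected_cost('wait', p):.1f}")
```

Under these assumed numbers, preparing has the lower expected cost at any probability of early arrival above a few percent; the exact crossover point depends entirely on the figures you plug in, which is itself the conversation worth having at board level.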

Boards already make this kind of call

If this argument feels familiar, it should. Boards routinely make decisions of exactly this type: building resilience against scenarios they cannot precisely time or predict.

You don’t know when your organisation will be hit by a serious cyber incident. You invest in cyber controls anyway, because the asymmetry is obvious. Prevention is relatively cheap, breach response is not. You don’t know when the next pandemic-scale disruption will arrive. You build continuity plans anyway, because boards that did so in 2019 looked very different in 2020 from boards that didn’t. You don’t know precisely when climate exposure will translate into stranded assets. You disclose, hedge and adapt because the cost of being unprepared scales faster than the cost of preparing.

AGI sits in the same family of risks. The question isn’t whether you can forecast it. The question is whether the cost of being wrong in either direction is symmetric. It isn’t.

The world isn’t waiting either way

The most uncomfortable part of the ‘wait and see’ position is that the conditions you would be waiting for are already arriving – driven by today’s AI, not by AGI.

Around 88% of organisations now use AI in at least one business function [3], but only 29% report significant return on investment from generative AI [4]. Adoption, in other words, is nearly universal; value capture is not. The variable separating the 29% from the rest, across every credible study, isn’t the model; it’s organisational readiness: data, governance, talent, change management. The boring, foundational things.

Regulation isn’t waiting either. The EU AI Act becomes broadly enforceable on 2 August 2026, with penalties up to €35 million or 7% of global turnover for the most serious breaches [5]. Its reach extends to non-EU organisations whose AI outputs affect EU residents. The same extraterritorial structure as GDPR. State-level legislation in the United States and sector-led regulation in the United Kingdom are tightening on the same timetable.
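The penalty ceiling for the most serious breaches applies whichever figure is higher, so it scales with the organisation. A minimal sketch of that rule, with turnover figures that are assumed for illustration only:

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Penalty ceiling for the most serious breaches under the EU AI Act:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# Illustrative turnovers (assumed, not drawn from any source):
print(max_fine_eur(200_000_000))    # floor applies: 7% = 14M < 35M
print(max_fine_eur(1_000_000_000))  # 7% = 70M exceeds the floor
```

The practical point is that for any group with worldwide turnover above €500 million, the 7% figure, not the €35 million floor, sets the exposure.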

And the workforce isn’t waiting either. About 92% of top leaders are building ‘AI expert’ groups in their companies, and 60% intend to let go of employees who don’t adapt [4]. Retraining takes 1.5 to 3 years; changing company culture takes far longer. Organisations that start in 2026 will be ahead of those that start in 2028 – not because the technology will differ, but because they will be more prepared.

Waiting for the AGI question to be answered doesn’t stop any of this. It simply gives up control.

One question for your next AI discussion

If you take one thing from this post into your next board meeting, ask a single question:

‘Which AGI scenario is our strategy planning for, and what is the cost if we’re wrong?’

Almost every AI strategy relies on a specific future happening. Big infrastructure commitments, vendor lock-in, ambitious agentic deployments, redundancy programmes: all of these embed assumptions about how quickly AI capability and reliability will improve. Most strategies leave those assumptions unstated. Making them explicit is the first job of board leadership. Pressure-testing them against different scenarios is the second.

You don’t need to know when AGI will arrive. You need to know which scenario your strategy is quietly assuming, what it costs you if that scenario doesn’t unfold, and whether the bet is reversible. That is a conversation any board can have, with the people and information you already have in the room.

The forecasts will keep diverging. Your strategy doesn’t have to.

This is the first post in a series on preparing for AGI. Future posts will examine the resilience framework, the workforce question, regulation, and the signals worth watching in 2026.

References

[1] Amodei, D., “The Day After AGI” (session at the World Economic Forum Annual Meeting, Davos), 2026, https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/the-day-after-agi/

[2] Dilmegani, C. and Ermut, S. (AImultiple), “AGI/Singularity: 9,800 Predictions Analyzed”, 2026, https://aimultiple.com/artificial-general-intelligence-singularity-timing

[3] McKinsey & Company, “The State of AI in 2025”, 2025, https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

[4] Workplace Intelligence and Writer, “Enterprise AI Adoption in 2026: Why 79% Face Challenges Despite High Investment”, 2026, https://writer.com/blog/enterprise-ai-adoption-2026/

[5] European Union, “Regulation (EU) 2024/1689 (Artificial Intelligence Act)”, 2024, https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng

Implementation timeline summarised in: Software Improvement Group, “A Comprehensive EU AI Act Summary [January 2026 update]”, 2026, https://www.softwareimprovementgroup.com/blog/eu-ai-act-summary/

Note: in November 2025 the European Commission proposed a “Digital Omnibus” delay to certain high-risk obligations; at the time of writing this remains a proposal and has not been adopted.
