There is a number circulating in the AI safety community that serious people cite with the calm of someone reading a quarterly earnings report. Approximately 25% probability that the development of artificial general intelligence leads to human extinction, or to outcomes so catastrophic that they are functionally equivalent to it. The figure comes from surveys of machine learning researchers and from the stated beliefs of credentialed scientists who work on these systems daily. It is not a fringe estimate. It is not science fiction. It is the considered judgment of people who understand, better than almost anyone alive, what they are building.
The question this post addresses is not whether the estimate is correct. It is whether the people engaged in the AGI debate, on all sides, are reasoning about it correctly.
They are not.
The accelerationist position holds that a 25% risk of existential catastrophe is acceptable, even unremarkable, given the magnitude of potential benefit. AGI that cures cancer, eliminates poverty, accelerates scientific discovery at a pace no human institution could match. Weigh the upside against the downside, run the expected value calculation, and the math supports moving forward. The 25% is a cost of doing transformative business. If you would accept a 1-in-4 chance of side effects for a drug that cured terminal illness, you should accept a 1-in-4 chance of catastrophic risk for a technology that might end scarcity itself. Speed is not recklessness. It is the rational response to a calculation that clearly favors action.
The strongest version of this argument does not flinch from the number. It sits with it. It acknowledges that the risk is real, and then it points to the counterfactual: what is the probability of catastrophic harm from not developing AGI? Climate collapse, pandemic, resource conflict, the slow grinding failure of institutions that cannot coordinate fast enough to solve the problems they have already created. If the choice is between 25% risk from AGI and some comparable or higher risk from civilizational decline, then the expected value argument runs in only one direction. You build, and you build fast, because the alternative is also a kind of dying.
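To make the structure of that argument concrete, here is a minimal sketch of the expected value comparison the accelerationist is running. Every number in it is an assumption chosen for illustration, not a figure from the surveys or from this post; the point is only that the conclusion follows mechanically from the inputs you feed it.

```python
# Illustrative only: the accelerationist expected-value comparison.
# Every number below is an assumption chosen for demonstration,
# not a figure from any survey or from this post.

P_DOOM_WITH_AGI = 0.25      # assumed extinction probability if AGI is built
P_DOOM_WITHOUT_AGI = 0.25   # assumed catastrophe probability from civilizational decline
UPSIDE_VALUE = 1.0          # normalized value of a good AGI outcome
STATUS_QUO_VALUE = 0.1      # normalized value of muddling through without AGI
CATASTROPHE_VALUE = 0.0     # normalized value of the catastrophic outcome

ev_build = (1 - P_DOOM_WITH_AGI) * UPSIDE_VALUE + P_DOOM_WITH_AGI * CATASTROPHE_VALUE
ev_dont = (1 - P_DOOM_WITHOUT_AGI) * STATUS_QUO_VALUE + P_DOOM_WITHOUT_AGI * CATASTROPHE_VALUE

print(f"EV(build) = {ev_build:.3f}   EV(don't build) = {ev_dont:.3f}")
# With these assumed inputs the calculation favors building. That is the
# accelerationist point: the conclusion is driven entirely by the inputs.
```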
This is not a trivial argument. It is internally coherent and it deserves to be engaged on its own terms.
The doomer position holds that any non-trivial probability of human extinction is a categorical disqualifier. Not a cost to be weighed. A stop sign. The logic is deontological rather than consequentialist: there are outcomes so severe, so permanent, and so irreversible that no expected value calculation can justify accepting them as a probability distribution you choose to enter. A 25% chance that you lose everything, forever, is not 25% of an acceptable risk. It is a number that should end the conversation about moving forward, full stop. The researchers sounding the alarm, from Nick Bostrom's foundational work on superintelligence to the more recent public statements of Yoshua Bengio and others, are not catastrophizing. They are applying elementary decision theory to a situation where the downside has no lower bound and no recovery.
The doomer position also has a structural critique of the accelerationist expected value argument that is worth taking seriously. Expected value reasoning works when you can run the experiment more than once. When a drug fails in Phase 3 trials, you learn from the failure, adjust the compound, and try again. The law of large numbers is real: across many trials, outcomes converge on their probabilities and you accumulate the information needed to improve your approach. None of that applies here. There is one Earth. There is one first attempt at AGI development at civilizational scale. The feedback loop that makes expected value reasoning valid is precisely the thing that does not exist when the outcome is permanent and total. The doomer is not being irrational. The doomer is pointing out that the accelerationist is using a statistical framework that requires retries for a situation in which there are none.
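A small simulation makes that structural critique concrete. Under assumed, purely illustrative payoffs, a gamble with positive expected value reliably pays off across many independent trials; a single trial simply lands on one branch or the other, and there is no second draw to average out the bad one.

```python
# Illustrative only: why expected value reasoning leans on repetition.
# All probabilities and payoffs are assumptions for demonstration.
import random

random.seed(0)

P_FAIL = 0.25          # assumed probability of the bad outcome per trial
WIN, LOSS = 1.0, -2.0  # assumed payoffs; EV = 0.75*1.0 + 0.25*(-2.0) = +0.25

def one_trial() -> float:
    return LOSS if random.random() < P_FAIL else WIN

# Across many independent trials, the average converges on the positive EV.
many = [one_trial() for _ in range(100_000)]
print(f"average over 100,000 trials: {sum(many) / len(many):.3f}")  # ~ +0.25

# A single trial does not converge on anything. It lands on one branch,
# and one branch in four is the loss there is no recovering from.
print(f"single trial outcome: {one_trial():+.1f}")
```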
What both positions miss is the specific way that the 25% figure distorts ordinary probabilistic intuition, and what that distortion demands architecturally.
Standard risk reasoning is calibrated on reversibility. When an engineer designs a bridge to a certain safety tolerance, the implicit logic is that if a failure mode occurs, the event informs the next design. When a fund manager accepts a 20% probability of a bad outcome on a position, the logic is that across a portfolio and across time, the wins will outweigh the losses and the information from the losses will sharpen future decisions. Probability reasoning is a tool for navigating a world in which you get more than one turn. It is a navigation instrument, not a calculator for one-way doors.
The 25% AGI extinction estimate is not a standard risk. It is a one-way door probability. You do not get to observe the bad outcome and update your model. There is no portfolio of civilizations across which you diversify. There is no institutional memory that survives to build the better version. The accelerationist who runs the expected value calculation and finds it favorable is using a financial model in a physics problem, and the physics does not care about the model.
A 25% existential risk is not 25% of ordinary risk. It is a fundamentally different category of number, and treating it as though the same reasoning applies is not boldness. It is a categorical error.
But the doomer makes a different categorical error by treating the 25% as a fixed constant and concluding that the only rational response is cessation. The probability is not fixed. It is a function of choices, specifically the choices made about alignment research, governance architecture, deployment sequencing, and oversight infrastructure. A 25% probability at current investment levels in safety research is not the same as a 25% probability at the investment levels that the number itself implies are warranted. The doomer who calls for halting development is, in many cases, implicitly assuming that the probability cannot be moved. The entire history of engineered safety suggests that it can, if the engineering is taken seriously enough.
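One way to see the difference between the two readings is to treat the probability as a function of safety investment rather than as a constant. The functional form and every parameter below are pure assumptions for illustration; nothing in the surveys or in this post specifies how the curve actually bends.

```python
# Illustrative only: extinction risk as a function of safety investment,
# under an assumed (entirely hypothetical) exponential risk-reduction model.

BASELINE_RISK = 0.25        # assumed risk at current safety investment levels
HALVING_INVESTMENT = 50.0   # assumed spending (arbitrary units) that halves the risk

def risk(investment: float) -> float:
    """Hypothetical curve: each HALVING_INVESTMENT of spending halves the residual risk."""
    return BASELINE_RISK * 0.5 ** (investment / HALVING_INVESTMENT)

for spend in (0, 50, 100, 200):
    print(f"investment={spend:>3} -> assumed residual risk {risk(spend):.3f}")
# The doomer reading treats risk(x) as flat at 0.25 for every x.
# The point here is only that the argument changes if it is not.
```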
What neither position is willing to say clearly is this: the 25% figure, if taken at face value, is not a green light and not a stop sign. It is a specification. It tells you what the highest-priority engineering problem in human history is, and it tells you what level of institutional seriousness that problem requires.
The position that the 25% probability actually supports is neither acceleration nor cessation. It is rigor.
The expected value of safety investment at this probability level is not merely large. It is, under any remotely defensible utility function that assigns non-trivial weight to human survival, effectively unbounded. If the probability of extinction is 25% and you can reduce it by 5 percentage points through serious alignment research, the value of that reduction, calculated over the full population of humans who would exist in a non-catastrophic future, exceeds the combined annual GDP of every nation on Earth. This is not rhetoric. It is arithmetic applied to the actual stakes. The reason it feels hyperbolic is that human cognition is not equipped to reason intuitively about numbers of this magnitude. That is not a reason to dismiss the calculation. It is a reason to be suspicious of the intuitions that tell you the number feels too large.
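The arithmetic fits in a few lines. The inputs below are assumptions chosen for illustration: a standard value-of-statistical-life figure applied only to people alive today, with future generations ignored entirely, and a round approximation of global GDP.

```python
# Illustrative only: the order-of-magnitude arithmetic behind the claim.
# All inputs are assumptions; the conclusion is about magnitude, not precision.

RISK_REDUCTION = 0.05        # assumed: serious alignment work cuts risk by 5 points
PEOPLE_ALIVE_TODAY = 8e9     # ~8 billion, ignoring everyone not yet born
VALUE_PER_LIFE = 1e7         # assumed $10M, a common value-of-statistical-life figure
GLOBAL_ANNUAL_GDP = 1e14     # ~$100 trillion, rough round figure

value_of_reduction = RISK_REDUCTION * PEOPLE_ALIVE_TODAY * VALUE_PER_LIFE
print(f"value of a 5-point risk reduction: ${value_of_reduction:.2e}")  # ~$4e15
print(f"ratio to global annual GDP:        {value_of_reduction / GLOBAL_ANNUAL_GDP:.0f}x")
# Roughly 40x world GDP before counting a single future generation. Including
# them pushes the number toward the 'effectively unbounded' range described above.
```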
The institutional implication is direct. Alignment and interpretability research, governance architecture for AI development, oversight mechanisms that can operate at the speed at which these systems are being deployed: these are not regulatory obstacles to progress. They are the engineering requirements that the probability estimate itself specifies. A reasonable engineer, told that a system has a 1-in-4 chance of catastrophic failure with no recovery mechanism, does not argue about whether to invest in safety. The engineer asks what investment level reduces the failure probability to an acceptable range and then pursues it. The AGI development community has, on balance, not yet organized itself around that question with the institutional seriousness it requires.
The Middle Way position here is not that we split the difference between the accelerationists and the doomers, finding some comfortable rhetorical middle ground between enthusiasm and fear. The Middle Way position is that both camps are reasoning about the wrong variable. The accelerationists are arguing about whether to proceed. The doomers are arguing about whether to proceed. Neither is arguing, with sufficient institutional force, about how to proceed in a way that takes the 25% seriously as an engineering constraint rather than a talking point.
RaaS Stewardship, as a framework for governing agentic systems at the enterprise level, is built on exactly the distinction the AGI debate is missing: the difference between deploying capable systems recklessly and deploying them within a governance architecture that preserves human judgment where it cannot be safely delegated. The enterprise analog is not the civilizational one, but the architectural logic is the same. Bounded autonomy. Verifiable outcomes. Human oversight on the decisions that cannot be recovered from if the agent gets them wrong. The question is not capability. It is governance infrastructure, and governance infrastructure is an engineering problem, not a philosophical one.
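As a sketch of what that architectural logic can look like in practice, here is a minimal, hypothetical gate that lets an agent act autonomously on recoverable actions while routing irreversible ones to a human reviewer. The names, categories, and approval interface are invented for illustration; they are not an API from any published RaaS Stewardship specification or existing framework.

```python
# Illustrative only: a minimal bounded-autonomy gate. Every name and category
# here is hypothetical, not an interface from any existing framework.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    name: str
    irreversible: bool   # can the outcome be recovered from if the agent is wrong?
    execute: Callable[[], str]

def run_with_stewardship(action: AgentAction,
                         human_approves: Callable[[AgentAction], bool]) -> str:
    """Execute recoverable actions autonomously; hold irreversible ones for human judgment."""
    if action.irreversible and not human_approves(action):
        return f"HELD: '{action.name}' requires human sign-off and was not approved."
    return action.execute()

# Usage sketch: drafting an email is recoverable, wiring funds is not.
draft = AgentAction("draft_customer_email", irreversible=False, execute=lambda: "draft saved")
wire = AgentAction("wire_transfer_$2M", irreversible=True, execute=lambda: "funds sent")

print(run_with_stewardship(draft, human_approves=lambda a: False))  # runs autonomously
print(run_with_stewardship(wire, human_approves=lambda a: False))   # held for review
```

The design choice the sketch encodes is the one the paragraph above describes: capability is never the gating variable, recoverability is.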
If the 25% is real, and the people who spend their careers on this believe it is, then the most important sentence in the AGI debate is not “we must accelerate” or “we must stop.” It is: what would institutional-grade safety architecture actually look like, and why have we not built it yet?
The operational translation of that question, for organizations building or deploying AI systems at any scale today, is explored in the research at crownpointadvisorygroup.com. The governance frameworks developed there for enterprise agentic deployment are grounded in the same architectural logic: capable systems, within bounded governance, with human judgment preserved at the irreversible decisions.