The Bounty and the Blank Cheque

There is a document circulating in the corridors where venture capital meets policy ambition. It is titled A Blueprint for the Next Decade, and its authors — Dr. Alexander Wissner-Gross and Dr. Peter Diamandis — are not given to understatement. The document argues that every major civilizational leap has been a war on a specific bottleneck: the Scientific Revolution declared war on ignorance; the Industrial Revolution declared war on muscle; the Digital Revolution declared war on distance. The Intelligence Revolution, they propose, is declaring war on attention — specifically, on the scarcity of expert human attention that currently limits how fast medicine advances, how quickly engineering problems are solved, and how rapidly discovery compounds.

The weapon in this war is the token: the unit of computation that allows AI systems to read, reason, and generate at speeds no human can match. The vision is seductive in its clarity. By "industrialising discovery" — building what the document calls an Industrial Intelligence Stack, complete with data layers, governance frameworks, and Targeting Authorities that issue blinded bounties for specific breakthroughs — humanity could compress centuries of scientific progress into a decade. Longevity escape velocity by 2035. Near-zero energy costs. A "quiet hum" of solved problems replacing the grinding friction of scarcity.

The document is worth taking seriously. Its diagnosis of the bottleneck is not wrong. Its proposed mechanisms — Outcome-Based Contracts, Compute Escrows, domain-collapse bounties — are genuinely novel. The 18-month "Foundry Window" it identifies, during which the technical standards of the next century are being hard-coded, is a real phenomenon. Path dependencies in infrastructure are not metaphors; they are the reason the world still runs on a power grid designed for incandescent bulbs.

But the document contains a silence that is louder than its proposals. It describes, in considerable detail, how to industrialise intelligence. It does not describe, except in the vaguest terms, how to legitimise the institutions that will wield it. And that silence is where the most consequential questions live.

• • •

The Attention Bottleneck Is Real — and Insufficient as a Diagnosis

The claim that expert human attention is the final bottleneck deserves examination before it is accepted as a premise. It is true that the number of trained oncologists, structural biologists, and climate engineers is finite, and that AI systems can now perform many of the cognitive tasks those experts perform — pattern recognition in medical imaging, protein folding prediction, materials simulation — at a fraction of the cost and at a speed that no human institution can match.

But expert attention is not the only scarce resource in the system. There is also scarce trust — the willingness of patients to accept AI-generated diagnoses, of regulators to approve AI-designed molecules, of communities to live near infrastructure built by AI-optimised supply chains. There is scarce coordination — the ability of competing institutions, jurisdictions, and interest groups to agree on what problems to solve and in what order. And there is scarce legitimacy — the quality that allows a decision, a standard, or an outcome to be accepted as valid not merely because it is technically correct but because it emerged from a process that the affected parties recognise as fair.

The Intelligence Revolution, as described in the blueprint, solves the first scarcity with considerable elegance. It has almost nothing to say about the other three.

This is not a minor omission. The history of technological transitions is largely a history of legitimacy crises. The Industrial Revolution produced not only factories but also child labour laws, trade unions, and eventually the welfare state — institutions that were not planned by the engineers who built the machines but were forced into existence by the social friction those machines generated. The Digital Revolution produced not only the internet but also platform monopolies, algorithmic discrimination, and a global crisis of epistemic authority — outcomes that were not anticipated in the original architecture and that remain, two decades later, only partially addressed.

The Intelligence Revolution will generate its own legitimacy crises. The question is whether the institutions being built now — in the Foundry Window — are being designed with that in mind.

• • •

Two Architectures of Abundance

The blueprint's central metaphor is the distinction between "The Muddle" and "The Machine." The Muddle is the existing system: bureaucratic, input-priced, scarcity-minded, resistant to change. The Machine — also called "The Rails" — is the new infrastructure: outcome-priced, abundance-oriented, technically rigorous, capable of compounding.

This is a useful heuristic, but it conceals a tension that the document does not fully explore. The Muddle is not simply inefficiency. It is also, in many of its components, the accumulated scar tissue of previous legitimacy crises. Regulatory approval processes are slow not only because bureaucrats are risk-averse but because thalidomide happened. Peer review is slow not only because academics are territorial but because replication crises revealed that speed without verification produces noise, not knowledge. Input-based pricing in medicine is not only a rent-seeking arrangement but also a response to the difficulty of attributing outcomes in complex biological systems where causation is genuinely hard to establish.

The Machine, as described, solves for speed and measurability. Outcome-Based Contracts pay for cures, not treatments. Compute Escrows release funds only when AI performance milestones are mathematically verified. Targeting Authorities issue bounties for specific, blinded results. These are elegant mechanisms for problems that can be precisely specified. They are considerably less elegant for problems that cannot.

Consider the bounty model. The document proposes offering, as an example, a two-billion-dollar prize for a room-temperature superconductor. This is a well-defined target: a material with a measurable property, verifiable by independent laboratories. The bounty model works because the outcome can be specified in advance and confirmed after the fact without ambiguity.

Now consider applying the same model to a treatment for depression, or a curriculum reform that improves long-term civic participation, or an urban planning intervention that reduces inequality. These are not problems with clean measurement functions. They are problems where the definition of success is itself contested, where the affected populations have legitimate standing to participate in setting the target, and where the second-order effects of any intervention are likely to be as consequential as the first-order effects. No Targeting Authority can issue a blinded bounty for "a more just society," because justice is not a property that can be read off a spectrometer.

The blueprint is, in this sense, a theory of abundance for the tractable. It is a less complete theory for the intractable — which is precisely where the most consequential human problems live.

• • •

The Foundry Window and Its Invisible Decisions

The document's most acute observation is the concept of the Foundry Window: the claim that the technical standards being established now, in the period roughly spanning 2025 to 2027, will function as path dependencies for the next century. This is historically well-grounded. The gauge of railway tracks, the voltage of electrical grids, the packet structure of the internet — these were not inevitable choices. They were contingent decisions made under time pressure that then became effectively irreversible because the cost of switching exceeded the benefit.

The Foundry Window argument implies an urgency: if you are not at the table when the standards are being set, you will be shaped by standards you had no hand in designing. This is a genuine concern, and the document is right to name it.

But the argument cuts in a direction the document does not fully acknowledge. If the Foundry Window is real, then the question of who is setting the standards is at least as important as the question of what standards are being set. The document's answer, implicitly, is that the standards should be set by the coalition of technologists, philanthropists, and forward-thinking institutions that are capable of moving fast enough to participate. This is a reasonable answer to the question of who can move fast. It is a less satisfying answer to the question of who has standing.

The populations most likely to be affected by the Intelligence Revolution — patients in healthcare systems, workers in industries facing automation, communities in regions where AI-optimised supply chains will concentrate or disperse economic activity — are not, for the most part, participants in the Foundry Window. They are its objects. The standards being hard-coded now will shape their lives in ways they have not consented to and may not understand until the consequences are already locked in.

This is not an argument against moving quickly. It is an argument that the design of the Foundry Window needs to include, as a first-class engineering problem, the question of how affected populations participate in the decisions that will govern them.

• • •

Return on Cognitive Spend and What It Cannot Measure

The document proposes replacing traditional financial metrics like EBITDA with a new measure it calls Return on Cognitive Spend (RoCS): the ratio of value generated to the cost of the intelligence deployed to generate it. This is an interesting proposal, and it captures something real — the declining cost of cognitive tasks is genuinely transforming the economics of knowledge work in ways that existing accounting frameworks do not reflect.
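The blueprint does not publish a formal definition of RoCS, but the ratio as described above can be sketched in a few lines. The function name, the currency units, and the example figures below are illustrative assumptions, not anything the document specifies:

```python
def return_on_cognitive_spend(value_generated: float, cognitive_spend: float) -> float:
    """Ratio of value generated to the cost of the intelligence deployed.

    Both inputs are assumed to be denominated in the same currency; how
    either quantity is actually measured is left open by the blueprint.
    """
    if cognitive_spend <= 0:
        raise ValueError("cognitive spend must be positive")
    return value_generated / cognitive_spend


# Illustrative only: $50M of attributed value from $2M of compute and model costs.
rocs = return_on_cognitive_spend(50_000_000, 2_000_000)  # → 25.0
```

The arithmetic is trivial; the difficulty, as the next paragraphs argue, lies entirely in what gets admitted into the numerator and the denominator.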

But RoCS, like all metrics, measures what it can measure and ignores what it cannot. It can measure the cost of generating a drug candidate. It cannot easily measure the cost of the regulatory uncertainty that delays its approval. It can measure the efficiency of an AI system at diagnosing a disease. It cannot measure the erosion of patient trust that follows from a high-profile AI diagnostic error. It can measure the speed at which a materials discovery is made. It cannot measure the geopolitical friction that arises when the discovery is made by one nation's AI infrastructure and the rest of the world must decide whether to accept or contest the resulting intellectual property claims.

The deeper issue is that RoCS is denominated in the currency of the Machine. It is a measure of performance within a system that has already been built. It does not capture the cost of building the system — the political capital, the social trust, the institutional redesign — that is required before the Machine can operate at scale. These costs are real, and they are not small. They are, in fact, the primary reason that previous technological transitions took decades rather than years to realise their potential.

| What RoCS can measure | What RoCS cannot measure |
| --- | --- |
| Cost of generating a drug candidate | Regulatory uncertainty delaying approval |
| AI diagnostic efficiency | Erosion of patient trust after AI errors |
| Speed of materials discovery | Geopolitical friction over IP claims |
| Throughput of the Machine | Cost of building the Machine's legitimacy |
| Cognitive spend per outcome | Social capital required to deploy outcomes |
• • •

What Could Be Built: Instruments for the Legitimacy Stack

The document proposes building an Industrial Intelligence Stack. A complementary project — not a replacement, but a parallel construction — would be building what might be called a Legitimacy Stack: the institutional infrastructure required to make the outputs of the Intelligence Revolution acceptable to the populations they affect. Several approaches are worth examining.

Participatory Targeting Authorities. The document's Targeting Authority model — bodies that set specific bounties for specific outcomes — could be extended to include structured participation by affected communities in the definition of targets. This is not a call for direct democracy in technical standard-setting, which would be unworkable. It is a call for deliberative processes — citizens' assemblies, structured consultations, adversarial review panels — that give affected populations a meaningful role in defining what "solved" means before the bounty is issued. The constraint is speed: deliberative processes are slow, and the Foundry Window is short. The risk of omitting them is that the solutions produced are technically correct but socially rejected.

Outcome Escrows with Legitimacy Conditions. The document proposes Compute Escrows that release funds when AI performance milestones are mathematically verified. A parallel instrument would be Outcome Escrows that release funds only when a defined legitimacy condition is also met — for example, when an independent review confirms that the affected population has been adequately informed and consulted, or when a regulatory body with genuine independence has reviewed the deployment plan. The failure mode is regulatory capture: if the legitimacy condition is defined by the same parties who benefit from its being met, it becomes a formality rather than a constraint.
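The structural point of such an escrow is that the legitimacy condition sits alongside the performance milestone as a release gate of equal rank, not as an afterthought. A minimal sketch, in which every class name, field, and predicate is a hypothetical illustration rather than anything the blueprint defines:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class OutcomeEscrow:
    """Hypothetical escrow: funds release only when every named condition passes.

    A verified performance milestone and an independent legitimacy review
    are entries in the same list of gates, so neither can be skipped
    without being removed explicitly — and visibly.
    """
    amount: float
    conditions: dict[str, Callable[[], bool]] = field(default_factory=dict)

    def add_condition(self, name: str, check: Callable[[], bool]) -> None:
        self.conditions[name] = check

    def try_release(self) -> tuple[bool, list[str]]:
        """Return (released, names of conditions that failed)."""
        failed = [name for name, check in self.conditions.items() if not check()]
        return (len(failed) == 0, failed)


# Illustrative use: the milestone is verified, but the review is still pending,
# so the escrow reports exactly which gate is holding the funds.
escrow = OutcomeEscrow(amount=2_000_000_000)
escrow.add_condition("performance_milestone", lambda: True)
escrow.add_condition("legitimacy_review", lambda: False)
released, failed = escrow.try_release()  # → (False, ["legitimacy_review"])
```

The capture risk named above lives in who writes the predicates: if the party that benefits from release also defines `legitimacy_review`, the gate is a formality regardless of how the escrow is structured.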

Distributed Foundry Participation. The Foundry Window argument implies that the standards being set now will be global in their reach. If that is true, then the process of setting them should be global in its participation. This does not mean that every nation must agree before any standard is adopted — that would produce paralysis. It means that the process should be designed to include, as a structural feature, representation from the regions and populations most likely to be affected by the standards being set, including those that currently lack the technical capacity to participate on equal terms. The constraint is that such inclusion requires investment in capacity-building that is not currently part of the blueprint's roadmap.

Legibility Requirements for AI-Generated Outcomes. One of the most consequential legitimacy problems in the Intelligence Revolution is the opacity of AI reasoning. A drug candidate generated by an AI system that cannot explain its reasoning in terms that a regulatory scientist can evaluate is not merely a technical challenge; it is a legitimacy challenge. Requiring that AI systems deployed in high-stakes domains produce outputs that are legible to human experts — not necessarily in full mechanistic detail, but in terms of the evidence they drew on and the uncertainties they acknowledged — is a design constraint that the blueprint does not currently impose. The failure mode is that legibility requirements slow the system down and create incentives to game the explanation rather than improve the reasoning.

• • •

The Second-Order Topology

If the Intelligence Revolution proceeds as the blueprint envisions — with the Machine displacing the Muddle, RoCS replacing EBITDA, and the Foundry Window hard-coding the standards of the next century — several second-order effects are worth anticipating.

The first is a bifurcation of institutional capacity. Organisations that can participate in the Machine — that can structure Outcome-Based Contracts, deploy AI systems, and measure performance against blinded targets — will compound their advantages rapidly. Organisations that cannot — which includes most public institutions, most civil society organisations, and most governments outside a small number of technologically advanced nations — will find themselves increasingly unable to govern the systems they are nominally responsible for overseeing. This is not a new dynamic; it is an acceleration of an existing one. But the acceleration matters, because the gap between technical capability and governance capacity is already a primary source of legitimacy deficits in the current system.

The second is a concentration of the Targeting Function. The document envisions Targeting Authorities as independent bodies that set bounties for specific outcomes. In practice, the entities most capable of defining precise, measurable targets for complex problems are the same entities that have the technical infrastructure to pursue them. This creates a structural tendency for the Targeting Function to be captured by the Machine — for the problems that get targeted to be the problems that the Machine is already equipped to solve, rather than the problems that most need solving. The history of prize mechanisms in science and technology suggests that this tendency is real and persistent.

The third is a redistribution of epistemic authority. If AI systems become the primary generators of scientific knowledge — designing experiments, interpreting results, proposing theories — the question of who has the authority to evaluate those outputs becomes acute. The current system of peer review, for all its flaws, distributes epistemic authority across a large community of trained specialists. A system in which AI-generated results are validated primarily by the organisations that produced them concentrates epistemic authority in ways that may be difficult to contest from outside.

• • •

What the Blueprint Does Not Resolve

The solveeverything.org document is a serious proposal by serious people, and it deserves serious engagement rather than dismissal. Its diagnosis of the attention bottleneck is correct. Its proposed mechanisms for industrialising discovery are genuinely innovative. Its identification of the Foundry Window as a moment of unusual leverage is historically well-grounded.

What it does not resolve — and what no single document could resolve — is the relationship between technical abundance and political legitimacy. These are not the same problem, and solving one does not automatically solve the other. The Industrial Revolution produced material abundance and a century of political conflict about how that abundance would be distributed and governed. The Digital Revolution produced informational abundance and an ongoing crisis about how that information would be verified, attributed, and governed. The Intelligence Revolution will produce cognitive abundance, and the question of how that abundance will be governed is not answered by building better rails.

The document's title — Solve Everything — is a provocation, not a claim. But provocations have consequences. They shape what gets built and what gets left out. A blueprint for the next decade that treats legitimacy as a downstream problem — something to be addressed after the Machine is running — is a blueprint that will generate the same pattern of friction, resistance, and institutional crisis that has characterised every previous transition.

The more interesting question is whether the Foundry Window is wide enough to include the legitimacy infrastructure alongside the technical infrastructure. Whether the same urgency that is being applied to building the Machine can be applied to building the institutions that will govern it. Whether "Return on Cognitive Spend" can be extended to include the cognitive spend required to make the outputs of intelligence acceptable to the humans who will live with them.

These are not rhetorical questions. They are engineering problems. They are harder than protein folding, and they do not have a clean measurement function. That is precisely why they are worth taking seriously now, while the window is still open.