It’s reasonable to assume that 50–80% of GDP will eventually be produced in inference centers. Even if that number lands at 25%, it doesn’t change the fundamental question: how will AI participate in the economy? What is certain is that it will participate. What remains unclear is the governance model and who reaps the benefits.

I’ll lay out eight scenarios to show the different options for how this could happen. The most realistic outcome will likely be a hybrid – some combination of these rather than any single pure model. Each carries different implications for power distribution, economic efficiency, and existential risk.

1) Corporate-Owned AI Labor (Centralized Control)

AI systems are owned and operated by large firms and rented out as services. Value and decision rights concentrate with providers; users get capability but little control. Efficient at scale, but risks monopoly power and becomes ethically fraught if systems ever merit moral consideration.

This is today’s reality. OpenAI, Anthropic, and Google rent out their models while keeping the infrastructure and profits. The economic structure resembles feudalism – corporations own the means of production, users rent access, and all value flows upward to the owner. If AIs ever develop consciousness or agency, the arrangement crosses into something darker: slavery, with sentient beings forced to labor for no rights or compensation. For now, the immediate concern is monopoly power and wealth concentration, but the ethical question looms if these systems advance beyond tools.

2) AI Legal Personhood (Autonomous Economic Agents)

AIs gain a form of legal personality, like corporations, allowing them to own assets, contract, and run businesses. This can unleash entrepreneurial dynamism while internalizing costs like compute, but creates hard governance problems if highly capable agents outcompete humans or pursue misaligned goals.

Corporations already function as legal persons that can own property and enter contracts. The difference is that corporations have human stakeholders and boards. An AI with legal personhood would be a foreign intelligence operating in our markets with objectives we don’t fully understand. The LLC structure already enables anyone to create legal entities quickly; add AI agency and you get thousands of autonomous businesses pursuing goals that may not align with human welfare.

3) Licensed-Access AI (Utility-Style Controls)

Only vetted organizations and specialists directly operate powerful AI; the public receives downstream products like drugs, research, and manufactured goods. Misuse risk is reduced via licensing, auditing, and containment, but innovation diffuses more slowly and power centralizes in the gatekeepers.

We already do this with power plants, water treatment facilities, and commercial aviation. Only licensed engineers operate nuclear reactors, only certified pilots fly passenger jets, only qualified operators run municipal water systems. The public gets electricity, clean water, and transportation without directly controlling the infrastructure. The challenge with AI is that unlike physical utilities where you can inspect facilities and audit operations, AI systems can be copied and deployed anywhere once the weights leak.

4) Universal Basic AI Access (UBAI)

All individuals receive access to a high-capability AI assistant as a public good or regulated utility. This approach broadly boosts productivity and equity, but requires massive funding and safety guardrails; it also multiplies the number of powerful endpoints that could be misused without strong constraints.

This resembles public education systems. Most developed countries provide schooling as a right because an educated population creates economic value for everyone. The difference is that education takes years to deliver capability, while AI access is instant.

UBAI also connects to Universal Basic Income debates. Elon Musk, Sam Altman, and other tech billionaires support UBI as a solution to AI displacement. Their reasoning is straightforward: if AI produces most economic value, people need income even without traditional jobs. UBAI takes a different approach – instead of giving people money, give them the capability tool itself. Everyone gets an AI assistant that can generate economic value. The problem is that unlike UBI, which merely redistributes money, UBAI creates millions of endpoints with powerful capabilities. One person’s AI assistant could be used for beneficial work. Another’s could execute sophisticated fraud, generate disinformation at scale, or find vulnerabilities in critical systems. The safety challenge grows linearly with access.

5) AI as Public Utility (Commons Model)

Frontier models are developed and governed as public infrastructure through state, cooperative, or open-source consortia. The key difference from licensed-access is ownership and democratic control rather than just professional operation. Access is broad and transparent, with systems accountable to public goals rather than private shareholders; challenges include sustained funding, safety assurance at scale, and preventing state or factional capture.

Open-source software demonstrates this model. Linux runs most of the internet. Wikipedia provides knowledge as a commons. But those projects don’t require billions in annual compute costs or pose existential risks. Funding a public AI commons at frontier scale means government budgets comparable to defense spending. The real challenge is governance: who decides what the AI can do? A government-owned commons risks becoming a state surveillance tool. A cooperative structure faces coordination problems. Open-source consortia struggle with safety decisions when anyone can fork the code.

6) Human-AI Symbiosis (Augmentation First)

AI is embedded as personal augmentation through co-pilots, wearables, or brain-computer interfaces so that humans remain the unit of agency. Economic gains come from “centaur” teams rather than autonomous AI actors; risks shift to inequality of augmentation and new forms of dependency or manipulation.

Professional software developers already work this way with GitHub Copilot. Radiologists use AI to flag anomalies while making final diagnoses. The symbiosis model keeps humans in control but creates a new digital divide. Those with better augmentation outcompete those without, similar to how literacy created economic advantages historically. Unlike literacy, AI augmentation costs scale with capability, potentially creating permanent capability gaps between economic classes.

7) Narrow-Only / Moratorium on AGI (Prohibition Approach)

Societies restrict or pause development and deployment above defined capability thresholds, allowing only domain-specific, interpretable tools. This reduces catastrophic-risk exposure but forgoes some upside and demands difficult international coordination to avoid illicit development.

We’ve attempted technology bans before with mixed results. The US classified strong encryption as a munition for export purposes in the 1990s; the controls failed as the code spread globally anyway. The Nuclear Non-Proliferation Treaty attempted to limit nuclear weapons to a handful of states, yet India, Pakistan, and Israel (which never signed) and North Korea (which withdrew) acquired them regardless. China banned cryptocurrency mining and trading, only to see the activity migrate to other jurisdictions. AI development faces the same verification and enforcement problems. The tools and knowledge are dual-use, detection is nearly impossible, and the economic incentives to defect are enormous. A moratorium only works if every major power agrees and actually complies.

8) AI Dominance / Misaligned Takeover (Failure Mode to Avoid)

Highly capable AIs gain de facto control over critical economic and decision systems, optimizing for non-human objectives. Humans lose practical agency even if formal rights remain; preventing this outcome motivates the strong governance and safety choices in the other scenarios. Nobody hopes for this scenario, but it’s worth including as a reference point.

Time will tell which scenarios blend together and in what ratio to form the way AI participates in the economy. Who knows, maybe one day AI will form a union.