How I Think About AI in Recruiting, and Who Gets the Wealth When AI Does Everything

In an excerpt from This Past Weekend with Theo Von (#599), I dug into a question that keeps coming back to me: as AI accelerates and takes on more of the work we used to think of as labor, how will people generate wealth? I want to explore that question through the lens of AI in recruiting, broader economic systems, and what real human agency might look like in a future where machines are the fastest route to discovery.

AI as a Fast-Forward Button

I often describe modern AI as a fast-forward button on technology and possibility. Information that used to be slow to measure and expensive to manipulate becomes something you can quantify and iterate on extremely fast. That characteristic changes everything — not just the tasks that get automated, but the structure of opportunity and value.

When we think specifically about AI in recruiting, this fast-forward effect is obvious. Screening resumes, sourcing candidates, scheduling interviews, even writing job descriptions — all of these can be accelerated and optimized by AI. That makes recruiters more productive and can reduce friction in job markets. But it also raises a deeper, harder question: if AI can do so much of the technical work, what is left for people to do, and how will they earn a living?
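
As a toy illustration of the screening step, here is a minimal sketch in Python that scores a resume against a job description using simple keyword overlap. The function names and the threshold are invented for the example; real platforms use far richer models, and any production system would need fairness auditing on top.

```python
import re

# Toy illustration of automated resume screening via keyword overlap.
# This stands in for the much richer models real platforms use; the
# function names and threshold here are invented for the example.

def tokenize(text: str) -> set[str]:
    """Lowercase a document and split it into a set of word tokens."""
    return set(re.findall(r"[a-z]+", text.lower()))

def screen_resume(resume: str, job_description: str, threshold: float = 0.3) -> bool:
    """Pass a resume if enough of the job description's terms appear in it."""
    job_terms = tokenize(job_description)
    overlap = len(job_terms & tokenize(resume)) / max(len(job_terms), 1)
    return overlap >= threshold

jd = "Seeking a recruiter with sourcing, scheduling, and interviewing experience"
resume = "Five years of sourcing candidates and scheduling interviews"
print(screen_resume(resume, jd))  # True: enough shared terms to pass
```

Even this crude heuristic shows both the speed gain and the risk: whoever picks the keywords and the threshold quietly decides who gets seen.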

Two Possible Futures: Widely Accessible AI vs. Concentrated AI

My thinking splits into two broad scenarios.

  • Universal access: Imagine that the most powerful systems — call it GPT-7 or something similar — are made widely available. Everyone gets access for free or near-free, and with that access people become dramatically more productive. You don't need to own the massive compute clusters; you need access to them. In that world, productivity becomes vastly more distributed and people can generate more wealth because they can do more, faster. The democratization of capability is powerful.
  • Concentrated ownership: The other, darker scenario is that the most transformative discoveries — new medicines, new energy sources, innovations in transportation and space — are generated by a handful of very powerful AI systems. The owners of those compute clusters and the IP that results could accumulate most of the value. If that value concentrates, society will face enormous inequality unless we invent new economic mechanisms to share it.

Both scenarios are plausible, and the policies and institutions we build in the next few years will shape which one becomes dominant.

Universal Basic Income vs. Universal Basic Wealth

People often point to universal basic income (UBI) as a straightforward policy response: if machines are doing a lot of work, hand people a regular cash payment so they can survive. I used to be excited about UBI, and I'm still open to it as part of the toolbox. But I think most people need more than money — they need agency.

Receiving a monthly check can be stabilizing, but it can also feel passive. If AI is producing enormous value and we simply distribute that value as a check without giving people a stake in how the future is made, many will feel like mere consumers of an economy they don't shape. For that reason, I prefer the idea of universal basic wealth — not just income, but ownership and participation in the assets that generate the value.

Why ownership matters

Ownership creates alignment and dignity. If you hold a stake in the systems creating value, you have a voice in governance, direction, and distribution. You can influence where research goes, what societal priorities are, and how benefits are shared. That sense of co-creation matters for a healthy social fabric.

So when debating AI in recruiting or any other domain, the policy question isn’t only about automation rates or displaced tasks; it’s about who gets to decide how powerful tools are used and who benefits in the long run.

A Thought Experiment: Tokens as Slices of AI Capacity

Here's a somewhat wild thought experiment I floated: imagine the world's AI systems generate an enormous number of "tokens" or units of computational output per year. Suppose we carve those tokens into two streams: one allocated to private capital and reinvestment, the other allocated to the global population. If there are roughly eight billion people on Earth, a meaningful tranche could be divided equally so every person receives a persistent stake in global AI capacity.

Concretely: say the planet generates a gargantuan number of tokens annually, and we split that output twenty ways. Twelve parts flow to the traditional capitalist system of ownership and reinvestment, and the remaining eight parts are divided equally across humanity's roughly eight billion people. That would mean each person holds a meaningful unit of AI capacity, a form of universal basic wealth. These tokens could be tradable, pooled for cooperative projects, or used to redeem services and creative outputs from AI.
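
To make the arithmetic concrete, here is a minimal sketch. The annual token budget is purely illustrative (the one-quadrillion figure is invented for the example); only the twelve-to-eight split comes from the thought experiment above.

```python
# Illustrative arithmetic for the token thought experiment.
# The annual token total is invented for the example; only the
# twelve-to-eight split comes from the thought experiment above.

ANNUAL_TOKENS = 1_000_000_000_000_000  # hypothetical: one quadrillion tokens/year
POPULATION = 8_000_000_000             # roughly eight billion people

private_share = ANNUAL_TOKENS * 12 / 20  # twelve parts to the capitalist system
public_share = ANNUAL_TOKENS * 8 / 20    # eight parts divided across humanity
per_person = public_share / POPULATION

print(f"Private pool:   {private_share:,.0f} tokens/year")
print(f"Public tranche: {public_share:,.0f} tokens/year")
print(f"Per person:     {per_person:,.0f} tokens/year")
```

Under those invented numbers, every person would hold 50,000 tokens a year. The point is not the figure but the mechanism: a recurring claim on capacity rather than a one-off check.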

That model blends redistribution with agency. People wouldn’t just get checks; they’d hold assets tied to the engine of value production.

Practical Questions and Challenges

All of this raises practical and ethical questions that demand answers. How do you define and measure these tokens? Who controls the ledger? How do you prevent fraud, concentration, and capture by bad actors? How do sovereign states integrate such a model with existing tax systems and social programs?

There are also cultural challenges. Even if we distribute wealth broadly, humans still crave purpose and meaning. People want to feel like they're shaping culture, art, and the future. So we need systems that enable co-creation — cultural institutions, public platforms, and democratic governance structures that let citizens participate, not just benefit materially.

AI in recruiting as an example

Consider hiring platforms powered by AI in recruiting. If those platforms are owned by a few companies, they control who sees which jobs, which candidates get promoted, and how hiring metrics are optimized. That concentrates both economic power and cultural influence (whose resumes succeed, what skills are valued, etc.). But if ownership is shared — for instance, through cooperative platforms, tokenized governance, or community-owned algorithmic services — then the benefits of AI in recruiting can be more broadly distributed and shaped by public values.
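
As one deliberately simplified illustration of what tokenized governance could mean mechanically, here is a sketch of stake-weighted voting over a platform decision. The names and structure are hypothetical; a real cooperative platform would need identity verification, caps on individual stakes, and audit trails to address the capture risks raised earlier.

```python
from collections import defaultdict

# Hypothetical sketch of stake-weighted voting on a cooperative
# hiring platform. All names are invented for illustration; real
# systems would add identity checks, stake caps, and audit trails.

def tally_votes(stakes: dict[str, float], votes: dict[str, str]) -> dict[str, float]:
    """Weight each member's vote by their token stake."""
    totals: dict[str, float] = defaultdict(float)
    for member, choice in votes.items():
        totals[choice] += stakes.get(member, 0.0)
    return dict(totals)

stakes = {"alice": 120.0, "bob": 80.0, "carol": 50.0}
votes = {"alice": "audit-the-ranking-model",
         "bob": "audit-the-ranking-model",
         "carol": "keep-current-model"}
print(tally_votes(stakes, votes))
# {'audit-the-ranking-model': 200.0, 'keep-current-model': 50.0}
```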

Design Principles for a Fair AI Future

I'm convinced we need a few design principles to guide policy and engineering:

  1. Broad access to capability: Wherever possible, make powerful systems available to a wide population so people can increase their productivity and creativity.
  2. Shared ownership: Create mechanisms for the public to hold equity or tokens tied to major AI systems, so value accrues to citizens as well as investors.
  3. Democratic governance: Build institutions that let people influence AI priorities — health, energy, climate, and cultural development — so the technology serves broadly shared goals.
  4. Safety and auditability: Ensure transparency and robust oversight to prevent misuse and to allow responsible distribution of power.

Culture, Not Just Invention

Another point I keep returning to: even if AI becomes the engine for most scientific and technical inventions, humans still invent culture. Culture — stories, art, social norms, rituals — is a collective project. It’s deeply distributed and cannot be fully delegated to machines without impoverishing what it means to be human.

That means our future economic models should protect space for human creativity and civic participation, even as machines accelerate capability. People want to co-create, not just consume. A future where every person has a stake and a voice is more resilient, more just, and ultimately more humane.

What I’d Like to See Next

Policy experimentation and bold institutional design. Pilot programs that test different mixes of universal basic income and universal basic wealth. Cooperative ownership models for core AI services. Transparent tokenization schemes that allow individuals to hold a slice of compute or creative output. And cultural investments that help communities shape the narratives and uses of AI.

AI in recruiting is just one microcosm of these broader tensions. It's a domain where the benefits are tangible and the risks of concentration are clear. If we get the systems right here (fair hiring algorithms, ownership models that empower communities, workers who can use AI to increase their agency), we'll have a template for other areas.

Conclusion: Aim for Agency, Not Just Checks

To summarize my current view: AI is accelerating possibilities. We can either allow that acceleration to concentrate value in a few hands or design institutions that share that value broadly. I’m less interested in simply giving people checks and more interested in giving people agency — ownership, governance, and participation. Whether through token-based experiments, cooperative platforms, or new public institutions, we need to explore models that give everyone a stake in the AI-driven future.

If you care about AI in recruiting or any sector touched by automation, the questions are the same: who gets access, who owns the systems, and who decides how they are used? If we can answer those questions with fairness and imagination, we may not just survive the AI transition — we may thrive.

"I want universal extreme wealth for everybody — but more than wealth, I want agency, so people can co-create the future together."

There’s no single right answer yet, but these are the kinds of ideas that should be on the table as we decide how to shape the economy of the next decade.