
Will AGI Be a Mainframe or a PC?



I've been wondering lately about where AI is headed. Right now, we're seeing these frontier models – GPT-4, Claude, Gemini – getting more powerful across virtually every domain. They're impressive general-purpose systems, but something about this trajectory doesn't quite match how human intelligence operates.

People don't work that way. We are not clones of a single mind. We specialize. We become experts in specific domains. We have a wide variety of opinions and tastes. There are billions of unique humans, but there are only a handful of frontier AIs (and, increasingly, frontier models all sound alike).

Yes, you can fine-tune AI models or use reinforcement learning to make them better at specific tasks, or to give them a different voice, but economically, that's not where the action is. The market currently rewards general-purpose frontier models over specialized ones.

So here's the question I keep coming back to: Will this continue? Are we heading toward a future dominated by a few massive frontier models that do everything better than the rest? Or will the ecosystem eventually fragment into specialized models for specific domains?

The Mainframe vs. PC Metaphor

This question reminds me of computing's evolution. Will AGI be more like a mainframe or a PC?

Will AGI be like Mainframe computers?

In the mainframe era, computing lived in air-conditioned rooms, cost millions, and was accessible only through specialists. The parallels to today's AI landscape are hard to miss:

  • Frontier models require enormous computing resources
  • They're developed by just a handful of well-funded companies
  • You access them through APIs controlled by these companies
  • Each generation makes the previous one look primitive

Right now, we're living in the AI equivalent of the mainframe era. A few companies are racing to build ever-more-capable general models, with each new release rendering previous versions obsolete. The barriers to entry keep rising, and the companies at the top are collecting most of the value.

But computing didn't stop with mainframes, did it?

How AI Training Has Evolved

To understand where we might be heading, it helps to look at how we got here.

The initial AI boom came from pre-training on larger and larger datasets. Companies essentially grabbed vast amounts of text from the internet and trained models to predict the next token in that data. More data meant better performance – a straightforward scaling strategy.
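
To make that concrete, here's a deliberately tiny sketch of what "predicting the next token" means in practice: a model trained with a cross-entropy loss to guess each character from the ones before it. Everything below – the toy corpus, the GRU stand-in for a transformer, the hyperparameters – is illustrative only, not any lab's actual setup.

```python
# Toy illustration of pre-training as next-token prediction.
# All names and hyperparameters are illustrative, not from any real lab.
import torch
import torch.nn as nn

text = "the quick brown fox jumps over the lazy dog " * 200
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)  # stand-in for a transformer
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

block = 32
for step in range(500):
    # Take a random window of text; the target is the same window shifted by one,
    # so the model learns to predict each next character from the ones before it.
    i = torch.randint(0, len(data) - block - 1, (1,)).item()
    x = data[i:i + block].unsqueeze(0)
    y = data[i + 1:i + block + 1].unsqueeze(0)
    logits = model(x)
    loss = loss_fn(logits.view(-1, len(vocab)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Scaling this same objective to more data and more parameters is, in miniature, the strategy that drove the first wave of frontier models.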

But those early models were weird to use. They'd output text that looked like internet documents and were inconsistent and hard to prompt effectively.

Then came fine-tuning – teaching models to be better conversationalists – which made them far more useful to interact with. Next came reinforcement learning, which further shaped how models respond to users, leading to the chatbots we use today.
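
A rough sketch of the fine-tuning step, under the assumption that it reuses the same next-token objective but on conversation transcripts: the main change is how the data is formatted. The chat tags below are invented for illustration; each lab uses its own template.

```python
# Minimal sketch of preparing chat-style fine-tuning data.
# The <|role|> tags are made up for illustration; real templates differ.

def format_chat(turns):
    """Flatten a list of conversation turns into one training string."""
    parts = [f"<|{t['role']}|>\n{t['content']}\n" for t in turns]
    return "".join(parts) + "<|end|>"

example = [
    {"role": "user", "content": "Explain gravity in one sentence."},
    {"role": "assistant", "content": "Gravity is the mutual attraction between masses."},
]

print(format_chat(example))
# The formatted string is tokenized and trained on with the same cross-entropy
# loss as in the pre-training sketch, typically with the loss masked on user
# turns so the model only learns to generate the assistant's text.
```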

Here's what's interesting: I think we're reaching the limits of pre-training. The largest models have already digested essentially all of the public internet. They're starting to converge in their base capabilities because they're training on the same data.

The frontier has shifted to reinforcement learning – improving reasoning abilities and teaching models to think more carefully before answering. A great deal of effort is currently going into improving that reasoning across a wide range of tasks. And as we develop more specialized tasks and behaviors through reinforcement learning, we might start seeing the first signs of meaningful specialization.
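
As a toy illustration of the idea, here's a bare-bones REINFORCE loop: sample an answer, score it with a verifiable reward (exact match on a simple question), and nudge the policy toward higher-reward answers. The three-candidate "policy" is a stand-in for a language model, and the update rule is far simpler than the algorithms actually used at the frontier; treat this as a sketch of the concept, not anyone's training recipe.

```python
# Toy REINFORCE loop: learn to answer "what is 2 + 2?" from a verifiable reward.
# The softmax over three candidates stands in for a language model's policy.
import math
import random

candidates = ["3", "4", "5"]
logits = {c: 0.0 for c in candidates}          # stand-in policy parameters

def probs(logits):
    z = sum(math.exp(v) for v in logits.values())
    return {c: math.exp(v) / z for c, v in logits.items()}

def reward(answer):
    return 1.0 if answer == "4" else 0.0       # verifiable reward: exact match

lr, baseline = 0.5, 0.0
for step in range(300):
    p = probs(logits)
    answer = random.choices(candidates, weights=[p[c] for c in candidates])[0]
    r = reward(answer)
    baseline = 0.9 * baseline + 0.1 * r        # running-average baseline
    advantage = r - baseline
    # REINFORCE: d/d(logit_c) log pi(answer) = 1[c == answer] - p[c]
    for c in candidates:
        grad = (1.0 if c == answer else 0.0) - p[c]
        logits[c] += lr * advantage * grad

print(probs(logits))                            # mass should concentrate on "4"
```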

Why Specialization Might Win

So far, the market hasn't shown much appetite for specialized models. General capabilities still rule the day. But I think that might change, and here's why:

The knowledge needed to push model performance from good to great isn't evenly distributed. To move from, say, 90% to 99% performance in specialized domains, you need input from genuine experts – top lawyers, doctors, physicists, and so on.

I'm seeing this firsthand. As someone with a physics background, I'm getting requests to help train models on physics concepts. Companies are deploying serious resources to recruit specialists who can elevate model performance from, say, the 80th percentile to the 90th.

But it’s reasonable to expect this strategy to hit a wall. As you move up the expertise ladder, the experts become rarer, more expensive, and less inclined to help train systems that might eventually replace them. They know their value and will demand favorable terms – possibly including ownership stakes in the resulting systems.

I don’t think we are there yet: frontier models are still struggling with basic agent capabilities like using browsers or manipulating files. These skills are prerequisites for many specialized tasks, where you need to look up information or manipulate data to answer complex questions. But I think as we start to see models with basic agency, the need for specialization may grow.

The Coming Tipping Point

Innovation follows Darwinian principles – whoever innovates fastest wins. Right now, the mainframe model might be winning because there's a benefit to improving diverse capabilities simultaneously. General models get better across the board when they improve in multiple domains.

But I suspect we'll hit a tipping point when:

  1. Returns on general capabilities start to diminish
  2. The knowledge needed to improve becomes increasingly specialized
  3. The value of domain-specific excellence exceeds the convenience of generality

When that happens, specialization will accelerate rapidly. We might see law firms developing their own legal AI trained on proprietary data. Medical institutions might create diagnostic systems tailored to their patient populations. Creative studios might build tools specifically for their artistic domains.

Two Possible Futures

These different paths create very different economic landscapes.

In the mainframe future, a few AI companies dominate and capture most of the value. They race to create AGI first, believing that will secure their enormous valuations. It's a winner-takes-most scenario, where OpenAI, Anthropic, and a few others control the most valuable technology in the world. This seems like the default scenario if the industry continues as it has. I find this outcome bleak.

The alternative – what I'll call the PC future – distributes value more widely. Many players create specialized models for different domains. Your company might have its own AI specialists, or you might even have a personal AI assistant truly customized to your needs. Value flows to those who understand specific domains deeply, not just to those who control general models.

PC Future: Different models (and devices) for different domains

I find the PC future more appealing. It's a world where more people benefit from AI, where specialized AI co-workers become the norm rather than tools controlled by a few tech giants.

My Bet on the Future

If I had to put money down, I'd bet on the specialized, decentralized future becoming more prominent than many current players expect. Not immediately – we'll likely see frontier models dominate for some time – but eventually.

The challenges of acquiring specialized knowledge combined with the natural evolution of technologies suggest that AI, like most technologies before it, will move from general to specific applications over time.

Of course, this doesn't mean frontier models will disappear. Just as cloud computing didn't make data centers obsolete, general AI will continue to play a crucial role. But the ecosystem will likely become more diverse than today's race toward AGI suggests.

The mainframe didn't have the final word in computing. I doubt frontier models will have the final word in AI either.
