Porter’s Five Forces for LLM Builders—And Why It’s Not Enough


Every MBA student learns Porter’s Five Forces in their first strategy class. It’s one of the most influential business frameworks of the past 50 years—a structured way to analyze industry competition and profitability.

But can a framework from 1979 make sense of an industry that barely existed in 2020?

Let’s apply Porter’s Five Forces to the LLM industry—OpenAI, Anthropic, Google, Meta, and the rest—and see what it reveals. Then let’s examine where this classic framework breaks down.

In 1979, Harvard Business School professor Michael Porter published “How Competitive Forces Shape Strategy.” His insight: industry profitability isn’t random. It’s determined by five competitive forces.

                    Threat of
                    New Entrants
                         |
                         v
    Supplier -----> Industry <----- Buyer
    Power           Rivalry          Power
                         ^
                         |
                    Threat of
                    Substitutes

The five forces:

  1. Threat of New Entrants — How easily can new competitors enter? High barriers = good for incumbents.

  2. Supplier Power — Can suppliers dictate terms? Powerful suppliers capture value from the industry.

  3. Buyer Power — Can customers dictate terms? Powerful buyers squeeze industry margins.

  4. Threat of Substitutes — Can customers switch to alternatives? Substitutes cap prices and profits.

  5. Industry Rivalry — How intensely do competitors fight? Intense rivalry erodes margins.

The core insight: When all five forces are weak, the industry is profitable (think pharmaceuticals). When they’re strong, margins suffer (think airlines).

Now let’s apply this to LLM builders.

Force 1: Threat of New Entrants

Assessment: Bifurcating—high for frontier, low for “good enough”

Building a GPT-4 or Claude-class model requires:

Barrier           Scale
Training compute  $100M–$1B+ per frontier model
Talent            Perhaps <1,000 people worldwide can lead frontier training runs
Data              Trillions of high-quality tokens, increasingly scarce and legally contested
Time              2-3 years to build an organization capable of frontier work
Compute access    NVIDIA H100 allocation is rationed; relationships matter

This looks like an oligopoly in formation—only a handful of labs can compete at the frontier.

But below the frontier, the barrier is collapsing:

2023: Training a competitive model required $100M+
2024: Llama 3 and Mistral weights are free
2025: Fine-tune a 70B model for your use case: $10K-$100K

Meta’s open-weights strategy (giving away Llama) deliberately lowered barriers. Anyone can now deploy a capable model. Dozens of startups offer fine-tuning, hosting, and customization.

The bifurcation: Frontier model development = high barriers, oligopoly. Model deployment and customization = low barriers, fragmented competition.

Force 2: Supplier Power

Assessment: Extremely high

The LLM industry’s suppliers have enormous leverage:

NVIDIA's position:
- 80%+ market share in AI training chips
- H100/H200 GPUs are the constraint on frontier training
- 70%+ gross margins
- Demand exceeds supply; allocation is strategic
- AMD and Intel are years behind

No other supplier in tech has this leverage. LLM builders are price-takers with NVIDIA.

AWS, Google Cloud, and Azure control compute access. But:

  • They compete with each other (reduces power)
  • They’re also competitors in LLMs (creates tension)
  • Exclusive partnerships (Microsoft-OpenAI, Google-Anthropic, Amazon-Anthropic) muddy the supplier-customer relationship

Research talent is perhaps the highest-leverage “supplier”:

Top AI researcher compensation: $5-50M packages
Single departures reshape companies:
  - Ilya Sutskever leaving OpenAI
  - Noam Shazeer leaving Google for Character.ai
  - Dario and Daniela Amodei leaving OpenAI to found Anthropic

Acqui-hires are really talent acquisitions:
  - Microsoft absorbed Inflection's team
  - Google absorbed Character.ai's team

When your most important “supplier” is a small group of irreplaceable humans with perfect information and mobility, supplier power is maximal.

2023: "Train on the internet" was accepted
2024: Reddit, Twitter/X, publishers demand licensing fees
2025: NYT sues OpenAI, everyone lawyers up

High-quality training data is the new oil. Suppliers are learning to extract rents.

Finally, there’s energy. Frontier training runs require:
- Hundreds of megawatts
- Uninterrupted power for months
- Physical data center capacity

Power availability is becoming a strategic constraint.

The supplier power problem: The “picks and shovels” players—NVIDIA, cloud providers, talent—capture enormous industry value. LLM builders are squeezed in the middle.

Force 3: Buyer Power

Assessment: Medium, but rising

Enterprises can choose from:
- OpenAI (GPT-4, GPT-4o)
- Anthropic (Claude)
- Google (Gemini)
- Amazon Bedrock (multiple models)
- Azure OpenAI Service
- Open source (Llama, Mistral)
- Dozens of specialized providers

Procurement departments play vendors against each other. Multi-vendor strategies are common.

But switching costs exist:

  • Fine-tuned models are vendor-specific
  • Integration and compliance work is non-trivial
  • Prompt engineering is model-specific

Developer reality:
- OpenAI-compatible APIs are everywhere
- Swap Claude for GPT for Llama with minimal code
- Price arbitrage is easy (use cheapest model that works)
- Open source is free

For developers, switching costs approach zero, as the sketch below illustrates.
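
To make that concrete, here is a minimal sketch of a vendor swap. It assumes each provider exposes an OpenAI-compatible chat endpoint; the base URLs, model names, and environment-variable names are illustrative placeholders, not official values.

```python
# Minimal sketch: one client, many vendors, because many providers expose
# OpenAI-compatible chat endpoints. URLs, model names, and env var names
# below are illustrative placeholders.
import os

from openai import OpenAI

PROVIDERS = {
    "openai": {
        "base_url": "https://api.openai.com/v1",
        "model": "gpt-4o",
        "key_env": "OPENAI_API_KEY",
    },
    "hosted-llama": {  # any host that serves open weights behind a compatible API
        "base_url": "https://llama-host.example.com/v1",
        "model": "llama-3-70b-instruct",
        "key_env": "LLAMA_HOST_API_KEY",
    },
}


def ask(provider: str, prompt: str) -> str:
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"],
                    api_key=os.environ.get(cfg["key_env"], "unset"))
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


# Switching vendors, or arbitraging on price, is a one-word change:
print(ask("openai", "Summarize Porter's Five Forces in one sentence."))
```

The same pattern supports “use the cheapest model that works”: call the low-cost endpoint first and escalate only when an output check fails.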

Consumer reality:
- ChatGPT has brand recognition
- But switching to Claude or Gemini takes 30 seconds
- No data lock-in, no integration cost

Why buyer power is rising:

  • Model capabilities are converging (GPT-4 ≈ Claude ≈ Gemini for most tasks)
  • Prices collapsed 90%+ in 18 months
  • Open source offers “free”
  • Commoditization is accelerating

Force 4: Threat of Substitutes

Assessment: High and multi-dimensional

Porter defines substitutes as different products that serve the same need. For LLMs, substitutes include:

Open-weights models:
Llama 3 405B rivals GPT-4 for many tasks.
Cost: free (plus inference compute).
This isn't traditional substitution—it's strategic commoditization: Meta gives away models to commoditize competitors.

Smaller, specialized models:
A fine-tuned 7B model can beat GPT-4 for specific tasks.
Faster, cheaper, easier to deploy.
For structured problems, small models win.

Traditional software and classic ML:
Not every problem needs a $0.01/request LLM call.
Logistic regression still works for classification.
Rule engines still work for business logic.
The substitute is "don't use an LLM."

Human labor:
For some tasks, humans are still:
- Higher quality
- More accountable
- Required for compliance
- Cheaper at low volume

On-premises deployment:
Regulated industries can't send data to external APIs.
With open weights, deploy Llama behind your firewall.
The substitute is "same capability, different deployment."

Key insight: The “substitutes” in LLMs aren’t just competing products. Open source is a strategic weapon that reshapes the entire competitive landscape.

Force 5: Industry Rivalry

Assessment: Intense, multi-front war

Frontier labs:    OpenAI, Anthropic, Google DeepMind, Meta AI, xAI
Cloud-native:     Amazon (Titan), Microsoft (Phi), Cohere, AI21
China:            Baidu, Alibaba, ByteDance, Moonshot
Startups:         Mistral, Reka, Adept, Inflection (now Microsoft)

New credible entrants emerge quarterly.

Competition runs along every dimension:

Dimension      Current State
Price          Race to the bottom. GPT-4-class pricing dropped from $60/M tokens to $2-5/M.
Capabilities   Context windows (128K → 2M), multimodal, reasoning, tool use
Speed          Latency matters for real-time use cases
Safety/Trust   Enterprise buyers care; regulatory positioning matters
Distribution   Microsoft has Office. Google has Search. Apple has devices.
Ecosystem      Plugins, integrations, developer tools, fine-tuning services
Talent         Acqui-hires, poaching, research prestige

The cost structure makes the rivalry worse:

High fixed costs:       Frontier training costs $100M+; you need volume to recoup it
Low marginal costs:     Serving one more API call costs fractions of a cent
Low differentiation:    Models are converging in capability
High strategic stakes:  "This is the next platform"—everyone must compete

This is a classic formula for intense rivalry and margin compression.
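
To see why, run the numbers. A back-of-the-envelope sketch in which the fixed cost, serving cost, and volumes are all illustrative assumptions, not reported figures:

```python
# Back-of-the-envelope unit economics. Every number is an illustrative assumption.
FIXED_COST = 500e6             # assumed: training + research + staff for one model generation ($)
MARGINAL_COST_PER_MTOK = 0.50  # assumed: serving cost per million tokens ($)

for mtok_served in (1e6, 10e6, 100e6):  # Mtok (million-token) units served over the model's life
    breakeven = FIXED_COST / mtok_served + MARGINAL_COST_PER_MTOK
    print(f"{mtok_served:>12,.0f} Mtok served -> break-even price ${breakeven:,.2f}/Mtok")

# Illustrative output:
#    1,000,000 Mtok served -> break-even price $500.50/Mtok
#   10,000,000 Mtok served -> break-even price $50.50/Mtok
#  100,000,000 Mtok served -> break-even price $5.50/Mtok
```

Everyone needs enormous volume to cover fixed costs, and the marginal-cost floor is pennies, so cutting prices to win volume is individually rational for every competitor at once.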

The framework reveals real insights about the LLM industry:

1. Supplier power is the strategic constraint

NVIDIA and talent capture enormous value. LLM builders are squeezed. This explains:

  • Why labs pay $10M+ for researchers
  • Why NVIDIA has 75% gross margins while labs struggle to profit
  • Why compute partnerships (Microsoft-OpenAI, Google-Anthropic) are existential

2. Rivalry will compress margins

The conditions for intense competition all exist: high fixed costs, low marginal costs, multiple well-funded competitors, converging capabilities. This suggests:

  • API pricing will continue falling
  • Differentiation will be difficult
  • Profitability will be elusive for most players

3. Buyer power is increasing

As models commoditize, leverage shifts to customers. This explains:

  • Why OpenAI cut prices repeatedly
  • Why enterprises demand multi-model strategies
  • Why open source matters strategically

4. Barriers are bifurcated

Frontier model training is an oligopoly; deployment is fragmented. This suggests:

  • Consolidation at the frontier (3-5 labs)
  • Fragmentation in applications (hundreds of companies)
  • Value capture will be contested in the middle

Now for the critique. Porter’s framework has fundamental limitations when applied to the LLM industry.

Porter assumes industry structure is stable enough to analyze. LLM reality:

2022: GPT-3 is impressive but niche
2023: ChatGPT changes everything; Google declares "code red"
2024: Open source catches up; prices collapse 90%
2025: Agents? New architectures? Regulation?

A five forces analysis has a half-life of maybe 6 months. The framework wants stability; the industry delivers chaos.

Porter needs clear industry boundaries. Where are the boundaries here?

- Foundation model training?
- API inference services?
- Consumer chatbots?
- Enterprise AI platforms?
- AI features in existing products?
- Chips for AI?
- AI applications?

Is Anthropic competing with OpenAI (yes), Google (yes), Notion (sort of), and McKinsey (maybe)? Industry boundaries are fractal.

Porter assumes clean categories. Real relationships are messier:

Player      Their Roles
Microsoft   Investor in OpenAI + competitor (Copilot) + cloud supplier + customer + distribution partner
Google      Cloud supplier to Anthropic + direct competitor (Gemini) + search incumbent + investor
NVIDIA      Supplier to everyone + potential competitor (if they build models) + platform kingmaker
Meta        Competitor + supplier (gives away Llama) + uses AI for its own apps
Amazon      Cloud supplier + investor in Anthropic + competitor (Titan) + customer

When Microsoft is simultaneously OpenAI’s investor, supplier, distribution partner, and competitor, “supplier power” and “rivalry” collapse into game theory.

Porter’s original framework ignores complementors entirely. (Complementors were later formalized in the “Value Net” by Adam Brandenburger and Barry Nalebuff.)

For LLMs, complementors are critical:

Complementors:
- Fine-tuning services (Scale AI, Together AI)
- Vector databases (Pinecone, Weaviate, Chroma)
- Orchestration frameworks (LangChain, LlamaIndex)
- Evaluation tools (Braintrust, Weights & Biases)
- App developers building on APIs
- Enterprise integrators

The health of the complementor ecosystem determines API adoption. OpenAI’s moat is partly its developer ecosystem—not captured by five forces.

Porter treats industries as atomistic competition among independent firms. LLMs have network effects:

Developer network effects:
  More devs → more tools/libraries → easier development → more devs

Data flywheel:
  More users → more feedback/RLHF data → better models → more users

Ecosystem lock-in:
  More integrations → higher switching costs → more integrations

Mindshare compounding:
  "GPT" became generic term → default choice → self-reinforcing

These winner-take-most dynamics aren’t well captured by five forces, which assumes roughly linear competition.

Porter treats substitutes as competing products that serve the same need—like generic drugs undercutting branded pharmaceuticals.

But Meta’s Llama strategy isn’t substitution. It’s strategic commoditization:

Meta's playbook:
1. Spend $X billion training Llama
2. Give it away for free
3. Commoditize the model layer
4. Prevent OpenAI/Google from locking in developers
5. Value accrues to apps (where Meta competes)
6. Attract research talent who want open work
7. Shape standards and ecosystem

Open source here is a weapon, not a substitute. It requires game theory to analyze, not substitution curves.
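
One slice of that game, sketched with invented payoffs purely to show the structure of the argument; none of these numbers are estimates of Meta’s actual economics:

```python
# Toy model of the open-weights decision. Every number is an invented assumption
# used only to illustrate the strategic logic, not real data.

def meta_payoff(open_weights: bool) -> float:
    training_cost = 1.0                               # assumed, in $B
    model_layer_revenue = 0.0                         # Meta doesn't sell model access either way
    app_layer_value = 20.0 if open_weights else 12.0  # assumed: cheap models grow the app layer
    meta_app_share = 0.25                             # assumed share of app-layer value Meta captures
    return model_layer_revenue + meta_app_share * app_layer_value - training_cost

print("open weights:", meta_payoff(True))    # 0 + 0.25 * 20 - 1 = 4.0
print("keep closed :", meta_payoff(False))   # 0 + 0.25 * 12 - 1 = 2.0
```

Under these assumptions, giving the model away dominates keeping it closed, because Meta monetizes the layer that cheap models make bigger.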

Porter treats regulation as part of the external environment—relevant but background. For LLMs, regulation could restructure everything:

Potential impacts:
- EU AI Act: Compliance costs, transparency requirements
- US regulation: Unclear but potentially significant
- Export controls: NVIDIA chips restricted to China
- Copyright: Training-data lawsuits could force retraining or withdrawal of models
- Safety mandates: Testing requirements, release delays
- Licensing: Some propose licensing frontier labs

A single regulatory decision could make the entire five forces analysis obsolete overnight. Regulation isn’t background—it’s a primary strategic variable.

Porter treats labor as a factor input, like capital or materials. In LLMs, talent is THE moat:

Facts about AI talent:
- Perhaps 100 people can lead frontier training runs
- A single researcher leaving can reshape a company
- Acqui-hires are talent acquisitions disguised as M&A
- Research culture and publication freedom are competitive advantages
- $10-50M packages for top researchers

“Supplier power of labor” doesn’t capture this. Talent strategy deserves its own framework, not a bullet point under suppliers.

Porter assumes competition within a paradigm. But fundamental breakthroughs can invalidate everything:

Paradigm shifts:
2017: Transformers invented → RNNs obsolete
2020: Scaling laws proven → Massive compute becomes strategy
2022: RLHF/ChatGPT → Interaction paradigm shifts
2024: ???

The next architectural breakthrough could make current frontier models obsolete. Porter doesn’t handle Schumpeterian disruption well.

Porter assumes profit-maximizing firms. The LLM industry has:

- OpenAI: Capped-profit, safety mission (complicated by Microsoft deal)
- Anthropic: Public Benefit Corporation, safety-focused
- Meta AI: Gives away models; strategic motivation unclear
- Academic labs: Publication incentives, not profit

When major competitors aren’t straightforwardly profit-maximizing, competitive analysis gets strange. Why is Meta giving away $100M+ models?

Porter’s Five Forces is a starting point, not a complete analysis. For LLMs, combine it with:

Framework                                  What It Adds
Platform Economics                         Network effects, two-sided markets, ecosystem orchestration
Co-opetition (Brandenburger & Nalebuff)    Analyzing “frenemies,” complementors, the Value Net
Disruption Theory (Christensen)            Open source as low-end disruption
Real Options                               Valuing optionality in uncertain tech bets
Strategic Inflection Points (Andy Grove)   Recognizing and navigating paradigm shifts
Ecosystem Strategy (Adner)                 Orchestrator vs participant positioning
Regulatory Strategy (Baron)                Shaping regulation as competitive advantage
Talent Strategy                            Talent acquisition, culture, and retention as moat

What Five Forces gets right about LLMs:

  • Supplier power (NVIDIA, talent) is the key constraint
  • Rivalry will compress margins
  • Buyer power is increasing as models commoditize
  • Barriers are bifurcated (oligopoly at frontier, fragmented below)

What it misses:

  • Speed of change (analysis obsolete in months)
  • Boundary fluidity (what industry is this?)
  • Dual-role players (competitor-supplier-partner hybrids)
  • Complementors and ecosystems
  • Network effects and winner-take-most dynamics
  • Open source as strategic weapon, not just substitute
  • Existential regulatory risk
  • Talent as THE moat
  • Discontinuous technological change

The verdict:

Porter’s Five Forces is a useful starting point for analyzing the LLM industry. It surfaces important dynamics and forces structured thinking.

But it’s wildly insufficient on its own.

The LLM industry breaks Porter’s assumptions about stable boundaries, clean competitive categories, profit-maximizing actors, and incremental change. To understand where the industry is going, you need to combine five forces with platform economics, game theory, ecosystem strategy, and regulatory analysis.

Porter gave us the foundation. The LLM industry demands we build beyond it.