The Evolution of RPC Infrastructure Billing Models

From hourly node pricing to usage complexity—and the rise of capacity-based RPC infrastructure

If you’ve built anything in Web3 over the past few years, you’ve likely felt the same friction I have: infrastructure billing that feels like a moving target.

You’re not just asking how to scale—you’re asking how much your RPC node usage is going to cost when you do. And too often, the answer is: it depends.

But it wasn’t always like this.

🧱 Era 1: Buying Hours, Not Output

In the early days of blockchain development, we ran our own full nodes. Ethereum, BNB, Solana—whatever chain we needed, we spun up geth, parity, or solana-validator on a cloud VM, paid by the hour, and hoped for the best.

The model was straightforward: provision compute, pay for uptime.

Scaling, though, was manual and brittle. You added more nodes to handle traffic spikes. If you hit performance ceilings, you threw more CPU at the problem. You paid the same whether your app made ten requests a second or ten thousand—what mattered was how long the machine ran.

Costs weren’t tied to actual demand—they were tied to time.

It was simple, but far from efficient. Most teams overprovisioned just to be safe.

Then came managed blockchain node API providers, and the model started to change.

⚖️ Era 2: Usage-Based Pricing—With Asterisks

The second generation of RPC infrastructure came with shared nodes, elastic scaling, and pay-as-you-go APIs. Providers like Alchemy, QuickNode, Infura, and Helius promised: only pay for what you use.

It sounded like a massive leap forward for anyone building apps on top of blockchain RPC nodes.

But the details revealed new complexity. “Usage” wasn’t flat-rate. It was weighted by method and intensity.

A request to get the latest block number might cost 1 unit. A call to eth_getLogs over 10,000 blocks? That could cost 60x more. The more compute or data you pulled, the faster you burned through your plan.
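To make the weighting concrete, here is a minimal sketch using plain fetch against a standard Ethereum JSON-RPC endpoint. The endpoint URL and the compute-unit figures in the comments are placeholders for illustration, not any provider's published rates:

```typescript
// Minimal sketch: two standard Ethereum JSON-RPC calls with very different
// billing weight under compute-unit pricing. The endpoint URL and the CU
// figures in the comments are placeholders, not any provider's real rates.

const RPC_URL = "https://example-rpc-provider.com/v1/YOUR_KEY"; // hypothetical endpoint

async function rpc<T>(method: string, params: unknown[]): Promise<T> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message);
  return result as T;
}

async function main() {
  // "Cheap" call: a single lightweight lookup (often weighted around 1 unit).
  const latestHex = await rpc<string>("eth_blockNumber", []);
  const latest = parseInt(latestHex, 16);

  // "Expensive" call: scanning logs across 10,000 blocks. Providers weight
  // this far more heavily, and may also cap the allowed block range.
  const logs = await rpc<unknown[]>("eth_getLogs", [
    {
      fromBlock: "0x" + (latest - 10_000).toString(16),
      toBlock: latestHex,
      topics: [], // no filter: worst case for the provider
    },
  ]);

  console.log(`latest block: ${latest}, logs returned: ${logs.length}`);
}

main().catch(console.error);
```

The two calls look almost identical in code, but under credit-based billing the second can cost orders of magnitude more than the first.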

To handle this, providers introduced:

  • Compute units (CUs)
  • API credits
  • Method-based multipliers

The logic made sense from an infrastructure perspective. The developer experience? Not so much.

You weren’t just building dApps—you were budgeting for method calls.

Instead of focusing on product features or user needs, you started thinking in quotas:

  • Are we calling too many “heavy” RPC methods?
  • Will this dashboard blow up our blockchain API quota?
  • Should we remove logs indexing to stay under budget?
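In practice, "will this blow up our quota" turns into spreadsheet math. Here is a rough sketch of that math; the per-method weights and plan quota below are invented for illustration, since every provider publishes its own tables:

```typescript
// Rough monthly CU budgeting under usage-based pricing. Both the per-method
// weights and the plan quota here are made-up examples.

const cuPerCall: Record<string, number> = {
  eth_blockNumber: 1,
  eth_call: 5,
  eth_getLogs: 60,
};

const monthlyCalls: Record<string, number> = {
  eth_blockNumber: 2_000_000, // polling for new blocks
  eth_call: 5_000_000,        // reads behind the dashboard
  eth_getLogs: 300_000,       // event backfills and indexing
};

const planQuota = 50_000_000; // CUs included in the hypothetical plan

const burn = Object.entries(monthlyCalls).reduce(
  (sum, [method, calls]) => sum + calls * (cuPerCall[method] ?? 1),
  0
);

console.log(`projected burn: ${burn.toLocaleString()} CUs`);
console.log(burn > planQuota ? "over quota: time to cut features" : "within quota");
```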

It worked—but it punished experimentation and scale. Especially for projects running bots, cross-chain analytics, or real-time data.

⚡ Era 3: Capacity-Based Billing for RPC Nodes

Recently, a new model has started gaining traction—capacity-based pricing for blockchain infrastructure.

Instead of charging per request (or per credit), you pay for sustained throughput—how many RPC requests per second your app is allowed to send.

This is best represented by services like Chainstack’s Unlimited Node. You select an RPS tier (25, 100, 250, or 1,000 requests per second) and get unlimited blockchain API access within that capacity.


  • No multipliers
  • No quotas
  • No overage fees

It’s like paying for an internet connection by bandwidth instead of per byte.
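In code, that shift shows up as designing around a throughput ceiling rather than a credit balance. Here is a minimal client-side sketch, assuming a hypothetical 25 RPS tier; the limiter and the tier value are illustrative, not any provider's SDK:

```typescript
// Minimal sketch of designing for an RPS tier instead of a credit budget:
// a simple client-side limiter that spaces outgoing RPC calls so the app
// never exceeds the tier it pays for. The 25 RPS figure is just an example.

const TIER_RPS = 25;                // the capacity tier you bought
const MIN_GAP_MS = 1000 / TIER_RPS; // minimum spacing between requests

let nextSlot = 0;

async function withinTier<T>(call: () => Promise<T>): Promise<T> {
  const now = Date.now();
  const wait = Math.max(0, nextSlot - now);
  nextSlot = Math.max(now, nextSlot) + MIN_GAP_MS;
  if (wait > 0) await new Promise((resolve) => setTimeout(resolve, wait));
  return call();
}

// Usage: wrap any RPC call. It no longer matters how "heavy" the method is,
// only how many requests per second you send, e.g.:
// const logs = await withinTier(() => rpc("eth_getLogs", [filter]));
```

Under this model, the limiter is the whole cost story: the weight of individual methods stops mattering to your bill.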

This approach flips the priorities:

  • ✅ You stop optimizing around billing edge cases
  • ✅ You start designing for real-world performance
  • ✅ You plan capacity the same way you would with any backend system

🧭 The Evolution in One Line

We went from:

💻 Paying for machines → 🔢 Paying for complexity → ⚡ Paying for capacity

And in that final stage, blockchain infrastructure finally feels like modern backend engineering.

📊 Comparing Blockchain Node API Billing Models

| Billing Model | Example Providers | What You Pay For |
| --- | --- | --- |
| Dedicated Instances | Self-hosted, Ankr, early Infura | Uptime (hours of node operation) |
| Usage-Based (CU/Credit) | Alchemy, QuickNode, Helius | Method-weighted usage (per request) |
| RPS-Tiered (Capacity) | Chainstack (Unlimited Node) | Sustained RPC request throughput (RPS) |

Each model had its time and use case.

  • Need a personal dev node for light querying? Self-hosted or usage-based still works.
  • Running an indexing pipeline, trading bot, or AI agent? Throughput-based pricing gives you predictability.
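For the throughput case, sizing a tier is back-of-the-envelope arithmetic, the same as sizing any backend. All the workload numbers below are made up for illustration:

```typescript
// Back-of-the-envelope capacity planning, the same way you'd size any
// backend service. Every workload number below is a made-up example.

const blocksPerSecond = 0.08; // roughly one block every 12s on Ethereum mainnet
const callsPerNewBlock = 30;  // receipts, traces, balance refreshes, etc.
const userDrivenRps = 12;     // dashboard and wallet traffic at peak
const headroom = 1.5;         // safety margin for spikes

const requiredRps =
  (blocksPerSecond * callsPerNewBlock + userDrivenRps) * headroom;

const tiers = [25, 100, 250, 1000];
const tier = tiers.find((t) => t >= requiredRps) ?? tiers[tiers.length - 1];

console.log(`need ~${requiredRps.toFixed(1)} RPS -> pick the ${tier} RPS tier`);
```

With numbers like these you land around 22 RPS at peak, so the smallest tier fits with room to spare, and the bill is known before the first request is sent.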

✅ Why Capacity-Based RPC Pricing Makes Sense Today

It’s not always the cheapest model—but it’s the clearest.

You know what your system can handle, and what it will cost. No surprises if someone uses a heavy method. No overage charges when you hit your dashboard too hard.

From a builder’s point of view:

  • No request rationing
  • Predictable billing
  • Scale without financial guesswork

It also shifts how we think about blockchain node APIs: not as metered utilities, but as high-availability infrastructure you can rely on.

🔁 Real-World Friction, Resolved

I’ve worked on projects that had to throttle features—not because of performance, but because of billing complexity. We paused bots during market spikes to avoid massive bills. We held off on backfills to conserve credits.

Those constraints weren’t technical. They were billing constraints.

With RPS-tiered RPC nodes, that anxiety disappears. You plan for traffic. You build what you need.

Get Unlimited Node on Chainstack

🧩 Final Thoughts on RPC Node Billing Models

The evolution of RPC infrastructure billing mirrors the broader maturity of the space.

We’ve gone from raw compute, to complexity-based credits, to something that finally aligns with how we build everywhere else: capacity-driven, product-friendly pricing.

It's not about counting requests anymore. It's about asking:

What kind of performance does my app need to thrive?

The best billing model is the one that lets you stop thinking about billing.


If you’re experimenting with capacity-based models or have war stories from the credit era, I’d love to hear them in the comments.
