What Cursor’s Pro Plan "Unlimited-with-Rate-Limits" Means

Cursor’s Pro plan is now unlimited with rate limits. Learn what that means, how rate limits work, what burst and local limits are, and why users are confused.

Oliver Kingsley


19 June 2025


Cursor has shaken up its Pro plan recently. The new model—"unlimited-with-rate-limits"—sounds like a dream, but what does it actually mean for developers? Let’s delve into Cursor’s official explanation, user reactions, and how you can truly optimize your workflow.

Cursor Pro Plan Rate Limits: Everything You Need to Know

Understanding how rate limits work is key to getting the most out of your Cursor Pro Plan. Cursor meters rate limits based on underlying compute usage, and these limits reset every few hours. Here’s a clear breakdown of what that means for you.

What Are Cursor Rate Limits?

Cursor applies rate limits to all plans on Agent. These limits are designed to balance fair usage and system performance. There are two main types of rate limits:

1. Burst Rate Limits: cover short, high-activity sessions; once spent, burst capacity is slow to refill.

2. Local Rate Limits: cover steady, ongoing usage and reset every few hours.

Both types of limits are based on the total compute you use during a session.

💡
Tired of ambiguous rate limits and confusing quotas? Apidog is your all-in-one API development platform—design, test, and document APIs with ease. Plus, Apidog MCP Server is free and lets you connect your API docs directly to AI-powered IDEs like Cursor. Sign up now and experience next-level productivity!

How Do Rate Limits Work?

Rate limits are metered on the compute your requests consume rather than a fixed request count, and they replenish automatically over time: local limits reset every few hours, while burst limits refill more slowly.

What Happens If You Hit a Limit?

If you use up both your local and burst limits, Cursor will notify you and present three options:

  1. Switch to models with higher rate limits (e.g., Sonnet has higher limits than Opus).
  2. Upgrade to a higher tier (such as the Ultra plan).
  3. Enable usage-based pricing to pay for requests that exceed your rate limits.

Can I Stick with the Old Cursor Pro Plan?

Yes! If you prefer a simple, lump-sum request system, you can keep the legacy Pro Plan. Just go to your Dashboard > Settings > Advanced to control this setting. For most users, the new Pro plan with rate limits will be preferable.

Quick Reference Table

| Limit Type | Description | Reset Time |
|---|---|---|
| Burst Rate Limit | For short, high-activity sessions | Slow to refill |
| Local Rate Limit | For steady, ongoing usage | Every few hours |
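Cursor has not published exact quotas, but the burst/local split behaves much like a classic dual token bucket: a fast-refilling bucket for steady use backed by a slow-refilling reserve for spikes. The sketch below is an illustrative model only, with made-up numbers; it is not Cursor's actual implementation.

```python
# Illustrative dual token-bucket model of "local" and "burst" rate limits.
# All capacities and refill rates here are invented for demonstration.

class Bucket:
    def __init__(self, capacity: float, refill_per_hour: float):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_hour = refill_per_hour

    def refill(self, hours: float) -> None:
        # Tokens trickle back over time, capped at the bucket's capacity.
        self.tokens = min(self.capacity, self.tokens + self.refill_per_hour * hours)

    def try_spend(self, cost: float) -> bool:
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False


def request(local: Bucket, burst: Bucket, compute_cost: float) -> str:
    """Spend from the local bucket first; fall back to burst; else throttled."""
    if local.try_spend(compute_cost):
        return "ok (local)"
    if burst.try_spend(compute_cost):
        return "ok (burst)"
    return "rate limited"


local = Bucket(capacity=100, refill_per_hour=50)   # resets "every few hours"
burst = Bucket(capacity=300, refill_per_hour=10)   # "slow to refill"
```

In this model, light daily use never drains the local bucket, while a bursty session spills into the burst bucket and is eventually throttled until either bucket refills.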

User Reactions: Confusion, Frustration, and Calls for Clarity

Cursor’s new pricing model has sparked a wave of discussion, and not all of it is positive. The main complaints: terms like "burst" and "local" sound technical but remain vague without concrete numbers, and power users who need predictable performance report being throttled unexpectedly.

Key Takeaway: the new model works well for light and moderate use, but heavy users want clear, quantifiable limits.

What Rate Limits Mean for Your Workflow: The Developer’s Dilemma

So, what does “unlimited-with-rate-limits” mean for your day-to-day coding?

If you hit a rate limit, you can wait for it to reset, switch to a model with higher limits, upgrade your plan, or enable usage-based pricing.

Rate Limit Scenarios

| Scenario | What Happens? |
|---|---|
| Light daily use | Rarely hit limits, smooth experience |
| Bursty coding sessions | May hit burst/local limits, need to wait |
| Heavy/enterprise use | May need Ultra plan or usage-based pricing |

Pro Tip: If you want to avoid the uncertainty of rate limits and get more out of your API workflow, Apidog’s free MCP Server is the perfect solution. Read on to learn how to set it up!

Use Apidog MCP Server with Cursor to Avoid Rate Limits

Apidog MCP Server lets you connect your API specifications directly to Cursor, enabling smarter code generation, instant API documentation access, and seamless automation—all for free. This means Agentic AI can directly access and work with your API documentation, speeding up development while avoiding hitting the rate limit in Cursor.

Step 1: Prepare Your OpenAPI File

Have an OpenAPI/Swagger specification ready, either as a public URL or a local file path. The examples below use the public Petstore spec.
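If you don't already have a spec, the minimal sketch below shows the shape of a valid OpenAPI 3.0 document (the `/users` path and its summary are placeholders, not part of any real API):

```json
{
  "openapi": "3.0.0",
  "info": { "title": "My API", "version": "1.0.0" },
  "paths": {
    "/users": {
      "get": {
        "summary": "List users",
        "responses": { "200": { "description": "OK" } }
      }
    }
  }
}
```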

Step 2: Add MCP Configuration to Cursor

Open Cursor's settings, find the MCP section, and add the following configuration.

[Image: configuring MCP Server in Cursor]

For MacOS/Linux:

```json
{
  "mcpServers": {
    "API specification": {
      "command": "npx",
      "args": [
        "-y",
        "apidog-mcp-server@latest",
        "--oas=https://petstore.swagger.io/v2/swagger.json"
      ]
    }
  }
}
```

For Windows:

```json
{
  "mcpServers": {
    "API specification": {
      "command": "cmd",
      "args": [
        "/c",
        "npx",
        "-y",
        "apidog-mcp-server@latest",
        "--oas=https://petstore.swagger.io/v2/swagger.json"
      ]
    }
  }
}
```

Step 3: Verify the Connection

To confirm the setup works, ask the Agent in Cursor:

> Please fetch API documentation via MCP and tell me how many endpoints exist in the project.
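The count the Agent returns is easy to sanity-check by hand: an "endpoint" is a path plus an HTTP method. The standalone sketch below (which does not use the MCP server) counts operations in an OpenAPI document, assuming the standard `paths`/method layout; the tiny inline spec is a made-up example:

```python
# Count endpoints (path + HTTP method pairs) in an OpenAPI/Swagger document.
HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch", "trace"}

def count_endpoints(spec: dict) -> int:
    total = 0
    for path_item in spec.get("paths", {}).values():
        # Path items can also hold "parameters", "summary", etc., so
        # only keys that are HTTP methods count as endpoints.
        total += sum(1 for key in path_item if key in HTTP_METHODS)
    return total

# A tiny inline spec for illustration; the Petstore spec above has many more.
spec = {
    "paths": {
        "/pet": {"post": {}, "put": {}},
        "/pet/{petId}": {"get": {}, "delete": {}},
    }
}
print(count_endpoints(spec))  # prints 4
```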

Conclusion: Don’t Let Rate Limits Hold You Back

Cursor’s shift to an “unlimited-with-rate-limits” model reflects a growing trend in AI tooling: offer flexibility without compromising infrastructure stability. For most developers, this change provides more freedom to work dynamically throughout the day, particularly those who don’t rely on high-volume interactions.

However, the lack of clear, quantifiable limits has created friction, especially among power users who need predictable performance. Terms like “burst” and “local” limits sound technical yet remain vague without concrete figures. Developers planning long, compute-heavy sessions or working on large files may find themselves unexpectedly throttled. And while options like upgrading or switching models are available, they still introduce an element of disruption to a smooth coding workflow.

The good news? You’re not locked in. Cursor allows users to stick with the legacy Pro plan if the new system doesn’t suit your needs. And if you want to supercharge your AI-assisted coding even further, integrating Apidog’s free MCP Server can help you bypass some of these limitations entirely. With direct API access, instant documentation sync, and powerful automation tools, Apidog enhances your productivity while keeping you in control.

With Apidog MCP Server, you can connect your API specifications directly to your AI-powered IDE, generate code that matches your actual API documentation, and keep docs and code in sync, all for free.
