What Cursor’s Pro Plan "Unlimited-with-Rate-Limits" Means

Cursor’s Pro plan is now "unlimited with rate limits." Learn what that means, how burst and local rate limits work, and why users are confused.

Oliver Kingsley

19 June 2025

Cursor has shaken up its Pro plan recently. The new model—"unlimited-with-rate-limits"—sounds like a dream, but what does it actually mean for developers? Let’s delve into Cursor’s official explanation, user reactions, and how you can truly optimize your workflow.

Cursor Pro Plan Rate Limits: Everything You Need to Know

Understanding how rate limits work is key to getting the most out of your Cursor Pro Plan. Cursor meters rate limits based on underlying compute usage, and these limits reset every few hours. Here’s a clear breakdown of what that means for you.

What Are Cursor Rate Limits?

Cursor applies rate limits to Agent usage on all plans. These limits are designed to balance fair usage with system performance. There are two main types of rate limits:

1. Burst Rate Limits: cover short spikes of high activity; once drained, they are slow to refill.

2. Local Rate Limits: cover steady, ongoing usage and reset every few hours.

Both types of limits are metered on the total compute you use during a session, so more expensive models and larger requests drain them faster. For a rough mental model of how the two pools interact, see the sketch under "How Do Rate Limits Work?" below.

💡
Tired of ambiguous rate limits and confusing quotas? Apidog is your all-in-one API development platform—design, test, and document APIs with ease. Plus, Apidog MCP Server is free and lets you connect your API docs directly to AI-powered IDEs like Cursor. Sign up now and experience next-level productivity!

How Do Rate Limits Work?

Rather than counting a fixed number of requests, Cursor meters the compute each Agent request consumes and draws it from the two pools described above. The local pool covers day-to-day usage and resets every few hours; the burst pool absorbs short stretches of heavy activity but refills slowly once you have burned through it.

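Cursor has not published exact figures or the algorithm behind these limits, so the TypeScript sketch below is only a mental model: it treats the local and burst limits as two pools of hypothetical compute units and shows how a long, heavy session could exhaust both. Every number in it is invented.

// Conceptual sketch only. Cursor does not publish its real limits or algorithm;
// the numbers below are made up to illustrate how a fast-resetting "local" pool
// and a slow-refilling "burst" pool could interact.

interface Pool {
  label: string;
  tokens: number; // hypothetical compute units currently available
}

// Hypothetical starting budgets: the local pool is smaller but resets every few
// hours; the burst pool is larger but refills slowly once drained.
const localPool: Pool = { label: "local", tokens: 100 };
const burstPool: Pool = { label: "burst", tokens: 300 };

// Spend the compute cost of one Agent request: local capacity is used first,
// spikes dip into the burst pool, and when both are empty you are rate-limited.
function spend(cost: number): string {
  for (const pool of [localPool, burstPool]) {
    if (pool.tokens >= cost) {
      pool.tokens -= cost;
      return `ok (charged to the ${pool.label} pool)`;
    }
  }
  // At this point Cursor would offer the three options described in the next
  // section: switch models, upgrade, or enable usage-based pricing.
  return "rate-limited";
}

// A heavy session: 60 requests costing roughly 8 hypothetical units each.
for (let i = 1; i <= 60; i++) {
  if (spend(8) === "rate-limited") {
    console.log(`Request ${i}: rate-limited, wait for the pools to refill`);
    break;
  }
}

In this toy model, light daily use never touches the burst pool, while a long heavy session eventually drains both, which matches the scenarios in the tables further down.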
What Happens If You Hit a Limit?

If you use up both your local and burst limits, Cursor will notify you and present three options:

  1. Switch to models with higher rate limits (e.g., Sonnet has higher limits than Opus).
  2. Upgrade to a higher tier (such as the Ultra plan).
  3. Enable usage-based pricing to pay for requests that exceed your rate limits.

Can I Stick with the Old Cursor Pro Plan?

Yes! If you prefer the old fixed monthly pool of requests, you can keep the legacy Pro plan. Go to Dashboard > Settings > Advanced to switch between the two. For most users, though, the new rate-limited Pro plan will be the better fit.

Quick Reference Table

| Limit Type | Description | Reset Time |
| --- | --- | --- |
| Burst Rate Limit | For short, high-activity sessions | Slow to refill |
| Local Rate Limit | For steady, ongoing usage | Every few hours |

User Reactions: Confusion, Frustration, and Calls for Clarity

Cursor’s new pricing model has sparked a wave of discussion, and not all of it is positive. The recurring themes are confusion about what "burst" and "local" limits actually are, frustration that no concrete numbers are published, and calls for clearer documentation before committing to the new plan.

Key Takeaway: "unlimited" only feels unlimited if you know where the limits sit. Until Cursor publishes concrete figures, heavier users are left guessing how far a session can go.

What Rate Limits Mean for Your Workflow: The Developer’s Dilemma

So, what does “unlimited-with-rate-limits” mean for your day-to-day coding?

If you hit a rate limit, you are back to the three choices above: wait for the pools to refill (the local limit resets every few hours), switch to a model with higher limits, or fall back to usage-based pricing or a higher tier to keep working.

Rate Limit Scenarios

| Scenario | What Happens? |
| --- | --- |
| Light daily use | Rarely hit limits, smooth experience |
| Bursty coding sessions | May hit burst/local limits, need to wait |
| Heavy/enterprise use | May need Ultra plan or usage-based pricing |

Pro Tip: If you want to avoid the uncertainty of rate limits and get more out of your API workflow, Apidog’s free MCP Server is the perfect solution. Read on to learn how to set it up!

Use Apidog MCP Server with Cursor to Avoid Rate Limits

Apidog MCP Server lets you connect your API specifications directly to Cursor, enabling smarter code generation, instant access to API documentation, and seamless automation, all for free. This means the Agent can read and work with your API documentation directly, speeding up development and helping you stay under Cursor’s rate limits.

Step 1: Prepare Your OpenAPI File

Apidog MCP Server needs an OpenAPI/Swagger specification to serve, passed via the --oas argument in the configuration below. It can be a URL; the examples in this guide use the public Swagger Petstore spec (https://petstore.swagger.io/v2/swagger.json), so you can follow along before wiring up your own API.
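If you have never seen one, an OpenAPI file is just a structured JSON (or YAML) description of your endpoints. A minimal, purely hypothetical example with a single endpoint looks like this:

{
  "openapi": "3.0.0",
  "info": { "title": "Example API", "version": "1.0.0" },
  "paths": {
    "/users": {
      "get": {
        "summary": "List users",
        "responses": {
          "200": { "description": "A JSON array of users" }
        }
      }
    }
  }
}

A real spec, like the Petstore document referenced below, simply lists many more paths, schemas, and parameters for the AI to draw on.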

Step 2: Add MCP Configuration to Cursor

Open Cursor’s settings, go to the MCP section, and add the Apidog MCP Server entry to your MCP configuration (mcp.json). Use the snippet that matches your operating system.

[Image: configuring MCP Server in Cursor]

For macOS/Linux:

{
  "mcpServers": {
    "API specification": {
      "command": "npx",
      "args": [
        "-y",
        "apidog-mcp-server@latest",
        "--oas=https://petstore.swagger.io/v2/swagger.json"
      ]
    }
  }
}

For Windows:

{
  "mcpServers": {
    "API specification": {
      "command": "cmd",
      "args": [
        "/c",
        "npx",
        "-y",
        "apidog-mcp-server@latest",
        "--oas=https://petstore.swagger.io/v2/swagger.json"
      ]
    }
  }
}

Step 3: Verify the Connection

To confirm Cursor can reach your spec through the MCP server, ask the Agent something like:

Please fetch API documentation via MCP and tell me how many endpoints exist in the project.

If the connection works, the answer should reflect the endpoints actually defined in your spec rather than a guess.

Conclusion: Don’t Let Rate Limits Hold You Back

Cursor’s shift to an “unlimited-with-rate-limits” model reflects a growing trend in AI tooling: offer flexibility without compromising infrastructure stability. For most developers, particularly those who don’t rely on high-volume interactions, the change provides more freedom to work dynamically throughout the day.

However, the lack of clear, quantifiable limits has created friction, especially among power users who need predictable performance. Terms like “burst” and “local” limits sound technical yet remain vague without concrete figures. Developers planning long, compute-heavy sessions or working on large files may find themselves unexpectedly throttled. And while options like upgrading or switching models are available, they still introduce an element of disruption to a smooth coding workflow.

The good news? You’re not locked in. Cursor lets you stick with the legacy Pro plan if the new system doesn’t suit your needs. And if you want to supercharge your AI-assisted coding even further, integrating Apidog’s free MCP Server can help you sidestep some of these limitations entirely. With direct API access, instant documentation sync, and powerful automation tools, Apidog enhances your productivity while keeping you in control.

With Apidog MCP Server, you can connect your API specifications directly to Cursor, give the AI instant access to accurate, up-to-date documentation, and automate repetitive API work, all while keeping your Cursor usage (and rate limits) under control.
