DEV Community

John Wilson

A Month with Supercode.sh: How I Learned to Stop Worrying and Love the Cursor agent

I’m not exactly a vibe-coder. I’ve been writing code for over a decade, and building an app from scratch isn’t a problem for me. But Cursor and its built-in AI tools have definitely made me faster — no question there. I can one-shot entire components or backend endpoints from a single prompt. But still, the experience isn’t smooth. Too often I end up arguing with the AI. Yes, it helps a lot, but I still find myself doing way too many things manually.

I build web apps with React (Next.js), backend in Node.js, and I’ve been living inside Cursor for over eight months. During that time I tried nearly every major extension: Context7, Task Master, Figma MCP, several flavors of Memory Bank, etc. And even then, I still often had to fight the AI instead of collaborating with it.

About a month ago I came across Supercode.sh. Initially, I was skeptical. After three weeks of usage, I realized: this is not a plugin, it’s a sanity layer between me and the Cursor agent.

The first Grail: Prompt Enhancing

One of the core problems that plugins like Task Master, Memory Bank, and others try to solve is getting the right context into the prompt — and breaking down your idea into concrete subtasks the AI can actually execute. This becomes especially important when you’re not giving the AI a detailed step-by-step plan, but just describing a feature you want to see in your product.

But if you rely on the built-in agent — even one running in “thinking” mode — to do that decomposition for you, the results often fall short. You get hallucinated steps, irrelevant assumptions, or just a vague mess that you end up cleaning up by hand.

And here’s the second problem: you almost always get better results with a long, detailed prompt. Sometimes 2–3x better. But writing (or even dictating) a really detailed prompt takes time and energy — especially if it’s task-specific and the relevant context is dynamic.

Yes, you can drop context into rules, but those are static. If you’re working on routing, you don’t need DB rules. If you’re working on auth, you don’t care about layout structure. That’s the limit of static rule files — and it’s exactly why most people keep those rules generic rather than including concrete, highly specific facts.

Which is exactly where Supercode surprised me. There’s this little button called Prompt Enhance — and it does actual magic. It clearly knows my stack, my file layout, and how things are structured. And even if I type in a barebones one-liner like “build me a date formatting util”, hitting Prompt Enhance enriches it with enough clarity and specifics that the result is dramatically better.

If you constantly type stuff like refactor this into reusable hook and get garbage back — yeah, me too. Supercode’s prompt enhancer rewrites these stubby prompts into full, well-scoped instructions.
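To make that concrete, here’s roughly the kind of utility a well-enhanced “build me a date formatting util” prompt tends to produce in my stack — explicit input types, an injectable locale, and input validation. The names and defaults are my own illustration, not actual Supercode output:

```typescript
// Date-formatting util of the kind an enhanced prompt yields:
// typed input, injectable locale/options, and a guard for bad input.
function formatDate(
  input: Date | string | number,
  locale: string = "en-US",
  options: Intl.DateTimeFormatOptions = {
    year: "numeric",
    month: "short",
    day: "2-digit",
  }
): string {
  // Accept a Date directly, or anything the Date constructor understands.
  const date = input instanceof Date ? input : new Date(input);
  if (Number.isNaN(date.getTime())) {
    throw new RangeError(`formatDate: invalid date input: ${String(input)}`);
  }
  // Intl.DateTimeFormat handles locale-aware rendering natively.
  return new Intl.DateTimeFormat(locale, options).format(date);
}
```

Calling `formatDate("2024-03-05T12:00:00Z", "en-US", { timeZone: "UTC", year: "numeric", month: "short", day: "2-digit" })` renders the date in the requested locale; an unparseable string throws instead of silently returning “Invalid Date”.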

Moreover, a few weeks ago, they added Advanced Prompt Enhancers. If you hold that same button, you get a dropdown with new options. The one I use constantly is Decompose Tasks — it gives you a clean breakdown of subtasks, which for me fully replaced Task Master. It outputs a clear, step-by-step implementation plan the AI can follow.
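For a sense of what a decomposed plan looks like, here’s an invented example — my own sketch, not a real capture of Supercode output. A one-liner like “add password reset” comes back as an ordered plan along these lines:

```markdown
1. Add `POST /api/auth/reset-request` — validate the email, generate a one-time token.
2. Store the token hash with an expiry in a `password_resets` table.
3. Send the reset link through the existing mailer service.
4. Add `POST /api/auth/reset-confirm` — verify the token, update the password hash.
5. Invalidate the user’s active sessions after a successful reset.
```

Each step is small enough for the agent to execute without improvising, which is exactly what the built-in “thinking” mode kept failing to do for me.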

The other one, Suggest Details, takes a vague or underthought prompt and turns it into a structured feature description. Both of these enhancers gave me a measurable productivity boost — 2–3x better outcomes across the board. And on some fuzzier tasks where I hadn’t even fully defined the idea yet, it improved the result tenfold. I’m not exaggerating — it feels like free brainpower.

Honestly, Prompt Enhance is the feature that sold me on Supercode. It changed how I use Cursor daily. I initially subscribed to the Basic plan — but within a week, I upgraded to Premium. With around 70 Cursor prompts a day (sometimes more), the productivity boost I get from prompt enhancers is phenomenal. I use it literally hundreds of times every week, and it’s made a huge difference in both speed and output quality.

The second Grail: Voice prompts

I used to be very skeptical about voice input. First of all, it felt awkward to speak out my tasks — when I write a prompt, I think while typing, I revise, rephrase, sometimes even pivot the idea halfway through. Voice felt like committing too early.

Second, my native language isn’t English. And that creates a whole separate mess when it comes to voice input. Most voice systems — even the best ones — fail badly when you mix technical terms or code-like things into a sentence. You get nonsense. I’ve used SuperWhisper on my Mac a lot, and I still use it regularly for voice dictation on my iPhone. But when I tried using it to generate Cursor prompts, the result was garbage.

That changed completely with Supercode. It not only handles technical terms and English insertions inside my native-language speech with surprising accuracy, it also processes full prompts — even 1–2 minute long ones — fast and clean.

And something unexpected happened: my entire model of how I interact with the agent in Cursor shifted. Before, I wouldn’t even bother asking the AI to help with simple tasks — like generating a date formatting function. It was just faster to write it myself.

But now, when I trust that my voice prompt will be interpreted correctly, I don’t hesitate. I can just say the task out loud in 15 seconds and be done. That’s way less friction than typing it or building the function from scratch. And unlike before, now it’s actually faster than writing the prompt manually. Honestly, outside of editing existing code, I’ve nearly stopped typing in Cursor at all.

The third Grail: Architecture Mode

As I mentioned at the beginning — I’m a developer. Which means I’m not just building UI components or adding product features. Often, I’m implementing entire modules: backend services, infrastructure pieces, edge-case failover flows. It always felt strange to me that, in order to plan and discuss those parts of the system, I had to leave Cursor and jump into ChatGPT. Not only did that mean copying a ton of context into ChatGPT to get meaningful help — but after a few prompt iterations, I’d have to drag all that back into Cursor.

And the strangest part? I have the same models — GPT-4o, o3 — running under both (and many more in Cursor, btw). I’ve tried asking the Cursor agent to plan before implementing — to just talk through architecture and get a roadmap without jumping into code — but unfortunately, it would still often glitch out. It would start editing files or generating docs, even though all I wanted was to talk it through. Just a planning session, nothing more.

Why was there no way to plan architecture inside the IDE? What made this even more frustrating was the fact that other plugins and VS Code tools — like RooCode, Cline, etc — already had a dedicated “Architecture Mode”. In those tools, you can just talk through your system design or implementation plan without worrying about context — because the agent already has full access to your project.

It made me wonder: why didn’t Cursor have this yet? I never found an answer. But I did find the solution.

Supercode adds a mode called “SC:Architect”, and it does exactly what I’ve been wishing for. Now, when I’m starting a big feature or planning a new service, I use one hotkey to switch into Architecture Mode, and another to send the task. Instead of code, I get a full architecture plan: structure, interfaces, responsibilities. It’s contextual. It understands the stack. And it’s integrated.

Sometimes I edit what I get. But often it’s already spot-on. And the real magic? After reviewing the plan, I just flip back into agent mode — same thread — and tell Cursor: “Go ahead, implement this.”

That’s exactly the kind of collaboration I expect from an AI devtool. This feature alone — more than voice, more than prompt enhancement — has reshaped how I approach bigger features. It’s my new starting point, my new default.

It seems so obvious now — planning architecture with an AI agent who already has access to your full project context is infinitely better than trying to explain it to an out-of-context chatbot. And yet, only the Supercode team had the clarity to build that in. Thank god they did.

Cursor Rules: Teaching the AI to Think Like I Do

You probably already know what Cursor Rules are — little chunks of text that get injected into context automatically, either per file pattern or globally, helping the AI better understand your project’s structure and coding style. The idea is solid, but writing those rules by hand can take a lot of time. And yes, Cursor recently introduced auto-generated rules — but the ones it creates for itself are often overly verbose. They tend to focus heavily on obvious or low-priority details, while completely missing key structural or architectural points that actually matter. Sometimes, these auto-rules can even hurt output quality rather than help it.

Sure, there are community repos like Cursor Directory, Cursor Repos, and others that help — but even for something as mainstream as a Next.js frontend or an Express.js backend, you still end up stitching things together: one rule for project structure, another for routing, another for how you want to write components. And let’s be honest — there’s nothing groundbreaking in most of them. Tailwind, shadcn/ui, file-by-role structure — just the usual stuff.

Supercode fixes this pain. Not only can you install individual rules right inside Cursor with instant search and preview — but the real game changer is Rules Packs. These are curated, multi-rule bundles tailored for specific tech stacks. When I’m working on a Next.js project, I just type “Next.js” into the search bar and with one click I’ve got 5–6 rules installed that instantly teach the AI how I want my project structured.

Even if my preferences slightly differ from what the community Rules Pack includes, it’s easy to adjust. I can open up the installed rules, tweak a few lines to better match my style, and be done in minutes. That result is still far better than starting from scratch — and it actually reflects how I want my AI to behave.

My projects have strict architecture: endpoints in one place, logic in another, shared utilities somewhere else. Pre-Supercode, I had to explain that structure to the AI every time. Over. And over.

Now, I activate a rule pack tailored to my stack — and suddenly the AI gets it. It generates code that fits the structure, uses the right layers, and doesn’t dump logic into controllers. Niiice.
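Cursor stores rules as `.mdc` files with a small frontmatter header (that part is Cursor’s real format); the rule body below is my own sketch of how the layering I just described can be encoded:

```markdown
---
description: API route layering for this Next.js project
globs: ["app/api/**/*.ts"]
alwaysApply: false
---
- Route handlers in `app/api/**/route.ts` only parse input and call a service.
- Business logic lives in `lib/services/`; shared helpers in `lib/utils/`.
- Never import the DB client directly inside a route handler.
```

A pack is just a handful of these, scoped by glob, so the structural rules only load when you’re touching the relevant files.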

Tiny things to mention

  • Auto Docs — one‑click pass that generates project‑level documentation (general structure, data layer, business services) and feeds it back into context. Makes both humans and the AI agent smarter.

  • User‑defined Prompts — a personal prompt library (even Git‑versioned). My team shares common macros like “sync API.md” or “clean imports” and runs them from the palette.

  • Tweaks — bundled setups (MemoryBank, Context7, custom modes) that install with one click and even bypass Cursor’s 5‑mode limit. Saves half an hour of manual wiring.

  • Enhance Gemini — background watchdog that helps avoid the “Gemini moment”, when the model turns into an advice‑bot instead of a coder. A silent lifesaver.

  • Voice Commands — map phrases like “run tests” or “deploy” to VS Code tasks via a simple JSON file, then trigger them by voice without spending a Cursor request on it.
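The VS Code side of such a mapping is just a standard task in `.vscode/tasks.json`; a voice phrase like “run tests” can then target the task’s label. The snippet below is plain VS Code configuration — I’m not reproducing Supercode’s own mapping schema here:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "run tests",
      "type": "shell",
      "command": "npm test",
      "problemMatcher": []
    }
  ]
}
```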

None of these are headline features, but together they shave minutes off daily friction — the sort of polish that makes Supercode.sh feel like part of the IDE, not just another extension.

Final thoughts

The field of AI-assisted development is evolving fast. What started as autocomplete and “explain this code” has now become an entire layer of developer experience — where prompts, context, voice, and dynamic modes blend into how we write and reason about software.

What’s becoming clear is that it’s not about raw model power anymore. It’s about tools that amplify the developer’s intent — tools that know when to step in, when to stay quiet, and how to speak your language (literally and metaphorically).

The right tool won’t just answer your questions — it will change how you ask them. It won’t just generate code — it will change the shape of your workflow.

And once that shift happens, there’s no going back.
