Technology · Apr 28, 2026 · 4 min read

How to Use Cursor's Composer 2, Gemini, Grok & More in Claude Code as Another Dev

DEV Community
by David Veselý

Why I stopped trusting Claude to review Claude. 8 prompts I run instead.

TL;DR: Install Cursor's CLI, then npx skills@latest add Vesely/skills/cursor-agent. In Claude Code, ask things like "review this branch via cursor-agent with composer, gemini, and gpt-5.5 in parallel". Three models look at the same problem at once. Claude merges what they find. Works with any model in cursor-agent --list-models, not just those three.

Asking Claude to review the code Claude just wrote is like asking someone to grade their own homework. They will find what they were already looking for. Multi-model review loops are harder to fool because the sycophancy bias does not survive crossing providers.

So I started fanning work out to other models from inside Claude Code. The skill wraps Cursor's headless cursor-agent CLI. Claude Code starts the jobs in parallel, waits for the answers, then merges the useful parts.
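Under the hood this is just parallel headless runs. A minimal shell sketch of the fan-out, assuming cursor-agent accepts a -p prompt flag and a --model flag (verify both with cursor-agent --help; the dry-run branch exists only so the script degrades gracefully on a machine without the CLI):

```shell
#!/bin/sh
# Fan one review prompt out to several models in parallel, then collect answers.
PROMPT="Review the recent changes. Real issues only, file:line references."

for model in composer-2 gemini-3.1-pro gpt-5.5; do
  if command -v cursor-agent >/dev/null 2>&1; then
    # Headless run; each model's answer lands in its own file.
    cursor-agent -p "$PROMPT" --model "$model" > "review-$model.txt" &
  else
    echo "dry-run: cursor-agent -p \"$PROMPT\" --model $model"
  fi
done
wait  # block until every background review has finished
```

Claude Code does the equivalent orchestration for you, then merges the per-model answers into one report.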

Setup

You need Cursor's CLI installed and a Cursor account. Install the CLI:

curl https://cursor.com/install -fsS | bash

Run cursor-agent once to log in. Then add the skill in Claude Code:

npx skills@latest add Vesely/skills/cursor-agent

The skill defaults to read-only mode and never writes to your repo: the invoked agent can look but not touch.
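The model names in this post aren't the full menu; the CLI is the source of truth. A guarded check (cursor-agent --list-models is the flag mentioned above; the fallback string is just for machines without the CLI installed):

```shell
# Ask the CLI which models it can route to; fall back to a note if it's absent.
if command -v cursor-agent >/dev/null 2>&1; then
  MODELS=$(cursor-agent --list-models)
else
  MODELS="(cursor-agent not installed -- run the install script above)"
fi
echo "$MODELS"
```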

8 patterns I actually use

1. Code review before merge

Anything I'm about to push, especially branches Claude wrote most of.

Use /cursor-agent to review the recent changes with composer-2, gemini-3.1-pro, and gpt-5.5 in parallel. Real issues only, file:line references, no nitpicks. Merge the findings: what all three flagged, where they disagreed, what only one saw.

2. "Did Claude actually do what it said?"

When the summary feels too smooth.

Ask /cursor-agent with grok to compare Claude's summary against the recent diff and test output. Flag anything in the summary the diff or tests don't support.
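Run from a plain terminal, the same cross-check can be sketched like this. Assumptions to flag: the -p and --model flags as before, and SUMMARY.md as a hypothetical stand-in for wherever Claude's summary ended up:

```shell
#!/bin/sh
# Snapshot the evidence first so grok compares against the real diff, not a recollection.
git diff > /tmp/recent.diff 2>/dev/null || true
if command -v cursor-agent >/dev/null 2>&1; then
  # SUMMARY.md is a placeholder -- point this at the actual summary file.
  cursor-agent --model grok \
    -p "Compare the summary in SUMMARY.md against /tmp/recent.diff. Flag any claim the diff or tests do not support."
else
  echo "dry-run: would send /tmp/recent.diff to grok for cross-checking"
fi
```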

3. Architecture and implementation plan review

Before letting Claude write a single line on something non-trivial.

Ask /cursor-agent with gpt-5.5 in plan mode to critique the plan in this doc. Where am I over-engineering? What risks am I underweighting? Be blunt.

4. Security review across models

Auth, user input, file paths, anything touching shell. Each model spots different attack surfaces.

Use /cursor-agent to security-review the recent changes with composer-2, gpt-5.5, and gemini-3.1-pro in parallel. What's the worst input a user could send? Where am I missing defensive checks? Real risks only, no theater. Merge what each model caught and where they disagreed.

5. Copy and text review

Anything user-facing or public — landing pages, error messages, docs, emails.

Ask /cursor-agent with gpt-5.5 to read the draft in this file. Where does it sound stiff or generic? Which sentences sound polished but say nothing? Suggest sharper variants for the weakest few.

6. "Which variant should I pick?"

Two or three options on the table and I'm fence-sitting.

Ask /cursor-agent with gpt-5.5 to rank the three options in this doc. Which would you pick and why?

7. "Are you really happy with that?"

Claude says it's done and something feels slightly off. A workflow that almost reads right. Code that compiles but smells.

Ask /cursor-agent with composer-2 to look at the recent output cold. I think it's wrong but I can't say why. What would you change?

8. Stuck on an issue, need different ideas

Thirty minutes of debugging and Claude and I are circling the same hypotheses. Kimi gives an off-axis read from outside the usual bubble.

Use /cursor-agent with gemini-3.1-pro and kimi-k2.5 in parallel. Look at the error and what I've already tried in this thread. What are the top three things I might be missing? Don't repeat what I've ruled out.

My go-to second models

A few months in, my defaults have settled. GPT-5.5 became my go-to for both code review and copy, sharp feedback without padding. Composer 2 is the fast one, called whenever I'm impatient. Codex stays out on purpose; for GPT-5.x reviews I run the standalone codex CLI directly.

What's your favorite second opinion? Which model do you reach for on code review, and which on copy? Drop it in the comments. Curious where mine breaks down for someone else.

Source

This article was originally published by DEV Community and written by David Veselý.
