Adversarial Review

🌐 Community
by poteto · latest · Repository

Simulates harsh, critical reviews to identify product weaknesses and improve robustness against negative feedback.

Install on your platform

We auto-selected Claude Code based on this skill’s supported platforms.

1. Run in terminal (recommended)

terminal
claude mcp add adversarial-review npx -- -y @trustedskills/adversarial-review
2. Or manually add to ~/.claude/settings.json

~/.claude/settings.json
{
  "mcpServers": {
    "adversarial-review": {
      "command": "npx",
      "args": [
        "-y",
        "@trustedskills/adversarial-review"
      ]
    }
  }
}

Requires Claude Code (claude CLI). Run claude --version to verify your install.

About This Skill

What it does

This skill simulates harsh, critical reviews of work (such as code or text) to identify weaknesses and improve robustness against negative feedback. It spawns reviewers on a separate AI model (the "opposite model") that challenge the original work from distinct perspectives grounded in cognitive-science principles. The final output is a synthesized verdict; the skill is designed for review and analysis, not automatic changes.
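As a rough mental model of that flow (not the skill's actual implementation), the intent check, per-lens critiques, and verdict synthesis can be sketched in Python; every name and structure below is an illustrative assumption:

```python
# Illustrative sketch of the adversarial-review flow. Function names, the
# review input shape, and the verdict structure are assumptions for this
# sketch, not the skill's real implementation.

def adversarial_review(work: str, intent: str, reviews: dict) -> dict:
    """Combine per-lens critiques into one synthesized verdict."""
    if not intent:
        # The skill requires an explicit statement of intent before reviewing.
        raise ValueError("state the work's intended purpose first")
    # Verdict synthesis: fold each lens's critique into a single report.
    findings = [f"[{lens}] {critique}" for lens, critique in reviews.items()]
    # Review and analysis only: the work itself is never modified.
    return {"intent": intent, "findings": findings, "work_modified": False}

verdict = adversarial_review(
    work="def add(a, b): return a - b",
    intent="add two numbers",
    reviews={"Skeptic": "subtracts instead of adding; fails its stated intent"},
)
```

The deliberate `work_modified: False` field mirrors the design choice stated above: the skill hands you a verdict and leaves any changes to you.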

When to use it

  • Reviewing code diffs or other significant changes before merging or deployment.
  • Identifying potential vulnerabilities or weaknesses in written content.
  • Gaining insights into how users might negatively perceive a product or feature.
  • Improving the overall quality and robustness of work by proactively addressing criticism.
  • Assessing whether a piece of work effectively achieves its intended purpose.

Key capabilities

  • Reviewer Spawning: Creates reviewers using a separate AI model (Codex or Claude) to provide adversarial feedback.
  • Lens-Based Review: Reviewers operate from distinct perspectives defined by "lenses" (Skeptic, Architect, Minimalist).
  • Intent Assessment: Requires explicit definition of the work's intended purpose before review begins.
  • Scalable Reviewer Count: The number of reviewers spawned depends on the size of the change being reviewed (1 for small changes, 2 for medium, and 3 for large).
  • Verdict Synthesis: Combines individual reviewer outputs into a single, synthesized verdict.
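The scalable reviewer count and lens assignment above can be sketched as a small function. The 1/2/3 reviewer counts and the Skeptic/Architect/Minimalist lenses come from the skill description; the line-count thresholds are assumptions made only for this sketch:

```python
# Sketch of reviewer scaling and lens assignment. Counts and lens names are
# from the skill description; the size thresholds below are assumed.

LENSES = ["Skeptic", "Architect", "Minimalist"]

def reviewers_for(diff_lines: int) -> list[str]:
    if diff_lines < 50:      # "small" change (assumed threshold)
        count = 1
    elif diff_lines < 300:   # "medium" change (assumed threshold)
        count = 2
    else:                    # "large" change
        count = 3
    return LENSES[:count]

print(reviewers_for(10))    # small diff: a single Skeptic reviewer
print(reviewers_for(1000))  # large diff: all three lenses
```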

Example prompts

  • "Review this code diff: [code diff]"
  • "Assess these changes to the product description: [product description]"
  • "Provide an adversarial review of this draft document, focusing on clarity and potential misunderstandings."

Tips & gotchas

  • Model Dependency: Reviewers must be spawned using the CLI of the opposite AI model (Codex for Claude agents, Claude for Codex agents). Using subagents or internal delegation will not work.
  • Intent is Key: Clearly state the intended purpose of the reviewed work before initiating the adversarial review process.
  • No Automatic Changes: This skill provides a verdict; it's up to you to evaluate and decide which findings warrant action.
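The opposite-model rule in the first tip can be expressed as a command builder. The agent-to-CLI mapping follows the tip above, but the exact one-shot flags (`codex exec`, `claude -p`) and the wrapper itself are this sketch's assumptions about how such a hand-off might shell out:

```python
# Builds the command a wrapper might use to spawn an opposite-model reviewer.
# The pairing (Claude -> Codex, Codex -> Claude) is from the tip above; the
# specific CLI flags are assumptions for illustration.

OPPOSITE_CLI = {
    "claude": ["codex", "exec"],  # Claude agents spawn Codex reviewers
    "codex": ["claude", "-p"],    # Codex agents spawn Claude reviewers
}

def reviewer_command(agent: str, prompt: str) -> list[str]:
    try:
        cli = OPPOSITE_CLI[agent]
    except KeyError:
        raise ValueError(f"unknown agent: {agent}") from None
    return [*cli, prompt]

cmd = reviewer_command("claude", "Review this diff adversarially: ...")
```

Spawning through the external CLI rather than a subagent is what keeps the reviewer genuinely adversarial: it runs on a different model with no shared context.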


TrustedSkills Verification

Unlike other registries that point to live repositories, TrustedSkills pins every skill to a verified commit hash. This protects you from malicious updates — what you install today is exactly what was reviewed and verified.

Security Audits

Gen Agent Trust Hub: Pass
Socket: Pass
Snyk: Pass

Details

Version: latest
License:
Author: poteto
Installs: 85
