Disciplined Quality Evaluation

🌐 Community
by terraphim · version: latest · Repository

This skill systematically assesses product or service attributes against defined criteria, ensuring consistent and objective quality evaluations for better decision-making.

Install on your platform


1. Run in terminal (recommended):

   claude mcp add disciplined-quality-evaluation npx -- -y @trustedskills/disciplined-quality-evaluation
2. Or manually add to ~/.claude/settings.json:
{
  "mcpServers": {
    "disciplined-quality-evaluation": {
      "command": "npx",
      "args": [
        "-y",
        "@trustedskills/disciplined-quality-evaluation"
      ]
    }
  }
}

Requires Claude Code (claude CLI). Run claude --version to verify your install.

About This Skill

What it does

This skill provides structured and disciplined evaluation of generated text. It assesses outputs against defined criteria, providing a detailed breakdown of strengths and weaknesses. The skill aims to improve the quality of AI agent responses through rigorous analysis and actionable feedback.

When to use it

  • Reviewing draft content: Evaluate marketing copy or blog posts before publishing to ensure clarity and accuracy.
  • Assessing chatbot performance: Analyze chatbot conversations for tone, helpfulness, and adherence to brand guidelines.
  • Improving code generation: Critique generated code snippets for efficiency, readability, and correctness.
  • Evaluating creative writing: Provide feedback on stories or poems based on specific literary criteria.

Key capabilities

  • Structured evaluation against defined criteria
  • Detailed breakdown of strengths and weaknesses
  • Actionable feedback for improvement
  • Assessment of tone, clarity, accuracy, and adherence to guidelines

Example prompts

  • "Evaluate this marketing copy: [paste text here] using the following criteria: clarity, persuasiveness, call to action effectiveness."
  • "Assess this chatbot conversation transcript for helpfulness and politeness. Highlight areas needing improvement."
  • "Critique this Python code snippet for efficiency and readability: [paste code here]."

Tips & gotchas

The quality of the evaluation depends heavily on the criteria you supply in the prompt. Vague or missing criteria will produce correspondingly vague feedback, so state each criterion explicitly and, where possible, define what a good result looks like.
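For example, a prompt that names each criterion and fixes a scoring scale gives the evaluation a concrete rubric to work against (the specific criteria and the 1-to-5 scale below are illustrative, not required by the skill):

  Evaluate the following blog post draft against these criteria, scoring each from 1 to 5:
  1. Clarity: is the main argument easy to follow?
  2. Accuracy: are all factual claims correct and verifiable?
  3. Tone: does it match a professional but approachable brand voice?
  For each criterion, list one strength, one weakness, and one concrete revision.

Asking for a per-criterion strength, weakness, and revision keeps the feedback actionable rather than a single summary judgment.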

🛡️ TrustedSkills Verification

Unlike other registries that point to live repositories, TrustedSkills pins every skill to a verified commit hash. This protects you from malicious updates — what you install today is exactly what was reviewed and verified.

Security Audits

Gen Agent Trust Hub: Pass
Socket: Pass
Snyk: Pass

Details

Version: latest
License:
Author: terraphim
Installs: 13


Passed automated security scans.