Scholar Evaluation
This skill analyzes text to assess its scholarly quality – helpful for verifying research and ensuring accurate information.
Install on your platform
We auto-selected Claude Code based on this skill’s supported platforms.
Run in terminal (recommended)
claude mcp add scholar-evaluation npx -- -y @trustedskills/scholar-evaluation
Or manually add to ~/.claude/settings.json
{
  "mcpServers": {
    "scholar-evaluation": {
      "command": "npx",
      "args": [
        "-y",
        "@trustedskills/scholar-evaluation"
      ]
    }
  }
}

Requires Claude Code (claude CLI). Run claude --version to verify your install.
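To confirm the skill was registered after installing, you can run the following in your terminal (a minimal check, assuming the claude CLI is already on your PATH):

```shell
# Check that the claude CLI is installed
claude --version

# List registered MCP servers; scholar-evaluation should appear
claude mcp list
```

If scholar-evaluation does not appear in the list, re-run the install command or check ~/.claude/settings.json for the entry above.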
About This Skill
What it does
This skill, Scholar Evaluation, provides a structured methodology for assessing scholarly work using the ScholarEval framework. It enables AI agents to analyze academic papers, research proposals, literature reviews, and other scholarly writing across multiple quality dimensions. The evaluation is based on peer-reviewed research assessment criteria, allowing for comprehensive analysis of various aspects like problem formulation, methodology, data analysis, and presentation.
When to use it
- Evaluating research papers to determine their quality and rigor.
- Assessing the comprehensiveness and quality of literature reviews.
- Reviewing the design of a research methodology.
- Providing structured feedback on academic work.
- Benchmarking research quality against established criteria.
Key capabilities
- Applies the ScholarEval framework for systematic evaluation.
- Evaluates scholarly work across dimensions like Problem Formulation, Literature Review, Methodology, Data Collection, Analysis & Interpretation, Results & Findings, and Scholarly Writing & Presentation.
- Identifies strengths and weaknesses within each dimension.
- Provides scores where appropriate based on detailed criteria (see references/evaluation_framework.md).
Example prompts
- "Evaluate this research paper using the ScholarEval framework." [followed by text of the paper]
- "Assess the methodology section of this research proposal." [followed by text of the proposal's methodology]
- "Provide feedback on the clarity and organization of this literature review." [followed by text of the review]
Tips & gotchas
- If the evaluation scope is ambiguous, the skill will ask you to clarify whether you want a Comprehensive, Targeted, or Comparative evaluation.
- Detailed criteria and rubrics for each evaluation dimension are found in the references/evaluation_framework.md file.
TrustedSkills Verification
Unlike other registries that point to live repositories, TrustedSkills pins every skill to a verified commit hash. This protects you from malicious updates — what you install today is exactly what was reviewed and verified.
Security Audits
| Auditor | Result |
| --- | --- |
| Gen Agent Trust Hub | Pass |
| Socket | Pass |
| Snyk | Pass |