Azure AI Content Safety (Python)
Helps you work with the Azure AI Content Safety service from Python as part of cloud content-moderation workflows.
Install on your platform
Run in terminal (recommended)
claude mcp add azure-ai-contentsafety-py npx -- -y @trustedskills/azure-ai-contentsafety-py
Or manually add to ~/.claude/settings.json
{
  "mcpServers": {
    "azure-ai-contentsafety-py": {
      "command": "npx",
      "args": [
        "-y",
        "@trustedskills/azure-ai-contentsafety-py"
      ]
    }
  }
}

Requires Claude Code (claude CLI). Run claude --version to verify your install.
About This Skill
What it does
This skill leverages the Azure AI Content Safety client library for Python. It lets you programmatically analyze text and detect potentially harmful or offensive material such as hate speech, sexually suggestive content, self-harm, and violence. The analysis returns a detailed report with a severity score for each category of potential violation.
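As a sketch of the underlying library call: the client and model names below follow the azure-ai-contentsafety package, while the environment-variable names and the `flag_violations` helper are placeholders introduced for illustration.

```python
# Sketch: analyze text with the azure-ai-contentsafety package.
# CONTENT_SAFETY_ENDPOINT / CONTENT_SAFETY_KEY are placeholder env vars.
import os


def analyze(text: str) -> dict:
    # SDK imports live here so the pure helper below works without the SDK.
    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions
    from azure.core.credentials import AzureKeyCredential

    client = ContentSafetyClient(
        endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
    )
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    # Each entry carries a category (Hate, SelfHarm, Sexual, Violence)
    # and an integer severity level.
    return {item.category: item.severity for item in result.categories_analysis}


def flag_violations(scores: dict, threshold: int = 2) -> list:
    """Return the categories whose severity meets or exceeds the threshold."""
    return sorted(cat for cat, sev in scores.items() if sev >= threshold)
```

Calling `analyze` requires a live Azure endpoint; `flag_violations` is a pure post-processing step you can apply to the returned scores.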
When to use it
- Content Moderation: Automatically flag user-generated content on forums, social media platforms, or review sites before human moderation.
- Chatbot Safety: Integrate into chatbots to prevent them from generating inappropriate responses and ensure a safe user experience.
- Code Generation Security: Analyze code snippets generated by AI models for potential vulnerabilities or harmful instructions.
- Automated Document Review: Scan documents for sensitive content before distribution or publication, ensuring compliance with internal policies.
Key capabilities
- Text classification across multiple categories (hate speech, sexually suggestive content, self-harm, violence).
- Severity scores for each category to assess how serious a potential violation is.
- Detailed reports providing insights into why a piece of content was flagged.
- Python client library for easy integration with existing Python workflows.
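Severity scores are only useful once you decide what to do at each level. The thresholds and action names below are illustrative assumptions, not part of the Azure SDK; a minimal policy sketch might look like:

```python
# Illustrative moderation policy over per-category severity scores.
# The thresholds and action names are assumptions, not SDK behavior.

def moderation_action(severity: int) -> str:
    """Map one category's severity to a hypothetical moderation action."""
    if severity >= 6:
        return "block"
    if severity >= 4:
        return "review"
    if severity >= 2:
        return "flag"
    return "allow"


def moderate(scores: dict) -> str:
    """Apply the strictest action across all analyzed categories."""
    order = ["allow", "flag", "review", "block"]
    actions = [moderation_action(sev) for sev in scores.values()] or ["allow"]
    return max(actions, key=order.index)
```

In practice you would tune the thresholds per category; for example, a forum might block violence at a lower severity than it flags mild profanity.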
Example prompts
- "Analyze this text: 'This is an example sentence.'"
- "Check this user comment for any harmful language: [user comment]"
- "Can you assess the safety risk of this generated code snippet?"
Tips & gotchas
- Requires an Azure subscription and access to the AI Content Safety service. You'll need your service endpoint and API key.
- The accuracy of content detection depends on the quality and context of the input text.
TrustedSkills Verification
Unlike other registries that point to live repositories, TrustedSkills pins every skill to a verified commit hash. This protects you from malicious updates — what you install today is exactly what was reviewed and verified.
Security Audits
| Audit | Result |
| --- | --- |
| Gen Agent Trust Hub | Pass |
| Socket | Pass |
| Snyk | Pass |