Gws Modelarmor Sanitize Response
This skill sanitizes AI-generated responses from Google Workspace apps through Model Armor, mitigating safety risks and ensuring safer content delivery.
Install on your platform
We auto-selected Claude Code based on this skill’s supported platforms.
Run in terminal (recommended)
claude mcp add gws-modelarmor-sanitize-response npx -- -y @trustedskills/gws-modelarmor-sanitize-response
Or manually add to ~/.claude/settings.json
{
"mcpServers": {
"gws-modelarmor-sanitize-response": {
"command": "npx",
"args": [
"-y",
"@trustedskills/gws-modelarmor-sanitize-response"
]
}
}
}

Requires Claude Code (the claude CLI). Run claude --version to verify your install.
About This Skill
What it does
This skill sanitizes AI agent responses generated by Google Workspace models to ensure they comply with safety guidelines and organizational policies. It acts as a critical filter layer, removing potentially harmful or non-compliant content before the final output reaches the user.
When to use it
- Deploying internal AI agents that generate sensitive corporate reports or communications.
- Integrating Google Workspace models into customer-facing chatbots where brand safety is paramount.
- Automating content creation workflows to prevent accidental leaks of proprietary information.
- Enforcing strict data privacy standards across all model-generated outputs within the organization.
Key capabilities
- Real-time sanitization of generated text responses.
- Enforcement of Google Workspace-specific safety protocols.
- Prevention of policy-violating content from being displayed to end-users.
- Seamless integration within the googleworkspace/cli ecosystem.
Example prompts
- "Configure the agent to sanitize all responses before sending them to HR staff."
- "Apply strict model armor sanitization rules to this customer support chatbot."
- "Ensure that any output from the Google Workspace model passes through the response sanitizer."
Tips & gotchas
This skill is designed specifically for Google Workspace models; applying it to other LLMs may yield unexpected results. Strict filtering can occasionally block benign but sensitive queries, so always test the sanitization logic against your specific compliance requirements before deploying it in a production environment.
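The "filter layer" behavior described above can be sketched as a deny-list pass over a model response before it reaches the user. This is a minimal illustration of the pattern only, not Model Armor's actual API or rule set; the function name, patterns, and redaction marker are all hypothetical.

```python
import re

# Hypothetical deny-list illustrating the filter-layer pattern:
# each generated response is scanned for policy-violating spans
# before delivery. Real Model Armor policies are far richer.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like identifiers
    re.compile(r"(?i)\bconfidential\b"),   # internal-only markers
]

def sanitize_response(text: str, redaction: str = "[REDACTED]") -> str:
    """Replace any policy-violating span with a redaction marker."""
    for pattern in BLOCKED_PATTERNS:
        text = pattern.sub(redaction, text)
    return text

print(sanitize_response("Employee SSN: 123-45-6789"))
# → Employee SSN: [REDACTED]
```

A useful pre-deployment check is to run representative benign prompts through the same function and confirm they pass unchanged, since overly strict patterns are the most common source of false positives.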
TrustedSkills Verification
Unlike other registries that point to live repositories, TrustedSkills pins every skill to a verified commit hash. This protects you from malicious updates — what you install today is exactly what was reviewed and verified.
Security Audits
| Auditor | Result |
| --- | --- |
| Gen Agent Trust Hub | Pass |
| Socket | Pass |
| Snyk | Pass |
Passed automated security scans.