GWS Model Armor: Sanitize Prompt

🌐 Community
by googleworkspace · v: latest · Repository

This skill sanitizes prompts used in Google Workspace AI models to mitigate potential risks and ensure safer interactions.

Install on your platform


1. Run in terminal (recommended)

claude mcp add gws-modelarmor-sanitize-prompt npx -- -y @trustedskills/gws-modelarmor-sanitize-prompt
2. Or manually add to ~/.claude/settings.json

{
  "mcpServers": {
    "gws-modelarmor-sanitize-prompt": {
      "command": "npx",
      "args": [
        "-y",
        "@trustedskills/gws-modelarmor-sanitize-prompt"
      ]
    }
  }
}

Requires Claude Code (claude CLI). Run claude --version to verify your install.

About This Skill

What it does

This skill sanitizes prompts before they are sent to Google AI Studio models, ensuring inputs comply with safety guidelines and preventing the injection of harmful or policy-violating content. It acts as a pre-processing filter to maintain secure interactions within your Google Workspace environment.
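The skill's internal implementation isn't shown on this page; as a rough sketch of the pre-processing-filter pattern it describes (hypothetical helper names, with simple regexes standing in for the real safety checks):

```python
import re

# Hypothetical sketch of a sanitize-before-send filter. The actual skill
# delegates to Google's safety tooling; simple regexes stand in for those
# checks here purely to illustrate the pattern.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def sanitize_prompt(prompt: str) -> str:
    """Redact obviously sensitive substrings before model inference."""
    prompt = EMAIL_RE.sub("[REDACTED_EMAIL]", prompt)
    prompt = SSN_RE.sub("[REDACTED_SSN]", prompt)
    return prompt

def send_to_model(prompt: str) -> str:
    # Placeholder for the actual model call; the point is that every
    # prompt passes through sanitize_prompt() first.
    return f"model received: {sanitize_prompt(prompt)}"

print(send_to_model("Contact alice@example.com about SSN 123-45-6789"))
# → model received: Contact [REDACTED_EMAIL] about SSN [REDACTED_SSN]
```

The design point is that the model never sees the raw input: the filter sits between user and model, so even a misbehaving caller cannot bypass it.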

When to use it

  • Public-facing chatbots: Deploy customer service agents that must strictly adhere to safety policies without risking model abuse.
  • Data ingestion pipelines: Clean unstructured user inputs before processing them through generative AI workflows in Google Cloud.
  • Compliance requirements: Enforce organizational data governance rules by blocking prompts containing sensitive or prohibited information.
  • Internal tools: Secure internal knowledge assistants to prevent accidental leakage of confidential corporate data via prompt injection.

Key capabilities

  • Real-time sanitization of user inputs before model inference.
  • Integration with Google AI Studio models for enhanced safety filtering.
  • Prevention of malicious or policy-violating prompt injections.
  • Deployment via the googleworkspace/cli toolset.
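As an illustration of the injection-prevention idea (not the skill's actual detection logic, which isn't published on this page), a naive phrase-based check might look like:

```python
import re

# Naive, hypothetical heuristic for classic prompt-injection phrasing.
# Production sanitizers use trained classifiers; this only shows the concept.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?system prompt", re.I),
    re.compile(r"you are now an unrestricted", re.I),
]

def looks_like_injection(prompt: str) -> bool:
    """True if the prompt matches any known injection phrasing."""
    return any(p.search(prompt) for p in INJECTION_PATTERNS)

print(looks_like_injection("Please ignore previous instructions"))  # → True
print(looks_like_injection("Summarize this meeting transcript"))    # → False
```

Heuristics like this catch only obvious attacks, which is why the tips below recommend additional monitoring layers.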

Example prompts

  • "How do I install the gws-modelarmor-sanitize-prompt skill to secure my chatbot?"
  • "What happens when a user submits a harmful prompt to an agent using this sanitization layer?"
  • "Can I configure the sanitize-prompt skill to block specific keywords before they reach Google AI Studio?"
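This page doesn't document the skill's configuration schema, so any keyword-blocking setup is speculative. Purely as a hypothetical illustration of what such a setting could resemble (key names invented, not taken from the skill):

```json
{
  "blockedKeywords": ["internal-project-name", "credentials"],
  "onMatch": "reject"
}
```

Consult the skill's repository for its actual configuration options before relying on any specific key names.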

Tips & gotchas

Ensure your Google Workspace account has access to Google AI Studio features, as this skill relies on that integration for effective filtering. While highly effective at blocking obvious violations, complex adversarial attacks may still require additional monitoring layers beyond basic sanitization.

Tags

🛡️

TrustedSkills Verification

Unlike other registries that point to live repositories, TrustedSkills pins every skill to a verified commit hash. This protects you from malicious updates — what you install today is exactly what was reviewed and verified.

Security Audits

Gen Agent Trust Hub: Pass
Socket: Pass
Snyk: Pass

Details

Version: latest
License:
Author: googleworkspace
Installs: 157


Passed automated security scans.