
AI Video Producer

🔓 Unverified
by zysilm-ai · v1.0.0 · MIT · Repository


Install on your platform


1. Run in terminal (recommended)

terminal
claude mcp add ai-video-producer npx -- -y @trustedskills/ai-video-producer
2. Or manually add to ~/.claude/settings.json

~/.claude/settings.json
{
  "mcpServers": {
    "ai-video-producer": {
      "command": "npx",
      "args": [
        "-y",
        "@trustedskills/ai-video-producer"
      ]
    }
  }
}
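If you would rather script the manual edit than paste JSON by hand, here is a minimal Python sketch that merges the entry without clobbering other keys. The temp path below is illustrative only; in practice you would point settings_path at your real ~/.claude/settings.json.

```python
import json
import pathlib
import tempfile

# The MCP server entry from the snippet above.
entry = {
    "ai-video-producer": {
        "command": "npx",
        "args": ["-y", "@trustedskills/ai-video-producer"],
    }
}

# Illustrative path; replace with pathlib.Path.home() / ".claude" / "settings.json".
settings_path = pathlib.Path(tempfile.mkdtemp()) / "settings.json"
settings_path.write_text(json.dumps({"mcpServers": {"other": {}}}))

# Merge rather than overwrite, so existing servers are preserved.
settings = json.loads(settings_path.read_text())
settings.setdefault("mcpServers", {}).update(entry)
settings_path.write_text(json.dumps(settings, indent=2))

print(sorted(settings["mcpServers"]))  # → ['ai-video-producer', 'other']
```

Using setdefault plus update keeps any servers you already configured, which a blind paste of the snippet would replace.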

Requires Claude Code (claude CLI). Run claude --version to verify your install.

About This Skill

What it does

The AI Video Producer skill runs complete AI video production workflows locally. It uses WAN 2.2 for video generation and Qwen Image Edit 2511 (via ComfyUI) for keyframe creation, all directly on consumer GPUs with no reliance on cloud APIs. The skill guides you through a structured process, letting you define visual style and narrative before any video is generated.

When to use it

  • Creating short animated scenes: Generate fight sequences or lifestyle videos based on prompts like "Create a 15-second anime fight scene" or "Create a video from meme 'I want to save money vs you only live once'."
  • Developing consistent visual styles: Use the “Philosophy-First” approach for projects requiring a cohesive look and feel across multiple scenes.
  • Iterative video creation: Refine your vision through an iterative process, reviewing keyframes and videos at each step before proceeding.
  • Local video generation: When you need complete control over your data and prefer to avoid cloud-based services.

Key capabilities

  • Runs entirely locally on consumer GPUs.
  • Utilizes WAN 2.2 for video generation.
  • Employs Qwen Image Edit 2511 via ComfyUI for keyframe generation.
  • Supports "Video-First" (recommended) and "Keyframe-First" pipeline modes.
  • Offers a philosophy-first approach to ensure visual coherence.
  • Fast generation: ~36 seconds per image, ~2 minutes per 5-second video.
  • Low VRAM requirement: Works on 10GB GPUs using GGUF quantization.

Example prompts

  • "Create a 15-second anime fight scene with two warriors."
  • "Create a video from meme 'I want to save money vs you only live once'."
  • "Generate a short video showcasing a futuristic cityscape."

Tips & gotchas

  • A GPU with at least 10GB of VRAM is required; GGUF quantization is what lets the models fit in that footprint.
  • The "Video-First" pipeline mode is recommended for maintaining visual continuity between scenes.


TrustedSkills Verification

Unlike other registries that point to live repositories, TrustedSkills pins every skill to a verified commit hash. This protects you from malicious updates — what you install today is exactly what was reviewed and verified.

Verified Commit: 56d760af

Installing this skill downloads the exact code at commit 56d760af, not the current state of the repository. This prevents supply-chain attacks from unauthorized updates.

Security Audits

Gen Agent Trust Hub: Pass
Socket: Pass
Snyk: Pass

Details

Version: v1.0.0
License: MIT
Author: zysilm-ai
Installs: 0
Updated: Jan 9, 2026
Published: Dec 28, 2025

🔓 Unverified

Not yet reviewed. Use with caution.

Pinned commit: 56d760af

Install command fetches the verified snapshot, not the live repository.