How MCP Skills Work Together with AI Agents: Full Guide
How MCP skills work together with AI agents: the full lifecycle from finding a skill in the registry to your AI agent calling its tools, with architecture diagrams and real-world examples.
Last updated 4 March 2026
Here's how it works: you find a skill, add its config to your AI client, the client launches the skill as a background subprocess, and the AI model calls that subprocess's tools when your conversation needs them. Communication is JSON-RPC over stdio: no ports, no network config, just pipes between processes.
Understanding how MCP skills work under the hood changed how I debug problems and design setups. You don't need to know this to use skills, but once you do, you'll never be confused by a broken config again.
The Architecture in One Diagram
┌───────────────────────────────────────────────────────────┐
│                  TrustedSkills Registry                   │
│     skill: weather · npm: @trustedskills/weather-mcp      │
└────────────────────────┬──────────────────────────────────┘
                         │ (1) You discover the skill
                         ▼
┌───────────────────────────────────────────────────────────┐
│                     AI Client Config                      │
│   { "mcpServers": { "weather": {                          │
│       "command": "npx",                                   │
│       "args": ["-y", "@trustedskills/weather-mcp"] } } }  │
└────────────────────────┬──────────────────────────────────┘
                         │ (2) Client starts MCP server as subprocess
                         ▼
┌───────────────────────────────────────────────────────────┐
│                    MCP Server Process                     │
│   npx @trustedskills/weather-mcp                          │
│   Listens on stdio · Exposes:                             │
│     get_weather(location, units)                          │
│     get_forecast(location, days)                          │
└────────────────────────┬──────────────────────────────────┘
                         │ (3) Client discovers tools
                         ▼
┌───────────────────────────────────────────────────────────┐
│                          Claude                           │
│   "What's the weather in Sydney?"                         │
│   → calls get_weather({ location: "Sydney" })             │
└───────────────────────────────────────────────────────────┘
We built an MCP skill for a legal team that pulled case precedents from their internal database. The AI client, the MCP server, and the database were all on the same machine: no network exposure, no firewall rules to change. The stdio-based architecture meant setup took 20 minutes instead of a day of DevOps work.
The Full Lifecycle, Step by Step
Step 1: Discovery
You browse TrustedSkills, find a skill that does what you need. Each listing shows you the tools it exposes and the exact config JSON to copy.
Step 2: Configuration
Paste that JSON into your AI client's config file. For Claude Desktop it's claude_desktop_config.json. The config just tells the client: "when you start up, run this command."
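For Claude Desktop, a complete config file carrying the weather skill from the diagram might look like this (on macOS it commonly lives at ~/Library/Application Support/Claude/claude_desktop_config.json; the "weather" key is just a local label you choose):

```json
{
  "mcpServers": {
    "weather": {
      "command": "npx",
      "args": ["-y", "@trustedskills/weather-mcp"]
    }
  }
}
```

You can list as many servers under mcpServers as you like; each one becomes its own subprocess.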
Step 3: Server Launch
When you restart the AI client, it reads the config and launches each MCP server as a child process. That's literally what npx -y @package/name is doing: the client runs it as a subprocess and connects via stdio.
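A minimal sketch of what the client side of this step amounts to, in TypeScript on Node. The names launchServer and request are illustrative (real clients also perform an initialize handshake before anything else), but the mechanics are exactly this: spawn the configured command and exchange newline-terminated JSON-RPC over its stdin/stdout.

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// Shape of one entry under "mcpServers" in the client config.
interface ServerConfig {
  command: string;
  args: string[];
}

// Launch the server the way an AI client does: a plain child process with
// stdin/stdout piped back to the parent. Nothing listens on a port.
function launchServer(cfg: ServerConfig): ChildProcess {
  return spawn(cfg.command, cfg.args, { stdio: ["pipe", "pipe", "inherit"] });
}

// Send one JSON-RPC 2.0 request over the child's stdin and resolve with the
// first newline-terminated JSON response that arrives on its stdout.
function request(
  child: ChildProcess,
  method: string,
  params: object,
  id = 1
): Promise<any> {
  return new Promise((resolve) => {
    let buf = "";
    const onData = (chunk: Buffer) => {
      buf += chunk.toString();
      const nl = buf.indexOf("\n");
      if (nl !== -1) {
        child.stdout?.off("data", onData);
        resolve(JSON.parse(buf.slice(0, nl)));
      }
    };
    child.stdout?.on("data", onData);
    child.stdin?.write(JSON.stringify({ jsonrpc: "2.0", id, method, params }) + "\n");
  });
}
```

When the client exits, the pipes close and the child process goes away with it, which is why "restart your AI client" fixes so many MCP problems.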
Step 4: Tool Discovery
The client sends the server a message: "what tools do you have?" The server responds with a JSON list of tool definitions. Here's what that looks like:
{
  "tools": [
    {
      "name": "get_weather",
      "description": "Get current weather for a location",
      "inputSchema": {
        "type": "object",
        "properties": {
          "location": { "type": "string" },
          "units": { "type": "string", "enum": ["metric", "imperial"] }
        },
        "required": ["location"]
      }
    }
  ]
}
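Server-side, answering that discovery message is just method dispatch. A minimal sketch of the idea (handle is an illustrative name; real servers use an MCP SDK rather than hand-rolling the envelope):

```typescript
// The tool definitions this server advertises, matching the JSON above.
const tools = [
  {
    name: "get_weather",
    description: "Get current weather for a location",
    inputSchema: {
      type: "object",
      properties: {
        location: { type: "string" },
        units: { type: "string", enum: ["metric", "imperial"] },
      },
      required: ["location"],
    },
  },
];

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: unknown;
}

// Dispatch one request to a handler and wrap the result in a JSON-RPC
// response envelope; unknown methods get the standard -32601 error.
function handle(req: JsonRpcRequest) {
  const handlers: Record<string, (params: unknown) => unknown> = {
    "tools/list": () => ({ tools }),
  };
  const handler = handlers[req.method];
  if (!handler) {
    return { jsonrpc: "2.0", id: req.id, error: { code: -32601, message: "Method not found" } };
  }
  return { jsonrpc: "2.0", id: req.id, result: handler(req.params) };
}
```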
Step 5: Tool Calling
User asks something. Claude decides a tool is relevant and calls it:
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "location": "Sydney", "units": "metric" }
  }
}
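Before executing, a server can check the arguments against the tool's inputSchema. Here's a hand-rolled sketch covering only the required and enum keywords used in the schema above (validateArgs is an illustrative name; real servers typically lean on a full JSON Schema validator):

```typescript
// A deliberately narrow slice of JSON Schema: enough for the example above.
interface SimpleSchema {
  type: "object";
  properties: Record<string, { type: string; enum?: string[] }>;
  required?: string[];
}

// Return a list of problems; an empty list means the arguments pass this
// (partial) check. Covers required fields, unknown keys, and enum values.
function validateArgs(schema: SimpleSchema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const key of schema.required ?? []) {
    if (!(key in args)) errors.push(`missing required argument: ${key}`);
  }
  for (const [key, value] of Object.entries(args)) {
    const prop = schema.properties[key];
    if (!prop) errors.push(`unknown argument: ${key}`);
    else if (prop.enum && !prop.enum.includes(String(value)))
      errors.push(`${key} must be one of: ${prop.enum.join(", ")}`);
  }
  return errors;
}
```

A call with `{ "location": "Sydney", "units": "metric" }` passes; one with `{ "units": "kelvin" }` fails on both the missing location and the bad enum value.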
Step 6: Result
The server does the work โ calls an API, reads a file, queries a database โ and returns the result. Claude uses it to form a response. You see a helpful answer; the JSON flew by invisibly.
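For reference, the response travels back in the same JSON-RPC envelope; in MCP, tool results are wrapped in a content array. The values here are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      { "type": "text", "text": "Sydney: 18°C, partly cloudy" }
    ]
  }
}
```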
Skill vs MCP Server: What's the Real Difference?
| Aspect | MCP Server | Skill |
|---|---|---|
| What it is | A running process implementing the MCP protocol | A packaged, documented, versioned capability |
| Registry listing | No | Yes (with metadata, verification status, etc.) |
| Has SKILL.md | Not required | Yes (standardised description file) |
| Verification | N/A | Unverified / Community / Verified / Featured |
How the Protocol Works (the Short Version)
MCP uses JSON-RPC 2.0 over stdio. No HTTP server. No ports. The client writes JSON to the server's stdin; the server writes JSON to its stdout. That's the whole protocol.
It's simple by design, and it works the same on Mac, Windows, and Linux. No firewall rules, no network configuration, no port conflicts. The subprocess is isolated to process-level communication only.
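In the stdio transport, each JSON-RPC message is one line of JSON terminated by a newline. A minimal sketch of the read side, which splits a buffered chunk from the pipe into complete messages and hands back any trailing partial line for the next read (parseFrames is an illustrative name):

```typescript
// Split buffered stdio output into complete newline-delimited JSON-RPC
// messages. The trailing partial line (if any) is returned so the caller
// can prepend it to the next chunk read from the pipe.
function parseFrames(buf: string): { messages: unknown[]; rest: string } {
  const lines = buf.split("\n");
  const rest = lines.pop() ?? ""; // "" after a complete line, else a partial one
  const messages = lines
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
  return { messages, rest };
}
```

This is also why one stray console.log in a server breaks everything: anything printed to stdout gets parsed as if it were a protocol message.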
Some advanced setups use SSE (Server-Sent Events) over HTTP for remote MCP servers, but that's uncommon for skills installed from TrustedSkills.
End-to-End Example: Weather Skill
You ask Claude Desktop: "What's the weather in Tokyo?"
- Weather MCP server is running (launched from your config)
- Claude sees the get_weather tool is available
- Claude decides this query needs that tool
- It calls get_weather({ location: "Tokyo", units: "metric" })
- The MCP server calls the Open-Meteo API
- Returns: { temperature: 18, condition: "Partly cloudy" }
- Claude gives you a natural-language answer using that data
Total time: under a second. Feels like magic; it's just well-designed plumbing.
Frequently Asked Questions
How does Claude know when to use an MCP tool?
The tool definitions include a name and description. Claude reads those and decides whether a tool is relevant to what you're asking. The better the tool description, the more reliably Claude uses it at the right moment.
Can MCP skills access my local files?
Only if they're designed to. A filesystem skill can access files; a weather skill can't. Each skill only does what its tools define. This is why checking a skill's verification status matters before installing it.
What happens when an MCP server crashes?
Claude will stop offering that skill's tools and may show an error. Restart your AI client to relaunch the server. Claude Desktop shows crash details in Settings → Developer → MCP Logs.
Can I run many MCP skills simultaneously?
Yes. Each skill runs as a separate subprocess. You can have dozens running at once โ Claude picks whichever tools are most relevant for each query.
TrustedSkills Team
The TrustedSkills team builds and tests AI agent integrations across Claude, OpenClaw, Cursor, and VS Code. We verify every skill in our registry and have set up hundreds of MCP configs across every major platform.