Agent Ui

🌐 Community
by inference-sh · v latest · Repository

Dynamically generates user interfaces based on inference results, enabling interactive data exploration and decision support.

Install on your platform

We auto-selected Claude Code based on this skill’s supported platforms.

1. Run in terminal (recommended):
claude mcp add inference-sh-agent-ui npx -- -y @trustedskills/inference-sh-agent-ui
2. Or manually add to ~/.claude/settings.json:
{
  "mcpServers": {
    "inference-sh-agent-ui": {
      "command": "npx",
      "args": [
        "-y",
        "@trustedskills/inference-sh-agent-ui"
      ]
    }
  }
}

Requires Claude Code (claude CLI). Run claude --version to verify your install.

About This Skill

What it does

This skill, Agent Ui, dynamically generates user interfaces based on inference results from AI agents. It allows for interactive data exploration and decision support by providing a runtime environment with built-in tool lifecycle management (pending, progress, approval, results) and human-in-the-loop approval flows. The UI is driven by declarative JSON responses from the agent, enabling flexible widget creation and real-time token streaming.
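Since the UI is driven by declarative JSON, an agent response might carry a payload along these lines. This is a hypothetical sketch: the widget types ("form", "chart") and field names are assumptions for illustration, not the skill's documented schema.

```typescript
// Hypothetical declarative UI payload an agent might return.
// Widget types and fields are illustrative only, not the actual schema.
const uiPayload = {
  widgets: [
    {
      type: "form",
      id: "shipping-form",
      fields: [{ name: "address", label: "Shipping address", input: "text" }],
    },
    {
      type: "chart",
      id: "latency",
      data: [12, 9, 14],
    },
  ],
};

// The runtime would map each widget entry to a rendered component.
console.log(uiPayload.widgets.map((w) => w.type));
```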

When to use it

  • When you need a user interface for interacting with an AI agent's output in a structured way.
  • For applications requiring human review or approval of agent actions.
  • To build interfaces that display data generated by the agent, such as forms or visualizations.
  • In scenarios where real-time updates and streaming responses are important.
  • When you want to incorporate client-side tools into your agent workflows.

Key capabilities

  • Runtime Included: No backend logic is required for the UI component.
  • Tool Lifecycle Management: Tracks tool status (pending, progress, approval, results).
  • Human-in-the-Loop Approval Flows: Built-in support for human review and approval of agent actions.
  • Declarative JSON UI: Generates UIs from agent responses in JSON format.
  • Streaming: Provides real-time token streaming.
  • Client-Side Tools: Supports tools that run directly within the browser.
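The four lifecycle states listed above can be modeled as a small state machine. The sketch below is an assumption about how such states might be typed; the transition rules are illustrative, not taken from the skill's source.

```typescript
// Hypothetical typing of the tool lifecycle described above.
// The state names mirror the list; the transitions are assumed.
type ToolStatus = "pending" | "progress" | "approval" | "results";

const allowedTransitions: Record<ToolStatus, ToolStatus[]> = {
  pending: ["progress", "approval"], // may run directly or await review
  approval: ["progress"],            // approved tools resume execution
  progress: ["results"],
  results: [],                       // terminal state
};

function canTransition(from: ToolStatus, to: ToolStatus): boolean {
  return allowedTransitions[from].includes(to);
}

console.log(canTransition("pending", "progress")); // → true
```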

Example prompts

These are not prompts sent to the skill, but configuration values for an agent that uses it:

  • proxyUrl = "/api/inference/proxy" (required to connect)
  • agentConfig.core_app.ref = 'openrouter/claude-haiku-45@0fkg6xwb' (specifies a core AI model)
  • agentConfig.system_prompt = 'You are helpful.' (sets the agent's persona)
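Put together, a minimal configuration sketch might look like the following. The field names come from the examples above; the shape of the surrounding object is an assumption, not taken from the skill's documentation.

```typescript
// Minimal agent configuration sketch. proxyUrl, core_app.ref, and
// system_prompt come from the examples above; the overall object
// shape is assumed for illustration.
const config = {
  proxyUrl: "/api/inference/proxy", // required to connect
  agentConfig: {
    core_app: { ref: "openrouter/claude-haiku-45@0fkg6xwb" }, // core AI model
    system_prompt: "You are helpful.", // sets the agent's persona
  },
};

console.log(config.agentConfig.core_app.ref);
```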

Tips & gotchas

  • API Proxy Required: This skill requires an API proxy route set up at /api/inference/proxy, as described in the documentation.
  • Environment Variable: Ensure you have set the INFERENCE_API_KEY environment variable.
  • Client-Side Tools: To utilize client-side tools, use the createScopedTools function and pass it to the agent's configuration.
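The first two tips can be combined into a small sketch of the server-side half of the proxy route: read INFERENCE_API_KEY from the environment and build the upstream request headers, so the key never reaches the browser. The helper name and header names here are assumptions, not the skill's documented API.

```typescript
// Hypothetical helper for the /api/inference/proxy route.
// Reads the API key server-side so it is never exposed to the browser.
function buildProxyHeaders(
  env: Record<string, string | undefined>
): Record<string, string> {
  const key = env["INFERENCE_API_KEY"];
  if (!key) {
    // Fail fast: the Tips above note this variable must be set.
    throw new Error("INFERENCE_API_KEY is not set");
  }
  return {
    Authorization: `Bearer ${key}`,
    "Content-Type": "application/json",
  };
}

// A route handler would attach these headers when forwarding the
// browser's request body to the inference API.
console.log(buildProxyHeaders({ INFERENCE_API_KEY: "test-key" }).Authorization); // → Bearer test-key
```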

🛡️ TrustedSkills Verification

Unlike other registries that point to live repositories, TrustedSkills pins every skill to a verified commit hash. This protects you from malicious updates — what you install today is exactly what was reviewed and verified.

Security Audits

Gen Agent Trust Hub: Pass
Socket: Pass
Snyk: Pass

Details

Version: latest
License:
Author: inference-sh
Installs: 161
