Local LLM Ops

🌐 Community
by bobmatnyc · vlatest · Repository

Provides LLMs guidance and assistance for building AI and machine learning applications.

Install on your platform

We auto-selected Claude Code based on this skill’s supported platforms.

1. Run in terminal (recommended)

terminal
claude mcp add local-llm-ops npx -- -y @trustedskills/local-llm-ops
2. Or manually add to ~/.claude/settings.json

~/.claude/settings.json
{
  "mcpServers": {
    "local-llm-ops": {
      "command": "npx",
      "args": [
        "-y",
        "@trustedskills/local-llm-ops"
      ]
    }
  }
}
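If you prefer to script the manual step, the merge can be sketched in Python. This is a hypothetical helper (not part of the skill); it only shows how the entry above would be folded into an existing settings file without clobbering other configured servers.

```python
import json
import tempfile
from pathlib import Path

def add_mcp_server(settings_path, name, command, args):
    """Merge one MCP server entry into a Claude settings JSON file."""
    path = Path(settings_path)
    settings = json.loads(path.read_text()) if path.exists() else {}
    servers = settings.setdefault("mcpServers", {})
    servers[name] = {"command": command, "args": args}
    path.write_text(json.dumps(settings, indent=2))
    return settings

# Demo against a temp file; point it at ~/.claude/settings.json for real use.
demo = Path(tempfile.mkdtemp()) / "settings.json"
result = add_mcp_server(
    demo, "local-llm-ops", "npx", ["-y", "@trustedskills/local-llm-ops"]
)
```

Using `setdefault` means any servers already present in the file are preserved.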

Requires Claude Code (claude CLI). Run claude --version to verify your install.

About This Skill

What it does

This skill, Local LLM Ops (Ollama), provides a complete toolchain for running local Large Language Models (LLMs) on Apple Silicon devices. It includes setup scripts, command-line chat launchers, and diagnostic tools that cover the full lifecycle of a local LLM workflow: installing Ollama, pulling models, chatting, and benchmarking performance. The skill streamlines local LLM setup and use, enabling experimentation and development without relying on external services.

When to use it

  • You want to run Large Language Models (LLMs) locally on an Apple Silicon Mac.
  • You need a quick way to benchmark the performance of different LLM models.
  • You're encountering issues setting up or running your local LLM environment and require diagnostic assistance.
  • You want to experiment with different task modes like coding, creative writing, or analytical tasks using various LLMs.

Key capabilities

  • Automated Setup: Provides a setup_chatbot.sh script for easy installation and configuration of Ollama.
  • Chat Launchers: Offers multiple command-line chat launchers (chatllm, chat, chat.py) for interacting with LLMs.
  • Task Modes: Supports different task modes (coding, creative, analytical) using specific models like codellama:70b and llama3.1:70b.
  • Benchmarking: Includes a benchmark workflow (./scripts/run_benchmarks.sh) to evaluate LLM performance with configurable prompts and token limits.
  • Diagnostics: Provides a diagnostic script (./diagnose.sh) to troubleshoot setup issues.
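The task-mode routing described above can be sketched as a simple mode-to-model table with an override flag (mirroring `-m`). The mode set and default model names here are assumptions drawn from the examples on this page, not the skill's actual internals.

```python
# Hypothetical sketch of task-mode routing: each mode has a default model,
# and an explicit -m override always wins.
TASK_MODES = {
    "coding": "codellama:70b",
    "creative": "llama3.1:70b",
    "analytical": "llama3.1:70b",
}

def resolve_model(mode, override=None):
    """Pick the model for a chat session: override > mode default."""
    if override:
        return override
    if mode not in TASK_MODES:
        raise ValueError(f"unknown task mode: {mode}")
    return TASK_MODES[mode]
```

So `./chat -t coding` would resolve to `codellama:70b`, while `./chat -t coding -m mistral` would use `mistral`.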

Example commands

  • ./chatllm - Launches the primary chat launcher for interacting with your local LLMs.
  • ./chat -t coding -m codellama:70b - Starts a chat session in "coding" mode using the codellama:70b model.
  • ./scripts/run_benchmarks.sh - Executes the benchmark script to measure LLM performance.
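Under the hood, the launchers talk to Ollama's HTTP API. A minimal sketch of the request body for the `/api/generate` endpoint, assuming Ollama's standard JSON schema (where `num_predict` caps output tokens; this helper is illustrative, not part of the skill):

```python
import json

def build_generate_request(model, prompt, max_tokens=256):
    """Build a JSON payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of a stream
        "options": {"num_predict": max_tokens},
    }

payload = build_generate_request("mistral", "Explain mutexes briefly.", max_tokens=128)
body = json.dumps(payload)  # POST this to http://localhost:11434/api/generate
```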

Tips & gotchas

  • Prerequisites: Requires Homebrew (brew) for installing Ollama and managing services.
  • Model Availability: You must pull at least one model (e.g., ollama pull mistral) before launching chat or benchmarks.
  • API Endpoint: The Ollama API runs on http://localhost:11434.
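Before launching chat or benchmarks, it can help to confirm the API endpoint is up. A minimal reachability check, assuming Ollama's default behavior of answering HTTP GET on its root path (this helper is a sketch, not part of the skill):

```python
import urllib.request
import urllib.error

def ollama_reachable(base_url="http://localhost:11434", timeout=2.0):
    """Return True if an HTTP server answers at base_url, else False."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns False, start the service (e.g. via Homebrew services) or run `./diagnose.sh`.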

🛡️ TrustedSkills Verification

Unlike other registries that point to live repositories, TrustedSkills pins every skill to a verified commit hash. This protects you from malicious updates — what you install today is exactly what was reviewed and verified.

Security Audits

Gen Agent Trust Hub: Pass
Socket: Pass
Snyk: Pass

Details

Version: vlatest
License:
Author: bobmatnyc
Installs: 66

Passed automated security scans.