Llama Factory

🌐Community
by davila7 · latest

Llama Factory provides a prompt-driven interface for configuring and fine-tuning LLaMA models, making it easier to produce model variants tailored to your specific tasks and to experiment with training setups.

Install on your platform


1. Run in terminal (recommended):

claude mcp add llama-factory -- npx -y @trustedskills/llama-factory
2. Or manually add to ~/.claude/settings.json:
{
  "mcpServers": {
    "llama-factory": {
      "command": "npx",
      "args": [
        "-y",
        "@trustedskills/llama-factory"
      ]
    }
  }
}

Requires Claude Code (claude CLI). Run claude --version to verify your install.

About This Skill

The llama-factory skill provides a streamlined interface for configuring and fine-tuning Llama models using the popular LLaMA-Factory framework. It allows users to define training parameters, dataset paths, and hyperparameters directly through prompts to customize model behavior for specific tasks.

When to use it

  • You need to fine-tune an open-source Llama model on a custom dataset without writing complex Python scripts.
  • You want to experiment with different training strategies like LoRA or full fine-tuning via natural language instructions.
  • Your project requires adjusting specific hyperparameters such as learning rate, batch size, or sequence length for optimal performance.
  • You are looking to deploy a specialized model variant quickly using pre-configured templates from the community.

Key capabilities

  • Direct configuration of LLaMA-Factory training jobs through text prompts.
  • Management of dataset inputs and preprocessing requirements.
  • Adjustment of core hyperparameters including learning rate, epochs, and batch size.
  • Support for various quantization methods to optimize memory usage during training.
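Behind these prompts, a training job typically resolves to a LLaMA-Factory YAML configuration. A minimal sketch of a LoRA SFT config, assuming current LLaMA-Factory key names; the model path, dataset name, and output directory below are placeholders, so check them against the framework's own example configs:

```yaml
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
# quantization_bit: 4   # uncomment to train with 4-bit quantization on low-resource hardware

### dataset
dataset: my_custom_dataset
template: llama3
cutoff_len: 2048

### train
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 4
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

A config like this would then be passed to the framework's trainer (for example via its `llamafactory-cli train` entry point).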

Example prompts

  • "Configure a LoRA fine-tuning job for Llama-3-8B using the provided legal documents dataset with a learning rate of 1e-4."
  • "Set up a full parameter fine-tuning run on a custom customer support corpus, ensuring a sequence length of 2048 tokens."
  • "Optimize the training configuration for low-resource hardware by enabling 4-bit quantization and reducing the batch size to 4."

Tips & gotchas

Ensure you have access to sufficient GPU memory before initiating fine-tuning jobs, as full parameter updates require significantly more resources than LoRA. Always validate your dataset format against LLaMA-Factory requirements to prevent training errors.
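The dataset-format gotcha above can be caught before a job starts. A minimal sketch in Python that checks records against the Alpaca-style layout LLaMA-Factory commonly accepts; the required field names here are an assumption based on that convention, so consult the framework's dataset documentation for your specific format:

```python
import json

# Alpaca-style records: a JSON array of objects, each with an
# "instruction" and an "output" field ("input" is optional).
REQUIRED_KEYS = {"instruction", "output"}

def validate_alpaca(records):
    """Return a list of (index, problem) pairs; empty means the data looks valid."""
    if not isinstance(records, list):
        return [(-1, "top level must be a JSON array of records")]
    problems = []
    for i, rec in enumerate(records):
        if not isinstance(rec, dict):
            problems.append((i, "record is not an object"))
            continue
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            problems.append((i, f"missing keys: {sorted(missing)}"))
    return problems

sample = json.loads("""[
  {"instruction": "Summarize the clause.", "input": "Clause text...", "output": "Summary..."},
  {"instruction": "Classify the ticket."}
]""")
print(validate_alpaca(sample))  # second record lacks "output"
```

Running a check like this against your dataset file before launching a fine-tuning job is much cheaper than discovering a malformed record mid-training.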

🛡️ TrustedSkills Verification

Unlike other registries that point to live repositories, TrustedSkills pins every skill to a verified commit hash. This protects you from malicious updates — what you install today is exactly what was reviewed and verified.

Security Audits

Gen Agent Trust Hub: Pass
Socket: Pass
Snyk: Pass

Details

Version: latest
License:
Author: davila7
Installs: 160

Passed automated security scans.