Firecrawl Scraping

🌐 Community
by casper-studios · latest · Repository

Extracts structured data from dynamically loaded content on websites using headless browser automation.

Install on your platform


1. Run in terminal (recommended)

   claude mcp add firecrawl-scraping npx -- -y @trustedskills/firecrawl-scraping
2. Or manually add to ~/.claude/settings.json
{
  "mcpServers": {
    "firecrawl-scraping": {
      "command": "npx",
      "args": [
        "-y",
        "@trustedskills/firecrawl-scraping"
      ]
    }
  }
}

Requires Claude Code (claude CLI). Run claude --version to verify your install.
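
After a manual edit, a quick sanity check helps catch JSON typos before Claude Code tries to load the file. A minimal sketch using Python's stdlib JSON validator; it checks a sample copy of the snippet above written to a temporary path, so substitute your real ~/.claude/settings.json:

```shell
# Write a sample copy of the settings snippet (use your real file in practice)
cat > /tmp/settings.json <<'EOF'
{
  "mcpServers": {
    "firecrawl-scraping": {
      "command": "npx",
      "args": ["-y", "@trustedskills/firecrawl-scraping"]
    }
  }
}
EOF
# Validate the JSON syntax; prints "valid JSON" on success
python3 -m json.tool /tmp/settings.json > /dev/null && echo "valid JSON"
```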

About This Skill

What it does

This skill, Firecrawl Scraping, extracts structured data from individual web pages and converts them into clean, LLM-ready markdown or other formats. It uses headless browser automation to handle JavaScript rendering, anti-bot protection, and dynamically loaded content. The tool is designed for scraping single pages; for multi-page website crawling, use the Apify Website Content Crawler instead.

When to use it

  • Scraping a specific article from a blog or documentation site.
  • Extracting data from a webpage with heavy JavaScript rendering that standard scrapers struggle with.
  • Accessing content behind basic anti-bot protections (e.g., premium news sites like the WSJ or NYT using the "stealth" proxy).
  • Generating summaries of web pages.
  • Capturing screenshots of webpages.

Key capabilities

  • JavaScript Rendering: Handles websites that rely heavily on JavaScript to load content.
  • Anti-Bot Protection Handling: Offers proxy modes ("basic," "stealth," and "auto") to bypass basic anti-bot measures.
  • Output Formatting: Supports markdown (default), HTML, summary (AI-generated), screenshot, and links extraction.
  • Environment Variable Configuration: Uses a .env file for API key management (FIRECRAWL_API_KEY).
  • Respects robots.txt: The tool adheres to the rules defined in a website's robots.txt file.
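
The API-key capability above boils down to a one-line .env file. A minimal sketch (the key value is a placeholder; the .gitignore step keeps the key out of version control, per the tip below):

```shell
# Write the Firecrawl API key to .env (placeholder value shown)
printf 'FIRECRAWL_API_KEY=fc-your-key-here\n' > .env
# Ensure .env is git-ignored so the key is never committed
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
# Confirm exactly one key line is present; prints: 1
grep -c '^FIRECRAWL_API_KEY=' .env
```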

Example prompts

  • "Scrape https://example.com/blog/post-1 and return it as clean markdown."
  • "Take a screenshot of https://example.com and give me a short summary of the page."

Tips & gotchas

  • API Key Required: You must obtain and configure a Firecrawl API key (stored in a .env file) to use the skill. Do not commit your API key directly into code repositories.
  • Credit Consumption: Scraping uses credits, with stealth proxy modes potentially consuming more. Monitor your credit balance on the Firecrawl dashboard (https://firecrawl.dev/app).
  • JavaScript Rendering Issues: If a page isn't rendering correctly (empty content), it might be due to complex JavaScript; try increasing the timeout duration using --timeout.

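The proxy and timeout tips above map directly onto the scrape request itself. Below is a hedged sketch of a direct call to Firecrawl's public REST API; the /v1/scrape endpoint and field names are assumptions based on Firecrawl's own documentation, not part of this skill's interface, and the curl line is commented out because it needs a live key and consumes credits:

```shell
# Request payload: "stealth" proxy for anti-bot pages, a longer timeout
# (milliseconds) for pages with heavy JavaScript rendering.
cat > /tmp/scrape.json <<'EOF'
{
  "url": "https://example.com/article",
  "formats": ["markdown"],
  "proxy": "stealth",
  "timeout": 60000
}
EOF
# Validate the payload locally before spending credits; prints "payload ok"
python3 -m json.tool /tmp/scrape.json > /dev/null && echo "payload ok"
# curl -s -X POST https://api.firecrawl.dev/v1/scrape \
#   -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d @/tmp/scrape.json
```
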
🛡️ TrustedSkills Verification

Unlike other registries that point to live repositories, TrustedSkills pins every skill to a verified commit hash. This protects you from malicious updates — what you install today is exactly what was reviewed and verified.

Security Audits

Gen Agent Trust Hub: Pass
Socket: Pass
Snyk: Pass

Details

Version: latest
License:
Author: casper-studios
Installs: 62
