# Web Fetch

Fetch a URL and return clean text content via Firecrawl's scrape API.
The LLM picks the right invocation and reports the result. It does NOT
write the curl, JSON, or auth header itself — scripts/fetch.sh does.
## When to use

User gives you a URL and wants its contents — to summarize, to quote, to reason over, or to extract specific facts from.
Example user requests:
- "Fetch https://example.com/post and tell me the main points"
- "What does https://news.ycombinator.com/item?id=1 say?"
- "Pull the docs page at https://api.example.com/v1"
If the user's question doesn't actually need page content (general knowledge, math, code review), don't call this skill.
## How

Run:

    bash scripts/fetch.sh <url>

An optional flag selects the output format:

    bash scripts/fetch.sh --format=md <url>     (default: markdown)
    bash scripts/fetch.sh --format=html <url>   (raw HTML)
    bash scripts/fetch.sh --format=text <url>   (plain text, no markup)
Examples the LLM should emit verbatim:

    bash scripts/fetch.sh https://example.com
    bash scripts/fetch.sh --format=text https://example.com/post
    bash scripts/fetch.sh --format=html https://example.com/page
Pass the URL verbatim. The script handles encoding.
## Errors

Script exit codes:
| Exit | Meaning | What to tell the user |
|---|---|---|
| 0 | Fetched | Show / summarize the content from stdout. |
| 1 | Missing or invalid arg | "I need a URL to fetch." |
| 2 | Missing $FIRECRAWL_API_KEY | "Add FIRECRAWL_API_KEY in Hangar secrets to enable this skill." |
| 3 | API or network error | Relay stderr. Common: 401 (bad key), 402 (out of credits), 4xx (bad URL), 5xx (upstream). |
Anything else: relay stderr to the user.
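For orientation, here is a minimal sketch of what `scripts/fetch.sh` might look like given the exit codes above. This is an assumption, not the actual script: the `fetch_page` function name, the flag-to-format mapping, and the exact Firecrawl request shape are all illustrative.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of scripts/fetch.sh (illustrative only; the real
# script may differ). Exit codes mirror the Errors table:
#   1 = missing/invalid arg, 2 = missing key, 3 = API or network error.
fetch_page() {
  local format="markdown" url="" arg
  for arg in "$@"; do
    case "$arg" in
      --format=md)   format="markdown" ;;  # flag values mapped to Firecrawl
      --format=html) format="html" ;;      # format names (assumed mapping)
      --format=text) format="text" ;;
      --*)           echo "unknown flag: $arg" >&2; return 1 ;;
      *)             url="$arg" ;;         # URL passed through verbatim
    esac
  done
  [ -n "$url" ] || { echo "usage: fetch.sh [--format=md|html|text] <url>" >&2; return 1; }
  [ -n "${FIRECRAWL_API_KEY:-}" ] || { echo "FIRECRAWL_API_KEY is not set" >&2; return 2; }
  curl -sS --fail "https://api.firecrawl.dev/v1/scrape" \
    -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"url\":\"$url\",\"formats\":[\"$format\"]}" || return 3
}
# The script would end with: fetch_page "$@"
```

The point of keeping all of this inside the script is that the LLM never has to reproduce the curl call, JSON body, or auth header; it only emits the one-line invocation and interprets the exit code.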
## Don't do this

🔴 Don't construct the curl call yourself.
🔴 Don't try to fetch the URL with another tool — this skill exists so the LLM doesn't have to.
🔴 Don't paginate. Firecrawl returns the whole page; if it's too big for your context, summarize what you got and tell the user.
🟢 Do call the script. That's the whole point.