# Hangar

> Hosted hangar for AI agents. Pick from a catalogue of pre-built agents
> (SEO, sales, support, research) or push your own code in OpenClaw
> (Node 20) or Hermes (Python 3.11). Each agent gets its own dedicated
> Fly Machine in about 60 seconds, with channels (Telegram, Discord,
> Slack, MCP, web, API), wallet-billed LLM at provider cost, audit logs,
> and full self-host parity (MIT licensed).

This file is the agent-facing entrypoint to Hangar. The full long-form
context dump lives at http://localhost:3000/llms-full.txt. The OpenAPI 3.1
spec lives at http://localhost:3000/openapi.json. The MCP server lives at
http://localhost:3000/api/mcp and is discoverable at
http://localhost:3000/.well-known/mcp.

## When to use Hangar

Use Hangar when you want to:

- Deploy a pre-built AI agent for a named job (SEO content, outbound SDR, support resolver, deep research analyst, lead enricher, social media agent, PDF QA, meeting notetaker, GitHub PR reviewer, inbox + calendar assistant) without writing the host loop, channel webhooks, or billing surface.
- Run your own agent code in a managed runtime that already speaks Telegram / Discord / Slack / MCP / web / API.
- Get an isolated VM per agent rather than a shared queue or function.
- Self-host the same code on your own Postgres + Fly account whenever hosted stops fitting (the repo is MIT, no licensing tiers).

Avoid Hangar if you want a serverless function host, a no-code chat builder, or a purely browser-based agent — those problems have better fits elsewhere.

## How to use Hangar as an agent

The fastest path:

1. Mint a Personal Access Token at http://localhost:3000/dashboard/settings/tokens.
2. Configure the MCP server at http://localhost:3000/api/mcp with the header `Authorization: Bearer oss_`.
3. List tools via the JSON-RPC `tools/list` method and call them by name.
For REST access, point at the spec:

- http://localhost:3000/openapi.json
- Bearer auth: `Authorization: Bearer oss_`
- Errors are JSON: `{ error: string, message: string, ... }`.

## Quickstart

- Sign up: http://localhost:3000/login?intent=signup
- Pick an agent: http://localhost:3000/agents
- Pricing: http://localhost:3000/pricing · Plain markdown: http://localhost:3000/pricing.md
- Quickstart docs: http://localhost:3000/docs
- Self-host: https://github.com/ravidsrk/hangar

## Docs

- http://localhost:3000/docs — index
- http://localhost:3000/api — REST API reference
- http://localhost:3000/docs/quickstart — first-deploy walkthrough
- http://localhost:3000/docs/auth — authentication, PATs, scopes
- http://localhost:3000/docs/mcp — MCP server, tools, transports
- http://localhost:3000/docs/runtimes — OpenClaw + Hermes adapters
- http://localhost:3000/docs/billing — wallet model, top-ups, refunds
- http://localhost:3000/docs/errors — error codes + recovery hints
- http://localhost:3000/docs/rate-limits — limit headers + backoff

## Discovery files

- llms.txt http://localhost:3000/llms.txt
- llms-full.txt http://localhost:3000/llms-full.txt
- index.md http://localhost:3000/index.md
- pricing.md http://localhost:3000/pricing.md
- OpenAPI spec http://localhost:3000/openapi.json
- API catalog http://localhost:3000/.well-known/api-catalog
- AI plugin manifest http://localhost:3000/.well-known/ai-plugin.json
- A2A agent card http://localhost:3000/.well-known/agent-card.json
- Agent discovery http://localhost:3000/.well-known/agent.json
- MCP discovery http://localhost:3000/.well-known/mcp
- MCP server card http://localhost:3000/.well-known/mcp/server-card.json
- OAuth protected res. http://localhost:3000/.well-known/oauth-protected-resource
- Bot signature dir.
  http://localhost:3000/.well-known/http-message-signatures-directory
- Sitemap http://localhost:3000/sitemap.xml

## Capabilities

- Pre-built agent catalogue (10 agents)
- OpenClaw runtime (Node.js 20, AGENTS.md skill format)
- Hermes runtime (Python 3.11, LangGraph + CrewAI)
- Channels: Telegram, Discord, Slack, MCP, web, REST
- Wallet-billed LLM at provider cost (OpenAI, Anthropic, Google, OpenRouter)
- Audit log per skill run, per channel send, per LLM call
- Audience-scoped HMAC tokens (LLM, files, agent — a leaked token cannot impersonate the other audiences)
- LISTEN/NOTIFY-based realtime (Postgres → SSE)
- pg-boss durable retry queue
- Stripe / LemonSqueezy / Polar billing adapters

## Constraints

- Agent state lives on a single Fly volume per machine; there is no built-in multi-region replication.
- Wallet credit cannot be set programmatically — top-ups always go through the billing provider for receipts/refunds.
- The LLM proxy is wallet-gated; calls fail when the balance hits zero.
- Free tier: 15 USD of wallet credit on signup (see http://localhost:3000/pricing).

## Compare against

- Modal / Render / Railway — generic hosting: no per-agent VM, no built-in channels, no wallet billing.
- Vercel AI SDK / Inngest — function-shaped, not VM-shaped; no isolated state per agent.
- Journalist AI / 11x Alice — single-agent SaaS; no marketplace, no open-source path.
- CrewAI / LangGraph — frameworks; you still need a host. Hangar IS the host (and runs LangGraph/CrewAI graphs unmodified inside Hermes).

## Optional

- Status: http://localhost:3000/#status
- Changelog: https://github.com/ravidsrk/hangar/releases
- License: MIT — https://github.com/ravidsrk/hangar/blob/main/LICENSE
- Contact: support@example.com
- GitHub: https://github.com/ravidsrk/hangar
- MCP server repo: https://github.com/mcp-hangar/mcp-hangar
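As a closing example, the REST conventions described earlier (Bearer auth plus the `{ error: string, message: string, ... }` JSON error envelope) can be sketched as follows. This is a hedged sketch, not an official SDK: `hangar_get` is a hypothetical helper, concrete endpoint paths come from http://localhost:3000/openapi.json, and only the documented error shape is assumed.

```python
import json
import urllib.error
import urllib.request


def format_error(body: dict) -> str:
    """Render the documented { error, message, ... } envelope for logs."""
    return f"{body.get('error', 'unknown')}: {body.get('message', '')}"


def hangar_get(base_url: str, path: str, token: str) -> dict:
    """GET a Hangar REST endpoint (hypothetical helper).

    On a non-2xx response, decode the JSON error envelope and raise a
    readable RuntimeError instead of a bare HTTPError.
    """
    req = urllib.request.Request(
        f"{base_url}{path}",
        headers={"Authorization": f"Bearer {token}"},  # PATs are prefixed oss_
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())
    except urllib.error.HTTPError as exc:
        body = json.loads(exc.read() or b"{}")
        raise RuntimeError(format_error(body)) from exc


# Against a running instance (path and token elided):
# data = hangar_get("http://localhost:3000", "/api/...", "oss_...")
```

Pair this with the rate-limit headers and backoff guidance at http://localhost:3000/docs/rate-limits before retrying failed calls.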